I had some time and put together a Jersey-and-Netty integration that I think is pretty interesting.
Its GitHub repository is here: https://github.com/microbean/microbean-jersey-netty. Its website is here: https://microbean.github.io/microbean-jersey-netty/.
It lets you run Jersey as a Java SE application using Netty to front it. It supports HTTP and HTTP/2, including upgrades.
Jersey itself ships with a Netty integration, but it seems to have some problems and parts of it are designated by its author as very experimental. I wanted to see if I could do a decent integration myself, both to learn more about Netty and to solve a real problem that I’ve had.
The main challenge with Netty is to ensure that its event loop is never blocked. But the very nature of JAX-RS, with its InputStreams supplying inbound payloads, means that some blocking is generally necessary. So immediately you’re talking about offloading that blocking onto extra threads or executors to free up the event loop, and therefore about coordinating IO activity between the Jersey side of the house and the Netty side of the house.
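To make that coordination concrete, here is a JDK-only sketch of the offloading pattern. No Netty classes are used: a single-threaded executor stands in for the event loop, and all of the names here are mine, not the library’s.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Sketch: a blocking, JAX-RS-style read is pushed onto a worker pool so
 * the "event loop" (a single-threaded executor here) is never blocked.
 */
public class OffloadSketch {

    /** Runs the blocking read on a worker, then finishes on the "event loop". */
    static String handle() throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        ExecutorService workers = Executors.newCachedThreadPool();
        try {
            return CompletableFuture
                // The blocking read happens on a worker thread...
                .supplyAsync(OffloadSketch::blockingRead, workers)
                // ...and its result is handed back to the never-blocked loop.
                .thenApplyAsync(body -> "200 OK: " + body, eventLoop)
                .get();
        } finally {
            eventLoop.shutdown();
            workers.shutdown();
        }
    }

    /** Simulates a JAX-RS entity read blocking on an InputStream. */
    private static String blockingRead() {
        try {
            Thread.sleep(50); // pretend we are waiting on socket bytes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "hello";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle());
    }
}
```

The interesting part is not the read itself but the hop back: the write must end up on the thread that owns the connection, which is exactly the coordination problem the rest of this post is about.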
This in itself is not terribly difficult on the face of it and can be addressed in many different ways. The Jersey-supplied integration accomplishes it by passing InputStreams to Jersey using blocking queues. This is fine, but now the integration author has to deal with queue capacity, rejected submissions, and so forth. As you might expect, there is at least one issue around this that turns out to be somewhat severe (apparently). This approach also involves creating a fair number of objects.
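A queue-backed InputStream can be caricatured in a few lines. To be clear, this is my own illustrative sketch, not the actual Jersey integration code; it exists only to show where the capacity and rejection concerns creep in.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Caricature of the queue-based handoff: the network side offers byte
 * arrays; the Jersey side blocks in read() until one arrives.
 */
public class QueueBackedInputStream extends InputStream {

    private static final byte[] EOF = new byte[0]; // end-of-stream sentinel

    // Bounded: capacity and rejection are now *our* problem to manage.
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(16);

    private byte[] current;
    private int pos;

    /** Called from the network side; returns false if the queue is full. */
    public boolean offer(byte[] chunk) {
        return queue.offer(chunk);
    }

    /** Signals end of stream. */
    public void finish() throws InterruptedException {
        queue.put(EOF);
    }

    @Override
    public int read() throws IOException {
        if (current == EOF) {
            return -1;
        }
        while (current == null || pos >= current.length) {
            try {
                current = queue.take(); // Jersey's thread blocks here
                pos = 0;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException(e);
            }
            if (current == EOF) {
                return -1;
            }
        }
        return current[pos++] & 0xFF;
    }
}
```

Every byte array offered is a fresh allocation, and the bounded queue forces a policy decision about what to do when it fills. Those are exactly the knobs the next section argues Netty already provides for free.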
But of course Netty already has a really nice system of queues that it uses to do this same sort of thing, and you can easily get access to it: it’s the event loop itself, which lets you submit things to it.
Netty also has its ByteBuf construct, which is a better, cheaper, faster ByteBuffer. ByteBufs are sort of Netty’s “coin of the realm”. Netty goes to extraordinary lengths to ensure that a minimum of object allocations and garbage generation occurs when you’re working with ByteBufs, so they seem like a good thing to center any integration strategy around. They are not thread-safe, but if you mutate them only on the event loop, you’re good.
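That last sentence is the confinement idiom, and it isn’t Netty-specific. Here is a JDK-only sketch of it (a StringBuilder plays the non-thread-safe buffer, a single-threaded executor plays the event loop; the class and method names are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Sketch of thread confinement: the buffer itself is not thread-safe,
 * so every mutation is submitted as a task to the one thread that owns
 * it, rather than being guarded by locks.
 */
public class ConfinedBuffer {

    // Stand-in for the event loop: one thread owns all buffer mutations.
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();

    private final StringBuilder buffer = new StringBuilder(); // not thread-safe

    /** Safe to call from any thread: the mutation runs on the owning thread. */
    public void append(String s) {
        eventLoop.execute(() -> buffer.append(s));
    }

    /** Reads the contents on the owning thread, after pending mutations. */
    public String contents() throws Exception {
        return eventLoop.submit(buffer::toString).get();
    }

    public void shutdown() {
        eventLoop.shutdown();
    }
}
```

Because a single-threaded executor runs tasks in submission order, no locking is needed at all; that is the same guarantee Netty’s event loop gives you for ByteBufs.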
So the general approach I take is this: instead of making extra queues to shuttle byte arrays (or other Java objects wrapping byte arrays) back and forth between Jersey land and Netty land, I use a CompositeByteBuf, to which the ByteBuf messages that Netty supplies in its HTTP message constructs are added as they come in on the Netty event loop, and I use the Netty event loop task queue to ensure that all ByteBuf operations of any kind always take place on the event loop.
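If you haven’t met CompositeByteBuf before, here is a tiny, JDK-only caricature of the shape of what it provides. The real thing does far more (reference counting, pooling, consolidation); this only shows the core idea of accumulating chunks as components rather than copying them into one big array.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Caricature of a composite buffer: inbound chunks are appended as
 * components without copying, and reads walk the components in order.
 */
public class CompositeBuffer {

    private final Deque<ByteBuffer> components = new ArrayDeque<>();

    /** Called as each inbound chunk arrives (on the event loop, in the real thing). */
    public void addComponent(byte[] chunk) {
        components.add(ByteBuffer.wrap(chunk)); // wrap: no copy of the bytes
    }

    /** Reads one byte across component boundaries, or -1 when drained. */
    public int read() {
        while (!components.isEmpty()) {
            ByteBuffer head = components.peek();
            if (head.hasRemaining()) {
                return head.get() & 0xFF;
            }
            components.poll(); // drop the exhausted component
        }
        return -1;
    }
}
```

In the actual integration the components are the very ByteBufs Netty hands you in its HTTP message objects, so the inbound payload is never copied just to cross the Jersey/Netty boundary.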
This means that I can take advantage of everything Netty gives me under the covers in terms of memory efficiency and potential transparent usage of off-heap buffers and such, while also gleefully punting any task queue saturation issues to Netty itself, which already has customizable strategies for dealing with them. A lower chance of bugs for you, since Netty has to deal with this sort of problem all day every day, and a lower chance of bugs for me, since it is their code, not mine, doing this coordination. Win-win!
On the outbound side, I take advantage of Netty’s ChunkedWriteHandler, which, contrary to how it might appear at first, has nothing to do with Transfer-Encoding: chunked. Instead, it is a nice little handler that deals with user-supplied-but-Netty-managed hunks of arbitrarily-typed data, writing them when their contents are available and doing other things when it can’t. The upshot: your Jersey OutputStreams are chunked up and enqueued on the event loop using a single ByteBuf implementation that is optimized for IO as data is written to them.
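The essential behavior can be sketched without Netty at all: a source hands over chunks when it has them, and the writer writes what is available and simply stops, rather than blocking, when nothing is ready yet. The interface and names below are mine, loosely modeled on the idea of Netty’s ChunkedInput, not its actual API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Sketch of the chunked-write idea: drain whatever chunks are currently
 * available, and yield (never block) when the source has nothing ready.
 */
public class ChunkedWriteSketch {

    /** Minimal, made-up analogue of a chunked data source. */
    interface ChunkedSource {
        boolean isEndOfInput();
        byte[] readChunk(); // null means "nothing available right now"
    }

    /** Writes currently-available chunks to 'wire'; returns bytes written. */
    static int drain(ChunkedSource source, StringBuilder wire) {
        int written = 0;
        while (!source.isEndOfInput()) {
            byte[] chunk = source.readChunk();
            if (chunk == null) {
                break; // nothing ready: yield instead of blocking the loop
            }
            wire.append(new String(chunk));
            written += chunk.length;
        }
        return written;
    }

    /** Demo: two chunks produced by a Jersey-OutputStream-like source. */
    static String demo() {
        Deque<byte[]> chunks = new ArrayDeque<>();
        chunks.add("hel".getBytes());
        chunks.add("lo".getBytes());
        ChunkedSource source = new ChunkedSource() {
            public boolean isEndOfInput() { return chunks.isEmpty(); }
            public byte[] readChunk() { return chunks.poll(); }
        };
        StringBuilder wire = new StringBuilder();
        drain(source, wire);
        return wire.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The "break instead of block" branch is the whole point: the writing thread stays free to do other work, which is precisely the property the event loop needs preserved.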
The net effect is a nice little as-nonblocking-as-JAX-RS-can-get tango that Netty and Jersey perform together, coordinated by Netty’s and Jersey’s own constructs.
microBean™ Jersey Netty has extensive Javadocs and is available on Maven Central.