Learnings from Devoxx 2022

Alex Broadbent
6 min read · Aug 19, 2022

Here are my highlights from this year’s conference if you missed it.

It’s clear there is a lot of excitement around the future of Java beyond version 18, and this conference was no java.lang.Exception! Over the three-day event I had the pleasure of attending many talks about what the future holds. Below I reflect on my key highlights and new learnings.

Key Highlights

REST in Peace. Long live gRPC!

Mario went through the history of inter-process communication. REST is essentially “pretty URLs with JSON payloads” (Hadi Hariri). We also looked at the Richardson REST Maturity Model (Martin Fowler has a great article on it), which shows that most REST services do not use the full REST standard and are therefore not optimised for speed across the network.

REST was intended for getting and setting resources across a network, but when you want a richer vocabulary for “doing” something with the data, it doesn’t quite fit the model. In general, the main argument for using REST is readability.

Combined with the fallacy of distributed computing that “transport cost is zero”, making multiple API calls costs far more than you’d expect once you total the cost of JSON un/marshalling (e.g. using Jackson or Gson). This brings the introduction of Protocol Buffers, which are structures defined in an IDL (Interface Definition Language). These structures provide backwards and forwards compatibility and are language-agnostic.
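As a sketch of what that IDL looks like (the message and field names here are hypothetical), a Protocol Buffers definition lives in a `.proto` file; the numbered field tags are what let old and new readers of the schema coexist:

```protobuf
syntax = "proto3";

// Hypothetical message definition. The numeric tags (1, 2, 3) identify
// fields on the wire, so new fields can be added later without breaking
// existing readers -- the basis of backwards/forwards compatibility.
message User {
  string id   = 1;
  string name = 2;
  int32  age  = 3;
}
```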

Using gRPC (a binary-serialised data representation), request processing can be up to six times faster (use your own benchmarks to see if it’s beneficial for your application). gRPC supports several interaction styles: simple request-response as well as client-side, server-side and bi-directional streaming.
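Those interaction styles map directly onto the service definition in the IDL; a hypothetical sketch:

```protobuf
// Hypothetical gRPC service showing the four call styles.
service EventService {
  rpc GetEvent   (EventRequest)       returns (EventReply);         // request-response
  rpc UploadLogs (stream LogChunk)    returns (UploadSummary);      // client-side streaming
  rpc Subscribe  (SubscribeRequest)   returns (stream EventReply);  // server-side streaming
  rpc Chat       (stream ChatMessage) returns (stream ChatMessage); // bi-directional streaming
}
```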

Watch the talk by Mario-Leander Reimer on YouTube.

A Path From an API To Client Libraries

The toil of maintaining a set of client libraries for your public API was well voiced by Filip, whose company Infobip creates client libraries for its messaging API automatically from an OpenAPI schema using a code generator.

The problem with having a public API is supporting an SDK for each programming language, which requires specialist knowledge in every one of them. The API is always growing with more endpoints and products, and rarely removes anything.

Infobip use an OpenAPI specification that describes the entirety of their API; they also use it to generate their API documentation and all the client SDK libraries. This means their SDKs stay up to date with the public API in near real time. Currently, they auto-generate SDKs for some products in Java, PHP, C#, Python and Go.
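For illustration, here is a minimal, hypothetical OpenAPI fragment of the kind a code generator can turn into typed client code (the endpoint and schema names are invented, not Infobip’s):

```yaml
openapi: 3.0.3
info:
  title: Messaging API    # hypothetical API name
  version: 1.0.0
paths:
  /sms/send:
    post:
      operationId: sendSms   # becomes the method name in each generated SDK
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/SmsRequest'
      responses:
        '200':
          description: Message accepted
components:
  schemas:
    SmsRequest:
      type: object
      properties:
        to:   { type: string }
        text: { type: string }
```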

Watch the talk by Filip Srnec on YouTube.

Java on CRaC: Superfast JVM Startup

Simon excellently ran through the background of why we use Java and how the managed runtime is its biggest selling point. However, he also talked in depth about why performance warmup is so slow (I won’t try to write a short summary here; I’d really recommend watching the video below).

Reducing the warmup time of a JVM application is where CRaC (Coordinated Restore at Checkpoint) comes in. It is based on the Linux CRIU (Checkpoint/Restore in Userspace) project, which snapshots an entire application’s state so the application can be paused and resumed.

With this approach, any application (Spring Boot, Micronaut, Quarkus) resuming from a pause took around 40ms to start processing data again, whereas without CRaC it would take multiple seconds.

The tradeoff here is that all connections (file, network, etc.) have to be closed when taking a snapshot and then reopened when the application is resumed. Snapshots can also be very large, as some files within the system can be multiple gigabytes in size.
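The `org.crac` API lets an application take part in that close/reopen dance. A sketch (assuming the `org.crac` dependency and a hypothetical `MyConnection` class; this is not runnable without a CRaC-enabled JDK):

```java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Sketch: a component that closes its (hypothetical) connection before a
// checkpoint is taken and reopens it when the process is restored.
public class ConnectionHolder implements Resource {

    private MyConnection connection = MyConnection.open(); // hypothetical

    public ConnectionHolder() {
        // Register with the global context so CRaC calls us back.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        connection.close(); // open sockets/files cannot be part of the snapshot
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        connection = MyConnection.open(); // re-establish after the fast restore
    }
}
```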

Watch the talk by Simon Ritter on YouTube. View the project on GitHub.

Tradeoffs, Bad Science, and Polar Bears — The World of Java Optimisation

There are many reasons to optimise your Java code, but the focus should be on the human-centric design of the system before trying to optimise every small function in the application.

Use cases change, altering the system’s requirements over time; and even if the use case doesn’t change, the world around us is constantly changing and will redefine what “optimised” means for an application.

Performance optimisation should always be driven by measurement; guesses about where a system’s bottlenecks are are often wrong. Beyond that, measuring the right thing matters even more. There is a split between lagging indicators, which are easy to measure but hard to change, and leading indicators, which are hard to measure but easy to change.

A lot of advice that engineers hear tends to be inaccurate, or has changed over time and context. While we can always find small pieces of code that we think are optimisations, ultimately the JVM has been written to optimise code better than the average developer, and it is built to optimise clean, predictable code. Writing anything else will ultimately be less performant.

Watch the talk by Holly Cummins on YouTube.

New Learnings

The biggest thing I took away from the conference is that Java has moved in a good direction and has a lot of promising features coming soon, with projects Loom, Panama, Valhalla and Amber adding a lot of value:

  • Project Loom brings in lightweight threading: new JVM-managed virtual threads, as opposed to OS-managed platform threads, which have a larger default stack size (around 1MB) and are practically limited to a few thousand per process.
  • Project Valhalla improves performance by reducing the amount of effort in data access. Where objects are currently stored across the heap with pointers to their memory addresses, Valhalla introduces value types which are programmed like objects but accessed like a primitive. Valhalla has been in the pipeline for a long time due to the wide-ranging changes required to support it in the compiler, class file structure and the JVM.
  • Project Panama aims to simplify connecting to non-Java components such as C-based libraries. The project comprises the sub-projects Foreign-Memory Access API, Foreign Linker API, and Vector API.
  • Project Amber is a bit of a catch-all project for any productivity-improving feature of Java. This includes text blocks in Java 13, plus records and pattern matching for instanceof comparisons finalised in Java 16, and sealed classes in Java 17. Pattern matching in switches, and pattern matching for records and arrays, are still to come.
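As a small sketch of the Amber features that are already finalised (records and pattern matching for instanceof, both from Java 16; the class and method names here are hypothetical):

```java
public class AmberDemo {

    // Records (Java 16): a concise, immutable data carrier with
    // generated constructor, accessors, equals/hashCode and toString.
    record Point(int x, int y) {}

    // Pattern matching for instanceof (Java 16): test the type and
    // bind the narrowed variable in a single step.
    static String describe(Object obj) {
        if (obj instanceof Point p) {
            return "point at " + p.x() + "," + p.y();
        }
        if (obj instanceof String s) {
            return "string of length " + s.length();
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Point(1, 2)));
        System.out.println(describe("devoxx"));
    }
}
```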

While Project Amber is bringing the features most developers want, there’s still a lot to be said for Loom and Valhalla. One competitor Java has is Kotlin, which already does a very good job of everything above that Java is catching up on. As a Kotlin advocate, I really am glad to see that a good amount of what I love about Kotlin is coming to Java soon.
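To make Loom’s promise concrete, here is a sketch (assuming a JDK where virtual threads have landed, Java 21 or later; the class name is hypothetical): starting ten thousand threads becomes routine where 1MB-stack platform threads would struggle.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LoomDemo {

    // Start n virtual threads, each doing a tiny piece of work, and wait
    // for them all to finish. With platform threads this count would be
    // expensive; virtual threads make it cheap.
    static int runTasks(int n) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = Thread.startVirtualThread(counter::incrementAndGet);
        }
        for (Thread t : threads) {
            t.join();
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10_000));
    }
}
```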

Applying New Knowledge from Devoxx at Work

Attending a conference is always exciting, but it’s no surprise that absorbing new information all day long can also be exhausting. So I thought it would be useful to share how I condense my overflowing notes into useful takeaways.

Sharing the knowledge with the team. Making yourself present the conference to your team, in a lunch-and-learn style meeting, forces you to reflect on, sieve and consume the useful information.

After three days, I have pages of notes, all of which need re-reading, converting to real English and summarising for myself. But doing this for others is the perfect way to share new knowledge at work beyond the mere application of a new feature that happens to be relevant.

Enjoyed my blog? Let me know with your applause below, and perhaps recommend any conferences I should attend next. Thanks for reading.
