Once an application reaches a certain size, it can be a really good idea to separate the frontend UI from the backend code. Connecting your user interface with a REST API decouples your interface code from the logic implemented on the remote server, and massively reduces the complexity of the client.
The efficiency of using APIs
The entire app we’ll build, which downloads and analyses images using artificial intelligence, is less than 600 lines of Java code on the client side.

And because we use open APIs, we don’t have to write anything on the server either…
For that reason, many enterprise applications are separated into a “client side”, consisting of the local programs that run on the user’s machine (e.g. user interfaces), and a “server side”, which runs on a remote server.

What you’ll get from this article
In this article, I’ll talk you through how to connect a JavaFX application with a REST API. We’ll deal with GET and POST requests and, most importantly, asynchronous requests that don’t freeze the UI.
Table of contents:
- Pull data from a REST API with a GET request
- Send and receive data from a REST API with a POST request
- Loading data from a REST API asynchronously
The App you’ll create:
Here, we’re going to be creating quite a simple app, which downloads an image of a dog from one REST API, and analyses it using the OpenVision REST API for image analysis.
The OpenVision API will use artificial intelligence to detect objects in the image. Then, we can add highlights to the image to show the detected objects (ok, I’ll be honest, because of the way I’ve set up this app, the ‘objects’ are mostly dogs…).

What you’ll need
There are a few things you’ll need to build this app, from the dependencies we’ll use to the structure of the module-info.java file.
Dependencies
We’ll be using the lightweight HTTP library Unirest to simplify the code needed to interact with REST APIs and Gson to map the JSON data we receive from the APIs into Java objects.
Both are available from Maven central, so to include them in your project, you’ll need the following dependencies:
<!-- https://mvnrepository.com/artifact/com.mashape.unirest/unirest-java -->
<dependency>
    <groupId>com.mashape.unirest</groupId>
    <artifactId>unirest-java</artifactId>
    <version>1.4.8</version>
</dependency>

<!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.5</version>
</dependency>
If you use jars, you can find them at Maven Central via the mvnrepository links in the comments above. Gradle implementation coordinates are also available at the same links.
Module-info.java settings
Finally, we’ll need to modify the module-info.java file to accommodate the dependencies we’ll be using. Gson uses reflection to build the objects as it reads from the JSON data, so we’ll need to expressly allow that reflection in the Java module system.
Gson also needs java.sql, and obviously we’ll need to ‘require’ Unirest and Gson themselves. That means a few lines of extra code in our module-info.java file. This is what my file looks like for this project.
module com.edencoding {
    //needed for JavaFX
    requires javafx.controls;
    requires javafx.fxml;
    requires javafx.swing;

    //needed for HTTP
    requires unirest.java;

    //needed for JSON
    requires gson;
    requires java.sql;

    //needed for JavaFX
    opens com.edencoding.controllers to javafx.fxml;

    //needed for JSON
    opens com.edencoding.models.openVision to gson;
    opens com.edencoding.models.dogs to gson;

    exports com.edencoding;
}
The whole code is hosted on my GitHub if you want to see how the pieces fit together.
Linking a JavaFX application with a REST API
JavaFX can be connected to a RESTful API using the built-in Java HttpClient. On top of that, there are plenty of libraries that simplify the experience, such as Apache HttpClient, Unirest and OkHttp.
I prefer Unirest’s minimal style, so in this tutorial, we’ll use the Unirest lightweight HTTP client library to make all the GET and POST requests.
We’ll implement them asynchronously ourselves, because that’ll allow us to keep track of things like progress, status and updates to the user interface.
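For orientation, it’s worth seeing roughly what these libraries wrap. Here’s a sketch of the same GET request built with the JDK’s own HttpClient API (Java 11+) — the request is only constructed here, not sent, so it’s illustrative rather than the code this app uses:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RawHttpExample {

    // Build (but don't send) the same GET request with the JDK's built-in API.
    static HttpRequest buildDogRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://dog.ceo/api/breeds/image/random"))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildDogRequest();
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending it would take a `java.net.http.HttpClient` and a body handler; Unirest collapses all of that into one chained call.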
JavaFX application GET request from REST API
The only real difference between connecting a plain Java app to a REST API and connecting a JavaFX one is how we structure the data we get back. To connect a JavaFX app with a REST API, we need to shape the response so it can be viewed directly in the user interface. That means integrating it into the domain model.

To get the dog images we’ll need, we’ll use a REST API called Dog API (I’m not joking..). In fact, we’ll use this API to retrieve the URL for an image, and then we’ll fetch the image into our app.
To do this, we’ll create three objects:
- A data access-type object, which we’ll use to interact with the REST API (DogImages.java)
- An object that represents the API’s response (so we can easily access the results)
- A domain model to interact with the View and Controller.
In the next sections, I’ll walk you through the code to use a GET request to receive JSON data from a REST API, and to map the JSON data to a Java object.
If you want to see all of the code in one place, check out the dropdown below.
Here’s the code to get and map the requests, producing a BufferedImage object we can use in our code.
package com.edencoding.models.dogs;

import com.google.gson.Gson;
import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.JsonNode;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.exceptions.UnirestException;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;

public class DogImages {

    private static String getRandomImageAsStringFromAPI() {
        try {
            HttpResponse<JsonNode> apiResponse =
                    Unirest.get("https://dog.ceo/api/breeds/image/random").asJson();
            DogResponse dogResponse = new Gson().fromJson(
                    apiResponse.getBody().toString(), DogResponse.class);
            return dogResponse.getMessage();
        } catch (UnirestException e) {
            e.printStackTrace();
        }
        return null;
    }

    public static BufferedImage getImage() {
        try {
            return ImageIO.read(new URL(getRandomImageAsStringFromAPI()));
        } catch (IOException e) {
            return null;
        }
    }
}
And here’s the Java object we’ll use to map to, allowing us to access the JSON data easily:
package com.edencoding.models.dogs;

public class DogResponse {

    private final String message;
    private final String status;

    public DogResponse(String message, String status) {
        this.message = message;
        this.status = status;
    }

    public String getMessage() {
        return message;
    }

    public String getStatus() {
        return status;
    }
}
API GET request
Any GET request takes the form of a URL that defines the location of the REST API, followed by either path variables or query strings. In this case it’s a really simple URL, which takes us directly to a random-image endpoint (an endpoint is the location without the query strings).
https://dog.ceo/api/breeds/image/random
To create a GET request, we’ll use the Unirest library, which takes just one line to submit it:
HttpResponse<JsonNode> apiResponse = Unirest.get("https://dog.ceo/api/breeds/image/random").asJson();
The chained call to asJson() submits the request to the API and returns the result as an HttpResponse&lt;JsonNode&gt; object (a Unirest object that can be used to get the JSON data). So far this code isn’t asynchronous, but we’ll deal with that later.
We can extract the JSON data directly as a String by calling the HttpResponse’s getBody() method and chaining toString().
String responseJsonAsString = apiResponse.getBody().toString();
This generates some JSON data (as a String), which we’ll map to a Java object in the next section:
{
    "message": "https://images.dog.ceo/url/of/the/dog.jpg",
    "status": "success"
}
How to add query strings to a Unirest GET request
Obviously not all APIs are as simple as the one to get random dog images from the web. Often, they’ll take a query string that defines the sort of information you want from the API.
Take the example of the Dog Facts API, which generates a random dog fact on request (not that I’m developing a worrying theme that court reporters will home in on in decades to come…).
https://dog-facts-api.herokuapp.com/api/v1/resources/dogs?number=1
Part of the URL defines the location of the API. That’s followed by a question mark which defines the beginning of the query strings, and finally the query strings themselves as key-value pairs. In this case, it’s the number of dog facts you want the API to produce.
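Mechanically, a query string is just URL-encoded key-value pairs joined with &amp; after a ?. To make that concrete, here’s a stdlib-only sketch of the encoding (illustrative only — the class and method names are mine, and in the app Unirest does this for us):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryStringExample {

    // Encode key-value pairs into a "?key=value&key2=value2" suffix.
    static String toQueryString(Map<String, String> params) {
        return params.entrySet().stream()
                .map(e -> encode(e.getKey()) + "=" + encode(e.getValue()))
                .collect(Collectors.joining("&", "?", ""));
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("number", "1");
        System.out.println("https://dog-facts-api.herokuapp.com/api/v1/resources/dogs"
                + toQueryString(params));
    }
}
```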
Unirest provides a builder pattern for requests that carry query strings. Each pair is added by chaining .queryString(String name, Object value) (note that .routeParam() is for path placeholders, not query strings). This is the code to request two dog facts:

Unirest.get("https://dog-facts-api.herokuapp.com/api/v1/resources/dogs")
        .queryString("number", "2")
        .asJson();
As before, the final invocation of .asJson() submits the request and returns the result.
Mapping an API response to a Java object
The second stage of interacting with a REST API is working out what format the information is going to be provided in. In JavaScript, you might just navigate through it, but in Java the tendency is to map it to a Java object.

To map a JSON object to a Java object, we’ll need a Java object that represents (with fields) what the JSON object looks like.
JSON Response
The JSON response from the Dog API is extremely simple, consisting of a URL under the key “message”, and a “status” key with the value “success”.
{
    "message": "https://images.dog.ceo/url/of/the/dog.jpg",
    "status": "success"
}
In this app, we won’t check the status message; we’ll just handle errors when loading the URL later. In larger applications you might want to check this.
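If you did want to guard on it, a minimal check might look like this (a sketch — the method name is mine, and the “success” literal is the value the Dog API happens to return):

```java
public class StatusCheckExample {

    // Returns true only when the API reports success and actually supplied a URL.
    static boolean isUsable(String status, String message) {
        return "success".equals(status) && message != null && !message.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isUsable("success", "https://images.dog.ceo/url/of/the/dog.jpg"));
    }
}
```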
Java object
The Java object we’ll use to map the JSON response is really simple.
package com.edencoding.models.dogs;

public class DogResponse {

    private final String message;
    private final String status;

    public DogResponse(String message, String status) {
        this.message = message;
        this.status = status;
    }

    public String getMessage() {
        return message;
    }

    public String getStatus() {
        return status;
    }
}
Finally, to complete the process of mapping the JSON response to a Java object, we’ll use the Gson library, specifically its fromJson() method, which takes the String data and the class we want to map to as parameters:

DogResponse dogResponse = new Gson().fromJson(apiResponse.getBody().toString(), DogResponse.class);

We can now access the URL from the JSON response using dogResponse.getMessage().
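To appreciate what Gson is saving us from, here’s a deliberately naive stdlib-only version of the same extraction using a regex. This is a sketch for illustration — it only handles flat, well-behaved JSON, which is exactly why a real mapper like Gson is the better choice:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaiveJsonExample {

    // Pull the value of a top-level string field out of a flat JSON object.
    // Fragile by design: no escaping, no nesting — just enough for the demo.
    static String extractField(String json, String field) {
        Matcher m = Pattern
                .compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String json = "{\"message\": \"https://images.dog.ceo/url/of/the/dog.jpg\", \"status\": \"success\"}";
        System.out.println(extractField(json, "message"));
    }
}
```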
Fetching an image from the web for JavaFX
With the URL for the dog in hand, we can load it into application memory with another single line of code. This uses the Java class ImageIO, which reads the data in as an AWT BufferedImage.

The BufferedImage is actually quite useful here, because it keeps the flexibility to both save the image to file later and convert it into a JavaFX Image.
This is the code from the DogImages class, which uses getRandomImageAsStringFromAPI() to get the image URL from the REST API.
public static BufferedImage getImage() {
    try {
        return ImageIO.read(new URL(getRandomImageAsStringFromAPI()));
    } catch (IOException e) {
        return null;
    }
}
We now have a BufferedImage (remember, that’s a java.awt object) in our application memory, which we’ve pulled from the web based on information we got from a REST API.
In the next section, we’ll generate a JavaFX Image, and integrate it into our domain model so we can display it in our interface.
Integrating the response into the domain model
In any JavaFX app, we want to separate the “code for handling data” (business logic) from the “code for controlling the interface” (view logic). This is a fundamental feature of the MVC pattern, which I love and which JavaFX is basically built for.
If you’re interested in why MVC is so important to JavaFX, take a look at this JavaFX MVC article, which shows you how everything should be linked up to maximise your code reusability and minimise later upkeep.
So in our case, we’ll create a Model class, which is going to handle a lot of the data manipulation for us. It’s going to:
- Provide a public method to refresh the image (then handle all the data manipulation involved in actually getting and converting the image), and
- Maintain a property that stores the current Image for display.
So, all we need at this point is a method to load the image from the API (which can be invoked from the Controller), and an ObjectProperty&lt;Image&gt; which can be bound to the ImageView in the View.

The only genuinely new piece of code we’ll use in this section is the utility method to convert the BufferedImage into a JavaFX Image:

Image fxImage = SwingFXUtils.toFXImage(bufferedImage, null);
Then, as the image is loaded, we can update our ObjectProperty&lt;Image&gt;, which defines the current Image being displayed in the View. We can then expose the current image through a few getters, so the Controller can bind it to the ImageView in the View, which will automatically update when the image changes.
The Model
Here’s what the Model looks like for this first, simple version of the app.
public class ImageInterpretationModel {

    private final ObjectProperty<Image> loadedImage = new SimpleObjectProperty<>();

    public Image getLoadedImage() {
        return loadedImage.get();
    }

    public ObjectProperty<Image> loadedImageProperty() {
        return loadedImage;
    }

    private void setLoadedImage(Image loadedImage) {
        this.loadedImage.set(loadedImage);
    }

    public void loadNewImage() {
        BufferedImage image = DogImages.getImage();
        if (image != null) {
            updateLoadedImage(image);
        }
    }

    private void updateLoadedImage(BufferedImage image) {
        setLoadedImage(SwingFXUtils.toFXImage(image, null));
    }
}
Showing the information in the View
Finally, we want to create a Controller that lets our user load a new image whenever they want, and a View that shows the images.
The Controller
The controller’s role here is going to be to:
- Create the Model to connect to the View (in more complex apps we could inject the Model, but here we’ll just create it…)
- Bind the Image in the View to the ObjectProperty&lt;Image&gt; in the Model
- Ensure that when an image is loaded, it is the right size on screen
- Provide a method that can be fired from the View to allow the user to load a new image.

We’ll fire the first three from the initialize() method of the Controller, and we’ll create a fourth method, loadNewImage(), which will prompt the Model to pull another image from the REST API.
package com.edencoding.controllers;

import com.edencoding.models.domain.ImageInterpretationModel;
import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.scene.image.ImageView;

public class DogImageController {

    //layout defaults
    private static final int MAX_IMAGE_WIDTH = 200;
    private static final int MAX_IMAGE_HEIGHT = 300;

    //View nodes
    @FXML
    private ImageView imageDisplayNode;

    //Model
    private ImageInterpretationModel model;

    public void initialize() {
        createModel();
        bindImageToModelImage();
        setSizeAndPosition();
    }

    private void createModel() {
        model = new ImageInterpretationModel();
    }

    private void setSizeAndPosition() {
        imageDisplayNode.setFitWidth(MAX_IMAGE_WIDTH);
        imageDisplayNode.setFitHeight(MAX_IMAGE_HEIGHT);
        imageDisplayNode.imageProperty().addListener((observable, oldImage, newImage) -> {
            double aspectRatio = newImage.getWidth() / newImage.getHeight();
            if (aspectRatio > 1.5) {
                imageDisplayNode.setFitWidth(MAX_IMAGE_WIDTH);
                imageDisplayNode.setFitHeight(MAX_IMAGE_WIDTH / aspectRatio);
            } else {
                imageDisplayNode.setFitHeight(MAX_IMAGE_HEIGHT);
                imageDisplayNode.setFitWidth(MAX_IMAGE_HEIGHT * aspectRatio);
            }
        });
    }

    private void bindImageToModelImage() {
        imageDisplayNode.imageProperty().bind(model.loadedImageProperty());
    }

    public void loadNewImage(ActionEvent event) {
        model.loadNewImage();
        event.consume();
    }
}
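The sizing listener above is a good candidate for extraction into a pure helper, which also makes the arithmetic testable without a JavaFX runtime. Here’s a stdlib-only sketch of the same rule (the class and method names are mine, not part of the app):

```java
public class ImageFitExample {

    static final double MAX_WIDTH = 200;
    static final double MAX_HEIGHT = 300;

    // Same rule as the controller's listener: wide images (aspect ratio > 1.5)
    // are constrained by width, everything else by height, preserving ratio.
    // Returns {fitWidth, fitHeight}.
    static double[] fitSize(double imageWidth, double imageHeight) {
        double aspectRatio = imageWidth / imageHeight;
        if (aspectRatio > 1.5) {
            return new double[]{MAX_WIDTH, MAX_WIDTH / aspectRatio};
        }
        return new double[]{MAX_HEIGHT * aspectRatio, MAX_HEIGHT};
    }

    public static void main(String[] args) {
        double[] fit = fitSize(400, 200); // aspect ratio 2.0 → width-constrained
        System.out.println(fit[0] + " x " + fit[1]);
    }
}
```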
The View
The View’s going to be pretty simple, with an ImageView to show our image and a Button to let the user load a new image. Note the onAction attribute of the Button, which connects it to a method we’ll create in the Controller to load a new image into the View.
<?xml version="1.0" encoding="UTF-8"?>

<?import javafx.geometry.*?>
<?import javafx.scene.control.*?>
<?import javafx.scene.image.*?>
<?import javafx.scene.layout.*?>

<VBox alignment="TOP_CENTER" prefHeight="500.0" prefWidth="700.0" spacing="10"
      styleClass="background" stylesheets="@../css/styles.css"
      xmlns="http://javafx.com/javafx/10.0.2-internal" xmlns:fx="http://javafx.com/fxml/1"
      fx:controller="com.edencoding.controllers.DogImageController">
    <Label alignment="CENTER" maxWidth="700.0" styleClass="title"
           text="Load a random image into App to analyse" />
    <StackPane VBox.vgrow="ALWAYS">
        <ImageView fx:id="imageDisplayNode" preserveRatio="true" />
        <Button onAction="#loadNewImage" text="Load a new image!"
                StackPane.alignment="BOTTOM_CENTER" />
        <padding>
            <Insets topRightBottomLeft="20" />
        </padding>
    </StackPane>
</VBox>
App
The app we’ve produced so far requests an image URL from a REST API, loads it into memory, and displays it to users. If you’ve used the same FXML document I have, your app should look like this:

POST requests from a JavaFX app
Many APIs handle both GET and POST requests. Generally speaking, GET requests are used when no data is handed to the API, or when it’s passed as query strings. Simply put, you could type a GET request into the address bar of a browser.

POST requests allow us to pass ‘hidden’ data, which you couldn’t put into the address bar of a browser. One example of this is passing files to a REST API, such as an image.
In this case we’re going to be using the OpenVision API for identifying objects in images. Then, the API will return information to us about what’s inside the image.
To do this, we’ll have to:
- Save the image to disk
- Pass the image to the API as a POST request and map the response to a Java object
- Update the Model with OpenVision results
- Visualise those to the screen through the View and Controller
I’ll do each in turn.
Saving a BufferedImage to disk
In order to pass the image itself, we’ll need to quickly write it to disk. We could mangle it into a byte array and pass it through an HttpURLConnection ourselves, but from experience it’s a lot less complicated (and less error-prone) to let the Unirest library read it from disk itself.
Here’s a quick method to save a BufferedImage to disk before we send it to the API. You’ll 100% want to implement some half-decent error handling, but this is the basic frame of how to write it to disk.
private File writeToFile(BufferedImage image) {
    File file = new File("downloaded.jpg");
    try {
        ImageIO.write(image, "jpg", file);
        return file;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
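A hard-coded "downloaded.jpg" in the working directory can collide between runs or fail on read-only installs. One variation (my suggestion, not the article’s repo) is to write to a uniquely-named temp file instead:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class TempFileExample {

    // Write the image to a uniquely-named temp file the JVM can clean up on exit.
    static File writeToTempFile(BufferedImage image) {
        try {
            File file = File.createTempFile("dog-upload-", ".jpg");
            file.deleteOnExit();
            ImageIO.write(image, "jpg", file);
            return file;
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        File file = writeToTempFile(image);
        System.out.println(file != null && file.length() > 0);
    }
}
```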
Passing an image as a POST request and mapping the response to a Java object
Thankfully, using the lightweight Unirest library, creating a POST request is as simple as creating a GET request. Instead of invoking Unirest.get(), we just invoke Unirest.post(), chaining the fields we need to include as part of the request.
The fields we’ll need to pass to the OpenVision API are:
- The name of the artificial intelligence model we want it to use for image interpretation (we’ll use “yolov4”)
- The image itself
Now we have the file on disk, we just need to pass it directly to Unirest. Then, once we’ve done that we’ll use the Gson library to map the response to a Java object. Here’s the expected format of the JSON response, and the Java object we’ll map it to:
JSON Response
The API response contains a description (here, “Detected objects”) and a list of predictions (if nothing is detected, an empty list is supplied).
Each prediction has a label, a score (confidence) and a bounding box. The bounding box describes in pixel coordinates the area in which the object has been detected.
{
    "description": "Detected objects",
    "predictions": [
        {
            "score": "0.94",
            "bbox": { "y1": 58, "x1": 64, "y2": 193, "x2": 292 },
            "label": "dog"
        },
        {
            "score": "0.71",
            "bbox": { "y1": 19, "x1": -16, "y2": 273, "x2": 302 },
            "label": "chair"
        },
        {
            "score": "0.64",
            "bbox": { "y1": 0, "x1": 14, "y2": 94, "x2": 82 },
            "label": "chair"
        }
    ]
}
Java Objects
We’ll actually need three Java objects to correctly map this response.
OpenVisionResponse class
This summarises the entire response.
public class OpenVisionResponse {
    private final String description;
    private final List<Prediction> predictions;
    //getters
}
Prediction class
This describes one prediction in the list, with its label, score and bounding box.
public class Prediction {
    private final BoundingBox bbox;
    private final String label;
    private final Double score;
    //getters
}
BoundingBox class
The bounding box itself, in pixel coordinates specific to the image we provided.
public class BoundingBox {
    private final Integer x1, y1, x2, y2;
    //getters
}
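The Controller later calls getX1(), getY1(), getWidth() and getHeight() on this class, so in practice it needs its getters plus two derived values. Here’s a fuller sketch — the derived getters are my addition to make the later code self-explanatory; the article’s repo may structure this differently:

```java
public class BoundingBox {

    private final Integer x1, y1, x2, y2;

    public BoundingBox(Integer x1, Integer y1, Integer x2, Integer y2) {
        this.x1 = x1;
        this.y1 = y1;
        this.x2 = x2;
        this.y2 = y2;
    }

    public Integer getX1() { return x1; }
    public Integer getY1() { return y1; }
    public Integer getX2() { return x2; }
    public Integer getY2() { return y2; }

    // Derived values: the Controller scales these when drawing highlight rectangles.
    public int getWidth()  { return x2 - x1; }
    public int getHeight() { return y2 - y1; }

    public static void main(String[] args) {
        BoundingBox box = new BoundingBox(64, 58, 292, 193);
        System.out.println(box.getWidth() + " x " + box.getHeight()); // 228 x 135
    }
}
```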
Once we have the objects set up, we can use Gson to map the response as soon as it arrives. Again, we can deal with making this request asynchronous in the next section. For now, there’ll be a small amount of time where the user interface will ‘hang’ while we get the results.
Unirest can handle asynchronous requests itself too (look at asJsonAsync() if you’re interested), but in this case I want to demonstrate how to handle the asynchronous work inside the Model, rather than in the data-access objects.
Here’s the whole class, which sends the POST request, maps the JSON, and returns an OpenVisionResponse object.
public class OpenVision {

    public static OpenVisionResponse submitImageToAPI(File image) {
        HttpResponse<JsonNode> response;
        try {
            response = makeRequestToAPI(image);
            Gson gson = new Gson();
            return gson.fromJson(response.getBody().toString(), OpenVisionResponse.class);
        } catch (UnirestException e) {
            return null;
        }
    }

    private static HttpResponse<JsonNode> makeRequestToAPI(File image) throws UnirestException {
        Unirest.setTimeouts(0, 0);
        return Unirest.post("https://api.openvisionapi.com/api/v1/detection")
                .field("model", "yolov4")
                .field("image", image, "image/jpeg")
                .asJson();
    }
}
Now we have the results as an OpenVisionResponse object, which we can use later. As with the GET request, we’ll return a null reference if we can’t load the results from the API. More thorough error handling may be appropriate in more complex applications.
Integrating the response into the domain model
Now we have an OpenVisionResponse object, we need to integrate this into our Model, which will define what to display in the View. To do this, we’ll need to add:
- An ObservableList of Prediction objects, which we can use to highlight areas of the image that have been detected
- A private method that we can invoke (inside the Model) to update the ObservableList&lt;Prediction&gt; when we have appropriate data
- A public method to retrieve the ObservableList&lt;Prediction&gt; so we can bind the View objects to it later
- A public method that can be invoked by the Controller to request the Model load and analyse a new image.
And here’s what they look like in code:
private final ObservableList<Prediction> predictions = FXCollections.observableArrayList();

public ObservableList<Prediction> predictions() {
    return this.predictions;
}

private void setPredictions(List<Prediction> results) {
    this.predictions.setAll(results);
}

public void loadNewImage() {
    BufferedImage image = DogImages.getImage();
    if (image != null) {
        OpenVisionResponse response = OpenVision.submitImageToAPI(
                writeToFile(image)
        );
        if (response != null) setPredictions(response.getPredictions());
    }
}
Note: the method loadNewImage() is currently not asynchronous. I’ll create an asynchronous version of it below, in “Asynchronous RESTful API interactions with JavaFX“, using the JavaFX concurrency objects Task and Service.
Showing the information in the View
Finally, we want to create a View that will highlight the areas around the objects identified (dogs, for example!) and a Controller that binds the new View nodes to the Model.
The View
Because there are some significant changes from the initial view, I’ll start here, and then describe how I’ll bind the Model data to it in the Controller in the next section.
On top of the View we created before, we’ll also need:
- A ListView, which we’ll populate with the descriptions of each of the objects identified in the OpenVision results
- A pane on top of the ImageView, which will allow us to draw on the ‘highlight’ rectangles to show where something’s been identified.
<VBox alignment="TOP_CENTER" prefHeight="500.0" prefWidth="700.0" spacing="10"
      styleClass="background" stylesheets="@../css/styles.css"
      xmlns="http://javafx.com/javafx/10.0.2-internal" xmlns:fx="http://javafx.com/fxml/1"
      fx:controller="com.edencoding.controllers.MainViewController">
    <Label alignment="CENTER" maxWidth="700.0" styleClass="title"
           text="Load a random image into App to analyse" />
    <HBox VBox.vgrow="ALWAYS">
        <StackPane HBox.hgrow="ALWAYS">
            <Group>
                <ImageView fx:id="imageDisplayNode" preserveRatio="true" />
                <Pane fx:id="overlayPane" />
            </Group>
            <AnchorPane />
        </StackPane>
        <VBox alignment="TOP_CENTER" spacing="15.0">
            <padding>
                <Insets bottom="10.0" left="10.0" right="10.0" top="10.0" />
            </padding>
            <Label text="Predictions:">
                <VBox.margin>
                    <Insets />
                </VBox.margin>
            </Label>
            <ListView fx:id="predictionsListView" prefWidth="200.0" />
            <Button onAction="#loadNewImage" text="Load a new image!" />
        </VBox>
    </HBox>
</VBox>
I’ve wrapped the ImageView and Pane together in a Group because this is going to ensure their visual bounds are calculated together, so I don’t have to worry about positioning.
I do have to worry about sizing, although we can fix that in the Controller.
The Controller
On top of the methods we’ve already created to handle requesting the dog image from the Dog API, we’ll need to implement the following behaviours:
- Listen to the ObservableList&lt;Prediction&gt; in the Model, so we can create new highlights when it changes
- Bind the items of the ListView to the ObservableList&lt;Prediction&gt; so it will always reflect up-to-date results
- Ensure the overlayPane we’ve just created is always the right size for the ImageView
Again, we’ll fire these from the initialize() method of the Controller, and we’ll create utility methods (clearHighlights() and createHighlights()) which do the hard work of creating the rectangles we’ll draw around the identified regions.
public void initialize() {
    createModel();            //already defined (see GET method tutorial)
    bindImageToModelImage();  //already defined (see GET method tutorial)
    sizeAndPositionPanes();   //updated
    addOverlayPaneListeners();
    bindListViewToPredictions();
}

private void bindImageToModelImage() {
    imageDisplayNode.imageProperty().bind(model.loadedImageProperty());
}

private void addOverlayPaneListeners() {
    model.predictions().addListener((ListChangeListener<Prediction>) c -> {
        clearHighlights();
        createHighlights(model.predictions());
    });
}

private void bindListViewToPredictions() {
    predictionsListView.setCellFactory(new Callback<>() {
        @Override
        public ListCell<Prediction> call(ListView<Prediction> param) {
            return new ListCell<>() {
                @Override
                public void updateItem(Prediction item, boolean empty) {
                    super.updateItem(item, empty);
                    if (empty || item == null || item.getLabel() == null) {
                        setText(null);
                    } else {
                        setText(item.getLabel());
                    }
                }
            };
        }
    });
    predictionsListView.setItems(model.predictions());
}

private void clearHighlights() {
    overlayPane.getChildren().clear();
}

private void createHighlights(ObservableList<Prediction> predictions) {
    double imagePixelWidth = model.getLoadedImage().getWidth();
    double imageRealWidth = imageDisplayNode.getFitWidth();
    double imagePixelHeight = model.getLoadedImage().getHeight();
    double imageRealHeight = imageDisplayNode.getFitHeight();

    double scalingFactor = Math.min(
            imageRealWidth / imagePixelWidth,
            imageRealHeight / imagePixelHeight);

    for (Prediction prediction : predictions) {
        BoundingBox boundingBox = prediction.getBbox();
        Rectangle rectangle = new Rectangle(
                boundingBox.getX1() * scalingFactor,
                boundingBox.getY1() * scalingFactor,
                boundingBox.getWidth() * scalingFactor,
                boundingBox.getHeight() * scalingFactor);
        rectangle.setFill(Color.web("#81c48333"));
        rectangle.setStroke(Color.web("#81c483"));
        rectangle.setStrokeWidth(3);
        overlayPane.getChildren().add(rectangle);
    }
}
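The coordinate scaling inside createHighlights() boils down to two small pure functions, shown here stdlib-only so the arithmetic is easy to check in isolation (the class and method names are mine, not the app’s):

```java
public class HighlightScalingExample {

    // Scale factor that fits an image of (pixelW x pixelH) inside (fitW x fitH)
    // while preserving aspect ratio — the same Math.min the controller uses.
    static double scalingFactor(double pixelW, double pixelH, double fitW, double fitH) {
        return Math.min(fitW / pixelW, fitH / pixelH);
    }

    // Convert one bounding-box coordinate from image pixels to on-screen units.
    static double scale(int pixelCoordinate, double factor) {
        return pixelCoordinate * factor;
    }

    public static void main(String[] args) {
        // A 600x400 image displayed in a 300x300 fit area is limited by width.
        double factor = scalingFactor(600, 400, 300, 300);
        System.out.println(scale(64, factor) + ", " + scale(58, factor));
    }
}
```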
App
The app we’ve produced so far requests an image URL from a REST API, loads it into memory, and displays it to users. Then, it sends the image to a second REST API, which analyses the images for objects using artificial intelligence. Finally, we interpreted those results to highlight the areas on the image within the app, alongside a list of results.
If you’ve used the same FXML document I have, your app should look like this:

Asynchronous RESTful API interactions with JavaFX
Finally, we need to make the calls to the REST API asynchronous. That’s because, as we’re relying on external code, we don’t know how long it will take to respond, and during that time we don’t want the user interface to stop responding to our user.
To accomplish this, we’ll use the JavaFX concurrency Service class, which takes a Task and runs it on a separate thread.

That means the JavaFX Application Thread is still free to continue interacting with the user while we complete our update.
To do this, we’ll create a Task and use it to create a Service. The Task does the job of loading the image and then loading the analysis; the Service does the job of moving it onto a separate thread.
private final Service<Void> service = new Service<>() {
    @Override
    protected Task<Void> createTask() {
        return updateTask();
    }
};

public void loadNewImage() {
    if (!service.isRunning()) {
        service.reset();
        service.start();
    }
}

private Task<Void> updateTask() {
    Task<Void> task = new Task<>() {
        @Override
        protected Void call() {
            updateProgressModel("Loading image...", 0.25);
            BufferedImage imageFromDogAPI = DogImages.getImage();
            if (imageFromDogAPI == null) {
                updateProgressModel("Error loading image", 0);
                throw new RuntimeException("Error loading image. This is usually due to an issue resolving the HTTP connection with the Dog API. It's usually temporary, and re-running the task may yield better results");
            }
            Platform.runLater(() -> updateLoadedImage(imageFromDogAPI));

            updateProgressModel("Saving image...", 0.4);
            File file = writeToFile(imageFromDogAPI);
            if (file == null) {
                updateProgressModel("Error saving image...", 0);
                throw new RuntimeException("Error saving image. This may be an IO error. If you're running this program in an environment where you don't have write permissions, the program can't save a temp file to upload to the server.");
            }

            updateProgressModel("Analysing image...", 0.75);
            OpenVisionResponse openVisionResponse = OpenVision.submitImageToAPI(file);
            if (openVisionResponse == null) {
                updateProgressModel("Error analysing image", 0);
                throw new RuntimeException("Error analysing image. This is usually due to an issue resolving the HTTP connection with the OpenVision API. It's usually temporary, and re-running the task may yield better results");
            }

            updateProgressModel("Adding highlights...", 0.95);
            setPredictions(openVisionResponse.getPredictions());
            updateProgressModel("-- Done! --", 1.0);
            return null;
        }
    };
    task.setOnFailed(event -> {
        progress.set(0);
        statusText.set("-- Error --");
    });
    return task;
}

private void updateProgressModel(String message, double progress) {
    Platform.runLater(() -> {
        statusText.set(message);
        this.progress.set(progress);
    });
}
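Stripped of the JavaFX types, the hand-off pattern — do the slow work off-thread, then publish the result back to a place the UI reads — can be sketched with plain CompletableFuture. This is a stdlib analogy for illustration only, not the app’s code; in the real app, Task plays the role of the worker and Platform.runLater() plays the role of the publish step:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncHandOffExample {

    static String runPipeline() {
        // Stand-in for a bound JavaFX property the View would display.
        AtomicReference<String> uiLabel = new AtomicReference<>("Loading...");

        CompletableFuture
                // supplyAsync plays the role of Task.call(): runs on a worker thread
                .supplyAsync(() -> "analysis result")
                // this step plays the role of Platform.runLater(): publish to the "UI"
                .thenApply(result -> {
                    uiLabel.set(result);
                    return result;
                })
                .join(); // only for the demo — in JavaFX the Service notifies you instead

        return uiLabel.get();
    }

    public static void main(String[] args) {
        System.out.println(runPipeline());
    }
}
```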
An important part of running asynchronous tasks in the background is what happens if they go wrong. In this case, I want to implement the following behaviours:
- The Task will try up to three times to connect to the APIs and conduct the analysis
- Each time it fails, it will attempt the entire process from the beginning again
- If the process has failed three times, an error message will be created, allowing the user to select the appropriate action.
I’ll define this inside an initializer block, because I want this to always run, even if I create multiple constructors later (currently we’re only using the default no-arg constructor).
{
    AtomicInteger fails = new AtomicInteger();
    service.setOnFailed(event -> {
        if (fails.get() < 3) {
            updateProgressModel("Error - retrying (" + fails + ")", 0.25);
            fails.getAndIncrement();
            service.reset();
            service.start();
        } else {
            updateProgressModel("Fatal Error. Exit", 1);
        }
    });
    service.setOnSucceeded(event -> fails.set(0));
}
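Retry counters like this are easy to get off by one, so it’s worth isolating the decision in a pure method that can be tested on its own. A stdlib-only sketch (names are mine, not the app’s):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetryPolicyExample {

    static final int MAX_RETRIES = 3;

    // Decide whether another attempt is allowed, incrementing the counter if so.
    static boolean shouldRetry(AtomicInteger fails) {
        if (fails.get() < MAX_RETRIES) {
            fails.incrementAndGet();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        AtomicInteger fails = new AtomicInteger();
        int attempts = 0;
        while (shouldRetry(fails)) {
            attempts++;
        }
        System.out.println(attempts); // retries exactly MAX_RETRIES times
    }
}
```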
And that’s it! The application should now load the image and analysis in the background whenever it’s requested by the user.
Conclusions
Separating apps into client-side and server-side code usually makes maintenance easier, because it simplifies the interface code significantly.
Benefits for the User Interface:
- Simpler code
- Less maintenance
- Smaller application for users
- Can use any API (even open ones you haven’t built)
Benefits for server code:
- Consistent REST interface structure
- Supports multiple applications
- Decoupled from UI implementation, meaning any code base can be used.
It’s also easier than you think to connect a JavaFX app with your APIs. In this case, 90% of the work is in creating the data structures and data-access objects we need to interact with and store information from the API.
Once we have the data in hand, it’s a simple process of hooking up the View with the Model (data) and Controller code.
As an added benefit, the entire codebase for this app, which uses two open APIs, is less than 600 lines of code.
Full code:
If you want the full code, you can get it all in my GitHub here.