Java REST API Upload Large Files Using WebClient

or how to avoid Out of Memory

INTRO

This week my team and I faced an issue I had first read about when I was at college. I had completely forgotten about it until this Wednesday of October: sending a very big file over HTTP.

Requirement and Design

Our client replaced its CRM with a cloud one and engaged us to integrate it with the entire software map.

One integration flow posts documents to the CRM from local storage and associates them with stored customer accounts; file size has no upper bound and we assumed 1 GB as a medium value.

All CRM integrations are REST based: no shared folders, no staging DB, only OAuth1-secured REST APIs are allowed.

I sketched a simplified architectural model for you in the picture below.

IMG 1 — Solution Architecture

Our application, like the other parts of the application map, runs in an on-premise environment while the CRM is hosted on a cloud tenant.

The exposed API accepts a multipart body with two parts: one with a JSON document containing metadata such as the filename, the customer account id and so on, and the other with the binary content of the file.

Standard Solution

The application is made of two parts: the first is a file poller which creates a thread every time a new file is seen in the staging folder, and the second relates the file to the customer account and ships it to the CRM.

If you're interested in creating a file poller, I point you to Apache Camel's Polling Consumer; it's a great solution to do it easily.
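Just to give you an idea, here is a minimal sketch of such a route; the staging path and the processing logic are made up for the example, they are not the ones we used:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class StagingFolderPoller {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Poll the staging folder every 5 seconds; noop=true leaves the
                    // original file in place and Camel remembers what it has seen.
                    from("file:/data/staging?noop=true&delay=5000")
                        .log("New file detected: ${header.CamelFileName}")
                        .process(exchange -> {
                            String path = exchange.getIn()
                                    .getHeader("CamelFileAbsolutePath", String.class);
                            // here the file would be related to the customer account
                            // and shipped to the CRM
                        });
                }
            });
            context.start();
            Thread.sleep(60_000);   // keep the context alive for the demo
            context.stop();
        }
    }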

Here I'm interested in talking to you about the way we send files to the CRM.

Let's start coding; here is an extract of our pom.xml:

    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
    </dependency>

Here is our standard solution code:

    RestTemplate remoteService = new RestTemplate();
    // HTTP has two parts: header and body
    // Here is the header:
    HttpHeaders header = new HttpHeaders();
    // Here is the body:
    MultiValueMap<String, Object> bodyMap = new LinkedMultiValueMap<>();
    bodyMap.add("customer_file", new FileSystemResource(fileName));
    bodyMap.add("customer_name", customerJSON);
    HttpEntity<MultiValueMap<String, Object>> request = new HttpEntity<>(bodyMap, header);
    ResponseEntity<String> restResponse = remoteService.exchange(remoteServiceURL, HttpMethod.POST, request, String.class);

The customerJSON variable is a javax.json.JsonObject; in this way the multipart request chooses the right content type autonomously, and the same behaviour is expected when using an org.springframework.core.io.FileSystemResource instance.
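For reference, here is a minimal sketch of how such a metadata object could be built with javax.json; the field names are only illustrative, the real ones depend on the CRM contract:

    import javax.json.Json;
    import javax.json.JsonObject;

    // Metadata part of the multipart request; hypothetical field names.
    JsonObject customerJSON = Json.createObjectBuilder()
            .add("filename", fileName)
            .add("customer_account_id", "ACC-000123")
            .build();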

We did those tests:

  1. send a small file in order to check for malformed requests
  2. send a huge file to prove our application's robustness

We faced nothing important regarding this paper with test 1: some missing header values, a wrongly formatted URL input and so on.
Test 2 made us wait for a couple of minutes and then every Java developer's nightmare appeared:

    java.lang.OutOfMemoryError: Java heap space

The issue didn't arise merely because we ran the code in a development environment but also because the app tried to load the entire file content into RAM, making it more memory hungry than J. Wellington Wimpy.

It was clear from analyzing the application's memory footprint, sic et simpliciter.

To recap, this wasn't a great solution, from an architectural point of view as well, because:

  • we cannot assume a maximum size for incoming files
  • we cannot work on files sequentially

We needed to improve it.

The chunked solution

What we needed was to teach our code not to load the entire file content into memory but to use a feature HTTP/1.1 has supported since my years in college: chunked transfer encoding.

This feature tells the server that the incoming request is made of more than one HTTP chunk and that it needs to receive all of them before it starts processing.

The advantage from the client-side point of view arises from the fact that you load into memory only the slice you're transferring at the moment.

If you would like to know more about how HTTP implements chunked transfer encoding, follow these links: WIKI, W3C.
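To make the mechanism concrete, here is a rough sketch of what plain Java offers for the same idea outside of Spring: HttpURLConnection can be switched to chunked streaming mode so the body is written to the socket piece by piece. This only illustrates streaming a raw file body, not the multipart upload we actually built:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    URL url = new URL(remoteServiceURL);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("POST");
    connection.setDoOutput(true);
    // Ask for "Transfer-Encoding: chunked" with 8 KB chunks, so the body is
    // written to the socket piece by piece instead of being buffered in RAM.
    connection.setChunkedStreamingMode(8 * 1024);

    try (InputStream in = Files.newInputStream(Paths.get(fileName));
         OutputStream out = connection.getOutputStream()) {
        byte[] buffer = new byte[8 * 1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);   // only one buffer lives in memory at a time
        }
    }
    int status = connection.getResponseCode();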

We improved our class by properly configuring the RestTemplate:

    RestTemplate remoteService = new RestTemplate();
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    // Do not buffer the request body in memory: stream it with chunked transfer encoding
    requestFactory.setBufferRequestBody(false);
    remoteService.setRequestFactory(requestFactory);
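If you want to control how big each chunk is, the same factory lets you tune it; the value below is just an illustrative one, we didn't benchmark it:

    // Number of bytes written per chunk when the request body is not buffered
    requestFactory.setChunkSize(4096);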

We repeated test 2 and this time it went well.

We observed a memory footprint of less than 300 MB for the full transfer of a 1.5 GB file! A success!

Conclusions and Regards

In this paper I described the solution we found to transfer a large file; you can find multiple others using different libraries.

I would like to add that this feature comes only with HTTP/1.1 and that HTTP/2 no longer supports chunked transfer encoding; I think you have to wait for some sort of streaming API.

Here we chose to use the well-known RestTemplate class instead of the newer WebClient: I'm not able to tell you if you can adapt this to it.
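If you do want to try the WebClient road, a minimal sketch could look like the following. It assumes the spring-boot-starter-webflux dependency, a recent Spring version (5.2+) for MultipartBodyBuilder.contentType, and the same customer_file / customer_name part names used above; I haven't verified its memory footprint against the same 1.5 GB test:

    import org.springframework.core.io.FileSystemResource;
    import org.springframework.http.MediaType;
    import org.springframework.http.client.MultipartBodyBuilder;
    import org.springframework.web.reactive.function.BodyInserters;
    import org.springframework.web.reactive.function.client.WebClient;

    WebClient client = WebClient.create(remoteServiceURL);

    // Build the same two-part body: file content plus JSON metadata.
    MultipartBodyBuilder bodyBuilder = new MultipartBodyBuilder();
    bodyBuilder.part("customer_file", new FileSystemResource(fileName));
    bodyBuilder.part("customer_name", customerJSON.toString())
               .contentType(MediaType.APPLICATION_JSON);

    String restResponse = client.post()
            .contentType(MediaType.MULTIPART_FORM_DATA)
            .body(BodyInserters.fromMultipartData(bodyBuilder.build()))
            .retrieve()
            .bodyToMono(String.class)
            .block();   // blocking here only to mirror the RestTemplate example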

At the very end, I wish to thank Luca and Davide for the time spent working on the full solution which inspired this paper and, of course, for all the laughs we have every day.

Source: https://medium.com/swlh/transfer-large-files-using-a-rest-api-a0aa96983ebb
