
Building an API with microservices, synchronous vs. asynchronous, massive throughput difference: how to explain these performance results?

willson asked 3 months ago

I’m designing a simple API / web service system based on microservice architecture.

The architecture is as follows: two microservices, both built with Java, Spring, and Tomcat. The first microservice exposes the endpoints that external users can access. The second microservice also has endpoints, but these are for internal communication between the microservices. The second microservice does some processing and communicates with my mock server (WireMock) over HTTP. I have implemented two versions of this system.

My first version is simple: it uses Spring for RPC-style, REST-based communication between the microservices. The first microservice requests data from the second; the second microservice calls the mock server, processes the result, and returns an answer to the first microservice. Finally, the first microservice forwards this reply to its user. Blocking, synchronous communication.
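To make the blocking behavior concrete, here is a minimal sketch of one synchronous hop using only the JDK's built-in HTTP client and server (no Spring), since the question does not show the actual code. The `/process` path and the `"processed"` payload are hypothetical stand-ins for the second microservice; the key point is that `send()` holds the calling thread for the entire round trip, exactly as a Tomcat worker thread is held in the RestTemplate-style setup.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SyncCallDemo {

    // One blocking hop: the calling thread is occupied for the whole round trip.
    public static String demo() throws Exception {
        // Stand-in for the second microservice (hypothetical path "/process").
        HttpServer backend = HttpServer.create(new InetSocketAddress(0), 0);
        backend.createContext("/process", exchange -> {
            byte[] body = "processed".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        backend.start();
        try {
            int port = backend.getAddress().getPort();
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:" + port + "/process"))
                    .GET()
                    .build();
            // send() blocks until the full response arrives.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        } finally {
            backend.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "processed"
    }
}
```

If the backend call takes 1 second, each in-flight request pins one worker thread on the first microservice for that full second.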

The second version uses RabbitMQ with AMQP for a message-passing architecture. When a user accesses the first microservice, the user's open connection is stored in a table and the request is pushed onto a queue, where a listener on the second microservice consumes it. Once consumption finishes, the second microservice pushes the result onto a reply queue for a listener on the first microservice, which locates the open user connection and serves the response. Short blocking calls, asynchronous communication.
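The pattern described above (park the connection, correlate the reply) can be sketched with JDK primitives alone. This is an illustrative simulation, not the actual RabbitMQ code: the two `BlockingQueue`s stand in for the AMQP queues, a `ConcurrentHashMap` of `CompletableFuture`s stands in for the table of open connections, and all names are hypothetical.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncPatternDemo {
    // Stand-ins for the two RabbitMQ queues; a message is {correlationId, payload}.
    static final BlockingQueue<String[]> requestQueue = new LinkedBlockingQueue<>();
    static final BlockingQueue<String[]> replyQueue = new LinkedBlockingQueue<>();
    // The "table" of open user connections, keyed by correlation ID.
    static final Map<String, CompletableFuture<String>> openConnections =
            new ConcurrentHashMap<>();

    // First microservice's endpoint: park the connection, publish, return at once.
    static CompletableFuture<String> handleUserRequest(String payload) {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> pending = new CompletableFuture<>();
        openConnections.put(correlationId, pending);
        requestQueue.add(new String[] {correlationId, payload});
        return pending; // the HTTP worker thread is released immediately
    }

    public static String roundTrip(String payload) throws Exception {
        ExecutorService listeners = Executors.newFixedThreadPool(2);
        // Listener on the second microservice: consume, "process", reply.
        listeners.submit(() -> {
            while (true) {
                String[] msg = requestQueue.take();
                replyQueue.add(new String[] {msg[0], "processed:" + msg[1]});
            }
        });
        // Listener on the first microservice: match the reply to the parked connection.
        listeners.submit(() -> {
            while (true) {
                String[] msg = replyQueue.take();
                CompletableFuture<String> open = openConnections.remove(msg[0]);
                if (open != null) {
                    open.complete(msg[1]);
                }
            }
        });
        try {
            return handleUserRequest(payload).get(5, TimeUnit.SECONDS);
        } finally {
            listeners.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints "processed:hello"
    }
}
```

The crucial difference from the synchronous version is that no thread on the first microservice waits for the second one: the endpoint thread returns as soon as the message is queued, and a single listener thread completes any number of parked connections as replies arrive.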

Under no load, an API call returns an answer in 1 second.

In a load test at around 20 requests per second, throughput fails miserably in the first setup: the majority of requests take more than a minute to return an answer. With the second system, all responses come back in under a second.

The problem is that the scaling gap between the two architectures is too big. I know that message-based systems are more scalable, but should the difference be this huge? Are there any obvious flaws in my methodology, or things I have overlooked?

I thought it was a thread/resource problem in the first setup, so I looked closely at Tomcat and increased its maxThreads from the default of 200 to 400, but saw only minor differences. If this isn't the issue, how can I explain what is going on, and how can I investigate the performance more closely?
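One quick sanity check on the maxThreads hypothesis is Little's Law (average concurrency L = arrival rate λ × latency W). Assuming the 1-second no-load response time is the whole service time, the arithmetic below suggests only about 20 worker threads should be busy at 20 req/s, which is consistent with raising the pool from 200 to 400 changing little; the queueing is presumably happening somewhere other than the Tomcat pool (for example, a downstream HTTP connection pool or the mock server). The class and method names here are illustrative only.

```java
public class LittlesLawCheck {
    // Little's Law: average concurrency L = arrival rate (req/s) * latency (s).
    static double concurrentRequests(double requestsPerSecond, double latencySeconds) {
        return requestsPerSecond * latencySeconds;
    }

    public static void main(String[] args) {
        // Assuming the 1 s no-load response time is the full service time:
        double busyThreads = concurrentRequests(20, 1.0);
        System.out.println("Average busy threads at 20 req/s: " + busyThreads); // 20.0
        // That is far below Tomcat's default maxThreads of 200, so the Tomcat
        // worker pool alone cannot explain minute-long response times; some
        // other resource must be saturating and queueing requests.
    }
}
```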

1 Answer
Best Answer
Fernando answered 3 months ago