Performance Testing for WSO2 Deployments

Lashan Sivaganeshan

Before initiating performance tests, it’s essential to understand the objectives of performance testing. The key objectives are:

  1. Identifying Bottlenecks.
  2. Detecting and Preventing Issues and Mitigating Risks.
  3. Optimizing Resource Utilization.

With all these objectives, the ultimate goal is to provide a better user experience for the consumers of the software solution. In this regard, the following details should be considered when it comes to performance tuning, and they will help in achieving the above objectives.

Firstly, performance and performance testing shouldn’t be an afterthought. The stability of a solution, along with the expected performance, should be guaranteed before the go-live of a software solution. And this needs to be a continuous activity, given that software solutions will always continue to evolve and change. The following factors should be considered when planning performance testing.

  1. At least one environment should be identical to the production environment, and performance testing should be conducted in this environment.
  2. Performance tests should be done while considering the daily traffic patterns. That is, through the performance tests, all APIs and integrations should be triggered similarly to what happens in the production environment (the closer the test traffic is to production traffic, the more realistic the results). Considering traffic spikes and the regular growth of traffic, you may consider conducting performance tests at 5x or 10x of the regular traffic.
  3. To replicate the production traffic, first, it will need to be estimated. A few pointers to assist with these estimations are shared below.
  4. It’s important to run long-running tests as well (for multiple hours or days).
  5. Performance tests should be executed individually against components, and each component should be tuned so that better performance is achieved collectively for end-to-end use cases.
  6. It’s essential to run multiple test rounds (the duration of a single test round should be adjusted based on observations and objectives) until the best outcome is achieved from individual components while tuning the relevant parameters.
  7. It’s important to use appropriate tools to simulate the traffic for performance tests. The tools should have enough computing resources, and the resource usage (CPU, memory) of the tools should also be monitored and scaled appropriately (see the monitoring sketch after this list).
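As an illustration of the last point, here is a minimal sketch for watching the load generator’s own resource usage while a test runs. This is an assumption for illustration, not part of WSO2’s tooling; it uses the third-party psutil library, and the duration and interval values are arbitrary choices.

```python
# A minimal sketch (illustrative, not WSO2's tooling) that watches the load
# generator's own CPU and memory while a test runs. Requires the third-party
# psutil package (pip install psutil).
import time
import psutil

def monitor(duration_seconds: int, interval_seconds: int = 5) -> None:
    """Print host CPU and memory usage at a fixed interval."""
    end = time.time() + duration_seconds
    while time.time() < end:
        # cpu_percent blocks for the interval and returns the average usage.
        cpu = psutil.cpu_percent(interval=interval_seconds)
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.0f}% mem={mem:.0f}%")
        # If the test client itself is saturated (CPU pinned near 100%),
        # the measured throughput understates what the server can handle.

if __name__ == "__main__":
    monitor(duration_seconds=60)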

Now, with regard to WSO2 software products (these are tools to build solutions), the team normally publishes performance test results during product version releases. You can refer to the following artefacts about performance tests done by WSO2.

WSO2 API Manager [1]
WSO2 Micro Integrator [2]
WSO2 Identity Server [3]

Notes:

  1. Stats captured in the above links regarding WSO2 tests are only for reference; each solution/deployment will have differences depending on multiple factors, explained below.
  2. In its performance tests, WSO2 has used Apache JMeter [4] as the testing tool. A tool like JMeter captures all the key stats, including throughput and average response time, and also allows you to manage the concurrency.

The overall performance of a software solution depends on various factors, and performance tests should be done to isolate the contributing factors and to validate resolutions and tuning on each of them. The performance of a system would generally depend on the following elements.

  1. Network latencies.
  2. Computing power.
  3. Processing delays of each component.
  4. Transaction flows and data involved in transactions.
  5. Thread pools, connection pools, etc. available in each software component.

Each solution/deployment will have differences depending on (but not limited to) the above factors. Therefore, it’s critical to have comprehensive plans for the performance test setup/setups of each project. These artefacts and infrastructure will help determine proper tuning whenever scaling is needed in the deployment.

With regard to replicating the production traffic in a lower environment for the purpose of performance validation and tuning, the input parameter will be the request concurrency. This is the number of requests reaching each software component within a given time period (e.g. the number of requests per second). While planning performance tests, the expected concurrency for the upcoming weeks/months needs to be estimated. The following parameters will assist in determining a comprehensive test input while simulating production traffic.

  1. Number of published/deployed APIs in the WSO2 API Manager.
  2. Number of applications subscribing to each API.
  3. Number of users included with each application.
  4. Number of integrations (proxies, APIs, scheduled tasks, etc.) in the WSO2 Micro Integrator.
  5. Grant flows included in the login (token requests); these might target the WSO2 Identity Server or the WSO2 API Manager, depending on the deployment.

Based on the above parameters, the expected concurrency for each application and each API needs to be estimated, as sketched below.
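A hypothetical back-of-the-envelope estimation might look like the following. Every figure in it is a made-up input for illustration; replace them with values observed in your own deployment.

```python
# A hypothetical back-of-the-envelope concurrency estimation. Every figure
# below is a made-up input; use values from your own deployment instead.
apis = 20                      # published APIs in WSO2 API Manager
apps_per_api = 3               # applications subscribing to each API
users_per_app = 100            # users included with each application
active_ratio = 0.10            # fraction of users active at peak
requests_per_user_per_min = 2  # request rate of an active user

total_users = apis * apps_per_api * users_per_app
peak_rps = total_users * active_ratio * requests_per_user_per_min / 60

print(f"total users: {total_users}")                 # 6000
print(f"estimated peak load: {peak_rps:.0f} req/s")  # ~20 req/s
# Headroom for spikes and growth, per the 5x/10x guideline above:
print(f"test at 5x: {peak_rps * 5:.0f} req/s, at 10x: {peak_rps * 10:.0f} req/s")
```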

Once the concurrency is determined, that’s what we should implement in the test suite. A tool like JMeter will allow you to implement this behaviour using “threads”, where each thread simulates a single user. The sketch below illustrates the same model in plain code.
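The following is a minimal, hypothetical Python sketch mimicking JMeter’s thread-group model: a fixed pool of worker threads, each sending requests in a closed loop for the test duration. The endpoint URL, thread count and duration are illustrative assumptions, not values from the WSO2 tests.

```python
# A minimal, hypothetical load-generator sketch that mimics JMeter's
# thread-group model. URL, thread count and duration are assumptions.
import threading
import time
import urllib.request

TARGET_URL = "https://gateway.example.com/api/echo"  # hypothetical endpoint
CONCURRENCY = 50       # simulated users, i.e. JMeter "threads"
DURATION_SECONDS = 60  # length of one test round

latencies = []           # per-request response times, in seconds
lock = threading.Lock()  # protects the shared list

def worker(stop_at: float) -> None:
    # Each thread sends its next request as soon as the previous response
    # arrives, exactly like a JMeter thread with no think time.
    while time.time() < stop_at:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except OSError:
            continue  # a real suite would record this as an error
        with lock:
            latencies.append(time.perf_counter() - start)

stop_at = time.time() + DURATION_SECONDS
threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(CONCURRENCY)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} requests, throughput ~ {len(latencies) / DURATION_SECONDS:.1f} req/s")
```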

While running the tests, the following details should be captured for each test run (a sketch showing how the result metrics are derived from raw measurements follows this list).

  1. Test flow (included APIs, grant types, backend and client components, etc.).
  2. Concurrency of requests (as explained above, this is a parameter we adjust as an input of the test suite; to achieve the expected concurrency, the test suite should have the correct computing resources, and higher concurrencies might require multiple instances of the test clients running in parallel).
  3. Throughput (this is a result we observe from the test execution and is explained below).
  4. Average response time and percentiles (similar to “Throughput”, this is a result of the test execution and is explained below).
  5. Error rate (if any; ideally this should be zero or very close to zero).
  6. CPU and memory usage of the host machine (VM or cluster).
  7. Logs of each software component to troubleshoot any errors.
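To show how these result metrics relate to the raw measurements, the following sketch derives throughput, average response time, percentiles and error rate from a list of recorded per-request latencies. The sample numbers are made up; a tool like JMeter computes these figures for you in its summary report.

```python
# Deriving the captured result metrics from raw per-request measurements.
# The latency samples and counts below are made-up illustrative data.
import statistics

latencies_ms = [98, 105, 110, 101, 250, 97, 103, 99, 108, 102]  # successes
errors = 0            # failed requests in the same window
window_seconds = 2    # wall-clock length of this (tiny) sample window

total = len(latencies_ms) + errors
throughput = total / window_seconds               # requests per second
avg_rt = statistics.mean(latencies_ms)            # average response time (ms)
cuts = statistics.quantiles(latencies_ms, n=100)  # 99 percentile cut points
p90, p99 = cuts[89], cuts[98]
error_rate = 100.0 * errors / total               # ideally zero or near zero

print(f"throughput={throughput:.1f} req/s, avg={avg_rt:.1f} ms, "
      f"p90={p90:.1f} ms, p99={p99:.1f} ms, error rate={error_rate:.2f}%")
```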

Test results from the reference WSO2 tests are available in the links above [1][2][3], where you can see how the above details are captured for each test run.

Following is one of the sample test results, extracted from [5] (with the minimum details required to analyse performance).

Test Scenario: WSO2 Identity Server 5.10 (two nodes, 4 cores each): obtain an access token and an ID token using the OAuth 2.0 password grant type.

[Results table from [5]: one row per concurrency level, with columns for Concurrent Users, Throughput (requests/second) and Average Response Time (ms).]

If we try to break down the numbers in this table…

  1. The first column refers to the number of concurrent users that access the system within a given amount of time. This means all these users are invoking a service simultaneously (in the above case, the service is obtaining an access token and an ID token using the OAuth 2.0 password grant type from the WSO2 Identity Server). This is the input parameter of the performance test; as explained above, it needs to be determined from the software usage and used to plan the performance test. The test tool’s objective is to simulate the production concurrency as closely as possible.
  2. The second and third columns are the actual test results (the outcome of the test). Throughput refers to the number of transactions processed within a given period (in the above stats, the number of requests processed per second). Average response time is the time taken to process one single request (in the above stats, in milliseconds).

As depicted above, the same test scenario has been executed multiple times while increasing the concurrency. If certain parameters are then tuned based on findings from the analysis, the same test with the same concurrency should be executed multiple times to validate the tuned parameters.

There’s a direct relationship between concurrency, response time and throughput, depending on the performance of the software. If we check the above table’s first row, the details are as follows.

Concurrent users (concurrency): 50
Average response time: 106.97 ms (~100 ms)
Throughput: 466.64 req/second (~500 req/second)

What these numbers mean is that 50 users (50 threads from the test tool) are continuously sending requests to the server throughout the entire test duration (the duration is also a parameter controlled for the test). For one thread to get a response, it takes roughly 100 ms. In that sense, a single thread from the test tool can send 10 requests within a second and get processed responses (i.e. 10 x 100 ms = 1000 ms = 1 second). When all 50 threads manage to get processed responses, that is 50 x 10 = 500 requests per second. And that’s the rough throughput (Transactions Per Second, TPS) of the server. A quick sanity check of this arithmetic follows.
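This back-of-the-envelope relationship is essentially Little’s Law: throughput is approximately concurrency divided by average response time. Checking it against the exact figures from the first row:

```python
# Sanity-checking the first row of the table with Little's Law:
# throughput ~ concurrency / average response time.
concurrency = 50
avg_response_time_s = 106.97 / 1000  # 106.97 ms, converted to seconds

predicted_tps = concurrency / avg_response_time_s
print(f"predicted throughput ~ {predicted_tps:.1f} req/s")
# Prints ~467.4 req/s, very close to the measured 466.64 req/s.
```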

Concurrency vs. Response Time vs. Throughput

If we go through each row of the above table, it’s clear that the response time of the software increases when the concurrency is increased, and the observed throughput depends on the concurrency. This observation is depicted in the following diagrams from [6].

WSO2 API Manager 4.3.0: Response Time against concurrency
WSO2 API Manager 4.3.0: Throughput against concurrency

Once again, the above graphs are just references. To determine the capacity of the actual solution, conducting performance tests under similar conditions is absolutely necessary. That’s when decisions on scaling (horizontal and vertical) can be made. The decisions will depend on whether the provided response time is sufficient for the SLA of the business and whether it addresses end-user expectations.

Another point to consider with performance tests would be to check resilience to request bursts. In production setups, request spikes can occur for various reasons, such as promotional activities and events. It would make sense to simulate these scenarios as well in performance tests to guarantee the stability of the solution. The WSO2 team has done a test activity on this with the WSO2 Identity Server, and the detailed results are available in [7]. A rough sketch of a burst profile is shown below.
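One hypothetical way to express such a burst in a test plan is as a target-concurrency profile over time: steady load, a sharp spike, then back to steady. In JMeter this can be approximated with an additional thread group that starts and stops around the spike window. All the numbers in this sketch are illustrative assumptions, not WSO2’s actual test plan.

```python
# A hypothetical burst profile: steady concurrency with one short spike.
# All numbers are illustrative assumptions.
BASE_USERS = 50     # steady-state concurrent users
BURST_USERS = 250   # concurrency at the peak of the spike
BURST_START = 300   # seconds into the test when the spike begins
BURST_LENGTH = 60   # how long the spike lasts, in seconds
TEST_LENGTH = 600   # total test duration, in seconds

def target_concurrency(t: int) -> int:
    """Return how many worker threads should be active at second t."""
    if BURST_START <= t < BURST_START + BURST_LENGTH:
        return BURST_USERS
    return BASE_USERS

# A driver would start or pause worker threads each second to follow this.
profile = [target_concurrency(t) for t in range(TEST_LENGTH)]
print(f"steady={profile[0]} users, peak={max(profile)} users")
```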

Finally, I want to touch upon conducting tests individually against components, using the following solution diagram.

Performance Testing on WSO2 deployments

Depicted above is a generic diagram explaining a software solution that uses the WSO2 products to address the following requirements.

WSO2 Identity Server: Identity and access management, and key generation for API requirements.

WSO2 API Manager: Management and governance of APIs, and enforcement of policies on API calls.

WSO2 Micro Integrator: Integration of different software components.

The performance of the complete solution will depend on the performance of each of the above WSO2 products, as well as the capacity and performance of the service endpoints and client applications. To narrow down and tune each component individually, while also understanding the scaling needs of each independent component, performance tests should be carried out at each individual layer, as shown in the above diagram.

In this aspect, it’s important to first assess the performance of the service endpoints (backend services) individually (JMeter 1). Once that’s established, the WSO2 components can be battle-tested for performance (JMeter 2, 3, 4). Finally, end-to-end solution-level testing can be done while including the client applications. The details highlighted above will be valid for each layer of the solution.

Conclusion
In conclusion, performance testing for WSO2 deployments is a critical and ongoing activity that ensures software solutions meet user expectations and business requirements. By identifying bottlenecks, preventing issues, and optimizing resource utilization, performance testing ultimately enhances the user experience. It’s crucial to conduct these tests in environments that mirror production, using appropriate tools to simulate realistic traffic patterns and capture detailed metrics. A comprehensive approach to performance testing, from estimating traffic to fine-tuning individual components, ensures that each part of the deployment can handle the expected load efficiently. Regularly updating performance benchmarks and running long-duration tests help in maintaining the stability and scalability of the solution. By following these best practices, organizations can ensure their WSO2 deployments are robust, resilient, and capable of delivering optimal performance under varying conditions.

Well done if you have fully read all the details. :) The complexity of performance testing can vary depending on several factors. Please feel free to comment with your thoughts on performance tests and share any additional insights or experiences you may have in optimizing WSO2 deployments for maximum efficiency.

Thanks.

[1] https://github.com/wso2/performance-apim/tree/performance-test-448-2024-03-12_13-09-17/performance/benchmarks

[2] https://github.com/wso2/micro-integrator/tree/v4.3.0-m1/performance/benchmarks

[3] https://github.com/wso2/performance-is/tree/master/benchmarks/7.0.0

[4] https://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.html

[5] https://github.com/wso2/performance-is/blob/master/benchmarks/5.10.0/5.10.0_two-nodes_4-core.md

[6] https://apim.docs.wso2.com/en/latest/install-and-setup/setup/deployment-best-practices/performance-tests-results/

[7] https://github.com/wso2/performance-is/blob/performance-graphs/benchmarks/6.1.0/performance_visualization_v2/summary-graph.md
