Testing System Performance for a Rating Agency: Analytics, Harmonization & Sharing
Our client is a multinational rating agency. Their data, analytical solutions and insights help decision-makers identify opportunities and manage the risks of doing business with others. The company has over 11,000 employees in more than 40 countries, a global presence, and more than a century of experience in financial markets.
The company’s main domains are:
- accessing financial reports in different formats from public and private sources;
- creating ratings.
The processes above are organized through an internal system that automates the company’s entire workflow. To evaluate the system’s performance, they engaged the Performance Lab team.
Our goal was to ensure that there were no critical performance defects in the main or deployment functionality. Additionally, we intended to compare performance measurements across system releases, i.e., to run regression tests; a sketch of what such a comparison can look like follows.
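As an illustration only (this is not the client’s actual tooling), here is a minimal Java sketch of a release-to-release comparison: it computes the 90th-percentile response time for two test runs and flags a regression when the newer release degrades beyond a tolerance. All numbers and names are invented.

```java
import java.util.Arrays;

/** Minimal sketch: flag a performance regression between two releases. */
public class RegressionCheck {

    /** 90th-percentile response time of a sample, in milliseconds. */
    static double p90(double[] responseTimesMs) {
        double[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(0.90 * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        // Hypothetical measurements from two runs of the same test scenario.
        double[] release1 = {310, 325, 340, 360, 410, 420, 450, 470, 520, 600};
        double[] release2 = {320, 330, 355, 380, 430, 460, 500, 540, 610, 700};

        double baseline  = p90(release1);
        double candidate = p90(release2);
        double tolerance = 0.10; // allow up to 10% degradation

        if (candidate > baseline * (1 + tolerance)) {
            System.out.printf("REGRESSION: p90 grew from %.0f ms to %.0f ms%n",
                    baseline, candidate);
        } else {
            System.out.printf("OK: p90 %.0f ms -> %.0f ms, within tolerance%n",
                    baseline, candidate);
        }
    }
}
```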
We were going to work with several system modules responsible for rapid document processing and the subsequent analytics.
The first module is based on the Pega Case Manager Portal. Its main functions are: creating spreads (manually or automatically); setting spread parameters, such as the period and the accounting standard; loading documents for an industry, sector, or issuer company; changing and monitoring spread statuses (audit); requesting documents; and enriching spreads.
The second module is a Java EE application running on JBoss. Its main functionality is work with an individual spread: mapping key-value pairs of financial parameters into a tree structure (a sketch of such a mapping follows below); adjusting parameters; saving the data; and submitting the spread.
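To make the tree-mapping step more concrete, here is a minimal Java sketch. It assumes, purely for illustration, that parameter keys arrive as dot-separated paths (e.g. `balance.assets.cash`); the client’s real key scheme and data model are not shown here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal sketch: fold flat key-value pairs into a tree of nested maps. */
public class SpreadTree {

    /** Insert one dot-separated key, e.g. "balance.assets.cash", into the tree. */
    @SuppressWarnings("unchecked")
    static void put(Map<String, Object> root, String key, Object value) {
        String[] parts = key.split("\\.");
        Map<String, Object> node = root;
        for (int i = 0; i < parts.length - 1; i++) {
            // Walk down the path, creating intermediate nodes as needed.
            node = (Map<String, Object>) node
                    .computeIfAbsent(parts[i], k -> new LinkedHashMap<String, Object>());
        }
        node.put(parts[parts.length - 1], value);
    }

    public static void main(String[] args) {
        Map<String, Object> tree = new LinkedHashMap<>();
        // Hypothetical financial parameters; the real key scheme was the client's.
        put(tree, "balance.assets.cash", 1200.0);
        put(tree, "balance.assets.receivables", 800.0);
        put(tree, "balance.liabilities.debt", 1500.0);
        System.out.println(tree);
        // {balance={assets={cash=1200.0, receivables=800.0}, liabilities={debt=1500.0}}}
    }
}
```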
First we had to test the components separately; after that, we observed the system as a whole.
How we managed it
The work scope included the following activities:
- monitoring performance deviations after source-code changes;
- providing utilization statistics for backend resources and response times;
- estimating how many clients the system can handle in its current state (see the sketch after this list);
- identifying the bottlenecks that prevent the system from reaching a higher load;
- providing a list of recommendations for improvement, together with the effort each would require.
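For the capacity estimate, one standard technique (named here as an illustration; not necessarily the exact method used on this project) is Little’s Law, N = X × (R + Z): with throughput X, response time R and think time Z, roughly N concurrent users keep the system at that throughput. A minimal Java sketch with invented numbers:

```java
/** Minimal sketch: estimate supported concurrency via Little's Law. */
public class CapacityEstimate {
    public static void main(String[] args) {
        // Hypothetical numbers measured during a load test.
        double throughputPerSec = 50.0;  // X: requests the backend sustains per second
        double responseTimeSec  = 0.8;   // R: average response time under that load
        double thinkTimeSec     = 10.0;  // Z: pause between a user's requests

        // Little's Law: N = X * (R + Z) users keep the system at X req/s.
        double concurrentUsers = throughputPerSec * (responseTimeSec + thinkTimeSec);
        System.out.printf("Estimated supported concurrency: %.0f users%n",
                concurrentUsers);
        // Prints: Estimated supported concurrency: 540 users
    }
}
```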
Our choice of tools was based on the system’s specifics, the scope of work and our professional experience. We used the following technologies during testing:
- Load testing: HP ALM Performance Center (protocols: HTTP/HTML and TruClient; network virtualization: HP Shunra; script communication: Virtual Table Server);
- Traffic sniffer: Fiddler 4;
- Monitoring/reporting: AppDynamics, Splunk, HP Analysis, Trend Reports (Performance LifeCycle), AWR analysis;
- Bug tracking: JIRA.
Even when the workflow is well organized and the team is highly professional, it is impossible to anticipate every detail. We faced quite a few challenges:
- Complicated traffic, typical of the Pega platform. We solved this with TruClient scripts, which was feasible partly because the number of users was relatively small.
- Managing a geographically dispersed testing team. Working from home as productively as from the office is not easy; we handled it by keeping to the work schedule, following Scrum practices, and using coordination tools.
- For the first nine months of the project we did not receive non-functional requirements, so we relied on industry standards instead.
The project was carried out by a remote team that did its best to turn the time difference into an asset. And we managed it!
We built the testing process from scratch at the customer’s company and fixed countless defects in the system. As a result, we successfully deployed the update and made the client happy.
For our part, we expanded our knowledge of testing such systems and became more proficient TruClient users.