Quality Gates Next: use cases illustrated in a real banking project
Modern software development is focused on speed, but work must be delivered not only
quickly but also with high quality. To this end, we offer Quality Gates Next.
What is Quality Gates Next?
Quality Gates Next is a solution built on tools that provide continuous monitoring and improvement of product quality through state-of-the-art development methodologies, such as Agile and DevOps. This solution is already available. It is well-tuned and can be quickly adapted.
QGN provides eight quality gates that a product must pass before it is released. These gates are
positioned at the end of each significant stage in the development life cycle.
Distinctive features of application of the methodology:
- Continuous quality assurance
- Discovery of most problems in the early stages of development
- Simple detection of points of quality degradation
- Reduced testing costs
- Synergy with DevOps practices
- No need to change familiar development processes
Got a project in mind?
There is no better place for a QA solution than Performance Lab. Drop us a line to find out what our team can do for you.
So, what did Performance Lab need to achieve?
Our customer is a leading bank, providing a wide range of banking products and services to retail and corporate clients. The bank’s main activities are retail, corporate, and investment banking.
The customer asked us to help automate the processes used to test the banking system. The purpose of the project was to move deliveries into production more quickly and to increase quality.
How did we do it?
- We used the Java programming language to develop automated tests. Selenide was our tool of choice for automating browser actions, with Selenoid used to organize the browser infrastructure. The project was built using Gradle.
- To implement one of the gates, we used the Kubernetes container-orchestration tool. We developed our pipeline using Jenkins, which deploys the application when developers make a delivery. It also runs the automated tests.
- We wrote UI tests that emulate the behavior of real users and API tests for testing back-end requests.
- Test scenarios were developed using Gherkin.
- We displayed the test results in an Allure report. All source code was stored in GitLab.
- Depending on the results of an automated test run, the pipeline was either interrupted or proceeded to the next stage, which in our case was manual functional testing.
- The stages were displayed in the Jenkins plugin. As soon as all stages were completed and the results satisfied the gate criteria, the delivery moved to pre-production, where acceptance testing was performed.
- We used Apache JMeter to develop scripts for load testing. We created stubs for external systems using Mountebank.
- We built our monitoring based on Telegraf, InfluxDB, and Grafana, which made it possible to monitor the test progress in real time and easily analyze the results.
- Testing was launched from Jenkins, where several tasks were configured for this.
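The gate decision at the end of an automated run (interrupt the pipeline or proceed to manual functional testing) can be sketched as follows. The class name and threshold values here are illustrative, not the bank's actual criteria:

```java
// Minimal sketch of a quality-gate decision: the pipeline proceeds to the
// next stage only when the automated test run satisfies the gate criteria.
// GateCriteria and its thresholds are hypothetical names for illustration.
public class GateCriteria {
    private final double minPassRate;    // e.g. 0.95 = 95% of tests must pass
    private final int maxFailedCritical; // hard cap on failed critical tests

    public GateCriteria(double minPassRate, int maxFailedCritical) {
        this.minPassRate = minPassRate;
        this.maxFailedCritical = maxFailedCritical;
    }

    /** Returns true when the delivery may move to the next stage. */
    public boolean passes(int passed, int failed, int failedCritical) {
        int total = passed + failed;
        if (total == 0) return false; // no test evidence, no pass
        double passRate = (double) passed / total;
        return passRate >= minPassRate && failedCritical <= maxFailedCritical;
    }

    public static void main(String[] args) {
        GateCriteria gate = new GateCriteria(0.95, 0);
        System.out.println(gate.passes(98, 2, 0));  // high pass rate, no critical failures
        System.out.println(gate.passes(90, 10, 1)); // below threshold: pipeline is interrupted
    }
}
```

In the real pipeline a non-passing gate simply fails the Jenkins stage, which stops the delivery from moving forward.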
What problems did the team encounter along the way?
Folks on the business team wanted to get a more meaningful metric than simply the number of API tests passed, since each API test could consist of a completely different number of checks. Accordingly, we needed to compile statistics regarding passed, failed, and skipped checks.
Unfortunately, neither the standard test frameworks (TestNG and JUnit) nor the RestAssured tool makes it possible to collect such detailed statistics. To overcome this limitation, we reworked the test automation framework to collect check-level statistics, and customized the Allure report template to display them in a dedicated information block.
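The idea behind the rework can be sketched as follows; the class and method names are our own illustration, not the bank's framework. Each test records its individual checks, so the report can show passed, failed, and skipped checks rather than a single per-test verdict:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative sketch: a collector that records every check inside an API
// test as passed, failed, or skipped, instead of collapsing the whole test
// into one result. Names here are hypothetical.
public class CheckCollector {
    public enum Status { PASSED, FAILED, SKIPPED }

    private record Check(String name, Status status) {}

    private final List<Check> checks = new ArrayList<>();

    /** Run one check; a false result or runtime exception marks it failed
     *  without aborting the rest of the test. */
    public void check(String name, BooleanSupplier assertion) {
        try {
            checks.add(new Check(name, assertion.getAsBoolean() ? Status.PASSED : Status.FAILED));
        } catch (RuntimeException e) {
            checks.add(new Check(name, Status.FAILED));
        }
    }

    public void skip(String name) { checks.add(new Check(name, Status.SKIPPED)); }

    public long count(Status s) { return checks.stream().filter(c -> c.status() == s).count(); }

    public static void main(String[] args) {
        CheckCollector c = new CheckCollector();
        c.check("HTTP status is 200", () -> 200 == 200);
        c.check("response has client id", () -> false);
        c.skip("optional currency field");
        System.out.printf("passed=%d failed=%d skipped=%d%n",
            c.count(Status.PASSED), c.count(Status.FAILED), c.count(Status.SKIPPED));
        // prints: passed=1 failed=1 skipped=1
    }
}
```

These per-check totals are exactly what the customized Allure block can then aggregate across the run.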
The infrastructure of the tested application and the environment itself were built on container technologies, which impose lower performance overhead than the classic full-virtualization approach. Therefore, when implementing the UI autotests, we changed our approach to organizing the browser infrastructure: we migrated from Selenium Grid to Selenoid, which fit naturally into the Kubernetes cluster environment, offered more flexible configuration, and allowed us to rapidly restart containers with the target browsers.
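For reference, Selenoid describes its browser fleet in a browsers.json configuration file; a minimal example is shown below (the image name and version tag are illustrative, not the project's actual configuration):

```json
{
  "chrome": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/chrome:latest",
        "port": "4444",
        "path": "/"
      }
    }
  }
}
```

Because each browser runs in its own container, replacing or restarting a browser is just a matter of pulling a different image.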
The servers had no direct access to the Internet. To solve this problem, we deployed a local Docker image repository, Harbor Docker Registry. This solution made it possible to provide secure access to the repository via SSL and to flexibly manage read and write permissions for images in the repository.
When running the project on a test bench, we needed to quickly deploy stubs for two external systems. After weighing the possible solutions, instead of writing a stub from scratch we selected the Mountebank application, which we configured to quickly create working stubs for these systems.
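Mountebank defines each stub as an "imposter" described in JSON: a predicate matches the incoming request, and a canned response is returned. A sketch of such a configuration is shown below (the port, path, and payload are illustrative, not the bank's real interfaces):

```json
{
  "port": 4545,
  "protocol": "http",
  "name": "external-system-stub",
  "stubs": [
    {
      "predicates": [
        { "equals": { "method": "GET", "path": "/accounts/balance" } }
      ],
      "responses": [
        { "is": { "statusCode": 200, "body": { "balance": 1000.0, "currency": "RUB" } } }
      ]
    }
  ]
}
```

Posting such a document to Mountebank's administration endpoint brings the stub up without writing or deploying any code.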
While analyzing information about the system, we learned that the business is unable to provide statistical information about users’ activities on the production server. Using our rich experience in testing online banking systems for other banks, we were able to successfully build a load profile, which later allowed us to carry out high-quality tests.
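Building the profile boils down to splitting a target total intensity across operation types. The sketch below uses made-up operation names and traffic shares; the real shares came from our experience with similar online banking systems:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of deriving per-operation load from an assumed traffic
// mix. All operation names and shares here are hypothetical examples.
public class LoadProfile {
    /** Split a target total throughput across operations by their share of traffic. */
    public static Map<String, Double> perOperationTps(double totalTps, Map<String, Double> mix) {
        Map<String, Double> result = new LinkedHashMap<>();
        mix.forEach((op, share) -> result.put(op, totalTps * share));
        return result;
    }

    public static void main(String[] args) {
        Map<String, Double> mix = new LinkedHashMap<>();
        mix.put("login", 0.20);          // assumed shares of total traffic
        mix.put("view_statement", 0.50);
        mix.put("transfer", 0.25);
        mix.put("open_deposit", 0.05);
        perOperationTps(40.0, mix).forEach((op, tps) ->
            System.out.printf("%s: %.1f req/s%n", op, tps));
    }
}
```

The per-operation rates then map directly onto JMeter thread groups and throughput timers in the load scripts.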
What results were achieved?
Our testing helped us identify bottlenecks and give the customer recommendations on how to optimize the system.
Deployment to the Kubernetes cluster was implemented efficiently within the existing constraints. The cluster, along with the tools accompanying the project, was put into the production environment, and bank employees were trained to use the graphical interface for managing and monitoring the K8s cluster.
Our work achieved the main purpose of the project: the time required to install updates was reduced fivefold, and risks were minimized. The project managers on the bank’s side rated the results of introducing modern IT tools highly, and Performance Lab, in turn, gained valuable experience.