
Automated UI Testing

Our client is a social dating app with worldwide recognition and presence in over 100 countries.
As a very successful social media application with millions of subscribers and over a billion swipes per day, the client's engineering team was, and still is, challenged to release regular software updates across the Android, iOS, and Web platforms on a bi-weekly basis while maintaining high standards of quality for its users.

The rate of changes per release was increasing drastically as the product pushed new ideas via A/B tests, and manual functional testing alone could not keep pace with bi-weekly releases.
In 2016, Performance Lab was asked to join forces with the in-house engineering team to build what was, at the time, a novel native test automation solution for automated UI testing: the newly introduced XCUITest library for iOS and the Espresso library for Android.

As a result:

  • ~20% of all functional test cases were automated
  • full regression time was reduced by ~30%
  • automated client analytics tests were finding P0/P1 bugs almost every release (this functionality is extremely time-consuming to test manually)

Success seemed to be achieved until our team started running into CI battles with developers.
Let’s examine a typical CI architecture:

CI with pre-merge tests (Classical Case):

  • 1. GitHub repository with app codebase
  • 2. Jenkins CI
  • 3. Each PR and its subsequent commits trigger a chain of checks against the PR branch, including but not limited to code compilation, unit tests, and code style validation
  • 4. The checks described above block the PR merge: if any of them fails, the PR won't be merged into the main development branch until the issue is addressed.



The task: add automated UI tests for the Android and iOS repositories to run alongside the other checks (compilation, unit tests, lint) on each pull request.


  • 1 UI tests are flaky by nature. Flakiness can depend on many factors: USB connections to devices, internet connection stability, and so on

  • 2 Constant UI changes in the app lead to UI test failures and require constant updates to the test code

  • 3 Since we use native testing frameworks (Espresso, XCTest) to write fast and reliable iOS and Android UI tests, these tests live in the same repository as the app's code. Therefore, when a developer opens a PR that breaks one or more of these tests, changing or excluding the broken test requires another commit or PR. As a result, all checks have to run again, which is time consuming. The developer is not only blocked but also irritated: they may simply have changed a UI flow that the UI test reacted to (a false positive). An unhappy, irritated developer will naturally oppose running UI tests pre-merge and push to move them to post-merge execution.
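A common mitigation for the flakiness described in point 1 is retrying a failing test a small number of times before declaring it red. A minimal sketch, where the `retry` helper and the simulated flaky test are ours for illustration (not part of Espresso or XCTest):

```python
# Hypothetical retry helper: re-run a flaky test body up to `attempts`
# times; report failure only if every run fails.
def retry(attempts, action):
    # any() stops at the first passing run, so a green run ends the loop early
    return any(action() for _ in range(attempts))

runs = 0
def flaky_test():
    # Simulated flaky UI test: fails twice, then passes.
    global runs
    runs += 1
    return runs >= 3

print(retry(3, flaky_test))  # → True (passes on the third run)
```

Retries reduce noise but do not remove the root cause, which is why a more systematic fix was needed.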


Test Orchestrator



Our solution was Test Orchestrator, a framework for managing tests in CI.

  • 1 The ability to enable/disable specific automated tests without any change to source code. If a test fails due to a developer change or another cause, it can be quickly disabled from the main pool of tests to unblock CI. The orchestrator works with both unit tests and UI tests!

  • 2 Quarantine is another feature of Test Orchestrator. A newly added test has to pass 10 consecutive times before being added to the CI pool of tests. The same rule applies to updated/fixed tests. This feature eliminates flakiness in CI!
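The quarantine rule amounts to a consecutive-pass counter per test. The sketch below illustrates the idea only; the class and method names are ours, not the orchestrator's real API:

```python
# Illustrative quarantine tracker: a test graduates to the CI pool only
# after `required` consecutive passing runs; any failure resets the streak.
class Quarantine:
    def __init__(self, required=10):
        self.required = required
        self.streaks = {}  # test name -> current consecutive-pass count

    def record(self, test, passed):
        self.streaks[test] = self.streaks.get(test, 0) + 1 if passed else 0

    def in_ci_pool(self, test):
        return self.streaks.get(test, 0) >= self.required

q = Quarantine(required=3)
q.record("login_flow", passed=True)
q.record("login_flow", passed=True)
q.record("login_flow", passed=False)   # failure resets the streak to zero
for _ in range(3):
    q.record("login_flow", passed=True)
print(q.in_ci_pool("login_flow"))  # → True
```

Resetting the streak on any failure is what makes the gate strict: a test that passes nine times and fails once starts over, so only consistently green tests reach CI.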

Use cases:

  • 1 A test fails in CI due to UI changes. The developer is blocked even though they did not break the test deliberately. The automation team, or the developer themselves, disables the test from Test Orchestrator's web portal and opens a Jira task for the automation team to update the test.

  • 2 A test fails due to an actual bug that the developer introduced in a pull request. After examining the new bug, the product team decides to fix it in a following sprint. The dev or QA engineer then performs the following actions:
    • a. The developer or QA engineer disables the test from the Test Orchestrator web portal and links the Jira bug for reference.
    • b. When the defect is fixed, the test's status is updated and it moves to the quarantine job for validation.


Looking back at all the work performed on the project, we can definitely call it a success. Despite having to deal with a constantly changing app and very tight deadlines, we managed not only to write and run multiple automated UI test cases successfully, but also to confirm the need for process changes and subsequently come up with a solution that eliminates many of the issues associated with UI test automation.

All this helped our Performance Lab team recognize the importance of strategy and detailed planning, especially in the early stages of a project. In addition, working on a multinational team across different time zones, and under tight deadlines, gave us all an opportunity to apply Agile methodology and focus on the quality of the end product for our client. In doing all that, we not only helped deliver a better product and service and improved a few metrics, but our team also managed to learn and grow.

Got a project in mind?

There's no better place for a QA solution than Performance Lab.
Drop us a line to find out what our team can do for you.

Request a quote