Diving Headfirst into Automated Testing

  • Tanya Bala, QA Engineering Manager
  • Harsha Pai, AVP & Head of Engineering

We at Chai Point have been at the heart of the chai-led beverage revolution in India for more than 10 years now. Serving over 7 lakh cups per day, with more than 100 million cups delivered over the past year, our omnichannel network of physical stores, cloud kitchens, virtual stores, and 3rd-party vending sites at offices, F&B players, etc. makes us the largest chai-led beverage platform in the world.

Our aspiration is to reach a scale of 60,000+ distribution points and achieve a daily cup count of 10 million beverages.

An aspiration of this scale is only possible through deep investments in our cloud-connected brewing systems.

This series of blog posts shares greater detail on the different building blocks of our technology stack.

In this blog, we present our test automation journey.

How often have you been stuck in the never-ending cycle below?

[Diagram: the never-ending cycle]

Well, we at Chai Point have been stuck in it often, and this is the story of how we managed to break out of it and ensure that automated testing is incorporated into our development life cycle.

Start-ups are cut-throat: the environment is fast-paced, the expectations are sky-high, and time and resources are often limited, but the outcome still needs to be better than the best. This was no different at Chai Point. Without a dedicated Automation Engineer, there was never going to be a golden window to get things kick-started. So we just dove right in and figured things out along the way. After all, what is a journey without a few hiccups?

We outlined our journey to cover the following aspects:

  • Identifying what needs to be automated and prioritized
  • Selecting a framework that supports all our needs
  • Ensuring that the framework supports cross-platform testing
  • Building the framework to support multiple data sources for verification
  • Not sticking only to UI automation, but getting better ROI with API automation as well
  • Finding open source tools to ensure no extra budget is required
  • Making the frameworks simple enough for QA Engineers with minimal coding experience to use easily

Once the above was outlined, we had to figure out the what and the how. Figuring out the what was pretty straightforward: we simply had to identify the tests that satisfied all of the following:

  • Frequently executed
  • Touching key user flows
  • High execution time
  • Prone to bugs
  • Independent of 3rd-party systems

The how was the trickier part, where we had to finalize the framework and architecture and run a POC to ensure it worked the way we wanted it to. For the framework, we did the usual rounds of R&D and finalized the following, based on our needs:

  • RestAssured with TestNG for API Automation
  • A hybrid of Appium, Selenium and Cucumber for UI Automation

We tried to keep our architecture simple and maintainable, which allowed us to tackle multiple problems in a single framework.

API Automation Architecture

At a high level, our architecture and design consisted of the following components:

  • RestAssured: A generic framework that made it easy to test RESTful APIs, irrespective of whether they were part of our microservices or our event-driven architecture.
  • TestNG: A framework that supported multiple test suites and test cases, and provided listeners that we used to perform custom actions before, during, and after tests, making it highly configurable.
  • Test Data: The test data for our automation tests was maintained in separate classes/files, which kept it organized and maintainable. The framework also supported storing test data in multiple formats so that, based on the test's requirements, it could be extracted, manipulated, and used. For example, for an API requiring a static request body, the framework would retrieve the data directly from a JSON file, whereas for a test requiring dynamic data obtained from other tests, POJO classes were utilized.
  • Data Verification: For verification, the framework used Hamcrest matchers for straightforward data, and POJO classes for more complicated response bodies.
  • Database Verification: The results of the API requests were verified using not only the API responses but the database information as well, ensuring the tests were accurate and reliable. The framework turned out to be flexible enough to support extracting data from SQL databases, NoSQL databases, and S3 buckets.
  • API Chaining: The framework supported testing multiple interdependent APIs together. This was useful when the behaviour of one API strictly depends on the behaviour of another. For example, when testing the API that retrieves a user's current orders, it was imperative to execute the order-creation API first and store its results to verify that the data was correctly retrieved. A sketch of what such a chain looks like follows this list.
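
To make this concrete, here's a minimal sketch of a chained RestAssured/TestNG test in this style. The base URI, endpoints, POJO fields, test user, table name, and database connection details are illustrative placeholders, not our actual services:

```java
import io.restassured.RestAssured;
import io.restassured.http.ContentType;
import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.*;

public class OrderApiChainingTest {

    // Hypothetical POJO used to deserialize the create-order response.
    public static class Order {
        public String id;
        public String status;
    }

    private Order createdOrder;

    @BeforeClass
    public void setUp() {
        RestAssured.baseURI = "https://api.example.com"; // illustrative
    }

    @Test
    public void createOrder() {
        createdOrder =
            given()
                .contentType(ContentType.JSON)
                // Static request bodies live in JSON files under test resources.
                .body(getClass().getResourceAsStream("/testdata/create-order.json"))
            .when()
                .post("/orders")
            .then()
                .statusCode(201)
                // Straightforward fields are asserted inline with Hamcrest matchers.
                .body("status", equalTo("CREATED"))
                .body("id", not(emptyOrNullString()))
            .extract()
                .as(Order.class); // complex bodies are mapped to POJOs for reuse
    }

    // API chaining: this test consumes the output of createOrder, and
    // dependsOnMethods tells TestNG to enforce the required order.
    @Test(dependsOnMethods = "createOrder")
    public void retrieveCurrentOrders() {
        given()
            .queryParam("userId", "user-123") // hypothetical test user
        .when()
            .get("/orders/current")
        .then()
            .statusCode(200)
            // The order created in the previous step must appear in the list.
            .body("orders.id", hasItem(createdOrder.id));
    }

    // Database verification: the API response alone is not trusted;
    // the persisted row is checked as well.
    @Test(dependsOnMethods = "createOrder")
    public void orderIsPersisted() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://test-db/orders", "user", "pass"); // illustrative
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders WHERE id = ?")) {
            ps.setString(1, createdOrder.id);
            try (ResultSet rs = ps.executeQuery()) {
                Assert.assertTrue(rs.next(), "order row not found in DB");
                Assert.assertEquals(rs.getString("status"), "CREATED");
            }
        }
    }
}
```

The same pattern extends to the NoSQL and S3 verification paths; only the lookup inside the verification step changes.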

UI Automation Architecture

The key components of the UI automation design were:

  • Selenium and Appium: A hybrid framework combining Appium and Selenium, so that web and mobile testing both live in the same framework and are interchangeable based on the script being executed.
  • Given, When, Then: Using Cucumber, the tests were written in a simple format that made them easily understandable not just for testers but also for business users. The feature files also serve as living documentation for the features.
  • Page Object Model: We built the framework such that there's a page class with locators for each view, or sometimes even for a part of a view, depending on the use case. This kept maintenance of the locators and their actions separate from the actual scripts, and made scripting easier overall. A sketch follows this list.
  • Modularization: Keeping separate feature files for different modules helped in executing tests in silos as and when required.
  • Dependency Management: This ensured that tests were executed in the correct order, and that dependent tests were not executed unless the previous test ran correctly, which helped prevent errors and ensure the integrity of the tests.
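
To show the page object/step definition split concretely, here's a minimal sketch. The page, locators, credentials, and feature text are illustrative, and the driver setup hooks (plain Selenium WebDriver for web, Appium's AndroidDriver or IOSDriver for mobile) are assumed to exist elsewhere in the framework:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

import static org.testng.Assert.assertTrue;

// Feature file (Gherkin), e.g. login.feature:
//   Scenario: Successful login
//     Given the user is on the login page
//     When the user logs in with valid credentials
//     Then the home screen greets the user

// Page object: the locators and actions for one view live together,
// separate from the scripts that use them.
class LoginPage {
    private final WebDriver driver;
    private final By username = By.id("username");       // illustrative locators
    private final By password = By.id("password");
    private final By loginButton = By.id("login");
    private final By greeting = By.cssSelector(".greeting");

    LoginPage(WebDriver driver) { this.driver = driver; }

    void logIn(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
    }

    boolean isGreetingShown() {
        return driver.findElement(greeting).isDisplayed();
    }
}

// Step definitions bind the Given/When/Then lines of the feature file
// to page-object calls.
public class LoginSteps {
    private WebDriver driver;    // assumed to be created per platform by shared hooks
    private LoginPage loginPage;

    @Given("the user is on the login page")
    public void userIsOnLoginPage() {
        loginPage = new LoginPage(driver);
    }

    @When("the user logs in with valid credentials")
    public void userLogsIn() {
        loginPage.logIn("test-user", "secret"); // hypothetical credentials
    }

    @Then("the home screen greets the user")
    public void homeScreenGreetsUser() {
        assertTrue(loginPage.isGreetingShown(), "greeting not visible after login");
    }
}
```

Since Appium's drivers implement the same WebDriver interface, page objects like this can back both web and mobile scripts without change.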

Reporting

Seeing is believing, so after everything was built and running, we had to figure out how to visualize our test runs. A host of options was available, from the basic TestNG-generated reports to the Cucumber default ones, but we concluded that adding Allure to our framework gave us a more comprehensive view of the test cases run and their past behaviour.

It ensured that we had step-by-step records of the test runs, along with screenshots and response data for failures, which helped in debugging. The Allure report is generated and hosted at an S3 URL so that it's available for viewing by the entire team.
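
As an illustration, here's a minimal sketch of how a failure screenshot can be pushed into an Allure report from a TestNG listener (assuming TestNG 7+, where ITestListener methods have default implementations; DriverHolder is a hypothetical stand-in for however the framework exposes the active WebDriver):

```java
import io.qameta.allure.Allure;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

import java.io.ByteArrayInputStream;

// Registered in the TestNG suite XML; fires after every failed test.
public class AllureScreenshotListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        WebDriver driver = DriverHolder.get(); // hypothetical driver accessor
        if (driver instanceof TakesScreenshot) {
            byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
            // Attach the screenshot to this test's entry in the Allure report.
            Allure.addAttachment(result.getName() + " (failure)", "image/png",
                    new ByteArrayInputStream(png), ".png");
        }
    }
}
```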

Here's what a single feature-run report looks like right now:

[Screenshot: Allure report for a single feature run]

Conclusion

On a final note: even though getting started with automation in a purely manual testing team looks quite daunting, it's absolutely essential for advancing the quality process and delivering more robust features in a shorter time.

Here are the highlights of our solution:

  • It is scalable and flexible, and it can be used to test a wide variety of applications.
  • It is easy to maintain and update.
  • It ensures that the tests are accurate and reliable.
  • It helps to improve the quality of our software applications.
  • It effectively reduces our regression time by 50%.

Even though we are thrilled about what we have achieved so far, we still have a long road ahead. We are constantly looking for ways to improve our automation framework, and we are excited to see what we can achieve in the future.
