AI is (probably) the only way to really scale up a testing strategy

Benoit Lamouche
4 min read · Sep 15, 2022


As part of my current projects, we have been working on testing strategies for years. We have spent extensive time and energy finding the right processes, tools, and methodologies to deliver a great experience that meets the highest quality standards. Our QA stack is now mature and fully integrated into our development and delivery workflows (huge thanks to the team for the great work).

What’s next?

Working in such an environment is a real challenge. We are talking about hundreds of websites and thousands of pages and features to test, several times a day.

How do we maintain the stack? How do we keep the testing scenarios up-to-date? How do we manage exceptions? How do we build on top of everything we have learned? How do we take into account the huge amount of data available on our servers?

I’m not talking about the technical stack here. In my view, the main parameter of a good QA process is the test case pool. Do we have enough test cases? Are they up-to-date? Are they relevant? Do they match real user needs?

Both the manual and automation teams work from the test case pool, so if the pool is good, there is a good chance of high quality; if the pool is bad, there is a serious risk of low quality.

The process we currently follow is a fairly classic testing methodology: test cases are derived from user stories, then executed through a combination of manual and automated testing.

The two main limitations of this setup are:

  • Human capacity: this process requires a lot of people, and if we need to double the number of test cases, we have to more or less double the size of the team.
  • Limited use of available data: most test cases are based on user stories, and we don’t take into account other data coming from analytics. It’s a very “static” way of managing test cases.

How do we resolve these limitations?

There are not many options. Since most of the pool maintenance is done by team members, the only way to remove these limitations is to increase the size of the team. By adding more capacity, we can manage more test cases and perhaps take into account some “external” data coming from other sources (analytics, for example).

But some limitations will remain:

  • At some point, growing the team significantly will bring other limitations (organization, communication, synchronization, team spirit…).
  • Some data (logs, for example) will remain extremely difficult and time-consuming to analyze manually.

An AI-oriented approach could resolve these limitations while keeping the benefits of the manual + automation combo.

How can AI help scale up the testing strategy?

If we keep the test case pool as the top priority for delivering high quality, we can easily see AI as the QA team’s best friend for data analysis and test case management.

The main difference lies in the integration of an AI module (machine learning + decision making) between test case management and the data sources.

The module can significantly increase the amount of data we use to generate test cases (a minimal sketch of such a module follows the list below):

  • Manually generated data: user stories, specific scenarios… This is the same data we have in the “static” setup.
  • Collected data: all the strategic and relevant data we can collect from our tools and logs (analytics, user logs, error logs, marketing logs…). This is the qualitative, dynamic data we have to use if we want to make our testing strategy more robust.
  • Dynamically generated data: all the data and outputs generated by the testing system itself. This can make the system even more robust by letting it learn from its own successes and failures.
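
To make this more concrete, here is a minimal Python sketch of what such a module could look like. Everything in it (the signal format, the scoring weights, the threshold) is a hypothetical illustration, not our actual stack; in a real setup, the score function would be a trained model rather than a lookup table.

```python
from dataclasses import dataclass


@dataclass
class TestCaseCandidate:
    title: str
    origin: str          # "manual", "collected", or "generated"
    score: float = 0.0   # relevance predicted by the model


def collect_signals():
    """Gather raw signals from the three kinds of data described above."""
    return [
        {"origin": "manual", "text": "User can reset password from login page"},
        {"origin": "collected", "text": "error log: 500 on /checkout for 2% of sessions"},
        {"origin": "generated", "text": "previous run: flaky assertion on search filters"},
    ]


def score(signal) -> float:
    """Machine-learning part (stubbed here with fixed weights): rank each
    signal by how much test coverage it seems to need."""
    weights = {"manual": 0.5, "collected": 0.9, "generated": 0.7}
    return weights[signal["origin"]]


def propose_test_cases(threshold: float = 0.6):
    """Decision-making part: keep only signals worth turning into test cases."""
    candidates = [
        TestCaseCandidate(title=s["text"], origin=s["origin"], score=score(s))
        for s in collect_signals()
    ]
    return [c for c in candidates if c.score >= threshold]


if __name__ == "__main__":
    for case in propose_test_cases():
        print(f"[{case.origin}] {case.score:.1f} {case.title}")
```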

The possibilities are endless, and QA will get a true 360° view of what needs to be tested.

Do we still need manual testers? YES. The purpose of AI is to generate test cases, not to replace the manual testing loop. That loop remains the same, focused on high-priority and sensitive features and pages. Manual testers may have to extend their scope and act as validators of the AI outputs.
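
Continuing the hypothetical sketch above (it reuses propose_test_cases from there), the validator role could be as simple as a gate between the AI proposals and the shared pool; the approval rule shown is made up for illustration.

```python
# AI proposals only enter the shared test case pool once a human
# validator has approved them.
def review_queue(candidates, approve):
    """Split candidates by the human decision; approve is a callable."""
    pool, rejected = [], []
    for candidate in candidates:
        (pool if approve(candidate) else rejected).append(candidate)
    return pool, rejected


# Example: a tester approves only high-scoring proposals, or anything
# touching the checkout flow (both rules are invented for this sketch).
pool, rejected = review_queue(
    propose_test_cases(),
    approve=lambda c: c.score >= 0.8 or "checkout" in c.title,
)
```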

Do we still need automation testers? YES. Again, the purpose is to generate test cases, so there is a good chance we will keep working with the exact same automation stack. The automation team may have to work on improving the scalability and capacity of that stack, in order to handle the load variations that AI-generated cases may introduce.
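
One simple way to absorb those load variations, sketched below under assumed capacity and drop policies, is a bounded queue in front of the existing automation runner, so a sudden burst of generated test cases cannot overwhelm the test infrastructure.

```python
import queue

# Capacity is an assumption for the sketch, not a recommendation.
run_queue: queue.Queue = queue.Queue(maxsize=100)


def enqueue_for_automation(case) -> bool:
    """Accept a case if capacity allows; otherwise defer it to the next run."""
    try:
        run_queue.put_nowait(case)
        return True
    except queue.Full:
        return False  # deferred instead of overloading the stack
```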

This setup makes the most of the smart human brains we have, while removing some of the boring, time-consuming tasks we perform daily. It is also a very good way to make the most of all the available data representing the real usage of the application, because what matters most is the customer’s real usage and experience.

Written by Benoit Lamouche

Digital Factory Director & Tech culture addict https://lamouche.fr/ - Creator of The Hidden Leader https://thehiddenleader.substack.com/
