Manual and Automated Testing for the Mental Health Support Application
Overview
The client* is an online platform dedicated to providing accessible mental health support. It connects users with licensed therapists through text-based chat sessions, offering an easy and approachable way for people to access help and guidance for their emotional well-being. The platform’s primary goal is to make mental health services more user-friendly and available to a broader audience.
* We recognize the importance of protecting our clients’ privacy and follow policies that maintain their confidentiality and security. That is why the company name is not disclosed.
Challenge
The client’s request was to find a manual and automation QA expert with relevant domain expertise to join their dev team. Since the company didn’t have an in-house QA specialist, it struggled to set up a testing process and handle the constant flow of QA tasks.
The task for the QA Madness expert was to test the chat app manually, covering functional, UI, compatibility, and other types of testing. The next step was to cover repetitive tasks with automated tests.
Solution
Since we received the tasks after the new functionality was developed, working by estimate suited this scenario best. Before preparing the checklist, the team studied the product and conducted brief research into how the questionnaires and psychological tests featured in the application work in general, both online and offline.
Using checklists was more effective at the initial stage than test cases, as the functionality was still under development. Checklists allow more flexibility in updating documentation and form a solid basis for future knowledge maintenance.
Manual Testing
Functional and UI testing form the default minimum QA package for every new application. When working with healthcare software, it is also crucial to check for compatibility and accessibility.
QA engineers checked various flows and features to see what works as planned and what requires fixes. The team also checked the input fields, forms, buttons, links, etc., for functionality and performance.
For compatibility testing, engineers used real Android and iOS devices to get accurate results on the app’s behavior across different screens and hardware. Since the app belongs to the healthcare niche, we also performed accessibility testing, checking the app’s compliance with the Web Content Accessibility Guidelines (WCAG) and ADA Accessibility Standards.
One more important aspect was change-related testing. Ensuring stable functionality and performance is critical for new products, whose functionality is developed from scratch and evolves over time. After each iteration, the QA team ran the following checks:
- Smoke testing – inspection of the most crucial functionality of the application, such as registration, logging in, and the main functions.
- Regression testing – verification of the stable functionality after code iteration; it is necessary to confirm that unchanged functions haven’t been affected accidentally.
- Retesting – examination of the functionality after bug fixes to confirm that the defects are gone and everything works as expected.
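To illustrate the first of these checks, here is a minimal sketch of what a smoke suite over the most crucial flows (registration, login, starting a chat) might look like. The `ChatAppClient` class and all method names are hypothetical stand-ins, not the client’s actual API.

```python
# Minimal smoke-test sketch. ChatAppClient is an illustrative in-memory
# stand-in for the application under test, not the real product API.

class ChatAppClient:
    """Hypothetical client exposing the app's most crucial functions."""

    def __init__(self):
        self._users = {}      # email -> password
        self._sessions = set()  # emails with an active session

    def register(self, email, password):
        if email in self._users:
            return False  # duplicate registration is rejected
        self._users[email] = password
        return True

    def login(self, email, password):
        if self._users.get(email) == password:
            self._sessions.add(email)
            return True
        return False

    def start_chat(self, email):
        # The main function: only logged-in users can open a chat session.
        return email in self._sessions


def run_smoke_suite(client):
    """Exercise only the critical path; anything failing here blocks the build."""
    return {
        "registration": client.register("user@example.com", "s3cret"),
        "login": client.login("user@example.com", "s3cret"),
        "chat": client.start_chat("user@example.com"),
    }


if __name__ == "__main__":
    print(run_smoke_suite(ChatAppClient()))
```

In a real project these checks would live in a test framework such as pytest; the point of a smoke suite is that it stays small and fast enough to run on every build.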
Automated Testing
Two years into our cooperation, the client requested that we set up automated testing. Given the workload, we agreed to assign an expert in both manual and automated testing to work on the project full-time.
Manual testing covered the app’s entire functionality, while regression testing was automated. To set up the process, the QA engineer:
- Wrote test cases to cover as much of the app’s functionality as possible. These tests were executed after every build because each iteration came with many code changes.
- Selected test cases for regression to use as the basis for automated scripts. It also involved further review and maintenance of the test suite.
During active development, the team released new builds once every two to four weeks. After every release, the regression suite was run manually first. As the application became more stable, the majority of the regression tests were automated, with just a few left for manual inspection.
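The selection step described above — tagging cases so that the regression subset can be run on its own while a few remain manual-only — can be sketched as follows. This is a hand-rolled illustration with invented test names; real suites would typically use framework features such as pytest markers for the same purpose.

```python
# Sketch of tagging test cases and running only a selected subset
# (e.g. the automated regression set). All case names are illustrative.

REGISTRY = []  # list of (function, tags) pairs

def test_case(*tags):
    """Decorator that registers a test function under the given tags."""
    def wrap(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return wrap

@test_case("smoke", "regression")
def check_login_form():
    assert "login" in "login form"

@test_case("regression")
def check_profile_update():
    assert 2 + 2 == 4

@test_case("manual-only")
def check_therapist_matching():
    pass  # deliberately left for manual inspection

def run(tag):
    """Run every registered case carrying `tag`; return (passed, failed)."""
    passed = failed = 0
    for fn, tags in REGISTRY:
        if tag not in tags:
            continue
        try:
            fn()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed

if __name__ == "__main__":
    print(run("regression"))  # executes only the regression-tagged cases
```

Keeping the tags in one place makes it cheap to re-review the regression set after each release, which matches the suite-maintenance task mentioned above.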
Results
- We established long-term cooperation with the client’s team and assisted in adjusting the testing process as the product evolved.
- The QA engineers ran over 500 test cases on six mobile devices to provide sufficient coverage.
- Approximately a hundred critical defects were detected and reported. Fixing them made the app more stable, enhancing user experience and business performance.
- Automating the most time-consuming tasks increased testing accuracy and shortened the turnaround time, accelerating the release lifecycle and reducing the cost of testing.
Let’s Start a New Project Together
QA Madness helps tech companies strengthen their in-house teams by staffing dedicated manual and automated testing experts.
Anastasiia Letychivska
Head of Growth