
Should You Hire a Professional QA Provider to Test Your Taxi App?


One look at the headline, and you might think we’re just trying to sell our mobile testing services. Our team is always glad to take on new projects, but that’s not the point. While more and more people have come to recognize the value of professional QA, some still remain skeptical about it or don’t fully understand what they get.

Not so long ago, it was quite common for everyone on a product team to act as a tester of their own software. In practice, being an active user of a particular kind of mobile app or knowing the details of your product, unfortunately, doesn’t equal the expertise you get with a QA provider.

How so? You can read many books and watch many films, but that doesn’t make you a professional critic. You need to build a theoretical background, too. You can cook meals daily, but that still doesn’t make you a great chef. The list can go on, but you get the idea.

Proper theoretical knowledge, a skillset, and experience are what make a person good at their job. For example, if you are a Product Owner or a developer willing to test the app before the release, where would you start? One way is to check several frequently used flows. Another is to check the functionality that was the most difficult to code.

Eventually, you’ll likely end up googling “how to test an app like Uber” to see what others suggest checking. But that still doesn’t compensate for the lack of expertise. Here’s why.

The Value of Professional QA

Before writing this article, we reviewed how companies and professionals cover this subject and what advice they share. To keep it short, you’ll mostly see step-by-step overviews of the development process with lists of the functionality every such app needs. One can surely use such a list as a reference for testing. However, even if it is well explained, it won’t be the same as a checklist (let alone test cases, which carry even more detail).

A checklist requires a certain level of detail. For example, the very first step – registration – falls into several scenarios: the UI screen (visual check), registering with valid and invalid credentials, empty fields, an email that is already in the system, unticked required boxes, etc.
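
To make this more tangible, here is a minimal sketch of how that registration checklist could be turned into automated checks. The `/api/register` endpoint, the base URL, and the field names are placeholders invented for the example, not the API of any real taxi app.

```python
# Registration checklist expressed as parametrized checks.
# Endpoint, URL, and field names are hypothetical placeholders.
import pytest
import requests

BASE_URL = "https://taxi-app.example.com"  # placeholder test environment

REGISTRATION_CASES = [
    # (payload, expected HTTP status, checklist item)
    ({"email": "new.user@example.com", "password": "Val1dPass!", "terms_accepted": True}, 201, "valid credentials"),
    ({"email": "not-an-email", "password": "Val1dPass!", "terms_accepted": True}, 400, "invalid email format"),
    ({"email": "", "password": "", "terms_accepted": True}, 400, "empty required fields"),
    ({"email": "existing.user@example.com", "password": "Val1dPass!", "terms_accepted": True}, 409, "email already in the system"),
    ({"email": "new.user@example.com", "password": "Val1dPass!", "terms_accepted": False}, 400, "required box unticked"),
]

@pytest.mark.parametrize("payload, expected_status, checklist_item", REGISTRATION_CASES)
def test_registration_checklist(payload, expected_status, checklist_item):
    response = requests.post(f"{BASE_URL}/api/register", json=payload, timeout=10)
    assert response.status_code == expected_status, checklist_item
```

Note that the UI (visual) check from the same checklist stays manual or goes to a dedicated visual testing tool; not every checklist item has to end up in code.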

A QA engineer keeps these things at the top of their mind. Other experts, meanwhile, approach an application based on their own experience and often forget to include common scenarios simply because that’s not what they deal with constantly (which is perfectly fine!).

You can often find a list of test types one needs to run when working with a taxi app. This is handy information, but primarily for beginner QA specialists or product team members who want to get familiar with the procedure.

You get to know that performance testing is essential. But how much performance testing is enough? Do you only need to check an average load, or should you also run spike, volume, and scalability tests? Do you have enough devices to run compatibility tests? And if not, will simulators and emulators be equally effective? Also, what about security testing, and who can cover that, apart from white-hat hackers?

Finally, the process itself is usually described a bit too simply. You get to learn about the basic (and usually obvious) steps:

  1. Get familiar with the product through documentation and/or interaction.
  2. Prepare test documentation – test plan, test cases or checklists, etc.
  3. Run testing based on your documentation.
  4. Report the bugs you found in the process.
  5. Wait for the fixes and retest to verify those fixes.

This is how it goes if you need to outline the procedure schematically. In practice, it is an ongoing process, with each type of testing taking its particular place in a defined sequence of actions, and QA activities often intertwined with development, business analysis, etc. That’s what we decided to focus on below.

How to Test a Taxi App: The Step-by-Step Process

We asked our QA Director (who, among other projects, has been working with taxi and delivery apps for around ten years) to break down the standard step-by-step testing process for taxi booking apps. As you can guess, we got a more complex sequence of actions than the one in the previous part. This particular example is based on one of our long-term projects, so this is how it works for teams that plan to keep developing their product, update it regularly, and keep up with market and tech trends.

#0. Before There’s a Build

When we mention software testing, many people think of a finished product that is about to be released and thus needs one final inspection. However, the process starts much, much earlier. While development is in progress (and often even before it starts), the QA team is busy testing the documentation and preparing test scenarios based on it.

The process of testing something already finished does begin with receiving a close-to-final build. For agile development teams, it starts over with every new sprint, an updated build, and an accordingly updated list of tasks. In traditional development methodologies, testing is a phase between development and release, but QA engineers also play an active role in software maintenance.

#1. Manual Acceptance Testing

Acceptance testing is a test level that focuses on determining whether to accept the system. – ISTQB Glossary

Let’s focus on how testing goes for an evolving product that has been around for a while. After developers add a new feature to the app, they share an updated build that works in a testing environment. Some consider this the starting point for the QA team, but as you already know, it is a starting point only in terms of this particular sprint, and quality assurance begins much earlier.

QA engineers run manual acceptance testing to verify the newest feature against the requirements and make sure the team has built what it planned. It is essential to understand that such new features are tested on both the user side and the admin side.

Every time we offer something new to a user, we also add a way to set it up and manage it on the company’s side. That’s the thing many forget to consider when covering testing theory: the admin part matters just as much.
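
To illustrate the point, here is a rough sketch of an acceptance check that touches both sides. The “scheduled rides” feature, the endpoints, and the tokens are invented for the example; a real check would follow the actual requirements of the sprint.

```python
# Acceptance check covering the admin side (configuration) and the user side (effect).
# The feature, endpoints, and tokens below are hypothetical.
import requests

BASE_URL = "https://taxi-app.example.com"
ADMIN_TOKEN = "admin-token-placeholder"
RIDER_TOKEN = "rider-token-placeholder"

def test_scheduled_rides_acceptance():
    # Admin side: enable and configure the new feature.
    admin_resp = requests.put(
        f"{BASE_URL}/api/admin/features/scheduled-rides",
        json={"enabled": True, "max_days_in_advance": 7},
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        timeout=10,
    )
    assert admin_resp.status_code == 200

    # User side: the rider app should now expose the option.
    rider_resp = requests.get(
        f"{BASE_URL}/api/rides/options",
        headers={"Authorization": f"Bearer {RIDER_TOKEN}"},
        timeout=10,
    )
    assert rider_resp.status_code == 200
    assert rider_resp.json().get("scheduled_rides_available") is True
```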

#2. Automated End-to-End Regression Testing

Regression testing is a type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software. – ISTQB Glossary

End-to-end testing is a type of testing in which business processes are tested from start to finish under production-like circumstances. – ISTQB Glossary

The project we’re referring to started long ago, and the application has stable core functionality. Thus, some of the tests were automated to reduce manual effort and streamline releases.

After manual acceptance testing in the test environment, we run automated end-to-end regression tests. They cover the business-critical user scenarios, imitating core paths from the first tap to the completion of the target action. This ensures that a newly added feature doesn’t conflict with the existing functionality.
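
One common way to keep such a suite organized (and the approach we sketch here, with made-up endpoints and a made-up marker name) is to tag business-critical scenarios and run only that subset after each acceptance round, for example with `pytest -m e2e_regression` in CI.

```python
# One business-critical path tagged as part of the end-to-end regression suite.
# Register the `e2e_regression` marker in pytest.ini; endpoints are hypothetical.
import pytest
import requests

BASE_URL = "https://taxi-app.example.com"

@pytest.mark.e2e_regression
def test_ride_booking_core_path():
    # First action: request a ride.
    ride = requests.post(
        f"{BASE_URL}/api/rides",
        json={"pickup": "Main St 1", "destination": "Airport"},
        timeout=10,
    )
    assert ride.status_code == 201
    ride_id = ride.json()["id"]

    # Target action: a driver search starts (or a driver is already assigned).
    status = requests.get(f"{BASE_URL}/api/rides/{ride_id}", timeout=10)
    assert status.status_code == 200
    assert status.json()["status"] in {"searching_driver", "driver_assigned"}
```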

#3. Automated API Regression Testing

API testing is testing performed by submitting requests to the test object using its application programming interface. – ISTQB Glossary

In parallel with end-to-end tests, we check the functional aspect of integrations between the internal and external systems that work via APIs (application programming interfaces).

Automated API regression tests are executed every time a developer commits to the test environment. In other words, automated API regression testing is meant to check all API methods following any change in the test environment. Even if there’s a single bug fix, the automated suite will run, so the number of executions equals the number of developers’ commits to the test environment.

In general, QA engineers can test different aspects of an API – functionality, performance, usability, security, etc. For a constantly evolving application, API testing is one of the first candidates for automation, since it lets you work through large datasets quickly and with high accuracy.
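
As a small illustration, an automated API regression check typically verifies that an endpoint still honours its contract: status code, required fields, basic invariants. The fare-estimate endpoint and its fields below are assumptions made for the example.

```python
# Contract-style regression check for one (hypothetical) API method.
# In practice, similar checks exist for every method and run on each commit.
import requests

BASE_URL = "https://taxi-app.example.com"

def test_fare_estimate_contract():
    response = requests.get(
        f"{BASE_URL}/api/fare-estimate",
        params={"pickup": "Main St 1", "destination": "Airport"},
        timeout=10,
    )
    assert response.status_code == 200

    body = response.json()
    # Existing clients rely on these fields staying in place.
    for field in ("currency", "min_fare", "max_fare", "eta_minutes"):
        assert field in body, f"missing field: {field}"
    assert body["min_fare"] <= body["max_fare"]
```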

#4. Bug Fixing and Polishing

Bug fixing goes hand in hand with retesting and verifying the functionality. Meanwhile, “polishing” may require an explanation. When we say something is polished, we mean it is smooth, has no sharp edges, feels nice, and so on. The same goes for software.

Polishing can refer to alpha testing, when the team members interact with the new build as if it were the finished product. Altogether, it helps enhance the user experience by gathering feedback on visual and usability aspects. Some of these comments may be intuitive, but mostly they are based on the QA specialists’ previous experience with similar software products.

#5. Backend Release

Up to this point, we have been talking about pre-production testing. It can involve several rounds of tests in several different test environments, followed by fixes and polishing. After that, we proceed to the release, which starts with the backend release to production under feature toggles.

Feature flags (also commonly known as feature toggles) is a software engineering technique that turns select functionality on and off during runtime, without deploying new code. – Atlassian
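
In code, the idea is as simple as it sounds. Below is a minimal sketch with a plain dictionary standing in for a real flag store (in an actual project, this would be a config service or a feature-flag platform); the “scheduled_rides” flag is hypothetical.

```python
# Minimal feature-toggle sketch: the code is deployed to production,
# but the behaviour stays off until the flag is switched on.
FEATURE_FLAGS = {
    "scheduled_rides": False,  # released to production, not yet enabled
}

def is_enabled(flag_name: str) -> bool:
    return FEATURE_FLAGS.get(flag_name, False)

def available_ride_options() -> list[str]:
    options = ["ride_now"]
    if is_enabled("scheduled_rides"):
        options.append("scheduled_ride")
    return options

print(available_ride_options())  # ['ride_now'] until the flag is flipped
```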

#6. End-to-End Smoke Test

A smoke test is a test suite that covers the main functionality of a component or system to determine whether it works properly before planned testing begins. – ISTQB Glossary

After the new feature becomes available to the test users on production, QA engineers run an end-to-end smoke test, also on production. The definition above is self-explanatory, so here’s an example of how it works in a taxi app. A QA engineer:

  1. Installs a new build on a mobile device.
  2. Registers.
  3. Logs in.
  4. Enters the pick-up and destination locations.
  5. Checks an estimated time of arrival for the ride and information about a driver.
  6. Tracks the updates on the driver’s location and estimated time of arrival.
  7. Checks the payment after the ride is completed.

This end-to-end smoke test ensures that the core functionality of the taxi app – account creation, ride requests, driver and ride information, payment processing, and receipts – is working correctly.
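
For illustration, here is roughly what that flow looks like when scripted at the API level (in practice, a QA engineer walks the same path through the mobile app). The endpoints and the test account are, again, placeholders.

```python
# The smoke flow above, sketched as a sequential script (steps 2-7);
# step 1, installing the build on a device, stays manual.
import requests

BASE_URL = "https://taxi-app.example.com"
TEST_USER = {"email": "smoke.user@example.com", "password": "Val1dPass!"}

def run_smoke_flow():
    session = requests.Session()

    # Steps 2-3: register (or reuse the test account) and log in.
    session.post(f"{BASE_URL}/api/register", json=TEST_USER, timeout=10)
    login = session.post(f"{BASE_URL}/api/login", json=TEST_USER, timeout=10)
    assert login.status_code == 200

    # Step 4: enter the pick-up and destination locations.
    ride = session.post(
        f"{BASE_URL}/api/rides",
        json={"pickup": "Main St 1", "destination": "Airport"},
        timeout=10,
    )
    assert ride.status_code == 201
    ride_id = ride.json()["id"]

    # Steps 5-6: check ETA and driver info, then track the updates.
    status = session.get(f"{BASE_URL}/api/rides/{ride_id}", timeout=10).json()
    assert "eta_minutes" in status and "driver" in status

    # Step 7: check the payment after the ride is completed.
    payment = session.get(f"{BASE_URL}/api/rides/{ride_id}/payment", timeout=10)
    assert payment.status_code == 200

if __name__ == "__main__":
    run_smoke_flow()
    print("Smoke flow passed")
```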

#7. Enabling a Feature for Selected Users

There are several strategies for rolling out a software build. One way to reduce the risk of something going wrong is to enable a feature for a specific group of users who can access it in production. This group can include company employees, users in a certain location, the most active users, or a random sample.

As you can guess, QA engineers are among the users who can access the new feature and check it in production. This allows testing the update without major deployments, and developers can quickly merge their code if everything is fine. If something’s wrong, it is also easy to disable the feature without an urgent rollback. So we monitor the situation to decide whether to proceed or take a step back.
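
A simplified sketch of how such targeting can look is below; the pilot audiences (employees, a pilot city, QA accounts) are made up for the example, and real projects usually delegate this to a feature-flag platform.

```python
# Feature enabled in production only for a selected audience.
PILOT_AUDIENCE = {
    "employee_ids": {"emp-001", "emp-002"},
    "qa_account_ids": {"qa-007"},
    "cities": {"Lisbon"},
}

def in_pilot_group(user_id: str, city: str) -> bool:
    return (
        user_id in PILOT_AUDIENCE["employee_ids"]
        or user_id in PILOT_AUDIENCE["qa_account_ids"]
        or city in PILOT_AUDIENCE["cities"]
    )

print(in_pilot_group("qa-007", "Berlin"))    # True: QA accounts always see the feature
print(in_pilot_group("rider-42", "Berlin"))  # False: not in the pilot group
```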

#8. Gradual Release on Production

Then comes the gradual release to all users on production. Again, it is one of many strategies, but it also helps reduce the risks that come with a rollout. Unlike enabling a feature for a specific group of users, here we’re talking about slowly increasing the percentage of users who can access the new feature.

The development team can decide on different increments, activating the feature for 25% of users at a time or going with only 10% or 5%. As always, a rollback is easier if fewer users have experienced something going wrong. The pauses between the release steps can also vary, from a couple of hours to several days.
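
One common way to implement this (one of several, sketched here only as an assumption about how it might be done) is deterministic bucketing: a user’s ID is hashed into a bucket from 0 to 99, and the feature is shown to buckets below the current rollout percentage, so raising the percentage only ever adds users.

```python
# Deterministic percentage rollout via hashing: the same user always lands
# in the same bucket, so growing 5% -> 10% -> 25% never "un-enrolls" anyone.
import hashlib

def rollout_bucket(user_id: str, feature: str) -> int:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def sees_feature(user_id: str, feature: str, rollout_percent: int) -> bool:
    return rollout_bucket(user_id, feature) < rollout_percent

print(sees_feature("rider-42", "scheduled_rides", 5))   # may be False at 5%...
print(sees_feature("rider-42", "scheduled_rides", 25))  # ...and become True at 25%
```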

#9. Defining Metrics for Monitoring

Obviously, the work doesn’t stop after a new app version is live and available for all users. It is time to decide what metrics to track. We won’t explain why tracking analytics and measuring the right things is essential – that’s the topic for an entirely different story.

By the way, if you’re interested in QA metrics, you can check out one of our older articles: Metrics in Software Testing: How Are They Helpful?

As for post-release support, below are several metrics teams often find useful to monitor:

  • Impact (of a code modification on the software product).
  • Application crash rate (the number of times an application fails per total number of usages).
  • Mean time between failures (the average or predicted time between inherent failures during normal operation).
  • Mean time to recover (the average or predicted time it takes for a product to recover from a crash).
  • Escaped defects (the number of defects reported by users after release to production).
  • Defect removal efficiency (the share of defects found before delivery out of all defects, including those reported by users after release).

This is not a complete and definitive list. Most likely, you’ll find some other parameters that can help you grasp the full picture of the app’s current state, business performance, and ways to improve it in the future.
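
To show how two of these are calculated, here is a tiny worked example. The formulas are the standard ones, but the figures are entirely made up for illustration.

```python
# Worked example with invented numbers.
sessions = 120_000   # total app sessions in the period
crashes = 180        # sessions that ended in a crash
crash_rate = crashes / sessions
print(f"Application crash rate: {crash_rate:.2%}")  # 0.15%

found_before_release = 94   # defects caught by the team prior to delivery
escaped_defects = 6         # defects reported by users after release
dre = found_before_release / (found_before_release + escaped_defects)
print(f"Defect removal efficiency: {dre:.0%}")      # 94%
```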

Besides defining the metrics, it is essential to set up monitoring tools to track anomalies in real time. Based on these reports, a team can decide whether the product is ready for the final release, when it will become available to all users. We also keep monitoring the chosen parameters as the release proceeds.

#10. Enabling a Feature for All Users

Eventually, the feature becomes available to all users. At this stage, the task of the entire product team is to collect feedback. On the one hand, the users can share some subjective comments. This is natural and, sometimes, quite helpful for improving usability. On the other hand, some minor bugs can slip into production, which is also natural and happens occasionally.

#11. Monitoring Metrics on Production

This last one is pretty self-explanatory: the team has moved on to a new sprint while keeping an eye on what is going on in production. This entails working with tickets (if any) and monitoring the metrics mentioned above.

To Sum Up

Company experts can share both superficial and in-depth knowledge in their posts, but reading them doesn’t give you the same level of expertise these specialists would bring if they worked with your product directly. So if you’ve considered hiring a professional QA provider at least once, even for a brief moment, don’t let the idea slip away. There’s something to it 😉

Nevertheless, in-house testing by non-QA experts is still better than no testing at all. So we’ll focus on the main testing types and test scenarios for taxi apps in the next article.
