This post is co-authored with Olha Hladka, a QA Engineer at QA Madness.
Interviewing for a QA position is always a two-step process. First, an HR specialist checks whether your profile matches the described quality assurance job requirements and discusses work-related details. After that, a technical specialist conducts a technical interview, where a candidate has to prove in practice that they fit the QA position requirements.
While QA roles, skills, and responsibilities are described in a Software Testing Engineer’s job requirements, you won’t find a list of questions that would help you prepare for a tech interview. So how should you prepare for it? Is there anything specific to revise?
As a rule, Junior specialists can expect general questions, like “What’s the difference between QA and QC?” Senior QA Engineers and QA Leads should expect similar questions, but with some twists and turns. Most likely, an interviewer will ask for specific examples based on previous experience. The core questions, however, remain quite standard and relate to the basics of testing.
We decided to revisit some theory and help out those who are preparing for a job interview. So, beginners, get ready to learn what to expect during the interview. More seasoned professionals can use this article to refresh their memory. And if you are about to interview a candidate for a QA position, feel free to use some of the questions below.
Quality assurance, or QA, is a set of activities covering all technological stages of software development, release, and operation. QA is an ongoing process that lasts through all the phases of the software development life cycle (SDLC). The purpose of QA is to ensure the required level of quality for the product.
Software testing is a part of the quality assurance process. It encompasses all the testing activities that take place during the software development life cycle. These activities can be related to planning, preparation, or evaluation of a software product, reporting, etc. Software testing aims to find defects, to verify that the product meets the stated requirements, and to provide objective information about product quality.
Traditionally, QA specialists tend to distinguish between four levels of software testing:
Unit testing focuses on the smallest components of software code. It looks into the smallest functioning parts of an application that can work and, therefore, be tested for defects separately. These tiniest software parts are program modules, objects, classes, functions, etc.
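As a minimal sketch, a unit test isolates a single function and checks it against expected results. Here, `apply_discount` is a hypothetical unit under test, not something from the article:

```python
import unittest

# A hypothetical unit under test: the smallest separately testable piece of code.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests programmatically instead of via the command-line runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit has no dependencies on other modules, it can be tested in complete isolation, which is the defining property of this level.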
Integration testing checks the interaction between several components of the system after they have been checked individually.
System testing is an inspection of the whole software system based on functional and nonfunctional requirements. It takes place after the different subsystems are connected into one. At this stage, we can find defects that surface only when the subsystems work together, such as incorrect use of system resources, incompatibility with the target environment, or unexpected behavior in end-to-end scenarios.
As for acceptance testing, we prefer to distinguish it as a separate type of testing, not one of the levels. In practice, a team runs acceptance tests to check if a product meets the requirements. It is not a one-time check but an ongoing process. In other words, acceptance tests are present at each of the levels mentioned above, in one form or another.
We can distinguish two categories of tests – functional and nonfunctional.
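To make the distinction concrete, here is a small sketch with a hypothetical `search` function: the first assertion is a functional check (does the feature do what it should?), while the timing check is a crude nonfunctional one (does it do it fast enough?):

```python
import time

# Hypothetical function under test.
def search(items, term):
    return [item for item in items if term in item]

catalog = ["red shirt", "blue shirt", "red hat"]

# Functional test: verifies WHAT the feature does.
assert search(catalog, "red") == ["red shirt", "red hat"]

# Nonfunctional test: verifies HOW WELL it does it (here, a rough latency bound;
# the 1-second threshold is an illustrative assumption).
start = time.perf_counter()
search(catalog * 10_000, "red")
elapsed = time.perf_counter() - start
assert elapsed < 1.0, f"search too slow: {elapsed:.3f}s"
```

Real nonfunctional testing (performance, security, usability) uses dedicated tooling, but the split between "correct behavior" and "quality of behavior" is the same.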
Black box testing is a method of testing functional software behavior from a user’s point of view. Simply put, a QA specialist is not familiar with the internal structure of the tested object. Black box testing involves systematically selecting features to inspect and writing tests to cover them. Technical requirements and specifications become the basis for the behavioral test strategy.
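As a sketch, suppose a hypothetical spec says a username is valid if it is 3–15 characters long and contains only letters, digits, or underscores. Black box test cases are derived from that spec alone; the tests never look at how `is_valid_username` is implemented:

```python
import re

# The implementation is opaque to the tester; only its spec matters.
def is_valid_username(name: str) -> bool:
    return re.fullmatch(r"\w{3,15}", name) is not None

# Each case pairs a spec-driven input with the behavior the spec promises.
spec_cases = [
    ("bob", True),              # minimum valid length
    ("qa_engineer_007", True),  # maximum valid length, mixed characters
    ("ab", False),              # below minimum length
    ("x" * 16, False),          # above maximum length
    ("john doe", False),        # space is not an allowed character
]

for value, expected in spec_cases:
    assert is_valid_username(value) == expected, value
```

If the implementation later changes (say, to a hand-rolled loop), the same black box suite still applies, because it depends only on the specification.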
A bug is an inconsistency between the expected and actual results of a testing iteration. It can be an incorrect output for an agreed input, performance that differs from the agreed specifications, etc.
Bug severity is the level of potential impact a particular defect can have on the design or operation of a certain component or system.

Bug priority is the order in which a bug should be fixed. In other words, it determines how urgent the bug fix is.
Test design is a stage of the software testing process, during which QA engineers write test scenarios based on previously defined quality criteria and testing goals.
There are three main test design techniques that help to reduce the number of required test cases: equivalence partitioning, boundary value analysis, and pairwise testing.
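The first two techniques can be sketched on a hypothetical rule: a form accepts ages from 18 to 65 inclusive. Instead of testing every possible age, we pick one representative per partition and the values around each boundary:

```python
# Hypothetical rule under test: ages 18 to 65 inclusive are eligible.
def is_eligible_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# stands in for the whole partition.
partitions = {17: False,  # below-range partition
              40: True,   # valid partition
              70: False}  # above-range partition

# Boundary value analysis: values on and next to each boundary,
# where off-by-one defects are most likely.
boundaries = {17: False, 18: True, 19: True,
              64: True, 65: True, 66: False}

for age, expected in {**partitions, **boundaries}.items():
    assert is_eligible_age(age) == expected, age
```

Eight targeted values cover the rule as thoroughly as testing every age would, which is exactly the point of these techniques.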
A software emulator is a fully functional analog of an original device or its version. An emulator mimics software and hardware behavior, modeling the core capabilities and restrictions of the functionality.
A software simulator is a model of an original device with its logic implemented partially or completely. A simulator, however, doesn’t mimic hardware features. Therefore, a simulator can reproduce software behavior and interface closely, but not the resources of a native system.
There are three main approaches to integration testing – the bottom-up, the top-down, and the big bang approach.
Using this approach, you put all the low-level modules, procedures, and functions together and start testing a product. Then, assemble the next level of modules for integration testing, and so on. It is reasonable to use the bottom-up approach if all or almost all of the units are ready.
First, we test all the high-level modules. Then, we gradually add low-level modules, one by one. Instead of the lower-level modules that aren’t ready yet, QA engineers use stubs with similar functionality to simulate the features. And when all the active components are ready, we use them to replace stubs.
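A minimal sketch of the top-down approach, using Python's standard `unittest.mock` and hypothetical class names: the high-level `ReportService` is ready, but its lower-level storage layer is not, so a stub with canned data stands in for it:

```python
from unittest.mock import Mock

# High-level module under test (hypothetical).
class ReportService:
    def __init__(self, storage):
        self.storage = storage  # lower-level dependency, not yet implemented

    def failed_tests_summary(self):
        runs = self.storage.fetch_runs()
        failed = [run for run in runs if run["status"] == "failed"]
        return f"{len(failed)} of {len(runs)} runs failed"

# The real storage layer isn't ready, so a stub implements the agreed
# interface and returns canned data.
storage_stub = Mock()
storage_stub.fetch_runs.return_value = [
    {"id": 1, "status": "passed"},
    {"id": 2, "status": "failed"},
    {"id": 3, "status": "failed"},
]

service = ReportService(storage_stub)
assert service.failed_tests_summary() == "2 of 3 runs failed"
```

Once the real storage module is implemented, it replaces the stub, and the same test continues to exercise the integration between the two levels.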
In this case, we assemble all or the majority of modules to get a close-to-complete system. Then, we run integration testing. This approach is time-saving – that’s the main benefit. However, if you don’t record test cases or their results correctly, the integration process can become complicated and create obstacles during tests.
Requirements are the specifications of the functionality of the developed software described in the product documentation. In other words, it is what the team needs to create and implement.
There are certain requirements to requirements 🙂 In this case, a well-written requirement typically meets the following criteria: it is complete, unambiguous, consistent, feasible, and testable.
A requirements traceability matrix is a two-dimensional table that maps a product’s functional requirements to the prepared test cases. For example, table columns list requirements, and table rows list test scenarios. At the intersection, a QA engineer places a mark that helps figure out whether a test case covering a particular requirement exists.
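A tiny traceability matrix can be sketched in a few lines; the requirement and test case IDs below are hypothetical. A missing mark in a column immediately reveals an uncovered requirement:

```python
# Columns are requirements, rows are test cases (hypothetical IDs).
requirements = ["REQ-1", "REQ-2", "REQ-3"]
coverage = {
    "TC-101": {"REQ-1"},
    "TC-102": {"REQ-1", "REQ-2"},
}

# Print the matrix: an "x" marks a test case covering a requirement.
print("Test case  " + "  ".join(requirements))
for case, covered in coverage.items():
    row = "  ".join(("x" if req in covered else "-").center(len(req))
                    for req in requirements)
    print(f"{case:<11}" + row)

# A column with no marks means the requirement has no covering test case.
uncovered = [req for req in requirements
             if not any(req in covered for covered in coverage.values())]
print("Requirements without coverage:", uncovered)
```

In this sketch, the gap check reports that REQ-3 has no covering test case, which is exactly the kind of hole the matrix is meant to expose.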
Sanity testing is a highly targeted type of testing used to prove that a particular function works as stated in the specifications. It allows you to determine if a certain part of an application is ‘healthy’ enough after changes so the team can proceed with the testing activities. Sanity testing is a subset of regression testing, and it is mostly manual.
Though not technical, you should always be ready to answer this one. Talk about the advantages you’ve experienced working as a QA engineer, what you aim to achieve, and why this career path attracted you in the first place. While QA job requirements describe what a QA company expects from you, telling an interviewer about your own expectations and involvement reveals as much as your CV.
Hopefully, you’ve found some helpful information in this article. Good luck with your interview! 😉