For any SaaS company, quality assurance (QA) plays an essential role in delivering stable, high-quality products. However, not all QA practices are created equal. Some can actually hinder your product development pipeline and reduce product quality. In this article, we'll look at nine "bad QA smells" – nine heuristics for identifying poor QA practices – and show how to address each one.

1. Siloed QA Teams

When your QA team works in isolation from developers, designers, project managers, and other stakeholders, nobody has a clear picture of product quality until your product nears release. Having QA work only at the end of the product development pipeline leads to miscommunication, delays, and missed defects: the pressure to stick to a release schedule leaves your QA team scrambling to find problems and the rest of your product team rushing to fix them as quickly as possible.

The solution to this problem is to adopt an agile QA strategy such as "Shift-Left" QA. Shifting left means making QA part of the entire product development pipeline, so you have a team spotting problems in the earliest phases of development, helping you build excellent products while saving time. Industry-wide surveys have found that 92% of organizations use some form of agile methodology to manage their QA.

2. Lack of Automated Testing

According to a recent report by Practitest, 49% of organizations depend on manual testing for more than half of their testing time. This approach is time-consuming, prone to human error, and difficult to scale. Moreover, your developers are less likely to catch bugs when they're under time pressure. Together, these problems create delays and prevent your company from releasing products your customers love.

If you're unsure where to start, consider hiring a fractional QA service to help you develop a test automation strategy and begin integrating automated tests into your product development pipeline.
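
Even a handful of automated checks around your most critical business logic pays off quickly. Below is a minimal sketch using pytest; the `calculate_invoice_total` function and its behavior are hypothetical stand-ins for whatever logic matters most in your product.

```python
# test_invoicing.py -- a minimal automated test sketch using pytest.
# `calculate_invoice_total` is a hypothetical stand-in for your own business logic.
import pytest


def calculate_invoice_total(line_items: list[dict], tax_rate: float) -> float:
    """Sum line items and apply a flat tax rate (placeholder implementation)."""
    subtotal = sum(item["quantity"] * item["unit_price"] for item in line_items)
    return round(subtotal * (1 + tax_rate), 2)


def test_total_includes_tax():
    items = [{"quantity": 2, "unit_price": 10.00}]
    assert calculate_invoice_total(items, tax_rate=0.20) == 24.00


def test_empty_invoice_is_zero():
    assert calculate_invoice_total([], tax_rate=0.20) == 0.00


def test_missing_quantity_is_rejected():
    # Documenting expected failure modes is as valuable as happy-path checks.
    with pytest.raises(TypeError):
        calculate_invoice_total([{"quantity": None, "unit_price": 5.00}], tax_rate=0.20)
```

Running `pytest` from the project root executes every test automatically, which is exactly the kind of repeatable, scalable check manual testing struggles to provide.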

3. Skipping Out on Continuous Testing

Without continuous testing – where automated test scripts run every time your developers push code changes to your company's repository – you're missing out on serious organizational efficiency. In fact, Practitest's 2024 State of Testing report found that nearly three-quarters of development teams believe they release features and functionality faster when using this practice.
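
Continuous testing is usually wired up through your CI provider, but the core idea fits in a few lines: every push triggers the full test suite, and the change is blocked if anything fails. The script below is a hypothetical sketch of the gate a pipeline step could run; it assumes pytest is installed, and in practice the same contract would live in your CI provider's own configuration.

```python
# ci_gate.py -- a hypothetical sketch of the check a CI pipeline runs on every push.
# The real setup lives in your CI provider's config; this just shows the contract:
# run the suite, and fail the build on any failure.
import subprocess
import sys


def run_test_suite() -> int:
    """Run the automated tests and return their exit code."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "--maxfail=5", "-q"],
        check=False,
    )
    return result.returncode


if __name__ == "__main__":
    exit_code = run_test_suite()
    if exit_code != 0:
        print("Tests failed -- blocking this change from merging.")
    sys.exit(exit_code)  # a non-zero exit marks the pipeline step as failed
```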

4. Overreliance on Bug Metrics

While bug counts and fix rates can provide valuable insights, focusing solely on these metrics leads to a narrow view of quality. If you over-incentivize them, your developers may start letting more bugs slip through to the QA team rather than dealing with them earlier. This unintended consequence of over-optimizing a metric is a classic example of Goodhart's law: when a measure becomes a target, it ceases to be a good measure.

Instead of focusing on narrow bug-fix metrics, implement a balanced scorecard approach that considers factors such as Net Promoter Score, engineering performance metrics like DORA, and business metrics like customer churn alongside traditional bug metrics. This gives you a set of self-correcting metrics that guide your team to success.
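
To make the idea concrete, here is a hypothetical sketch of a balanced scorecard: each metric is normalized against a target and weighted, so no single number (like raw bug counts) dominates the picture. The metric names, targets, and weights are illustrative assumptions, not recommendations.

```python
# quality_scorecard.py -- a hypothetical balanced-scorecard sketch.
# Metric names, targets, and weights are illustrative; tune them to your business.

METRICS = {
    # name: (observed value, target value, weight, higher_is_better)
    "open_bugs": (42, 25, 0.20, False),
    "nps": (38, 50, 0.25, True),
    "deploy_frequency_per_week": (3, 5, 0.25, True),  # a DORA-style metric
    "monthly_churn_pct": (2.1, 1.5, 0.30, False),
}


def metric_score(value: float, target: float, higher_is_better: bool) -> float:
    """Score a metric on a 0..1 scale relative to its target."""
    ratio = value / target if higher_is_better else target / value
    return min(ratio, 1.0)


def scorecard(metrics: dict) -> float:
    """Weighted blend of all metrics, so no single number dominates."""
    return sum(
        weight * metric_score(value, target, higher_is_better)
        for value, target, weight, higher_is_better in metrics.values()
    )


if __name__ == "__main__":
    print(f"Overall quality score: {scorecard(METRICS):.2f} / 1.00")
```

Because a gamed bug count only moves one weighted slice of the score, improvements have to show up in customer-facing and delivery metrics too.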

5. Ignoring Non-Functional Testing

Many QA teams focus primarily on functional testing, overlooking crucial aspects like usability, performance, and stress testing. This leaves weak points throughout your product and can stifle progress as it matures.

Instead of having your QA team focus solely on functional tests, incorporate non-functional testing into your QA strategy: assess usability and user workflows, run load and stress tests, and involve QA at each stage of the product development pipeline. This lets your QA team find problems much earlier, saving your company time and money and reducing churn caused by quality issues.
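
As one small example of non-functional testing, the sketch below checks that a service stays within a latency budget under concurrent load. It assumes the `requests` package is installed; the endpoint URL, concurrency level, and latency budget are hypothetical placeholders, and dedicated tools (Locust, k6, JMeter) are the better fit once your needs grow.

```python
# load_check.py -- a minimal load-test sketch for a hypothetical staging endpoint.
# Assumes the `requests` package is installed; URL and thresholds are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://staging.example.com/api/health"  # hypothetical staging endpoint
CONCURRENT_USERS = 20
P95_BUDGET_SECONDS = 0.5


def timed_request(_: int) -> float:
    """Issue one request and return how long it took."""
    start = time.perf_counter()
    response = requests.get(ENDPOINT, timeout=5)
    response.raise_for_status()
    return time.perf_counter() - start


def run_load_check() -> None:
    # Fire many requests in parallel to simulate concurrent users.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = sorted(pool.map(timed_request, range(CONCURRENT_USERS * 5)))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"p95 latency: {p95:.3f}s over {len(latencies)} requests")
    assert p95 <= P95_BUDGET_SECONDS, "p95 latency exceeds the performance budget"


if __name__ == "__main__":
    run_load_check()
```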

6. Inadequate Test Environment Management

Poorly managed, inconsistent test environments lead to unreliable test results and wasted developer time. If your developers can't trust the continuous testing environment to reliably exercise their code changes, more bugs will find their way into production. In fact, a recent Practitest report found that over 72% of testing teams consider managing test data and environments a challenge (Practitest 2021).

You can fend off these problems by implementing robust test environment management practices. This includes containerizing test environments with tools like Docker, keeping your environment setup under version control, and using configuration management tools to keep test configurations consistent.
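
As an illustration of a disposable, consistent environment, the sketch below spins up a throwaway Postgres instance per test run. It assumes Docker is running and the `testcontainers` Python package is installed; every run then starts from the same pinned image rather than whatever happens to be installed on a given machine.

```python
# conftest.py -- a sketch of a disposable, containerized test database.
# Assumes Docker is running and the `testcontainers` package is installed.
import pytest
from testcontainers.postgres import PostgresContainer


@pytest.fixture(scope="session")
def database_url():
    # Each test run gets a fresh Postgres from a pinned image, so results
    # don't depend on a shared, drifting test server.
    with PostgresContainer("postgres:16-alpine") as postgres:
        yield postgres.get_connection_url()
    # The container is torn down automatically when the session ends.


def test_database_is_reachable(database_url):
    # Placeholder check: a real suite would run migrations and queries here.
    assert database_url.startswith("postgresql")
```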

7. Lack of Risk-Based Testing

Spreading testing resources equally across all features leads to inefficient resource allocation and causes your QA team to miss critical defects. This has ripple effects across your organization, creating headaches and slowing down feature releases.

Instead of blindly applying test automation across the board, identify your core user workflows and ensure their stability across releases. This way critical bugs are caught where they matter most, and your customers can count on a functional experience release after release.
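
One lightweight way to do this is to score each feature by how likely it is to break and how badly a failure would hurt, then spend testing effort from the top of the list down. The features and scores below are purely illustrative.

```python
# risk_prioritization.py -- a hypothetical risk-based test prioritization sketch.
# Features and scores are illustrative; likelihood and impact are on a 1-5 scale.

FEATURES = [
    # (name, likelihood of failure, impact of failure)
    ("checkout and payments", 4, 5),
    ("user signup", 3, 5),
    ("report export", 3, 2),
    ("profile avatar upload", 2, 1),
]


def prioritize(features):
    """Rank features by risk score (likelihood x impact), highest first."""
    return sorted(features, key=lambda f: f[1] * f[2], reverse=True)


if __name__ == "__main__":
    for name, likelihood, impact in prioritize(FEATURES):
        print(f"{name:<25} risk = {likelihood * impact}")
```

High-risk workflows like checkout get the deepest automated and exploratory coverage; low-risk ones can make do with lighter smoke tests.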

8. Insufficient Test Data Management

Poor test data management can lead to critical security and privacy breaches. For example, in 2016 the UK parenting retailer Kiddicare suffered a data breach because its test website had lax security, exposing the data of nearly 800,000 customers (BBC). Instead of anonymizing production data in their test environment, they simply copied it over and left it at that. This kind of breach is far from an isolated incident.

To mitigate security problems like these, CISA recommends that companies implement test data controls so personnel only have access to the data they need to perform their job (CISA). If Kiddicare's team had simply transformed the data – generating random addresses, emails, usernames, and so on – attackers would have broken into a database full of fake information. It's best to consult a security expert whenever you handle sensitive data and have them regularly audit your test environment for security issues.
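
A simple version of that transformation step is sketched below: records keep their realistic shape but all personal fields are replaced before they ever reach the test environment. It assumes the `faker` package is installed, and the record fields are hypothetical.

```python
# anonymize.py -- a sketch of anonymizing customer records for a test environment.
# Assumes the `faker` package is installed; the record fields are hypothetical.
from faker import Faker

fake = Faker()


def anonymize_customer(record: dict) -> dict:
    """Return a copy of the record with personal fields replaced by fake data."""
    return {
        **record,
        "name": fake.name(),
        "email": fake.unique.email(),
        "address": fake.address(),
        "phone": fake.phone_number(),
    }


if __name__ == "__main__":
    production_row = {
        "id": 1042,
        "name": "Real Person",
        "email": "real.person@example.com",
        "address": "1 Real Street",
        "phone": "+44 1234 567890",
        "plan": "premium",  # non-personal fields can be kept as-is
    }
    print(anonymize_customer(production_row))
```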

9. Ignoring User Feedback

QA teams that don't incorporate user feedback into their testing processes risk missing the real-world problems users actually run into. No matter how rigorous your testing process, users will hit edge cases that aren't easily predictable, usually because they use your product in ways the developers never intended.

To mitigate problems like these, integrate analytics tools like PostHog, which let you record and replay user sessions, log issues from your customer-facing software, and survey users about their experience with your products.
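
As a rough illustration, an in-app "report a problem" button can forward feedback to your analytics tool so QA can triage it alongside session replays. The sketch below assumes PostHog's public HTTP capture endpoint; the host, API key, and event shape are placeholders, so check PostHog's documentation (or its official SDKs) for your project's actual settings.

```python
# report_feedback.py -- a sketch of forwarding an in-app bug report to analytics.
# Assumes PostHog's public HTTP capture endpoint; host, key, and event shape are
# placeholders -- verify against PostHog's docs before relying on them.
import requests

POSTHOG_HOST = "https://app.posthog.com"      # placeholder host
POSTHOG_API_KEY = "phc_your_project_api_key"  # placeholder project API key


def report_user_issue(user_id: str, description: str, page: str) -> None:
    """Send a 'user reported issue' event so QA can triage it with session replays."""
    payload = {
        "api_key": POSTHOG_API_KEY,
        "event": "user_reported_issue",
        "distinct_id": user_id,
        "properties": {"description": description, "page": page},
    }
    response = requests.post(f"{POSTHOG_HOST}/capture/", json=payload, timeout=5)
    response.raise_for_status()


if __name__ == "__main__":
    report_user_issue("user_123", "Export button does nothing on Safari", "/reports")
```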

References