
Making QA Work Better, Part 2

In my last post, I defined some of the key principles of QA and provided several tips on how to make it a successful part of your organizational process. Today, I'll continue with several more practical lessons, this time focused on the proper way to set up your team and internal tools for the most efficient QA possible.

I've always found it interesting that QA means different things to different people. Not just in different organizations, but even within a small team. For example, a developer on an agile team might consider QA to be fully integrated into his or her day-to-day routine, while another may think it's the role of the "QA team".

With so much diversity of thought on this issue, how can you create a team structure that maximizes QA success? As I noted in Part 1, it's never an exact science, but here are some great lessons the BlueModus team has learned:

1. MAKE SURE YOUR TEAM AND PROCESS ARE AMENABLE TO GOOD QA

It's important to find the right approaches that work for your teams and your engagements. Here at BlueModus, we try to engage QA from the outset and involve testing at all phases. A good rule of thumb for us is one tester per 4-5 developers, but that can vary. Some projects are so complex that they require more testing; some are relatively simple and the dev team can handle the bulk of QA.

One thing to try to avoid, though, is letting developers QA their own work, even if it seems to be the fastest way forward. People get familiar with the inner workings and tend to skip or miss (deliberately or accidentally) key QA steps, and the results will suffer.

Also, don't forget the potential value that automated QA can bring to the table. It certainly doesn't make sense for all test cases, but having a fully-baked automated process to test a checkout workflow, for example, can be a tremendous boon to the project and team.
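As a rough illustration, here is what a minimal automated checkout test might look like using Playwright. Everything in it (the URL, selectors, and test data) is a hypothetical placeholder, not code from a real project:

```typescript
import { test, expect } from '@playwright/test';

// A minimal sketch of an automated checkout smoke test.
// The URL, selectors, and test data are hypothetical placeholders.
test('guest can complete checkout', async ({ page }) => {
  await page.goto('https://example.com/products/sample-widget');

  // Add the product to the cart and move into the checkout flow.
  await page.click('button#add-to-cart');
  await page.click('a#checkout');

  // Fill in the minimum required shipping and payment details.
  await page.fill('input[name="email"]', 'qa-test@example.com');
  await page.fill('input[name="address"]', '123 Test St');
  await page.fill('input[name="cardNumber"]', '4111111111111111');
  await page.click('button#place-order');

  // The confirmation page is the signal that the whole path works.
  await expect(page.locator('h1.order-confirmation')).toBeVisible();
});
```

Run on every deployment, even a simple test like this catches a broken purchase path before a customer does.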

2. USE THE RIGHT TESTING TOOLS AND PROCESS (SMARTER, NOT HARDER)

There are scores of books about QA tools and processes. There are fantastic frameworks and services that can do amazing things. However, if you don't match them to your needs, you're leaving value on the table.

Sometimes, the best tool is a web browser and a list of issues or bugs. Other times, a fully automated deployment-based test suite is best. Usually it's a mix of several things, and the criterion for "what's right" should be simple: what gives your team the best opportunity not just to find issues quickly, but to improve so that fewer issues exist in the first place?

As an example, in an e-commerce application, the path to purchase and checkout workflow is often the most important piece. It often makes sense to automate this with every possible combination of user and product types. These automated tests can be executed on a continuous basis, so the team will receive quick notifications if anything breaks.
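One way to express that kind of matrix, sketched here with hypothetical user and product types and placeholder selectors, is to generate an independent test per combination:

```typescript
import { test, expect, Page } from '@playwright/test';

// Hypothetical dimensions of the purchase matrix.
const userTypes = ['guest', 'registered', 'wholesale'];
const productTypes = ['physical', 'digital', 'subscription'];

// Stand-in for project-specific navigation and form-filling;
// the URL and selectors are placeholders.
async function completeCheckout(page: Page, user: string, product: string) {
  await page.goto(`https://example.com/test-products/${product}`);
  await page.click('button#add-to-cart');
  await page.click(`a#checkout-as-${user}`);
  await page.click('button#place-order');
}

// One independent test per combination, so a failure report points
// directly at the user/product pairing that broke.
for (const user of userTypes) {
  for (const product of productTypes) {
    test(`checkout: ${user} user buying ${product} product`, async ({ page }) => {
      await completeCheckout(page, user, product);
      await expect(page.locator('h1.order-confirmation')).toBeVisible();
    });
  }
}
```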

However, adding automated tests to other parts of the site, such as administrative screens or content-heavy areas, may be a major waste of time. Conversely, manually testing the checkout process could be very inefficient.

The most important things to consider are: what gives us the best coverage with the lowest overhead? Where does the 80/20 rule apply (which 20% of the effort covers 80% of the potential test cases)? What is the cost of a potential bug, and is it worth investing in automation or other tools to make sure those cases are handled?
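To make the cost question concrete, a back-of-the-envelope break-even check is often enough. Every figure below is a made-up placeholder; plug in your own estimates:

```typescript
// Back-of-the-envelope break-even check: is automating this test worth it?
// Every figure below is a made-up placeholder; substitute your own estimates.
const hoursToAutomate = 16;     // one-time cost to build the automated test
const hoursPerManualRun = 0.5;  // cost of each manual pass
const runsPerMonth = 20;        // how often the workflow needs testing

const monthlyManualCost = hoursPerManualRun * runsPerMonth;    // 10 hours/month
const monthsToBreakEven = hoursToAutomate / monthlyManualCost; // 1.6 months

console.log(`Automation pays for itself in ~${monthsToBreakEven.toFixed(1)} months`);
// ...and that is before counting the cost of an escaped checkout bug,
// which usually dwarfs the testing effort on either side.
```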

3. GIVE THE TEAM VISIBILITY INTO QA METRICS

This absolutely should not be a "wall of shame", but rather a real-time view into how things are working. If the team considers bugs to be process problems rather than individual failings, they will generally be more motivated to attack those process problems.

We're all human, we all make mistakes, and expecting developers to never make mistakes is ridiculous. However, if you don't know how well you're doing, you can't improve or fix issues. Make sure that you have some way to see how many issues are re-opened, how many pass/fail QA, and what the downstream effect is. Even if you can't put hard numbers on anything, look to find broad metrics that at least can indicate improvement or degradation over time.
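Even a crude script over an issue-tracker export can surface those trends. The Issue shape below is a hypothetical simplification, not any real tracker's API:

```typescript
// A crude sketch of trend metrics over an issue-tracker export.
// The Issue shape is a hypothetical simplification, not a real tracker's API.
interface Issue {
  reopenCount: number;    // times the fix failed verification and came back
  passedFirstQA: boolean; // cleared QA on the first attempt
}

function qaSnapshot(issues: Issue[]) {
  const total = issues.length;
  const reopened = issues.filter(i => i.reopenCount > 0).length;
  const firstPass = issues.filter(i => i.passedFirstQA).length;
  return {
    reopenRate: total ? reopened / total : 0,
    firstPassRate: total ? firstPass / total : 0,
  };
}

// Compare this month's snapshot against last month's: the absolute numbers
// matter less than whether the rates are trending up or down.
```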

4. PROVIDE THE TEAM WITH THE RIGHT GOALS AND INCENTIVES VIS-A-VIS QA

Related to the metrics above, giving your team the right incentives is important. It's also easy to create bad incentives that lead to negative behavior. For example, giving recognition or bonuses to team members who "write fewer bugs" will often lead to gold-plating, clever masking of defects, and other behaviors that, while not necessarily rooted in bad intentions, will lead to bad results in the long run.

Incentivizing teams to focus on the idea that "bugs are a process problem, not an individual problem" will help with this. Look to reward teams that identify areas for improvement and find ways to make them better. Be careful not to single out individuals, as this can breed negative feelings and cause problems within the team.

CONCLUSION

Remember, as I noted last time, QA and testing is a journey, not a destination. While it will never be perfect, as long as you're always striving to improve your approach and processes, you'll be in a good place.

I hope that this has been helpful, and I'd love to hear your feedback, especially if you have experiences that align with the thoughts above or, more importantly, ones that disagree. It's a complex topic, and we all believe that teams can learn a lot by sharing their experiences.
