Modern testing in an agency setting

Following on from my previous post on Modern Testing Principles, I've been listening through the back catalogue of the AB Testing podcast to try to get more insight into the ideas behind them. In particular I've been thinking a lot about how they apply to my own situation as a tester in an agency rather than in a team dedicated to a single product.

One clarification I've picked up from the episodes I've listened to so far (and there are many more to go, so my opinion may change again) is that the principles are not talking about how the modern tester should behave, but rather about how testing should be carried out in a modern project setting. And the conclusion seems to be that the developer should be doing their own testing; any errors that make it through to production should be picked up by analysis of available data (server logs, analytics, maybe even direct customer complaints) and then fixed in double-quick time through continuous deployment.

Which kind of answers a problem I've been wrestling with for a while - what is it that testers do that other people can't? More and more I suspect the answer is "nothing", or at least nothing others couldn't do with practice and guidance. To me that's what the principles are stating - we should "shift left" to the extreme of there being no independent test role before initial deployment (a phrase used on the podcast is "removing the safety net"). I'm all for that; there seems to me no reason why (given enough time) a developer can't run all the checks a tester does and produce code of a much higher standard. It's often said that testers are looking for something that shouldn't be there; this approach shifts the onus back onto the developer to ensure the code is correct in the first place, rather than delivering something that is "good enough" in the knowledge that the test team will pick up any further deficiencies. The focus of people currently in a tester role thus switches from creating and running the tests to passing on testing wisdom to others, as a kind of mentor.

Where I have a problem with this approach (although it may be that I am misunderstanding) is when I come to apply these principles to my own situation. In an agency setting, you often do not have the follow-through to live that you might expect when working on a continuously delivered project; often the pieces of work that we pick up are transient, and we do not have the luxury of being able to deliver something that might fail in the knowledge that it can be fixed quickly. There is an implication of a feedback loop in the modern testing structure, in that data from the live system can be fed back into the ongoing development process; in the agency world the production environment may not be under the control of the development team, being run instead by the client, with our work just feeding into their process. Unless we are on a fixed retainer, any updates that are required following feedback will not be as immediate as an in-house team could deliver - there are extra steps such as the drawing up of contracts, agreement of the work to be undertaken and securing of resources. I remember hearing a story (I think from Gem Hill's podcast) where, having moved from an agency setting into a larger corporation, she was amazed that her proposed (but unproven) change to test methodology was immediately put into place, on the grounds that it could always be removed or changed if it didn't work out. In an agency scenario, implementing a solution that does not work will not get you repeat business.

I guess in some ways what I'm saying could be interpreted as an argument against using agencies; but there are plenty of companies that are not in a position (nor have the desire) to run their own development in-house, so I think the role of tester will still be required in this instance, partly to ensure that the client is getting what they asked for (a happy client will keep coming back) and also to provide proof that work has been delivered to the required standard (so that payment can be made). You'd be surprised how difficult it can be to get a client to check what you've done matches what they asked for.

The brevity and one-off nature of many projects is also a bar to being able to risk a potential live failure - much of my work is around unique short-term campaigns rather than adding features that will last long-term. If we discover a problem on, say, a live page advertising a 24-hour flash sale, it's likely the damage will already have been done before a change can be made. In this case there is not sufficient time to close the feedback loop.

As I said, I'm still working my way through the conversations on AB Testing; this is how I currently see things but I will no doubt post another update in the future when I've had a chance to think about this a bit more.

Comments

  1. Hi!
    Great post. I would like to comment on your statement:

    "More and more I suspect the answer is "nothing", or at least nothing others couldn't do without practice and guidance".

    1. In my experience, even with practice and guidance, people can still do poor testing, because guidance is not a checklist. To enable excellent testing, you can only give meta guidance. And with that meta guidance, great testers will create context-driven testing techniques (checklists).
    2. Testers are the most competent to create meta guidance for software testing. Black Box Software Testing course is one example.

    Regards, Karlo.
    https://blog.tentamen.eu

  2. I believe your understanding of Modern Testing to be correct. The problem is not that you are missing something - the problem is that Modern Testing might apply only to some specific contexts and situations. If your development team is external to business, if you can't move towards continuous deployment or if you can't gather feedback from live systems, trying to apply Modern Testing principles probably won't bring you many improvements.

    One can say that development should be closer to the business, or that you should remove barriers preventing continuous deployment, or that you need to work on getting better analytics on live systems. But such opinions ignore very real and very valid constraints. If you work in an agency and your contract specifies that you deliver software according to spec and might never hear from the client again, then that's it - you won't be able to respond to data gathered on the live system to improve it.

    One of my rules of thumb is to look at who is speaking, not only at what they are saying. Alan and Brent work as upper-level managers in organizations producing and selling software. We don't necessarily share their experiences and point of view; they might never have had some of the problems that we face on a daily basis, and their solutions might not be a good fit for the problems that we have.

