I'll raise my hand to this one - a couple of weeks ago I very nearly made a huge estimation error. It was on a mega-busy day (planning for an out-of-office trip) and the estimate had to be done in short order, so I gave the brief a quick read and dashed off an email to get it off my plate. It bounced back to me within the hour with a polite note saying it looked a little low (which was a diplomatic understatement). By then I had a slightly clearer mind (and workload), was able to issue a quick correction, and all was well. So, with the benefit of hindsight, what went wrong?
In order to work this out, I had a good look at my usual estimation process to see if I had missed any steps.
1) Fully understand the project scope
While there probably won't be time to explore the solution in full depth, you need to understand everything that will require your attention - if there's one area that you're not sure about, chances are it will be the one that comes back to bite you. A quick discussion with the lead developer (or someone involved with scoping the project) can be a great help for this, often uncovering areas that you may otherwise miss.
2) Break it down
This is probably my biggest tip when it comes to estimation - bigger tasks are much harder to gauge, smaller tasks much more tangible. Often I do this as behind-the-scenes work and don't include it in the estimate I send through, usually on a spreadsheet so it's easy to multiply up the repetitive tasks. So if a website has 12 pages, estimate the time needed for one page and multiply up. Cross browser testing? Estimate the time for one browser then scale up. Project managers won't usually need this level of detail, but it can be useful in case you're asked to incorporate a scope change (e.g. a new page, 3 extra browsers) - pump the numbers in the spreadsheet and out pops the new estimate. But bear in mind the next point...
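The spreadsheet approach above is easy to sketch in a few lines. This is just an illustration of the multiply-up idea; the task names and hourly figures are invented for the example, not taken from any real project.

```python
# A minimal sketch of the breakdown spreadsheet: estimate one unit of each
# repetitive task, then multiply up. All figures here are illustrative.
per_page_hours = 1.5      # assumed time to test one page
pages = 12
per_browser_hours = 4.0   # assumed time for one extra browser pass
browsers = 3

page_testing = per_page_hours * pages
browser_testing = per_browser_hours * browsers
total = page_testing + browser_testing
print(f"Pages: {page_testing}h, browsers: {browser_testing}h, total: {total}h")
# -> Pages: 18.0h, browsers: 12.0h, total: 30.0h

# A scope change (one new page, three extra browsers) is just new inputs:
total_revised = per_page_hours * (pages + 1) + per_browser_hours * (browsers + 3)
print(f"Revised total: {total_revised}h")
# -> Revised total: 43.5h
```

The point is that the per-unit numbers, not the totals, are what you maintain - the totals fall out automatically when the scope moves.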
3) Economies of scale
The more you repeat a task, the quicker you can do it; and for something like cross-browser testing, the more browsers you successfully test, the more confidence you'll have in those that remain. You will also have a better idea of the high-risk areas, so the testing can be more focussed. I often pitch a single test on one browser first (to iron out the common issues that would be found everywhere) followed by a check for individual browser quirks.
4) Estimate for the full project, not one cycle
It's an easy mistake to make - remember that you are asking for the time you will need for all of the testing, not just the initial test and defect raising; you need to build in time for other activities such as retesting of defects, regression testing, integration testing etc.
5) Include options
Testers don't fix bugs; they make the project team aware of them. Similarly, an estimate does not have to be a hard-and-fast number - if there are dependencies or options, list these and let the PM decide what to present to the client.
6) Look for problems
Techniques like three-point estimation allow you to look at the best and worst case scenarios as well as the likely route (which is what people naturally focus on). While I wouldn't necessarily go to the lengths of putting numbers against the extreme cases, it's worth at least thinking about what could go wrong to get an idea of how your time will be impacted if it does. If there's a possibility of an area being tricky, giving yourself a bit of leeway in the timings is always worthwhile.
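If you do want to put numbers against it, the common PERT variant of three-point estimation takes a weighted mean of the optimistic, most likely and pessimistic figures, with the likely case weighted four times as heavily. The hours below are made up purely for illustration.

```python
# PERT-style three-point estimate: (O + 4M + P) / 6, where O is the
# optimistic, M the most likely and P the pessimistic figure.
def three_point(optimistic: float, likely: float, pessimistic: float) -> float:
    return (optimistic + 4 * likely + pessimistic) / 6

# Illustrative figures: best case 10h, likely 16h, worst case 40h.
estimate = three_point(10, 16, 40)
print(f"Expected effort: {estimate:.1f}h")
# -> Expected effort: 19.0h
```

Notice that a nasty worst case pulls the expected figure above the "likely" number - which is exactly the leeway the paragraph above argues for.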
7) Do a sense check against the development time
Looking back at old project data, you should be able to get a feel for the ratio of testing time to development time; I used to pitch this at about 20%, but in recent years I've found it more comfortable to allow 25-30% of development time (which either means the developers are more efficient or the testing has got harder). That will of course depend on the nature of the project - particularly where a lot of cross-browser testing is involved, the test time may well be much higher, or conversely a very technical build that has little impact on the front end may require a smaller proportion.
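The sense check itself is simple arithmetic: take the developer's figure and see whether your estimate lands inside the historical band. The 200-hour development figure below is purely illustrative.

```python
# Sense check a test estimate against the development estimate, using the
# 25-30% band mentioned above. The dev_hours figure is illustrative.
dev_hours = 200
low, high = 0.25 * dev_hours, 0.30 * dev_hours
print(f"Test estimate should fall roughly between {low:.0f}h and {high:.0f}h")
# -> Test estimate should fall roughly between 50h and 60h
```

If your bottom-up breakdown lands far outside that band, one of the two estimates (or your understanding of the scope) is probably wrong - which is exactly the trap described at the start of this post.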
8) List the risks
If you spot anything that might cause the project to blow up while researching your estimate, make a note of it and let the team know. They may be able to set your mind at rest, or (as has happened to me in the past) they might suddenly revise their own estimates to account for it. It's also worth listing out dependencies you can foresee (e.g. client must supply logins, data etc.) as the project should not have to bear the brunt of any time lost due to failures out of the project team's control.
So where did I go wrong? Well, it was a combination of an error in 1) and missing out 7). I had thought we were implementing an off-the-shelf product; it turns out that it was much more bespoke than I had imagined, so the development time was a lot higher. In turn, the individual cases that I thought could all be tested as one would have to be taken individually (i.e. I had assumed it would either work or it wouldn't, whereas it actually needed a lot of cases to be thoroughly checked). Secondly, I had not asked for the developer's estimate to compare against - as soon as I saw their numbers I knew I had misjudged the scale. Fortunately, the way I had broken down the testing was proportionally correct, so all I had to do was scale everything up. I was also lucky to have one of the rare gems: an understanding Project Manager (at least in my estimation).