Saturday, November 21, 2009

What makes up Session-Based Test Management

Session-Based Test Management is an approach to managing exploratory testing introduced by Jonathan and James Bach. I read the articles on the topic a long time ago, remembered the bits and pieces I found useful at that point, and started practicing.

After some years, I'm at a point where I can reflect on my personal style of managing exploratory testing against the original session-based test management.

On 12.9.2009 I made a summary of the differences and posted it to bach-sbtm@yahoogroups.com, and it got a few replies. Unfortunately, right after posting I got busy with new work and did not follow up on the discussions I had started.

I identified these potential differences while comparing my practice to the original article:
- The charter list comes from the testers, who add charters based on what they've learned while testing. I could add charters as test manager, but I usually choose not to; instead, I encourage the tester working in that area to realize the need for the charter and add it.
- I use charters in a time-boxed fashion instead of an estimated one. If the time-box is 90 minutes, the testing is interrupted at 90 minutes and the results and the level of testing done are reviewed.
- When a session is over, there are three choices for the charter that was worked on: claim it done, prioritize the rest of it lower and go back to it later (if ever), or prioritize the rest of it high and continue in the next session.
- I tend to report bugs off-session or in sessions of their own because of the time-boxing. I would not actually time-box bug reporting; it is a task that takes as long as it takes until it is complete.
- I monitor rough metrics using the setup / test / bug division, but typically by asking once a day or twice a week rather than by having anyone write a session sheet.
- The most essential metric for me is velocity (progress in charters) compared to planned capacity (people and schedule) and the work remaining; a rough sketch of this bookkeeping follows after the list. If the "testing" portion of the metrics is small, charters tend to remain on the list.
- I make sure people get feedback on things they've missed by encouraging overlapping work to some extent.
- I separate debriefing (sharing information with the team) from coaching. Coaching I do with individuals, based on what I notice in debriefings. For me, the main purpose of debriefing is to share how far we are and what we need to do next with the other people in the same sandbox/area.
- As a "tool" for the tester, I suggest a piece of paper split into four areas (mission/sandbox, current charter, details, new charters) and a pile of post-it notes, to keep track of what was supposed to be done and to avoid accidentally wandering off to the wrong things. Intentionally changing the charter is allowed: the tester decides to cancel the current session and start a new one.
- My preferred session length is a unit of half a day's work, or a piece of work that fits into half a day uninterrupted.
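
To make that bookkeeping concrete, here is a minimal sketch of the kind of tracking I mean. It is only an illustration under my own assumptions: the Session and CharterBacklog names, their fields and the sample numbers are invented for this post, not part of the published SBTM material.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names, fields and numbers are hypothetical,
# not taken from the published SBTM articles.

@dataclass
class Session:
    charter: str
    length_min: int = 90       # the time-box; testing is interrupted here
    setup_pct: int = 0         # rough setup / test / bug split, collected by
    test_pct: int = 0          # asking once a day or twice a week rather
    bug_pct: int = 0           # than from a formal session sheet
    outcome: str = "open"      # "done", "deprioritized" or "continue"

@dataclass
class CharterBacklog:
    charters: list = field(default_factory=list)   # added by the testers
    done: list = field(default_factory=list)

    def on_track(self, sessions_per_week: float, weeks_left: float) -> bool:
        """Rough check: do the remaining charters fit the planned capacity?"""
        return len(self.charters) <= sessions_per_week * weeks_left

backlog = CharterBacklog(charters=[f"charter-{i}" for i in range(12)])
# 3 testers * 2 half-day sessions a day * 5 days = 30 sessions of capacity
print(backlog.on_track(sessions_per_week=30, weeks_left=1))  # True: 12 <= 30
```

The point is not the data structure but the comparison: charters remaining against people and schedule, with the setup / test / bug split asked for rather than extracted from session sheets.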

I also wrote then: "Based on the article, I got the impression that the core of the SBTM-approach is the tool-supported session sheet. I've just chosen to live with the interpretation that the core of SBTM is the session – splitting testing time into smaller chunks." -- what is the core? When is the approach to managing exploratory testing no longer session-based test management?

I will post an update on what I think after reviewing the thread of replies I should have looked at back in September.

Sunday, October 11, 2009

Analyzing own learnings for value of tests

I was reading Markus Hjort's blog (http://www.jroller.com/mhjort/) about challenges with testers and trust - or the lack thereof. In his story, he looks from the outside at testers wanting to test leap days and ponders how to make people understand what kinds of problems are likely and worth the testing effort.

I started my testing career a number of years ago in localization testing. While finding problems with that type of testing is just as challenging, having the "working original software" to compare against gives it its own twist of flavor.

Not that many years ago, years after my first introduction to software testing and my learnings from localization testing, I started to look at the value and logic of my own testing. I realized that my early learnings from localization testing - namely that there is a difference between localized operating systems and that I should treat e.g. Finnish and German OS as separate targets in my testing - had a deep impact on what I was still doing.
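
In test-automation terms - and purely as my own hypothetical sketch, not how the localization testing was actually done back then - that learning amounted to repeating the same check once per localized OS, roughly like parametrizing over locales:

```python
import pytest

# Hypothetical sketch: the same check is repeated once per localized OS,
# because each locale was treated as a separate test target.
LOCALES = ["fi_FI", "de_DE", "en_US"]

def decimal_separator(locale_name: str) -> str:
    # Stand-in for the locale-sensitive behaviour under test; a real
    # suite would exercise the actual localized application here.
    return "," if locale_name in ("fi_FI", "de_DE") else "."

@pytest.mark.parametrize("locale_name", LOCALES)
def test_decimal_separator_per_locale(locale_name):
    expected = {"fi_FI": ",", "de_DE": ",", "en_US": "."}[locale_name]
    assert decimal_separator(locale_name) == expected
```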

I went back to thinking about how I had learned that there was a difference. I can still remember the feelings of inadequacy, surprise and shock when I got feedback about missed bugs early in my career. It was enough to teach me not to fail that way again. As the saying goes, making a mistake once is okay, but making the same mistake twice is just stupid.

As I was looking at how I test and how I would rationalize it to myself, I also asked myself whether there was value in the type of tests I did. I went back to my notes, only to realize that there had been no indication whatsoever, for quite a while, of this OS-language difference biting the software I had been testing.

I had to admit to myself that I had made an unconscious choice in my priorities. I had decided that this one was worth the time and effort, even though it was not: there had been a change in the technology around OS-language versions, subtle to me, that made the risk less relevant than my past experience had led me to think. I had used time on repeating something, and thus not used the same amount of time on something else.

I started this reflection from the idea that there is perhaps a lack of trust in testers due to a lack of quality experienced in the past. How to tackle that best in agile? I'd say: let the testers test, but ask them to gradually let go. First, ask them not to run a test they are used to running every week for a week or two, then let them go back to it and reflect. Did it break? If not, why would you think it would? It may bring to the surface experiences similar to the one I described. For me it has.

Having the test automated is the second option for me, if there is value. The value question needs to be addressed first.

Wednesday, May 20, 2009

Testing in Definition of Done

At the time we were getting started with Agile methods, a lot of energy went into working out the definition of done. We followed the debates on whether it is something the team decides or something the product owner decides, and had our own share of the discussions.

At first, it was not easy to even include testing in the definition of done - at least not all the kinds of testing that were actually needed. Eventually that passed, and the lesson was learned: if it is not tested (and ready to be published), it is not actually done. The value is not available from concept to cash, as lean thinking goes.

I still feel the definition of done, especially for the testing part, is quite a complex exercise. Testing is an endless task. At some point, however, it stops providing value, and should be deliberately stopped.

This is a typical approach in "traditional testing" with a risk-based test management focus. So what I tried introducing was a practice of "risk-based test management for the definition of done". Essentially, this is a practice of discussing what "testing" in the definition of done should mean for each product backlog item, by understanding the acceptable level of risk for that item.

"Testing" in the definition of done is not just one. Some changes can be quite safely tested mostly on unit level. Some changes can quite safely be tested with automation. Some changes need extensive exploratory testing.

Similarly "acceptable risk" is not the same for all product backlog items. Some items end up being very visible and commonly used features. Some items are for fewer users, but perhaps more important as customers. Some items are tick box features for sales purposes. You would look at acceptable risk very differently on each of these. Risk-avoidance through added testing adds costs. While velocity may remain similar (when the sizes are visible in the product backlog items), the value experience by users for the same velocity would not be.

Friday, May 8, 2009

Role of a tester in a scrum environment

I just read an email sent to the scrumdevelopment Yahoo Groups list. The email mentioned a small company that had given a notice of redundancy to - apparently all of - its testers. The developers had been told to do the testing, and the testers had some days to justify why they should be kept. An interesting dilemma.

Some weeks back, there was a session on agile testing with James Lyndsay, arranged with the Finnish Association of Software Testing. There were some 15 people there, and at the end of the paper-plane-building session we identified key learning points. What struck me specifically was a learning point that got the lowest score - the one we agreed on least based on the voting: "You don't need testers, you just need testing". That rings a bell with the idea of the notice of redundancy. Yet the people around - testers specifically - did not agree with it.

I too am a tester, and I was the one who wrote down that particular learning point to go around, since I felt it was one of the really key things I had learned. Yet as a tester who works in a scrum environment - or at least used to - I quite strongly feel that testers are useful.

I believe this is related to a theme I just blogged about in Finnish. There are huge differences between experienced testers. There are people who have five years of experience and people who have one year of experience five times over. Testers who actively learn while testing and about testing tend to be far ahead in useful experience of those who have learned their testing by following test scripts (which they may have created themselves) and checklists that keep them disciplined, because they cannot find motivation for discipline in the importance of the results they could be providing. An experienced tester who is an experienced machine part can be replaced with automation, or with someone who can do the work more cheaply. That could be the developers - just to save the cost of teaching the same things to yet another person who would not provide value for the invested time - or someone from a lower-cost country.

I believe that you don't need testers in a scrum environment, but you do need testing. It is not straightforward for the team to include testing as it should be included if there is no specialist in the topic. Then again, having someone called a tester, or even someone who has been a tester comparing the expected to the observed, does not mean that person can actually help bring the right kind of understanding into the team.

In some cases, in my experience, removing testers makes things better for the team. It helps the other team members take responsibility for quality, it makes the team start the automation they have postponed for too long, and it makes them stop building fences around their own components and work together with the other developers. While it may make them seem slow at first, they may recover fast and become better.

In other cases, removing testers makes things worse - when the team left to do the work is neither willing nor able to do the testing. Things get declared done too soon, and the problems passed further down the chain may increase.

I find that the potential value of testers in scrum comes from their potential to think and act like a tester - providing information that was not yet known, on time, in a way that saves time overall.

Being called a tester does not make one a useful tester.