Friday, October 21, 2016

Safety in being heard

Today, I've been thinking about asking. Let me tell you a few stories that are serving as my inspiration.

You don't know to ask for things you've never experienced

I'm speaking at a conference, and as my speaker fee I negotiated a free ticket - something I've been doing in Finland for quite a while. It means that not only do I get to go, but I get to take someone with me. In past years, this has opened up expensive commercial conferences to people in my community, and to people at the same company I work at. The last time I passed a ticket to a colleague, he did not use it. I wanted to make sure that this time my work would go to good use, so I kept checking in with the tester at work I had in mind to take.

In the process of discussing all this, I learned that this was the tester's first ever conference (something I really did not expect) and that things like "food is included" were a surprise. In the discussion, I realized that as a regular conference speaker and goer, I take a lot of things for granted. I no longer see that they might not be clear to others.

So I felt grateful for having enough interaction to realize that the unspoken questions in the puzzled looks were things I could pick up on. The tester might not have known enough to ask the questions. Then again, in this case not knowing would clearly have been OK, and the answers could have come later.

You get answers when you know to ask

When you have a question, people rarely say no to answering it. I'm new to my job, so I have a lot of questions, and as long as I keep coming up with the questions, things move along nicely.

Yesterday, I was feeling back pain. Sitting in my office chair, I suddenly realized that I had been sitting long days in a non-ergonomic, unadjustable chair. I never paid attention until my body made it obvious I should have, basically crippling me for the day. As soon as I asked for a proper chair, I got it. But I had to ask. And it was still not too late to learn to ask.

People tend to reject info they don't ask for

I've been experiencing a recurring pattern over the last few weeks where I point out unfinished work (usually of a surprising kind) and the developer I talk to brushes it off. It's often "some other team's responsibility" or "agreed before I joined" or "will be done later". Since I was hired to test (to provide feedback), having my work rejected categorically feels bad. And it feels worse when I follow up on the claim, come back with information on what the other party actually says, and only then does the unfinished work get acknowledged.

This has led me to think about the fact that whoever asked me to provide information as a tester is different from the developer who gets to react to my feedback. And as a new person on the job, I would love a little consideration for my efforts. They are not noise; I pay a lot of attention to that.

Why all this?

All of this makes me again think of psychological safety. Being safe means being heard. Being safe means being heard without fighting for your voice. Being safe means being heard even if you had no words to describe your questions.

As a tester, I've learned to never give up, even when I feel unsafe. And simultaneously, I look around and wonder what makes some of the other testers so passive, accepting whatever they are told. And yet, they work hard in their tester jobs.

It makes me think that while I'm comfortable with confrontation, it still eats up my energy. Everyone should be allowed to feel safe.

And to get there, we need to learn to listen. 

Thursday, October 20, 2016

Testing in the DevOpsian World

There is an absolutely wonderful blog post that describes Dan Ashby's reaction to being at non-testing conferences that seem to make testing vanish. The way Dan brings testing back is almost magical: testing is everywhere!

At first, I was inclined to agree. But when I looked at the DevOps model with more empathy for the DevOpsers and less for the tester profession, I no longer did.

The cycle, as I've experienced it with the DevOpsers, is used to explain the continuous flow of new features, driven by learning about how the system works in production. It's not about setting up branching systems or strategies. It's not about questioning the mechanisms we use to deploy multiple times a day - just about the delivery of value to the application.

I drew my version of the testing enhanced model:
In this model, testing isn't everywhere. But it is in places where DevOpsers can't really see it. For one, "code" is much more than writing code: the code is just the end result of whatever manual work we choose to put into delivering the value item. All the manual work is done in a branch, isolating the changes from whatever else is going on, and it includes whatever testing is necessary. With a DevOpsian mindset, we'd probably want even the exploratory testing at this point to be driving the creation of automation. But we wouldn't mind finding the occasional oops where we just adjust our understanding and deliver something that works better. And while some portion of this turns into automation, it's exactly the same as with other code: not all the thinking around it ends up in the artifact, and that is OK, even expected.

But as we move forward in the value delivery cycle, we expect the systems that help us move quickly to production to be automated. And even if there is testing there, there's no thinking going on in running the automated tests, the build scripts, the deployment scripts, and whatever else is related to getting the thing into production. Thinking comes in when the systems alert us to a problem, and instead of moving forward in the pipeline, we go back to code. Because eventually, code is what needs to change to get through the pipeline, whether it's test code or production code.
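
If I sketched that unattended part in code, it might look like this. This is a minimal illustration with made-up script names (build.sh, run_tests.sh, deploy.sh are my hypothetical stand-ins, not any real setup): the pipeline runs without human thinking as long as it's green, and the only move on failure is back to code.

```python
# A conceptual sketch of the unattended delivery pipeline.
# All stage names and scripts below are hypothetical placeholders.
import subprocess
import sys

STAGES = [
    ("build", ["./build.sh"]),
    ("automated tests", ["./run_tests.sh"]),
    ("deploy", ["./deploy.sh", "production"]),
]

def run_pipeline() -> bool:
    """Run every stage in order; stop on the first failure."""
    for name, command in STAGES:
        result = subprocess.run(command)
        if result.returncode != 0:
            # The system alerted on a problem: instead of moving
            # forward, we go back to code (test code or production code).
            print(f"Stage '{name}' failed; back to code.", file=sys.stderr)
            return False
    print("Change delivered to production.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```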

On a higher level, we'd naturally pay attention to how well our systems work. We'd care about how long it takes to get a tested build out, and whether that ever fails. We would probably test those systems separately as we build and extend them. But all of that thinking isn't part of this cycle; it belongs to the cycle of infrastructure creation, which is invisible in this image. Just as the cycle of learning how we work together as a team is invisible in this image.

However, in the scope of value delivery, exploratory testing is a critical mindset for those operating and monitoring production. We want to see problems our users are not even telling us about; how could we do that? What would be relevant metrics or trends that could hint that something is wrong? Any aspects that could improve the overall quality of our application or system need to be identified and pushed back into the circle of implementing changes.

I find that by saying testing is everywhere, we bundle testing into the perspectives a tester thinks testing is. A lot of activities testers would consider testing are, for non-testers, design and proper thinking around implementation.

By bringing in testing everywhere, we're simultaneously saying the model of value delivery is extended with elements of:
  • Infrastructure creation 
  • Team working practice improvement
And it's natural we'd say that as testers, because those are all perspectives we consider part of what a tester facilitates. But are they testing of the application, and does testing need to go everywhere in a model that isn't about all things development? I would argue not.

My feeling is that the tester community does itself a disservice by saying testing is everywhere. It's like saying only the things we label testing make software good, as if the things programmers label programming or code didn't have the same potential.

To stay at the same table, discussing and clarifying what truly happens in the DevOpsian world, we need to speak in the same scope. Well, I find that useful, at least.

Wednesday, October 19, 2016

Entitlement - extending our contract

I've got a few examples of things I need to get off my mind - of things where people somehow assume it is someone else's duty to do work for them.

The word on my mind is entitlement. It really puzzles me how there are so many of these cases where someone assumes they have free access to my time, just because they had some access to my thoughts in a way I chose to make available. It leads to what I perceive as a lack of thoughtfulness in demanding services, as if you were entitled to them. And it puzzles me why I think of this so differently, taking it as a fact that I should appreciate what I'm getting from "free" services, and that I might actually need to make the exchange bidirectional in some way if I have specific requirements to fulfill my personal needs.

The Uninvited Debates

The first place where entitlement comes into play is the idea of debates - whenever, wherever. When you say something and someone questions you, that someone is somehow *entitled* to your answer. Not that I would have the free choice of giving that answer in the spirit of dialogue and mutual learning, but that I owe people an answer and an explanation.

I love the idea that my time is mine. It's mine to control, mine to decide on, mine to invest. And investing in a debate (from my perspective) means that I get to choose which debates I stop early and which ones I continue further. And it's not about fear of the other party - it's awareness of the rathole that does nothing but waste our time.

The Burden of Proof

So I wrote a book. So it's kind of obvious that Mob Programming and Mob Testing are close to my heart. The thing that puzzles me is the people who feel that, for *evangelizing* something this wasteful (in their view), I now need to start a research project or share private company data with numbers to prove mobbing is a good use of time.

I'm happy to say it's a thing you either believe in or not, and that successes with it will most likely be contextual. I also say that in my experience, it made no sense to me before I tried it. None of the rational arguments anyone could have offered would have convinced me.

There's a lot of research on pair programming. Yet, I see most people saying it can't work. I welcome anyone to do the research and come to whatever conclusion they come to, but I'm not planning on setting that up. Again, my time, my choices. Writing a book on something isn't a commitment to having answers to all the questions in the world.

I also find these labels interesting. I've been told I'm an evangelist (for mob programming) and a leader (for testing). I label myself as a sharing practitioner. And my label is what drives my time commitments, not the labels other people choose for me.

The Conference Requirement

I speak at conferences. A lot. And sometimes I run into conferences that feel that, by giving me the space to speak, they are entitled to a lot of services, and to requirements on how those services are delivered.

It's not enough that these conferences often don't pay for the expenses, meaning you *pay to speak*. In addition, they can have very specific requests. My favorite example of something I don't want to do is using the conference template on anything beyond the title slide. It's a lot of work moving elements around, and that work isn't exactly something I would love to volunteer my time for. Reserving the right to change *my slides* is another. I'm fine with removing ads and obscenities, but asking for full editing rights and requiring my compliance with changes per feedback sounds to me like I shouldn't be speaking in the first place.

We're not entitled to free services. Sometimes we're lucky to get them. Seeing paid services go down, I'm reminded that we are not entitled to those either. We're lucky to have things that are good. Lucky to have people who work with us and share with us.

Saturday, October 15, 2016

Two testers, testing the same feature episode 2

There are two testers, with a lot of similarities but also a lot of differences. Tester 1 focuses on automation. Tester 2 focuses on exploration. And they test the same feature.

And it turns out, they collaborate well, and together they can be the super-tester people seem to look for. They pay attention to different things. They find different things (first). And when that is put together, there's a good foundation for testing the feature, both now and later.

Tester 1, focusing on automation, makes slow progress adding automation scripts and building coverage for the feature. Any tester with unfinished software to automate against would recognize her struggles. As she deeply investigates a detail, she finds (and reports) problems. As her automation starts to be part of regular runs, she finds crashes and peculiarities that aren't consistent, warranting yet more investigation (and reports). The focus on detail makes her notice inconsistencies in decision rules, and when the needed bits are finally available, not only can the other automators reuse her work directly, but she can also easily scale to volume and numbers.

Tester 2, focusing on exploration, has also found (and reported) many bugs, each leading to insights about what the feature is about. She has a deep mind map of ideas, done and still to do, and organizes it into a nice checklist that helps tester 1 find better ways of automating and adds to the understanding of why things behave as experienced. Tester 2 reports mistakes in the design that will cause problems: omissions of functionality that have in the past (with evidence) been issues relevant customers would complain about, but also functionality that will prove useful when things fail in unexpected ways. Tester 2 explores the application code to learn about the lack of use of common libraries (more testing!) and about placeholders, only to learn that the developer had already forgotten about them. Tester 2 also personally experiences the use of the feature, and points out many things about the experience of using it that result in changes.

Together, tester 1 and 2 feel they have good coverage. And looking forward, there is a chance that either one of them could have ended up in this place alone just as well as together. Then again, that is uncertain.

One thing is for sure. The changes identified by tester 2 early on are the ones that seemed most relevant early on, leaving more time for implementing the missing aspects. The things tester 1 contributed could have been contributed by the team's developer without a mindset shift (other than a change of programming language). The things tester 2 contributed would have required a change in mindset.

The project is lucky to have the best of both worlds, in collaboration. And the best of it all is the awesome, collaborative developer who welcomes feedback, acts on it in a timely fashion, and greets all of it with enthusiasm and curiosity.

Tuesday, October 11, 2016

The three ways to solve 'Our Test Automation Sucks' in Scrum

Scrum - the idea of working in shorter increments. The time frame could be a month, and when you struggle with a month, you try two weeks. Or even one week. But still, there's the idea of plan, do, and retrospect.

When we work in short increments, a common understanding is that moving fast can make us break things. And when things could break, we should test. And with the short cycles, we rely on automation as if it were our lifeline. But what if our test automation sucks? Is there no hope?

Option 1. Make it not suck.

I would love this option. Fix the automation. Make it worthwhile. Make it work.

Or, like someone advised when I hinted at troubles with automation: hire someone better. Hire a superstar.

No matter what you need to do to make it not suck, do it. With a lot of things to test, there's a lot of fixing to do if a lot of it sucks. And what sucks might just be the testability of the application. So don't expect an overnight change.

Also, don't give up. This is the direction you will go. But it might not be quick enough to save you.

Option 2. Freeze like crazy.

This option seems to be the one people resort to, and it is really an antipattern. It feels like the worst of both worlds. You slow down your development to make time for your testing. You test, you fix, you despair. And you repeat this again and again, since while the mainline is "frozen", some work gets bottled up somewhere, just to cause a big mess when unfreezing takes place.

Freezing brings in the idea that change is bad now that we need to fix things. Hey, maybe change in a way that breaks things is the bad thing, and making developers wait isn't exactly improving things.

Let go. We're not the gatekeepers, remember. Freezing is, a lot of the time, gatekeeping. Would there be a safe-to-fail way to get to the lesson of letting go?

Option 3. Do continuous releases with exploratory testing

I've worked with options 1 and 2 long enough to know that while we work to make option 1 a reality, there's a feasible option in the meantime. What if we only put things into main that can be released right now?

What if, instead of thinking of programming as the only manual task, we realized that testing is one too? Couldn't we find a way not only to program but also to test before we merge our changes into the mainline?

I've lived with option 3 for a few years (with gradually less sucky automation), and I'm having a hard time seeing why anyone would choose to work any other way. This basically says: stop doing Scrum. Do a feature at a time, and make your features small. Deliver them all the way through the pipeline.

Continuous Delivery without automation is awesome. With automation, it gets even better. But the exploratory part (the "manual" thinking work, just like programming the changes) isn't going away any time soon.

An Old Story of a Handoff Test

It was one of those projects where we were building a significant system with a contractor. Actually, all software development was done by contractors, and on the customer side we had a customer project manager, and then the need to set up a little acceptance testing project at the end of it all.

Acceptance testing was supposed to be 30 days at the end of the whole development effort. If the thing to be delivered was super big, you might have several rounds of deliveries. So it was in this particular one.

As the time of acceptance testing approached, preparations were in full swing. No early versions of the software were made available. A major concern was that once the 30 days of testing start, there’s no return. You test, you get fixes, and you accept when you have no fixes pending. If the quality is bad enough to block testing, you’re not well off.

The state-of-the-art approach for dealing with the risk that bad quality would block your testing, and thus eat away your test time, was to set up a handoff test just before the testing would start. It served a couple of purposes, both about confidence:
  • the system to test was properly installed so that testing could happen
  • we’re not wasting our specialists’ time on work the contractor was hired to do

For a typical handoff test, you needed to define your tests in advance and send the documentation to the contractor at least a week before the day of the handoff test. And so we did: we fine-tuned and tailored our tests to be prepared for the big day.

As the big day came, we all got together in one location to test. We executed the tests we had planned, logged bugs, and were in for a big surprise.

The contractor’s project manager and test manager rejected all the reports. All of them. They reviewed them against the test cases as they read them, forged in iron: “You couldn’t find this problem with exactly these steps and these steps alone”. They did not claim the problems weren’t real. They rejected them based on the test cases.

Some hours (and arguments) later, we were back on track and the real bugs were real bugs.

This experience just popped back from my memories as I was reading about Iron Scripts, where deviation isn’t allowed. I can just say that I’m so lucky not to have seen any of this in … about 6 years. I’m sure my past is still the current struggle for someone.

Sunday, October 9, 2016

Details into Mob Exploratory Testing

I love exploratory testing, and I have a strong belief in the idea that exploration can take many paths forward and still end up as great testing. Freedom of choice for the tester is a relevant thing, and I've grown to realize I dislike guidelines such as "find the happy path first" when exploring.

Surely, finding the happy path is not a bad idea. It helps you understand what the application is about and teaches you to put the priority of your bugs into context. It gives you the idea of "can this work" before you go digging into all the details that don't.

I've had to think about the freedom of choice more and more as I'm doing exploratory testing with a mob. While I alone can decide to focus on a small piece (understanding that I don't yet know the happy path and the basic use case), people who join a testing mob are not as aware of the choices they are making. People in the mob might need the frame of reference the happy path gives in order to collaborate. For me, each choice enables something but also leaves something out. Playing with the order in which I go about finding things out can be just as important for my exploration as getting the things done in the first place.

For example, I often decide to postpone reading about things and just try things out without instructions, recognizing that documentation will create expectations I care about. I want to assess the quality of the experience of use both without and with documentation, and unseeing is impossible. Yet, recognizing that reading the documentation matters, I can look at the application later too, trying to think of things (with my mind map there to support me) while simulating the old me who had not read the documentation.

In the latest mob I led, I ended up with stricter facilitation. I asked the questions "what will you do next?" and "what are you learning?" much more than before, and enforced a rule of making quick notes of the agreements and learnings in the mind map.

When the group got stuck thinking about the proper phrasing of a concept in the mind map, or the location of an idea, I noticed myself referring to rules I've learned around mobbing on code. Any name works; we can make it better later, "just call it foo" and learn more to rename it. Any place in the mind map works; we can rearrange as our understanding grows, and we don't need to do it at the time we know the least.

Finally, I was left thinking about a core concept of mobbing around code: intentional programming. The shared intention of what we're implementing, working in a way where the intention does not need to be spoken aloud, because the code shows it. Test-driven development does this in code, as you first define what you'll be implementing. But what does it in mob exploratory testing?
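
To make the comparison concrete, here's a minimal pytest-style sketch of intent-first in code. The example (round_price and its rule) is my own hypothetical illustration, not anyone's real codebase: the test states the intention before the implementation exists.

```python
# Intent first: in TDD, the test declares what we are about to implement,
# before the implementation exists. (Hypothetical example of mine.)
from decimal import Decimal, ROUND_HALF_UP

def test_rounds_price_to_full_cents():
    # This test fails until round_price is written; it IS the stated intent.
    assert round_price(1.005) == 1.01

def round_price(price: float) -> float:
    # Only then comes the implementation that makes the stated intent true.
    return float(Decimal(str(price)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))
```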

Working from a charter is intentionally open-ended and may not give the group a shared intention. Even a charter like "Explore Verify(object) with different kinds of objects and contents using the Naughty Strings list to find inconsistent behaviors" isn't enough to keep the group on a shared intent. The intent needs to be worked out in smaller pieces, at the level of individual test ideas.
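
As a sketch of what one of those smaller pieces could look like in code: the verify function, the myapp module, and the blns.txt file name below are all hypothetical (the Big List of Naughty Strings itself is a real public list). Each naughty string becomes one explicit, shared test idea.

```python
# One small, explicit piece of the charter above (all names hypothetical).
import pytest

from myapp import verify  # hypothetical module holding the Verify(object) under test

def load_naughty_strings(path="blns.txt"):
    # The Big List of Naughty Strings: one string per line, '#' lines are comments.
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f
                if line.strip() and not line.startswith("#")]

@pytest.mark.parametrize("naughty", load_naughty_strings())
def test_verify_is_consistent_with_naughty_content(naughty):
    # The shared intent, one idea at a time: for any content, verify()
    # should return a clean True/False, never crash or behave inconsistently.
    assert verify({"content": naughty}) in (True, False)
```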

Looking at this group, they often generated a few ideas at a time. Making them write those down and execute them one by one seemed to work well in keeping them coherent. So it looks like I had not given enough credit to the mind map as a source of shared intent for group exploration.