Wednesday, May 24, 2017

Impact of Test Automation in my Everyday Worklife

I'm not particularly convinced of the testing our team's test automation does for us. The scenarios in the automation are somewhat simple, yet take extensive time to run. They are *system tests*, and I would very much prefer seeing more around the components the team is responsible for. System tests often fail for dependencies outside the team's control.

I've been actively postponing really doing something about it, and today I stopped to think about what the existence of even this minimal automation has meant for me.

The better test automation around here seems to find random crashes (with logs and dumps that enable fixing), but that is really not the case with what I'm seeing up close.

The impact the existence of test automation has had on my everyday work life is that I can see at a glance if the test systems are down, so I don't need to pay attention to installing regularly just to know the product still installs.

So I stopped to think: has this really changed something for me, personally? It has. I feel a little less rushed with my routines. And I can appreciate that.

Tuesday, May 9, 2017

Bias for action


'Bias for Action'. That's a phrase I picked up ages ago, yet one that has been keenly on my mind for some time now.

It means (to me) that if I can choose between planning and speculating versus doing something, I should rather be doing something. It's in the work we do that we discover the work that needs doing.

There are things I feel need doing, and I notice myself trying to convince others to do them rather than doing them alone. I notice being afraid of going in and starting to restructure our test automation into a shape that would make more sense.

Without bias for action, I procrastinate. I plan. I try to figure out a way of communicating. I don't get anything done.

With bias for action, I make mistakes and learn. I make myself more vulnerable and work with my fears of inadequacy.

It's been such an important thing to remember: things don't change without changing them. And I can be a person to change things I feel strongly for.

Thursday, April 20, 2017

Dear Developer

Dear Developer,

I'm not sure if I should write you to thank you for how enthusiastically you welcome feedback on what you've been working on and how our system behaves, or if I should write you to ask you to understand that this is what I do: provide you with actionable feedback so that we can be more awesome together.

But at least I want to reach out to ask you to make my job of helping you easier. Keep me posted on what you're doing and thinking, and I can help you crystallize what threats there might be to the value you're providing, and find ways to work with you to have the information available when it is the most useful. What I do isn't magic (just as what you do isn't magic), but it's different. I'm happy to show you how I think around a software system whenever you want. Let's pair; just give me a hint and I'll make the time for you.

You've probably heard of unit tests, and you know how to get your own hands on the software you've just generated. You tested it yourself, you say. So why should you care about a second pair of eyes?

You might think of testing as confirming whatever you think you already know. But there's other information too: there are things you think you knew but were wrong about. And there are things you just did not know to know, and spending time with what you've implemented will reveal that information. It could be revealed to you too, but having someone else there, a second pair of eyes, widens the perspectives available to you and can make the two of you together more productive.

We testers tend to have this skill of hearing the software speak to us and hint at problems. We are also often equipped with an analytic mind to identify things you can change that might make a difference, and the patience to try various angles to see if things are as they should be. We focus our energies a little differently.
 
When the software works and provides the value it is supposed to, you will be praised. And when it doesn't work, you'll be the one working late nights and stressing over the fixes. Let us help you get to the praise and avoid the stress of long nights.

You'd rather know and prepare. That's what we're here for. To help you consider perspectives that are hard to keep track of when you're focused on getting the implementation right.

Thank you for being awesome. And being more awesome together with me.

     Maaret - a tester

Time bombs in products

My desk is covered with post-it notes of things that I'm processing, and today, I seem to have taken a liking to doodling pictures of little bombs. My artistic talent did not allow me to post one here, but just speaking about it lets you know what I'm thinking of. I think of things that could be considered time bombs in our products, and ways to speak of them better.

There's one easy and obvious category of time bombs while working in a security company, and that is vulnerabilities. These typically have a few different parts in their life. There's the time when no one knows of them (that we know of). Then there's the time when we know of them but others don't (that we know of). Then there's the time when someone other than us knows of them and we know they know. When that time arrives, it really no longer matters much if we knew before or not, but fixing commences, stopping everything else. And there are times when we know, and let others know, as there is an external mitigation / monitoring that people could do to keep themselves safe. We work hard to fix the things we know of before others know of them, because working without external schedule pressure is just so much nicer. And it is really the right thing to do. The right thing isn't always easy, and I love the intensity of analysis and discussion that vulnerability-related information causes here. It reminds me of the other places where vulnerabilities were time bombs we just closed our eyes to, and even publishing them wouldn't make assessing them a priority without a customer escalation.

Security issues, however, are not the only time bombs we have. Other relevant bugs are just the same. And with other relevant bugs, the question of timing sometimes becomes harder. For things that are just as easy to fix in production as while developing an increment, timing can become irrelevant. This is what a lot of the continuous deployment approaches rely on - fast fixing. Some of these bugs, though, when found, have already caused significant damage. Half of a database is corrupted. Communication between client and server has become irrecoverable. A computer fails to start unless you know how to go in through the BIOS and hack registries so that starting up is again possible. Bugs with impacts other than inconvenience are ones that can bring a business down or slow it to a halt.

There are also the time bombs of bugs that are just hard to fix. At some point, someone gets annoyed enough with a slow website, and you've known for years that fixing that one means a major architectural change.

A thing that seems common with time bombs is that they are missing good conversations. Good conversations tend to lead in the right direction on deciding which ones we really need to invest in, right now. And for those not now, what is the time for them?

And all of this after we've done all we can to avoid having any in the first place. 


Wednesday, April 19, 2017

Test Communication Grumpiness

I've been having the time of my life exploratory testing a new feature, one that I won't be writing details on. I'm having the time of my life because I feel this is what I'm meant to do as a tester. The product (and the people making it) are better because I exist.

It's not all fun and happiness though. I really don't like the fact that yet again, the feedback I'm delivering happens later than it could. Then again, judged by the ability, interest and knowledge to react to it, it feels very timely.

There are three main parts to the "life of this feature". First it was programmed (and unit tested, and tested extensively by the developer). Then some system test automation was added to it. I'm involved in the third part of its life, exploring it to find out what it is and what it should be, from another perspective.

As the first and second parts were done, people were quick to communicate it was "done". And if the system test automation were more extensive than it is, it could actually be done. But it isn't.

The third part has revealed functionalities we seem to have but don't. Some we forgot to implement, as there was still an open question regarding them. It has revealed inconsistencies and dependencies. And in particular, it has revealed cases where the software as we implemented it just isn't complicated enough for the problem it is supposed to be helping with.

I appreciate how openly people welcome the feedback, and how actively things get changed as the feedback emerges. But all of this still leaves me a little grumpy about how hard communication can be.

There are tasks that we know of, like knowing we need to implement a feature for it to work.
There are tasks that we know will tell us of the tasks we don't know of, like testing a feature.
And there are the tasks that we don't know of yet, but they will be there.

And we won't be done before we've also addressed the work we just can't plan for.

Wednesday, March 29, 2017

Test Planning Workshop has Changed

I work on a system with five immediate teams, and at least another ten I don't care to count due to organizational structures. We had a need for some test planning for the five immediate teams. So the usual happened: a calendar request to get people together for a test planning workshop.

I knew we had three major areas where programmer work is split in interesting (complicated) ways across the teams. I was pretty sure we'd easily see the testing each of us would do through the lens of responding to whatever the programmers were doing. That is, if one of our programmers created a component, we would test that component. But integrating those components with their neighbors and eventually into the overall flows of the system, that was no longer obvious. This is a problem I find not all programmers in multi-team agile understand, and the testing of a component easily gets focused on whatever the public interface of the team's component is.

As the meeting started, I took a step back and looked at how the discussion emerged. First, there was a rough architectural picture drawn on the whiteboard. Then arrows emerged, explaining how the *test automation system* worked before the changes we are now introducing - a little history lesson to frame the discussion. And from there, all together, we very organically talked through chains and pairs and split the *implementation work* across the teams.

No one mentioned exploratory testing. I didn't either. I could see some of it happening while creating the automation. I could see some of it not happening while creating the automation, but being something I would rather have people focus on after the automation existed. And I could see some of it, the early parts, as things I would personally do to figure out what I didn't yet even know to focus on as a task or a risk.

Thinking back 10 years, to the time before automation was useful and extensive, this same meeting happened in such a different way. We would agree on who leads each feature's testing effort, and whoever led would generate ways for the rest of us to participate in that shared activity.

These days, we first build the system to test the system, explore while building it and then explore some more. Before, we used to build a system of mainly exploration, and tracking the part that stays was more difficult.

The test automation system isn't perfect. But the artifact that we, the five teams, can all go to and see in action, changes the way we communicate on the basics.

The world of testing has changed. And it has changed for the better.

Tuesday, March 28, 2017

World-changing incrementalism

As many exploratory testers do, I keep going back to thinking about the role of programming in the field of testing. At this point of my career, I identify both as a tester and a developer and while I love exploratory testing, maintainable code comes close. I'm fascinated by collaboration and skills, and how we build these skills, realizing there are many paths to greatness.

I recognize that in my personal skills and professional growth path there have been things that really made me more proficient, but also things that kept me engaged and committed. Pushing me to do things I don't opt into myself is a great way of not keeping me engaged and committed, and I realize in hindsight that code had that status for me for a long time.

Here's still an idea I believe in: it is good to specialize in the first five years, and generalize later on. And whether it is good or not, it is the reality of how people cope with learning things, taking a few at a time, practicing and getting better, having a foundation that sticks around when building more on it.

If it is true that we are in a profession that doubles in size every five years, it means that in a balanced group half of us have less than five years of experience. Instead of giving the same career advice to everyone, I like to split my ideas of advice on how to grow between these two halves: the ones coming in and getting started vs. the ones continuing to grow in contribution.

I'm also old enough to remember the times when I could not get to test the code as it was created, but had to wait months for what we knew as a testing phase. And I know you don't need to be old at all to experience those projects; there are still plenty of those to go around. Thinking about it, I feel that some part of my strong feelings about choosing the tester vs. developer path early clearly comes from the fact that in that world of phases, it was even more impossible to survive without the specialization. Especially as a tester, with phases it was hard to timebox a bit of manual and a bit of automation, as every change we were testing was something big.

Incremental development has changed my world a lot. For a small change, I can explore that change and its implications from a context of having years of history with that product. I can also add test automation around that change (unit, integration or system level, whichever suits best) and add to the years of history with that product. I don't need a choice of either-or, I can have both. Incremental gives me that possibility, and it is greatly enhanced by the idea of me not being alone. Whatever testing I contribute to us realizing we need to do, there's the whole team to do it.

I can't go back and try doing things differently. So my advice for those who seek any is this: you can choose whatever you feel like choosing, the right path isn't obvious. We need teams that are complete in their perspectives, not individuals that are complete. Pick a slice, get great, improve. And pick more slices. Any slices. Never stop learning.

That's what matters. Learning.

Changing Change Aversiveness

"I want to change the automatic installations to hourly over the 4-hour period it has been before". I suspected that could cause a little bit of discussion.

"But it could be disruptive to ongoing testing", came the response. "But you could always do it manually", came a proposal for alternative way of doing things.

I see this dynamic all the time. I propose a change and meet a list of *but* responses. And at worst they end up with *it depends* as no solution is optimal for everyone.

In mob programming, we have been practicing the idea of saying yes more often. When multiple different ways of doing something are proposed, do them all. Do the least prominent one first. And observe how each of the different ways of doing things teaches us not only about what worked but about what we really wanted. And how we will fight over abstract perceptions without actual experience, sometimes to the bitter end.

This dynamic isn't just about mob programming. I've ended up paying attention to how I respond in ways that make others feel unsafe in suggesting changes, after I first noticed the pattern of me having to fight for change that should be welcomed.

Yes, and... 

To feel safe to suggest ideas, we need to feel that our ideas are accepted, even welcome. If all proposals are met with a list of "But...", you keep hearing no when you should hear yes.

The improv rule of "Yes, and..." turns out to have a lot of practical value. Try taking whatever the others suggest and stating your improvement proposal as a step forward, instead of as a step blocking the suggestion.

Acknowledge the other's experience

When you hear a "But...", start to listen. Ask for examples. When you hear of their experiences and worries, acknowledge those instead of trying to counteract them. We worry for reasons. The reasons may be personal experiences, very old history or something that we really, justifiably, all should worry about. The perception is very real to whoever is experiencing the worry.

A lot of times I find that just acknowledging that the concern is real helps move beyond the concern.

Experiment

Suggest trying things differently for a while. Promise to go back or try something different if this change doesn't work. And keep the promise. Take a timebox that gives an idea a fighting chance.

People tend to be more open to trying things out than making a commitment on how things will be done in the long term. 

Monday, March 27, 2017

The Myth of Automating without Exploring

I feel the need to call out a mythical creature: a thinking tester who does not think. This creature is born because of *automation*. That somehow, because of the magic of automation, the smart, thinking tester dumbs down, forgets all the other activities around and just writes mindless code.

This is what I feel I see when I see comparisons of what automation does to testing, most recently this one: Implication of Emphasis on Test Automation in CI.

To create test automation, one must explore. One must figure out what it is that we're automating, and how we could consistently check the same things again and again. And while one seeks information for the purposes of automation, one tends to see problems in the design. Creating automation forces our focus into detail, and this focus on detail that comes naturally with automation sometimes needs a specific mechanism when exploring freeform. Or rather, the mechanism is the automation-thinking mindset.

I remember reading various experience reports of people explaining how all the problems their automation ever found were found while creating the automation. I've had that experience in various situations. I've missed bugs by choosing not to automate, because the ways I chose to test drove my focus to different areas or concerns. I've found bugs that leave my automated tests in an "expected fail" state until things get fixed.

The discussion around automation is feeling weird. It's so black and white, so inhumane. Yet, at the core of any great testing, automated or not, there is a smart person. It's the skills of that person that turn the activity into useful results.

Only the worst of the automators I've met dismiss the bugs they find while building the automation. It saves them time, surely, but misses a relevant part of the feedback they could be providing.


A Regular Expression Drive-By

I was working strong-style paired on my team's test automation code last week, to assess candidates to help us as consultants for a short timeframe of ramping up our new product capabilities. The mechanism of "for an idea to go from your head to the computer, it must go through someone else's hands" lends itself well to assessing both skills and collaboration. At first, I would navigate on the task I had selected - cleaning up some test automation code. But soon, I would hand navigation over to my pair and be the hands writing the changes.

There was this one particular line of code that caught my eye in both sessions and was emphasized by the reactions of my pairs: "This should have a code comment on it", "Ehh, what does this do? I have no idea!". It was a regular expression verifying whether a message should be parsed as passed or failed, but the selection of the sought-for keyword was by no means obvious.

I mentioned this out loud a few days later, just to seek confirmation that instead of the proposed code comment, it should really just be captured in a convenience method with a helpful name. But as we talked about the specific example, we also realized that it would make sense to add a unit test on that regular expression to explain the logic just a bit more.

The unit test would start failing if for any reason the messages we use to decide on pass/fail were no longer available, and it would be a more granular way of identifying where the problem was than reading the logs of the system test.
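
To sketch what I mean (the message format, keyword and names here are invented for illustration, not our actual code), the regular expression gets a home in a convenience method with a helpful name, and a small unit test documents the logic:

    import re
    import unittest

    # Hypothetical message format: real log lines and keywords would differ.
    RESULT_PATTERN = re.compile(r"\bstatus:\s*(passed|failed)\b", re.IGNORECASE)

    def parse_result(message):
        """Decide pass/fail from a log message, or None if the message doesn't say."""
        match = RESULT_PATTERN.search(message)
        return match.group(1).lower() if match else None

    class ParseResultTest(unittest.TestCase):
        def test_recognizes_a_passed_message(self):
            self.assertEqual("passed", parse_result("12:00:01 run #42 status: PASSED"))

        def test_recognizes_a_failed_message(self):
            self.assertEqual("failed", parse_result("status: failed (timeout)"))

        def test_does_not_guess_on_unrelated_messages(self):
            self.assertIsNone(parse_result("installation started"))

    if __name__ == "__main__":
        unittest.main()

If the messages ever change shape, a test like this fails on its own and points at the parsing, instead of leaving us to dig through system test logs.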

A regular expression drive-by made me realize we should unit test our system tests more. 

Friday, March 24, 2017

Find the knobs and turn them

"What happened at work?" is a discussion I get to have daily. And yesterday, I was geeking out on installing and configuring a Windows Server as a domain controller, just so that I would have one more route to put things on a list that our product was supposed to manage.

Instead of talking about the actual contents, the discussion quickly moved to the meta level through pointing out that a lot of my stories of what I do for work include finding this button, lever or knob, and twisting, pushing, pulling, even intentionally isolating it. I find things that give me access to things others don't pay attention to.

"I'm sure a developer did not take two hours to set the server up just for this test", I exclaimed. And continued with "while I was setting this up, I found four other routes to tweak that list." It was clear to me that if there was anything interesting learned from the 1st route I was now working on, the four others would soon follow.

Think about it: this is what we do. We find the knobs of the software (and build those knobs to be available in the system around our software) just so that we see, in a timely fashion, what happens when they are turned.

It turns out you may find some cool bugs thinking like this.

From appreciation of shallow testing towards depth

"So, Maaret Pyhäjärvi is an extraordinary exploratory tester. ... She took ApprovalTests as a test target. She's like "I want to exploratory test your ApprovalTests" and I'm like "Yeah, go for it", cause it's all written test first and it's code I'm very proud of. And she destroyed it in like an hour and a half. She destroyed it in things I can't unit test. One of the things she pointed out right away was "Your documentation is horrible. You're using images that you can't even copy and paste the examples from". And I'm, like, "yeah, that's true". And then she's like "Look at the way you write this API, it's not discoverable". And that's a hard thing for me to deal with because for me, I know exactly where the API is. One of the things I constantly struggle with is beginner mindset. And it's so easy to lose that and then never appreciate it in the beginning. You're like "no, idiot, you're supposed to do it this way". So this idea that my names are not discoverable is not something I could unit test, but she was able to point it out right away. And after pointing it out, and sort of arguing a little bit, she did this thing where she... She did it in a session. I attended the session, but everybody is doing mob exploratory testing and now I'm watching like 10 people not being able to find a reporter. There's nothing like watching people use your product and not being able to, to make you appreciate you've done it wrong. I was like "oh, this is so painful, I never want to see that again".

What I found is that it used to be the case that we would write code and it was horrible. It was buggy and just so full of problems. And there were so many bugs where what we intended to occur wasn't what was happening, so that all that testing was, was checking that what the programmer intended was what the code did. This is all we had time for. As we started doing unit testing and automated testing, and test first, those problems started to go away. So now what the code does is what we intend it to do. And then it turns out there is this entire other world of: is what you intended what you want? And it turns out, that's still a remarkably complex world. So you don't want to spend your time fighting with what I intended is not what the code does, so you need the unit tests for that. But we also need this much bigger world of: is what I intended what I actually want? What are the unforeseen consequences of these rules? That starts moving to exploratory testing and monitoring. Which is effectively exploratory testing via your users."

The story above is a great story about how one programmer learned there was more to testers' contributions than he could have seen. It's great hearing Llewellyn pass hints to other programmers in a meetup, such as yesterday: "Your testers know of more bugs than what they tell you. Even though it feels they tell you a lot, they still know more. Ask them, don't just wait for them to tell you."

Some of the emphasis in the above text is there for adding more to the story below.

1,5 Hours is Shallow Testing and Excludes Earlier Learning

While a tester can in "just an hour and a half" get you to rewrite half of your API, there's more depth to that testing than just the work immediately visible. Surely, when I started testing ApprovalTests, I already knew what it was supposed to be for, and the hours in the background getting familiarized count in what I could do. I had ideas on what a multi-language API in IDEs should be like, and out of my 1,5 hours, I still used half an hour on two research activities: I googled what a great API is like, and I asked user-perspective questions of Llewellyn to find out what he thinks ApprovalTests Approvals and Reporters do - collecting claims.

With the claims in particular, and with consistency across languages taking into account language idiosyncrasies, I could do so much more with deep exploratory testing he has not yet seen. That's what I do for my developers at work.

Things You Can and Can't Unit Test For

While discoverability of an API in an IDE does not strike one as an idea to unit test for, after you have that insight, it is something you can change your unit tests to include. Your unit tests wouldn't notice if the API turned hard to discover again, but they would give you updated control over what you now intend it to be.

The reason I write of this is that I find that a lot of times when I find something through exploration, I have a tendency to tell myself that this insight couldn't be a unit test because I found it in the system context. After an insight exists, we could do a lot more on turning those insights into smaller scale and avoid some of the pain that at least I am experiencing through system-level test automation. We need to understand better (through talking about it) what the smallest possible scope is for finding particular problems.

When Making a Point, Try Again

The story above hints at arguments over the API, which were much less arguments than discussions on what is practical. Changing half of your API after you have thousands of users isn't exactly a picnic in the park, and as a tester, I totally get that many organizations don't really care about that feedback on discoverability when it is timed wrong - get your testers involved before your users fix your world.

I would believe I got my message through with Llewellyn already telling of my experience. But surely, I do have a tendency to advocate for the bugs I care about, and getting an experience with your real users trying to use your software is a powerful advocacy tool.

As an exploratory tester, I could write a chapter about ways I've tried advocating for things that my devs don't react to, just to be sure we understand what we don't fix. Perhaps that's what I do next for my exploratory testing book on Leanpub.

Where Most of the Software World Is

Getting to work with developers who do test-driven development and test with the commitment Llewellyn shows is rare. When in the second part of the excerpt he talks about testing for what the programmer intended, I can't help but realize that out of the hundreds of developers I've had the pleasure of working with, I can count the ones who do TDD on the fingers of one hand.

Let's face it. The better of us unit test at all. And even that is still not a majority. And generally, most of us still suck at unit testing. Or even if not personally, we know a friend who does.

When I explore, it is a rare treat to have something where the software does *even* what the programmer intended it to. So I often start with understanding that intent through exploring the happy, expected paths. I first have empathy for what the world could be if the programmer was right in what he knew today while implementing this.
But even the TDD-ers I approach with scepticism. Llewellyn's meetup talk yesterday introduced Asserts vs. Approvals, and he had a slide comparing someone's Assert-TDD end result to his Approvals-TDD end result.
He pointed out that the tests on the left (Assert-TDD) missed a bug in the code, with the value 4 being represented as IIII, whereas the test on the right (Approvals-TDD) found that missed bug when run against the other's code.
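
My own rough sketch of the difference, in Python rather than the code on the slide, and with a plain string snapshot standing in for the actual ApprovalTests library: the assert-style test only checks the values its author thought of, while the approvals-style test renders a whole range against a reviewed snapshot, so the odd line for 4 stands out.

    def to_roman(number):
        # Deliberately buggy conversion for illustration: 4 comes out as "IIII".
        numerals = [(10, "X"), (9, "IX"), (5, "V"), (1, "I")]
        result = ""
        for value, symbol in numerals:
            while number >= value:
                result += symbol
                number -= value
        return result

    def test_assert_style_selected_values():
        # Passes: the bug at 4 is never exercised.
        assert to_roman(1) == "I"
        assert to_roman(5) == "V"
        assert to_roman(9) == "IX"

    # A reviewed snapshot of what the output should look like for 1..10.
    APPROVED = "\n".join("%d => %s" % pair for pair in [
        (1, "I"), (2, "II"), (3, "III"), (4, "IV"), (5, "V"),
        (6, "VI"), (7, "VII"), (8, "VIII"), (9, "IX"), (10, "X")])

    def test_approval_style_whole_range():
        # Fails: the received output contains "4 => IIII", which differs from the snapshot.
        received = "\n".join("%d => %s" % (n, to_roman(n)) for n in range(1, 11))
        assert received == APPROVED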

As a tester, I would have been likely to check how the developer tested this. My life would have been a lot simpler reading the approvals file with formatting and scenarios collected. But even if I did not read the code, I would likely have gone for sample values that I find likely to break.

What you usually get in TDD is your best insight. And our shared insight, together, tends to be stronger than yours alone. I tend to generate different insight when my head is not buried in the code.





Wednesday, March 15, 2017

Don't pay to speak, get paid to speak

I strongly believe the world of tech conferences needs to change, and the change I call for is that whoever the conference organizers deem good enough to step on their podium to speak, should not have to pay to speak. And when I talk about paying to speak, I speak of expenses.

In the case of paying travel expenses, and encouraging the cheapest possible travel, there's a second step. When speakers book early, pay them back early. Don't use your speakers as a bank and pay back only after the conference.

I work towards changing these two.

Other people ask for more, and I would love to join them. They ask to be paid to speak. They ask for the time they put into preparing to be compensated. And since the work you do is not preparing the talk - it's becoming the person that gets on that stage - the speaking fees should be relevant.

In paying, a big injustice is when some people get paid differently than others. The injustice of it just gets bigger when conferences give replies like this on paying some but not others.
As a conference organizer, I want to share my perspective.

I set up a conference that:

  1. Pays travel expenses of the speakers. All the speakers.
  2. Profit shares with all the speakers. Keynotes get 5* what a 30-minute slot speaker gets. 
The second happens only if there is profit. And I work hard to make profit. In 2017, I failed. I made losses. 

If I had committed early on to paying my speakers, I would have lost more than the 20k that I lost now. This loss is insignificant, as it is an investment into a great conference (and great it was!) and an investment in making things right in the world of speakers. But imagine if I had a thousand euros to pay to each of my speakers - I would be down 30k more.

What I failed at was marketing. Getting people to learn about the conference. Yet, I feel that whoever came were the right people.


To make marketing easier, famous names help. Some famous names are willing to take the risk of not being paid for their time, and I'm very grateful for that. But others have a fixed price range, paid in advance. When as an organizer you want to invite one like that, you fill the other similar slots with people who are not the same: people who don't insist on being paid fairly. But lying about it is just stupid. The speakers talk. And should talk more in the future.

As an organizer, I'd rather leave out the superstars if the same-fees principle is a problem for them. And honestly, it is a problem for many of our tech superstars. But things change with conferences only if we change them. One conference at a time.

Meeting is not just a meeting

We're sitting in a status / coordination meeting, and I know I don't look particularly happy to be there. The meeting, scheduled at 3 pm, has been lurking on my mind the whole day, and for the last hour before it, I recognize I have actively avoided doing anything I really should be doing. And what I really should be doing is deep thinking while testing. I feel there must be something wrong with me for not being able to start when my insides are seeing the inevitable interruption looming.

It's not just the inconvenient timing, at the seeming end of my day, that has a negative impact on my focus. It's also the fact that I know the meeting is, from my perspective, useless, and yet I'm forced there trying to mask most of my dislike. It drains my energy even further.

In the ten years of looking at agile in practice, one of my main lessons has been that planning the work is not the work. I can plan to write a hundred blog posts, and yet I have not written any of them except for a title. I can plan to test, yet the plan never survives the contact with the real software that whispers and lures me into some cool bugs and information we were completely unaware of while planning.

I love continuous planning, but that planning does not happen in workshops or meetings scheduled for planning. It happens as we are doing the work and learning. And sitting in a team room with insightful other software developers, any moment for planning is almost as good as any other. The unscheduled "meeting" over a whiteboard is less of an interruption than the one looming in my schedules.

I know how I feel, and I've spent a fair deal of time understanding those feelings. I know how to mask those feelings too, to appear obedient and, as a project manager put it, "approach things practically". But the real practice for me is aspiring to be better, and to accommodate people with different feelings around the same tasks.

Planning is not doing the work. But it does create the same feeling of accomplishment. When you visualize the work, you start imagining the work is done. And if you happen to be a manager who sits through meetings day in and day out, a meeting in the schedule isn't as disruptive as it is when you are the one doing the work.

I used to be a tester. Then I became too good to test, and took the role of a manager. I was still good, just paying attention to different things. But the big learning for me came when I realized that to have self-organized teams, as we introduced agile a decade ago in the organization, I was a hindrance. My usefulness as a manager stopped the people from doing the work I was doing. Stepping down, announcing the test manager role gone and just teaching all the work I had been doing to the teams was the best choice I've made.

And it made me a tester again. But this time around, I don't expect a manager to be there. I expect there's a little manager in every one of us, and the manager in others helps me manage both the doer and the manager in me.

The two roles were different for me. And awareness of that keeps me wary of meetings.

Monday, March 13, 2017

A Mob Testing Experience

During my 6 months at the new job, I've managed to do Mob Testing a few times. Basically, the idea is that whenever I sink into a new feature that needs exploring, I invite others to join me for the exploration for a limited time. I've been fascinated with the perspectives and observations of the other testers I've had join me, but these sessions always leave me missing the Mob Testing experiences I had at my earlier place of work. There, not only testers joined (well, there were no testers other than myself), but we did the tasks together with the whole team, having programmers join in.

There's a big difference between mob testing amongst testers (or quality engineers, as we call them) and including your team's developers and even product owners. And the big difference comes from having the people who need to receive the feedback testing is providing share the work.

With 6 months approaching, I'm starting to see that my not-so-subtle hints on a regular basis are not taking the adoption of mob testing / programming further. But it became funny at the point when I taught developers from another organization who started off with the practice, and only through their positive reports did someone relevant enough to push people to try it take initiative. There's an infamous old saying about no one ever being a prophet in their own land, and that kept creeping up in my thoughts - I became part of the furniture, "always been here", surprisingly quickly. And I don't push people to do things they don't opt in to.

But finally, last Wednesday, while my own team did not opt in, the team next door did and invited me to join their experience. With two application developers, two test developers and two all-around test specialists, we took the time to mob for about 5 hours during the day.

The task we were working on was a performance testing task, and the application developers were not in their area of strength. We worked on extending an existing piece of code for a specific purpose, and the idea of the task was available to start our session from. There were a few particularly interesting dynamics.

When in disagreement, do the less likely one first

About half an hour into our mobbing, we had a disagreement on how we would approach extending the code. We just did not agree on what would be the right thing to do as the next step. The two of us who were familiar with the goal of what we were doing had one perspective. And another suggested doing things differently, in a way that in the moment felt like it made little sense to us.

I realized that we were quickly going into discussion mode, convincing the other of what the right thing was - at a time when we really knew the least. The other suggestion might not have sounded like the best idea, so we played a common rule for beginning mobs: "Do the less likely first, do both". Without continuing the discussion, we just adjusted the next step to be the one that the other, in the minority, felt strongly enough to voice.

And it turned out to be a good thing to do in a group. As it was done, the work unfolded in a way that did not leave us missing the other option.

Keep rotating

Between hours 2 and 3, two of the six mob participants needed to step out into another meeting. I was one of those two. For the first two hours, we had rotated on a four-minute timer and pushed the rule of having a designated navigator. As I came back from the meeting, the rotation had fallen off, as the mob had found relevant performance bugs and had two other people join in as lurkers on the side of the table, monitoring the breaking services in more detail. The lurkers did not join the mob, but the work also got split so that the common thread started to hide.

Bringing back rotation brought back the group thread. Yet it was clear that the power dynamic had shifted. The quieter ones were quieter, and we could use some work on the dominating personalities.

But there was one thing I loved observing in the quieter ones: they aced listening, and it showed up as timely contributions when no one else knew where to head next.

Oh Style

The group ended up on one computer with one IDE in the morning and another computer with another IDE in the afternoon. Keyboard shortcuts would fly around and made the different IDEs obvious.

On the order of doing things, there was more disagreement than we could experience and go through in one day. Strong opinions of "my way is the best way" would be best resolved doing similar tasks in different ways, and then having a retrospective discussion of the shared experiences.

And observing the group clean up code to be ready to check in was enchanting. It was enlightening to see a group who have "common rules" turn out not to have common rules after all. Mobbing would really help in figuring out code styles, over the discussions around pull requests.




Thursday, March 9, 2017

A Simple Superpower

There was a problem, and I could tell by the discussions in the hallways. I would hear from one side that the test automation doesn't work, and it will perhaps be fixed later - but that was uncertain. And I would hear from the other side that there's a lot to do, with suspicions of not really having time to address anything outside immediate attention.


I don't have a solution any more than anyone else. But I seem to have something of a superpower: I walk the right people into one space to have a discussion around it. And while the discussion is ongoing, I paraphrase what has been said to check if I heard right. I ask questions, and make sure silence does not get interpreted as agreement.

There's magic in (smart) people getting together to solve things. But it seems that bringing people together is sometimes a simple superpower. Dare to make room for face-to-face communication. If two people are enough to address something, great. But recognizing when three is not a crowd seems to provide a lot of benefits.

If you can use 15 minutes on complaining and uncertainty, how about walking around to have a practical, solution-driven discussion? It's only out of our reach if we choose so.

Tuesday, March 7, 2017

Testing in a multi-team setting

There's a lovely theory of feature teams - groups of people working well together, picking up an end-to-end feature, working on a shared code base, and as the feature is done (as in done done done, as many times done as you can imagine), there's the feature and tests to make sure things stay as they were left.

Add multiple teams, and the lovely theory starts shaking. But add multiple teams over multiple business lines, and the shakiness is more visible.

Experiencing this as a tester makes it obvious. I work on one business line and the other business line is adding all these amazing features. If the added feature was also built and tested from my business line's perspective, it would be ideal.

The ideal breaks on a few things:
  • lack of awareness of what the other business line is expecting and needing, and in particular, that some of the stuff (unknown unknowns) tends to only be found when exploratory testing
  • lack of skill on exploratory testing to do anything beyond "requirements" or "story"
  • team level preference to create test automation code only to match whatever features they are adding
I've been looking at what I do and I'm starting to see a pattern in how I think differently than most people (read: programmers) in my team. When I look at the work, I see two rough boxes. There's the feedback that I provide for the lovely programmers in my team (testing our changes / components) and there's the feedback I provide for the delightful programmers in other teams (testing their changes in product / system context).

It would be so much easier if everyone in the team shared a scope, but this division of "I test our stuff and other teams' stuff" gets very clearly distinguished when seeking someone to fix what I found. And I find myself running around the hallways meeting people from the other teams, feeling lucky if my feedback was timely and thus a fix will emerge immediately. More often than not, it isn't timely, and I get to enjoy managing a very traditional bug backlog.

Feature teams that can and do think in the scope of systems (over product lines) would help. But in a complex world, getting all the information together may be somewhat of a challenge.

The minimum requirement, though: the test automation should be timely and thus available to whichever team is now making (potentially breaking) changes, without a human messenger in the chain.

Thursday, March 2, 2017

The Awesome Flatness of Teams

For a long time, I've known that benchmarking our practices with other companies is a great way of mutual learning. But a lot of times these benchmarks teach me things that I never anticipated. Today was one of those days, and I wanted to share a little story.

Today, I found myself sitting on Skype facing three people, just as agreed. One of the three introduced themselves as "just a quality engineer", whereas the others had more flashy titles. I also introduced myself as "just a quality engineer". Turns out those words have fascinated me since.

The discussion led me to realize I have not yet really given much credit to how different our team structure is from most places. Our teams consist of people hired as "software engineers" and "quality engineers", and there's somewhat of a history and a rule of thumb on how many of each type you would look for in a team. We share the same managers.

When you grow in a technical role, you move to senior, lead and principal in the same family of roles. And usually the growing means changes in the scope of what you contribute to, still as "just a member of a team".

As a lead quality engineer, I'm not a manager. I'm a member of a team, where I test with my team and help us build forward our abilities to test. With seniority, I work a lot cross-team figuring out how my team could help others improve and improve itself. I volunteer to take tasks that drive our future to a better state. I'm aware of what my team's immediate short term goal is, but also look into finding my contribution to the organization's long term goals.

Our teams have no scrum masters. The product owners work on priorities and clarifications, and they are lovely collaborators for our teams. I'm not allocated technical (quality engineering) leadership, I just step up to it. Just like the fellows next to me.

So I'm "just a tester", as much as anyone ever is just anything. But much of my power comes from the fact that there's no one who is anything more. Everyone steps up. And it's kind of amazing. 

Wednesday, March 1, 2017

Seeing symmetry and consistency

Morning at the office starts off with news of relevant discussions that took place while I was gone. So I find myself standing next to a whiteboard, with a messy picture of scribbled boxes, arrows and acronyms. And naturally, none of it would make sense without a guide.

But with a guide, I quickly pick up what this is about. A new box is introduced. The number of arrows is minimized. The new box has new technology, and I ask some questions to compare and contrast that to the other technologies we're using, to figure out if there's a risk I'd raise right now.

I also see symmetry. There are boxes for similar yet different purposes. Pointing out the symmetry as a thing that makes sense from a testing perspective (I know what to test on the new thing, as it is symmetrical to the old thing) gets approving nods.

I end up not raising risks, but complimenting the choices for symmetry, and the choices of leaving unchanged the boxes that I was expecting might be changed simultaneously just because we can.

There's hope for incremental development.

Tuesday, February 28, 2017

The Lying Developers

The title is a bit clickbait-y, right? But I can't help directly addressing something from UKStar Conference and a talk I was not at, summarized in a tweet:
As a tester, the services I provide are not a panacea for all things wrong with the world. I provide information, usually with a primary emphasis on the product we are building, with an empirical emphasis. Being an all-around lie detector in the world does not strike me as the job I signed up for. Only some of the lies are my specialty, and I would claim that me being "technical" isn't about the core type of lie (I prefer illusion) that I'm out to detect.

If a developer tells me something cannot be fixed (and that is a lie), there are other developers to pick up on that lie. And if they all lie about it together, I need a third-party developer to find a way to turn that misconception into learning about how it is possible after all. I don't have to be able to do it myself, but I need to understand when *impossible* is *unacceptable*. And that isn't technical, that is understanding the business domain.

If a developer tells me something is done when it isn't, the best lie detector isn't going and reading the code. Surely, the code might give me hints of a completely missing implementation or a bunch of todo tags, but trying out the functionality often reveals that and *more*. Why would we otherwise keep finding bugs when we patiently go through the flows that have been peer reviewed in pull requests?

Back in the day, I had a developer who intentionally left severe issues in the code he handed to testing to "see if we notice". Well, we did.

And in general, I'm happy to realize that is as close to systematic lying as I feel I have needed to come.

Conflicting belief systems are not a lie. And testers are not lie detectors; we have enough work on us without even hinting at the idea that we would intentionally be fooling each other.

There are better reasons to be a little technical than the lying-developer fallacy.



Monday, February 27, 2017

A chat about banks

After a talk on mob testing and programming, someone approached me with a question.

"I'm working with big banks. They would never allow a group to work this way. Is there anything you have to say to this?"

Let's first clarify. It's really not my business to say if you should or should not mob. What I do is share that, against all my personal beliefs, it has been a great experience for me. I would not have had the great experience without doing a thing I did not believe in. Go read back my blog on my doubts, how I felt it was the programmers' conspiracy to make testers vanish, and how I later learned that where I was different was more valuable, in a timely manner, in a mob.

But the problem with big banks as such is that there are people who are probably not willing to give this a chance. Most likely you're even a contractor, and proposing this adds another layer: how about you pay us for five people doing the "work of one person"? Except it isn't the work of one person. It's the five people's work done in small pieces so that whatever comes out in the end does not need to be rewritten immediately, and then again in a few months.

Here's a story I shared. I once got a chance to see a bank in distress: they had production problems and were suffering big time. The software was already in production. And as there was a crisis, they did what any smart organization does: they brought together the right people, and instead of letting them work separately, they put them all on the same problem, into the same room. The main difference to mobbing was that they did not really try to work on one computer. But a lot of the time, solving the most pressing problem, that is exactly what they ended up doing.

For the crisis time, it was a non-issue financially to bring together 15 people to solve the crisis, using long hours. But as soon as the crisis was solved, they again dismantled their most effective way of developing. The production problems were a big hit to reputation as well as financially. I bet the teams could have spent some time working tightly together before the problem surfaced. But at that time, it did not feel necessary, because wishful thinking is strong.

We keep believing we can do the same good work individually, one by one. But learning and building on each other tends to be important in software development.

Sure, I love showing the lists of all the bugs developers missed. The project managers don't love me for showing that too late. If the likes of me could use a mechanism like mobbing to change this dynamic, wouldn't that be awesome?

Friday, February 24, 2017

Theories of Error

Some days it bothers me that I feel testers focus more on actions while testing than on thinking about the underlying motivations of why they test the way they do. Since I was thinking about this all the way to the office, I need to write about a few examples.

At a conference a few weeks ago, I was following a session where some testing happened on stage, and the presenter had a great connection speaking back and forth with the audience about ideas. The software tested was a basic tasks tool, running on a local machine, saving stuff into whatever internal format the thing had. And while discussing ideas with the audience, someone from the audience suggested testing SQL injection types of inputs.

The idea of injection is to enter code through input fields to see if the inputs are cleaned up or if whatever you give goes through as such. SQL in particular would be relevant if there was a database in the application, and is a popular quick attack among testers.

However, this application wasn't built on a database. Entering those inputs wouldn't make sense as testing unless there was a bit more of a story around it. As the discussion between the audience member and the presenter remained puzzled, I volunteered the idea of connecting the inputs with an export functionality, if there was one, and assessing the possible error from that perspective. A theory of error was needed for the idea to make sense.
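
As a sketch of what I mean by a theory of error (the export function and the inputs below are hypothetical, not the tool from the session): the injection-style strings earn their place only when we can say where they might get interpreted - here, in whatever later consumes an exported file.

    import csv
    import io

    def export_tasks_to_csv(tasks):
        # Hypothetical stand-in for the tool's export feature.
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow(["title", "notes"])
        for task in tasks:
            writer.writerow([task["title"], task["notes"]])
        return buffer.getvalue()

    # Each input carries a theory of error: where could this text be interpreted as code?
    SUSPICIOUS_INPUTS = [
        "Robert'); DROP TABLE tasks;--",       # if the export is ever imported into SQL
        '=HYPERLINK("http://example.com")',    # if a spreadsheet treats the cell as a formula
        'title,with,commas and "quotes"',      # if the CSV escaping is sloppy
    ]

    def test_export_round_trips_suspicious_titles():
        tasks = [{"title": text, "notes": "note"} for text in SUSPICIOUS_INPUTS]
        exported = export_tasks_to_csv(tasks)
        rows = list(csv.reader(io.StringIO(exported)))
        # The titles should come back exactly as entered, neither mangled nor executed.
        assert [row[0] for row in rows[1:]] == SUSPICIOUS_INPUTS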

Another example I keep coming back to is automation running a basic test again and again. There has been the habit of running the same basic test on schedule (frequently) because identifying all the triggers of change in a complex environment is a complex task. But again, there should be a theory of error.

I've been offered a number of rationales for running automation this way:
  • The basic thing is basic functionality and we just want to see it always stays in action
  • If the other things around it wouldn't cause as many false alarms in this test, it would actually be cheap to run, and I wouldn't have to care that it does not really provide information most of the time
  • When run enough times, timing-related issues get revealed and with repetition, we get out the 1/10000 crash dump that enables us to fix crashes
I sort of appreciate the last one, as it has an actual theory of error. The first two sound most of the time like sloppy explanations.

So I keep thinking: how often can we articulate why we are testing the way we are? Do we have an underlying theory of error we can share and if we articulated it better, would that change the way we test? 

Tuesday, February 21, 2017

It's all in perspective - virtual images for test automation use

I seem to fluctuate between two perspectives on the test automation that I get to witness. On some days (most), I find myself really frustrated with how much effort can go into such a small amount of testing. On other days, I find the platforms built very impressive, even if the focus of what we test could still improve. And in reflection on how others are doing, I lower my standards and expectations for today, allowing myself to feel very happy and proud of what people have accomplished.

The piece that I'm in awe of today is the operating system provisioning system that is at the heart of the way test automation is done here. And I just learned we have open sourced (yet apparently publicized very little) the tooling for this: https://github.com/F-Secure/dvmps

Just a high-level view: imagine spawning 10 000 virtual machines for test automation use on a daily basis, with each running some set of tests. It takes just seconds to have a new machine up and running, and I often find myself tempted to use one of the machines reserved for test automation, as the wait times for the manual testing images are calculated in minutes.

With the thought of perspectives, I'll go do a little more research on how others do this. If you're working at scales like this, I would love to benchmark experiences.

Friday, February 17, 2017

Testing by Intent

In programming, there's a concept called Programming by Intent. Paraphrasing how I perceive the concept: it is helpful not to hold big things in your head but to outline the intent that then drives the implementation.

Intent in programming becomes particularly relevant when you try to pair or mob. If one of the group holds a vision of a set of variables and their relations just in their head, it makes it next to impossible for another member of the group to catch the ball and continue where the previous person left off.

With experiences in TDD and mob programming, it has grown very evident that making intent visible is useful. Working in a mob, when you go to the whiteboard with an example, turn that into English (and refactor the English), then turn it into a test, and then create the code that makes the test pass, the work just flows. Or actually, getting stuck happens more around the discussions at the whiteboard.

In exploratory testing, I find that those of us who have practiced it more intensely tend to inherently have a little better structure for our intent. But as I've been mob testing, I find that we still suck at sharing that intent. We don't have exactly the same mechanisms as TDD introduces to programming work, and with exploratory testing, we want to opt for the sidetracks that provide serendipity. But in a way that helps us track where we were, and share that idea of where we are within the team.

The theme of testing by intent was my special focus in looking at a group mobbing on my exploratory testing course this week. I had an amazing group: mostly people with 20+ years in testing. One test automator - developer with solid testing understanding. And one newbie to testing. All super collaborative, nice and helpful.

I experimented with ways to improve intent and found that:
  • for exploring, shorter rotation forces the group to formulate clearer intent
  • explaining the concept of intent helped the group define their intent better; charters, as we used them, were too loose to keep the group on track with their intent
  • explicitly giving the group (by example) mechanisms of offloading sidetracks to go back to later helped the focus
  • when seeking deep testing of a small area, strict facilitation was needed to keep people from leaving work undone and moving to other areas - the inclination is to be shallow
There's clearly more to do in teaching people how to do it. The stories of what we are testing and why we are testing it this way are still very hard to voice for so many people.

Then again, it took me long, deliberate practice to build up my self-management skills. And yet, there's more work to do.
 

Tuesday, February 14, 2017

The 10 minute bug fix that takes a day

We have these mystical creatures around that eat up hours in a day. I first started recognizing them with discussions that went something like this:

"Oh, we need to fix a bug", I said. "Sure, I work on it.", the developer said. A day later the dev comes back proclaiming "It was only a 10 minute fix". "But it took the whole day, did you do something else?", I ask. "No, but the fix was only 10 minutes".

On the side, depending on the power structure of the organization, there may be a manager drawing confused conclusions from what they pick up of that discussion. They might want to go for the optimistic "10 minutes to fix bugs, awesome" or the pessimistic "a day to do 10 minutes of work".

The same theme continues. "It took us a 2-week sprint for two people to do this feature" proclaimed after the feature is done. "But it took us 2 2-week sprints for two full-time and two half-time people to do this feature, isn't there something off?"

There's this fascinating need for every individual to look good by belittling their contribution in terms of how much time they used, even if that focus on self takes its toll on how we talk about the whole thing.


There's a tone of discussion that needs changing. Instead of looking good through numbers of effort, we could look good through the value in customers' hands. Sounds like a challenge I accept.

Monday, February 13, 2017

Unsafe behaviors

Have you ever shared your concerns on challenges in how your team works, only to learn a few weeks later the information you shared is used not for good, but for evil?

This is a question I've been pondering a lot today. My team is somewhat siloed in skillsets and interests, and in the past few weeks, I've been extremely happy with the new rise of collaboration that we've been seeing. We worked on one feature end-to-end, moving beyond our usual technology silos and perceived responsibility silos, and what we got done was rather amazing.

It was not without problems. I got to hear more than once that something was "done" without it yet being tested. At first it was done so that nothing worked. Then it was done so that simple and straightforward things worked. Then it was done so that most things worked. And there are still a few more things to do to cover scale, error situations and such. A very normal flow, if only the people proclaiming "done" didn't take their assessments too far outside the team, which just makes us all look bad.

Sometimes I get frustrated with problems of teamwork, but most teams I've worked with have had those. And we were making good progress through a shared value item in this.

In breaking down silos, trust is important. And that's where problems can emerge.

When sharing the done / not done and silo problems outside one's immediate organization, you may run into a manager who feels they need to "help" you with very traditional intervention mechanisms. Those traditional intervention mechanisms can quickly bring down all the grassroots improvement achieved and drive you into a panicky spiral.

So this leaves me thinking: if I can't trust that talking about problems we can solve allows us to solve those problems, should I stop talking about problems? I sense a wall building up between customer and delivery organization. Or is there hope of people understanding that not all information requires their action?

There's a great talk by Coraline Ada Ehmke somewhere online about how men and women communicate differently. She mentions how women tend to have "metadata" on the side of the message, and with this, I keep wondering if my metadata of "let us fix it, we're ok" was completely dismissed due to not realizing there is a second channel of info in the way I talk.

Safety is a prerequisite for learning. And some days I feel less safe than others.

Pairing Exploratory and Unit Testing

One of my big takeaways - with a huge load of confirmation bias I confess to - sums up to one slide shown by Kevlin Henney.

[slide image from Kevlin Henney's talk]
First of all, already from the way the statement is written, you can see that this information comes with an element of hindsight: after you know you have a problem, you can in many cases reproduce that problem with a unit test.

This slide triggered two main threads of thought in me.

At first, I was thinking back to a course I have been running with Llewellyn Falco, where we would find problems through exploratory testing, and then take those problems as insights to turn into unit tests. The times we've run the course, we have needed to introduce seams to get to test on the scale of unit tests, even refactor heavily, but all of it has made me a bigger believer in the idea that we all too often try to automate the way we test manually, and as an end result we end up with hard-to-maintain tests.
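
To give a flavour of what introducing a seam means in practice, here's a minimal sketch (hypothetical code, not from the actual course material): a date-boundary bug found while exploring becomes reproducible as a unit test once the clock can be injected.

```python
# A minimal sketch (hypothetical names) of turning an exploratory finding into
# a unit test by introducing a seam: the clock becomes an injectable parameter.

import datetime
import unittest


def trial_days_left(install_date, today=None):
    """Days of trial remaining; 'today' is a seam so tests can control the clock."""
    today = today or datetime.date.today()  # seam: injectable clock
    return max(0, 30 - (today - install_date).days)


class TrialDaysLeftTest(unittest.TestCase):
    def test_expiry_day_found_while_exploring(self):
        # An exploratory session hinted the trial still looked active on its last day.
        install = datetime.date(2017, 1, 1)
        last_day = datetime.date(2017, 1, 31)
        self.assertEqual(0, trial_days_left(install, today=last_day))


if __name__ == "__main__":
    unittest.main()
```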

Second, I was thinking back to work and the amount of, and focus on, test automation at the system level. I've already been realizing that my way of looking at testing through all the layers is a unique one for a tester here (or quality engineer, as we like to call them), and that striving to find the smallest possible scale to test at isn't yet a shared value.

From these two thoughts, I formulated how I would like to think about automation. I would like to see us do extensive unit testing to make sure we build things as the developer intended. Instead of a heavy focus on system-level test automation, I would like to see us learn to improve how the unit tests work and how they cover concerns. And exploratory testing to drive insights on what we are missing.

As an exploratory tester, I provide "early hindsight" of production problems. I'd rather call that insight, though. And it's time for me to explore into what our unit tests are made of.

Monday, February 6, 2017

The lessons that take time to sink in

Have you ever had this feeling that you know how it should be done, and it pains you to see how someone else is doing it in a different way that is just very likely to be wrong? I know I've been through this a lot of times, and with practice, I'm getting only slightly better at it.

So we have this test automation thing here. I'm very much convinced of testing each component, or chains of a couple of components, over whole end-to-end chains, because granularity is just an awesome thing to have when (not if) things fail. But a lot of times, I feel I'm talking to a younger version of myself, who is just as stubborn as I was about doing things the way they believe in.

Telling the 10 years younger me that it would make more sense to test in smaller scale whenever possible would have been a mission impossible. There are two things I've learned since:
  • architecture (talk to the developers) matters - things that are tested end-to-end are built from components, and going more fine-grained doesn't take us away from thinking of end-user value
  • test automation isn't about automating the testing we do manually, it's about decomposing the testing we do differently so that automation makes sense 
So on a day when I feel like telling people to fast-forward their learning, I think of how stubborn I can be and of the ways I change my mind: experiences. So again, I allow a test that I think is stupid into the test automation, and just put a note of it - let's talk about it again in two weeks, and on a cycle of two weeks until one of us learns that our preconceived notions were off.

I'd love if I was wrong. But I'd love it because I actively seek learning. 

Friday, February 3, 2017

Making my work invisible

Many years ago, I was working with a small team creating an internal tool. The team had four people in total. We had a customer, who was a programmer by trade so sometimes instead of needs and requirements, you'd get code. We had a full-time programmer taking the tool forward. We had a part-time project manager making sure things were progressing. And then there was me, as the half-time-allocation tester.

The full time programmer was such a nice fellow and I loved working with him. We developed this relationship where we'd meet on a daily basis just when he was coming back from lunch and I was thinking of going. And as we met, there was always this little ritual. He would tell me what he had been spending his time on, talking of some of the cool technical challenges he was solving. And I would tell him what I would test next because of what he just said.

I remember one day in particular. He had just introduced things that screamed concurrency, even though he never mentioned it. As I mentioned testing concurrency, he outright told me that he had considered that and it would be in vain. And as usual, with my half-time allocation, I had no time to immediately try to go prove him wrong. So we met again the next day, and he told me that he had looked into concurrency and I was right, there were problems. But there weren't anymore. And then he proudly showed me some of the test automation he had created to make sure problems of that type would get caught.

It was all fine, I was finding problems and he was fixing them, and we worked well together.

Well, all was fine until we reached a milestone we called "system testing phase starts". Soon after that, the project manager activated his attention and came to talk to me about how he was concerned. "Maaret, I've heard you are a great tester, one of the best in Finland", he said. "Why aren't you finding bugs?", he continued. "Usually in this phase, according to metrics we should have many bugs already in the bug database, and the numbers I see are too low!", he concluded.

I couldn't help but smile at that start, at how nicely my then manager had framed me as someone you can trust to do good work even if you wouldn't always understand all the things that go on, and I started telling the project manager how we had been testing continuously on the system level before the official phase, without logging bugs in a tool. And as I was explaining this, the full-time developer jumped into the discussion, exclaiming that the collaboration we were having was one of the best things he had experienced, telling how things had been fixed as they had been created, without a trace other than the commits to change things. And with the developer defending me, I was no longer being accused of "potentially bad testing".

The reason I think back to this story is that this week, I've again had a very nice close collaboration with my developers. This time I'm a full time tester, so I'm just as immersed into the feature as the others, but there's a lot of similarities. The feedback I give helps the developers shine. They get the credit for working software and they know they got there with my help. And again, there's no trace of my work - not a single written bug report, since I actively avoid creating any.

These days one thing is different. I've told the story of my past experiences to highlight how I work, and I have trust I did not even know I was missing back then.

The more invisible my work is, the more welcoming developers are to the feedback. And to be invisible, I need to be timely and relevant so that talking to me is a help, not a hindrance. 

Monday, January 30, 2017

Entrepreneurship on the side

I had a fun conversation with Paul Merrill for his podcast Reflection as a Service. As we were closing the discussion in the post-recording part, something he said led me to think about entrepreneurship and my take on it.

I've had my own company on the side of regular employment for over ten years. I have not considered myself an entrepreneur, because it has rarely been my full-time work.

I set a company up when I left a consultancy with the intent to become independent. I had been framed as a "senior test consultant" and back then I hated what my role had become. I would show up at various customers that were new to the consultancy, pretending I had time for them, knowing that in reality, on the worst of my weeks, I had a different customer for each half a day. Wanting to be a great tester and make a great impact in testing, with that type of allocation it did not feel like I could really do it. I was a mannequin, and I quit to walk away from it.

Since I had been in contact with so many customers, I had nowhere to go. According to my competition clause, I couldn't work with any of those companies. They were listed in a separate contract, reminding me of where I couldn't work. One of the no-go companies back then was F-Secure, and the consulting I had done for F-Secure was a single Test Process Improvement assessment. F-Secure had a manager willing to fight for their right (my right) to employ me, and with him stepping up to say so, they vanished from my no-go list and I joined the company for six months that turned into three years.

As I was set to leave in six months, we set up a side-work agreement from the start. And in my three years with F-Secure, I started learning what power entrepreneurship on the side could have.

In the years to come, it gave me a personal budget for things the company wouldn't have budget for - including meetups and training that my employers weren't investing in for me. It allowed me to travel to #PayToSpeak conferences I could never have afforded without it. A paid training day here and there was enough to give me the personal budget I was craving.

I recently saw Michael Bolton tweet this:
I've known I'm self-employed on the side, and it has increased my awareness that everyone really is self-employed. We just choose different frames for various motivations to do so. On the side is a safe way of exploring entrepreneurship.

What's worth repeating?

This is again a tale of two testers, approaching the same problem in very different ways.

There's this "simple" feature, having more layers than first meets the eye. It's simple because it is conceptually simple. There's a piece of software in one end that writes stuff to a file that gets sent to the other end and shown on a user interface. Yet it's complicated, looking at it after having just spent a day on it.
  • it is not obvious that the piece of software sending is the right version. And it wasn't, due to an updating bug. Insight: test for the latest version being available
  • it is not obvious that whatever needs to be written into the file gets written. Insight: test for all intended functionality being implemented
  • it is not obvious that when writing to the file, it gets all the way to the other side. Insight: test for reasons to drop content
  • it is not obvious that on the other side, the information is shown in the right place. Insight: test for mapping what is sent to where it is received and shown
  • it is not obvious that what gets sent gets to the other side in the same format. Insight: test for conversions, e.g. character sets and number precision
  • it is not obvious that if info is right on one case, it isn't hardcoded for that 1st case. Insight: test for values changing (or talk to the dev or read the code)
It took me a day to figure this out (and get the issues fixed) without implementing any test automation. For automation, this would be a mix of local file verification (catching the sent file on a mock server - manually I can turn off the network to keep my files, but our automation needs the connection and thus a workaround), a bunch of web APIs and a web GUI.
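
As a sketch of what the smaller-scale option could look like (hypothetical file format and field names, not our actual feature), the "conversions" insight can be covered at the unit level by checking that what gets written into the file survives the round trip intact, without dragging in the whole sending chain:

```python
# A minimal sketch (hypothetical format and names) of covering the
# "conversions" insight at unit level: what gets written into the file should
# come out identical on the receiving end.

import json
import os
import tempfile
import unittest


def write_report(path, values):
    """Sending end: writes the values into the file that gets transferred."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(values, f, ensure_ascii=False)


def read_report(path):
    """Receiving end: parses the transferred file before showing it in the UI."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


class RoundTripTest(unittest.TestCase):
    def test_character_sets_and_number_precision_survive(self):
        values = {"host": "työasema-äö", "cpu_load": 0.123456789}
        with tempfile.TemporaryDirectory() as tmp:
            path = os.path.join(tmp, "report.json")
            write_report(path, values)
            self.assertEqual(values, read_report(path))


if __name__ == "__main__":
    unittest.main()
```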


So I look at my list of insights and think: which of these would even be worth repeating? And which of these require the "system" for repeating them, and which could just as well be cared for from a "unit" perspective? It's a rather straightforward mapping architecture, yet there are many components in the scenario. Unlikely to change much, but to be extended to some degree. What automation would be useful then, if we did not get use out of it while creating the feature in the first place?

And again I think there is an overemphasis on system level test automation in the air. Many of these I would recognize from the code if they broke again. Pretty much all but the first. We test too much and review / discuss / collaborate too little.
 
Can I just say it: I wish we were mob programming.





Tuesday, January 24, 2017

Appreciating Special Programmers

I'm having this phase in my life where I feel like going to conferences to speak isn't really my thing. I don't think it is the infamous imposter syndrome, because there's plenty of stuff for me to share on. While I might have low self esteem in some areas of life, work isn't one of those areas.

So in this moment of crisis, I think of things that have changed for me. I realize one of the things that has changed a lot is how I identify. I remember tweeting with Adi Bolboaca about two years ago, insisting that testers should not be called developers, and I can see the irony in now organizing European Testing Conference with Adi and not being able to recall why I would have ever wanted to insist on that, other than the fear of losing appreciation for my special skills in testing.

So I keep thinking what (and who) changed my mind, and realizing it has been a group of individuals that never tried changing me.

It all starts with Vladimir Tarasow, who invited me to speak in Latvia for an Exploratory Testing training and then the Wildcard conference. Wildcard was one of the first mixed-role conferences I've been to, and Vladimir and his colleagues were the first developers I met who cared (enough to act on it) about the community of testers and testing.

Since I was at Wildcard, I participated in sessions. And one of the sessions was a full-Saturday-long Code Retreat, facilitated by Adi Bolboaca.

I loved the Code Retreat and could recognize my team would love it too, so Adi came and taught a wonderful day for my two teams' programmers. And unlike at the conference, where I sat through the day, here I stepped down, feeling insufficient.

These people together with my programmers at work started a learning path in which I appreciated code quality in relation to end user quality in our efforts, and started looking more deeply into ways those two are intertwined.

Adding Llewellyn Falco into the picture and being encouraged to re-learn to program through Mob Programming and Strong-Style Pair Programming, I can't even pinpoint who, when and where changed my mind. I can just recognize it did and find it fascinating.

I think the keys to this change have been:

  • No one tried to "change" me but just allowed safe experiences and discussions where we could agree to disagree, and we did
  • I had free capacity for learning widely, over my previous choice of going deep into exploratory testing, as every day brings more capacity if you just stick around long enough
  • Other things I wanted (closer human connection at work, not sub optimizing testing but optimizing the product development) required me to do things I wouldn't have otherwise volunteered to do
  • I connected with great people on my way, whom I can only properly appreciate in hindsight
So Vladimir, I owe you a beer. Clearly. Thank you. I never realized how many aspects of our paths crossing had a meaning to me. 

Frustrations on system test automation efforts

For a tester who would rather spend her time not automating (just because there's so much more!), I spend a lot of time thinking about test automation. So let's be clear: people who choose not to spend their days in the details of the code might still have relevant understanding of how the details of the code could become better. And in the domain of testing, I'm a domain expert: I can tell the scopes in which assumptions can be tested to a level where I would almost believe them.

Back in my earlier days, I was really frustrated with companies turning great testers into bad test automation developers (that happens, a lot!), and these days, I'm really frustrated with companies turning great testers away and rather hiring test automation developers. Closing one's eyes to the multitude of feedback you might want while developing makes automation easier - yet not quite the testing one may be imagining. One thing has changed from my earlier days: I no longer think of bad test automation developers as the end of the road for those people, as long as they start treating themselves like programmers and growing in that field. It's more of a career change, benefiting from the old domain knowledge. I still might question, based on my samples, the depth of testing domain knowledge of many of the people I've seen make that transition. Becoming a really good exploratory tester is a long road, and often people make the switch rather sooner than later.

Recently, I've been frustrated with test automation specialists with a testing background, who automate from the system/user perspective and refuse to consider that while this is a relevant viewpoint, a less brittle one might involve addressing things from a smaller, more technology-oriented perspective. That unit tests are actually full-fledged tests as an option for keeping track of things that should work. That it is ok to test a connected system with a fake connection. And that when automation is on the plate, it just doesn't need to be a simulation of what a real user would do. Granularity - knowing just what broke - is more relevant.
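
To illustrate the fake-connection point, here's a minimal sketch (hypothetical names, not our actual code): the component that sends status is exercised against an in-memory fake instead of a live backend, so when the test fails, it points at the component itself.

```python
# A minimal sketch (hypothetical names) of testing a connected component
# against a fake connection instead of the real backend.

import unittest


class FakeConnection:
    """Stands in for the real network connection; records what was sent."""

    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)
        return True  # pretend the backend acknowledged


class StatusReporter:
    """The component under test: formats and sends status over a connection."""

    def __init__(self, connection):
        self.connection = connection

    def report(self, component, healthy):
        return self.connection.send({"component": component, "healthy": healthy})


class StatusReporterTest(unittest.TestCase):
    def test_report_sends_one_well_formed_message(self):
        fake = FakeConnection()
        reporter = StatusReporter(fake)
        self.assertTrue(reporter.report("updater", healthy=False))
        self.assertEqual([{"component": "updater", "healthy": False}], fake.sent)


if __name__ == "__main__":
    unittest.main()
```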

I believe we run our test automation because things change, and as a long-time tester, I care deeply what changed. I recognize the changes that my lovely developers do, and I have brilliant ways of being alerted both by them with a lot of beautiful contextualized discussions but also just seeing from tools what they committed. I read commit comments, I read names of changed files and their locations, and I read code. I recognize changes coming in to our environment from 3rd party components we are in control of and I recognize changes into the environments that we can't really control in any way.

And while our system test automation works against all sources of changes, I prefer to think of my lovely developers, over my users, as the audience the test automation gives feedback to. The feedback should be, for the developers, timely and individualized to the change they just introduced. A lot of times I see system test automation where any manual tester provides the timely and individualized feedback better than the system created for this purpose.

Things fail for a reason. Build your tests granular to isolate those reasons. 

Monday, January 23, 2017

Testing in the iterative and incremental world

I've run my fair share of sessions where we test together something. My favorite test targets recently have been DarkFunction Editor (making 2D sprite animations), Freemind (mindmapping) and ApprovalTests (golden master test library) but there's a thing that is common to all these three. When I introduce them to my groups for testing, they're static. They don't change while we test. They are not early versions with the first of user stories implemented, to grow much later. They are final releases (until next one comes along).

In projects that I work with, I test a lot of things that are not yet final releases. And it's almost like a different ballgame to find the right time to give feedback on things. In my experience, early testing has been crucial for allowing time to understand what we're building, to guide that also from a testing perspective. As we learn in layers, the testers too need time to peel a layer at a time to be deep by the time of the final rounds. But it has also been crucial to fix issues before they become widely spread assumptions that can't be questioned without the whole brick structure falling down on us.

Some years ago, a session I was running played with this dynamic. I gave a group of testers the scope of a product backlog (stories) for the 1st increment and asked them to plan their test ideas. Usually very little came out. I then gave them the 2nd increment, with very similar results. Then I fast-forwarded 10 sprints to a close-to-ready game, and I got a long list of things to consider. The point of the session was to show that thinking in the ready state is easier, but having done that, you can then categorize your ideas to figure out how early you could run tests on some of them.

I think it is time for me to experiment with three different new sessions. 
  1. Incremental test planning/design - bring back an improved version of something I have not paid attention to for years.
  2. Incremental exploratory testing - figure out a way of running a course where the test target is not static but grows incrementally
  3. Test idea creativity - while executing and generating ideas now come for me intertwined (curse of knowledge), looking around me I realize that the creativity part of it could use more focus. 
The first is easy, so I'll just schedule a trial run for my local community. The two others take a bit more processing, and for #3 I think I might know just the perfect place for it - a developer conference. 

Thursday, January 19, 2017

Re-testing without the tester

Some days we find these gems we, the testers, like to call bugs. Amongst all kinds of information, bugs are often things we treasure in particular. And we treasure them by making sure they get properly communicated, their priority understood and when they're particularly valuable, reacted on with a fix.

We're often taught how the bug reports we write are our fingerprints and how they set our reputation. And that when something relevant was found and fixed, the relevant thing is worth testing again when a fix is available to see that the problem actually has gone away.

We call this testing again - precisely the same thing we reported the bug on - re-testing. And it's one of the first things we usually teach new people: that there is a difference between re-testing (precise steps) and regression testing (risk around the change made).

Today I got to hear something I recognize having said or felt many times. A mention of frustration: "they marked the bug closed as tested, and it turns out they only checked that a big visible error message had vanished, but not the actual functionality".

This was of course made even more relevant with external stakeholders coming back with the feedback that something had indeed been missed.

What surprised me though was the quickness of my reaction to mention that it was not *just the tester* who had failed to retest the fix. It was also the programmer who did the fix, who had completely closed their eyes to the actual success of whatever change they did. And to me, that is just something I want to see done differently.

It reminded me of how much effort I've put into teaching my close developers that I will let fixes pass into production without testing them - unless they specifically ask me to help because they are concerned about the side effects or don't have access to the right configuration that I would have.

Re-testing needs to happen, but re-testing by a tester is one of those relics where I'd rather see more careful consideration of when it is done.

Two people who get me to do stuff

Sometimes, I feel like I'm a master of procrastination. Some types of tasks (usually ones requiring me to do something small by myself, with little dependency on or relevance to other people) just seem much bigger than they realistically should. I wanted to make note of two people I respect and look up to for making me do things I don't seem to get done.

'I'll just sit here until it's done'

There's a team next door here that works on the same system, but we could easily organize our work so that we don't share that much. I had decided, however, that I wanted to try running their test automation, maybe even extending it when things I want to test benefit from what they've built. And I got the usual mention: there are instructions, just three steps. So I went and followed the instructions, only to be there on a (typically) unlucky day when they had changed everything except their instructions while upgrading their test runner.

So a day later, I hear they've improved the instructions and we're back to just three steps. As I work on something else, I don't really find the energy to go back and see how things are. I gave it a chance, it did not work out, and it's not like I really need it anyway. So my favorite colleague from that team comes into my team room, with his laptop, and sits on the corner of my desk saying: 'Try it. I'll just sit here until it's done'. And I try it, and five minutes later we're having delightful discussions on my feedback about making sense of this as someone new.

Thinking back to this, I realize this is a tool he uses all the time. Actively deciding something needs to be done and committing his time to insert positive pressure by just being there. Setting an expectation, and making himself available.

'Let's pair'

Another person I admire takes it further. He volunteers to pair and actively schedules his time to get more out of the shared work. Sometimes his 'Let's pair' attitude feels pushy, but the results tend to be great. It takes time to get used to the idea that someone is there with you while you do something you know you could sort of do by yourself.

As one of the organizers of European Testing Conference, he has paired with every one of us. The pairing has both supported getting things done in a timely fashion and created content we wouldn't have created without pairing.

There was a task that I needed to do, and I was trying to find time in my busy schedule to do it. With him proclaiming 'Let's pair on it', it got done. And while I was sure I had the best skills for the task, I was again reminded of the power of another person in identifying things I could be missing.

From Envy to Stretching

I find it extremely hard and energy-consuming to force myself on people who are not actively and clearly inviting my participation. So I envy people who, with a positive attitude, just go and do it, like these two people. Recognizing the envy gives me a personal stretch goal. Try it, do more of it, find your own style.

It's not about doing what they do, but knowing if doing what they do would help you in situations you experience.