Thursday, May 24, 2018

More on Jira story to pull request ratio

Today, just as I was writing my previous post, one of my team members sent me a quick message: "Would you accept my pull request?"

I don't get many of those. There are so many of us working on the shared 'internal open source' codebase that mostly other people get there before me. I've been given the same rights as everyone else, even if I don't use them quite the way everyone else does.

So I follow the link to the pull request, check the repo name and wonder a little, and check the diff to realize it is a one-line change. For a small moment, I stop to appreciate how nice the tooling is, showing me exactly what is changing. I appreciate the description giving me context on why the one line is changing. So I press the Approve button.

A minute later, I get a second ping. "Can you merge the pull request? My rights don't seem to be high enough." I go back, see that my rights are high enough, stop to think about the ridiculousness of someone having again introduced different rights and assigned them this way, and click the Merge button.

A minute later, I get a third ping. "I checked downstream things, all worked out well".

Between those pings, I was already testing the exact same thing the change would impact. After all, we were working together, on the same thing, as a team.

This was so painless. So easy. And then I thought about what another team wanted to require of our processes.

For each pull request, there must be a matching Jira ticket "for documentation purposes", because "support needs to read it". The change we had just introduced was part of a bigger thing we were building, and the bigger thing would require changes to at least 10 different repositories - meaning 10 different pull requests. And the pull request already had all the relevant info. A one-liner of it would end up in the detailed release notes in case someone wasn't into hunting down the info across repositories.

I sighed, thought about the stupidity of processes, and decided to write this down.

Don't come telling me that this is needed for a bigger company. This is particularly detrimental for a bigger company. Know more options.

There's a great way of learning about options: listening. There are so many awesome people who speak about this stuff. Pay attention. I try to.

Sticking Your Nose Where It Does Not Belong

There are the things I'm responsible for, and then there are the things I choose to insert myself into, without an invite. I have numerous examples, and many experiences of how at some point in my career this became ok and even expected behavior. In the famous words of Grace Hopper, paraphrased: don't ask for permission, apologize if needed. It's been powerful, and I can only wish I had learned it sooner.

Today, I wanted to share a little story of one of those times I was sticking my nose where it does not belong: another team, another team's choices of how to run their "agile process".

It all started from seeing a written piece defining, from the testers' side, the tester requirements for a story to be accepted for testing. This whole vocabulary is so strange to me. As a tester, I am not a gatekeeper. I don't define *my* requirements. Good teams negotiate whatever requirements they have across roles. And I would do a lot of things to avoid building "development" and "testing" phases, even within a change - including mob programming. But it was the contents of the list that set me on a trajectory this time.

The contents, again paraphrased, said that there must be code review, unit tests, functional tests, parafunctional tests, static analysis, and documentation in general and in Jira story tickets in particular. But the idea that this was a requirement for being "ready for QA" seemed so off I did not even know where to start.

So I started simple - with a question.
I wanted to ask how you think about this a little more. Some context: here we do continuous integration through pull requests, meaning that a typical Jira ticket could have anything from 1 to 100 pull requests associated with it. We also release continuously, meaning that what goes out (and needs testing) is the content of a pull request, not a Jira ticket. Is there something relevantly different in how you work in this case?
Asking this, I learned that *obviously* there must be a Jira ticket for every pull request. It was so obvious that I was cringing, because it wasn't. So while I think there are a ton of things wrong with that idea, looking at how developers' work flows naturally and incrementally when feature toggles are available, there was one thing out of all I was thinking that I felt the urge to say.
If writing a ticket takes 2 minutes, and reading it by various stakeholders takes 10 minutes, I might argue that you should actively think about the opportunity cost. The time you use on those tickets is time away from somewhere else. Where is the value in documenting at that level? Many teams here have come to the conclusion that it isn't there.
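As an aside, that arithmetic is worth making concrete. A back-of-the-envelope sketch: the 2 and 10 minute figures come from the argument above, and the pull request count echoes the 10-repository feature mentioned earlier; everything else is hypothetical.

```python
# Back-of-the-envelope opportunity cost of a ticket-per-pull-request policy.
# Per ticket: ~2 minutes to write, ~10 minutes of combined stakeholder reading time.
WRITE_MIN = 2
READ_MIN = 10

def ticket_overhead_minutes(pull_requests: int) -> int:
    """Total minutes spent writing and reading tickets for a batch of PRs."""
    return pull_requests * (WRITE_MIN + READ_MIN)

# The feature in the story above touched 10 repositories, i.e. 10 pull requests.
print(ticket_overhead_minutes(10))  # 120 minutes of overhead for one feature
```

Two hours of people's time per feature, spent duplicating information the pull requests already carry.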
I should have realized to just step out at this point, because I was in for a lesson on how I don't know software development.
Maybe it is not obvious, but things like a "developer checklist" could help deliver high-quality code... which means fewer bugs and lower cost. If someone would like to discuss this, let's show them the "How much do bugs cost to fix during each phase" graph.
 And it continued:
  • Writing documentation - in Jira tickets of pull request size - is essential because R&D is not alone; there are other parts of the organization, AND we get blamed if we don't deliver as ordered.
At this point, I realize there are fundamental differences not worth going into. There is no interest in hearing opposing views. They are optimizing for a customer-contractor relationship while I'm optimizing for working as a single-product company. I've lived years of a blameless life that enables me to focus on actually delivering value; they are afraid where I am not.

Good thing my manager knows that I can stir up some trouble. I may step out, but I never ever give up. The best way to show this is to focus on how awesome my team is at delivering. Nose back where it belongs, until another dig.

Lessons Learned on Two First Weeks as Manager

A few weeks ago, after a long battle, I gave in and applied for a new role: Senior Manager, Windows Endpoint Development. That means I work with two full development teams, handling their line manager duties.

When considering the shift (actually, when looking for reasons to not make the shift) a friend reminded me:
You have always job-crafted your tester role into something barely recognizable. Why should this manager role be any different in how you end up doing it?
I almost understood what they meant. Now I understand fully.

I've been now in this new role for two weeks, and I barely see any difference. That is how much I had crafted my previous role.

I still test and I still love it. I'm still finding time boxes to test whatever I feel like, and I'm still not taking tasks from "the backlog" but identifying the most valuable thing I could be doing.

I still look at systems beyond the perceived team / components divisions, following all flows of change into the system we are building.

I still guide us all in doing things incrementally, and the tester's perspective is perfect for that.

I still listen to my team mates (now direct reports) for their concerns and walk with them to address things. 

I was just as powerful as a tester (with my network of people to do the work with). I might claim I was even more powerful, because I did not have to deal with the "yes, manager - of course we do what you asked", and got to dig into the best solutions, defending my perspectives and turning them into reality without assigned power.

I never cared for the hierarchy when getting things done. If I learned something now, it is that others should learn to care less about hierarchy, and that "escalating through hierarchies" is an organizational worst practice on the level of actual practical work. I thought others also saw problems and felt they could just go solve them, seeing every obstacle they can get through as a potential thing for themselves to do.

I always knew many people's salaries, because I talked with people one on one and made sure things were fair. I told people joining the company my salary so they would know what they could try negotiating on. I have also moved my own bonuses to others who, in the bigger scale of things, deserved them more, because that is what my sense of justice told me to do. I have fought for other people's raises, as well as my own.

The only thing that is sort of new is the many quirks of recruitment and relocation processes, and the work managers do there.

I'm surprised that my new job is my old job.

And talking with the friend again:
I told you you were already doing that job.

Saturday, May 12, 2018

Slowing Down the Flow

This week Craft Conference saw the second edition of my talk "How Would You Test a Text Field?". Regardless of the boring title, I find that this is the type of talk we should have more of.
The talk was recorded, and the video will be available later.

What I find interesting though is the stuff going through my mind post talk, thinking through all the things I want to do differently.

The talk had five sections to it.

  1. Introducing a common model of assessing testers in job interviews: "How would you test a text field?" and one way of judging the responses
  2. Testing a text field without functionality
  3. Testing a text field in product context through UI & filesystem
  4. Testing a text field in code context through API with automation
  5. Testing a text field that is more than a text field
As I will not be going to conferences in the upcoming year, I don't have a venue for fine-tuning the talk into its next generation. Sounds like a great opportunity for the videos I've been thinking about doing for a long time!

The things I want to do differently are:
  1. When testing a text field without functionality, get people to generate test ideas for the same functionality with a GUI present or an API present - focus on what functionality is in which layer
  2. Visualize test ideas from the group better - use a MindMup. Map them to the four-level model of how a tester thinks.
  3. Teach the model of testing better: in the first exercise, I intentionally create "phases" of idea collection, idea prioritization, and learning from results before selecting or generating a new idea for the next test. In later exercises I allow prioritized ideas to go straight to action, but force discussion on what the test made us learn. When I actually test, it all mingles.
  4. Force the focus in the code context better to just the text field; blur out the frame needed to run things.
I've been teaching the model of testing before with a story of my sister coming back from her first driving lesson many, many years ago. She asked our brother to show her the ropes in our own car, as she felt that after her first real drive she was finally ready to pay attention to all those details. On her first driving lesson, the teacher had made her drive around the parking lot a few times and then surprised her by guiding her right into traffic. Her experience reminded me of mine: not being able to remember a single thing.

When we are learning to drive a car (or to test in an exploratory fashion), we first need to learn all the individual bits and pieces. In the car, we need to remember the stick shift, the gears, gas and brake, the wheel, and all the looking around while maneuvering - things we have no routine on. New drivers often forget one of these things. Stopping at a traffic light, you see them accidentally trying to go in too high a gear and stalling the car. As they slow down and turn the wheel to make a turn, changing gears at the same time feels difficult. So they make just enough room for themselves to practice. The same applies to testing.

The actions we need to make space for are different for testing. We need to make room for coming up with a test idea and prioritizing the right test idea out of all the ideas in our head. We need to make room for properly executing a test that matches that idea. We need to make room for paying attention to the result against both the spec and any other oracles we could use. And we need to make room to reflect and learn about how what we just saw the application do changes our next idea.
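Those four kinds of room can be read as a loop. A minimal toy sketch, with entirely hypothetical names and data, of how idea collection, prioritization, execution, and learning feed into each other:

```python
# A toy model of the exploratory testing loop described above:
# collect ideas, pick the most promising one, run it, learn, repeat.
# All names here are illustrative, not from any real tool.

def explore(ideas, run_test, learn_from, rounds=3):
    """Run a few rounds of prioritize -> execute -> observe -> learn."""
    results = []
    for _ in range(rounds):
        if not ideas:
            break
        ideas.sort(key=lambda idea: idea["priority"], reverse=True)
        idea = ideas.pop(0)               # prioritize: pick the best current idea
        result = run_test(idea)           # execute a test that matches the idea
        results.append(result)            # pay attention to the result
        ideas.extend(learn_from(result))  # what we saw changes our next ideas
    return results
```

The point of separating the steps, just as with the new driver, is that each one can be slowed down and practiced on its own before they mingle.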

Just like when driving a car, the driver controls the pace. Slowing individual activities down is something the driver can and should do. Later, with routine we can combine things. But when we start, we need to pay attention to every individual action. 

Thursday, May 10, 2018

When Your Mind Fails You

I walked down from my room, through the long corridor to get to the conference venue in a big American hotel. I felt great and was looking forward to meeting people. I walked to the registration desk, and while I know many of the conference organizers, I did not know anyone there. I walked around a bit, seeing lots of people but no familiar faces. And I recognized the feeling of anxiety. I was uncomfortable, turned around and walked back to my room, just in time to collapse in a massive panic attack.

I don't know what panic attacks look like for others, but I know they are scary as hell for me. I hyperventilate, and lose feeling first in my fingers and then in my feet so that I cannot stand up. My face tingles, like it was waking up from being numb without ever having been numb. And the only way out of it is to find a way to calm down. Hugging a pillow helps. Being angry at people close to me doesn't. But blaming some of the uncontrollable emotion on others feels like a plausible explanation, until it happens enough times to realize it is something where my mind just plays tricks on me.

The trigger for my anxiety right now seems to be conferences, particularly large ones with lots of people I don't know, or things not flowing as I imagined. The first one I remember came from the very first developer conference I ever volunteered to speak at. For the last two years, panic attacks have been a frequent companion at every developer conference, and lately they have been creeping into testing conferences too.

Conferences are too rough on me, so I will be taking a break. Not only because I can't deal with my mind, but also because my presence is needed at home.

I used to be afraid of public speaking, and I trained myself out of it by taking a job that required regular speaking: teaching at university. I still remember what drove me into starting the practice: physically not being able to stand in front of a crowd just to introduce myself. It was crippling.

The panic attacks are more frightening, but also feel harder to control than the good-old fear of public speaking. Over time, I'll figure out a way of working this out. Time teaches things we can't expect or see. It always has.

Monday, May 7, 2018

Changing jobs, setting goals

As I return to the office after a week at a conference, I come back to a new job. Yours truly is now "senior manager" of two R&D teams. It was a position I resisted for quite a while, until yet another 180-degree turn to see how the things I believe in hold up when lived for real.

Before, I was "just a tester" - with the reminder that no one is just anything. My title was "Lead Quality Engineer" and it meant that my responsibility in the teams was to be a senior senior tester. As someone who has been around a while, I knew that I wasn't serving only my team's developers as the tester, but working on the system to which my team contributed. And just as any other senior senior team member in a self-organized team, I could manage my work, propose improvements for the entire team, and drive changes across whatever organizational limit I felt like tackling.

As "just a tester" who just so happened to also be capable of all things management, requirements, programming and testing, I could model what a team member looks like. I could do things that needed doing, not things that I was told that I need to do. I could show that it is a normal case to have "just testers" who talk to all levels of management and consistently get heard. I could show that "just testers" care deeply for developers ability to focus and work. And I could show that "just testers" change the world on aspects that people believe cannot be changed.

So, now I am a manager. I intend to experiment with a few major things in my new position:
  1. Don't change. As a non-manager, I believed in team self-organization and decision-making. I intend to stick to my beliefs and resist the "managers decide" power tripping that is all too common. I intend to see how much I can do things just as I used to, spending most of my days testing happily. I was available to every one of my now direct reports as a tester, and I am available to them as their manager. Not changing things for the worse will be a major effort, and will require some candid discussions.
  2. Game the Annual Reviews to Tolerable. As manager, I have to run the company-appointed annual review processes with my direct reports. I have hated the process, especially the part that ends with emails saying things like "your performance evaluation for last year, set during the review discussion, remained the same after the company-wide calibration process." I hate stack ranking. And I hate trying to fit people into the same mold instead of finding their unique strengths, allowing and encouraging job crafting. I have no idea what I will do with those, or how much leeway the process allows, but I will figure it out.
Not changing will be hard. Already my usual request of "let's make sure we can easily automate tests against the new GUI technology" was taken more seriously, with devs immediately coming back with a proof of concept on testability in addition to just the prototype, with the half-serious joke of "must do when the soon-to-be manager tells us". And a dev just came to me asking to pair test tomorrow - that never happened before.

Thursday, May 3, 2018

A lesson from my brother, the developer

"You never talk about your brother!", I was told yesterday as I mentioned I have a lovely brother who is a developer, just a little younger than me, so he always needs to keep competing with me and never quite catches up, and who just so happens to work at the same company as I do.

We don't really work together. Work gives me just as many excuses to call him up as the fact that we're related, and both require conscious effort. My brother is lovely, and a really, really good developer. He works with identity management systems and I work with endpoint protection, so he's on a component and I'm on a product. Recently, he's been on a component that has been failing a lot in production and that gets easily blamed even when it isn't at fault, and some of the illusions around that are so much easier to address because I can call him up whenever.

A few months back, I called him as I was preparing a talk I wasn't particularly into doing. I had originally committed to the talk to support a developer who was new to speaking. They backed out, and left me delivering the talk by myself. So I called my brother to think through what had changed in the last 10 years. 

What he shared was foundational. He talked about how as a young programmer, he got to take a requirement and turn that into code. How most of his time went into writing code. How code he wrote was predominantly Java. And how now when someone joins in as a young programmer, they enter a whole different world. 

Most of a new joining programmer's life is no longer writing code. It is writing code in multiple languages. It is configuring environments (as code). It is tweaking parameters, usually not well documented, to get the third party components to work your way in your environment. And it's not that writing code is enough. You need to write tests. Unit tests. Integration tests. System tests. And as you're writing the tests, you need to explore to figure out all the things you need to test. 

Expectations for a young programmer have skyrocketed. We expect them to start more ready, and to cover more ground.

As a tester, I often worry about my mindspace, my ability to track different levels of abstraction, and keeping the quality of my work at a level I can be happy with. Not as a perfectionist, but as someone who aspires to be better every day. My brother, as a developer, isn't that different. The world is changing around us all.

We're both asked for more each day in our respective professions. The entry-level requirements are going up. And with the next generations of future professionals now growing up through an elementary school that teaches programming from grade 1 onwards, it starts to make sense that programming is an entry-level requirement for testers.

It's more about the foundation than the active use of it. Seeing the possibilities. Knowing to ask for help instead of fighting alone.

Tuesday, May 1, 2018

Building up to a Threshold

For 1.5 years, I've been with my team as a vocal supporter of mob and pair programming. Getting my team, or any of the nearby teams, to mob has been a challenge. We've mobbed a few times on functional testing, once on performance testing, and once for purposes of learning TDD together, but that is about it.

Being a vocal supporter doesn't mean that I can't accept a no. I will not force people into doing things they don't want to do. But I keep mentioning problems that would just go away if we paired or mobbed. And that people who really like each other wouldn't make the others waste so much time working alone towards a pull request, when their early contributions would have made a world of difference.

I see problems of many sorts. Lack of empathy. Not understanding one another. Waiting and building queues. Being less happy, less efficient. Making excuses. Blaming. Avoiding necessary feedback when it is delayed. And I keep believing that we could do differently. We can be ok now, but awesome if we find better ways.

In particular with my teams, there has been one vocal person against the whole idea. When someone everyone else is supposed to look up to is vocally saying what they hate about pairing, it makes it hard for others who are conflict averse to step up and say they disagree.

A week ago, things finally started changing. First, one new person joined the team, emphasizing how much pairing and teaching more junior people would impact their happiness. They, for the first time, stood up as the second strong voice, in disagreement.

It was obvious which side I stood on. Discussions in team emerged, and one by one, people started saying they'd love to pair regularly or pair sometimes. The last joiners clearly joined the now "popular belief" that this was worth doing even if they weren't originally strong proponents.

There's this idea of threshold. That when one person does something, that someone may just be peculiar. But when the group grows big enough, the ones not doing it are the peculiar ones. And as social creatures, we tend to want to be with the in-crowd.
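This threshold idea has a well-known formalization - Granovetter's threshold model of collective behavior, where each person acts once enough others already do. A toy simulation (the threshold values are made up) of how much a single first mover matters:

```python
# Toy threshold model: each person joins once the number of people
# already doing the thing reaches their personal threshold.
def adopters(thresholds):
    """Return how many people end up adopting, given each person's threshold."""
    joined = 0
    changed = True
    while changed:
        changed = False
        new_joined = sum(1 for t in thresholds if t <= joined)
        if new_joined > joined:
            joined = new_joined
            changed = True
    return joined

# One person with threshold 0 starts the cascade; each next person needs one more.
print(adopters([0, 1, 2, 3, 4]))  # the whole group of 5 cascades
print(adopters([1, 1, 2, 3, 4]))  # nobody starts without a first mover: 0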

Glad I got to this before they made me a manager. It proved to me that long-term work and kind reminders push through without assigned power.

Looking forward to pairing with my team more. And making it a great experience that includes us all. 

So, you're a consultant?

I had just finished teaching a half-day training on Exploratory Testing at StarEast. The group was lovely and focused, 18 individuals with different backgrounds and experiences.

Group on my StarEast Tutorial

We were just about to start a speed meeting event, and I talked with someone I had not met before. They saw that I had a speaker badge and immediately asked: "So, you're a consultant?". I looked puzzled for a moment before I explained, like so many times before, that I was a tester in a team, and that the majority of my working days went into testing with a wonderful team and delivering features to production on a pretty much continuous cadence.

The question, however, crossed a personal threshold. It was the one too many for me to keep thinking I wasn't out of place and fighting against the stream. The default assumption was that a speaker is a consultant, not an employee at a product development company. It was yet another one of those things that made me happy I had decided to stop speaking at conferences for now.

Being a consultant is a great thing. But consultants are a different species. They are more self-certain. They're ok with being always on the road. They often live off their analysis of other people's work. They carry a risk I can only admire them for coping with, believing there will always be more clients. They get different value out of doing a (usually unpaid) talk at a conference because they have something to sell.

I'm here because I have a strong need to share what I have learned, in hopes of finding people who can help me accelerate my learning. I'm not a consultant, but I care. And I'm particularly lucky to be a non-consultant with an employer who truly supports me doing this for as long as I want to. It's just that I no longer really want the travel. Maybe again when my kids are past their teenage years. Meanwhile, my helping presence is online, not at conferences.

Power dynamics in pairs and mobs

Four years ago, I wasn't pairing or mobbing. Frankly, I found both of those intimidating. Instead of expressing that I was scared, I talked about needing alone time, being introverted and not being able to think when others are too close.

I came back to think about all this today thanks to a great thread on power dynamics in pair programming and TDD from Sarah Mei.

If you ever get a chance to watch a mixed-gender group of teenagers, you see some of the gender-based power dynamics at their worst. Teaching programming to teenagers in a mixed group results in the girls giggling, downplaying their abilities, and focusing on letting the boys shine. It's easy to think of this as characteristic of the girls, up to the point you teach the same girls in a single-gender group. The girls are still bubbly, but now concentrating and doing their best. And their best is often much better than the best of the mixed-gender groups in general.

The problem isn't men's fault, it's structural. Years of belief systems on how men and women interact. And for women in IT, it often takes years (or a lifetime) to overcome the structures that keep them listening rather than contributing. That keeps them apologizing. That causes them to feel they are not good enough.

We see the same dynamic at conferences. We have loads of speakers who identify as men. We have all-male lineups. And when we talk about including women, we hear that we cannot lower the bar. Yet for conferences doing good work on inclusion and blind review, the shortlists end up being very equal or women-dominated. There may be fewer women, but the quality they produce does not lower the bar, quite the opposite. But to get considered, you have to listen to some people's hostility in assuming that you're not up to the bar.

Four years ago, I started first mob programming and then pair programming (and testing). I still think of mobbing as the gateway to pairing in situations where the power dynamics are hard or next to impossible like in my team. Imagine having to pair with a man who tells you "women only write comments in code". The pain of having to sit with that person, alone, is enough for many women to walk out. Their attitude shows in the mob too, but a group of people can help moderate, correct or even punish in ways that an individual on the lower end of the dynamic cannot.

I also remember well the first time I paired with Llewellyn Falco. This is a story we've told many times in our shared talks, yet it is a story I would never have shared with Llewellyn had we not ended up working on a shared talk. I came back to think of this time because of one of Sarah Mei's subtweets:
We had agreed to pair on exploratory testing of a new functionality added to the software I was working on. So we sat down together, and I guided him first to get to the functionality, just to see it was there. The functionality included a new dialog, and as it popped open, my first words were "I wonder what size it is...". It wasn't intended as a question, but it was very much taken as one. What I meant is that we have an agreement on the resolution at which things still must be usable on the screen without scrolling. We were clearly on a higher resolution, and still the dialog was big and clunky. But before I got in another word, Llewellyn picked up on the cue and started showing me tools that would let me measure the dialog size in centimeters, pixels, you name it.

I didn't even understand right there and then that I was uncomfortable. That I felt overridden. That none of the stuff I was trying to show - my work in exploratory testing - got done. Moreover, I'm sure I acted like one of those teenagers, just telling myself that my contributions weren't really expected. I tried to enjoy the results we ended up with, and not to miss the results that a power dynamic had hijacked from me.

After this experience unfolded while prepping the talk, we have had great pairing sessions and learned to explore in a pair and in a mob a lot better. He actively worked against the power dynamic, paying attention to listening instead of talking over me or assuming his way was always correct.

Thanks to Sarah Mei's tweets, I remembered again how far I've come. And how far there is still to go now that my whole team is expressing openness to pairing - finally after 1.5 years.

Friday, April 27, 2018

Flunking tester candidates in masses

Recently, I've been receiving messages on LinkedIn. It starts with someone I don't know wanting to link with me. Since LinkedIn is my professional network, I check the profile for complete fakeness and approve if it could be a real person. The next step is getting a message, usually just a "Hi". Nothing more. Since it is my professional network, my default response is "How may I help you?". The response to this is "I need to find a job".

These people often don't know why they are contacting *me*, beyond the fact that they just did and I responded to their greeting. When I ask more about what I could do specifically, the conversation ends with "I don't know".

If you don't know what you want, how do you know I would know what you need?

One of our teams is currently seeking a senior tester. It has not been easy. I have not really been a part of the process, except for filling in during the absence of other great testers. I see applications fly by in notifications, but I have not so much as read one for a while.

What still happens is that the other great testers share their wonderings. They share the realization that the tester we are looking for is hard to find because it is a tester who is also a programmer (for test automation), a great communicator, and able to grasp complicated systems in a very practical manner. But they also share the realization that a surprising number of people think the way to test is to write test cases. That exploratory testing means monkey testing and no serious tester would do that. That testing happens at the end, after programming is done. None of these beliefs are true, least of all in a company that emphasizes agile practices.

So if you're one of those people who feel like approaching me for job help on LinkedIn, I just wanted to mention a few things you absolutely need to do. Go read stuff on Ministry of Testing. Look for things that introduce the idea that test cases are not what makes testing awesome. Learn to build the technical tools, and the most important tool: your brain. Join the Slack and look at what the people who identify as testers talk about. Here's the key: it's not test cases.

I've been insisting that I don't see people like this in my bubble. I don't, because they are stopped at the gates. Learn more stuff that remains relevant. 

Thursday, April 19, 2018

Where Are Your Bug Reports?

Yesterday, I put together a group of nine testers to do some mob testing. We had a great time, and came out with a shared understanding: people's weak ideas amplified, and people standing together in knowing how we feel about the functionality, the quality, and the value for users.

This morning, I had a project management ping: "There are no bug reports in Jira, did you test yesterday?".

Later today, another manager reminds over email: "Please, report any identified bugs in Jira or give a list to N.N. It's very hard to react/improve on a 'loads of bugs' statement without concrete examples.". They then went on to inform me that when the other testers ran a testing dojo, they did it on a Jira ticket and reported everything as subtasks, and hinted I might be worried about duplicates.

I can't help but smile at the ideas this brings out. I'm not worried about duplicates. I'm worried about noise in the numbers. I have three important messages, and writing 300 bugs to make that point is a lot of work and useless noise. This is not a service I provide.

Instead I offered to work temporarily as system tester for the system in question with two conditions:
  1. I will not write a single bug report in Jira, but get the issues fixed in collaboration with developers across the teams. 
  2. Every single project member pair tests with me for an hour with focus on their changes in the system context. 
The jury is still out on my conditions. I could help, but I can't help within a system that creates so much waste. I need a system that improves the impact of the feedback I have to give through deep exploratory testing, focused on value.

I'd rather be anything but a mindless drone logging issues in Jira. How about you?

Tuesday, April 17, 2018

Interviews Are a Two-Way Street

We were recruiting, and had a team interview with a candidate. I was otherwise occupied, and felt unsure of hiring someone I had no contact with, especially since the things I wanted to know about them were left unanswered by my team. We look for people strong in C++, but also Python and scripting, since a lot of a DevOps type of team's work ends up being not just pure home ground. So I called them, and spent my 10 minutes finding out what they want out of professional life and what makes them happy, and making sure they were aware how delighted my often emotion-hiding team would be if they chose to hang out with us and do some great dev work. They signed. And I'm just as excited as everyone was after having had the chance to meet the candidate.

So often we enter an interview with the idea of seeing if the person is a fit for us. But as soon as we've established that, we should remember that most of the time, the candidates have options. Everyone wants to feel needed and welcome. Letting the feeling show isn't a bad thing.

There's a saying that individuals recruit people like them, and teams recruit people that fill the gaps - diverse candidates. For that to be true, you need to have first learned to appreciate work in a team beyond your own immediate contribution.

All this recruiting stuff made me think back to one recruiting experience I had. I went through many rounds of checking. I had a manager's interview. Then a full day of psychological tests. Then a team interview. And finally, even the company CEO wanted to interview me. They required yet another step - I spent a day training testing for my potential future colleagues, in a mob. Every single step was about whether I was appropriate. If I would pass their criteria. They failed mine. They did not make me feel welcome. And the testing we did together showed how much use I would have been (nice bugs on their application, and lots of discipline in exploring) but also what my work would be: teaching and coaching, helping people catch up.

Your candidate chooses you just as much as you choose the candidate. Never forget.

Saturday, April 14, 2018

Second chances

"This does not work", they said. "We used to find these things before making a release", they continued. I see the frustration and understand. I feel the same. We lost an exploratory tester who spent 13 years with the application, and are reaping the results as they've been gone for a month. Our ways of working are crumbling in ways none of us anticipated.

We lost the tester, because for years they got to hear they were doing a bad job. How they were not needed. How they would only become valuable if they learned automation. And they were not interested. Not interested when their line managers said it. Not interested when most conferences were full of it. Not interested when articles around the globe spouted that the work they were doing was meaningless.

They found the job meaningful. The team members found the results meaningful. And it was not like the manual exploratory testing they did had stayed the same over the years. As others in the team contributed more automation, their testing became deeper, more insightful, targeted on things where unexpected change was the only constant.

I reviewed their work long before they decided on leaving. I promoted the excellence of results, the silent way of delivering the information to make it visible. And when they decided it was time to let go of the continuous belittling, I was just as frustrated as anyone in the teams that lack of appreciation would lead to this.

Just as they were about to go, we found them a new place. And I have a new tester for my own team. The very same tester who elsewhere in the organization wasn't supported is now my closest colleague. I got a second chance at helping non-testers and non-programmers see their value, for them to feel respected like I do.

And for that I feel grateful. I already knew my manager is a great match for me in my forward-thriving beliefs of building awesome software in collaboration with others, valuing everyone's contributions and expecting daily growth - in diverging directions. My good place - my own team - is again even better with a dedicated manual exploratory tester with decades of deep testing experience.

Wednesday, April 11, 2018

Task assigning does not teach self-organization

I was frustrated, as I was ticking away mental check boxes on the testing that needed to be done. It was one of the last tasks of a major effort so many of us had contributed to for the last 6 months. The testing I was doing wasn't mine to do; I had agreed with our intern that this would be work they'd do. Yet I found myself doing it, after 3 days of pinging, reminding and explaining what needed doing. The work I was doing wasn't just something I expected from them; as I was doing it, I learned they had also skipped my previous instructions of utmost importance.

As I completed the task, I shared the status in our coordination channel. Next up was a discussion I wasn't sure how to have, about missing the mark of my expectations big time.

My feelings are a thing I can hardly keep from showing, and I approached the discussion letting my frustration be visible, using my words to explain that I wanted to understand, and that failing wasn't something I'd punish, but something we just had to talk about.

We learned three things together.

For not doing an important thing I had reminded them of multiple times, there was a clear lack of understanding of why I considered it so relevant. Discussing the big picture of risks, I'm sure that particular thing gets done next time. The intern expressing frustration at a boring and repetitive task also led us into identifying the root cause, and they volunteered to drive through an organizational fix - while not dismissing my instructions on remedies while the fix was not in place. I was delighted at the show of initiative.

For not doing the testing I ended up doing, I learned they were overwhelmed with the number of requests all around, and the problem was a result of misprioritization. They were spending their time on a pesky test automation script, while the real priority would have been to complete the testing I just did.

We also reviewed the testing I had done, to realize they would not have known what to do. The task was one with many layers, dependencies on what happened while testing, requiring end-to-end understanding of the business process and a perspective into the lifecycle. All this was obvious to me, but they had worked on simpler tasks before. We had now identified a type of task that stretched too far.

We ended by celebrating how awesome this story of our mutual learning is, and agreed to handle the intake of complex work differently next time around.

As I mentioned the experience to a colleague, I was told their preferred way of dealing with this is Jira tasks with clear instructions. That's what they had been doing to me, learning that I never obeyed. Others did. The discussion made a difference in belief systems visible: I was building each of my colleagues toward their best future self as a contributor. My colleague was focusing on how to get the work done within existing limitations.

Their style gave results where everyone did a little less than what was asked. My style gave results where we occasionally failed and reflected, but I could always expect a bit more next time around.

Assigning tasks wasn't growing people. Quite the contrary: it created an environment where people consistently underdeliver to a standard, never reaching their potential.

It takes past better experiences or exceptional courage to step to self-organization when you feel the organization around you just wants you to focus on assigned tasks. I’m lucky to have past experiences that allow me to never obey blindly.

Promoting the Air

Last week, I tweeted a remark.

The irony was that after 1.5 years at my current company I did a teaching-women-Java thing, and that was something people were excited about to the extent that they wrote about it (interviewing me) for the company blog. Meanwhile, I do 30 talks a year on testing/agile, most of them international, and none of that has crossed the news bar.

I talked further with many colleagues in testing, and came to the conclusion that we are in an interesting situation: our profession is very valued within teams, a lot of managers are "helping us grow" by pushing more automation, even by force, and anyone outside the teams has increasingly skewed perceptions of what we do, and why.

We are like the air we breathe. Invisible. As soon as it is lost, we notice. And we lost some of our testing very recently, so now we are noticing.

This all leads me to think of being the change I want to see. So instead of dwelling in my frustration at the irony, I took the observation and promoted the other things I do. The amount of empathy and understanding in response to sharing my frustration has been overwhelming. And the constructive actions of wanting to hear more, wanting to share more equally, and wanting to learn about this stuff have been delightful.

We who see what testing is, and appreciate it, need to talk about it more. We need to help others see and remember the invisible. Every single tester does their part of the promoting. Some of us get our voices heard just a little further, but our common voice is stronger than any individual's.

Thursday, March 29, 2018

A Developer's Idea of Exploration

"We did a neat thing exploring today", a developer exclaims. I look, interested, wondering what is the source of excitement this time. It's not the first time they've been excited about doing things clearly very close to my heart. But a lot of times we find our ideas of exploring take very different, yet fascinating turns.

"We did this combinations test", they explain. "We took bunch of values when we did not feel like thinking too much, and passed them all in, and created combinations", they continue. "We learned about behaviors we did not think of", they finish. And we agree that is wonderful. Learning while testing, appreciating the new information, absolutely something we share.

There have been little remarks like this coming my way a lot recently, and while I can share the excitement of learning something we did not know, I also find that the ways of getting "as close to exploratory testing as I usually do" as a developer aren't quite where my exploration is.

There was a session about property-based testing, and generating test cases to run through the same partial oracles. Just doing it wider does reveal things of an unexpected nature, especially when you have a way of identifying some relevant aspect of correctness with a property.

There was an exercise of creating combinations for a 3-variable method, finding out the application does not work as specified on its boundaries. Just having more cases easily available and visually verifiable revealed information of an unexpected nature.

All three examples I've had recently are ways of programmatically doing more. While they uncover relevant information, there's still more to exploratory testing.
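To make the developer-style approach concrete, here is a minimal sketch combining the two ideas above: generating all combinations for a 3-variable method and checking them against partial oracles. Everything in it is hypothetical - `shipping_cost` stands in for whatever method was under test, and the properties are partial oracles in the sense that we don't know the exact right answer for every combination, only constraints any answer must satisfy.

```python
from itertools import product

# hypothetical 3-variable function under test
def shipping_cost(weight_kg, express, region):
    base = {"EU": 5.0, "US": 8.0, "ASIA": 10.0}[region]
    cost = base + weight_kg * 0.5
    if express:
        cost += 4.0
    return round(cost, 2)

# representative values per variable, boundaries included
weights = [0, 1, 30]
express_options = [False, True]
regions = ["EU", "US", "ASIA"]

# partial oracles: properties every result must satisfy, even when
# we haven't worked out the exact expected cost per combination
failures = []
for w, e, r in product(weights, express_options, regions):
    cost = shipping_cost(w, e, r)
    if cost < 0:
        failures.append((w, e, r, "negative cost"))
    if e and cost <= shipping_cost(w, False, r):
        failures.append((w, e, r, "express not more expensive"))

checked = len(weights) * len(express_options) * len(regions)
print(f"checked {checked} combinations, {len(failures)} violations")
```

The wideness comes cheap: adding one value to any list multiplies the cases, and the properties do the checking the human would not feel like thinking too much about.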

This makes me think back to exploration of someone else's web app we did in a mob yesterday evening. Just some things I remember us learning:
  • For a system aiming to enhance datasets to an acceptable state, it makes no sense that when a condition prevents the dataset from ever being acceptable, we would first need to fill in some info, when the rejecting condition exists without any additional info (a problem in the order of tasks in the process)
  • For uploading files, we'd like to be able to use the file explorer over just drag-and-drop. It matters how most others do things. 
  • When a user interface view includes many things to show in tables, having relevant tooltips for some but forgetting placeholders for other, less obvious ones creates confusion. 
  • When you must choose one of two but not both, automatically emptying the other field isn't exactly the best way to present that. 
  • When rejecting an input, logging it might be useful.
  • When failing so that there's a big visible error in the log, it would be very nice if that error was made visible also for the user. 
  • When having a recurring element of guiding users, filling it in three different ways makes little sense.
  • When you can get to a functionality with one click as there is just one option, hiding it in a menu requiring an extra click won't be helpful. 
None of these would have been found by the "let me just play with my unit tests" approach to exploring. Then again, none of what we did would have found things that approach could find.

It's not this or that, but this and that. And it's lovely when developers show ideas of applying the same ideas with the tools at their hands. I hope to get to experience a lot more of it going forward. 

Tuesday, March 27, 2018

The Test Automation Trap

There's a pattern that we keep seeing in "agile" projects again and again.

We work together as a team to implement a feature. We automate tests for that feature as part of its definition of done. As an end result, we have some more tests than before, on all layers of tests. We get the tests to run blue and we make a release.

We work together to implement a feature. The previously added tests make our test runs light up in all the colors of a Christmas tree, and in addition to adding the new tests for new functionality, we clean up the previous tests.

The longer we continue, the worse the Christmas tree lights get. The more time we spend on fixing the past tests, the less time we have for the new tests. And we take shortcuts in fixing our past tests, just removing the ones we deemed so necessary before.

And no one talks about it. It is a ritual that we must go through. Like a rite of passage.

Over time, no one cares about how well the automation tests things. All we care about is that it passes so we can get through the gate.

I've seen so many people trapped in the cycle of being too busy to think about *why the tests exist* and *what value they are really giving us*. These people have no time for manual testing, because - very honestly - automation eats up all their time. And they might not even see that the approach is not really working out for them.

The test automation trap creates testing zombies. Ones that make the moves, but that stopped learning on what they're doing.

The best way I know out of the trap is to start caring about testing again. Put testing, not the scripts, into the center. It's time to talk about risk and strategies again. It's time to build up a test automation asset that supports whatever strategies you're going for. Stop going through the motions, and think. Learn. Look at where your time goes. Experiment your way out of the trap of magical moves that feel like a better idea than they are.

Thursday, March 22, 2018

The tester work asymmetries in team

In the organization, I work with a team. I sit in the same room with this team. I use a shared label to identify our togetherness. We go through rituals together: planning, working, delivering, demoing, and improving. There's just enough of us so that we can do relevant things, yet not so many that coordinating our work would be a problem.

The best of these kinds of teams work together over longer time, and on problems they can feel ownership on. That's where my life gets complicated.

The wonderful little team I work with works in an organization that works with the ideals of an internal open source project. The team has no nice list of microservices they'd be responsible for; anything and everything anyone has ever created in the overall system is up for grabs.

As a tester in a team like this, I find it fascinating to look at how people approach the problem of modeling their responsibilities differently.

One tester seems to model their actions on the team's developers' actions. If a developer goes and changes something, the tester follows and helps test the change. A lot of this activity happens by the developer pulling in someone other than themselves to implement automation.

One tester seems to model their actions on the end to end flows of the system, from the perspective of mention-worthy functionalities being introduced. None of this activity happens by the developer pulling people in, but the tester pushing ideas of seeing value in the system perspective.

One tester seems to model their actions on collecting any work anyone would wish a tester would do. Whatever needs doing and looks like being dropped by others ends up as things they do.

Explaining and understanding where the time goes and what activities belong with "a team" can get very complicated. I guess it also makes sense that it's also a high-trust environment, where doing is considered more relevant.

Tuesday, March 20, 2018

Working with all levels of ignorance

There's a view of the world of testing on the loose that I don't really recognize. It's a view driven by those identifying primarily as developers, and it looks at testing as a programming problem. It suggests that we already know what we know, and that testing is about keeping tally of that again and again as the applications we're testing change.

It is evident that I come to testing from a different place. I approach it primarily as an exercise of figuring out things we don't even know we don't know, through spending time and thought with the applications we're testing. I expect to find illusions to break, and to show how things really are different from what we imagined they should be - in relevant ways.

I think of it as a quest for four types of information.
1) Known knowns - things we know with certainty
2) Known unknowns - things we know with caution
3) Unknown knowns - things we forget
4) Unknown unknowns - things we ignore

So many times over the years, I've been fooled by the unknown unknowns, my own self-certainty in my analytical skills, and lack of focus on the serendipitous nature of many of the bugs. But even more, I've been around to save those same developers, again and again, from their self-certainty in their analytical skills and complete ignorance of information beyond what they already remember.

The idea of orders of ignorance is powerful. And as a tester I come to testing much more from the idea of not knowing at all what I don't know, but with a keen quest to keep experimenting until I find out.

When I drew the image some years back, I was trying to find imagery related to building houses. We know with certainty things a house needs. A house without a door, window, or roof wouldn't be much of a house. Yet even with things we know for certain, we can end up with different expectations, because what one of us thinks is certain, another takes as a mere suggestion. We also know with caution what a house needs. Like when we know it needs windows, we might not know their exact shape, number, or position, but we certainly know we need to figure that out. With a sufficiently complex house, we start forgetting some of its nooks and need to rediscover what has become lost. And there are things we just completely miss that could end up shaking the very foundation of what a house should be like.

Thinking back to a particular example of testing a remotely managed firewall, it is also possible to map the activities I came across. I knew that if I introduced a rule remotely, it was supposed to show up as a rule locally. I knew I did not know if there was a rule name limitation, so testing for it made sense. I knew I had created rules before using the local UI and very short names were allowed, and trying it again reminded me that names as short as a single character were accepted. Yet when using a single-character name remotely through an API, I witnessed completely unexpected performance issues, resulting in us forcing a 3-character limit for stability reasons. All levels of ignorance were in play.

Sunday, March 18, 2018

The Lure of Specifications

There's a fun little exercise from Emily Bache called Gilded Rose. The exercise is intended as a piece of software to extend, and naturally you'd want to have tests before you go on changing it. Coming to it from a purer testing / tester perspective, my fascination with the exercise is in how people end up modeling the work.

Gilded Rose makes available a specification and the code to extend, with sample unit tests. When setting up the exercise, I hand people the spec, and create a combination approval test in the sample unit tests' scope that is easy to extend with new values.
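The combination approval idea can be sketched as below. Note the `update_quality` here is a hypothetical stand-in for the kata's code, not the real Gilded Rose, and in practice a library such as ApprovalTests handles the snapshot bookkeeping that the final `print` stands in for.

```python
from itertools import product

def update_quality(name, sell_in, quality):
    # hypothetical stand-in for the code under test: quality degrades
    # by 1, twice as fast past the sell-by date, and never goes below 0
    degrade = 1 if sell_in > 0 else 2
    return max(0, quality - degrade)

def combination_receipt(func, *value_lists):
    # run the function over every combination of the given values and
    # record one line per case; the whole receipt is then diffed against
    # a previously approved snapshot, so adding a value extends coverage
    return "\n".join(
        f"{args} => {func(*args)}" for args in product(*value_lists)
    )

receipt = combination_receipt(
    update_quality,
    ["ordinary item"],
    [-1, 0, 1, 10],   # sell_in values around the boundary
    [0, 1, 50],       # quality values at the edges
)
print(receipt)  # first run's output gets approved; later runs diff against it
```

Extending the test is then a matter of appending a value to one of the lists and reviewing the new lines in the receipt.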

The question goes as so many times before: how would you test this?

Given a specification, most people jump at the specification. As the first values based on the spec get added, I usually introduce the idea of watching code coverage as we add tests; some people pick it up, others don't.

This particular exercise lets me observe how people connect three types of coverage when testing: covering the spec, covering the code, and covering the risk.

The better ones have
1) Refused to follow the spec step by step, because someone else must have done that already
2) Thought of ways to test that neither the spec nor the code introduce
3) Not stopped testing at covering the spec when code coverage is still low.

There's something about a specification that drives people's focus, making them less likely to see other things without added effort. Sometimes, it might make sense to step away from the lure of the specification's answers, and see if the answers you'd naturally arrive at make any more sense.

Sunday, March 11, 2018

Building a relationship with developers

At a conference talk, I again haphazardly shared the idea of not writing bug reports. I call it haphazard because my talk was about increasing your impact as a tester, and it was just one of the "try this" solutions I shared. But it is one that rocks people's worlds and beliefs, makes them approach me with disbelief, and even come off as attacking me for having an experience and sharing it.

In all these interactions, I found a lot of value for myself, in recognizing how the environments I inhabit and set up things in can be different. While "stop writing bug reports" is the thing I say, what is really behind it is the idea of starting to pay attention to the cost-value structure of your work, with particular focus on opportunity cost. Each one of us has more power to decide what we do and how we do it than we realize. If we are asked to execute preplanned test cases and a manager asks us which ones we executed at the end of each day, we are more constrained than I believe great testing should ever be. Yet even in that setting, we can choose the level of focus we exert on each of the tests. We can add emphasis on some, quickly browse through others, and add our own ideas in between the lines. If we book a meeting with ourselves for an hour to practice using tools and approaches that don't fit into our normal day, most organizations don't even notice. And instead of asking permission for all this, think of it as a possibility to ask for forgiveness - but only if it is needed. 

The environments I inhabit are essentially different. It's been years since anyone told me specifically what I need to do, and what my next task is. Even the constraints that appear to be in place may not be real. But what is very real is the sense of responsibility and continuous value delivery. I know what good and better results could look like. And I know I don't know how to get to the best results without experimenting. 

So I experiment with stopping bug report writing. I end up working my first year in an organization where, on another business line, a colleague is being scolded for lack of "evidence of value": they don't write enough bug reports, and since they don't automate or review automation, they are not visible in the pull requests either. The number of Jira tickets I raise in the whole year can be counted on the fingers of my two hands. Yet the number of issues I find, address, and get fixed is different.

It isn’t easy to say to myself to take the harder route and go talk to the developer who could fix this, and seek actively a way to get a decision on it on the spot - it either matters (and we fix it now) or it doesn’t matter (and we fix it when we realize it matters coming back from the users). Delivering continuously supports - even enables - this way of working because you cannot leave issues of relevance around without them impacting the users, very soon. 

The harder route is rewarding though. In those moments where I used to enjoy my private time writing a bug report I could be proud of later (one that never warranted such care and love for any other reason than being evidence of "me"), I'm building a relationship with the developer that I need so that my work has real impact. I learn more in that interaction. I have a better chance of getting my message across. And more often than not, the bug turns not only into a fix but into unit tests too, in that little collaboration we end up having. 

When I need to choose between time spent writing a bug report and time spent communicating the bug in a way that creates a better relationship, I stretch for the latter. Not because it is comfortable (it isn't; sometimes the reactions are downright mean) but because it makes us and our software better. 

Friday, March 2, 2018

Results from No Product Owner Experiment

Four months ago my team embarked on an experiment to change the way the team worked in an uncomfortable way. We called it "No Product Owner" experiment because it captures one core aspect of it. It was essentially about empowering a team to be customer obsessed without a decision proxy, in an organization where many people believe in finding one person responsible to be a core practice.

Four months later, the experiment is behind us. We continue working in the product-ownerless mode as the team's de facto way of working. The team is still very much an experiment within the organization, and our ways are not being spread elsewhere as we in the team like to keep our focus on the team, technical excellence and delivery.

Experiment hypothesis

We approached the No Product Owner suggestion as an experiment, as it had many things none of us had experience with. There was still a person in the organization who had been hired to be the team's product owner. The team wasn't all super-experienced mega-seniors but a diverse group.

When thinking of the assumption we would be testing with this, we came to frame it as: a customer-obsessed team directly in touch with their customers performs better without a proxy. 

"Better" is vague, so we talked about looking at the released output from the team. Not all the tasks we could tinker on given full freedom, but the value delivered for the customer's benefit.

Happened before this experiment...

To understand what happened, there are some things that happened already before. We did not talk about them as "grand experiments" that would be shared anywhere. They were just our way of tweaking our ways of working by trying out what could work - and not all of it did.

We had experimented with backlog visibility: using post-its on a wall in the form of a kanban board, using an all-electronic kanban board, and not using a kanban board at all but a list of things we were working on within the team. The last worked best for us; we did not find much value in the flow, just the visibility (and discussions). We had experimented with the product owner's location in relation to the team, having him in the team room, and later on a different floor. We had learned to do frequent releases, and through that learning stopped estimating and focused on fast delivery of continuous value.

The frequent releasing, in particular, was the reins of the team, keeping us synchronized. The value sitting on a shelf in the codebase, not visible and available to our users, wasn't value but just the potential for it. It had transformed the way we designed our features, and helped us learn to split features, always asking if there was something smaller in the same direction we could deliver first.

We also had no scrum master. At all. For years. No team-level facilitator, and our manager is the very hands-off, always-available-when-called type, with about 50 people to manage. 

Introducing No Product Owner

I blogged about the first activity of No Product Owner already months ago. We listed together all the expectations we had towards a product owner, and talked about how our expectations would change. We moved the person assigned Product Owner into a role we framed as Product Management Expert, and agreed his purpose towards the team was very straightforward: requirements trawling. He would sit through meetings, pick up the valuable nuggets, and bring them back to the team for the team to decide what to do with the information.

The team embraced the change, and the level of energy became very high. The discussions were more purposeful. We started talking directly to our sales organization, and to our real customers over email and in various communities. We increased our transparency, but also our responsiveness.

In the first month, there were several occasions where the PME would join team meetings on a cadence, and express things in the format "I want you to...", only to find himself corrected. The team wants. The team decides. The team prioritizes. The power is with the developers.

And our team of developers (including testers who are also developers) did well.

From High Energy to New Impacts

Before starting the experiment, we were preparing a major architectural change effort, and there were certain business-critical promises attached to that change effort. As soon as the experiment started, we sat with sales engineers talking about the problem. An hour later, we had new solutions. A week later, the new solution was delivered. The impossible-without-architecture-change turned possible once we understood (and found motivating) the real needs and the real pain.

Throughout the experiment, I kept a diary of the new impacts. The impacts are visible in a few categories:
  • Taking responsibility for a real customer's experience. We had a fix that gets delivered in a complicated way through the organization's various teams, and we not only did the fix as usual, but followed through to the exact date the solution was available to solve the customer's problem.
  • Fixing customer problems over handing them off through prioritization organizations. We hooked real users with problems directly to the people fixing them. Throughput improved, and we did fixes I can claim we would not have done before.
  • Delivering customer-oriented documentation when there was a solution but it needed guidance.
  • Coordinating work across the organization at the level of technical details to increase the speed of solutions, removing handoffs.
  • Coming up with ways of doing requested features that brought down the risk and scope of first deliveries, enabling a continuous flow of value.
  • Coming up with features that were not requested, which the team could work on to improve the product.
  • Adding configurable telemetry to understand our product better in a data-driven way.
There were two particular highlight days.

21 days into the experiment, the team received feedback that their latest demo was particularly good and focused on customer value. When presented with the feedback, the team's reaction was "that's what we're supposed to do now" - we are customer-focused.

65 days into the experiment, the team realized that the last appearance of the product management expert in planning had been around day 55. There were other channels than the formal structure for maintaining a pulse on what might be important.

There was one particular low, or risky, day.

40 days into the experiment, the team reallocated three of its four programmers and one of its three testers to work on things outside the usual team scope.

Interestingly, the reallocation after 40 days took the already customer-obsessed developers and moved them to work on something where they could still carry the same sense of responsibility. The subteam ended up representing the business line in the cross-business-line effort without needing a role matching the other business line's product owner. Progress on the tasks, with high motivation and a feeling of empowerment, has also been great. 2.5 months into a 9-month plan, there is an idea that we might be done in 4 or 5 instead, while still bloating that effort with necessary improvements over just following the plan.

Team Retrospective

After the 90-day period, we had a team retrospective with the ex-product owner and talked about what had changed. The first, almost unanimous feeling was that nothing changed. Things flowed just as before.

The details revealed that there might have been change we did not appreciate.
We delivered about twice the amount/size of things of value as in the two previous 3-month intervals, all of it assessed afterwards through discussions, not through estimates.
We were more motivated, regardless of the team split that was temporary (even if temporary means 9 months).
We did things we were not doing before, without having to drop things we were doing before.

I can now believe in magical things happening in a very short timeframe. I couldn't before. Some things never reached us before - filtered away to help us keep focus - and turned into big things that could never be done.

We did not magically have more people available. But the people we had available were more driven, more focused, more collaborative, and believed more in themselves and in their ability to take things through to customers.

The value of not using time and energy on estimating became evidently clear when the taskforced subteam was inflicted on an environment where estimates were the core. The thinking around opportunity cost - what else one could do with the time spent estimating - became clearer.

Finally, we looked at what the Product Management Expert did. They reported higher job satisfaction and less stress. They reported they focused on strategic thinking and business analysis.

No one remembered any pieces of information that the PME's requirement trawling or strategic thinking would have brought in during the three months, so there is value potentially not delivered through to the customer (or work wasted, as it had no impact).

Improving the ways of connecting product management and RnD efforts is a worthwhile area to continue working on. There may be a need to rethink what an RnD team is capable of without an allocated, named product owner.

There were also some rumours that I've really assumed the de facto product owner role, but I assure you I haven't. Things flowed just as well while I took my 3-week winter vacation and spent at least another 2 weeks conferencing around the world.

Every single team member acted in the product owner role. Every one. Including the 16-year-old intern.

I couldn't be much more proud of my colleagues. It is a pleasure to change the world in our little way with them. Without a product owner.  

Grow your Wizard before you need them

Making teams awesome is something I care deeply about, so it is no wonder that the discussions I have with people are often about problems in that space. Yesterday I again suggested pairing/mobbing at work, only to receive cold stares and the unspoken words I heard at the last place I worked: "You are here to ruin the life of an introvert developer". I won't force this on people, but they can't force me not to think about it or care about it.

As I talked about the reactions with Llewellyn Falco, he pointed me to a story he has told many times before. And with "just the right slot" in my calendar, I'll go and write about it. He will probably make an awesome video when he gets to it.

Some of us have some sort of history with computer games. Mine is that I was an absolute MUD (multi-user dungeon) addict back in the day, and I still irregularly start up Nethack just for nostalgic reasons. In many fantasy games of this type, we fight in teams, with characters of different classes. If you play something that is strong in the beginning, you survive the early game more easily. Wizards at low levels are particularly weak, and in team settings we often reach points where we need to actively, as a team, grow our wizard. Because when the wizard grows to its high-level potential, leveling up with the others' support, that's an awesome character to have on your team.

A lot of times we forget that the same rule applies to growing people in our teams. The tester who does not program, and does not learn to program because you don't pair and mob, could be your wizard. At the very least, the results of being invited to the "inner circle", fixing problems by identifying them as they are being made, feel magical.

Just as in the role-playing games you need to bring the wizard fully into the battle and let them gain the XP, you need to bring all your team members into the work, and find better ways for them to gain experience and learn.

Pairing and mobbing isn't for you. It is for your team.

Friday, February 23, 2018

Assignments in Intent

We're testing a scenario across two teams where two major areas of features get integrated. In a meeting to discuss testing some of this in an end-to-end manner, where end-to-end is larger than usual, we agreed on the need to set up a bunch of environments.

"Our team sets up 16 Windows computers in the start state" seemed like an assignment we could agree on.

Two days later, I check on progress. I had personally installed 3 computers on top of what we agreed my team would do, and was ready to move on to using the computers as part of the scenario. The response I get is an excited confirmation that the rest of them are available too.

The scenario we go through has a portal view into the installed computers, and checking whether the numbers and details add up, I quickly learn that they don't. The ones I set up are fine. All the others are not. We identify the problem ("I forgot a step in preparing the environment" and "It did not occur to me that I would need to verify on the system level that they are fine") and agree on correcting them.

Two days later, I check again. It has not been corrected. So I ask where we are, only to hear that we are ready, which we are not. Containing the mild steam coming out of my ears, I ask if they checked the list where they could see whether things are fine from a system perspective, and I get explanations ("I don't have access", "I did not know I have access").

Another day passes and I realize there's a holiday season coming up, so I check again. They are not fine, but "they should be". I ask for a list of the computers, only to learn there isn't one. And I spend a few days tracking the relationship of the IP addresses (given by DHCP, changing over time), the only info given, matching them to image names and the actual statuses of getting things to work.

The assignment was made in intent. No clarifying questions were asked. The solutions and instructions given were dismissed. Learning did not show up in the process despite the repeating patterns. And finally, there was no consideration for the handoff that the planned vacation made inevitable.

This is the difference between a trainee and a senior. Seniors see and track things others don't.

Today I'm enjoying the results of this prep, finding some really cool bugs after having guessed some of the places where it would be likely to break. Having these issues now, and having them soon vanish knowing that my mention of them here is all I will have to show, is deeply satisfying.

Wednesday, February 21, 2018

Conferences as a change tool

European Testing Conference 2018 is now behind us, except for the closing activities. And it was the best of the three we’ve done so far. We closed the 2018 edition saying thank you and guiding people forward. Forward in this case was a call to action to look into TestBash Netherlands, which is in just two months in Utrecht. I will personally attend as a speaker, and having been to various TestBashes, I’m excited about the opportunity to share and learn with fellow test enthusiasts.

This promotion of other conferences is yet another thing where we are different. We don’t promote the other conferences because they ask us to. We don’t promote them because they pay us to. We promote them because we’ve learned something before we started organizing our own conference: we all do better when we all do better. 

At TestBash New York, Helena Jeret-Mae delivered a brilliant talk about career growth, with one powerful and sticky message: in her career, while she stayed in the office and focused on excellence at work, nothing special happened. But when she went to conferences, met people and networked, things started happening. She summed it up as “Nothing happens when nothing happens”. There are side effects to growing yourself at conferences, learning and networking, that create a network effect relevant to advancing your personal career. This resonated.

At European Testing Conference 2018, there was a group of people in different roles from the same company. There was the manager, and there was the person the manager would manage. Telling the person to do something differently had not resulted in a change. Sending the person to a place where people enthusiastically talked about doing the thing differently made the person come to the manager with a great idea: there’s this thing I’m not doing now, and I want to do more of it. Ownership shifted. A change started. The threshold of thinking “all the cool kids are doing this” was exceeded. The power of the crowd made a difference.

While we would love to see you at European Testing Conference 2019, the software industry is growing at a pace where we realistically see the need for awesome testing and programming education (tech excellence). We need to grow as professionals ourselves, but also make sure our colleagues get to grow. We all do better when we all do better. We suggest you find a local meetup, learn and network. Go to conferences, go to great conferences. Go and be inspired. The talks can give you nudges with ideas; the skills you acquire by practice. Sample over time, and always look for new ideas.

The short list of conferences I pay attention to mentioning consists of ones I recognize for being inclusive and welcoming, and for treating speakers right (paying their expenses and including new voices amongst seasoned ones). I want to share my love and appreciation for the TestBashes (all of them, they are all around), Agile Testing Days (in the USA and Germany), CAST (USA, since the latest editions) and Nordic Testing Days. The last one is a fairly recent addition to my list, now that they’ve grown into a solid success that can treat speakers right.

I enjoy most of the conferences I’ve been to, and would recommend you go to any of them. I have a list of my speaking engagements, and the list of places I’ve experienced is growing.

What’s the conference you will be at this year? Make sure there is one. Nothing happens when nothing happens. Make things happen for you. 

Introducing intentional vs. accidental bugs

There was an observation I made on the psychology of how people function in a previous job that has kept my mind occupied over time.

When I joined, our software was infested with bugs, but the users of the product were generally satisfied and happy. It appeared as if the existence of bugs wasn’t an issue externally. It was an issue internally - we could not make any sense of schedules for new features, because the developers were continuously interrupted by firefighting for bug fixes.

Over time working together, we changed the narrative. The number of bugs went down. The developers could focus better on new features. But the customer experience with our product suffered. They were not as happy with us as they had been. And this was puzzling. 

Looking into the dynamic, we grew to understand that the product had a product component and a service component. And while the product was great, the service was awesome. And there was less of the service when there was no “natural” flow of customer contacts. If a customer called in to report a problem and we delivered a fix in 30 minutes, they were simply happy. Much happier than without their need for our service.

This past experience came to mind as we were organizing European Testing Conference 2018. Simon Peter Schrijver (@simonsaysnomore) was awesome as a local contact, focusing on a good experience with the venue by making things clear and planned in advance. As a result, things flowed more smoothly than ever before. There were “changes” as we went on setting up sessions and reorganizing rooms, and those changes required the conference venue to accommodate some unscheduled work. While we felt we could do this ourselves, this venue had a superb standard of service (highly recommending the Amsterdam Arena conference center) and would not leave us without their support.

Interestingly, some of us organizers felt less needed and necessary when there was less firefighting, bringing back the memories of the service component from the past. Would there be a way of knowing whether people were happier with our past years’ quick problem resolution (we were on it, so promptly) or this year’s feel of everything just flowing? Whose perception of quality mattered? Interestingly, in retrospect we identified one problem that we had both in earlier years and this year. In earlier years we fixed it on the fly. This year we did not fix it, even though we should have had equal (if not better) financial resources to act on it. I personally experienced the problem with the microphones, and failed to realize that I had the power to fix it. I can speculate about the feeling of “executing a plan” vs. “emergent excellence”, but I can’t know the real causes of the effects.

This brings me to the interesting question of introducing intentional vs. accidental bugs. If problems, while they exist, make things better as long as we can react to them quickly, would moving from accidental to intentional be a good move? Here the idea of opportunity cost comes into play: are there other, less risky ways to create the pull of service than creating the need for the service with bugs?

With the software product, we needed to invest more in sales and customer contacts to compensate for the natural pull the bugs created for the customer to be in contact and nurture our mutual relationship. Meeting people on content can happen more at a conference with fewer issues to resolve. Did we take full advantage of the new opportunity? Not this time. But we can learn for the future.