Friday, October 30, 2015

Confidence plays a role in testing


It does not take a lot of confidence to say that a typo fix in code (or in resource files amongst all the code) is on the smaller-risk side of changes. Since I do those changes rarely, I tend to move slowly and review what changed, both when changing and when checking in. I build the system and check the string in its use. Sometimes I think that's what I could expect developers to do too, and most of the time they do.

But sometimes, when the change feels small and insignificant, we grow overconfident. There's nothing that could break here, surely? Just a typo fix!

I know I have accidentally cut out the ending quotation mark. I have introduced longer strings that don't fit on the screen. I have fixed a text resource from the point of view of one location where it's used, when it's used in many places and another one has issues. I have wanted to fix a "typo" of missing dots on top of an a, only to learn that it's not a typo, but that Scandinavian alphabets can't be used there. And what is a typo to me might not be a typo for someone else, like US vs. UK English.

So surely, I am aware that nothing is safe. But I would not agree that I cannot do the related checks and have the discussions myself. And if I worked on a high-risk application (which I don't), I would probably still involve someone else in my analysis. I'm not looking for a one-size-fits-all solution; I'm against one.

But in my context, it just makes little sense to have everything checked by another person. It somehow boils down to confidence, which reminded me of an insight a friend shared from Lean Kanban Nordic a while ago:
I'm not very confident when I go fix typos in code. I spend so little time with code. I know what I'm doing, sort of, but I'm also very aware of things that could go wrong. My lack of confidence makes me double-check. But it does not make me include another person just on principle. Then again, I also know that I don't have to ask anyone to look at the code after my change: version control alerts the developers to my changes, and almost certainly someone is looking for a chance to point out if I could learn something from my mistakes.

The discussion also led to this question:

Here's where I am confident. I don't think my existence in (this or any other) team is based on the developers' lack of ability to check their own code. I've seen how brilliantly my team's developers test when I sit with them, silently, without making them do better testing with my advice. My presence, my existence and my aura of higher requirements seem to be enough. It's not (just) about skills; it's about habits and differently developed interest profiles.

I regularly let developers test their own code (as if it were up to me to let them; they always test their own code, and there are 10 of them and just one of me). Sometimes, I specifically speak out loud about putting their code into production without me testing it, just to assess the risk and remind us of our agreement: my effort is something we add on top of all the testing they've already done; it's exploring for new information. And it's best if we can do that together, so that the knowledge of how to do this sticks around when I'm not around.

They don't need me. They benefit from having me.

  • They get faster feedback on complicated things end users might (or might not) report back to them.
  • They avoid building some things that aren't going to be valuable, because I get heard when I speak the business and end-user languages of needs, concepts and value.
  • They get to rely on a close colleague for asking questions or pondering choices while implementing, as they learn more of what is possible. That colleague is available more often than the end users and business people, and knows the product from learning more about it, hands on and through all the channels developers don't find as fascinating.
  • They get regular positive feedback when they excel in creating good things. I see what they really do and compliment them on the actions they take. They don't have to be perfect now, but they get recognized when they improve.
  • They get encouragement to practice, to get better, to share things with me and with each other.
  • They get praise for building great software, knowing they would not have built it as well without the deep feedback I helped them gather.

They don't need me. But they tell me that I'm a catalyst who makes us all better. That I voice hard things they wish they knew how to say. And that together we're just better. I'm just different. I bring other viewpoints to the table. And I'm confident enough to no longer have to measure my value in bugs in Jira or in lines of code changes I check.

Too much and too little confidence are both warning signs. With a bit of ping pong in between and a dose of healthy criticism, there's a great opportunity to learn. Sometimes you succeed, sometimes you fail, but you always learn.

Thursday, October 29, 2015

You don't need a different person to test what you did

If it is unclear to anyone reading my blog, I'm very much a tester. Nothing I do outside the domain of what testers usually do changes the fact that I love testing, I care for testing and I want to learn about testing. There might be days when I'm the product manager. There might be days when I'm the programmer. And the more people tell me that there are things testers don't do, the more I go and break down a wall that is just imaginary.

On the other hand, I've been a tester for 20 years. I work long, intensive days learning my craft. I pay attention to how I test. I pay attention to where my ideas come from, and I collect perspectives and ways of getting into viewpoints that everyone else in my team misses. I'm getting pretty good at that, and yet there is so much more to learn about how to test in a world of unknown unknowns, too much information and a myriad of connections. It's the best thing ever. And it's hard work.

It's hard work that hasn't given me time to do manual regression testing, because I keep coming up with new perspectives (and I think manual regression testing can also be done by developers, increasing the likelihood of it turning into automation). It's hard work that hasn't given me enough time to learn to be as excellent in programming as I could be if I had chosen differently. But testing is my superpower; it helps my team do awesome stuff together. It helps our product manager come and thank my team for delivering consistently working software, and it lets me be proud of my developers, who invite my contribution (even if still way too slowly) and actively act on the feedback they can get from me.

You become great at testing by choosing your focus like a tester chooses. Practicing testing. Programming is fun, but it's different. It creates different thought patterns than the focus on bugs, value, business and systems. To build great software, we need both thought patterns, preferably in close collaboration.

I was again tweeting today about the stupidity of my own thoughts in hindsight, realizing how much effort I've wasted (without creating any additional value) by logging bugs about user message typos into the bug database, instead of skipping the logging and just going in and fixing them. Only fairly recently have I talked my way through the layers of resistance to get access to the source code, still causing regular stress to developers who see I've changed a file they consider their own to fix typos in strings. But that takes us forward. It's time-saving for us all that I approach different problems differently. Changing a printed string to be two letters shorter has implications, but just as developers are able to check their changes, I check mine. The tester in me is strong; it has no problem overpowering the programmer in me.

But when I tweet about things like this, my colleagues in testing remind me of how much my mind has changed. This is a great example:
I too used to think the relevant bit was having two people, a programmer and a tester. I used to think there were things a tester should do (and nothing would stop a programmer from taking the role of a tester, except skills): the check-your-work part.

What I've learned, looking at things in more detail and with fewer abstractions, is that the changes we make to code are not all the same. For non-programming testers (one of which I still consider myself to be most of the time), the changes made in software to make it work can appear more magical than they are. And dispelling the mysticism that surrounds software is very much necessary. When I know from hundreds of samples the impact on our system - rehearsing again and again, delivering continuously to production - that user string fixes are often safe, I don't need to go ask some different person to play the role of tester for me. I know I'm biased about my own work, but not everything needs to be treated the same. That's one reason why I love the idea of context-driven.

Find your rules. Learn their weaknesses. Sometimes (more often than not), explore around them. But always, always be aware that in a whole team of us developing, time used on one thing is time away from something else. A handoff from one person to another is more work than a handoff from the programmer in me to the tester in me. The choices we make are supposed to be intellectual. The best that bright people can make together.

You don't need a different person to test what you did. But someone should test the change. Just like I don't test every change from developers, they don't need to test every change from me. We can ask for help when help is needed.




Wednesday, October 28, 2015

A bug that taught that we're implementing too much

There's a bug we've been analyzing that emphasizes things in a relevant way.

At first sight, it appears to be just a resizing / layout issue. An extra white box that emerges when you resize your window.

At first, we let time fly by with a high-level discussion between two developers on what could cause it. One, responsible for styles, appeared convinced it would require functional changes to fix. The other, responsible for technologies, appeared convinced it would require style sheet changes. With the uncertainty about the skills needed, it wasn't going forward while the second person did not have time to jump in.

Finally the two paired up and started investigating. I felt very proud watching them triangulate the cause by removing factors and simplifying the problem to understand it better. No rushed "let's try this" or "maybe this would fix it", but building an understanding before jumping to conclusions. We had already had our share of conclusions aired while not investigating deeper, and speculation isn't the right thing here. They removed all self-built styles from the component we're using, and the problem disappeared. So now we know the second developer was right.

The main insight, however, is the discussion our team ended up with next. We realized there were hundreds of lines of style code (in Less) that made very small changes to the (relatively nice) styles the commercial component ships with. And our aspiration to tweak all the details of the layout was causing us to spend, repeatedly, a significant amount of time testing, debugging and fixing problems. What if we redesigned our approach to styles from a maintainability viewpoint; how would that change things? What if we actively had less implementation, since many of the style tweaks are driven not by end-user value but by the possibilities of all the things we could do?

From a small fix, we're moving into shared work of creating less software to maintain. I find this way of thinking insightful enough to note down and share. We're a small team; we can't afford not to react to maintenance burden. But it has been a long route of added collaboration and shared ownership to get to a point where these discussions and lessons naturally emerge, and are taken forward.

Is this testing? I might think so. It's very much my business to bring forth the cost of testing complicated structures in order to get them simplified.


Monday, October 26, 2015

Don't #PayToSpeak, join European Testing Conference as a speaker

I'm an idealist who works hard to change the world. European Testing Conference is one tool for me to change the world as we know it - the world of conferences. And the aspect I want to change is having to #PayToSpeak.

When you #PayToSpeak, many presentations have a sales-pitch slant to them. Not all. But some. There has to be some reason for someone other than the conference organizer to pay for you to show up.

I go to quite a number of conferences. I review conference proposals for some. Some of them I have very high respect for and would recommend to others without blinking, both as a conference participant and as a speaker. These two tend to be connected, though. Good speakers create better content experiences for the participants.

The #TestBash series (TestBash, TestBashNY, TinyTestBash) is at the top of my list. The amazing Rosie Sherry works with integrity unlike any other, pouring her heart into making the events great both for participants and for speakers, and succeeding year after year. Rosie's conferences don't make the speakers pay for speaking: she covers travel and stay, and her actions show she realizes how much work speakers (like me) put into their presentations. It is only fair we don't #PayToSpeak even if we are not paid to speak.

The second category of conferences I vouch for is ones where the community is so strong that the contents turn out great, even though the speakers pay for showing up. I'd like to recognize CAST and Nordic Testing Days in this category. I have paid to speak at CAST (and it costs a lot to travel there from Europe, and my employer does not pay for me!) and I could speak at NTD since it is so close by. I hear Copenhagen Context and Let's Test might be similar, but I have no personal interest in them so far, for very different reasons.

The third category is conferences I have spoken at, but would no longer speak at unless invited (and paid for). These include EuroSTAR, STPCon and other typically commercial events. I get that their commercial success is partly based on volunteer speakers, but I also believe it means they get a very biased view of the world of testing. It's sales-oriented and new-speaker-oriented: new speakers who still seek to invest in having their first mentions of reputable conferences under their belt.

The fourth category is conferences where you pay an entrance fee to speak. These are usually framed as being from the community to the community. Sometimes they look like commercial conferences (like XP201x), sometimes they are open space conferences. For open space conferences, I get the idea that everyone pays the fee, but it tends to be cheaper and then extends to everyone.

European Testing Conference seeks to join the first category. We believe that great speakers with practical messages to share should not #PayToSpeak, quite the contrary. So we pay for travel and stay. And when we are financially successful, we will also create a model of paying honorariums for the work. Creating a presentation is a lot of work. It's valuable. It's the second main reason people should join conferences. The other is to meet the community. But the content we confer around is relevant.

Have you already let us know about your interest in speaking and the story you would have to share? Look at our call for co-creation. And if you are not a speaker, did you get your ticket already, to learn from some of the greatest speakers we can find on the basis of paying them instead of making them pay? We've published 3 of 4 keynotes, and the ticket price goes up again when all of the speakers are announced, so get yours now.

A course on testing, pairing and mobbing

I had a great time delivering my Exploratory Testing Work Course in Brighton before TinyTestBash last week. My goal on that course is to teach people to recognize some critical self-management techniques that make a difference for better-quality exploratory testing, mainly keeping track of both the threads of details and the higher levels of planning and backtracking, in a combination that is right for you personally, in the frame you feel you are in today.

This is a course I've done many times, pairing people up for the five sessions of testing. This time, I did something different based on what I've learned about how to get everyone in the class to learn better. I had people pair strong-style in the morning and test in a Mob Programming format in the afternoon.

Strong-Style Pairing on the course

I've had people pair before. But this time, I was very specific about how I wanted people to pair. I asked one to be the driver, who would have the keyboard but who was supposed to keep listening to what the navigator says, making no decisions of their own about what to do but always checking with the navigator. I asked the other to be the navigator, who would actually be the tester, but with access to the keyboard only through the driver, no touching the keyboard. In strong style, all ideas from one head must go to the computer through someone else's hands.

There are essentially two things that go on the computer while testing:
  • using the program
  • making notes
I looked at the groups doing this, and most groups instinctively had the person who was not on the computer take the notes. The trouble with that is that while it may seem faster, it removes the feedback loop on whether you actually agree on what is being written, and it creates distance in the pairing.

In the first session, I suggested that people could change roles along with ideas. Most groups did not. So for the second session, to improve on that, I introduced a must-change rule every four minutes that I called out.

All the playing with pairing was not to teach the people on the course pairing, but to make sure they taught each other testing by really sharing the activity through strong-style pairing.

When I called for observations, my own observation was that people paid comparatively more attention to the pairing: how different (better) this style was, and how they had no idea there were different styles of pairing. We were on an exploratory testing course though, not a pair testing course, so I was hoping for the pairing to give me a way to teach testing (have the pairs teach each other), but people did not vocalize much of that in their observations. So it was good I had something different planned for the afternoon.

Mob Exploratory Testing on the course

In pairs, I can introduce rules and hope people will follow them. I can try mixing up people so that the pairs end up diverse, but usually course logistics set some limitations here. But I can't see in detail what goes on in each pair, or teach them better testing. The mob format is different.

In the mob format, I can step in as navigator whenever I feel I need to show or teach something to the whole group, in the context of what we are trying to test right now. I can make sure we as a group stick to a given charter, and at least divert from it intentionally when we do. And everyone in the mob can contribute ideas to make this one task's output better.

For a course, it is a big mob, but since we had half a day, that is not a problem. I preferred having all 18 in the mob over the style I use in shorter conference sessions, where I choose a subgroup to do the mobbing and the others just observe. Everyone gets their time in each role more than once, and everyone can contribute to the hard tasks.

I handed out a Mindmup document as the place to take shared notes in, and someone from the group asked if it would be better if someone else took the notes. This question is so common in mob testing that I need to learn to address it better. Shared notes are not the same thing as private notes, and everything created is supposed to go through two people, as mobbing uses strong-style pairing too.

With the change of modality from pairs to mob, I also changed the application we were testing. The reason, as I told my group, is that I've seen people with previous knowledge of the application move from testing for new information to showing off what they already know, and I wanted to level that knowledge.

I introduced a planning-oriented charter of identifying what there was to test in a very specific part of the software, and I watched the group learn that by testing it. Sometimes they would see something but miss noting it, and I would step in to make a note of it in the Mindmup document with the driver. It was interesting to see how the task turned hard once the obvious things had been noted, and the mob still kept contributing more ideas, finding hidden features using common conventions of where you can find functionality in a user interface.

We also worked on a more detailed, testing-oriented charter, only to run into a bug that I had not seen before. We changed our task to logging that bug properly, and it turned out to be the most difficult thing we had done all day. As a mob, we needed to agree on what we were reporting and to what audience, and the format brought out the diverse opinions in the group well for us to discuss.

Thoughts for the future

I'm weighing two options for the future setting of this course. Either I will do it again like this, since people get to test so much more in pairs, or I will spend the whole day mobbing to take the whole group deeper. There is so much testing I could help everyone get better at, either by pairing with them or by facilitating a mob for them.

If you feel you could teach testing to others, try teaching in a mob format. It gives you a whole new power for helping your students out of their specific problems. And let's face it: every student deserves the chance to teach something new to their teacher too. In testing, everyone has special insights. And sharing those is the most awesome thing I can think of, today.


Saturday, October 24, 2015

When do you take a joke too far?

I have a heuristic that I use nowadays. When I feel I should not write about something because it's sensitive and could be just my view, I go against my instinct. There's a corollary: some things like that are better dealt with not publicly, and sometimes there is a fine line between what to blog about and what to deal with by email. Blogging is more of a self-reflection than an action.

I just had a great time at TinyTestBash in Brighton. So many amazing people. So many great discussions. Inspirational new speaker talks. And an overall sense of belonging.

But there's one thing that left me thinking. There's a TestBash meme going on with one particular person and a tutu. A tutu, as in a ballerina skirt.

This meme was around at the main TestBash to the extent that the person at the heart of it included it in his talk, wearing a Desmond Tutu hoodie and making remarks about not wearing a tutu, the skirt. It was all in good fun, and everyone seemed to be taking it as a fun thing.

The tutu theme continued at TinyTestBash. A tutu was made available for the person at the heart of the meme, and he again refused to wear it. But this time it was different. I felt it was on the brink of too much. It might be just me who thinks this way, and I might be transferring my feelings onto someone who has none of them.

Here's my line of thought. If I were the constant center of a joke that I considered funny at first, I could feel very uncomfortable when that joke turned out to be the thing that defines me to new people. And at that point, I would have two options. I could get visibly upset and tell everyone to just stop it. Or I could laugh along, but find it just less funny. Kind of like the laugh I give when I hear very gendered jokes about my gender. Not funny, but not laughing is socially the worse option.

I think we might need to stop and think about when we take a joke too far. I borrowed the tutu from him and wore it for the day. Then again, a tutu on me is normal, not funny. Just for the fun of it, I could wear a tutu for my next talk at TestBash NY, just to show that the tutu has moved on.

I think we should stop and think about whether we're about to take a joke too far when the joke becomes the thing to talk about with that particular person. And in case we are, how do we change the joke so that it becomes positive in a different way? The TestBash spirit brings forth wonderful jokes and memes, like the TestBash briefs that we saw handed out this year. There's a time for every meme. It might be time for the tutu meme about one person to go away or transform into something different.







Tuesday, October 20, 2015

From testing the need to building the program

Today, October 20th, is the last day we sell Super Early Bird tickets (350 euros) for the European Testing Conference. All of our tickets are cheap in relation to the conference contents we're setting up, but this is ridiculously cheap for a 2-day professional conference in a high-end location in Bucharest, with great international speakers.

Two weeks ago we set out to test the need for this conference and the support in the community, by setting a goal of people showing us they want us to do this by buying the tickets. On the last day, we are at 85% of our goal, and we believe you will take us above our set limit. Two weeks ago, I was feeling moments of despair, fearing what the decision to test would reveal. But it is revealing that you're with us.

We've started our call for co-creation. That is to say, this event is from us the organizers, as part of the community, to the overall community. We think we know some great speakers, as we follow our craft intensely and have been around. But we also know that our sample is limited to the visible contributors from around the world. We need the community to help us find the others so that we can reach out to them, inviting them to share with us for you.

We do not limit ourselves to the call for co-creation. We seek the best speakers and contents, both with you and without you, co-creating them with the speakers. And we believe we can do this, because we have set our conference up, uniquely, to pay the speakers for the work they do. With this, we're changing the world of testing conferences, which sometimes appears to mostly enable people whose companies have something to market.

A few people have asked why they should buy a ticket without knowing the content. My answer is to look at the list of organizers and the sessions we do at conferences around the world, which sets a bar (and not a low one) for the contents we will be offering, and then decide whether you would trust us with your money, to be invested in the best possible testing learning experience, combining testing as we know it as developers, testers and analysts.

So far, we've intentionally revealed only that Linda Rising will be with us to encourage us into the days of practice. Linda is the author of the book Fearless Change, and she speaks, from a vast experience of being around a while, straight to the hearts of people. When I first heard her speak at Turku Agile Days, I left the room crying, and I wasn't alone. Her talks move people and change the world. And now she joins us to change the world of testing.

I will make sure I can say equally positive things about every one of our speakers, not just the keynote speakers. That's why we co-create. We get to know you. We want to help you shine. And through this, we all win. We're creating a balanced conference of practical testing, with an agile slant, as we believe in fast feedback. You'll want to be there.

Oh, the normal ticket price is now available: it's 750 euros. The Super Early Bird price is available only today.


Sunday, October 18, 2015

Writing a blog or writing a book

If you read my blog, you must have figured out by now that I like writing. Writing is my way of externalizing my thoughts, making notes of them and finding bugs in my thinking patterns. I forget my thoughts, and going back to them helps me see things with fresh eyes. Now I'm moving from writing a blog to writing books, discovering LeanPub.

A theme I have been writing about a lot is pair programming (strong-style) and in particular mob programming, which includes the same pairing mechanism as strong style but extends it to the whole team.

At Tampere Goes Agile 2014, I invited Woody Zuill to share his story on Mob Programming with my community. Since then, I've been practicing to get towards it, and I've learned that Mob Programming is also a gateway to pair programming for pairing-reluctant developers. It feels safer as a first step, and it shares the responsibility of growing those who are behind in their skills development in some area. At Tampere Goes Agile 2015 just this weekend, someone phrased it nicely: a year ago they heard about Mob Programming for the first time; now it feels like it's everywhere.

There are companies like Mystes in Helsinki that have several teams mob programming full time and sharing their experiences locally, for example at the Scan Agile 2015 conference. There are people like Woody Zuill and Llewellyn Falco, who have both been around since the inception of the concept and keep showing up. Llewellyn now lives in Finland, and Woody has been back twice since his first visit at Tampere Goes Agile 2014. I've experienced a bunch of coding sessions in mob format to learn refactoring and test-driven development. And I've started experimenting with mobbing at my work; in particular, I've transformed all my exploratory testing training sessions into a mob format during the last year. It just works so well for learning.

From a discussion with Juho Vepsäläinen, I realized that the work I do in the background (writing a book) is something I should move into the foreground, as other people energize me. So yesterday, together with Llewellyn Falco, I published the first version of one of our books in progress.


Today, we have sold 3 books for money. The support of people while writing can make a world of difference. It makes a difference for us. Writing when you know there's someone who wants what you're writing creates a different feeling. I can only call it "driven". This book goes forward, and it will be awesome.

I feel I need to add thanks to all those people who tell us stories of their struggles with introducing mob programming: how it feels wrong and how it fails. That comparison is already helping us isolate some of the patterns you might want to recognize, patterns that help you succeed and get well started.

It's time for me to learn how writing a book, LeanPub-style, differs from writing a blog. This should be fun!




Friday, October 16, 2015

Four sessions of remote pair testing

Sometimes I feel alone and isolated as the only tester amongst the developers in my team. I start to question my learning, my observational skills and my ability to work with people in general. Pairing up with other testers on an exercise helps with self-doubt. Remote pairing with volunteers has also been a relevant step in my ongoing journey to learn to be a better facilitator and participant in mob programming (testing activities in mobbing in particular).

In particular, after a weekend testing session where remote pairing was a hard experience to deliver through writing and instructions, I asked people from that group to give me an hour and experience it with me.

The setup for pairing was to use my computer, sharing control of it through the join.me application. We would have a Mindmup document to make notes in. We would have Dark Function Editor as our system under test. And we would have a recon mission, learning what there is that should be tested, as our starting task. And we would change who was driving (using the keyboard) and who was navigating (saying what to do) every four minutes.

The four sessions I had were completely different in how they felt to do as a pair. The four sessions also had very different outcomes in terms of what we learned about the application within that timeframe.

From the shapes and colors in the mindmaps, you can already see some of the differences. One session put much more attention on the software, trying to understand as much as possible there without making notes regularly. One session was very observant of bugs. Two sessions focused heavily on identifying the features, noting only occasional bugs. It's hard to do it all at once.

Here are some notes I wrote down on the session that I enjoyed the most:
  • She made me work at a higher level of abstraction, giving me small tasks without detailed instructions, with instant feedback to correct me if I misinterpreted what she wanted.
  • We noticed things together, and found bugs in the application I had not seen before.
  • She knew what she was doing, even though she usually tests backends and logs.
To contrast, here are notes on another session, very productive output-wise, that I did not feel at ease with:
  • Being very focused on what is a bug, and knowing the right answer about a product we don't know, made me feel like my product, even I myself, was being criticized.
  • Strong assumptions, not testing for what is plausible out of what we think we already know; overconfidence in coverage in a short time.
  • Not staying with the pair but going off alone to prepare the data - a different idea of working as a pair.
  • As the driver I kept asking "what would you like me to do", "did you notice", "should we make a note of that" - feeling left out.
  • Could not change roles while "in the middle of something".
  • Pointed out a type of functionality I had not paid attention to before: tabs.
  • Not "pairing" but "working individually with someone watching".
  • Needs a longer timeframe to build into collaboration, not a skills demo.
The other two I did not make as specific notes on. I remember really enjoying the flow and ease of one, and enjoying the other in a very different way, as we started from basic ideas of how testing works.

My main lesson: driving and navigating are skills that take practice. I can learn from everyone, but everyone also teaches me very different things about myself. Practicing with different people is good, and regular reflection on how the pairing makes you feel creates an environment where the experience keeps moving forward. But there are some hard messages to deliver sometimes about how you feel. It's not about getting the most out of the two people, but the best. Both need to be contributing. This experience also gave me a glimpse into why pairing can make people quit: the inability to hide, while much is required of your individual contribution, creates extra stress.



Monday, October 12, 2015

Unit testing is about Asserts

There's this weird state of mind that I'm in with unit testing. I read unit tests, I talk about unit tests, but I rarely write any. Still, I've been around them enough to start to think I'm recognizing some patterns there, and to know when the stuff being suggested to me is useful and good, and when it is not.

Last week, one of my teams had a unit testing training. My motivations for participating were twofold. I was really curious why they had set learning unit testing as their target (now that they no longer have a professional tester working with them, the rumor is that they might be struggling more with delivery quality). I also wanted to see how the new training company was doing on the topic.

Out of the three hours, we spent two on theory and one on hands-on work. The theory was the usual stuff: refactor to find functional bits you can test. Isolate the bits that make it hard for you and leave them out of your tests. We also looked at my team's tests, without finding a single good example. There was a lot of bad out there.

The hands-on work focused on removing the need for the profiler in the tests. We had been heavy on mocking with JustMock, which made testing possible, but to an extent it made the tests slow, doing some things for us that weren't needed. So we were removing dependencies on the profiler.

While looking at the examples and the tests we were trying to change, my eyes kept going to the asserts in the tests. That's where the bit of testing within the tests happens. And I could not help noticing how weak the asserts were. I had been primed to pay attention to that.

At the start of the same week, I had just listened to a talk by Llewellyn Falco titled "Common Path for Approval Testing - patterns for more powerful asserts". Perhaps that is why I made the connection: all of the asserts I was seeing were the first step. We would assert numbers and boolean values for existence. Nothing more advanced. Nothing more meaningful. And asserting simple things is simple, but it leaves a lot to hope for in the perspective of actually noticing that things break, and, when they break, being able to analyze what is going on. The picture introducing step 6 (diff tools), doing things on failure that can be slow but that happen only on the items that fail, was an eye-opener to me.
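To make the contrast concrete, here is a minimal sketch in Java with JUnit 4, not our actual .NET/JustMock tests; the tiny parser and all the names in it are made up for illustration. The first test asserts only existence and a count, the way the weak asserts did; the second asserts the content we actually care about, so a failure says which value broke.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class LineItemParserTest {

    // Hypothetical system under test: "2 x Coffee; 1 x Tea" -> line items.
    static class LineItem {
        final int quantity;
        final String name;
        LineItem(int quantity, String name) { this.quantity = quantity; this.name = name; }
    }

    static List<LineItem> parse(String order) {
        List<LineItem> items = new ArrayList<>();
        for (String part : order.split(";")) {
            String[] fields = part.trim().split(" x ");
            items.add(new LineItem(Integer.parseInt(fields[0]), fields[1]));
        }
        return items;
    }

    // Weak asserts: existence and a count. A change that garbles every
    // name or quantity would still pass this test.
    @Test
    public void parsesSomething() {
        List<LineItem> items = parse("2 x Coffee; 1 x Tea");
        assertNotNull(items);
        assertEquals(2, items.size());
    }

    // Stronger asserts: check the values we actually care about, so a
    // failure points at which value broke and how.
    @Test
    public void parsesQuantitiesAndNames() {
        List<LineItem> items = parse("2 x Coffee; 1 x Tea");
        assertEquals(2, items.get(0).quantity);
        assertEquals("Coffee", items.get(0).name);
        assertEquals(1, items.get(1).quantity);
        assertEquals("Tea", items.get(1).name);
        // Approval-testing tools take this further still: verify the whole
        // output as text and open a diff tool only for the tests that fail.
    }
}
```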

With all of this, I was left to wonder. Having weak tests that run faster cannot be the goal we should be aiming for. When there are many things to work on, how do teams really end up choosing what to start from? This choice, looked at from a tester's perspective, just makes little sense. If testing happens somewhere in the unit tests, the asserts seem like the place to pay attention to. Thus I'm very drawn to the idea of making them more powerful.



Friday, October 9, 2015

Contributing in mob programming as a non-programmer

Mob Programming is about the whole team working on one computer, taking turns driving at the keyboard (no thinking while on the keyboard) and, while not on the keyboard, navigating with the rest of the group. It's single-piece flow: everyone working on the same value item.

Here's a worry that re-emerged from a discussion I had, something I've needed to address for myself.
What if I'm a non-programmer, learning but not contributing, just slowing the others down?
Well, I'm a non-programmer and I've volunteered to sit in mobs. I continue to volunteer, to the point that I'm working on convincing my team to mob more with me around. So I've needed to address this, and here's what I've learned to pay attention to.

  1. Being there "holding space" for testing/quality has value too.
    There's a weird phenomenon I observe when I mob with my team. Quite often during our day, one of the developers glances at me, without me saying anything, and says "should we test that?" or "we'll need to fix that". It's as if they are somehow on their best behavior on testing/quality, knowing that if we step away from testing, I'm there to remind us - without me even reminding. They work better when I'm around. Sometimes they even acknowledge that in the retrospectives.
  2. Being there as "the only woman" or "the only non-programmer" has value too. 
    I don't really know how my team mates behave when I'm not around. But I've listened to others enough to recognize that me being different has value too. It might again be that people are on their better behavior. But it also seems that me being slower in getting what goes on makes them clearer in navigating. And there have been observable instances where speaking with me in mind makes things easier for the other developers.
  3. My learning has value in the long term even if not immediately
    When I'm there, I'm learning a lot. I'm learning about details of the implementation, the dynamics of what is easy and difficult, the ways developers go at problems and the syntax of the language we're working in. Seeing my developers work creates different models for how I test when I'm without them, knowing where they seem strong or weak.
    If I am just learning today, that learning is in use for me the next time we do this. And I get to live up to my motto: every day at work makes me, my brain - the most important tool in testing - a little better. I'm sticking around; I matter just as much as the next guy.
  4. Driving and allowing others to navigate has value too.
    Just taking the keyboard and writing what others tell you, even letter by letter, has value too. It slows down thinking and makes thinking clearer. I've noticed that me being slow gives my team mates room to jump in and correct each other, just because they want to avoid the mistake being made at my pace. Articulating your ideas more clearly is good. I can provide that service when there's nothing else I can provide.
    After a few rounds, they no longer need to tell a non-programmer how to introduce variables letter by letter; they get to work with concepts. Every non-programmer can learn that, and it does not turn us into programmers immediately. Personally, I think the ability to fluently read code is separate from the ability to fluently write code, and I can do the first as a non-programmer. I love talking around code, but writing it by myself makes me sleepy.
  5. There's stuff I contribute when I speak up
    I've noticed I say particular types of things. I pick up half-sentences that are about domain knowledge and correct them, preventing mistakes from turning into code. I ask about the time we're using on something and whether we should try something else. I have answers to direct product-owner-type questions, with actionable information connecting features, based on detailed knowledge of using the product in all the imaginable flows that come to my mind.
    Llewellyn Falco told a story about one of his mobs having a team member who contributed by asking "maybe it's called something else", without knowing what that could even be, triggering the others to realize that the place they were looking at was wrong. They could have been stuck there for quite a while, and they found what they needed with that remark, which mostly went unnoticed.
  6. Positive feedback is valuable
    I've noticed many of the things I say are about my amazement at how nicely things flow and how smart my team mates are. Hearing that someone appreciates what you do, with specific examples, isn't an everyday thing. Saying "I feel I did not contribute, you are great" has value in itself. But you might notice that your team returns the favor and lets you know they appreciated you being there - even if you slowed them down.
  7. Beginner mind
    Adding one more thing to my list. There's a bunch of questions only a true beginner can ask. They are typically "why" questions. When answering why you do things the way you do, the more experienced person sometimes finds out that the things they take for granted aren't givens.
Give yourself a chance and try mob programming. If you are worried, don't stay away; express your concern and invite the people you mob with to bear with you. In a mob, you're valuable when you're learning or contributing. Learning is a perfect reason to be there.

Thursday, October 8, 2015

From Co-Creation to Collaboration

Agile. We all have our ideas of what it might mean. When listening to explanations of Agile Testing, it's evident that testing is still testing, but Agile is a context that changes many of the factors in how the testing ends up being done.

With Agile, the testers have learned to work together with developers. I guess it works both ways. We hang out in the same meetings. We share work on the same value items to deliver. We pitch in from slightly different perspectives, with a common goal of getting it done, learning to find problems before implementing, while still leaving open the question of how well this works once it has been implemented. And "this" is something small, with a continuous flow.

In this setting, when a tester finds a problem, she tells a close colleague about it. Having worked together for years at a consistent pace, there's no drama to this. It's information exchange: I know something you didn't; now you know it. And knowing it means more work for both of us. It's only natural to ask if we could learn something from the information and avoid late discoveries next time. Optimistically we agree to try to work on it, yet we always fail on some details. It's an "I wish I had known when I made that decision" case.

Now that I've seen a bit of Mob Programming (whole team, one computer), I suspect that the best of our current tester-developer collaboration is just co-creation, not true collaboration.

In co-creation, we work well together but not really together. You know something, I know something, and we take turns contributing our things to the end result. Some of this we pair up on, but mostly when we pair, we pair with the likes of us, with a similar type of information at hand. We're not withholding any information; we're actively sharing, making everything available. We sit together in the same space, and asking is easy - if only we know to ask.

In collaboration, we don't need to ask, as we build on top of each other. Like when we mobbed in my team: looking at the code we were changing, someone mentioned in passing the words "one or many", with an instant conclusion that the answer was known: "one". I immediately jumped in to correct that it's "many", adding a bit of the history of the whys to the shared knowledge. And a third person cracked a joke: "That would have been an expensive one to find later." Mostly, me being in the room makes others say things like "ooh, we need to test that", with a glance in my direction looking for acceptance, acknowledgement and approval of the developers being on their perfect behavior of discipline. But there are also those moments when I suggest variation to explore the limits of the box we've set ourselves in: a change of browser, different data, a different order of doing things, new viewpoints to see the same thing differently. And I do my share of typing for my team, on my turn.

How could we move from sharing, helping and co-creating in our projects and professional communities into true collaboration, where we remove more of the time and task separation without removing the specialization that enables each of us to contribute things the others could not by themselves? That's something for me to think about.





Saturday, October 3, 2015

Test Early, to not Fail often

It was one of those usual yet unusual meetings. The product ownership team had summoned together a team of versatile experts. There was deep representation of the user base. There was sales and marketing. There was half of the development team, all "disciplines" represented with the project manager, the coding architect, the user interface specialist and the testing specialist. It was labeled "Concept" and little did we know what was coming.

It was a meeting to start something new, perhaps only a year later. It was an early chance for testing. And it taught me, yet again, how important it is to be testing early on.

As the meeting unfolded, I learned that the product we've been creating isn't optimal (easy to sell, but hard to find a market for), based on market research done on one target market. And that something else would be more optimal. Something we did not have. At all. Something that would land in the same rough business neighborhood but that was not what we had been creating for the past three years.

It wasn't that we were told we had spent three years on features they did not want. There was a strong internal customer too, and they had a significant competitive advantage with what we had created. It was great. But the other customers we dream of did not see it as something they needed. And we wanted to learn to create things others wanted too.

Clearly, something failed in how we had been testing our assumptions about the business opportunity. These kinds of testing failures are on a scale more expensive than any other mistakes we could make. The result of late testing can be that we will recreate everything, if we even get a second chance.

These kinds of experiences have made me a big fan of lean startup: the idea that you should test your business model's built-in assumptions with smart approaches, going to the real customers and not taking their word for it, when their money speaks volumes more of the truth. You can politely say something is interesting, but when you're asked to buy it, your actions will tell whether you were serious or not. In software, we should sell earlier - a major lesson I've learned over the years.

The same idea that I keep failing with at scale at work, and keep practicing to handle better with suggestions of how we could test our business assumptions, applies to my side projects. I'm pretty proud of myself for the action items agreed on yesterday, which will dispel some of our illusions. I'm extremely happy to be testing early in great collaboration with people who never tell me that I'm "Just a Tester", because they've seen that testers like me are helpful in finding good ideas for how we could know more of what's real and what's illusion.

With my side project, the European Testing Conference, I don't want to fail in the same way I do with my work. I want to do what I do for my work: early testing. What we need to test is whether this is something others, too, consider valuable. We in the organizer group believe it is. We are passionate in our belief that extending the learning we've already had within the organizer group, between testers by profession and developers by profession, will create a great conference. That there should be a space, a conference, in which we bring together the different worlds of developer testing and tester testing, build bridges and co-create cross-functionally.

We're inviting you to test with us. A Super Early Bird ticket, which I like to call the "validate-the-need" ticket, is now available for a ridiculously low price until Oct 20th. We're creating a great conference that will teach you practical stuff. Will you trust us on that and be there with us?