Thursday, June 30, 2016

The Excuse of a Context

Now that we've been talking again about what context-driven might mean, I'm starting to realize that as a phrase separate from testing, it might mean less and less. There's one thing in particular, though, that it means to me: for a context-driven tester, context is not an excuse for doing things worse than they could be done. But there's a lot of using context as an excuse going around that I'd personally like to distance myself from.

Two examples of using context as an excuse would be the following.

Giving in to obey a manager.

Some years back, I joined a new organization as the first ever tester they had had. The recruiting manager had studied testing enough to feel he needed a tester, and that the tester should be managed by having her create test cases and mark them passed or failed. I could have done what the manager asked. That would be using context as an excuse: "the context of this organization is that I have to write detailed test cases that everyone can use to share testing work with the developers this way". But that is a bad excuse, and wouldn't be context-driven.

Instead, I pointed out that the test cases he had forced the developers to write had 53 pages of text and 3 specific pieces of information that were not obvious enough for me to know on my week 2 at a new company. Wasted effort. I suggested that instead of me, the expert, writing test cases as he hoped, I could write detailed notes for two weeks on what I was thinking while testing, so that he would have a better understanding of what I do when I test, because test cases were clearly something he was using because "how do I know you're even working if you don't mark test cases done?". I figured out what parts of documentation would be helpful in a dual purpose: as part of the product, e.g. helping our sales team or serving as end user documentation, as well as helping me keep track of testing. And for things with details, I would make a pact with the developers on putting the stuff down in automation to save them from having to write useless and stupid test case documentation ever again.

Bending a methodology until it kind of fits

There's a lot of useful tools. Like Specification by Example. Surely no one can be agile without using it? If the organization isn't quite ready, let's use half of it. If the examples don't quite fit the format, let's not use any other format, because GIVEN-WHEN-THEN is the formula.

Or like TMap. Or RST. Or Scrum. Or XP. How much do we change these before they are not the thing anymore? And if they are the thing, perhaps we're being context-imperial, not exercising enough critical thinking about what actually takes us forward and what would be the best way to organize for the inclusion of deep testing.

This is again on the excuses side, but erring on wishful thinking. I'd like to say my favorite acronym is actually a best practice. And I avoid the actual consequence of my inherent desire by repeating the mantra "it depends". I can't really tell what it depends on, but I can say that the method that worked for me in my previous job and made me successful (or rich, as I'm selling it) must be the thing that will save us here as well.

No best practices means just that. Seek better. Experiment with something different. Sometimes it takes you backwards, sometimes forwards. But no change is like saying we have nothing to learn in software development.

Being unethical

I added this third story based on a triggering twitter discussion. There was one more story I wanted to share. Some years ago, I was working on a project with a large subcontracting firm. We were creating bespoke software for a customer, and I was facing one of the most difficult project managers I've had the pleasure of working with. I was the test manager, and she insisted that if I ever had anything to report on testing, I would only report to her. With a lot of power of persuasion, I learned she always rewrote the reports, leaving my name but changing the message before sending them to our client.

No matter what I did, I couldn't then find a way to change that. I would talk with the client representatives directly, trying to mediate the problem of realistic information not flowing, seeking support in fixing what I felt was a dead end.

I failed, and I walked away. As I walked away, I had a long discussion with the client on how to handle project managers like that when they really wanted realistic information so that we could do on-time corrective actions.

My ethics are not for sale. Contextual factors won't make me lie. My life has better content than being pushed to a place that I strongly feel shouldn't exist in the first place. Walking away is part of it. It's not exactly context-driven to me, but it is something a professional tends to need to stay sane and true to herself.

Strategies on learning programming

I'm a tester and I want to be a tester. But I also want to be a product entrepreneur. I want to be a business analyst. I want to be a programmer. And most of all, I want to have fun learning stuff that I feel drawn to. This post is inspired by Cem Kaner's words:
"Note to people who want to stay in testing: temporary role shift is worth the investment."
My thinking is that I'm not in the middle of a temporary role shift, but instead I do extensive job crafting. I take my work and craft it to be better for me and better for my company, and I look critically at what that might be. I'm still a tester, but it does not stop me from doing stuff.

I find that when people take up learning programming, they usually follow a common path. They read something, a book or online articles. They take a course in programming. They try stuff out and create new programs. I haven't been drawn to that. By accident, I've found other ways of learning that inspire me. Perhaps some of those would inspire you too?

Pairing and mobbing 

After I got over my discomfort of pairing on something that I did not already know how to do (hint: strong-style pairing & the idea that you can bring different things to pairing), this is now a way I want to learn in. Pairing still drains my energy, as it is very intensive learning. Mobbing gives me more ability to control the intensity, as I share the work with a group instead of having one person rely on me.

I tried reading a book on programming and fell asleep. But instead, I sat with my team and programmed in a mob. I sat with the nicest of my developers and fixed bugs with them. I organized Tech Excellence -themed meetups to program in other languages with developers I don't work with.

I learned a lot already. I keep learning a lot. And I'm addicted. Learning in bite-sized chunks. The joy of getting things done while learning. I recommend this!

Clean up, don't create 

I'm noticing that while the courses and books direct you to start from a greenfield project, I make choices that lead me to start with legacy code. I see a lot of samples on how people have done it. I find joy in increasing my ability to understand what the code does by cleaning it up. I might ask questions that lead to cleaning it up. Or I might just take it upon myself to go and rename methods, to do automatic refactorings and, in general, go through the existing "prose" to make it easier to read.
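Here's a minimal sketch of the kind of clean-up I mean; the code and the names are invented for illustration, not from any real project:

    // A made-up example: the behavior stays exactly the same,
    // only the names change so the "prose" reads better.
    class PriceCalculator {
        // Before an automatic rename refactoring:
        //   public double calc(double a, int b) { return a * b * 0.24; }

        // After renaming the method and its parameters:
        public double vatFor(double unitPrice, int quantity) {
            return unitPrice * quantity * 0.24; // 24 % VAT
        }
    }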

I feel this approach is closer to deep proofreading with understanding, than to creating. I feel I enjoy taking the role of someone who keeps the style consistent and readable.

Reading code

You don't have to write anything or change anything. You could just read what is there. I did this for a few decades without giving myself credit for it. Don't belittle yourself, reading is a skill just as writing is.


Push for seams for smart tests

Now that I program, I have something against being constrained to programming only tests. First of all, I still consider the interesting problems to be in the application's domain - that's why I love being a tester in the first place. Second, programming tests is programming, and why should one limit themselves to just that if other options exist?

I'd rather share all the work with my team. And sharing has some very nice benefits. When testing something is hard, I seem to be the one with the creativity to say that we could test these things this way, and those other things that other way, if only we could split the two. And when the focus is on solving the problem (getting the stuff tested with and without programmed test artifacts) we fix the problems instead of playing around them. Instead of generating tons of Selenium on top, we can find ways of pushing testing down to a smaller scope.
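To give an idea of the difference in scope, here's a minimal sketch; PostalCodeRule and its isValid method are invented, and the point is only how small and stable the check becomes at the right seam:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class PostalCodeRuleTest {
        private final PostalCodeRule rule = new PostalCodeRule("FI");

        @Test
        public void finnishPostalCodesAreFiveDigits() {
            // The same rule could be checked by driving the whole UI with Selenium,
            // but at this seam the check is small, fast and stable.
            assertTrue(rule.isValid("00100"));
            assertFalse(rule.isValid("0010"));
            assertFalse(rule.isValid("ABCDE"));
        }
    }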

I find that my learning of programming focuses on communicating this, rather than on programming test scripts wherever they happen to be possible right now.

Find your *thing*

This shouldn't come as a surprise, but programming or being a programmer isn't one single thing. I speak of this with 20 years of experience of observing individuals in detail and analyzing, as a tester, the stuff they produce.

Some people do a bit of everything, but programmers also start from some corner. You can choose your corner in any way that suits you (and helps your company forward).

There's at least:
  • Types of activities with different skills needed
    • Creating something new, prototyping - some programmers would only want to do this
    • Debugging & fixing - some programmers are really bad at this 
    • Configuring the platform - some programmers do wonders on fixing behaviors without writing a single line of code
    • Cleaning up & extending legacy code - some programmers have extraordinary skills in working with code they don't understand
    • Selecting technologies and libraries - some programmers have great overall understanding of purposes and availability of solutions
    • Architecting structures and enforcing patterns - some programmers can apply a great deal of the good learning that came before them
    • Creating maintainable and purposeful test artifacts - some programmers have a great idea what kind of programmed test artifact would be helpful in the long run
    • Portraying good habits - some programmers are disciplined in working the way they know is good and not taking shortcuts
    • Recognizing programmable solutions - some programmers notice things that should be automated to make the flow of work better
    • Understanding user and domain - some programmers are able to talk with end users and find the right small next bit of value to implement 
    • Working efficiently - using other programmers and non-programmers to get the work done-done, instead of going where one's own limits reach.
  • Languages and libraries
    • From single language specialist to a polyglot programmer, there's a lot of variety. And more to know comes in every day, so instead of knowing, the focus is on discovering and fast learning.
A big realization: the best of programmers start to resemble the best of testers a lot. We just start out in different corners and use most of our energy on different things. After 20 years of testing and a few in programming, I hate the term "junior developer - senior tester". With what I know from testing, I already contribute more than I would have given myself credit for earlier.

Monday, June 27, 2016

Context-driven testing

Twitter has an ongoing discussion about what is context-driven testing, who is a context-driven tester and who gets to make calls on what is and isn't. Since the industry doubles every five years, meaning half of us have less than five years of experience, it might be good to refresh into memory what context-driven testing is all about.

It all starts with a manifesto in 2002. Except that it was called Principles, not a manifesto. And where the Agile Manifesto was an outcome of a meeting, this one is an outcome of the book-writing process of Cem Kaner, James Bach and Bret Pettichord.

THE SEVEN PRINCIPLES OF CONTEXT-DRIVEN TESTING
  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
If you're ever trying to figure out if you are doing context-driven testing, these are the go-to definitions.

I loved the book. I loved the internal disagreement of advice in the book, and its attempt to describe when you might choose one way over the other. I became a context-driven tester and have been one ever since. Being a context-driven tester has helped me move between waterfall and agile projects, between financial systems and web development, and always make sense of what would be appropriate there over what was appropriate in my previous work engagement.

The core principle for me has turned out to be number 3. People, working together, are the most important part of any project's context. People come with skills and habits. Skills grow and habits change, but not overnight - it's a longer process. I can, however, do the best testing possible for the current time and current constraints (my choices) while I keep on working to change the world as we know it (the givens).

There is nothing anti-automation in context-driven testing. Automation extends our abilities in testing, and it is a part of most strategies for testing. Automation is done by people, maintained by people and serves the needs of people. Just like any other software product, automation in testing is a solution. If the problem isn't solved, the product doesn't work. And while there's great automation in testing out there, there are a lot of solutions that neither solve nor really help with the problem.

There's always the idea of opportunity cost: time used on something could be used on something else. And as a context-driven tester, my interpretation of the principles has been that it is my responsibility to drive a balanced view of short-term and long-term gains with regards to what (and by whom) we could automate.

Working with agile projects, I've learned that the only thing that stays is change. My team learns, and changes. Learning changes my context. If my context changes, I change with it. I can always reflect back to the principles - am I still providing testing that is "a challenging intellectual process"? 

The Context-Driven Testing blog includes a commentary that I take to heart:
Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
Specific situations over prescribed notions of testing. It does not stop me from experimenting with TDD (that my developers just have not gotten the hang of, yet!) or BDD (that just did not work out for us at the time we tried it) or Mob Programming (that helped us get closer to real teamwork) or even most of test automation (building the skill takes a while). While experimenting and trying to get better, we keep the release engine running. And release daily. Testing included - in a context-driven fashion, growing as the context enables something different.





Saturday, June 25, 2016

An exploration example: Getting ready to test

I've been running a session at a few conferences on Exploratory Testing an API, using the ApprovalTests framework as our test target. I needed a test target without a GUI, and loved the idea of testing a testing framework. The developer who created it is available, and it reportedly has unit tests. All of that is good. It gives a premise of having a target that is not as target-rich as most things with a GUI and no unit tests that I pick up from GitHub as test targets.

Today, I was planning on preparing a bit more into future sessions of ApprovalTest exploration. I had scheduled a pair testing session with a wonderful lady from UK, and I just wanted to get my environment set up.

Before, I had been exploring the C# version, and today I wanted to work on the Java version. My reasons were twofold: 1) I wanted to be able to work on my Mac as the camera on my Windows work machine won't work 2) I wanted a first feel of the consistency between the C# and Java versions.

I download the package from GitHub and import the project, and run the unit tests to make notes of my first observations (I would like to say bugs). This really should be available from the Eclipse Marketplace or whatever equivalent the other IDEs have.
  • The unit tests are red - not passing for 5 tests out of 341 total. 
The developer is unavailable, so I peek at the ones failing. There are mentions of UniqueForOS(), so I'm guessing it's an environment thing. But I make a note of the issue that bugs me:
  • The machine-specific tests are not easy to recognize as such and make the suite fail
With a new version of Eclipse recently installed, I proceed to install other stuff I feel I need from the Eclipse Marketplace: Emma for code coverage and PITest for mutation testing. The latter comes in as an idea from the release notes, which mention that the latest change from yesterday is PITest and TestNG support. A tester's hunch tells me that these have probably been tested against a customer case with the need for them, and the tool's own unit tests might not have been considered.

Running Emma for coverage I learn that the unit tests cover 44 % of lines of code (there's clearly more to do just to add coverage, but that wouldn't be my main concern as an exploratory tester). Running PITest I learn it fails because the suite is not green. 
  • Low coverage could be addressed
  • PITest fails

The developer becomes available and decides to fix stuff. As I'm not really testing this stuff for him but for my course preparation purposes, I catch myself being slightly annoyed with his eagerness to fix things; he has already ruined many great examples of bugs by actively reacting to them. I scold myself, remembering there's *always* more and that I don't look for the easy stuff when I teach: with less target-rich environments we get much deeper in our exploration. Testing exists to help improve, and I'm serving the purpose.

We pair on the fixes, first understanding the failures on my machine. It turns out I don't have any visual comparison tool, and he guides me into installing P4Diff. 
  • No user manual that would guide a new user to do this...
The test fails, and I see a diff. And just looking at the things compared, I can't spot a difference. If I were testing without a tool extending what I can notice, I would say these are the same.
The tool has ways of overlaying the images, and I still see no difference. So I use the feature to highlight the differences. 

The differences of rendering the Swing GUI could be caused by many things. But if the test is very sensitive to the environment it's run in, it should be visible from the structures. 

We continue to the other tests, with similar findings. We look at one of the five tests, and I point out that it looks very similar to another. And it is very similar.
  • Same unit tests duplicated over different locations in the project
I also ask about the UniqueForOS() call in the tests, only to see it being deleted in front of my eyes. It did not make sense to me, as there was a restructuring of the machine-specific tests going on, and I learn that this is a relic from a more distant past.
  • Unnecessary old notations in the code
The fixing is driven by two questions. First I ask "How to know that these are supposed to fail on my machine?" and as the structure emerges, I ask "How to run the others in the project so that these don't fail for me?". And we end up with a solution where an environment setting controls the running of those tests.
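The real fix lives inside ApprovalTests, but as a rough sketch of the general pattern (the environment variable name is invented), a machine-specific JUnit 4 test can opt out with an assumption instead of failing:

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class MachineSpecificRenderingTest {
        @Test
        public void rendersSwingComponentConsistently() {
            // Skipped (not failed) on machines that haven't opted in to the
            // environment-specific image comparisons.
            assumeTrue("true".equals(System.getenv("RUN_MACHINE_SPECIFIC_TESTS")));
            // ... render the Swing component and verify the image here ...
        }
    }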

While he's adding stuff to implement this, I notice him adding @Test annotations and ask about it. I had earlier noticed the tests in general did not have those, and I get the JUnit 3 vs. JUnit 4 answer. The annotations came with the later version, and they have not been needed until now, when he wants to ignore some tests as environment-specific.
  • Clean up the code to use the JUnit 4 notation consistently
I get the updated package on my machine to get the tests running green, with easy toggle to turn them back on. But PITest still fails, the solution isn't elegant enough to survive with the other players in the ecosystem, and I look forward to seeing if the fix is in ApprovalTests or PITest. 


The after exploration discussion is what puzzles me the most. The developer again labels the things I'm doing and pointing out as product owner stuff, when this is what exploratory testing has always been for me. And on the other hand, I've yet to experience a product owner that would actually go hands on enough to do stuff empirically. He points out that while he never realized you could ask *this* from your testers, it's likely that there's other developers who have no idea what their testers could help them with. Exploratory testers seem to understand (learn to understand) the vision, and understand (learn to understand) the user.

We also talk about my ideas of how I want to spend time on exploring the rich ecosystem, and how he's never really paid much attention to it outside end user feedback.

He concludes there seems to be three things working in my favor:
  1. Skill and discipline in organizing thoughts 
  2. Beginner mindset
  3. Looking at the code as a product; devs look at it as code; product owners look at the product as a product. 
I find that working together might also help me outline and explain what I do and provide, in a way that is perceived as less defensive. There's a lot of the idea that exploratory testers are non-technical, when the point is not a lack of coding skills, but the focus of what I do. I think differently while generating code.


Friday, June 24, 2016

Resurrecting signature series of webinars

You have relevant experience. Yes, I mean *you*. Every. Single. One. Of. You.

You might be considering speaking about that experience. Something you find interesting and relevant. Or something your friends around you tell you is interesting and relevant.

I'm volunteering to facilitate a webinar stage with Ministry of Testing to get these experiences out. 

I've facilitated communities long enough to believe that people will vote with their feet (and use of time) on where they want to show up, and we can give chances of things to emerge.

I've run conferences long enough to believe that hearing a presenter trumps any abstract.

I've spoken in conferences and meetups long enough to know that only practice, feedback and reflection takes us forward. And failing to provide value in a talk can be done safely!

My working name for this is the signature series. Deliver the talk you might want to be known for. When you want to change your signature, deliver another one. You get to choose if the one you delivered was a one-time delivery, or if we keep it as your current signature.

When you want to speak at a conference, you'll have something to refer to. It could be your first conference talk, or it could be your first keynote you're prepping for, or anything in between.

The stage is free. Let's start scheduling the sessions. Get in touch: maaret@iki.fi

There's too much good we haven't yet found. So many great voices and experiences. Time to do something about it.

(As with most of my offers, I want to see you succeed, so I'm happy to help you before your delivery. And I personally commit to giving you feedback, if you want it.)


How Mob Testing is organized?

A discussion started on Mob Testing today in a StarEAST attendees group that I happen to be a part of. The discussion quickly shifted from "nice idea" to "How is Mob Testing organized?"

"When i say organised i mean how do you start, testers supposed to come prepared with their scenarios ? Are there any hard and fast rules ? Logging bugs or documenting tests ? Or is it totally up to team to decide how they want to go about it ?"

I decided to share my response in my blog as well as the group, in case others might find this interesting.

How do you start on a normal day of your work? Do you come prepared with a scenario, or do you start without one? There's no difference here. The heuristic I try to use is "everyone should be learning or contributing" and that helps me choose what we'd do together. My team uses mob testing / programming for learning, not all production work. There are other teams (most famously the Hunter, Cucumber Pro and Alaskan Airlines teams) that use mobbing for most/all production work. I don't think there are hard and fast rules, but these are ones I try to adhere to:

  1.  Roles & rotation. Driver on keyboard, navigators decide what to do. To learn navigation, try designated navigator pattern. Rotate on timer, short to begin with.  
  2. Kindness, consideration and respect 
  3. Yes, and... - continue and build on the others' work 
  4. No thinking leading to independent decisions on direction at the keyboard - trust the group of navigators to set the direction.  
  5. If you're not learning or contributing, rethink what you're doing 
  6. If you disagree on which way to do things, do it both ways and then decide.  
  7. Retrospect regularly.  
I've logged bugs in a mob. It was painful, because it taught us so much about how much we disagree on what makes a good description of a problem. I've written test documentation in a mob, it has created common styles of what we're comfortable with.  It's really up to you to decide what you feel would be a good thing to try. I love mob testing in an exploratory testing manner, learning and finding bugs on new application. But then again, that is the format I teach in nowadays, also outside my usual day job.

Wednesday, June 22, 2016

From push to pull and the need of allies

I read a post on diversity in tech activism and its impact on burnout, and felt I need to stretch what I read a little further. Diversity in this article means what we traditionally mean by diversity - moving away from white heterosexual men. However, the article resonated with me in particular from the point of view of diverse voices in software development, and the never-ending story of how non-programming testers will soon no longer be needed.

I had a burnout on a very early stage of my career, and I've since been telling myself I recognize when it creeps in, and can act on it. Recently, I've started to question my ability. There's so many things I don't want to drop. I have my full-time testing job. I have my second job to teach testing to the rest of the world (one form of my activism). I speak at conferences. I have my tech diversity activism in mentoring new speakers and teaching non-programmers (kids & women) how to code. I have a family that needs my presence. I don't want to drop anything out of the equation. Naturally, the balancing is hard but often worth it.

A few days ago, I told a close friend I was feeling down. Not burned out, but low and drained. The last two trips were good, but I remembered the bad: the organizers not seeking me out to talk with me at a two-day conference; sitting alone reading a book in the middle of a conference party; the pairing partner choosing not to pair with me. I felt low enough that I again started thinking of options to exit the whole field of software development, feeling trapped by the good pay. She pointed out: "You've shifted quite a lot to the programming side lately, maybe that's what you actually don't enjoy doing?"

The question was great, because it helped me see that I do enjoy doing that, but I don't enjoy the extra diversity pressure the new choice brings with it. Many seem to dislike testers regardless of gender, but in programming my gender becomes an issue. I start to take offense at the girlfriend assumption. It bothers me that people start talking about raising kids over refactoring code when I join a discussion. I never knew what "microaggressions" were while I identified as "just" a tester. But I found the concept and the label to explain what about identifying with programming was making me uncomfortable.

The added experience is making me realize that there's a whole bunch of these microaggressions on testers' type of testing too. The "non-programming testers will no longer be needed" is sometimes a direct message to the face, but more often a remark with "but I mean the other testers, you're good". The testing community has carried me through rough years of this, offering comfort and skills to be really, really good - and to know it.  I talked about testing with testers before, in addition to the few developers I worked with. But the testers, people who understand, they were a majority. This is changing with agile.

It also made me think of why some of the prominent figures in testing might resort to stronger expressions than I would appreciate. They are fighting with the ego in programming on a different level than I am. They are hearing even more of the "non-programming testers will soon no longer be needed" and they, like me, know better. No one pulls for the info on what these people could contribute, but they push and get a lot of rejection. They collect the rejection of testers worldwide and empathize. There are very few allies for this work. The allies would be people who want to actively hear the value of testers and deep skills in testing. People who wouldn't push their current solution ("let's automate it all") to silence the message that feels hard to deliver.

This fight for space to exist as a non-programming tester is my core reason for tiredness. I feel that the years of activism many of us have put into that message are downplayed. Not just by the ego in programming, but nowadays, the ego in programming for purposes of testing.

There is a significant movement to push a group of people out of the industry, and a group that just so happens to have a lot of women in it. I've tried to not care about that so much, and not caring is what is eating up my energy even more.

So read the article on burnout from the perspective of helping testers. The article ends appropriately, quoting an anonymous source:
“Recognize that, while extremely beneficial, diversity-in-tech work exacts an emotional and mental toll on the well-being of the people who do it. We need to value people; people must always come first. For without them, there would be no work at all.”
There is a diversity of specialties. And people like me would really need help from developers (programmers) on understanding and explaining widely what the value is.
When did you, as a developer, actively make room for your tester to share their trade? When did you, as a tester, have a developer who actively wanted to bring out the best of you without rewriting you into a programmer?


And please don't tell me anymore that I'm special and get heard because of that. I get heard because I fight for getting heard - my fight just tries hard to be of a considerate and persistent kind. Take action in listening, even when people are not pushing to talk and share. The "commodity testers" might not want to be that way, and may have relevant stuff to contribute. I believe they're products of their environments, and have even more load to peel off before you get to the core of their contribution. But they're thinking, smart individuals underneath it all.



Word-policing and responding critically

I feel my call for kindness and consideration of what debates/discussions I want to immerse myself in has led to an idea that I don't want to be criticized. I value good discussion, but one that takes us forward. Forward for me is increased understanding, instead of defense of an idea. When understanding grows, some ideas turn out to be bad. Some turn out to be better than without the added understanding.

I've been wondering a lot about an ongoing discussion about wordplay vs. relevance of semantics. I feel that when I tweet, I get corrected a lot. Most often by the lovely, helpful Michael Bolton. He occasionally reminds me that we can't prove things, we can't assure quality and that we can't automate testing. I'm trying to learn to say thank you. Because even with risk of shallow agreement, I believe we agree on the relevant bits of these.

I picked a sentence from comments of a specific blog post by James Bach.
The competence issue is when you stand up, put your ideas out to your peers, and yet expect them not to respond critically.
I've loved some debates and hated others. Both are about "responding critically". Where's the difference, then?

There's questioning that aims at adding understanding. In these discussions, people talk more to understand and to map my experiences and ideas into their context. My skills and knowledge at the point of time are part of that context, and often helpful questions are about my awareness of opposing views (e.g. "Yes, thank you, I have heard about continuous delivery being a bad idea without automation. We still do that very successfully for two years now").

Then there's questioning that seeks a winner. For a debate or an argument, there's a winner. Which implies there's also a loser. These debates often end up in rhetorics that lead to winning, even through taking the opponent out of balance.

When people say twitter is a bad medium, I believe it is only bad if we make it so. If we approach discussions to seek the one truth.

There was a particular discussion today that I want to use as an example. I tweeted and got a response that I consider typical:
A friend was quick to jump in to point out that I was aware of the difference and that there was a message other than the choice of words in my tweet. I got corrected on words. Again. Good thing I'm beyond my earlier fear of saying things, because this could also be a great (unintentional) silencing technique.

A little later, I was still tweeting about my unfinished thoughts:
A friend of mine coins this beautifully in a private discussion. A bit of flexibility in vocabulary can open so many doors! 

So, I keep on being critical without focusing on vocabulary. There's more words in the world, and more words may eventually lead to a better understanding. Defining the words and policing them makes people feel bullied, even if the intention is to help and respond critically.

Final note: 
(the word "congruence" - I just can't get my head around this as a non-native English speaker, it just does not translate well - so I use words that make sense to me)

I will talk about test automation and automating testing. 
I will not work with the testing/checking distinction. 
I will trust my developers and managers to understand there's no complete testing, so while I can try using safety language, I forgive myself and call myself a skilled tester even if my language is imprecise on what exactly I can assure or confirm or test.  

Learning to learn

I had a SpeakEasy coaching session with my latest aspiring speaker. He is inspiring me in the process.

He used to be a teacher before he started his work on software and testing. So it feels kind of natural he wants to talk about learning.

Our little chat on his talk idea led me to think more about learning, and in particular, learning to learn. Surely there are techniques that bring structure to learning and make it easier. But there was a particular aspect he did not really directly mention that I think is really relevant: the speed of learning.

We need to build the skill to learn in small batches.

He mentioned a few examples of things he has needed to learn as he became a tester. They're big things like performance testing and BDD - in the sense of including also better communication with a difficult customer. But all this learning really is built up a day at a time.

This made me think of an old job of mine, where I was making a case for how stupid it felt that they were forcing (encouraging, some might say - with an option of layoffs) some non-programmer testers to move into automation. I remember pointing out numerous times that we actually turned previously productive people (they were good exploratory testers, who found relevant bugs we were fixing) into students working in a box that did not enable the same results. I still feel what we did in that organization was wasted time. But I missed a reframing of the problem that could have made a difference, one that I only thought of today.

If the learning and retraining could have been done in small batches instead of this all-consuming learning effort with poor results in many ways, things could have been completely different.

A main skill I feel I've been developing as a tester by profession is ability to work and be productive without knowing all. I don't need to do full research to start testing - I learn in layers. Instead of using all allocated time before deadline into research and learning, I research a little, try it out and research some more.

Whenever I hear that it takes months for a new employee to provide any value to the new company, I wonder why my experience is so different. I usually find bugs with the new company's software within the first week - or even the first day. And my learning of the product never ends - there's so many layers I did not know of when I started at Granlund 4 years ago, and I just keep learning more about our product.

Testing is learning about the product and sharing the information effectively. And there's a lot of other skills/knowledge that is useful in that, including performance testing and BDD. 

Fascinated with ApprovalTests

Last Friday, I watched a group of software craftsmen agree on 3 * 20 minutes of paired demonstration on the refactoring kata "Gilded Rose", and then change their mind after the first 20 minutes.

The first 20 minutes was a pretty awesome demonstration of Llewellyn Falco and Aki Salmi pairing in strong-style using ApprovalTests in Java. The first 15 minutes went into a cycle of adding tests using LegacyApprovals (which I knew from C# as CombinationApprovals), adding criteria to one line of test code based on what the Emma code coverage tool was hinting might be missing. For every expected result, they just documented with ApprovalTests whatever the current result was, rather than trying in any way to understand or describe it themselves.

The last 5 minutes they cleaned up some code, covered with 100 % unit test coverage.

The group used the 5 minutes after their time-box on extending to mutation testing, adding some more tests as the PITest tool suggested some of the existing tests were weak.

Total: 1350 tests with one line of code, and expected results defined as "if it works in production now, let's just keep it that way".
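For anyone who hasn't seen the style, here's a minimal sketch of what that one line of test code can look like, assuming the usual Gilded Rose kata classes (GildedRose, Item) and the CombinationApprovals class from ApprovalTests for Java; the exact package and overloads vary by version, and my parameter lists here are far smaller than the ones the pair used:

    import org.approvaltests.combinations.CombinationApprovals;
    import org.junit.Test;

    public class GildedRoseApprovalTest {
        @Test
        public void pinCurrentBehavior() {
            String[] names = {"foo", "Aged Brie", "Sulfuras, Hand of Ragnaros",
                    "Backstage passes to a TAFKAL80ETC concert"};
            Integer[] sellIns = {-1, 0, 1, 10, 11};
            Integer[] qualities = {0, 1, 49, 50};

            // One call generates names x sellIns x qualities cases; whatever the
            // legacy code does today becomes the approved expected result.
            CombinationApprovals.verifyAllCombinations(
                    this::updateQuality, names, sellIns, qualities);
        }

        private String updateQuality(String name, int sellIn, int quality) {
            Item item = new Item(name, sellIn, quality);
            new GildedRose(new Item[] { item }).updateQuality();
            return item.name + ", " + item.sellIn + ", " + item.quality;
        }
    }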

On Saturday, I took part in a code retreat, and used ApprovalTests on some of my sessions. This left me thinking why I'm particularly fascinated with ApprovalTests.
  1. The tests in the file format with explanatory padding make sense in the world I think in. 
  2. The "recognition" part is what I feel I have special skills on anyway as an exploratory tester
  3. The idea of filtering and processing depending on what technology you're testing to keep focus on testing makes sense to me
  4. There's practical solutions to things that I've thought sometimes as too hard to test, like running combinations quickly or keeping tests that work against an external service fast (iExecutableQueries stuff, where you do slow stuff only on failure).
  5. The idea of doing special things on failure for granularity makes sense, and changing reporters when investigating reminds me again of exploratory testing. 
  6. I like how this feels so much like exploratory testing on unit level. 
Knowing the developer who created this stuff isn't actually a negative either. But for me, that would often be more of a reason to actively find reasons not to like it. I don't endorse a friend's stuff blindly.

Better do some more exploratory testing on the tool. Next up is understanding how well the claims of what the different Approvers do are actually consistent with the implementation. And then I was thinking of finding ways of breaking it in its environment of use.

If you want to pair on this, ping me. Just some educational fun on someone's open source project. 


For testers still living in the waterfall

There was a piece of feedback from a testing conference: "There's all these great ideas about testing in agile. We still live in the world of waterfall. Is there nothing other for us in testing than to move to agile?"

It's been a while since I've used my time to actively think about waterfall projects, other than for the fact that I feel so blessed not to work on those anymore. I wanted to write this post thinking of the colleagues fortunate in different ways than I am, with the ideas of what is there for you on the current themes of testing.

1. Testing is testing, agile is a context

This phrase was going around some years back, and it's never been more true. Testing, as in looking at a product with most recent changes available to use and find information on, is very much the same with agile.

Agile as a context suggests some ideas that work just as well in waterfall environment:

  • If we shared the ideas of what and how we're testing, we'd get more done and better - think testing over testers 
  • If someone is a specialist in testing, they will have a chance of bringing in diversity of viewpoints (testers and programmers focus their thinking differently!) and find problems the non-specialists struggle with. It's not magic, it's time and focus combined with continuously improving skills. 
  • A professional, self-organized team that works on learning does better in delivering. 
In waterfall, it's still a day after the other. Every day is a chance of learning. The feedback cycles are different (slower, requiring different effort) but none of that stops you. 

2. A lot of it starts with understanding value

In the world of waterfall, we used to talk a lot about requirements. But they are really our interpretations of what would be of value. 

Why would anyone want to use your product? What are the core functionalities? What are the core risks, with regards to quality? What do you need to know and is it you who will be delivering that information? 

You can work out the core risks just in time. For some of them, the right time is to talk about your concerns before implementation. For others, the right time is to talk about them sneakily while implementation is ongoing, without overloading the stressed developers. And for others, the right time is to find the problems before production, in whatever testing phases you have scheduled.

3. Buffer and protect the timeframe

With waterfall, people will still end up using the testing phases (that are really fixing phases!) as buffer. You need to make room for enough rounds of testing to happen, for this to not ruin all your plans and commitments on testing. And whatever time you have available, the best you can do is to find bad bugs early. Bad bugs will buy you more time. Bad bugs will need the time to get fixed. 

But be careful. In protecting the timeframe, you might have to make choices of how you do that best. You might use your time on talking with people, when actually a better strategy could be to just test some more. Empirical evidence trumps the speculation. 

4. Types of testing and tools

The new and cool stuff applies to you just as much as your more agile counterparts. Need more hands on (shallow) testing? Consider crowdsourcing. Need a tool to help out automating with a specific technology? Follow what new comes in and try not to drown in the overflow of options. 

Do you still write test cases? Have you already considered moving towards session-based test management or some lighter-weight exploratory testing management frame? Or making your test cases, if you must have them, at least some of them, into BDD/SBE-style automation of executable specifications? 


You know, it's not that different after all. Other than the waiting. The powerlessness of introducing change without added politics. The inherent blaming of missing things because you're asked to do things at times you knew the least. 

In waterfall projects, your bug reports are against a fixed deadline. Choose wisely and work to help in any way you can to make sure there aren't too many options for you to choose from. The dynamics are different if you release once a year and if you release once a month. 

Tuesday, June 21, 2016

When two testers meet

Last Saturday, I participated in a code retreat in Vienna, Austria. I wanted to share my favorite moment of the day: meeting another tester.

In a code retreat, we practice programming by implementing Conway's Game of Life in pairs, with Test-Driven Development and various types of constraints (like data structures, frequency of checking in required, or style of pairing). A day typically fits 5 sessions and thus 5 pairs.

I had just finished my first session of the day and a morning break was called, when one of the three women amongst the 25 participants approached me. She had learned that we were both testers, and we set up to pair together on the problem in the next session.

I had great time programming with her. It was one of these experiences some folks tend to refer to as "Reese's pairing" where both parties have ingredients that make the result great but neither has the complete set of things.
** In case you don't know, Reese's are peanut butter + chocolate, an American candy that was advertised as a lucky accident of bringing two great ingredients together to make something even better. I wouldn't know, as I'm neither American nor able to eat anything with chocolate. 

There were pieces I could bring into the puzzle from past code retreat experiences. With her, I pushed for trying out ApprovalTests and I really liked the way our domain model ended up in the classes. I learned a lot I could use in the later sessions of the day too.

Later in the afternoon, we were sitting with a small group. We got back to the discussion about both of us being testers. My pair was a tester specializing in Java test automation. I was a tester specializing in exploratory testing. What we do for our work has little in common, yet we're both testers to others.  We also work in very different contexts as per ratio of testers to developers and resulting assumption of who would contribute what in the development work.

When two testers meet, it's good to remember that we're not all the same. And instead of us arguing on the essence of which one of us is a true tester, we can just add labels to explain our difference.


Thursday, June 16, 2016

Insensible test automation comparisons

In an open space conference, one session was on playing with combinatorial testing. I missed the session, but heard about it in the hallways. The message was that they found a problem (failure mode) somewhere around thousands of tests, and kept going up with the numbers to have some impressive amount of different tests created, just in this hour-long session where random people got together on a problem.

Finding the failure mode was cool in my books. The number of tests run, as impressive as it may be, wasn't.

I started thinking back to this, seeing a tweet from #BTDConf stating on a slide that "It takes at least three times the effort to automate a manual test."
Just as the thousands of tests was irrelevant information, this isn't much more helpful. I've seen again and again that when there is a nice seam (api with special testability in mind), it can be faster to automate a test than run a similar idea manually. Then again, what I run manually is never exactly the same. Creating the seam slows things down, but adding more similar tests evens out the investment often quite radically.

It just makes little sense to me to compare stuff on a very general level.

Could people just share some very specific examples, instead of the attempt to generalize (and scare with how hard it is)?

And another thing from my experiences: the thing that is expensive when I do it turns out to be very cheap when pairing with a great developer with specific experience.

Yes, learning is expensive. So let's get cracking on it. All I need is everyone to be just a little better every day. On something. Choose something you like, something that challenges you. And keep trying when it's hard. You are not alone. 

The Squeezed Testing Problem

This post is inspired by two nicely timed incidents:

  • a discussion with an aspiring speaker, who was enthusiastic about BDD as a way of testers being able to work continuously throughout the sprint on testing
  • a tweet mentioning a common theme of tester complaints: “We’re agile. All the testing is squeezed into the end of the sprint.”

Let’s talk about approaches to tackling the squeezed testing problem. I find this important because, as popular as it seems to be, the BDD stuff is not the only way forward. I’m sure the ways forward I’m aware of are not the only ways forward.

So, how could you organize the testing so that it does not end up all being squeezed into the end of the sprint?

Approach 1: no sprints

I find that Scrum and sprints are what people often start from, and after 10 years of experience with adopting agile, they make less sense to me now. So instead of starting with Scrum, what if you went for per-feature “sprints”, except those tend to be called Kanban and Continuous Delivery.
Don’t fall into the trap of thinking you cannot do continuous delivery without test automation. You can!

Try thinking it this way. You have feature X you know is important. Even most important right now. You discuss the feature with your team, and chip away until you find the smallest possible value item you can deliver all the way to production. You then, as a team, do all the necessary work you need to get it delivered. And all the necessary work includes manual programming and manual testing – nothing special there.

The strategies you employ to test might differ a bit. You might pair up with a developer to be there testing as it is being built. You may have a working agreement that you test on a development machine before anything gets into source control. Or you may have a branching model where each fully integrated feature can be automatically built as a system from its own branch that you can test with the developer fixing things as you find them.

Who defined that a month of programming would need to be tested in a day? The whole thing could be a two-month thing instead. When you deliver a functionality at a time, it’s not a big trick to consider the thing done only after it has also been tested and fixed.

Just work on having small functional slices. You really don’t want 2-month feature projects if you can have half-a-day feature deliveries.

Approach 2: forget about the idea of testing before production

I find that a lot of testers are stuck on the idea that system and acceptance testing (that’s what they do) happen before you go live with your software. But you could also look at testing as something you do almost exclusively against your production.

If you plan on using this approach, your organization might be better off having some safeguarding mechanisms on how you roll back or do staged releases (not everyone gets hit with problems at once in large user base).
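As one sketch of what such a safeguard can look like (everything here is invented for illustration), a change can be exposed to only a percentage of users, so a bad release never hits everyone at once:

    public class StagedRollout {
        private final int rolloutPercentage; // e.g. 5, then 25, then 100

        public StagedRollout(int rolloutPercentage) {
            this.rolloutPercentage = rolloutPercentage;
        }

        public boolean isEnabledFor(String userId) {
            // A stable hash keeps the same user in the same bucket on every request,
            // so a problem stays contained to the same small group until rolled back.
            int bucket = Math.abs(userId.hashCode() % 100);
            return bucket < rolloutPercentage;
        }
    }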

So when your team is done with the sprint, the system goes live. You do the squeezed testing for the half-a-day you can squeeze in (or can’t), but it’s really not your responsibility to attend to whether it works or not. Someone decided that it will get addressed during production use.

The changes are perhaps not that big, after all, they needed to fit into the squeezed development schedule of the sprint. Your testing really starts when the end users start using – just with the idea that you will tell clearly when you run into problems and pair up with a dev to get the fixes done as soon as possible. The end users might not tell you they have problems, and when they do, getting them to express what they did and what the problem is takes a lot of work.

There’s one big problem with this approach. If you need to find a lot of problems, the developers need to fix a lot of problems and they don’t make progress with the things they aspired to in the upcoming sprint. But if it works well, you will find missing value items, ideas to improve the user flow and “missing backlog items” that you can add to your upcoming sprints to improve your product based on the feedback. The stuff you find can wait a sprint. It’s like you’re an empirical extension to a product owner.

Approach 3: shift left

Shift left is the popular idea in agile that you would finally get a practical way of doing the thing waterfall always failed with. Your chances are up because of short increments – helping build the right product one small right thing at a time seems more feasible than hitting the mark on something big in waterfall style.

BDD (Behavior-driven development), SBE (Specification by Example), and ATDD (Acceptance Test Driven Development) all roughly mean the same idea: create examples of behavior in test artifact format, implement the automation while implementing the features and you’ll hit the mark better and need to do less of exploratory testing when what you were building was more clearly designed.

This approach would ask the tester to work on creating the test artifacts (with a product owner and the rest of the team) first as text, and then contribute to automating. When the test artifacts “pass” against the implementation, the assumption is that the exploring needed is small and fits the squeezed timeframe. After all, you were exploring already while designing the feature.
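As a rough sketch of the kind of test artifact this produces, approximated here in plain JUnit rather than in any particular BDD tool, with an invented domain (Customer, Order and their methods are made up):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class ReturningCustomerDiscountTest {
        @Test
        public void returningCustomerGetsTenPercentOff() {
            // Given a returning customer with one previous purchase
            Customer customer = new Customer("Alice", 1);
            // When she buys a 100 euro item
            Order order = new Order(customer, 100.00);
            // Then the price she pays is 90 euros
            assertEquals(90.00, order.totalToPay(), 0.001);
        }
    }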

I find that the product is my external imagination, and my best attempts to tell all the stuff in advance, even for small features, are limited. But limited is better than not trying to clarify what we’re doing.

Other approaches?

There are a lot of options of tweaking each of these, I’m sure. My main concern is this. If so many testers are struggling with testing getting squeezed to the end, why are so many testers feeling so powerless to do anything about it?

We’re in the business of providing information through empirical work. How about using the empirical evidence to change the status quo to something a little better, one experiment at a time? A good tester has a lot of power. Find the information that matters to the people who matter.

Wednesday, June 15, 2016

Battleship TDD and ApprovalTests

Last Friday after #DevoxxUK (a java conference), we got together with a small group to try out a TDD mobbing kata. With a bit of discussion on what problems we'd find fun to work with, we ended up with Battleships.

So we draw our first scenario with a Destroyer on a 4x5 board after the little tweak of not having a board that is symmetric.

We write our test in English and translate each line to code. The last line is to check the board. And as I've previously been introduced to ApprovalTests (as opposed to Asserts), there's a nice flow from the visual representation of our game into an ascii art form of the game.

So instead of sampling one assert at a time to represent core parts of the board we drew, we drew the whole thing at once as ascii art, saved it into an .approved text file and then let the test guide our implementation.

I've been to different TDD sessions over the years, and I've really grown to like the Approvals-based approach as more visual and more of a thing that represents the world as I know it. I had not really thought about it much, but it's an ascii art representation of the thing we're thinking of that we save to a file.

It's nice to be able to say in code

Approvals.Verify(); 

and move the rest of the description of the end result into the text file, instead of writing a pile of asserts.
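Here's a minimal sketch of how that reads in a test; Board, Ship, Direction and toAsciiArt are invented stand-ins for our game code, while Approvals.verify is the real ApprovalTests call that compares the string against the .approved file:

    import org.approvaltests.Approvals;
    import org.junit.Test;

    public class BattleshipBoardTest {
        @Test
        public void destroyerOnFourByFiveBoard() {
            Board board = new Board(4, 5);
            board.place(Ship.DESTROYER, 1, 1, Direction.HORIZONTAL);

            // The whole board is rendered as ascii art and verified at once against
            // BattleshipBoardTest.destroyerOnFourByFiveBoard.approved.txt, instead
            // of sampling cells one assert at a time.
            Approvals.verify(board.toAsciiArt());
        }
    }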

Similarly, I was thinking back to one of my first unit tests with Approvals, where we generated combinations of credentials, dumped them into a file and added nice and clear explanatory text around it. I've appreciated the clarity of that piece of documentation since.
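A sketch in the same spirit, with a hypothetical validation rule standing in for the real credential checks:

import org.approvaltests.Approvals;
import org.junit.jupiter.api.Test;

public class CredentialCombinationsTest {

    // Hypothetical rule: usernames need 3+ characters, passwords 8+.
    static String classify(String username, String password) {
        boolean ok = username.length() >= 3 && password.length() >= 8;
        return ok ? "accepted" : "rejected";
    }

    @Test
    public void credentialCombinations() {
        String[] usernames = {"", "ab", "maaret"};
        String[] passwords = {"", "short", "longenough1"};
        StringBuilder table = new StringBuilder("username / password => result\n");
        for (String u : usernames) {
            for (String p : passwords) {
                table.append(String.format("'%s' / '%s' => %s%n", u, p, classify(u, p)));
            }
        }
        // The whole table lands in one approved file, readable as documentation.
        Approvals.verify(table.toString());
    }
}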

These just seem to map closer to something I feel at home with. So, I keep thinking about it. Maybe it's just me. But if it works for me, it might be of interest to someone else too. 

Database checks help us test

Reading around what people write on testing and test automation, I get the feeling that there are these two big camps of information. There's a lot of stuff on unit test automation, and there's a lot of stuff about system test automation, in particular things like Selenium. It could just be what catches my eye, but I wanted to dedicate a small piece of writing to my one current favorite of test code we run: database checks.


Four years ago, as we were starting our efforts with automation, we focused heavily on unit tests, only to fail with them in various ways. We could use a lot of time creating them, but with our lack of skill we ended up with tests that locked implementation, not behavior, and a maintenance nightmare. In addition, the tests never failed for anything useful. So they vanished.

Two years ago, we then focused on Selenium. The tests found relevant things, and covered ground that developers found somewhat boring to cover. But as the amount of these tests grew, so did the troubles with brittleness. We then identified the subset of tests we would run so that we'd be able to rely on their results.

Less than a year ago, we moved our unit tests from Assert-based testing more towards Approval testing. It ended up helping us check complex objects without a lot of maintenance work, and encouraged the team to look for better interfaces to test through.

I don't even remember exactly when the idea of database checks came up. There was just this recurring theme in the stuff I'd find that revealed some of our functionality broke the data. It was painful to test, because you would only see the brokenness through other functionalities or over a longer time. So we started writing down the rules of what the data should be, and made them automatic so that they would alert when I used features that messed up the data, or similarly, when users used features in production that messed up the data.
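A minimal sketch of what such a rule can look like; the connection string, tables and rules here are hypothetical stand-ins (and assume a PostgreSQL JDBC driver on the classpath), with the alerting reduced to a print.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabaseChecks {

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "checker", "secret");
             Statement stmt = db.createStatement()) {

            // Rule: every order line must point to an existing order.
            report("orphaned order lines", count(stmt,
                "SELECT COUNT(*) FROM order_line ol "
                + "LEFT JOIN orders o ON ol.order_id = o.id WHERE o.id IS NULL"));

            // Rule: quantities must never go negative.
            report("negative quantities", count(stmt,
                "SELECT COUNT(*) FROM order_line WHERE quantity < 0"));
        }
    }

    static int count(Statement stmt, String sql) throws Exception {
        try (ResultSet rs = stmt.executeQuery(sql)) {
            rs.next();
            return rs.getInt(1);
        }
    }

    static void report(String rule, int violations) {
        // In a real setup a violation would alert the team; printing stands in for that here.
        System.out.println(rule + ": " + (violations == 0 ? "OK" : violations + " violations"));
    }
}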

The tests weren't particularly granular. We could tell who used the application in a way that triggered the database checks, but not what they were doing. We could tell the problem had happened in the last 24 hours, but in production there was always a delay.

Running the checks in the test environment against what the team was doing was more granular. But even with the coarser production checks, the detective work needed to figure out what needed addressing wasn't impossible - since we knew, thanks to the checks, that the work existed.

Out of all the things we've done for automation, these have helped us the most. The little extensions to what a person has the energy to continuously observe on a database level find relevant problems.


There is just one big challenge: discipline. Keeping up with the idea of treating the detective work as a priority, to address causes over symptoms. We're still working on volunteering for this work unprompted and sharing it with the team.

Loving sustainable pace

We're getting closer to two years at work since moving from Scrum-like monthly releases to releasing daily. Back when I drove the change to #NoEstimates and continuous delivery, I knew in theory that this would be different. Now I know the practice and wouldn't really want to live any other way.

I remember to appreciate the steady, almost uneventful pace we're delivering at because I met a friend yesterday who reminded me that not everyone is equally lucky.

My friend is a relatively new developer, with about a year of project experience behind her now. She mentioned how her first project was a waterfall project, and how she was struggling to keep up with the pace. Not because she was new, but because of big promises early on with little room to adjust. She mentioned how she felt that going for lunch seemed something you couldn't do, and that it was hard to leave the office on time, and impossible to make it to user groups like the one we met in last night.

She also mentioned that things had recently changed for her, as she joined a project identifying as agile. Small iterations and all of a sudden, the panicky skipping lunches turned into a steady pace of delivery.

I appreciate her sharing her success with agile and the feeling of being in control of her time again. It reminded me how much I value the sustainable pace and the lack of fires around me.

Small batch size in delivery is great and helps create a pace that we can sustain.

Tuesday, June 14, 2016

Letting people go

I tweeted a little thought:
I'm not really into talking about the recent experiences I've had on this topic, but it's great to have old experiences to refer to. I want to elaborate a bit on what I mean by bad people.

First of all, I believe all people are products of their environment. There are no inherently bad people. There are no people who wouldn't be able to grow should their environment be the right one. However, I believe sometimes the damage done to people by their past environments is hard to unlearn.

What makes a person bad, in my experience, is that there starts to be a common perception in the team that someone is not really contributing, or that the value they contribute is negative. I'd be very careful about assessing so-called badness over a short timeframe and without significant effort in first helping anyone perceived as bad get a fair reassessment and then helping them grow.

With these warnings, I want to share an old experience.

Over ten years ago, I was a test manager (the good old days, when I still thought being a manager was more power than being a great hands-on tester... Empirical evidence trumps speculation. Every. Single. Time.). I had a tester in my team, and the other testers were mentioning in passing that he wasn't really contributing much. Of anything.

I worked more closely with him. I talked about how he defined his job. He was enthusiastic about automation, and considered that one of his main tasks was to run a test automation set on every daily build. He had no skills to maintain the automation set, but he loved running it. He could skip the one that was failing, or remove it. And he could mark down 272 tests run every day. There was _never_ a single bug found by that automation, but he felt it covered a lot of ground. Manually, he could run maybe just 10 tests a day.

I suggested he would start skipping some of the automation runs and just use every second day on adding 10 tests that were completely new, outside the automation set. I showed him how his area of responsibility had more support complaints from production than other areas, and explained that I would like to work with him to figure out ways of testing that would find the problems that clearly were there as per the support feedback, and to stop paying attention to the automation that wasn't really taking him forward with his goal of helping the team deliver better quality.

We worked on various ideas over a period of six months, and none of it stuck. We tried a lot of things, but it always came down to the conclusion that I had no clue what good testing looks like and, eventually, that I talk so much only to cover the fact that I know nothing.

At some point, I started making a journal of every encounter we had. I started giving all assignments both in writing and spoken form. But essentially, I started making a case of getting the person fired.

Out of this whole incident, I remember best one colleague who approached me. In not so many words, he came to plead with me to let this troubled tester keep his job. The words stuck with me forever: "Just let him stay in his cubicle, doing nothing. This is a big company. They wouldn't know he does nothing if you did not tell them."

Back then, I worked it to the conclusion of having the bad person leave. I came close to having some of the good people leave instead, as not everyone thinks it's ok to keep someone around whose share of the work others have to carry.

I learned back then that it is a lot of work to first make sure you have helped the person, and only then to collect a fair case for letting them go.

Nowadays, I believe the tester back then was very harmless. He was just not providing any value. He was also not really in the way of others, other than leaving more workload for them to carry. Nowadays, I work with developers, and I know bad developers are not harmless. A bad developer can easily not only fail to create value but also keep two other people busy fixing the mess they leave behind.

I still believe these people are products of their environments: years of accepting stale technology and little learning, years of siloed work, years of not considering the craft but hacking together whatever might appear to work without considering maintenance or quality.

As a tester, I tend to shine light on places that were secluded and hidden before. And when you see the mess in the corner, you need to start addressing it. Agile has many other mechanisms that do similar things. And when the cat is on the table, it needs to be addressed.




Friday, June 10, 2016

Everyone pitching in

There was an interesting note I picked up at a developer conference retrospective workshop on the topic of team dynamics. With a room full of developers, a bit of frustration towards someone identified as a tester was expressed.

The frustration was described as pigeonholing. You know, learning only the parts of agile that reinforce your message. Like as a tester, you would learn that "everyone tests", but you'd fail to learn that it also means you need to do other things than testing. Everyone pitching in means *everyone*, not just the programmers.

The whole mention of this sounded like a very familiar expression of not understanding what the other party sees as the whole scope of tasks that need to be done, combined with a bit of bitterness about the unfairness of how we end up being treated.

I believe the core of the problem is in not understanding how to break down programming and testing into activities, perspectives and needed skills within them.

I find that it's fair to ask of everyone in the team that they grow their skills. I find that the umbrella term of testing (or programming, take your pick) includes a lot of width and depth, and gives a team endless possibilities of mixing up individuals' focuses on what stretches them best next.

I don't find it fair, when there is e.g. 1 testing specialist amongst 10 programmers, to say that since programmers need to test, the testing specialist must do programming. In particular, if "doing programming" means doing it alone in a corner, trying to pick up skills.

It seems to me that a lot of times these discussions are hard because we don't share an understanding of what the umbrella terms actually include. The tester saying "everyone tests" could easily mean that there's a lot of that work and people need to pitch in, and that she doesn't feel free to leave that post to go learn stuff outside her usual work.

I find that the container "testing a change" is easily a 10 minute task for a programmer and a 100 minute task (or more) for a tester. The results also vary.

Making the actual work more visible is one of the main reasons I love mob testing. All of a sudden the 10 minute task, with an expert in the room, grows into its full length, often extending even the original idea.

My main concern with the remark is that the honest discussion about how people feel might not happen at the workplace - instead we go to our own peers to complain, and the work at the office continues to annoy us just as much as before.

Feelings need to be talked about. I'm sure pigeonholing is not the only reason for testers to stick to their turf. It's often the feeling that someone needs to take real responsibility for a turf that no one else really understands.


Tuesday, June 7, 2016

SpeakEasy looking for EuroSTAR speaker

Speak Easy is a diversity program that works on bringing new voices to conferences on testing. The way they work is that they pair up mentors with new speakers, and they have agreed on special channels to various conferences outside the usual Call for Proposals. Speak Easy is awesome, and that is why I volunteer with them as a mentor. We need the new voices in the field.

Currently the Speak Easy call for the EuroSTAR conference is open. I would really love for them to get the best of the world of new speakers, so this is my call for anyone new to the big international arenas to sign up!
The Call for proposals is open until 31st of July, so you have time to act! 


This call also gives me a chance to announce an experiment we** are running to change the world of conferences. EuroSTAR is typical of what I call a "pay to speak" conference, one I have had trouble submitting to because they give you the conference entry but do not pay for the travel + stay (or lost income).

I want things to be different for those who come after me and have similar feelings. So we're launching a scholarship to pay for the travel + stay of whoever gets selected. Don't let the cost of the travel keep you from proposing the talk the conference audience deserves to get!

Lost income I can't help with (yet), but will figure that out eventually too.

Show Speak Easy what you've got. Submit. And if you feel you could use help in formulating what your talk is about, I'm still one of the people who are happy to help with that.

The world of testing conferences needs you. Start the process of getting your voice out with Speak Easy EuroSTAR 2016. And when you get a no (most people do), you've already worked out the first step towards doing that talk at other conferences and local meetups that are just waiting for people like you to volunteer.

** Who is we? That is the organizers of the European Testing Conference, with me leading up front as the main organizer. I want to see things be better.

Friday, June 3, 2016

Programmers make great testers

Patrick Prill published a thoughtful and heartfelt blog post on Reinventing Testers and Testing Prepare for the Future. Read what he said, there's so much good in it.

There's a piece in that blog post I want to write more on:
As a tester in the role as tester, used in the right situations, I can provide value to the project only I as tester can provide. And I don’t want to give up my role as a tester. I want to continue asking questions, experimenting with the system, analyzing strange problems, I don’t want that to go away.
These words could almost be mine. On some days, they are exactly the words I use. But when I use them, I recognize that for me, those phrases are grounded in fear. And justifiably: I'm afraid of not getting to do the job I love. I'm afraid that at some point I won't find work in which management understands the immense value I provide, because they'd worry about developers giving up on their responsibility for quality at the mere existence of someone like me.

However, I see programmers asking questions - better questions even - than me, and it fills me with delight. I see programmers experimenting with the system and showing genuine curiosity. I see programmers analyzing strange problems, using debugging tools to deeply understand what is going on. I see programmers doing brilliant testing, changing perspectives, realizing new stuff. And I don't feel fear of losing my work, but I feel pride in enabling other people to get to enjoy the stuff I've enjoyed so much. And I feel delighted when they identify tedious points of doing this and add automation to help out with that.

I'm proud to be a tester. But I'm more proud that my programmers are testers too. 

I recently met with a tester who was waiting around or coming up with other simple tasks while she couldn't test. The discussion reminded me of the fact that even with my programmers testing very close to my skill level, there's so much feedback work that there's always stuff I could propose to contribute on. (Actually, just contribute. Ask for forgiveness, not permission.)

Now that we release daily, there is no more testing phase; the whole life of the product is a testing phase, for all of us. There's always the view of testing as artifact creation (giving us spec, feedback, regression and granularity) and the view of testing as performance/exploration (giving us guidance, understanding, models and serendipity). The views, and the deep skills for those views, peak at different times with regard to adding a capability to the software. We test continuously.


As a tester, I've spent a lot more of my time with things some people like to call "shift left". But since this is all a cycle, there's no more right or left, really. While in the past my main contributions as a tester were on tasks focused on exploring before production (letting the software speak to me) and contributing before implementing, now my focus is more on exploring while in production, with the help of metrics and insights supported by patterns of real use. I get to do more while implementing when we mob too, and I absolutely love the half-sentence picks that save us hours and days without programmer ego in play.


I believe we need new ways of explaining what the "testing" we speak of is. Because it is more of testing as performance. The performance feeds the artifact creation. The artifact creation constrains the performance - often unnecessarily.

Learning in layers - testing as performance - takes time. The time taken is what makes some programmers bad at testing as we know it. Time comes from outside as well as from within.

I will do whatever I can to make sure we enable the programmers who see the big picture and work wonders. Enforcing a tester role, and founding it on arguments of fear, has not helped me. Enforcing understanding of testing has. When I speak of my fear, I get helpful responses. When I set up defenses and claim I'm not afraid, I feel attacked.

Being nice, being active

Meeting wonderful people at conferences, I get reminded of how lucky I am in many ways. I get to work in an environment that feels safe, where I can voice my concerns and feel that when I fail in communicating, I can recover.

This is not always the case. The story below is adapted from real circumstances that inspired it, combined with some of my past experiences, and then shared.

It was an important project, with a deadline. You know, one of those "deadline regulated by law" types of things. People had high hopes of the new technologies being introduced making us more productive. But the new technologies had surprises, and under pressure people were starting to feel like someone else was slowing their bit down.

On one occasion, there was an outburst of frustration. Two weeks later, the fellow who had not contained herself was replaced. Consultants are replaceable. We needed the good atmosphere, something that said that we could make the challenging schedule.

It's not that anyone had evidence on the schedule being off. There was just a lot to do. Including a lot of surprises. 

But with all these things between us that we couldn't say for various reasons, I suspect no one would speak up if they were concerned. So I focused on my tasks. My work. And while there was nothing to test, I was just prepping mentally, reading stuff online, asking for tasks I could do every now and then.

I can't imagine having to be afraid of being fired for speaking up. I can't imagine even being afraid of being fired if I lost my temper and said something inappropriate, as long as we work things out afterwards. 

I can't imagine passively waiting for work to be assigned to me, or even just going to ask for tasks. I ask for goals and purposes, and in collaboration find the things I can contribute to or learn from.

I've worked in places with a reputation like this, without having the same problems. I've always attributed this to skills in how to take things forward: being nice, being active.

We're all work in progress and while we can't change the way others behave, we can change how we behave in response. I would hope to learn to model better behaviors with hopes of some of it catching on. Say nice things. Encourage. Be helpful. Believe people mean good. Have patience.

It's hard work, but makes life so much nicer.
 


Work on culture: being told and offering views

Some days I feel there's serendipity in the air. Today, the serendipity emerged with two tweets from totally different sources close by in my tweetstream.

The first tweet got me really excited and thoughtful.
I feel there's a lesson for the software industry here. Being realistic about our abilities. Respecting everyone's contribution. Thinking on a bigger scale. Making mistakes and not trying to hide them.

So it seemed very appropriate that the next tweet I looked at was one by Anne-Marie Charrett, proclaiming "Leave testing to the experts".

Just seeing the title, I disagreed. Seeing who wrote it, I was sure I wasn't really disagreeing, that the disagreement was probably around rhetoric. Reading the text, I feel there are some experiences I have that make me feel differently.

When Anne-Marie says: "I don’t tell you, oh developer how to code your program. I don’t tell you oh, sales person, how to sell your product.", I instantly realize I do tell developers how to code their program. I regularly mob with my developers, and feel increasingly comfortable offering my views. I do tell the sales person how to sell our product. I offer my views regularly and in good spirit. If my team at large (sales people belong to my team at large) felt protectionist, they would not take my offered views as that. And if I offered views that were dismissed because experts just know better, I wouldn't feel particularly good about that. And we'd make more mistakes and improve less. So I love the fact that everyone tells others what they could do, and everyone tries to understand why the other would think that would be a good idea.

This tweet sums it up perfectly - Respect. I show respect and I'm shown respect. Respect is like trust, a cultural aspect that can be offered without me working hard to earn it. Believe good in people and you get good from people.
Offering a view is often taken as telling. I like to try to hear people as offering views even when I feel they are telling.

Having worked side by side with developers, it's clear I bring in some special skills and I develop some special skills, both in myself and in the people I share work with. As there's less need for me to be the expert in testing when everyone tests like an expert, I feel sad when we reclaim testing by saying things like "Leave testing to the experts". I'd like to see us distribute that expertise and, with a mix of different skillsets added to it (in particular the automation mindset developers apply to any problem they encounter), find ways of doing great things even better.

So when Anne-Marie asks:
So why do you think its reasonable and perfectly acceptable to tell me how to test software? 
I would consider reframing what we hear as people telling into people being bad communicators with good intent, missing information they could well have on how software testing actually works. And like the expert pilots listening to advice from newbies, we should open up to listening to the advice we're given. And instead of taking advice as advice, I like to think of responding: "That sounds interesting. Would you like to pair with me on trying it out in practice?"


Paraphrasing Woody Zuill from memory: "It's in the doing of the work that we discover the work that needs to be done." I find that while doing, some things that I considered worthless or wrong turn out to be interesting ideas that, combined with existing knowledge, transform the ways we work.