Sunday, January 15, 2017

The Overwhelmingly Helpful Comments

I'm going through a bit of an emotional flashback from a discussion I saw on twitter and tried to dismiss. I did not succeed too well with the dismissing, so I'm blogging to offload.

One of the women (49 % of the 454 people) I follow on twitter posted her Selenium code sample from a short training she had just delivered. And two of the men (51 % of the 454 people) I follow on twitter posted comments criticizing her code.

I truly believe these critical comments were made in good faith, with the intention of helping her improve. Both comments introduced concepts that were missing from the code ('page object factory', 'java 8 features like lambda'), and you could even assume the authors knew there could be a reason things were left out even if they could have been there too.

What brought me to blogging is that it took me back to the time when I started coding decades ago, and when I stopped coding for precisely these kinds of lovely, helpful people who were suffocating me.

Talking in metaphors

I love dancing but I'm not a dancer. When I go to dancing lessons, on the basic level my teachers usually correct only the most relevant things, and a lot of the time, they don't correct anything. They let me be surrounded by the joy of dancing and encourage continued practice without critique.

They could also do things differently. They could start right away by telling me to pay attention to my dancing position. Or they could continuously point out everything that I could do differently and better. I could hear about my facial expression (smile!), the positioning of every part of my body, and the fact that, you know, there are all these subtle differences in rhythm that I'm not yet ready to pay attention to.

The joy comes first. And the other stuff comes layered. And sometimes, the feedback just sucks the joy out of dancing because that's all I want to do.

Being a Woman who Codes

When you are a minority, the positive and helpful people around you all tend to want to pitch in with feedback. The style of the comments may be very constructive, but the amount of it can be overwhelming. And the amount is overwhelming just for you, because people just don't care as much about the others. The others are lucky if they get helped, but you get helped by everyone.

Everyone looks at what you do in a little more detail. Everyone wants to help you succeed with their feedback. And then there's someone, usually a very small minority, who uses all the feedback you're getting that others don't as evidence that 'women are not meant for coding'.

In projects with crappy code from everyone else, I always felt the feedback was asking me to be more perfect. Good intentions turn into sucking the joy out of the whole thing. I dropped coding for 20 years. And even as I've come back, I'm still overly sensitive to being helped in overwhelming amounts.

The environment matters

Recently, projects where pull requests get criticized and improved in detail, regardless of gender, are places I find safe again. It's not special treatment, it's feedback for everyone. And it comes from a place of putting every line to production pretty much as soon as it gets committed to the main branch.

Back to the twitter incident

So with the twitter incident of commenting in particular for this piece of code, I would ask:

  • Are these same people giving the same attention to every other public speaker's code?

Selective helping is one of the things I've experienced that drove me away from coding. I can't speak for anyone else, but I surely know that at a younger age, it made a difference to me. I would not be back without (strong-style) pairing and mobbing.

Saturday, January 14, 2017

Thinking in Scopes

The system I'm testing these days is very much a multi-team effort, and as an exploratory tester looking particularly into how well our Windows Client works, I often find myself in between all of these teams. I don't really care if my component works as designed, if the other components are out of sync, failing to provide end users the value that was expected.

Working in this, I've started to realize that my stance is a rather rare one. It would appear that most people look very much at the components they are creating, the features they assign to those components, and the dependencies upstream or downstream that they recognize. But exploring is all about discovering things I don't necessarily recognize, so confirmation and feature focus won't really work for me.

To cope with a big multi-team system, I place my main focus on the two end points that users see. There is a web GUI for management purposes, and there's a local Windows client. And a lot of things in between, depending on what functionality I have in mind. As an exploratory tester, while I care most about the end-to-end experience, I also care about ways to make things fail faster in all the components along the way, and about having control over all the pieces in between.

I find that the decomposition of things into pieces while caring for the whole chain may not be as common as I'd like it to be amongst my peers. And in particular, amongst my peers who have chosen to pay attention to test automation, from a manual system tester background.

Like me, they care about end to end, but whatever they do, they want to do by means of automation. They build hugely complicated scripts to do very basic things on the client, and are inclined to build hugely complicated scripts to do very basic things on the web UI - a true end to end, automated.

There's an almost funny thing about automation: while I'm happy to find problems exploring and then pinpoint them to the right piece, I feel the automation fails if it can't do a better job of pinpointing where the problem is in the first place. It's not just a replacement for what could be done manually while testing, it's also a replacement for the work to do after it fails. Granularity matters.

For automation purposes, decomposing the system into smaller chains responsible for particular functionality gets more important. 

I drew a picture of my puzzle.


Number 6 is true end-to-end: doing something on the Windows client 'like a user', and verifying things on the web GUI 'like a user'. Right now I'm thinking we should have no automated tests in this scope.

Number 1 is almost end to end, because the web GUI is very thin. Doing something on the Windows client and verifying on the same REST services that serve the GUI. This is my team's favored system automation perspective, to the extent that I'm still struggling to introduce any other scopes. When these fail (and that is often), we are left figuring things out in the scope of about 10 teams.

Number 2 is the backend system ownership team's favored testing scope. Simulating the Windows client by pushing simulated messages in through one REST API and seeing them come out transformed from another REST API. It gives a wide variety of control through simulating all the weird things the client might be sending.
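To make the scope 2 idea concrete, here is a minimal sketch of such a round trip: push a simulated client message into one REST endpoint and verify it comes out transformed from another. The backend here is a tiny in-process stub, and the endpoint names and the transformation (upper-casing a status field) are invented for illustration; the real system's APIs look nothing this simple.

```python
# Sketch of a "scope 2" test: REST in, transformed REST out.
# The stub backend, endpoints and transformation are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

STORE = {}  # messages ingested via POST, keyed by id


class StubBackend(BaseHTTPRequestHandler):
    def do_POST(self):
        # /ingest: accept a raw "client" message, store a transformed copy
        length = int(self.headers["Content-Length"])
        msg = json.loads(self.rfile.read(length))
        STORE[msg["id"]] = {"id": msg["id"], "status": msg["status"].upper()}
        self.send_response(202)
        self.end_headers()

    def do_GET(self):
        # /events/<id>: serve the transformed message back out
        msg_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps(STORE[msg_id]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


def scope2_roundtrip(payload):
    """POST a simulated client message in, GET the transformed event out."""
    server = HTTPServer(("127.0.0.1", 0), StubBackend)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_port}"
    req = request.Request(f"{base}/ingest",
                          data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
    with request.urlopen(f"{base}/events/{payload['id']}") as resp:
        result = json.loads(resp.read())
    server.shutdown()
    return result


if __name__ == "__main__":
    out = scope2_roundtrip({"id": "42", "status": "installed"})
    assert out == {"id": "42", "status": "INSTALLED"}
```

The point of the scope is visible even in the sketch: when the assertion fails, you know the problem is in the backend transformation, with no Windows client or web GUI in the picture to muddy the diagnosis.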

Number 5 is something the backend system ownership team has had in the past. It takes a REST API as the point of entry, simulating the Windows client, but verifies the end user perspective through the web GUI. We're actively lowering the number of these tests, as experimenting with them shows they tend to find the same problems as REST to REST, but be significantly slower and more brittle.

I'm trying hard right now to introduce scopes 3 and 4. Scope 3 would include tests that verify whatever the Windows client is generating against whatever the backend system ownership team is expecting as per their simulated data. Scope 4 would be system testing just on the Windows client side.

The scopes were always there. They are relevant when exploring. They are just as relevant (if not more relevant) when automating. 

The preference for the whole-system scope puzzles me. I think it is learned in the years as a "manual system tester" later turned "system test automation specialist". Decomposing requires a deeper understanding of what gets built and how. But it creates much better automation.

Telling me there are unit tests, integration tests and system tests just isn't helpful. We need the scopes. Thinking in scopes is important. 




Friday, January 13, 2017

Overnight Changes

There is a discussion that I keep going back to, begging to be unloaded off my mind. This morning I said the words: "It's like I joined a different project this week". There's been a sudden change in the atmosphere and in the things we do, and thinking back the last week makes me realize some changes can be really fast.

In a week, my team transformed from a team working on its own components to a team that works with other teams on shared goals. We transformed from a team that seeks product owner acceptance and prioritization into a team that checks priorities but works actively to identify the next steps with one another. And we changed from a team that was quiet and not sharing, into a team that talks and plans together.

I can see three changes in the short timeframe:

  1. We did our first end-to-end demo across two teams and it resulted in a lot of positive reinforcement of customer value over team output. 
  2. Our product owner moved out of the team room and took a step back leaving more room for the team to decide on things. 
  3. A new-old developer joined the team. 

We weren't bad before, but this week has been amazing. We've accomplished a lot. We've learned a lot. 

Experiences this week remind me again on how changes in the environment change the system. And I'm delighted to be in a place that is willing to play with the environment to find even better ways to work together. 

The Expensive Fear of Forgetting

I sat through two meetings today that leave me thinking about product backlogs.

In the first one, we took a theme ('epic') and as a group brainstormed, adding post-it notes to describe what would be needed, what would be needed first and what would be needed in general. The discussions provided a lot of shared understanding and clarity, and helped us identify a shared idea of how we are trying to prioritize things for value and risk. At the end there was a pile of post-its we had had the discussion around. I felt the meeting had been really good until someone said: "Now, let's take all these post-its and put them into Jira". I shrugged the unease off, let my mind relax, and then realized something about priorities, against the principles we had grown to understand, that again changed the overall plan. At this point the unease turned into frustration. If someone did take the "plan" into Jira, now someone needed to go and change the plan. Couldn't the shared understanding and the next step to work on be enough, over the whole plan?

The second meeting was one clarifying a feature ('story') we had just pulled up as a thing to work on. The meeting's focus was on identifying acceptance criteria, and again the discussions around the item helped us create a shared understanding, identify work to do between various parties and introduce the people working on this to one another. The moment of unease happened again at the end as someone said: "now we need to go add all individual tasks to Jira and put the estimates in place". My team does not do estimates; we work with post-it notes on the wall and are doing pretty well with our Jira avoidance, taking discussions away from writing and into richer media.

Instead of improving the backlog practices, I work with my team to improve our collaboration and discovery, shared understanding of priorities and ability to release. Instead of asking "how long will it take", I work with them to figure out if there was a way we could deliver something smaller of value, first. And it is clear: in doing the work, we discover the work that needs doing. We need to focus on doing more of the next valuable thing, over creating a longer term view or details of promises in electronic format.

Sometimes we are so afraid of forgetting that we are ready both to invest in maintaining our lists (what a waste, in my experience) and to shape our work so that there's less maintenance, with less learning. Discovery is critical, and we pay a high, hidden price when we create ways of working that don't encourage it in full.



Yes is the right answer when someone asks for help

Working in agile projects, we tend to write a little less documentation. And working in a big project, whatever documentation we write, it tends to be dispersed.

Four months into the new job, I'm still learning my way around doing things and figuring things out. I'm happy with my little tools for finding the dozens of code repos that build up the product I'm testing, but there's a lot going on that I have just chosen not to pay attention to. Quite often there's this feeling of being overwhelmed with all the new information, as we by no means stopped making changes when I joined.

In the past, I remember solving issues of documentation with two main ideas:
  • Draw on request. Whenever someone would want to understand our current system, anyone in the team could go on a whiteboard, draw and explain. 
  • Write on repeated requests. When the same info is asked for and does not completely change as we are learning, write instructions on the wiki. 

They are still relatively good approaches, except...

Yesterday, I was overwhelmed with many different directions of work and there was one particular thing I needed to learn to do: get started on testing against a REST API.

Some weeks back I had taken my first go at it, and postponed the work because I was missing information about some needed credentials. So this time I decided to approach it differently. I went and talked to a colleague, asking if he would join me to get one POST working on my machine. But no.

I got an (outdated) wiki page describing content rules, but lacking the credentials I was unaware of.

I got a (not working) exported Postman script.
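For what it's worth, the missing piece was roughly this small. A sketch of the kind of single authenticated POST I was trying to get working, where the endpoint, payload and credentials are placeholders rather than anything from the real system:

```python
# Building (not sending) an authenticated JSON POST request.
# URL, payload and credentials below are hypothetical placeholders.
import base64
import json
from urllib import request


def build_post(url, payload, user, password):
    """Build a JSON POST request with HTTP Basic auth, without sending it."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )


req = build_post("https://backend.example/api/items",
                 {"name": "first test item"},
                 "svc-user", "not-the-real-secret")
# request.urlopen(req) would send it; here we only inspect what was built.
assert req.get_method() == "POST"
assert req.get_header("Authorization").startswith("Basic ")
```

Ten minutes of pairing over something like this would have unblocked me; the credentials were the part no document had.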

I've been thinking about this ever since. When someone comes to talk to you and asks for help, as in doing together something you know well, the right answer would be yes, or yes, in two hours. Not "here's the document".

I eventually got it working with the documents. But I'm now realizing that the feeling of being left alone is overwhelmingly more important than the fact that there were pieces of documentation that were eventually pointed out.

I miss more of a human connection than "create a pull request and someone will review it". How about us working together, really *together* for a change?

I guess I did not know to miss this before I had experienced Mob Programming. But now the individualistic attitudes make me painfully aware how things could be better.


Saturday, January 7, 2017

Why setting out to automate tests is a bad idea

On Thursday at work, a colleague gave a presentation I had invited him to do, on how they've been automating their tests. Organizing sharing sessions comes naturally to me, both from being curious and knowing where to find all the best stories, and from creating an atmosphere of sharing and learning.

As his story starts, he tells us he needs to explain a few things first. He spends maybe 30 seconds explaining why finding a way to automate was so needed (malware evolves fast, and when you're responding to something like that, you will need to evolve fast too). But then he spends 20 minutes talking about things most people in the room, identifying as quality engineers, have never done. He speaks of recognizing problems with being able to test, and finding the best possible programmatic solution.

He talked about how they introduced blue-red deployments within the product (without even knowing it was a thing outside Windows client software) and how that solved all sorts of problems with files being locked. He shared how they changed, bit by bit, the technical designs so that the whole installation is rebootless, because it was just hard to automate stuff that would need to continue after a reboot. Example by example, his story emerges: to automate testing, they needed to fix testability. Just adding tests, when you have big problems that are hard to work around and you could change the product instead, makes little sense.

The story makes it clear: to be effective in this style of testing, you should be able to program outside of the tests you're programming, and if you can't, team up with someone who can. Without the view of solving problems programmatically where they make the most sense (design vs. tests), you would be on a path to difficulties.

For a room full of test automators who barely look into the application code, his message may have been intimidating. Setting out to automate tests (as in: this is what I want to test, designs don't change) is often an invitation to trouble.

Make it first simple to test, then make a simple test to test it. The first is much harder. And I find that most of the repurposed manual testers who become test automators without caring for product structures that make "manual" testing easier hit this trap harder than exploratory testers who have been working with the friends with pickup trucks (programmers) all along.

Monday, January 2, 2017

Normalizing Learning

I remember some years ago when I heard about a new thing that was going on and getting some buzz around the tester universe: Weekend Testing.

The idea is simple and beautiful. Volunteers would dedicate some time to facilitate practice sessions on testing over Skype, and anyone could join. The sessions, as the name says, would take place on weekends - off time from work. The sessions would be a place to see how other testers approach a particular problem. And if you missed a session, a transcript of the writing that went on would be published for you to read.

I absolutely hated the idea. Not because of the idea of practicing over Skype, but because of the built-in cultural experience that said to me:
Testers are not important, if they want to learn they need to do so on their own time. Learning is not part of work hours.
I was so against the notion that I did not join any of the weekend testing sessions (until I ended up facilitating for Weekend Testing Europe a little over a year ago). Instead, I would put energy into organizing half of my meetups during office hours, learning that in Finland companies do let people join in the middle of the day, and in particular in the mornings.

I remembered this because I listened to Ajay's CAST keynote and learned how he would work (+ travel for work) from 8 am to 7 pm, and then work on learning from 7 pm to 1 am. And how he, after 17 years (!!) of hard work, was finally delighted to do his first international keynote, something he had aspired to since doing a local talk in 9th grade.


My hours probably look only a little better, but the underlying cause I work for is to find means to normalize learning. When I am at work, every day I can take an hour to do things *differently* than usual, and that teaches me a lot. I can stop to reflect instead of just steaming through an assignment. I can read or listen to a talk. I can volunteer for tasks I'm not assigned to, even tasks people say are "not part of my job description". And I can find a meetup where I can hear how bad things are elsewhere, so that I remember to appreciate what amazing places to work I have managed to end up in.

Learning is the key. But instead of externalizing learning to one's own time, it needs to be normal to learn while working. Even when we are ambitious and find it hard to invest just the regular hours in our 'work' - including the learning.