Tuesday, August 29, 2017

Collaboration Call at Its Best

I've been putting intensive hours into getting to know loads of awesome people within timeboxes of 15 minutes. We call these European Testing Conference Collaboration Calls, where we (organizers and a potential speaker) meet to understand the potential talk better.

We are doing this with the 110 people who left their contacts for us when we called for collaboration, plus a selection of others we would like to consider (e.g. people mentioned by those who submitted), totaling somewhere around 150 discussions. We do most of this paired, to make sure we hear both a tester and a developer perspective.

While the hours sound high, we feel this is an investment into a wider scope of things than just the immediate selection. We don't think of it as an interview; we approach it as a discovery session. We discover similar and different interests and viewpoints, to build a balanced program of practical value that raises the bar of software testing.

150 people means somewhere in the neighborhood of 200 talks. And the conference program for 2018 fits 14.

I've been delighted with each and every discussion, getting to know what people feel so strongly about that they want to share it at a conference. I've learned that two thirds of the people are ones I would never select based on the text they submit, yet I can get excited about their talks when I speak with them. Some would become better with a little guidance. Sometimes the guidance we fit into the 15 minutes is enough to make them better.

Most of the calls end with "I already see stronger proposals in the same category" or "We'll keep you on the list to see how that category builds as we continue the calls". Today's call was the first one that ended with "Welcome to European Testing Conference 2018 as a speaker".

The call was how I think of a collaboration call at its best. This time a first-time speaker had submitted a title (with the remark 'working title') and an abstract of two sentences. As they went through the talk proposal, it sounded exactly like many others: how testers can provide value other than programming. At one point in the story, half a sentence was something like "I introduced them (programmers) to heuristics and oracles", with the explanation around it making it obvious this lesson was well received and useful. In the end we told them what we heard: a story that was relevant and shared by many. And a piece that should be a talk in its own right.

With a bit of collaboration, that piece around heuristics started to take form. And knowing what is already on our list to consider as these calls close, this is the thing we want to show: testing as testers think of it. Practical, improving our craft.
It's a talk that would not exist without this 15-minute call.
It's still open whether that talk will be seen, as anything that emerges this suddenly deserves some thinking time, especially for the presenter. We would never want to get people committing to talks that they don't own themselves. And many of us need introspection through quiet time.

I just wish I had 150 slots available to share the best out of every one of these unique people we get to talk to. So much knowledge, and wonderful stories of how the lessons have been learned.

Tuesday, August 22, 2017

A look into a year of test automation

It's been a year since I joined, and it's been a year of ramping up many things. I'm delighted about many things, most of all the wonderful people I get to work with.

This post, however, is on something that has been nagging at the back of my head for a long time, yet I've not taken any real action beyond thinking. I feel we do a lot of test automation, yet it provides less actionable value than I'd like. A story we've all heard before. I've been around enough organizations to know that the things I say with visibility into what we do are very much the same in other places, with some happy differences. The first step to better is recognizing where you are. We could be worse off - we could be unable to even consider where we are, for lack of evidence of things we've already done.

As I talked about my concerns out loud, I was reminded of things that test automation has been truly valuable for:
  • It finds crashes where human patience won't stick around long enough, and turns random crashes into systematic patterns by saving the results of various runs
  • It keeps checking all operating systems when people won't
  • It notices side effects on basic functionality in an organization where loads of teams commit their changes to the same system without always understanding the dependencies
However, as I've observed things, I have not seen any of these really in action. We have not built stuff that would be crashing in new ways (or we don't test in ways that uncover those crashes). We run tests on all operating systems, but when they fail, the reasons are not operating system specific. And there are much simpler tests than the ones we run to figure out that the backend system is down again for whatever reason. Plus, if our tests fail, we end up pinging other teams for fixes, and I'm growing a strong dislike of the idea of not giving these tests to the very teams that need the pinging, for them to run themselves.

Regardless of how I feel, we have now invested one person and a full year into our team's test automation. So, what do we have?

We have:
  • 5765 lines of code committed over 375 commits. That means we do about 25 pull requests a month, with an average commit size of 15 lines.
  • The code splits into 35 tests with 1-8 steps each. Reading through them, I'm still ashamed to call what these tests do testing, because they cover very little ground. But they exist and keep running.
  • Our test automation Python code is rated 0.90/10 by Pylint, with 2839 complaints. That means that every second line needs looking into (the quick arithmetic is in the sketch below). The number is made worse by the fact that I have not yet set up some of the libraries.
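For transparency, here is a minimal Python sketch of that arithmetic, using only the figures quoted above; the package name in the comment is a placeholder, not our actual code:

    # Back-of-the-envelope check of the numbers in the list above.
    lines_of_code = 5765
    commits = 375
    pylint_complaints = 2839

    print(round(lines_of_code / commits, 1))            # ~15.4 lines per commit
    print(round(pylint_complaints / lines_of_code, 2))  # ~0.49, i.e. every second line flagged

    # The 0.90/10 rating comes from running Pylint over the package, e.g.
    #   pylint our_tests/
    # which ends its report with "Your code has been rated at 0.90/10".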
In the year, I cannot remember more than one instance where the tests that should protect my team (other teams have their own tests) found something that was feedback to my team. I remember many cases where we found problems while creating the test automation - problems we could also have found by just diligently covering the features manually, but I accept that automation has the tendency of driving out the detail.

I remember more cases where we fix the automation because it monitors that things are "as designed" while the design itself is off.

I know I should do something about it, but I'm not sure I find it worth my time. I prefer the manual approach most of the time. I prefer throwing away my code over leaving it running.

There's only one thing I find motivating when I consider jumping into this. It's the idea that testers like me are rare, and when I'm gone, the test automation I helped create could do some real heavy lifting. I'm afraid my judgement is that this isn't it yet. But my bar is high and I work to raise it.

As I write this post, I remind myself of a core principle:
all people (including myself) do the best work they can under the prevailing circumstances.
Like a colleague of mine said: room for improvement. Time to get to it.

Friday, August 18, 2017

Making a Wrong Right

I've coached my share of awesome speakers to get started in speaking. That makes me a frequent mentor. I'm also an infrequent sponsor, and would love to find possibilities to make that more common.

This week, one of my mentees spoke up in a group of women in testing about a challenge she was facing in speaking. She had been selected to speak at EuroSTAR, which is a #PayToSpeak conference, meaning you pay your own travel and stay. When planning for the conference, she had secured her company's support, but things changed and the support was retracted.

She was considering her options. Cancelling, but that seemed hard now that the program was already out. Showing up just for the talk, to minimize the cost. And that was pretty much it.

She asked around for advice on getting companies to pay for travel, only to hear that it was not uncommon amongst the group of women, even the frequent speakers, that their employers don't pay for the travel. The conferences really should pay the travel and stay for their speakers. And the good ones do.

I was delighted to have the opportunity to step up and offer a travel scholarship for this particular case. For the second year in a row, I'm using my non-profit to pay for a Speak Easy connected minority speaker to go to EuroSTAR, to speak and to use the full opportunity of learning. I call EuroSTAR my favorite disliked conference, as they really should change their policy. And while I can't change them, I can remove a small part of the pain the #PayToSpeak policy causes.

I can make one small wrong right. This speaker is awesome in many ways, and an inspiration to me.

I just briefly checked some conferences that pay their speakers and some that don't. Unsurprisingly, the ones that pay their speakers have a much more natural gender balance.

I can correct one wrong. The power of correcting the bigger wrong lies with the conference organizers. 

Tuesday, August 15, 2017

Dare to be the change you want to see

"Thank you for not having the steering group preparation meeting", he carefully said after the 1st steering group meeting after the holidays. I probably looked puzzled, as for me it was obvious and part of what I had signed up for. I wasn't about to turn into a project manager in an awesome place that has no scrum masters and usually also no project managers. I'm a hands-on tester. But when the previous project manager stepped down and I saw a need for people to meet weekly to exchange management-level news (to leave teams alone unless they pulled the info in), there was no other option than promising to hold the space for the steering group to happen.

Let me go back a little in time. I joined a year ago, and we had no project manager. We had teams of software engineers and quality engineers, and as with all new teams, we were finding our way under the guidance of self-organizing. Many of us were seniors, and we got the hang of it.

Meanwhile, as we were stumbling to form as individual teams and establish cross-team relations across two sites, someone got worried enough to "escalate". And the escalation brought in a project manager.

The project manager visibly did two things. He set up a steering group meeting, where I ended up as a member ("just a tester", but none of us is *just* anything these days). And he set up a cross-team slot. He was probably trying to just create forums, but they felt more like ceremonies. The cross-team session was a ceremony of reporting to him, as much as he tried to avoid it. And the steering group was a ceremony of reporting cleanly to the management, as it was always preceded by a prep meeting as long as the actual meeting, but with only 3 of the 8 people present.

As the project manager left for other assignments, the teams abandoned the cross-team slot and started more active 1:1's as they sensed the need. Out of the 10 of us, only 2 had stated strongly over time that the slots were not a good use of time, yet everyone was keen to give them up. The others had just come because they were invited.

And similarly, the steering group meetings turned into actual discussions, creating a feeling of mutual support and sharing without the pre-meeting. I stated I was there to hold the space, and that's what I do. I start discussions, and end them if they don't fit the idea of what this meeting is about as per our mutual understanding.

But for the last 6 months, I did not like the way we did things. Yet I too, while expressing my feelings every now and then, went through the motions. I only changed when the environment changed.

All of this reminds me to be more brave: dare to be the change you want to see. Experiment with fixes. And not only when people leave, as they were never the real bottleneck. It was always in our heads. Mine amongst the others.

Friday, August 11, 2017

A Serendipitous Acquaintance

We met online. Skype to be precise. Just a random person I did not know, submitting to our conference. And we talk to everyone on Skype.

As the call started, we had our cameras on like we always do to begin a call, to create a contact between people instead of feeling like a phone call of strangers. And as his camera was turned on, we were in for a surprise. It was clear we were about to talk to a teenage boy who had just submitted to a testing conference.

We talked for 15 minutes, like with everyone. It was clear that based on his talk proposal, we would not be selecting him. But listening to him was different. His thoughts were clear and articulated. He was excited about learning. He was frustrated about people dismissing him - he had submitted to tens of conferences, and we were only the second he would hear back from. We asked him questions, poked at his experience and message, and got inspired. Inspired enough to suggest that regardless of what our decision on this conference would be, I would be delighted if he would accept my help as a speaker mentor, so I could help him hone his message further. He had delivered a keynote at the Romanian Testing Conference through local connections, and was driven to do more. 15 minutes was enough to realize that Harry Girlea is awesome.

When I later met him to go through his talk and we talked for 40 minutes, the first impression only strengthened. This 13-year-old is more articulate than many adults. When he told me stories of how wonderful he felt testing with professional games testers in game realms, I could hear he was proud of his learnings. And when he coined why he loves testing as "As a tester, things are similar but never the same", all I could do was say that with my extra 30 years of experience, I feel the same.

It became clear that he wanted to be a bigger part of this, speaking at conferences and learning more about testing.

We improved his talk proposal, and he is submitting again. For European Testing Conference, we have not made our choice yet. But I hope we are not the only ones seriously considering him.

The kids of today learn fast. We adults have a lot to learn from them.


Thursday, August 10, 2017

We don't test manually at all

We sat in a room, the 7 of us. It was a team interview for a new candidate, and we were going through the usual moves I already knew from doing so many of these in the last few weeks. And as part of those moves, we asked the programmer candidate how they test their code.

It wasn't the candidate that surprised me, but one of my own team's developers, who stated:
"We don't test manually at all".

My mind was racing with thoughts of wonder. What the hell was I doing, if not testing? How could anyone think that whoever was figuring out scenarios, very manually, wasn't doing manual testing at all? Where had my team's education failed so badly that any of them could even think that, let alone say it out loud?

Back in the team room, I initiated a discussion on the remark to learn the meaning behind it.

What I was doing wasn't included (I do a lot of exploratory testing and find problems) because I refuse to test each build the same way.

What the developers were doing wasn't included because manual testing targeted for a change is just part of good programming.

Figuring out scenarios to automate, trying them out to see if they work when turned into code, and debugging tests that fail wrong (or don't fail right) wasn't included, because it is part of test automation.

So I asked what, then, was this infamous manual testing that we did not do? It is the part of testing that they consider boring and wouldn't label intellectual work at all. The rote. The regression testing done by repeating things mindlessly without even considering what has changed, because there could be things that just magically broke.

We test manually, plenty. We are no longer mindless about it. So I guess that's what it really means. Not manual, but brain-engaged.

All I can do is make sure that the people who matter in recruiting see to it that someone particularly well brain-engaged joins the teams. That someone sometimes is not the tester who specializes in automation.

Sunday, August 6, 2017

Community over Technology in Open-Source

So, you have created an open source tool. Great, congratulations. I'm still planning on adding mine to the pile. But let me say something I wonder if techie people understand: *your tool is not the only tool*. As a user of these tools, I'm feeling the weight of trying to select one that I even want to look at. I've looked at many, only to find myself disappointed by something I find relevant being missing. And yes, with an open source tool, I've heard the usual mantra that I can just change it. But forking my own version to have faster control over it creates a merge hell, so you had better make sure you let things into the main repo fast enough and not leave them hanging in the pull request queue.

There are loads of awesome open source tools, but the user's challenge is no longer so much finding some, but finding one that is worth investing your time in. Having something die out of your tool stack and replacing it creates distraction. So most of us go for tools with good communities. The tech matters less than the community.

With the European Testing Conference Call for Collaboration, many people who have created a tool propose a talk on that tool. A quick and simple search on GitHub tells me there are 1,004,708 repository results for "testing", and over the two years of these 15-minute calls, I've gained a small insight into maybe a hundred people creating and maintaining their own tools, wanting to share their awesomeness.

Last year we defined what kind of things we might consider, saying that it has to be either an insightful idea that anyone could relatively easily bring into their own testing framework, or something that an open source tool supports. This year, I'm learning to add more requirements to the latter.

An open source tool is not much of a support if it does not have a proper community. There need to be other users and an active core group answering questions and improving the experience of getting introduced to the tool. But it also matters more to me now how the core group deals with their project.

If I see pull requests that have been sitting in the queue for a long time, it hints to me that community contributions are not seen as a priority.
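That smell is easy enough to check mechanically. Here is a minimal sketch against the GitHub REST API, assuming the requests library is installed; the owner and repo names are placeholders, and real use would want authentication and pagination:

    from datetime import datetime, timezone
    import requests

    def stale_open_prs(owner, repo, max_age_days=90):
        """Return titles of open pull requests older than max_age_days."""
        response = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "open", "per_page": 100},
            timeout=10,
        )
        response.raise_for_status()
        now = datetime.now(timezone.utc)
        stale = []
        for pr in response.json():
            opened = datetime.strptime(pr["created_at"], "%Y-%m-%dT%H:%M:%SZ")
            opened = opened.replace(tzinfo=timezone.utc)
            if (now - opened).days > max_age_days:
                stale.append(pr["title"])
        return stale

    # Example: list pull requests that have waited over three months.
    for title in stale_open_prs("some-org", "some-testing-tool"):
        print(title)

A long tail of old, unanswered pull requests in that output tells me more about a project's future than any feature list.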

Building and supporting a community takes effort. I see some projects understand that and emphasize a community that welcomes contributions, while others treat the community more as outsiders.

I'm grateful for the 15 minutes of insight into tools I would never have given even that much time, had I not had the main contributor as my guide on the call, wanting to share their project in one of the limited spots of the conference. Any conference, not just European Testing Conference, works against the idea of a limited budget of spaces, and out of a typical 10-20 slots in a conference, not all of these tools will ever be presented.

What are the tools that are worth the spots then? Selenium and Protractor are clearly choices of the community already. Others need to have solved a common problem in a particularly insightful way, and to have a life ahead that the community can believe in.

Community is more relevant than the technology.