Wednesday, May 24, 2017

Impact of Test Automation on My Everyday Work Life

I'm not particularly convinced of the testing our team's test automation does for us. The scenarios in the automation are somewhat simple, yet take extensive time to run. They are *system tests*, and I would much prefer seeing more tests around the components the team is responsible for. System tests often fail because of dependencies outside the team's control.
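To show what I mean by tests around the components the team is responsible for, here's a minimal sketch in Python with pytest. The installer and the package service are made-up names for illustration, not our actual product; the point is that the neighbouring dependency is stubbed, so the test runs fast and a failure points at our own code rather than at something outside the team's control.

```python
# A minimal sketch (pytest, all names hypothetical) of a component-level test:
# the neighbouring package service is stubbed, so the test runs fast and a
# failure points at the code our team owns, not at an external dependency.
from unittest.mock import Mock


class Installer:
    """Stand-in for a component the team owns (illustration only)."""

    def __init__(self, package_service):
        self._packages = package_service

    def install(self, name):
        payload = self._packages.fetch(name)
        return payload is not None and len(payload) > 0


def test_install_succeeds_when_package_is_available():
    package_service = Mock()
    package_service.fetch.return_value = b"fake-package-bytes"

    assert Installer(package_service).install("agent-1.2.3")
    package_service.fetch.assert_called_once_with("agent-1.2.3")
```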

I've been actively postponing really doing something about it, and today I stopped to think about what the existence of this minimal automation has meant for me.

The better test automation around here seems to find random crashes (with logs and dumps that enable fixing), but that is really not the case with what I'm seeing close up.

The impact the existence of test automation has had on my everyday work life is that I can see at a glance if the test systems are down, so I don't need to install the product regularly just to know it still installs.

So I stopped to think: has this really changed something for me, personally? It has. I feel a little less rushed with my routines. And I can appreciate that.

Tuesday, May 9, 2017

Bias for action


'Bias for Action'. That's a phrase I picked up ages ago, yet one that has been keenly on my mind for some time now.

It means (to me) that if I can choose between planning and speculating vs. doing something, I should rather be doing something. It's in the work we do that we discover the work that needs doing.

There are things I feel need doing, and I notice myself trying to convince others to do them rather than doing them alone. I notice being afraid of going in and starting to restructure our test automation into a shape that would make more sense.

Without bias for action, I procrastinate. I plan. I try to figure out a way of communicating. I don't get anything done.

With bias for action, I make mistakes and learn. I make myself more vulnerable and work with my fears of inadequacy.

It's been such an important thing to remember: things don't change without someone changing them. And I can be a person to change things I feel strongly about.

Thursday, April 20, 2017

Dear Developer

Dear Developer,

I'm not sure if I should write you to thank you for how enthusiastically you welcome feedback on what you've been working on and how our system behaves, or if I should write you to ask you to understand that this is what I do: provide you with actionable feedback so that we can be more awesome together.

But at least I want to reach out and ask you to make my job of helping you easier. Keep me posted on what you're doing and thinking, and I can help you crystallize what threats there might be to the value you're providing, and find ways to work with you to have the information available when it is the most useful. What I do isn't magic (just as what you do isn't magic), but it's different. I'm happy to show you how I think around a software system whenever you want. Let's pair; just give me a hint and I'll make the time for you.

You've probably heard of unit tests, and you know how to get your own hands on the software you've just generated. You tested it yourself, you say. So why should you care about a second pair of eyes?

You might think of testing as confirming whatever you think you already know. But there's other information too: there are things you thought you knew but were wrong about. And there are things you just did not know to know, and spending time with what you've implemented will reveal that information. It could be revealed to you too, but having someone else there, a second pair of eyes, widens the perspectives available to you and can make the two of you together more productive.

We testers tend to have this skill of hearing the software speak to us and hint at problems. We are also often equipped with an analytical mind to identify things you can change that might make a difference, and the patience to try various angles to see if things are as they should be. We focus our energies a little differently.
 
When the software works and provides the value it is supposed to, you will be praised. And when it doesn't work, you'll be the one working late nights and stressing over the fixes. Let us help you get to the praise and avoid the stress of long nights.

You'd rather know and prepare. That's what we're here for: to help you consider perspectives that are hard to keep track of when you're focused on getting the implementation right.

Thank you for being awesome. And being more awesome together with me.

     Maaret - a tester

Time bombs in products

My desk is covered with post-it notes of things that I'm processing, and today I seem to have taken a liking to doodling pictures of little bombs. My artistic talent did not allow me to post one here, but just talking about it lets you know what I'm thinking of. I think of things that could be considered time bombs in our products, and ways to speak of them better.

Working in a security company, there's one easy and obvious category of time bombs, and that is vulnerabilities. These typically have a few different phases in their life. There's the time when no one knows of them (that we know of). Then there's the time when we know of them but others don't (that we know of). Then there's the time when someone other than us knows of them and we know they know. When that time arrives, it really no longer matters much whether we knew before or not; fixing commences, stopping everything else. And there are times when we know, and let others know, as there is an external mitigation or monitoring that people could do to keep themselves safe. We work hard to fix the things we know of before others know of them, because working without external schedule pressure is just so much nicer. And it is really the right thing to do. The right thing isn't always easy, and I love the intensity of analysis and discussion that vulnerability-related information causes here. It reminds me of the other places where vulnerabilities were time bombs we just closed our eyes to, and even publishing them wouldn't make assessing them a priority without a customer escalation.

Security issues, however, are not the only time bombs we have. Other relevant bugs can be time bombs too. And with other relevant bugs, the question of timing sometimes becomes harder. For things that are just as easy to fix in production as while developing an increment, timing can become irrelevant. This is what a lot of the continuous deployment approaches rely on: fast fixing. Some of these bugs, though, have already caused significant damage by the time they are found. Half of a database is corrupted. Communication between client and server has become irrecoverable. A computer fails to start unless you know how to go in through the BIOS and hack the registry so that starting up is possible again. So bugs with impacts beyond inconvenience are the ones that can bring a business down or slow it to a halt.

There are also the time bombs of bugs that are just hard to fix. At some point, someone gets annoyed enough with a slow website, and you've known for years that fixing that one means a major architectural change.

A thing that seems common with time bombs is that they are missing good conversations. Good conversations tend to lead us in the right direction on deciding which ones we really need to invest in right now. And for those not now, when is the time for them?

And all of this after we've done all we can to avoid having any in the first place. 


Wednesday, April 19, 2017

Test Communication Grumpiness

I've been having the time of my life exploratory testing a new feature, one that I won't be writing details on. I have the time of my life because I feel this is what I'm meant to do as a tester. The product (and the people building it) are better because I exist.

It's not all fun and happy though. I really don't like the fact that yet again, the feedback I'm delivering happens later than it could. Then again, judging by the ability, interest and knowledge to react to it, it feels very timely.

There are three main parts to the "life of this feature". First it was programmed (and unit tested, and tested extensively by the developer). Then some system test automation was added around it. I'm involved in the third part of its life, exploring it to find out what it is and what it should be from another perspective.

As the first and second parts were done, people were quick to communicate it was "done". And if the system test automation were more extensive than it is, it could actually be done. But it isn't.

The third part has revealed functionalities we seem to have but don't. Some we forgot to implement, as there was still an open question regarding them. It has revealed inconsistencies and dependencies. And in particular, it has revealed cases where the software, as we implemented it, just isn't complicated enough for the problem it is supposed to be helping with.

I appreciate how openly people welcome the feedback, and how actively things get changed as the feedback emerges. But all of this still leaves me a little grumpy on how hard communication can be.

There are tasks that we know of, like knowing we need to implement a feature for it to work.
There are tasks that we know will tell us of the tasks we don't know of, like testing a feature.
And there are the tasks that we don't know of yet, but they will be there.

And we won't be done before we've also addressed the work we just can't plan for.

Wednesday, March 29, 2017

Test Planning Workshop has Changed

I work on a system with five immediate teams, and at least another ten I don't care to count due to organizational structures. We had a need for some test planning across the five immediate teams. So the usual happened: a calendar request to get people together for a test planning workshop.

I knew we had three major areas where programmer work is split in interesting (complicated) ways across the teams. I was pretty sure we'd easily see the testing each of us would do through the lens of responding to whatever the programmers were doing. That is, if one of our programmers created a component, we would test that component. But integrating those components with their neighbors, and eventually into the overall flows of the system, was no longer obvious. This is a problem I find not all programmers in multi-team agile understand, and the testing of a component easily gets focused on whatever the public interface of the team's component is.
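To make that gap concrete, here's a small sketch in Python; the Scanner and Reporter components are invented for illustration, not anything we actually ship. Each team can have its own public-interface tests pass and still leave nobody checking that one component's output is what its neighbour expects - a small test across the seam makes that integration visible.

```python
# A small sketch (pytest, invented component names): each team's own tests can
# pass while the seam between components goes unchecked. This test exercises
# the hand-off from one team's component to its neighbour.
class Scanner:
    """Team A's component (illustration only): produces a scan result."""

    def scan(self, path):
        return {"path": path, "verdict": "clean"}


class Reporter:
    """Team B's component (illustration only): consumes a scan result."""

    def summarize(self, scan_result):
        return f"{scan_result['path']}: {scan_result['verdict']}"


def test_scan_result_is_what_the_reporter_expects():
    result = Scanner().scan("/tmp/sample.bin")
    assert Reporter().summarize(result) == "/tmp/sample.bin: clean"
```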

As the meeting started, I took a step back and looked at how the discussion emerged. First, there was a rough architectural picture drawn on the whiteboard. Then arrows emerged, explaining how the *test automation system* works before the changes we are now introducing - a little history lesson to frame the discussion. And from there, all together, we very organically talked about chains and pairs and split the *implementation work* between the teams.

No one mentioned exploratory testing. I didn't either. I could see some of it happening while creating the automation. I could see some of it not happening while creating the automation, but being something I would rather have people focus on after the automation existed. And I could see some of it, the early parts, as things I would personally do to figure out what I didn't yet even know to focus on as a task or a risk.

Thinking back 10 years, to a time before automation was useful and extensive, this same meeting happened in such a different way. We would agree on who leads each feature's testing effort, and whoever led would come up with ways for the rest of us to participate in that shared activity.

These days, we first build the system to test the system, explore while building it and then explore some more. Before, we used to build a system of mainly exploration, and tracking the part that stays was more difficult.

The test automation system isn't perfect. But the artifact that we, the five teams, can all go to and see in action, changes the way we communicate on the basics.

The world of testing has changed. And it has changed for the better.

Tuesday, March 28, 2017

World-changing incrementalism

As many exploratory testers do, I keep going back to thinking about the role of programming in the field of testing. At this point in my career, I identify both as a tester and a developer, and while I love exploratory testing, maintainable code comes close. I'm fascinated by collaboration and skills, and how we build these skills, realizing there are many paths to greatness.

I recognize that on my personal skills and professional growth path there have been things that really made me more proficient, but also things that kept me engaged and committed. Pushing me to do things I don't opt into myself is a great way of not keeping me engaged and committed, and I realize, in hindsight, that code had that status for me for a long time.

Here's an idea I still believe in: it is good to specialize in the first five years, and generalize later on. And whether it is good or not, it is the reality of how people cope with learning things: taking a few at a time, practicing and getting better, having a foundation that sticks around when building more on top of it.

If it is true that we are in a profession that doubles in size every five years, it means that in a balanced group half of us have less than five years of experience. Instead of giving everyone the same career advice, I like to split my advice on how to grow between these two halves: the ones coming in and getting started vs. the ones continuing to grow in contribution.

I'm also old enough to remember the times when I could not get to testing the code as it was created, but had to wait months for what we knew as a testing phase. And I know you don't need to be old at all to experience those projects; there are still plenty of them to go around. Thinking about it, I feel that some part of my strong feelings about choosing the tester vs. developer path early clearly comes from the fact that in that world of phases, it was even more impossible to survive without specialization. Especially as a tester, with phases it was hard to timebox a bit of manual testing and a bit of automation, as every change we were testing was something big.

Incremental development has changed my world a lot. For a small change, I can explore that change and its implications from a context of having years of history with that product. I can also add test automation around that change (unit, integration or system level, whichever suits best) and add to the years of history with that product. I don't need to choose either-or; I can have both. Incremental development gives me that possibility, and it is greatly enhanced by the fact that I'm not alone. Whatever testing I contribute to us realizing we need to do, there's the whole team to do it.

I can't go back and try doing things differently. So my advice for those who seek any is this: you can choose whatever you feel like choosing; the right path isn't obvious. We need teams that are complete in their perspectives, not individuals who are complete. Pick a slice, get great, improve. And pick more slices. Any slices. Never stop learning.

That's what matters. Learning.