Tuesday, February 28, 2017

The Lying Developers

The title is a bit clickbait-y, right? But I can't help directly addressing something from the UKStar conference, from a talk I was not at, summarized in a tweet suggesting that testers need to be technical so that developers can't lie to them.
As a tester, the services I provide are not a panacea for all things wrong with the world. I provide information, usually with a primary, empirical emphasis on the product we are building. Being an all-around lie detector does not strike me as the job I signed up for. Only some of the lies are my specialty, and I would claim that being "technical" isn't about the core type of lie (I prefer the word illusion) that I'm out to detect.

If a developer tells me something cannot be fixed (and that is a lie), there are other developers to pick up on that lie. And if they all lie about it together, I need a third-party developer to find a way to turn that misconception into a lesson in how it is possible after all. I don't have to be able to do it myself, but I need to understand when *impossible* is *unacceptable*. And that isn't technical, that is understanding the business domain.

If a developer tells me something is done when it isn't, the best lie detector isn't going and reading the code. Surely, the code might give me hints of a completely missing implementation or a bunch of TODO tags, but trying out the functionality often reveals that and *more*. Why would we otherwise keep finding bugs when we patiently go through flows that have been peer reviewed in pull requests?

Back in the day, I had a developer who intentionally left severe issues in the code he handed to testing, to "see if we notice". Well, we did.

And in general, I'm happy to realize that this is as close to systematic lying as I feel I have ever needed to come.

Conflicting belief systems are not a lie. And testers are not lie detectors; we have enough work on us without even hinting at the idea that we would intentionally be fooling each other.

There are better reasons to be a little technical than the lying-developer fallacy.



Monday, February 27, 2017

A chat about banks

After a talk on mob testing and programming, someone approached me with a question.

"I'm working with big banks. They would never allow a group to work this way. Is there anything you have to say to this?"

Let's first clarify: it's really not my business to say whether you should or should not mob. What I do is share that, against all my personal beliefs, it has been a great experience for me. I would not have had that great experience without doing a thing I did not believe in. Go read back in my blog about my doubts, how I felt it was the programmers' conspiracy to make testers vanish, and how I later learned that where I was different was more valuable, in a timely manner, in a mob.

But the problem with big banks as such is that there are people who are probably not willing to give this a chance. Most likely you're even a contractor, and proposing this adds another layer: how about you pay us for five people doing the "work of one person". Except it isn't the work of one person. It's five people's work done in small pieces, so that whatever comes out in the end does not need to be rewritten immediately and then again in a few months.

Here's a story I shared. I once got a chance to see a bank in distress: they had production problems and were suffering big time. The software was already in production. And as there was a crisis, they did what any smart organization does: they brought together the right people, and instead of letting them work separately, they put them all on the same problem, in the same room. The main difference from mobbing was that they did not really try to work on one computer. But a lot of the time, solving the most pressing problem, that is exactly what they ended up doing.

For the duration of the crisis, it was financially a non-issue to bring together 15 people to solve it, working long hours. But as soon as the crisis was solved, they again dismantled their most effective way of developing. The production problems were a big hit to their reputation as well as their finances. I bet the teams could have spent some time working tightly together before the problem surfaced. But at that time, it did not feel necessary, because wishful thinking is strong.

We keep believing we can do equally good work individually, one by one. But learning and building on each other tend to be important in software development.

Sure, I love showing the lists of all the bugs developers missed. The project managers don't love me for showing them so late. If the likes of me could use a mechanism like mobbing to change this dynamic, wouldn't that be awesome?

Friday, February 24, 2017

Theories of Error

Some days it bothers me that testers seem to focus more on their actions while testing than on the underlying motivations for why they test the way they do. Since I was thinking about this all the way to the office, I need to write about a few examples.

At a conference a few weeks ago, I was following a session where some testing happened on stage, and the presenter had a great connection with the audience, speaking back and forth about ideas. The software being tested was a basic tasks tool, running on a local machine and saving stuff into whatever internal format the thing had. And while discussing ideas with the audience, someone suggested testing SQL-injection types of inputs.

The idea of injection is to enter code through input fields to see if the inputs are cleaned up, or if whatever you give goes through as such. SQL in particular would be relevant if there was a database in the application, and it is a popular quick attack among testers.

However, this application wasn't built on a database. Testing this way wouldn't make sense unless there was a bit more of a story around it. As the discussion between the audience member and the presenter remained puzzled, I volunteered the idea of connecting the injection inputs with an export functionality, if there was one, and assessing the possible error from that perspective. A theory of error was needed for the idea to make sense.
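To make the theory of error concrete, here is a minimal sketch (in Python, with made-up names) of how the same injection-style inputs could connect to an export feature; the export-to-CSV function is an assumption for illustration, not the actual application:

    # Sketch: injection inputs only become interesting once they flow into
    # an interpreter - here, a CSV export a spreadsheet might later evaluate.
    # All names are made up for illustration.
    import csv
    import io

    PROBES = [
        "'; DROP TABLE tasks; --",    # classic SQL injection probe
        "=1+2",                       # formula injection for CSV/spreadsheets
        "<script>alert(1)</script>",  # markup injection, if ever rendered as HTML
    ]

    def export_tasks_csv(tasks):
        """Stand-in for the application's assumed export functionality."""
        out = io.StringIO()
        csv.writer(out).writerows([task] for task in tasks)
        return out.getvalue()

    # The theory of error: a stored probe survives into the export unescaped,
    # where a spreadsheet would happily evaluate "=1+2" as a formula.
    exported = export_tasks_csv(PROBES)
    for probe in PROBES:
        assert probe in exported

Without the export, or some other interpreter downstream, the same probes are just odd strings - which was exactly the puzzle in the session.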

Another example I keep coming back to is automation running a basic test again and again. There has been a habit of running the same basic test on a schedule (frequently), because identifying all the triggers of change in a complex environment is a complex task. But again, there should be a theory of error.

I've been offered a number of rationales for running automation this way:
  • The basic thing is basic functionality, and we just want to see that it always stays in action
  • If the other things around it didn't cause so many false alarms in this test, it would actually be cheap to run, and I wouldn't have to care that it doesn't really provide information most of the time
  • When run enough times, timing-related issues get revealed, and with repetition we get out the 1/10,000 crash dump that enables us to fix crashes
I sort of appreciate the last one, as it has an actual theory of error. The first two sound, most of the time, like sloppy explanations.
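That last rationale can even be acted on mechanically. As a minimal sketch, assuming a test runnable from the command line (the command and paths below are placeholders, not our actual setup):

    # Sketch: repetition as a tool for harvesting rare, timing-related
    # failures. The command and paths are placeholders.
    import os
    import subprocess
    import time

    FAILURES_DIR = "crash-artifacts"

    def run_until_failure(cmd, max_runs=10_000):
        os.makedirs(FAILURES_DIR, exist_ok=True)
        for attempt in range(1, max_runs + 1):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                # Keep whatever the failing run produced; the rare crash
                # is the whole point of repeating the same basic test.
                stamp = time.strftime("%Y%m%d-%H%M%S")
                path = os.path.join(FAILURES_DIR, f"run-{attempt}-{stamp}.log")
                with open(path, "w") as log:
                    log.write(result.stdout + result.stderr)
                return attempt
        return None  # never failed, which is information too

    # e.g. run_until_failure(["./basic_smoke_test"])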

So I keep thinking: how often can we articulate why we are testing the way we are? Do we have an underlying theory of error we can share, and if we articulated it better, would that change the way we test?

Tuesday, February 21, 2017

It's all in perspective - virtual images for test automation use

I seem to fluctuate between two perspectives on the test automation I get to witness. On some days (most), I find myself really frustrated with how much effort can go into such a little amount of testing. On other days, I find the platforms we've built very impressive, even if the focus of what we test could still improve. And in reflection of how others are doing, I lower my standards and expectations for today, allowing myself to feel very happy and proud of what people have accomplished.

The piece that has me in awe today is the operating system provisioning system that is at the heart of how test automation is done here. And I just learned we have open sourced (yet apparently publicized very little) the tooling for this: https://github.com/F-Secure/dvmps

Just a high-level view: imagine spawning 10,000 virtual machines for test automation use on a daily basis, each running some set of tests. It takes just seconds to have a new machine up and running, and I often find myself tempted to use one of the test automation machines, as the wait times for the images reserved for manual testing are calculated in minutes.
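I haven't studied the dvmps interface enough to describe it here, so purely as a hypothetical sketch, consuming such a provisioning service from a test job's point of view could look something like this (the endpoint, fields and flow are all made up):

    # Hypothetical sketch - NOT the dvmps API. Endpoint, fields and flow
    # are invented to illustrate the ephemeral-machine idea.
    import time

    import requests

    BASE = "http://provisioning.example.local"

    def with_fresh_vm(image_name, run_tests):
        vm = requests.post(f"{BASE}/allocate", json={"image": image_name}).json()
        try:
            # A fresh clone comes up in seconds rather than minutes.
            while requests.get(f"{BASE}/machines/{vm['id']}").json()["state"] != "running":
                time.sleep(1)
            run_tests(vm["address"])
        finally:
            # Machines are disposable: release them instead of cleaning up.
            requests.delete(f"{BASE}/machines/{vm['id']}")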

With the thought of perspectives, I'm off to do a little more research on how others do this. If you're working at scales like this, I would love to benchmark experiences.

Friday, February 17, 2017

Testing by Intent

In programming, there's a concept called Programming by Intent. Paraphrasing how I perceive the concept: it is helpful not to hold big things in your head, but to outline an intent that then drives the implementation.

Intent in programming becomes particularly relevant when you try to pair or mob. If one member of the group holds a vision of a set of variables and their relations only in their head, it is next to impossible for another member of the group to catch the ball and continue where the previous person left off.

With experiences in TDD and mob programming, it has become very evident that making intent visible is useful. Working in a mob, when you go to the whiteboard with an example, turn that into English (and refactor the English), then turn it into a test, and then create the code that makes the test pass, the work just flows. Or actually, getting stuck in the flow happens more around the discussions at the whiteboard.
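As a made-up illustration of that flow: say the whiteboard example is "a task one day past its due date shows as late". Refactored into English it becomes "a task is late when today is past the due date", which then turns into a test, and only then into code:

    # Made-up illustration of example -> English -> test -> code.
    from datetime import date

    # English, refactored: "a task is late when today is past the due date"

    def test_task_one_day_past_due_is_late():
        assert is_late(due=date(2017, 2, 16), today=date(2017, 2, 17))

    def test_task_due_today_is_not_late():
        assert not is_late(due=date(2017, 2, 17), today=date(2017, 2, 17))

    # The code, written last, to make the tests pass:
    def is_late(due, today):
        return today > due

The point isn't the triviality of the code; it's that anyone in the mob can pick up the keyboard at any step, because the intent exists outside one person's head.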

In exploratory testing, I find that those of us who have practiced it more intensely tend to inherently have a little better structure for our intent. But as I've been mob testing, I find that we still suck at sharing that intent. We don't have exactly the same mechanisms TDD introduces to programming work, and with exploratory testing we want to opt for the sidetracks that provide serendipity - but in a way that still helps us track where we were, and share that idea of where we are with the team.

The theme of testing by intent was my special focus while watching a group mob on my exploratory testing course this week. I had an amazing group: mostly people with 20+ years in testing, one test automator / developer with solid testing understanding, and one newbie to testing. All super collaborative, nice and helpful.

I experimented with ways to improve intent and found that:
  • for exploring, shorter rotation forces the group to formulate a clearer intent
  • explaining the concept of intent helped the group define their intent better; charters, as we used them, were too loose to keep the group on track with their intent
  • explicitly giving the group (by example) mechanisms for offloading sidetracks to come back to later helped the focus
  • when seeking deep testing of a small area, strict facilitation was needed to stop people from leaving work undone and moving to other areas - the inclination is to be shallow
There's clearly more to do in teaching people how to do this. The stories of what we are testing and why we are testing it this way are still very hard for so many people to voice.

Then again, it took me long, deliberate practice to build up my self-management skills. And yet, there's more work to do.

Tuesday, February 14, 2017

The 10 minute bug fix that takes a day

We have these mystical creatures around that eat up the hours in a day. I first started recognizing them in discussions that went something like this:

"Oh, we need to fix a bug", I said. "Sure, I work on it.", the developer said. A day later the dev comes back proclaiming "It was only a 10 minute fix". "But it took the whole day, did you do something else?", I ask. "No, but the fix was only 10 minutes".

On the side, depending on the power structure of the organization, there's a manager drawing confused conclusions from what he picks up of that discussion. They might want to go for the optimistic "10 minutes to fix bugs, awesome" or the pessimistic "a day to do 10 minutes of work".

The same theme continues. "It took us a 2-week sprint with two people to do this feature", proclaimed after the feature is done. "But it actually took us two 2-week sprints with two full-time and two half-time people, isn't there something off?"

There's this fascinating need for every individual to look good by belittling their contribution, in terms of how much time they used, even if that focus on self takes its toll on how we talk about the whole.


There's a tone of discussion that needs changing. Instead of looking good through numbers of effort, we could look good through the value in the customers' hands. Sounds like a challenge I accept.

Monday, February 13, 2017

Unsafe behaviors

Have you ever shared your concerns about challenges in how your team works, only to learn a few weeks later that the information you shared is being used not for good, but for evil?

This is a question I've been pondering a lot today. My team is somewhat siloed in skillsets and interests, and in the past few weeks I've been extremely happy with the new rise of collaboration we've been seeing. We worked on one feature end to end, moving beyond our usual technology silos and perceived responsibility silos, and what we got done was rather amazing.

It was not without problems. I got to hear more than once that something was "done" without it yet being tested. At first it was done so that nothing worked. Then it was done so that simple and straightforward things worked. Then it was done so that most things worked. And there are still a few more things to do to cover scale, error situations and such. A very normal flow, if only the people proclaiming "done" wouldn't go too far outside the team with assessments that just make us all look bad.

Sometimes I get frustrated with problems of teamwork, but most teams I've worked with have had those. And here we were making good progress through working on a shared, valuable item.

In breaking down silos, trust is important. And that's where problems can emerge.

Share the done / not done and silo problems outside your immediate organization, and you may run into a manager who feels they need to "help" you with very traditional intervention mechanisms. Those traditional intervention mechanisms can quickly bring down all the grassroots improvement achieved and drive you into a panicky spiral.

So this leaves me thinking: if I can't trust that talking about problems we can solve allows us to solve those problems, should I stop talking about problems? I sense a customer / delivery-organization wall building up. Or is there hope in people understanding that not all information requires their action?

There's a great talk by Coraline Ada Ehmke somewhere online about how men and women communicate differently. She mentions how women tend to have "metadata" on the side of the message, and with this I keep wondering if my metadata of "let us fix it, we're OK" was completely dismissed because the listener did not realize there is a second channel of information in the way I talk.

Safety is a prerequisite for learning. And some days I feel less safe than others.

Pairing Exploratory and Unit Testing

One of my big takeaways - with a huge load of confirmation bias, I confess - sums up to one slide shown by Kevlin Henney.

First of all, already from the way the statement on the slide is written, you can see that this information carries an element of hindsight: after you know you have a problem, you can in many cases reproduce that problem with a unit test.

This slide triggered two main threads of thought in me.

At first, I was thinking back to a course I have been running, where we find problems through exploratory testing and then take those problems as insights to turn into unit tests. In the times we've run the course, we have needed to introduce seams to get to testing at the unit scale, even refactoring heavily, but all of it has made me a bigger believer in the idea that we all too often try to test with automation the way we test manually, and as an end result we end up with hard-to-maintain tests.
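To show what "introducing a seam" means here, a made-up example (not from the actual course material): a problem found while exploring - say, a report header misbehaving around month boundaries - is hard to reproduce while the code reads the system clock itself, and trivial once the clock is passed in:

    # Made-up sketch of introducing a seam.
    from datetime import date

    # Before: a hidden dependency on the system clock - no seam, so the
    # exploratory finding can only be reproduced by testing on the right day.
    def report_header_hardwired():
        return f"Report for {date.today():%B %Y}"

    # After: the clock is a parameter, a seam the unit test controls.
    def report_header(today=None):
        today = today or date.today()
        return f"Report for {today:%B %Y}"

    def test_reproduces_the_exploratory_finding():
        # The problem only showed on the last day of a month; with the
        # seam, the test can sit exactly on that day, every time it runs.
        assert report_header(date(2017, 2, 28)) == "Report for February 2017"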

Second, I was thinking back to work and the amount of, and focus on, test automation at the system level. I've already realized that my way of looking at testing through all the layers is unique for a tester here (or quality engineer, as we like to call them), and that striving to find the smallest possible scale to test at isn't yet a shared value.

From these two thoughts, I formulated how I would like to think about automation. I would like to see us do extensive unit testing to make sure we build things as the developer intended. Instead of a heavy focus on system-level test automation, I would like to see us learn to improve how the unit tests work and how they cover our concerns. And exploratory testing to drive insights into what we are missing.

As an exploratory tester, I provide "early hindsight" into production problems. I'd rather call it insight, though. And it's time for me to explore what our unit tests are made of.

Monday, February 6, 2017

The lessons that take time to sink in

Have you ever had this feeling that you know how something should be done, and it pains you to see someone else doing it in a different way that is just very likely to be wrong? I know I've been through this a lot of times, and with practice, I'm getting only slightly better at it.

So we have this test automation thing here. I'm very much convinced about testing each component, or chains of a couple of components, over whole end-to-end chains, because granularity is just an awesome thing to have when (not if) things fail. But a lot of the time, I feel I'm talking to a younger version of myself, who is just as stubborn as I was about doing things the way they believe in.

Telling the 10-years-younger me that it would make more sense to test at a smaller scale whenever possible would have been mission impossible. There are two things I've learned since:
  • architecture (talk to a developer) matters - things that are tested end to end are built from components, and going more fine-grained doesn't mean moving away from thinking about end-user value
  • test automation isn't about automating the testing we do manually; it's about decomposing the testing so we do it differently, in a way where automation makes sense (see the sketch after this list)
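A made-up sketch of that decomposition: instead of one end-to-end script driving the whole chain to check that a discount shows up correctly, the rule gets its own small tests, and the end-to-end level is left to verify the wiring:

    # Made-up sketch: decomposing an end-to-end check into component tests.
    def discounted_price(price, loyalty_years):
        """The rule an end-to-end test would only exercise indirectly."""
        rate = 0.10 if loyalty_years >= 5 else 0.0
        return round(price * (1 - rate), 2)

    # Fine-grained tests: cheap to run, and precise about what broke
    # when (not if) something fails.
    def test_long_time_customer_gets_ten_percent_off():
        assert discounted_price(100.00, loyalty_years=5) == 90.00

    def test_new_customer_pays_full_price():
        assert discounted_price(100.00, loyalty_years=1) == 100.00

    # The end-to-end chain still deserves a test or two - but for the
    # wiring between components, not for every variation of the rule.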
So on a day when I feel like telling people to fast-forward their learning, I think of how stubborn I can be and of the way I change my mind: experiences. So again, I allow a test that I think is stupid into the test automation, and just put a note on it - let's talk about it again in two weeks, and on a cycle of two weeks, until one of us learns that our preconceived notions were off.

I'd love it if I were wrong. But I'd love it because I actively seek learning.

Friday, February 3, 2017

Making my work invisible

Many years ago, I was working with a small team creating an internal tool. The team had four people in total. We had a customer who was a programmer by trade, so sometimes instead of needs and requirements you'd get code. We had a full-time programmer taking the tool forward. We had a part-time project manager making sure things were progressing. And then there was me, the half-time-allocated tester.

The full-time programmer was such a nice fellow, and I loved working with him. We developed this relationship where we'd meet on a daily basis, just as he was coming back from lunch and I was thinking of going. And as we met, there was always this little ritual. He would tell me what he had been spending his time on, talking about some of the cool technical challenges he was solving. And I would tell him what I would test next because of what he had just said.

I remember one day in particular. He had just introduced things that screamed concurrency, even though he never mentioned it. As I mentioned testing concurrency, he outright told me that he had considered it and it would be in vain. And as usual, with my half-time allocation, I had no time to immediately try to prove him wrong. So we met again the next day, and he told me that he had looked into concurrency and I was right, there had been problems. But there weren't anymore. And then he proudly showed me some of the test automation he had created to make sure problems of that type would get caught.
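I no longer remember what his tests looked like, but as a rough sketch of the kind of automation that catches this class of problem - hammering shared state from several threads and checking that nothing gets lost:

    # Rough sketch of a concurrency-revealing test; not his actual code.
    import threading

    class Counter:
        def __init__(self):
            self.value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:  # drop the lock and this fails... sometimes
                self.value += 1

    def test_parallel_increments_are_not_lost():
        counter = Counter()

        def hammer():
            for _ in range(10_000):
                counter.increment()

        threads = [threading.Thread(target=hammer) for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        assert counter.value == 8 * 10_000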

It was all fine, I was finding problems and he was fixing them, and we worked well together.

Well, all was fine until we reached a milestone we called "system testing phase starts". Soon after that, the project manager turned his attention to me and came to talk about how he was concerned. "Maaret, I've heard you are a great tester, one of the best in Finland", he said. "Why aren't you finding bugs?", he continued. "Usually in this phase, according to the metrics, we should have many bugs in the bug database already, and the numbers I see are too low!", he concluded.

I couldn't help but smile at how nicely my then manager had framed me as someone you can trust to do good work even when you don't understand everything that goes on, and I started telling the project manager how we had been testing continuously at the system level before the official phase, without logging bugs into a tool. And as I was explaining this, the full-time developer jumped into the discussion, exclaiming that the collaboration we were having was one of the best things he had experienced, and telling how problems had been fixed as they had been created, with no trace other than the commits that changed things. And with the developer defending me, I was no longer being accused of "potentially bad testing".

The reason I think back to this story is that this week, I've again had a very nice, close collaboration with my developers. This time I'm a full-time tester, so I'm just as immersed in the feature as the others, but there are a lot of similarities. The feedback I give helps the developers shine. They get the credit for working software, and they know they got there with my help. And again, there's no trace of my work - not a single written bug report, since I actively avoid creating any.

These days one thing is different. I've told the story of my past experiences to highlight how I work, and I have trust I did not even know I was missing back then.

The more invisible my work is, the more welcoming developers are to the feedback. And to be invisible, I need to be timely and relevant, so that talking to me is a help, not a hindrance.