Sunday, January 14, 2018

Why positive discrimination is equality over time

I remember a spring day 20 years ago. I was a university student who had just taken a course on public speaking with a teacher who turned out to be the most transformative of my life. What I remember her for is not that she forced me to watch myself speak on video and made me realize my inner world was much messier than my presentation. I remember her for one of my first discussions on feminism.

I was a hopelessly shy student who believed she had few opinions of her own. And even if I did, I was very uncomfortable sharing them. During that course, I read the news every day to force myself to be even remotely able to take part in group discussions on day-to-day topics. That just wasn't me.

So when, at the end of the course, my teacher told me in private that she thought I was a feminist, I responded like so many women do: I wasn't. I did not need to be. There was nothing wrong with equality. If anything, I had only ever been positively discriminated.

I did not think about that discussion for a very long time, but obviously the years since have changed my perspective and raised my awareness of the need for feminism. There are tons of wonderful writings on the problems and the solutions, and my concern is still not that I get regularly mistreated, but that I've needed in many ways to be exceptional when normal should be enough.

A few days ago, I retweeted this:

Let's look at what it claims:

  • Women generally apply for jobs only when they meet all the requirements
  • Some women apply without meeting all the requirements, and that requires extra effort from them because it goes against what they'd naturally do.
Just sharing this tweet meant someone puzzled asked to be educated (extra work for women when all the resources are already available). To be honest, it did not sound like asking for education; it was more like explaining to me why I had shared a tweet that was just wrong, using "one woman got selected with us even though she did not fill all the requirements" as evidence that this is not a general trend. Still, see point 2 above: she might have needed to exert extra effort to apply. Regardless, one data point isn't enough.

In our discussion we soon got to a point I commonly see coming from women: there should not be positive discrimination - "I don't want to be selected for my gender".

The thing is, acts of discrimination are a long-term phenomenon, and we need to look at discrimination over the long term, not as an individual event happening in an individual job interview.
  • When I was 10, my family purchased our very first computer, and we had a succession of them ever since. They were always located in my (younger) brother's room, and I asked for permission to use it as the rules at home gave me space. His access was less limited.
  • He started programming seriously at the age of 12 (I was 14). His friends were all into it. I coded games by typing them in from magazines already at 12, but I never had a single friend who'd do that with me.
  • By the time we both went to university to study computer science, he had 7 years of hobbyist programming behind him because "computers were boys' toys". I had a budding interest, time spent on BBSs, and the rudimentary programming I had done teaching myself Turbo Pascal - schools gave you space to learn, but they did not teach any of this back then. Most girls were not quite so advanced.
  • Most of my fellow university students had backgrounds akin to my brother's. I was years behind. In addition, whenever I did group work, I got told I probably did not contribute anything - by other students, but also by some teachers. I needed to continuously keep proof of my contributions, or work alone when others got to work in groups.
  • Any course with classroom exercises was my nightmare. We were 2% women, and many teachers believed both genders needed to speak every time. I learned to skip classes to suffer less. Again, more work just to survive.
  • If I was ready to ask for help, I had lots of classmates helping me - usually at the price of them figuring out whether I was single or not.
Back then, this was how things were. I wasn't brave enough to call out any of this. I thought it was normal. That was the world I had always been in. I had no feminist friends to make me aware this was exceptional. I went through it all with plain stubbornness. 

I know I'm not alone with my experience.

So when I get invited to a conference outside the call-for-proposals process, I recognize that as positive discrimination. Similarly, if two candidates in a job interview seem equal and the woman gets selected, that could also be positive discrimination. But we really don't hire just for the skills of today, we hire for the potential of tomorrow. So it is less straightforward.

With all the debt from the negative discrimination I've had to go through, I'm nowhere near equality yet.

So I believe in equity. We need to help those who need more help more than those who started off in a more privileged position. Positive discrimination is equality over time - equity today.

My story is one of a privileged white woman. The stuff other underprivileged groups go through means we need to compensate for them much longer. 

PS: I spent 30 minutes writing this post and I've had thousands of discussions like this in my lifetime since I realized I'm a feminist. Imagine what those not needing to have these discussions get accomplished with that time. 



Friday, January 12, 2018

All I got for a week of programming was one lousy test script


From the title, you might think this post is about venting on how slow it is to learn automation. If that's what you are looking for, this is not that post. Instead, this is a post about insights into what happens while we program test automation.

There was a fairly simple end to end scenario that needed testing. The tool of choice was Python, and examples of doing something fairly similar were plentiful. 

To maintain the focus, the scenario was first drafted just as code comments. The steps the script should go through. The verifications that needed to happen along the way. The way we would determine what to make note of while the test was running, and what would need to stop the test from proceeding because continuing would just make no sense.
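As a rough illustration, such a comment-first draft in Python might look something like the sketch below. The steps, verifications and names are hypothetical, not from the actual project:

    # Hypothetical comment-first skeleton for an end-to-end scenario.
    # Step names are illustrative; nothing here comes from the real project.
    def test_end_to_end_scenario():
        # Step 1: log in with a known test account
        #   Verify: the landing page shows the account name
        #   Blocker: if login fails, stop - nothing after this makes sense
        # Step 2: create a new item with minimal valid data
        #   Verify: the item appears in the listing
        #   Note only: record how long the creation takes
        # Step 3: clean up by deleting the item
        #   Verify: the item is gone from the listing
        pass  # each comment gets replaced with code, one step at a time

Only after the comments read like a sensible scenario does the actual automation code start replacing them.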

It could all be very simple, except it almost never is.

First of all, to figure out the scenario, some details of what to check require the external imagination: the product we are testing. Seeing the details of what could be verified needs hands on the computer. We could call that automating, but what we actually do is mostly manual. Sometimes we can run the start of the script to get to the point of pondering. But the pondering is still a manual process. We look at what is available that we could programmatically access. We think about what is good enough to determine if things work, and how the actual application would allow us to see things with code.

As we go through this manual process, we learn that something we wanted to do does not, for some reason, work. We find bugs. Some of the bugs we notice when we just run through the scenario manually. Others we notice because automation is picky. Where a person can just work around some deficiencies, automation may get us momentarily stuck. Something else needs changing before the script can proceed. And we end up with todo markings in our automation code, even fixing the problems in the application ourselves just to be able to make progress.

Towards the end of the week, multiple little learnings later and with blocking bugs fixed, we finally get the script to a point where it runs in its intended scope. That allows us to think outside this little agreed box that took the whole week - there's more. But also, just going through this one scenario already makes the work of adding another easier. There will again be bugs, but they will be different. And the scenario we already automated gets run from its introduction onwards, alerting us to possible regressions.

I write this post because I read that "Testing as an exploratory, investigative activity, cannot be replaced by automated checks". It bothers me how often we testers say this. The automated checks are done by people too. The human part of a check precedes creating the automation that successfully executes things. It grows as we add more checks. Many times when automating, we need to look in more detail.

The risk to good testing isn't in including automation in the way we work. It is in not looking wide when automation gives you the sense of already covering whatever scenarios are relevant. The risk is the automators who say "this is fully tested" when there really is one happy-day scenario with one set of very limited data and selections.

Automation has so much power as a form of executable documentation.

Thursday, January 11, 2018

It ain't bragging if it is true

I'm ready to blog about this:

Let me start off by quoting the colleague in question, with her permission:
I had a session with Maaret, where we went through things I do in my job as a software tester. It amazed me how difficult it was to brag about myself and even more, how difficult it was for me to see all the things I do.

We used a whiteboard and Maaret wrote down all the things she has seen me doing. I was speechless. I just nodded, yes yes, that's what I do - I just didn't understand it was worth mentioning. It was just "business as usual".

It is really hard to try to prove to your boss how useful you are, when all the things you do happen in the background without any "hard evidence".

I'm so grateful Maaret took time and went all this through with me, it was an eye-opening session for me, too, and now I have hard evidence to present to my boss :)
You've been here, right? Feeling that you do a good job, feeling that talking about what you consider good is bragging and that bragging is just awful. It's so awful you can't even find the words to speak the truth to power when it matters to you personally the most: when someone is deciding on your future.

So how do you learn to brag about your contributions?

Ask someone else to brag for you

It is often easier to notice the good in others than in yourself. Listen when people say nice things about you, and instead of the "oh, that was nothing" that comes out all too often, respond with "thank you" and a deep mental note of what it was about. You belittle it. Don't. Small things are much bigger than you sometimes give them credit for.

You can also start specifically asking for feedback. And asking for help, like my colleague did, is also ok. Saying out loud things that you hide deep because you don't give them credit can be hard without giving the other person a chance to observe you. I worked in the same room with her for a month to be able to recognize some of the unique ways of working that fit her personality. We do the same work differently, even to the same results.

Practice

Start your bragging small, for a safe audience. When you start learning to brag about your work, effort and results to your boss, you could frame it as "I'm practicing making my contribution visible". Invite feedback. If your boss isn't a person you feel safe with, find someone who is.

Within the Women in Testing Slack community, we have (from Gitte Klitgaard's initiative, isn't she awesome!) a #BragAndAppreciate channel that gives excellent opportunities for trying out ways of saying things in a positive tone. Small and big brags are equally welcome.

I've had chances to examine my feelings about bragging with various coaches guiding (forcing) me through bragging exercises. Realizing that almost everyone sucks at bragging and that we are culturally and structurally conditioned not to brag helps in giving yourself permission to try it out.

Start small, grow. Encourage others to share the positive. Actively share the positive about others, and look at which parts also apply to you.

Focus on the positive

Play to your strengths; everyone knows you have weaknesses anyway. It is not dishonest to just focus on the positive and build a case for why you are doing good. If you're asked why you don't speak out in design meetings more, focus on what you do instead: listen fully without preparing to answer, digest, let things sink in. Focus on describing what you do with the information after it has sunk in. Make the 1:1 discussions that no one else pays attention to visible.

You will feel guilty about things, and you'll feel you want to say some of them out loud. Learn not to. Say them at a different time. Don't belittle yourself. Others in this world do too much of it already.

Tell stories

Really good bragging channels being proud and boastful when you talk about the things you do and achieve (learning is an achievement, failing is an option). Include a story of what really happened; examples keep things real. For example, my colleague is a thorough, patient, detail-oriented tester. Instead of saying she finds a lot of bugs to discuss and then report, she could tell a story of a time she tested. Choose one that is exemplary or recent. You can't tell all the stories, so choose one that shows you in a good light.

Manage up

"It ain't bragging if it's true" is attributed to actor/humorist Will Rogers. You could say it's lying, not bragging if it isn't true. But the difference here is to look at yourself and your work in the best possible light. Shine the light on the good parts. We'll notice the others if they are relevant without you personally pointing them out as disclaimers every time you speak of yourself. We can talk of those at another time.

Appreciate what you do. All of what you do. You use a lot of time doing it. It is more worth appreciating than you realize. You need to appreciate yourself so that your boss can learn to appreciate you more through your views. Your career is yours, and too important to be left to your manager. It's more of a collaboration, and you drive your own future.

And to take the story my colleague started one step further: she presented the list of what she does to her boss, only slightly apologizing for needing to share all of this stuff. But the best part to me was what she said right after: "My throat is hurting, I was talking so much in this meeting". Mild bragging accomplished and adored.



Sunday, January 7, 2018

My #1 thing to Add With Testing

Over the years, I've had the pleasure of working with many kinds of developers. There have been those who struggle and barely get the code written, and testing for them is often somewhat painful: fixing makes things more broken, and everything I touch feels broken. The majority, however, succeed fairly well both in creating something and in changing it based on feedback. And then there's the small, lovely group of test-driven developers, who are almost like a different species in the level of trust (or the mechanisms of creating and maintaining trust) one can place on their changes.

There is, however, one type of testing I've been thinking about that tends to find relevant problems with all sorts of developers: testing focused on the environment around the software we are creating.

I remember a big revelation years ago about what system testing can mean. I was testing a security scanning software on a mobile platform, and the majority of what I needed to test was whether other applications, and the services those applications use, still worked with this software installed. It was by no means obvious. The system was much more than the mechanics of the software we created; it was everything our software touched. The software was special in comparison to many others, hooking deep into the operating system in ways that, combined with possible differences in firmware, could result in interesting behaviors.

As I was testing ApprovalTests for the first time, the very first thing I went through was environment setup. I had my C# environment with two different test runners (there are more options, though), and I started setting up the thing I was about to test, failing miserably. I had just hit a bug that soon got fixed (and forgotten): the installation path through NuGet would fail when more than one runner was installed. Again, the software failed in the environment it was put into.

Similar problems came up with the latest feature I was testing. It was fine "on my machine". But if "my machine" got more complicated, with competing ways of using the same services available, it would fail in interesting ways.

So, when testing, remember you're not testing just the software as the requirements seem to state. That software is supposed to live in an environment with other software. It has a lifecycle. It relies on shared services.

Sometimes, the environment with other software is not for your company to control. Who gets assigned blame on a problem of incompatibility? Usually the one who comes in last. You might at least want to think through what other software your software is supposed to live with, and test for those.



Monday, January 1, 2018

Getting Through 2017

The year is about to change, and I want to continue my tradition of taking a moment of reflection before it's done. Living in the past (= being in the USA) is handy, as it's already past midnight where I'm from and I still have a full working day for this. In case you're interested in a comparison, this is what my 2016 looked like.

My 2017 was rough on me. You might not have noticed, but I took significant pauses just to rest. The more I was around people at conferences, the lonelier I felt. So instead of looking at how much I did, I want to just think through the progress I made while struggling.

My work

I had my first full year back at F-Secure. I feel I'm home, but I also feel torn when I'm away. And I was away a lot: 21 of the 30 sessions I delivered in 2017 were abroad.

In 2017, I received some of the nicest, unintentional compliments from my colleagues at F-Secure. Some recognized how things were different when I was around. I worked to restore developers to the core of product decisions (the No Product Owner experiment is still ongoing), to move release decision power from testers (QEs as we call them) to the whole team and developers in particular, and to enable fast fixing through frequent releases and learning about the need for fixes through production monitoring.

We got through a relevant major marketing release. I ended up holding space for steering group meetings for us to support each other in a multi-team setting, and being invited as the R&D representative to a 360 business steering group for a wider area around the products I'm testing. I got to reflect on my future career, deciding still not to become a manager again, yet to keep a relevant say in overall business decisions by channeling the voices of the developers.

I also said it out loud: I'm done with my 3-year goal of being a keynote speaker. My next 3-year goal is to be a developer / architect, embracing the reinvigorated love of programming I've had since I was 15.

I love my work. I want to spend more time at work. So 2018 will see less of me abroad.

Speaking

I did 30 sessions in 2017, and went through my stats of speaking sessions over the years since I started. I've now done 352 sessions (talks, workshops, courses) and all of these on the side of a full-time job.



I've always held on to my work as "just a tester" - knowing that no one is ever just anything. I've reflected a lot more on why I keep speaking, and learned there are two things:
  • Meeting people to learn with - and the more "fame" I get, the less of this I get.
  • Fueling my drive to improve at work to have stuff to speak on - the further I get on my improvements at work, the more "far-fetched" people feel the things I do are, even though they're not. 
My talks got more personal. I enjoyed delivering "Making teams awesome" and "Learning through Osmosis" a lot. And I finally found a way to speak to developers about testing with "Breaking Illusions - Perspectives to Testing", a very basic talk on what testing is and how everyone could improve on it. I also got a public reference to my abilities as a tester from a developer, which I heard while listening to a podcast.

Other

While working and speaking, I also managed to do some other things.

European Testing Conference 2017 got organized. The European Testing Conference 2018 talk selection process taught me a lot, with the chance to pair with Franziska Sauerwein on our submitter Skype calls and to meet all the awesome people. I've taken a lot of joy in introducing some of the awesome people who did not fit into the program to other people I know, and in seeing new connections form to create more great content.

The Women in Testing Slack group has been my support throughout the year. Having direct access to 150 women with similar interests and challenges has been invaluable. I love our #BragAndAppreciate channel, and the permission to say what we're doing without it being taken negatively.

My books progressed very little, but they did progress. Mob Programming Guidebook now has 741 readers with 201 paid (2016: 454/133), and the Exploratory Testing book has 145 readers with 17 paid. What also progressed are plans to make more room for writing in 2018. I still wrote 103 blog posts and kept to my idea of writing whenever inspired - as I write primarily for myself. I also wrote three articles, two of them with Ministry of Testing and one with StickyMinds. The latest one was just finished, so publishing will be on the 2018 side.

Someone seems to read my blog, as I'm now at 490 628 hits (2016: 361 622) on my posts. And people follow me on Twitter, 3 889 as of today (2016: 2 964).

While posting on Twitter about #PayToSpeak conferences, I got called out for "misusing my influential stance". That was somewhat of an accomplishment, as I don't see myself as having an influential stance, and if I do, it comes from doing the work for things I believe in: testing, fair conferences, equity over equality, and awesome software that makes a difference.

People

There are so many people I could mention who have contributed to the positives of 2017, so I'll just mention a few.

  • Llewellyn Falco has been important in more ways than I can count. His drive to pair program with random people over Skype (and me getting to watch those sessions emerge) has taught me a lot about how kind programmers interact.
  • Franziska Sauerwein joined organizers for European Testing Conference and has grown into a pair I do a shared talk with, and a friend I appreciate tremendously. 
  • Selena Delesie was the friend who picked me up when I fell, reminding me that there can be a real connection with people you don't get to see or keep in touch with all the time.
  • Jose Diaz showed how deeply he thinks about improving the world of conferences, and helped me love his Agile Testing Days even more than I did in the past.
The year saw the emergence of strong keynoting tester women: Helena Jeret-Mae, Ash Coleman, Alexandra Schladebeck, Gwen Diagram, Nicola Sedgewick, Maria Kedemo, Ashley Hunsberger, Katrina Clokie and Angie Jones are just a few names I now remember to honor. You'd be lucky to have any of them speaking at conferences you participate in. And there are more - just start with the list of 125 awesome testers.

My biggest lesson on 2017

There's a thing I take away from my experiences in 2017 that is on the #PayToSpeak theme. I see from my stats why this is so important to me: I started speaking abroad only at the point where I had help financing my travels; before that I was "stuck" in Finland for 14 years.

The lesson of 2017 is added respect for the other aspects of organizing a conference - the aspects that speakers who think they are the product conferences sell don't see. Speakers don't sell tickets. Marketing sells tickets. And I admire people who are good at marketing, realizing that with my time limitations, I am not (yet).


Signing out 2017 - always a tester, never just a tester. 



Saturday, December 30, 2017

Finding a Bug I wasn't looking for

Some years ago, I was working as a test manager on a project where I was considered "too valuable to test". So I tested in secret, to be able to guide the people that were not as "valuable" as I was, to make them truly valuable.

We were an acceptance testing group on the customer side, and the thing we were building would take years and years before all the end-to-end layers existed to use it like an end user would. A lot of the testing got postponed as there was no GUI - leaving us thin on resourcing early on. There were multiple contractors building the different layers, and a lot of self-inflicted complexity.

The core of the system was a calculation engine, a set of web services sitting somewhere. With the little effort available and the weird constraints on my time, I still managed to set things up to test something while it had no UI.

We used SoapUI to send and receive messages. The free version back then did not have a nice "fill in the blanks" approach like the pro one, and it scared the hell out of some of my then colleagues. So we practiced in layers, putting values in Excel sheets and then filling the values back into the messages. As my group learned to recognize that amongst all the technical cruft were values they cared deeply about and concepts they could understand better than any of the developers, we moved to working with the raw messages.

In particular, I remember one of my early days of trying to figure out a system specified in thousands of pages by actually using it. I could figure out that there were three compulsory fields that needed filling; the other stuff was all sorts of overrides. So I took a listing of some thousands of combinations of those three things, parametrized the request to send thousands of messages, and saved the responses on my hard drive.

I did not really know what I would be looking for, but I was curious about the output. I opened one response in Notepad++ and skimmed through thousands of lines. There was no way I would know if this was right or wrong. I got caught up noticing error codes, and made little post-it notes categorizing what I was seeing. I repeated this with another message, and felt desperate. So on a whim, I opened all the messages I had and started searching for the codes from my notes across all of them.

The first code I searched for was something I conceptually understood should not be that common. Yet 90% of the messages I had included that code. I checked with a business expert, and indeed my layman understanding was correct: the system was broken in a significant way if this code was this common. It meant lots of manual work for a system that was intended to automate decisions in the millions.
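Today I would probably script that scan rather than eyeball it in Notepad++. Here is a minimal sketch of the idea in Python, assuming the responses were saved as XML files in a single folder and the codes sit in an element such as <errorCode> - both the folder and the element name are assumptions for illustration, not details from the actual system:

    # Hypothetical sketch: count how often each error code appears across
    # saved responses. The folder and element names are assumptions.
    from collections import Counter
    from pathlib import Path
    import re

    responses = list(Path("saved_responses").glob("*.xml"))
    if not responses:
        raise SystemExit("no saved responses found")

    code_counts = Counter()
    for response_file in responses:
        text = response_file.read_text(encoding="utf-8", errors="ignore")
        # Count each distinct code once per response message.
        codes = set(re.findall(r"<errorCode>(.*?)</errorCode>", text))
        code_counts.update(codes)

    for code, count in code_counts.most_common():
        share = 100 * count / len(responses)
        print(f"{code}: {count} of {len(responses)} responses ({share:.0f} %)")

A listing like that makes the "this code appears in 90 % of responses" kind of observation jump out immediately.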

By playing around to understand, when told not to, I found a bug I wasn't looking for - but one that was a no-go in a system like that.

My greatest regret is that I spent time in the management layers, fighting on their terms. With the skills I have as a tester, I would have won the fight for my organization if I had just tested. Even when told not to. I was too valuable not to test.

This experience made sure I would from then on find places to work that did not consider the most expensive tester someone who isn't allowed to test. And I've been in the right kind of organizations, making a difference, ever since.

Sunday, December 17, 2017

Kaizen on Test Strategies

I just saw a colleague change jobs and start talking about test strategies. As I followed their writing, my own experiences started to surface. I realized I am no longer working on visible test strategies - the last one I created was when starting my second-to-last job, and it did not prove that valuable.

When I say test strategy, I mean the ideas that guide our testing. Making those visible. Assessing risk and choosing our test approaches appropriately.

In the past, making a strategy was a distinguishable effort. It usually resulted in either a document or a set of slides. It guided not only my work, but supposedly the whole project. It was the guideline that helped everyone make choices towards the same goals.

Thinking about the strategy and specifics of a particular project was a distinguishable effort while I was still doing projects. With agile and continuous delivery, there is no project, just a flow of value in a frame of improving excellence. When I joined new organizations that had no projects, my introduction as someone coming to "improve / lead the testing efforts" triggered the strategy considerations. So what is different about my most recent effort, other than the lazy explanation that I have not been diligent enough?

I approach my current efforts with the idea that they have been successful before me, and they will remain successful with me. I no longer need to start with the assumption that everything is wrong and needs to be set right. Even if it was wrong, I assume people can't change fast without pain, so I approach it with a Kaizen attitude - small continuous improvement over time, nudging a little here and there and looking at where we are and where I would like us to find our way.

Nowadays, a selection of visions of what good testing looks like resides in my head. I talk about that, with titles like "modern testing", "modern agile" and "examples of what awesome looks like". I don't talk about it to align others to it; I talk to give people visibility into my inner world, and to learn what they are ready to accept and what not.

All the work on test strategy looks very tactical. Asking people to focus here or there. Having a mob testing session to reveal the types of information we currently miss. Showing skills and tools. Driving forward respect for the exploratory tester, but also the patient building of a test automation system that does better, as per my current understanding of what better is.

Looking back, I remember (and can show you) many of the test strategy documents I've created. None of them has been as effective as the way I have led testing, with Kaizen in mind, for the last five years.