Monday, April 16, 2012

Why would we improve the way we work?

I had a short but inspiring discussion at work today with some of my team's developers. Since it's apparent that someone, for some reason, wants things to improve, they were asking for clear goals and a better understanding of where we're heading, and why. Apparent because they just recruited me, but still not so clear, since while they may now be investing a little in a "tester", they still haven't invested visibly in fixing.

I gave a short answer as I see it: there's a tradeoff in the way we develop. When we deliver "more short-term business value", we take on "quality/technical debt". We're approaching a point where the debt and the interest on the debt are threatening our ability to deliver features, and since we are unwilling to scale up the team, we look for other solutions. Our challenge is not about wanting to do things in ways that make sense; it's about the effort and timeframe we need to invest to get where we want to be.

Another developer, who wasn't part of the discussion when it started, suggested from his cubicle that this could actually be what we clarify and discuss with the product management team. Haven't tested that yet - will soon. It's a very simple and basic approach, but today reminded me how many times it has been effective. And most likely, talking about this will teach me more layers of what is actually relevant for our product, making me a better tester for it.
 

Friday, April 13, 2012

If it looks too easy to be missed by developers...

In my first weeks at the new job, I've had the pleasure of reporting bugs again. I find this particular result of testing gives me a feeling of achievement. The more relevant the problem, the better.

There was one bug I reported on Monday that just looked too easy to be missed by the developers in my team. As I originally reported it, the problem was that when logging in with one of our three main browsers, there's a highly visible error message, and that this seems to happen only with the recent builds, not in the production version.

At the end of the week, I quickly asked, in passing, the developer whose component was failing whether a fix would be available in the next weekly build. He seemed puzzled: what problem, what fix? I checked our Jira, and the issue had not been addressed - which is quite normal. He took a quick look at it and came back with "I haven't changed this for ages", along with some details.

I started testing the issue further with the information from him. With fresh eyes, I realized I had entered the program from a bookmarked link - something I hadn't mentioned in my original report. I also realized that I had different addresses bookmarked in the other browsers. So I had missed a relevant bit of info, which I provided now.

Bottom line: if it looks too easy to be missed by developers, it may be that they didn't test, but in this case I had missed relevant factors needed to make the bug visible. Talking sooner rather than later to the very busy developers is still a good idea.

Wednesday, April 11, 2012

New product, new team, new practices

For a bit over a week now, I've been wondering where I ended up in my quest for hands-on testing work. After some hard choices along the way, I'm now working for Granlund, a civil engineering company, on a product that handles building-related data. The domain is something I had little idea about before, and I'm looking forward to learning a lot about it, in addition to tuning and changing whatever I can with my testing skills. We have a small team of fewer than 10 people, and I'm the first and only tester. Most of my colleagues in development seem to work remotely, but within a week I've had the chance to learn they're just as much fun to work with as I expected.

I'm starting off with a redesigned version of a product that has been around for quite a while. The redesigned version is also out in production, with new versions going out once a month. With customers actually paying for the product, they must be doing something right, even if they have never had testers around.

After reading up on what the product is about with a shallow scan of its documentation, I've worked on:
  • Setting up session-based test management with Rapid Reporter and a CSV-note scanning tool to produce the metrics I will create - as I won't be creating test case counts (a note-scanning sketch follows after this list)
  • Learning the product's capabilities (and quality) by doing exploratory testing on its main feature areas
  • Reviewing the existing test suites and redesigning the test documentation
  • Redesigning a consultant-suggested testing methodology that I just can't believe would provide added value (unless faking testing counts as value to someone I have not yet met there)
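To give an idea of what I mean by CSV-note scanning, here is a minimal sketch of my own - not a tool we already have in place. It assumes the Rapid Reporter session notes sit as CSV files in one folder and that one column carries the note type (Bug, Issue, Question, Note, and so on); the exact column layout may need adjusting to the real files.

```python
import csv
from collections import Counter
from pathlib import Path

def scan_sessions(folder: str) -> Counter:
    """Count note types across all session CSV files in a folder."""
    counts = Counter()
    for csv_file in Path(folder).glob("*.csv"):
        with open(csv_file, newline="", encoding="utf-8") as f:
            for row in csv.reader(f):
                if len(row) >= 2:
                    note_type = row[1].strip()  # assumed: note type in the second column
                    counts[note_type] += 1
    return counts

if __name__ == "__main__":
    # Print a simple session-based metric: how many notes of each type were taken.
    for note_type, count in scan_sessions("sessions").most_common():
        print(f"{note_type}: {count}")
```

Counting note types per session is the kind of metric I'd rather show than test case counts, since it says something about where the attention went.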
There are two strong first impressions:
  1. I've got a likely case of "asking for testing, when you should ask for fixing" ahead of me
    I find it somewhat funny that so many people equate testing (a quality-related information service) with fixing (making the quality right) and don't think about the dynamics that the added information will bring. Then again, understanding the business impact and meaning of the existing technically oriented issues is a service I think I can help with.
  2. As there aren't enough sound testing examples around, it's easy to take the basic book definition of a test case and try replicating it without thinking enough
    I've enjoyed reading the attempts to design tests in per-feature-area test suites of varying sizes, all with step-by-step test cases repeating most of the steps again and again. I took one of these documents, 39 pages with 46 documented test cases, and read it through in detail to make a mindmap of the features mentioned (I do need a feature list to support my testing). While reading, and using the product to learn it in practice (a couple of 1.5-hour sessions), I came up with a one-page mindmap listing 88 things to test and four dimensions that cause repetition across a significant amount of the testing that should happen, such as different browsers, user rights, and so on (a sketch of how such dimensions multiply follows below). Out of the 39 pages, only 3 things came up that I could not directly deduce from the user interface with little information on the actual product. While doing this, I noted down some (quite many) issues I would write bug reports on - if it weren't an area we're about to rework in a significant manner right about now.
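As a purely hypothetical illustration of the repetition those dimensions cause - the feature names and dimension values below are made up, not the product's real ones - a short feature list combined with the dimensions covers the variations without copy-pasting the same steps into dozens of documents:

```python
from itertools import product

features = ["login", "search", "report export"]   # stands in for the 88 mindmap items
browsers = ["IE", "Firefox", "Chrome"]             # assumed browser dimension
user_rights = ["admin", "editor", "read-only"]     # assumed user-rights dimension

# One short charter per combination, instead of one step-by-step document per copy.
charters = [
    f"Explore {feature} with {browser} as {rights} user"
    for feature, browser, rights in product(features, browsers, user_rights)
]

print(len(charters))   # 3 features x 3 browsers x 3 rights = 27 variations
print(charters[0])
```

The point is not the script itself but the shape of the thinking: the feature list stays on one page, and the dimensions are applied on top of it as needed.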
Looking forward to all this - and the chances it provides for writing stuff and giving examples of what is doable for "just a tester".