Friday, February 17, 2017

Testing by Intent

In programming, there's a concept called Programming by Intent. Paraphrasing how I perceive it: it's helpful not to hold big things in your head, but to outline intent that then drives the implementation.

Intent in programming becomes particularly relevant when you try to pair or mob. If one of the group holds a vision of a set of variables and their relations just in their head, it makes it next to impossible for another member of the group to catch the ball and continue where the previous person left off.

With experiences in TDD and mob programming, it has become very evident that making intent visible is useful. Working in a mob, when you go to the whiteboard with an example, turn that into English (and refactor the English), then turn it into a test, and then create the code that makes the test pass, the work just flows. Or rather, getting stuck happens more around the discussions at the whiteboard.

In exploratory testing, I find that those of us who have practiced it more intensely tend to inherently have a little better structure for our intent. But as I've been mob testing, I find that we still suck at sharing that intent. We don't have exactly the same mechanisms that TDD introduces to programming work, and with exploratory testing, we want to follow the sidetracks that provide serendipity. But we want to do that in a way that helps us track where we were, and share that idea of where we are within the team.

The theme of testing by intent was my special focus in watching a group mobbing on my exploratory testing course this week. I had an amazing group: mostly people with 20+ years in testing. One test automator, a developer with solid testing understanding. And one newbie to testing. All super collaborative, nice and helpful.

I experimented with ways to improve intent and found that:
  • for exploring, a shorter rotation forces the group to formulate clearer intent
  • explaining the concept of intent helped the group define their intent better; the charters, as we used them, were too loose to keep the group on track with their intent
  • explicitly giving the group (by example) mechanisms for offloading sidetracks to come back to later helped the focus
  • when seeking deep testing of a small area, strict facilitation was needed to stop people from leaving work undone and wandering off to other areas; the inclination is to be shallow 
There's clearly more to do in teaching people how to do this. The stories of what we are testing and why we are testing it this way are still very hard for so many people to voice.

Then again, it took me long, deliberate practice to build up my self-management skills. And yet, there's more work to do. 
 

Tuesday, February 14, 2017

The 10 minute bug fix that takes a day

We have these mystical creatures around that eat up hours in a day. I first started recognizing them with discussions that went something like this:

"Oh, we need to fix a bug," I said. "Sure, I'll work on it," the developer said. A day later the dev comes back proclaiming, "It was only a 10 minute fix". "But it took the whole day, did you do something else?" I ask. "No, but the fix was only 10 minutes".

On the side, depending on the power structure of the organization, there's a manager drawing confused conclusions from what they pick up in that discussion. They might go for the optimistic "10 minutes to fix bugs, awesome" or the pessimistic "a day to do 10 minutes of work".

The same theme continues. "It took us a 2-week sprint for two people to do this feature," proclaimed after the feature is done. But it actually took two 2-week sprints for two full-time and two half-time people to do this feature; isn't there something off?

There's this fascinating need for every individual to look good by belittling their contribution, measured in how much time they used, even if that focus on self takes its toll on how we talk about the whole thing.


There's a tone of discussion that needs changing. Instead of looking good through numbers of effort, we could look good through the value in customers' hands. Sounds like a challenge I accept.

Monday, February 13, 2017

Unsafe behaviors

Have you ever shared your concerns on challenges in how your team works, only to learn a few weeks later the information you shared is used not for good, but for evil?

This is a question I've been pondering a lot today. My team is somewhat siloed in skillsets and interests, and in the past few weeks, I've been extremely happy with the new rise of collaboration that we've been seeing. We worked on one feature end-to-end, moving beyond our usual technology silos and perceived responsibility silos, and what we got done was rather amazing.

It was not without problems. I got to hear more than once that something was "done" without it yet being tested. At first it was "done" so that nothing worked. Then it was "done" so that simple and straightforward things worked. Then it was "done" so that most things worked. And there are still a few more things to do to cover scale, error situations and such. A very normal flow, if only the people proclaiming "done" didn't take their assessments too far outside the team, which just makes us all look bad.

Sometimes I get frustrated with problems of teamwork, but most teams I've worked with have had those. And we were making good progress through a shared value item in this.

In breaking down silos, trust is important. And that's where problems can emerge.

Sharing the done / not done and silo problems outside one's immediate organization, you may run into a manager who feels they need to "help" you with very traditional intervention mechanisms. Those traditional intervention mechanisms can quickly tear down all the grassroots improvement achieved and drive you into a panicky spiral.

So this leaves me thinking: if I can't trust that talking about problems we can solve will let us solve those problems, should I stop talking about problems? I sense a customer / delivery-organization wall building up. Or is there hope of people understanding that not all information requires their action?

There's a great talk by Coraline Ada Ehmke somewhere online about how men and women communicate differently. She mentions how women tend to send "metadata" alongside the message, and with this, I keep wondering if my metadata of "let us fix it, we're ok" was completely dismissed because no one realized there is a second channel of information in the way I talk.

Safety is a prerequisite for learning. And some days I feel less safe than others.

Pairing Exploratory and Unit Testing

One of my big takeaways - with a huge load of confirmation bias I confess to - sums up to one slide shown by Kevlin Henney.

First of all, already from the way the statement is written, you can see that this information has an element of hindsight: after you know you have a problem, you can in many cases reproduce that problem with a unit test.

This slide triggered two main threads of thought in me.

At first, I was thinking back to a course I have been running with Llewellyn Falco, where we find problems through exploratory testing and then take those problems as insights to turn into unit tests. Each time we've run the course, we have needed to introduce seams to get down to the scale of unit tests, even refactor heavily. All of it has made me a bigger believer in the idea that we all too often try to test with automation the same way we test manually, and as a result we end up with hard-to-maintain tests.
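To illustrate what introducing a seam can look like, here is a hypothetical sketch (not from the course material): suppose exploring revealed that a report labels entries with the wrong day around midnight, but the date logic calls the system clock directly, so the bug can only be seen by testing at midnight. Passing the clock in as a parameter creates a seam that lets a unit test reproduce the insight on demand.

```python
from datetime import datetime, timedelta, timezone

# Before: the timestamp source is hard-wired, so the midnight
# behaviour can only be observed by actually running at midnight.
def report_day_hardwired():
    return datetime.now().strftime("%A")

# After: the clock is a parameter (a seam), defaulting to the real
# one, so a test can inject any moment in time.
def report_day(now=None):
    now = now or datetime.now(timezone.utc)
    # Hypothetical rule found while exploring: entries logged just
    # after midnight UTC should count towards the previous day's
    # report (an assumed 3-hour grace period).
    business_day = now - timedelta(hours=3)
    return business_day.strftime("%A")

# A unit test pinning the insight down, no midnight vigil required.
just_past_midnight = datetime(2017, 2, 14, 0, 30, tzinfo=timezone.utc)
assert report_day(just_past_midnight) == "Monday"
```

The seam is the `now` parameter: production code keeps calling the function with no arguments, while tests get to choose the moment in time.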

Second, I was thinking back to work and the amount of, and focus on, test automation at the system level. I've already realized that my way of looking at testing through all the layers is unique for a tester here (or quality engineer, as we like to call them), and the drive to find the smallest possible scale to test at isn't yet a shared value.

From these two thoughts, I formulated how I would like to think about automation. I would like to see us do extensive unit testing to make sure we build things as the developer intended. Instead of a heavy focus on system-level test automation, I would like to see us learn to improve how the unit tests work and how they cover concerns. And exploratory testing to drive insights into what we are missing.

As an exploratory tester, I provide "early hindsight" of production problems. I'd rather call that insight, though. And it's time for me to explore what our unit tests are made of.

Monday, February 6, 2017

The lessons that take time to sink in

Have you ever had this feeling that you know how it should be done, and it pains you to see how someone else is doing it in a different way that is just very likely to be wrong? I know I've been through this a lot of times, and with practice, I'm getting only slightly better at it.

So we have this test automation thing here. I'm very much convinced of testing each component, or chains of a couple of components, over whole end-to-end chains, because granularity is an awesome thing to have when (not if) things fail. But a lot of the time, I feel I'm talking to a younger version of myself, who is just as stubborn as I was about doing things the way they believe.

Telling the ten-years-younger me that it would make more sense to test at a smaller scale whenever possible would have been mission impossible. There are two things I've learned since:
  • architecture (talk to a developer) matters - things that are tested end-to-end are built from components, and going more fine-grained doesn't mean moving away from thinking of end-user value
  • test automation isn't about automating the testing we do manually, it's about decomposing the testing we do differently so that automation makes sense 
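As a sketch of what that decomposition can mean (the pipeline and function names here are hypothetical, not from any real system): instead of one end-to-end test that exercises fetching, parsing, and summarising through the whole chain, each step gets its own fine-grained tests, and a failure points straight at the responsible component.

```python
# A tiny hypothetical pipeline: read lines -> parse -> summarise.
# Tested only end-to-end, a failing assertion could implicate any
# stage. Decomposed, each stage is tested in isolation.

def parse_csv_line(line):
    """Parse a 'name,score' line into a (name, int score) pair."""
    name, score = line.strip().split(",")
    return name, int(score)

def summarise(records):
    """Total score over a list of parsed (name, score) records."""
    return sum(score for _, score in records)

# Fine-grained tests: when one fails, we know which component broke.
assert parse_csv_line("ada,42\n") == ("ada", 42)
assert summarise([("ada", 42), ("lin", 8)]) == 50

# An end-to-end check can still exist, but as a thin layer on top.
lines = ["ada,42", "lin,8"]
assert summarise([parse_csv_line(l) for l in lines]) == 50
```

The point is not that the end-to-end test disappears, but that most of the feedback comes from the granular tests underneath it.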
So on a day when I feel like telling people to fast-forward their learning, I think of how stubborn I can be and what actually changes my mind: experiences. So again, I allow a test I think is stupid into the test automation, and just make a note of it: let's talk about it again in two weeks, and on a two-week cycle after that, until one of us learns that our preconceived notions were off.

I'd love it if I was wrong. But I'd love it because I actively seek learning. 

Friday, February 3, 2017

Making my work invisible

Many years ago, I was working with a small team creating an internal tool. The team had four people in total. We had a customer, who was a programmer by trade, so sometimes instead of needs and requirements you'd get code. We had a full-time programmer taking the tool forward. We had a part-time project manager making sure things were progressing. And then there was me, as the half-time-allocation tester.

The full time programmer was such a nice fellow and I loved working with him. We developed this relationship where we'd meet on a daily basis just when he was coming back from lunch and I was thinking of going. And as we met, there was always this little ritual. He would tell me what he had been spending his time on, talking of some of the cool technical challenges he was solving. And I would tell him what I would test next because of what he just said.

I remember one day in particular. He had just introduced things that screamed concurrency, even though he never mentioned it. As I mentioned testing concurrency, he outright told me that he had considered it and that it would be in vain. And as usual, with my half-time allocation, I had no time to immediately try to prove him wrong. So we met again the next day, and he told me that he had looked into concurrency and I was right: there were problems. But there weren't anymore. And then he proudly showed me some of the test automation he had created to make sure problems of that type would get caught.
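The story doesn't record what his tests looked like, but a minimal sketch of that type of test (hypothetical code, assuming the problem involved concurrent updates to shared state) might hammer a shared resource from many threads and assert that no updates were lost:

```python
import threading

class SafeCounter:
    """A counter whose increments are protected by a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:  # without this, concurrent updates could be lost
            self.value += 1

def test_concurrent_increments(threads=8, per_thread=10_000):
    counter = SafeCounter()
    workers = [
        threading.Thread(
            target=lambda: [counter.increment() for _ in range(per_thread)]
        )
        for _ in range(threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Every increment must be accounted for; a race would lose some.
    assert counter.value == threads * per_thread
    return counter.value

test_concurrent_increments()
```

A test like this is not a proof of thread safety, but it catches the class of lost-update problems that plain happy-path testing never exercises.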

It was all fine, I was finding problems and he was fixing them, and we worked well together.

Well, all was fine until we reached a milestone we called "system testing phase starts". Soon after that, the project manager activated his attention and came to talk to me about his concerns. "Maaret, I've heard you are a great tester, one of the best in Finland," he said. "Why aren't you finding bugs?" he continued. "Usually in this phase, according to metrics, we should already have many bugs in the bug database, and the numbers I see are too low!" he concluded.

I couldn't help but smile at how nicely my then-manager had framed me as someone you can trust to do good work even if you wouldn't always understand all the things that go on. I started telling the project manager how we had been testing continuously at the system level before the official phase, without logging bugs to a tool. And as I was explaining this, the full-time developer jumped into the discussion, exclaiming that the collaboration we were having was one of the best things he had experienced, and telling how things had been fixed as they had been created, without a trace other than the commits to change things. With the developer defending me, I was no longer being accused of "potentially bad testing".

The reason I think back to this story is that this week, I've again had a very nice close collaboration with my developers. This time I'm a full time tester, so I'm just as immersed into the feature as the others, but there's a lot of similarities. The feedback I give helps the developers shine. They get the credit for working software and they know they got there with my help. And again, there's no trace of my work - not a single written bug report, since I actively avoid creating any.

These days one thing is different. I've told the story of my past experiences to highlight how I work, and I have trust I did not even know I was missing back then.

The more invisible my work is, the more welcoming developers are to the feedback. And to be invisible, I need to be timely and relevant so that talking to me is a help, not a hindrance. 

Monday, January 30, 2017

Entrepreneurship on the side

I had a fun conversation with Paul Merrill for his podcast Reflection as a Service. As we were closing the discussion in the post-recording part, something he said led me to think about entrepreneurship and my take on it.

I've had my own company on the side of regular employment for over ten years. I have not considered myself an entrepreneur, because it has rarely been my full-time work.

I set the company up when I left a consultancy with the intent of becoming independent. I had been framed as a "senior test consultant", and back then I hated what my role had become. I would show up at various customers that were new to the consultancy, pretending I had time for them, knowing that in reality, on the worst of my weeks, I had a different customer for each half a day. Wanting to be a great tester and make a great impact in testing, I felt that type of allocation would never let me really do that. I was a mannequin, and I quit to walk away from it.

Since I had been in contact with so many customers, I had nowhere to go. According to my non-compete clause, I couldn't work with any of those companies. They were listed in a separate contract, reminding me of where I couldn't work. One of the companies then on the no-go list was F-Secure, even though the only consulting I had done for them was a single Test Process Improvement assessment. F-Secure had a manager willing to fight for their right (my right) to employ me, and with him stepping up to say so, they vanished from my no-go list and I joined the company for 6 months that turned into three years.

Since I had set out to leave in 6 months, we set up a side-work agreement from the start. And in my three years with F-Secure, I started learning what power entrepreneurship on the side could have.

In the years to come, it allowed me a personal budget for things the company wouldn't budget for, including meetups and training that my employers weren't investing in for me. It allowed me to travel to #PayToSpeak conferences I could never have afforded without it. A paid training day here and there was enough to give me the personal budget I was craving.

I recently saw Michael Bolton tweet this:
I've known I'm self-employed on the side, and it has increased my awareness that everyone really is self-employed. We just choose different frames, for various motivations. On the side is a safe way of exploring entrepreneurship.