How Atlassian tests its work every other week
Designing Atlassian – Medium | Georgie Bottomley
Within Atlassian our Designers, Product Managers, Developers, QAs, Writers and Researchers all work together in cross-functional product teams. We are always looking at ways to improve, and a few years ago we set up Atlab as part of this cycle of continuous improvement. Atlab is a testing lab that gives everyone in the team direct access to customers.
One of the main goals of Atlab is to make research easy, cheap and quick so that anyone in the team can take part. This enables the customer to be at the centre of everything we do.
Before we get into how we are running our sessions, I want to clarify the goals of our testing. Too often people think the goal of testing is just to learn something and to gain insights.
I don’t think this is always the case.
The goals of Atlab:
Insights: first and foremost we want to learn if our solutions are working. Can participants find and use our features?
Empathy: sitting in a room for an hour with a stranger, seeing our products through their eyes, is a great way to build a deep understanding of our users.
Learning: by running this cadence all our designers learn how to create a testing script, moderate sessions, and have insightful conversations with customers.
We are able to have these goals because we have sessions running every other week. It means if one session doesn’t go to plan, or if we are training someone up, or even if we have nothing to test and just have a chat, it’s okay. This is all part of the process. Having testing running so frequently means there is less pressure on the team to get it right the first time, and they can simply enjoy meeting customers and building empathy.
As we talk more and more about Atlab externally, and share how easy it is to start your own testing lab, we are often asked questions about how we find the time to test so frequently. People normally associate testing with a lot of time and effort.
In the spirit of Atlassian’s value Open Company, No Bullshit I wanted to share some things that have been working well for us, and some things that haven’t.
After several months of iteration this is where we’ve ended up. We still regularly run retros to ensure this is working well for the product team, but this structure means that within two weeks of joining the company you meet a customer, and no matter the project stage you can get customer feedback.
So here goes, a breakdown of how Atlassian approaches its fortnightly testing…
The week before testing
This is your week to carry on with work! It is really important not to break the momentum of work being done, so in your non-testing week you can concentrate on your designs, and not work on preparing anything to test.
Basically spend a week ignoring the fact that you have testing next week.
Monday - the start of the week
Find out what the team has to test.
We do this by creating a page, and encouraging people to ‘dump’ their tests. This could be a couple of questions, a link to a prototype, or a fully fleshed out task based test.
In the afternoon, once we are happy that the team has added everything they have, one person pulls it together into a testing script. This is a detailed plan of what we will ask, show and look for during the session. Some weeks this is the researcher, and some weeks the researcher works with a designer to create the script. It is a great way for team members to learn how to create a script, take a participant on a journey, and make sure we are asking the right types of questions to get the answers we need.
Have an owner. It is great to share out the tasks, but you need someone to make sure they get done! Make sure every session has a moderator and note-taker, everyone has been asked if they have any content they would like to test, and the script has been created.
A good script is like a good movie script. It needs a beginning, middle and end. You should choose the order of your tasks depending on the level of context a participant needs to answer them. A little like Alice down the rabbit hole, you start with high-level questions, then over the course of the session drill down into their motivations.
Tuesday - the day before testing
Walk through script and prep!
Encourage the team to check which sessions they will be taking part in the next day. Each team member should have a maximum of three sessions. This could be note-taking or moderating. Some of the best feedback we have had from our cadence is the developers joining as note takers. Every team should have more than enough people to share the love and make sure you don’t get burnt out.
Have a meeting to walk through the script. This is really useful for making sure everyone has read the script before they moderate (!!!); it lets the team spar on the script, and every moderator can understand what we want to learn from each test. These sessions can be half an hour, but they often run into an hour once you allow for discussion time!
The owner needs to make sure all the materials for testing are ready: incentives, print-outs, surveys.
Make sure everyone knows what they are doing. Last-minute changes to the moderator line-up can be really disruptive; being considerate and playing as a team works well here!
Moderating more than three sessions in a row is hard work. Even a seasoned researcher won’t be able to tell the sessions apart when you speak to them afterwards. You have a team; use them!
Wednesday - the day of testing
Testing, testing, testing.
The owner for the week should be in early to make sure the room is set up and everything is ready to go. Make sure the recording software is running, and video conferencing is up so the rest of the team can watch from their desk.
As a user arrives at the building, make sure someone is there to greet them. We post arrivals in a chat room so anyone can go and collect them. It also means anyone can run a session… remember, don’t be a hero and spend a full day testing.
As the day takes place, the note-takers can write up each session there and then, straight into a collaborative note-taking tool of your choice; we use Confluence.
One person can’t do everything. By delegating out moderating and note taking, you can get the whole team involved, and the team stays fresher for the sessions. It means each team member can be involved in 2–3 sessions, and no-one has to take a day out of work.
The best way to learn about what a user did is to talk about it. After each session talk to your team members in the room. Learn what they noticed, discuss what you think you could have done better, and laugh about the insights!
Thursday - the day after testing
Review and iterate.
The best way to understand what we saw is to discuss the session. We have a half-hour meeting to which everyone who was involved in the day is invited. We quickly walk through the different tests we ran and what our insights were. As the team is typically all in the room at this stage, it’s a great place to discuss next steps and how to iterate on an idea. This is probably the highlight of the week, and as you are delegating out the moderation, it’s really important to join the dots for everyone afterwards.
You will never watch the videos again. Whilst they are great to have, good summary notes and knowing what the next steps are within the team are the most important outcomes from the day. You shouldn’t be spending longer writing up the testing than doing the testing itself. The goal is lightweight, regular validation.
Friday - day of rest
The team can get to actioning their recommendations on Thursday and Friday, or even start working on their next project. This is your time to move on and get back to work!
All together, for a team member this is probably about 4.5 hours of their time in the week; for the person leading the week this increases to about 6 hours. We find that this is totally manageable alongside your other priorities and the work that needs to be done. Working together pulls the team closer, and everyone becomes an advocate for the customer.
Having these sessions holds the whole team accountable: no work should go out the door that hasn’t been tested, and no-one should be too busy to interact with a customer.
I would love to hear from you if you try to use this process in your work. Did you find it worked for you? Was there anything that you changed or modified? We are always working on making this as effective as possible- just enough research at just the right time.
Did you enjoy this post? Why not give it some ❤️ or a share? Want more of the same? Consider following Designing Atlassian.