Some thoughts on product discovery and design sprints and a case study from Ableton
Design sprints are a popular format for product discovery these days. The idea of collaboratively defining which product or feature to build in a structured, timeboxed way is not radically new; design thinking, in particular, took a similar approach some 25 years ago. However, GV developed a framework that is easy to follow and has been tested in hundreds of sprints. While reading the book, I was reminded of one of my first attempts to plan and facilitate a collaborative product discovery sprint at Ableton a couple of years ago. The process had some similarities with a design sprint but was also very different in many regards, most notably in terms of scope.
Why are design sprints so popular today?
Design sprints follow a methodological pattern that has been around in the design community for quite a while. They share the double diamond approach with many other design and problem-solving methods. The double diamond refers to two separate stages — the problem space and the solution space — each of them with an opening phase to collect a large variety of insights or ideas and a closing phase to narrow down what was found during the opening phase. The first stage is exclusively about defining the problem, and it ends with a clear problem statement (or point of view in design thinking). Only then do you enter the solution space, ideating a wide range of alternative solutions (opening up again) before you focus on the one hypothetical solution that will be refined and evaluated with real users. The whole process allows some iterative back and forth between the different phases. Key to success, though, is to always know which phase you are in: not thinking of solutions when you should focus on understanding the problem, and not creating new material when you should be narrowing down.
Another key feature of design sprints and design thinking alike is working in cross-functional project teams. Personally, I believe that part of the reason why methods like design sprints are only now becoming so popular — besides the fact that GV did a good job of developing, documenting and advertising the approach — is that working in cross-functional teams became the norm in software product companies. This was very different 5 to 10 years ago, when the idea of actually sitting and working together with folks from an entirely different professional background for an extended period of time — like a week — was slightly disconcerting. Sure, you would meet to sync up on status or exchange documents, but then you would walk back to your function’s team space and work with the other PMs, designers, marketers, engineers or whatever your trade happened to be. This has changed drastically over the last couple of years, and I think it has significantly lowered the barriers to committing to a week-long cross-functional workshop exercise.
Working in cross-functional teams on a regular basis also increases confidence in the specific competencies of every individual team member and helps to build trust between people with different professional backgrounds. Both of these qualities are necessary prerequisites if people are asked to go beyond the definition of their role profile.
Successful products are generally described as sitting at the intersection of viability, feasibility, and desirability. A typical interpretation of this Venn diagram is that each of the three product disciplines owns one of the circles: PM is responsible for viability, engineering takes care of feasibility, and design is in charge of desirability — and then they try to figure out how to meet at the sweet spot in the middle. The power of cross-functional collaboration is to collectively push the boundaries of each of the circles so that the resulting overlap gets bigger and bigger. The team needs the functional expertise to push the boundaries of what is viable, feasible or desirable, but it also benefits from the persistence of the non-expert who does not accept the status quo.
Collaborative workshop techniques such as design sprints provide an additional safety net, as people are only asked to let go of their traditional role profile for a limited time.
Live 10 Bootcamp
My first opportunity to organize a timeboxed product discovery at Ableton was not really a design sprint — it was multiple sprints running in parallel to define an entire product release.
One of the differences between online apps and standard desktop software — if it doesn’t have a subscription model — is that you need to bundle features into major releases that you can sell to your customers. This cyclic business model requires a certain amount of planning: you have to decide which features to ship immediately to keep your existing customers happy while they wait for the new release, and which features to hold back to make the upgrade compelling enough for customers to buy and for the press to write about.
Right after the release of Live 9 and Push we blocked two weeks — our regular sprint cadence — in everyone’s calendar to attend the Live 10 Bootcamp. Everyone meant everyone within product development. PMs, designers, engineers, managers, some 40 people in total, if I remember correctly. The goal of this sprint was to define and showcase the scope of the next major release and present it in front of the entire company on the last day of the sprint. The presentation itself was a kind of prototype of the actual product announcement as we imagined it to be at the time of the release.
Process, challenges, and learnings
We divided the sprint into two parts, with the first week dedicated to defining the problem and the second week to coming up with a solution and prototype. Then we divided the group into small teams of 3 to 4 people, each of them responsible for a certain topic. During the kickoff meeting, we presented some context about trends in the industry, upcoming challenges and goals for the company for the next years. We also introduced the team to design thinking in general and the specific process we had planned for the next two weeks, and we provided copies of the d.school bootcamp bootleg as a reference for methods and tools that might be useful throughout the weeks. Most participants didn’t have prior experience with design thinking methods or these kinds of workshops in general, so we expected to coach the teams along the way, but we underestimated how much time we would have to spend with each team to “learn on the job” together.
The individual topics for each team had been defined in preparation for the sprint, based on a long-term vision and substantial user research. Topics were rough descriptions of certain usage scenarios, a certain area of the application, a certain type of instrument or even the next generation of hardware. It turned out that in many cases the assignments were already so focused on a particular type of solution that getting back into the problem space in the first week felt somewhat artificial and caused some confusion with regard to the workshop goals. For many workshop participants, it felt like validating a solution that had already been defined by “someone above” rather than an open exploration.
In hindsight, it became clear that we had tried to reach two goals — the definition of the entire release and the definition of the individual features/products — at the same time, without properly separating the layers or communicating these different goals and their respective degrees of freedom to the workshop participants.
The individual days were structured by fixed start and end times to make sure that everyone was in the building and available for questions at the same time. We also set fixed times for coffee breaks in the morning and afternoon and made sure that snacks and drinks were available in the kitchen during these breaks. The idea was that people across teams would meet at these times and have a chance to discuss ideas or ask questions. Although this effectively turned the breaks into another workshop format, it worked as expected, and people were excited to talk about their ideas. In addition, we as the management team had daily check-ins with each of the teams, so that the team could share results and clarify open questions, and we could give advice if we felt that a team was totally off track.
Many of the discussions during these check-ins were about the demo on the last day. For many teams, it wasn’t entirely clear what was expected in terms of prototype fidelity — working software as opposed to fake mock-ups. Some teams saw the value of presenting a hack to gather early feedback about the feature idea, while others considered the presentation a marketing-driven exercise that would distract them from doing the research that was necessary to evaluate feasibility. Again, in retrospect, it is obvious that we hadn’t done a great job of effectively communicating the goals of the final presentation or — even worse — that we hadn’t been clear about our own expectations.
Despite all this criticism, when the demo finally took place, everyone was exhausted but also excited to share results and enthusiastic about the great work shown by all of the teams. The prototypes included workflow features, sound examples, hardware mockups and even a drum set on stage to demo a certain feature. Dennis, who had presented the software during the previous “real” product launch, presented the products as if they were already real, and the audience — all company employees — put themselves in the shoes of the prospective audience at launch.
In the team feedback collected afterward, almost everyone mentioned that they had never before had such a clear picture of what the next release should be about and the challenges they would need to tackle in order to reach this goal. Although there were plenty of opportunities to improve the format, most people were impressed by the amount of progress they made in two weeks and by the collaboration within the mixed teams.
The goal of the sprint was to clarify the individual features of the release but also to evaluate the release as a whole. Would the combination of individual features be balanced enough to be interesting for a diverse user base? Would we meet basic expectations for the release but also have enough exciters? To that end, we asked the audience of the final presentation to fill in a survey and rate their feelings about each of the features being included or not being included in the release. This method allowed us to plot the position of the individual features within a Kano model. The results of the survey helped us to identify the potential highlights of the release as well as the must-haves that we needed to include at a basic level to not disappoint users.
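The pair of questions described above — how do you feel if the feature is included, how do you feel if it is not — is the standard Kano question format, and the answer pairs can be mapped onto Kano categories with the classic evaluation table. A minimal sketch in Python (the answer labels and the sample votes are hypothetical; only the table itself is the standard Kano mapping):

```python
from collections import Counter

# Standard Kano evaluation table: (functional, dysfunctional) -> category.
# A = attractive (exciter), O = one-dimensional (performance),
# M = must-be, I = indifferent, R = reverse, Q = questionable.
TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(responses):
    """Classify one feature from a list of (functional, dysfunctional)
    answer pairs by taking the most frequent Kano category."""
    counts = Counter(TABLE[f][d] for f, d in responses)
    return counts.most_common(1)[0][0]

# Hypothetical survey answers for a single feature:
votes = [("like", "dislike"), ("like", "dislike"),
         ("expect", "dislike"), ("like", "neutral")]
print(classify(votes))  # "O": mostly rated as one-dimensional / performance
```

A per-feature summary like this is enough to place features on the must-have/performance/exciter axes and check whether the release mix is balanced.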
The demo at the end of the bootcamp did a great job of presenting a vision of the release. We recorded it to share with colleagues who worked remotely or missed the presentation for some reason. Keeping the video was not the only way to capture the results, though. The demo was a prototype of the presentation we would give to announce the product to the wider audience on release day. On top of this, we wrote a mock-up magazine review covering the release. The image below shows a number of posters I sketched to provide a rough structure for the article. They already give some context with regard to the timeline and potential competition at the time of the release. We later filled the structure with actual text, written as we imagined a reviewer would describe the release. We tried to imagine the highlights from the editor’s perspective but also to anticipate weaknesses that we would not address in the release, which helped us be conscious of the necessary trade-offs.
Postscriptum: Product training
The Live 10 Bootcamp was exhausting and fun, but most importantly it served its purpose of clarifying the release goals and helping the company share a common product vision. It was also a great learning experience. It showed where to improve the format in terms of preparation, process, and communication. And it made clear that working in cross-functional teams has particular challenges: people with diverse functional backgrounds often lack a common set of tools and techniques, they have different ways of approaching a problem, and they hold different assumptions. All of this diversity is good because it opens up new perspectives, but it also takes a while to tune into everyone’s qualities.
Especially when working in cross-functional teams is not limited to workshops and design sprints but is the normal mode of working, it pays off to invest some energy in strengthening a common ground across disciplines. Ableton has always had a strong training culture, thanks to one of its directors who set up a training program for his unit and then extended it to the rest of the organization. For the product development division, we added a problem-solving training focused on the particular needs of this area.
The training introduced a number of problem-solving techniques and put them into the context of our own organization and management approach. We tried to bridge the gap between disciplines by showing patterns that emerge across different methods and tools, e.g. similarities between methods as diverse as Design Thinking, the McKinsey problem-solving approach and lean A3 thinking. The training not only helped to build a common toolbox for the teams; the shared experience of attending it and working on workshop exercises together was in itself training for cross-functional work.
Originally published in Prototyping: From UX to Front End on Medium.