In the fall of 2015, as part of my onboarding at IBM Design, I participated in a twelve-week design bootcamp. For the first three weeks I worked with a team of designers to develop a proof of concept illustrating the capabilities of Watson’s natural language processing APIs.
Our expected deliverables were an analysis of the problem and goals informed by user research, a user-centered design solution, and a coded prototype.
My contributions to the project included conducting and synthesizing user research, mapping user flows, wireframing, and presenting our work.
Our stakeholders were the IBM Watson team and the IBM Design department. Our work was guided by user needs, but our outcome also needed to satisfy these stakeholders. The Watson team wanted a fresh perspective on their problem and a proof of concept they could develop further. The Design department wanted to see that we could solve problems iteratively and collaborate to produce quality work on a short timeline.
We followed the IBM Design iterative loop of observe, reflect, and make, cycling through these steps and revising our design based on user and stakeholder feedback.
In three weeks we had to understand the language APIs, align our product with the Watson brand, and go from concept to prototype, while giving weekly presentations of our progress.
Our challenge was scoped by the following design prompt:
A consumer can communicate in-person with non-native speakers with their mobile device at the speed of thought, and give Watson feedback along the way.
Our team used three of the Watson cognitive services available through the IBM Bluemix developer platform: Speech to Text, Text to Speech, and Language Translator.
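Conceptually, the prototype chained these services into a single pipeline: transcribe the speaker, translate the transcript, then synthesize the translation aloud. The sketch below illustrates that flow with stand-in functions returning canned values; the function names and responses are illustrative only, not the real Bluemix SDK calls, which in the actual app each hit the corresponding Watson API.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for the Watson Speech to Text service:
    # returns a canned transcript instead of calling the API.
    return "where is the train station"

def translate(text: str, source: str, target: str) -> str:
    # Stand-in for the Watson Language Translator service.
    canned = {"where is the train station": "où est la gare"}
    return canned.get(text, text)

def text_to_speech(text: str) -> bytes:
    # Stand-in for the Watson Text to Speech service:
    # a real call would return synthesized audio, not encoded text.
    return text.encode("utf-8")

def translate_utterance(audio: bytes, source: str = "en", target: str = "fr") -> bytes:
    """Pipeline: transcribe, translate, then speak the translation."""
    transcript = speech_to_text(audio)
    translated = translate(transcript, source, target)
    return text_to_speech(translated)
```

Structuring the app around this three-step chain meant each Watson service could be swapped or tuned independently.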
To guide our research we developed three core questions to help us better understand user pain points in translation.
We explored different scenarios for language translation through user interviews with doctors, professional translators, students, world travelers, and Peace Corps volunteers.
Our research revealed that to make the biggest impact, we needed to show that IBM Watson could handle the most challenging translation situations. According to our user interviews, the most difficult situations were characterized by three elements:
We synthesized our user interviews by creating empathy maps that sorted the information into four areas: says, does, thinks, and feels. This exercise helped us better understand our users and develop three personas: a doctor whose patients speak a different language, a knowledge worker working abroad, and a Peace Corps volunteer.
For each of our personas we explored their translation scenarios by creating scenario maps.
Based on a typical scenario for each user we depicted their workflows and mapped their pain points in order to see where our app could help.
We learned about strategies users had developed to communicate across languages, such as keeping a list of common phrases, calling a local friend who speaks the language, or practicing scenarios with a host family. Reviewing these strategies, we found common elements that could enhance the user experience of our mobile app.
Knowing the context of a conversation can increase confidence and help convey information.
Preparing for a situation eases anxiety and helps create confident and fluid interactions.
Reviewing and reflecting on an experience enhances learning from mistakes and reinforces knowledge.
Informed by our users’ needs and the characteristics of a successful translation experience, we explored initial concepts for our translation app.
We began at a high level by storyboarding different successful situations. This exercise allowed us to imagine the ideal scenario for our users without worrying too much about technical limitations.
Our storyboards helped us articulate the key features of the app that would embody a positive translation experience for our users.
The key features:
We used paper prototypes to explore how a user might engage with these features in different translation scenarios.
Watching users engage with our prototype revealed that looking up words on a phone mid-conversation interrupts the flow and distracts the participants. To address this, we revised our design to include a conversation-recording feature built on the Watson Speech to Text API. We then extended this feature so users could review a conversation afterward and save words from it to their library.
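The record-and-review flow can be sketched in a few lines: a recorded conversation is stored as a transcript, and the user saves individual words from it to a personal library. This is a minimal sketch; the class and method names are hypothetical, not taken from our actual prototype code.

```python
class Conversation:
    """A recorded conversation, stored as lines of speech-to-text output."""
    def __init__(self, transcript: list[str]):
        self.transcript = transcript

class Library:
    """A user's personal collection of saved words."""
    def __init__(self):
        self.saved_words: set[str] = set()

    def save_from(self, conversation: Conversation, word: str) -> bool:
        # Only allow saving words that actually occurred in the conversation,
        # keeping the library grounded in real usage the user can review.
        spoken = {w.lower() for line in conversation.transcript for w in line.split()}
        if word.lower() in spoken:
            self.saved_words.add(word.lower())
            return True
        return False
```

Tying saved words back to a reviewable transcript was the point of the design change: vocabulary is captured after the fact rather than by breaking the conversation to look things up.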
This demo illustrates how a user moves through the app utilizing the different features.
At the end of the three weeks we gave a final presentation and received feedback from our users, colleagues, and stakeholders. We packaged our research, designs, and technical specifications and delivered them to the Watson team.
The most successful elements of the project were the coded prototype, backed by research that addressed real translation pain points, and the communication skills we developed collaborating within our group and with other IBM teams.
If we’d had more time to further develop the project I would have liked to investigate these additional areas: