Lemme Get That

A mobile service that helps you treat your loved ones

Role: Ideation, concept validation, wireframing, motion design 
Team: Jacqueline Chien, Hau-yu Wong, Ajayan Subramanian
Skills: Personas, ecosystem collection, storyboarding, wireframing, animation
Duration: 4 weeks
Project Brief
The objective was to design a mobile app that supports the exchange of token gifts between close friends and family members who live far apart. The end result was an InVision prototype that demonstrates the value of our design to our two personas, Daisy and Julie (introduced later on this page).
Our Solution

A mobile service that replicates the act of picking up the tab for a loved one.
From storyboarding ideas with users, we realized that even a small treat like coffee on a rainy day could brighten someone's spirits and renew contact, despite the challenge of long distance. The service incorporates vendors such as Starbucks, Uniqlo, and Target as providers of token gifts. It leverages the growing popularity of retail payments through near-field communication (NFC) as the context that triggers the receipt of these gifts. Our solution benefits multiple stakeholders: besides adding value to the gifter and the receiver, it creates an alternate revenue stream for vendors and an opportunity to delight customers.
The Problem Space

We built empathy for the problem space by understanding our personas.
We used our personas as a basis for ideation, storytelling, and making design decisions. These personas gave us necessary constraints in our exploratory brainstorming and iterative UI design. I made the graphic below to introduce you to Daisy and Julie.
Using ecosystem collection, we explored ways in which Daisy and Julie could exchange token gifts.
Each of us filled out a worksheet to articulate our understanding of the token gift-giving process. We answered questions such as 'What is a token gift for Daisy and Julie?', 'When would they want to send/receive one?', 'Where would they receive it?', 'How would the sentiment be delivered?', 'What form of acknowledgement is appropriate?'.
There were four main opportunity areas to add value to Daisy and Julie.
1) Use the recipient’s context (from GPS or other sensor data) to find an appropriate gift.
2) Create and share customized token gifts within the context of a specific social group.
3) Identify opportunities to give gifts, to convey empathy conveniently but not impersonally.
4) Share an event/experience with someone in real-time.

Creating scenarios and storyboarding them with users helped us (in)validate our concepts.
One of our storyboards (pictured below) used our personas' love of dogs as a trigger to start an interaction. The interaction in itself was a token gift, a way to show someone you're thinking about them, no matter how far apart they are. On testing this idea with users, we discovered that the gadget we introduced was intrusive and extraneous to the concept, which would be better served by simple photo sharing.
Another storyboard (pictured below) involved Julie buying Daisy a coffee when she's having a rough day. The trigger for sending the gift is the app telling Julie that Daisy's encountering gloomy weather in Seattle. This storyboard largely influenced our final design. We received positive feedback on the asynchronous sending of a gift. But we realized that we didn't need to find a trigger, such as a glimpse of weather at Daisy's location. The mother-daughter relationship is enough to encourage Julie to think of Daisy.
It was apparent that Daisy and Julie already think about treating each other. Our role was to translate these thoughts into quick, seamless treats. Our solution was 'Lemme Get That'.
From storyboarding we found that there is no need to notify the sender about an opportunity for a gift. We focused instead on designing a quick, seamless, and convenient way for Daisy and Julie to exchange token gifts. Daisy or Julie would likely visit a Starbucks, Chipotle, or Target during the course of a week, and she could be treated to a token amount at the time of purchase at one of these stores. We envisioned a future where payments are made through NFC, which provides an opportunity to discover and deduct this token gift amount. This adds a surprise element for the receiver, and removes the need for her to install an app to receive a gift. I made the diagram below to illustrate the sequence of events, and how it benefits multiple stakeholders.
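To make the checkout-time deduction concrete, here is a minimal sketch of the logic described above. All names and the data model here are hypothetical illustrations, not the team's actual implementation: we assume a ledger of pending gifts that is checked when the recipient pays by NFC at a participating vendor.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TokenGift:
    """Hypothetical record of a treat held for a recipient at a vendor."""
    sender: str
    recipient: str
    vendor: str
    amount: float   # amount (USD) the sender has committed
    expires: date
    redeemed: bool = False

def apply_gift_at_checkout(gifts, recipient, vendor, bill, today):
    """At NFC payment time, look for an unredeemed, unexpired gift for
    this recipient at this vendor and deduct it from the bill.
    Returns (new_total, sender_to_thank)."""
    for gift in gifts:
        if (gift.recipient == recipient and gift.vendor == vendor
                and not gift.redeemed and today <= gift.expires):
            gift.redeemed = True
            discount = min(gift.amount, bill)
            return bill - discount, gift.sender
    return bill, None  # no applicable gift; pay as usual

# Example: Julie has treated Daisy to $10 at Starbucks.
gifts = [TokenGift("Julie", "Daisy", "Starbucks", 10.0, date(2016, 12, 31))]
total, sender = apply_gift_at_checkout(
    gifts, "Daisy", "Starbucks", 4.50, date(2016, 12, 1))
```

Because the lookup happens on the payment side, the receiver discovers the treat as a surprise at the register, which matches the flow in the diagram.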
Left: The sequence of events that go into a typical scenario with 'Lemme Get That'.
Right: A value flow model indicating the benefit to the different stakeholders.

We broke down the scenario of giving a token gift into individual tasks a user could accomplish, and laid them out in a navigation map.
To gift someone, a user needs to a) pick a recipient, b) pick a vendor or retail outlet, and c) decide on an amount and expiration date. We focused our prototyping efforts on these essential tasks, but still listed out other tasks that a user might need to perform.
The navigation map ensured our information architecture was sound, and helped us prioritize important screens to prototype.
We sketched wireframes on a whiteboard, and annotated details of interactions.
It was exciting to finally translate our concept into visuals. The whiteboard, with its large size and erasable nature, is a convenient tool for collaborating and iterating on UI design. We annotated details of interactions and animations, and returned to these sketches for inspiration as we moved to a higher fidelity.
We digitized these into a clickable Balsamiq prototype.
At this point, we divided responsibilities and continued to give each other regular feedback and critique. Jacqueline led the visual design of the high-fidelity mockups. Hau-yu took up the task of putting together a scenario in InVision. I was responsible for conceiving and prototyping animations. This Balsamiq prototype gave us all a common basis for our respective tasks.
We continually iterated on our design after testing it with users.
Pictured below is an instance of our iteration. One of the tasks a user performs on 'Lemme Get That' is choosing a vendor at which Daisy gets her treat. We wanted the user to know about existing treats that had already been covered for Daisy but which she had yet to redeem. We communicated this information (leftmost screen below) by adding the scheduled treat amount to the card, along with its expiration date. Users found this confusing: "Is $10 the fixed cost of a Starbucks treat?" "What does $10 mean?" We chose to remove this information from the card (middle screen), but the dot was too subtle an indicator for users. We finally chose (right screen) to display the treat amount being held, but made it more explicit with text. We also did away with the history of treats (left screen), since we got the impression this would make users 'keep score'.
Left to Right: The evolution of the 'Choosing Vendor' screen.
Pictured below is the iteration of our home screen. The user's primary task here is to choose someone to treat. We initially communicated which vendors the user had already paid to gift the receiver (leftmost screen). We moved away from this as it discouraged users from sending more treats to the same person. We also wanted to encourage the user to treat someone she hadn't treated before, Roy for instance. While the idea was conceptually good, it made users feel that this card was different from the other cards. We gradually made the 'add contact' feature more prominent (rightmost screen) to encourage more activity on the app.
Left to Right: The evolution of our home screen, where the user chooses someone to gift.
I added animations using After Effects, to provide feedback for user actions, to draw attention to parts of the interface, and to make transitions smoother.
A strong motivating factor for adding animations was to improve feedback on payment activity. Users are cautious when it comes to transactions, so we wanted to gain their trust. We achieved this through confirmation prompts and noticeable changes to the interface. In conceiving the animations I was particularly influenced by Windows Phone's screen transitions and Google's Material Motion guidelines. Below are some instances of our motion design, along with the reasoning for its use.
Working with personas was a major takeaway. It gave us a common language and understanding for making design decisions. It also forced us to think outside of our own personal experiences and consider Julie and Daisy's perspective. We lived and breathed our personas, so the idea of a service around picking up the tab for someone came quite organically. Besides the challenge of learning After Effects for this project, I also had to tweak Illustrator files to animate particular layers individually. Since then I've continued to hone my animation skills in After Effects, Keynote, and Pixate.