Digital Design Portfolio

Alice CUI


Spotify’s Pocket DJ

Design Challenge

Conversational agents are a new way to leverage artificial intelligence to provide service and care. We were asked to create a conversational user interface that acts as an AI helper, enhancing an existing service or user experience.

Client: Spotify; Team: Mary Safy, Elizabeth Xu; Duration: 3 Weeks

Skills: p5.js, After Effects, Sketch

Project Breakdown:

  1. Research

  2. Exploration

  3. Concept

  4. Visual Design

  5. Refinement


Meet Alice, Spotify’s DJ

Alice is a conversational agent for Spotify’s premium users. Alice acts as a DJ, replacing the traditional radio announcer, humanizing a curated music experience, and delivering Spotify’s values of emotional sensitivity, personalization, and music. Alice integrates with Twitter, Fitbit, Facebook, Snapchat, and other social media accounts to understand her audience’s personal habits and curate situationally appropriate music, acting as an ambassador for the Spotify brand.


Research

Structured Exploration

Our initial challenge was to frame the problem in a way the team could act on and communicate clearly. After some initial exploration, we homed in on four key areas:

  • Emotional Range: What is the range of emotion that our prototype should be able to respond to? What type of information should it communicate back?

  • Technical Challenges: What types of libraries are appropriate for exploration? What capabilities can we define for our AI beyond our working prototype?

  • Visuals: What visual system will we use? What variables must we define?

  • Use cases: What’s the ecosystem around this service? Where can we see this existing, and under what conditions can we see this delivering value?



Exploration


YouTube Live Concept

Our initial concept exploration was a YouTube Live experience. We noticed that YouTube Live is often used purely for audio streaming, without the video component that is central to the YouTube experience.

From here, we identified two elements to pursue:

  1. Community-building within the YouTube Live chat feature.

  2. A visualization that communicates directly with the user about the music.

We fleshed out several aspects of this direction before ultimately rejecting the concept for three key reasons:

  1. Back-and-forth conversation doesn’t make sense for an AI on a live platform.

  2. An AI that guides the experience works better as a subtle presence than an explicit feature.

  3. The interface imposes constraints on content that already fills the screen.


Concept

New Direction

Visual Explorations

Moodboard


Motion Test 1


Motion Test 2

Final Iteration

Our final iteration responds to various feelings and emotions through trigger words like ‘happy’, ‘sad’, and ‘angry’, falling back to a neutral state otherwise. This iteration was coded in p5.js and uses a speech-recognition API to register what the user is saying and update the visual accordingly.
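As a minimal illustrative sketch of this trigger-word behavior (not our exact prototype code), the snippet below assumes the p5.speech library, whose p5.SpeechRec object wraps the browser’s Web Speech API; the mood names and colors are hypothetical stand-ins for Alice’s visuals.

// Minimal p5.js sketch: listen for trigger words and shift an emotional state.
// Assumes p5.js and the p5.speech library are loaded on the page.
let speechRec;
let state = 'neutral';

// Hypothetical mapping from trigger word to a display color.
const moods = {
  happy: [255, 200, 60],
  sad: [70, 90, 180],
  angry: [220, 60, 60],
  neutral: [200, 200, 200],
};

function setup() {
  createCanvas(400, 400);
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;      // keep listening across utterances
  speechRec.interimResults = false; // only act on finalized phrases
  speechRec.start();
}

function gotSpeech() {
  if (!speechRec.resultValue) return;
  const said = speechRec.resultString.toLowerCase();
  // Revert to neutral unless a trigger word is heard.
  state = 'neutral';
  for (const word of Object.keys(moods)) {
    if (said.includes(word)) state = word;
  }
}

function draw() {
  background(20);
  noStroke();
  fill(...moods[state]);
  circle(width / 2, height / 2, 200); // stand-in for Alice's visual form
}

Because recognition runs continuously, the visual can change state mid-sentence without any explicit button press.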

We moved through a few earlier iterations before dropping the intense colors on the inside and the rings that erased as the user spoke. Those previous versions proved visually messy and seemed to behave more erratically.

For our final version, we settled on motion that was smooth and flowing, to communicate a calmer, continuous feeling.
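One common way to achieve that smooth, continuous quality in p5.js (a hypothetical sketch of the general technique, not our exact code) is to drive each ring with a slow sine oscillation and ease it toward its target radius with lerp():

// Hypothetical smoothing sketch: each ring eases toward a gently
// oscillating target radius, avoiding abrupt, erratic jumps.
const rings = [];

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 5; i++) {
    rings.push({ base: 60 + i * 30, radius: 60 + i * 30, phase: i * 0.8 });
  }
}

function draw() {
  background(20);
  noFill();
  stroke(220);
  for (const ring of rings) {
    // A slow sine wave sets a softly oscillating target radius.
    const target = ring.base + sin(frameCount * 0.02 + ring.phase) * 12;
    // A small lerp() factor smooths any sudden changes into flowing motion.
    ring.radius = lerp(ring.radius, target, 0.05);
    circle(width / 2, height / 2, ring.radius * 2);
  }
}

The small lerp factor acts like a low-pass filter: even if the target jumps when a new emotion is detected, the drawn radius glides there over many frames.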