CO-DESIGN DRIVES NOVEL AR VOICE TECH
• META SMART GLASSES

2023 “Most Advanced Piece of Tech on the Planet in its Domain”

OPPORTUNITY

Voice has been called “the new OS” (Social Media Week, 2017), meaning that access to information, entertainment, and content of whatever sort will increasingly be controlled by voice rather than by keyboard. By that point, over 100 million AI-enabled devices had been sold. There were so many Alexa-compatible devices at the Consumer Electronics Show in 2017 that David Pogue, tech guru for Yahoo, called Alexa “the star of the show.” Research firm Ovum projected that by 2021 there would be almost as many AI assistants as people. Voice becoming the new operating system could change the way we live: any aspect of our lives that is, or could be, touched by the Internet could be affected.

As such, with AR/AI eyeglasses at the forefront of Meta's business strategy, Reality Labs was exploring the design and role of smart voice assistants for smart glasses. The voice-powered AI assistants that live inside these devices are designed to be on all the time, ready to fulfill the needs of the customer.


APPROACH & STRATEGY

After reviewing existing and secondary research, I worked with an interactive prototype that allowed for contextual inquiry, concept testing, and card sorting exercises within structured 1-1 participant interviews. I considered both group and 1-1 voice interaction models, and I sought to understand participants' reactions to voice prototype features, including utility, modalities, ease of adoption, concerns, emerging insights, and aspirational features. I also captured additional use cases and validated solutions with research cohorts.

We evaluated diverse voice designs, shaping the broader product design strategy for Meta's AR/VR/AI ecosystem. The body of data Meta Reality Labs needed required three months of iterative design and research: a new study launched each week for 10 weeks, each building on previous learnings, with a report published weekly. At the end of the 10 weeks, I distilled those 10 reports into a meta-analysis that integrated all of the data and presented final findings to the larger team.


PROCESS & METHODS

Research & Strategy
Competitive analysis, Secondary research, Contextual inquiry, Usability testing, Customer interviews, Card sorting, Concept testing, Top line summaries, Cross-study meta analysis & recommendations, Product strategy, Design strategy

Product & UX Design
Personas, Content design, AI design, AR Voice design

I interviewed participants as they listened to and interacted with voice assistants while managing personal and business tasks: taking photos and videos, messaging contacts, sharing content, making appointments, and more. After participants worked through the different scenarios and exercises, I captured their opinions and insights on the interactions, asking them to qualitatively describe their experience and to use rating scales to quantify their satisfaction with the UX. We also ran co-design exercises on the information architecture (IA) of the voice controls and card sorting to parse the IA of the menus. Breaking down the different use cases and participant motivations for using AI glasses, we distilled four working personas.
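
For context, open card-sort data is commonly analyzed by counting how often participants group the same items together. The sketch below is purely illustrative and assumes hypothetical card names and groupings; it is not drawn from the confidential study data.

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card-sort results: each participant groups menu "cards"
# into named piles. Card and pile names are illustrative only.
sorts = [
    {"capture": ["take photo", "record video"], "share": ["send message", "post story"]},
    {"media":   ["take photo", "record video", "post story"], "people": ["send message"]},
    {"camera":  ["take photo", "record video"], "social": ["post story", "send message"]},
]

# Count how often each pair of cards lands in the same pile across participants.
co_occurrence = Counter()
for participant in sorts:
    for pile in participant.values():
        for a, b in combinations(sorted(pile), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together most often suggest menu items that belong
# together in the information architecture.
for (a, b), n in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```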

Insights and recommendations were published weekly. At the end of 10 weeks, I synthesized a meta-analysis to summarize and reference thematic insights across all studies.
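
As an illustration of how weekly rating-scale results can be rolled up for a cross-study view, here is a minimal sketch using pandas; the study labels, tasks, and scores are hypothetical stand-ins for the confidential data.

```python
import pandas as pd

# Hypothetical weekly rating-scale exports: one row per participant response.
# Study labels, tasks, and scores are illustrative only.
weekly_ratings = pd.DataFrame({
    "study":        ["wk01", "wk01", "wk02", "wk02", "wk03", "wk03"],
    "task":         ["messaging", "photo", "messaging", "calendar", "photo", "calendar"],
    "satisfaction": [4, 5, 3, 4, 5, 2],  # 1-5 rating scale
})

# Roll individual responses up to a per-study, per-task view ...
per_task = (
    weekly_ratings
    .groupby(["study", "task"])["satisfaction"]
    .agg(["mean", "count"])
    .reset_index()
)

# ... then to a cross-study summary that a meta-analysis can reference
# when comparing the same tasks (themes) across all studies.
cross_study = (
    per_task
    .groupby("task")["mean"]
    .agg(avg_satisfaction="mean", studies="count")
    .sort_values("avg_satisfaction", ascending=False)
)

print(cross_study)
```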

RESULTS

  • Eight co-design & usability studies captured insights from 80 customers

  • Eight “Top Line” reports with key takeaways

  • A meta-analysis of all studies

  • Personas & Use Cases

  • V1 product design strategy recommendation for Glasses ARVA

Research artifacts are confidential. I am happy to share details in a Zoom call.