🧠 Know What Matters

Helping managers confidently comprehend results and making participants feel seen.

Overview

Managers loved getting honest, open-ended feedback in Viva Pulse, but struggled to turn raw responses into meaningful action. To support trust, efficiency, and transparency at scale, I led the design of Pulse’s first AI-powered summarization feature. This feature gives authors a clear, shareable summary of their report, reducing time to insight and making it easier to acknowledge their teams.

Impact

Increased report shareouts by 20%, earned 80% satisfaction, and became Pulse’s most-loved feature, now scaling across Viva.

Role

Senior Product Designer

Collaborators

1 Principal PM

1 Principal EM

6 SWEs

Skills

Workshop facilitation

Product design

Stakeholder management

Interactive prototyping

User interviews

Timeline

Q3-Q4 2025

🚀 GA: Mar 2025

Problem

User feedback indicated that authors who send Pulse requests to large audiences spend a considerable amount of time going through the entire report and gathering details before taking further action.

💬 "“I just spent two hours going through every single comment.”

They struggled to make sense of open-text feedback, delaying action or leaving insights unshared. This created frustration on both sides:

  • Managers: "How do I know what's actually going on?"

  • Participants: "Is anyone reading what I wrote?"

Our goals

We identified three key goals:

1. Reduce the time to insights for managers

Help managers grasp key takeaways faster.

2. Increase transparency and alignment

Make it easy to share a clear, unbiased summary with the team.

3. Build trust with participants

Signal that their voices are heard by surfacing their words, which creates a sense of acknowledgment.

Discovery

I facilitated three key workshops to align the triad and define scope:

  1. HMW Definition Workshop

We listed core frustrations, then transformed them into "How might we" (HMW) statements and goals.

  2. Crazy 8s Workshop

We rapidly sketched how AI could help:

  • "How might we help authors understand trends at a glance?"

  • "How might we summarize reports without losing nuance?"

  • "How might we make summarization feel accountable—not generic?"

  3. MoSCoW Prioritization Workshop

We aligned on what to build now (vs. defer), balancing feasibility with user value. I made sure the team understood the design intent and AI limitations early.

Key insights

From interviews, workshops, and Customer 0 testing:

  • Verbatims matter more than scores. Managers trust raw words over benchmarks.

  • Managers don’t want to “author” the summary. They want AI to serve as an objective lens.

  • Participants want visibility. Summary acknowledges their voices were heard.

  • Cognitive load is high. We needed to support accessibility and reduce overwhelm.

Designing UI

I explored multiple placement models:

  • A side panel vs. an integrated child section

  • Embedded in the report vs. living above it

UI Framework Explorations

Side panel · Embedded · On top · Living above

Final design decision

Ultimately, I chose to place the summary at the top of the report, visually separate from the data. It feels like a "helper" message (not part of the results), inviting trust without editorializing. Our triad also considered notification previews to meet users where they are, even though they risked reducing deep report views. We decided clarity and reach mattered more than click-throughs.

Report summarization
Summarization in notification

Designing content

Summary framework

With the AI and Responsible AI (RAI) teams, we created a safe, useful structure:

  • Intro paragraph: number of responses, response rate, strongest/weakest multiple-choice scores

  • Highlights: favorable themes + verbatims

  • Lowlights: critical themes + verbatims

  • Citations: scroll-to anchors for transparency and source tracing
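
To make this framework concrete, here is a minimal sketch of the summary's shape as TypeScript interfaces. Every type and field name is an illustrative assumption for this case study, not the actual Pulse schema.

```typescript
// Illustrative shape of a Pulse report summary.
// All names below are assumptions for the sketch, not the real schema.

interface SummaryCitation {
  commentId: string; // anchor target: the original comment in the report
  excerpt: string;   // the verbatim text quoted in the summary
}

interface SummaryTheme {
  label: string;                // e.g. "Workload" or "Team communication"
  verbatims: string[];          // raw participant quotes, unedited
  citations: SummaryCitation[]; // scroll-to links back to the source
}

interface PulseSummary {
  intro: {
    responseCount: number;
    responseRate: number;   // e.g. 0.82 for 82%
    strongestScore: string; // best-scoring multiple-choice item
    weakestScore: string;   // worst-scoring multiple-choice item
  };
  highlights: SummaryTheme[]; // favorable themes + verbatims
  lowlights: SummaryTheme[];  // critical themes + verbatims
}
```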

RAI guardrails

  • No inference or emotional analysis

  • Strip PII and remove bias triggers

  • Detect and exclude harmful comments
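
As a rough illustration of how guardrails like these could run before any comment reaches the model, here is a hedged sketch. The regex and word-list checks are naive stand-ins for the real RAI classifiers, which are not public; a production pass would also neutralize bias triggers.

```typescript
// Naive stand-ins for the RAI pre-processing pass; the production
// pipeline relies on trained classifiers, not regexes and word lists.
type Comment = { id: string; text: string };

const HARMFUL_TERMS = ["exampleSlur", "exampleThreat"]; // placeholder list

// Redact obvious PII such as email addresses and @mentions.
function stripPII(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[redacted]")
    .replace(/@\w+/g, "[redacted]");
}

// Flag comments containing known harmful terms.
function isHarmful(text: string): boolean {
  const lower = text.toLowerCase();
  return HARMFUL_TERMS.some((term) => lower.includes(term.toLowerCase()));
}

// Exclude harmful comments entirely; strip PII from the rest.
function prepareForSummarization(comments: Comment[]): Comment[] {
  return comments
    .filter((c) => !isHarmful(c.text))
    .map((c) => ({ ...c, text: stripPII(c.text) }));
}
```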

I also added clickable citations alongside each highlight and lowlight. Each citation anchors directly to the original comments and response data, letting users trace insights back to their source. This not only helps authors verify the AI's output, but also reinforces accountability, so the summaries feel credible, not abstract.

Summarization citations
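
On the web client, the scroll-to behavior behind each citation could be as simple as the sketch below. The element ID scheme and CSS class are assumptions for illustration, not Pulse's actual markup.

```typescript
// Minimal sketch of a citation's "scroll-to" behavior in the browser.
// Assumes each source comment is rendered with id = `comment-${commentId}`.
function jumpToCitation(commentId: string): void {
  const target = document.getElementById(`comment-${commentId}`);
  if (!target) return; // the cited comment may have been filtered out

  target.scrollIntoView({ behavior: "smooth", block: "center" });

  // Briefly emphasize the source so the reader can spot it.
  target.classList.add("citation-highlight");
  setTimeout(() => target.classList.remove("citation-highlight"), 2000);
}
```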

Testing and Validation

I ran early demos and follow-ups with Customer 0. Key questions we asked:

  • “Does this help you understand the report faster?”

  • “Would you share this summary with your team?”

  • “Is anything missing, biased, or unclear?”

We heard a clear preference that the summary should be visible to everyone with access to the report. This validated that the summary wasn't just a tool for authors, but a shared reference that sets the tone for transparency. The interviews confirmed the direction and helped secure alignment across the team and stakeholders.

💬 "I love this because I can rely on Copilot to give me a summary that is unbiased and validate my understanding." - Customer 0

Outcome

100k

maximum recipients in a Pulse survey

💬 "The highlights and lowlights are well-organized and easy to understand. I especially appreciate that the text responses are categorized into positive and negatives." - Mitsubishi

💬 "Love the summary! Saved me time from having to run a separate prompt to get the summary"

We are also partnering with other Viva platforms to share our learnings and approach, bringing this feature to them in the near future.

Reflection

By designing for transparency, we made it easier for managers to act and teams to feel seen.

What we built struck the right balance between clarity, safety, and empowerment.