Enabling smarter spend through creative quality insights

Lead Product Designer, Vidmob, 2025

A reporting experience that maps media spend to creative assets, helping advertisers uncover inefficiencies and reallocate budget with confidence.

Business Impact

Adopted by global brands and contributed to retention conversations.

User Impact

Helped advertisers quickly identify overspend on low-quality creative.

Product Impact

Established a reusable, actionable reporting pattern for future features.

The Problem

Advertisers lacked a unified way to understand how media spend was distributed across creative assets and how that spend correlated with creative quality and effectiveness.

As a result:

  • High media spend was often allocated to low-quality creative without visibility

  • Insights were spread across multiple tools, forcing constant context-switching

  • Existing reports summarized data but didn’t surface which assets needed action

Hypothesis

If we help users understand how media spend is distributed across creative assets, they will be able to identify inefficiencies and take action faster.

  • Visualizing spend patterns would surface insights faster than tables alone

  • Connecting spend data directly to assets would transform reports from analysis into action

Goals & Success Metrics

This redesign was grounded in direct feedback from advertisers actively using scorecards. While updating the experience to align with our new design system was important, the primary focus was responding to real user pain points and workflows. The following goals reflect the key outcomes the redesign needed to achieve.

  • “It’s hard for me to know which assets are in the report and how they are adhering at a glance without having to click into each one.”

    Users needed to quickly see which assets were included and how they were performing without drilling into each row. This goal informed the layout hierarchy, prioritizing asset-first visibility so that the most critical information was immediately accessible.

  • “I use these reports to know which platform I should run the creative, but I can’t see how each creative is adhering across channels.”

    Advertisers needed top-level metrics and channel signals visible immediately to guide campaign decisions. This led to placing key performance indicators at the top of the report, enabling faster cross-channel comparisons and actionable insights.

  • “I want to customize the way I group my criteria, rather than just by channel.”

    Users wanted control over how they viewed and grouped criteria. To meet this need while keeping reports consistent, we introduced toggleable group-by views, allowing users to analyze data in ways that matched their workflow without breaking the shared UI framework.

  • “I don't know what pass vs fail means without knowing what each of these criteria really means.”

    Users needed to understand how scores were calculated to trust the report. Clear criteria definitions and tooltips were added, making pass/fail states and individual criteria transparent and actionable.

This feedback and goal-setting informed the first round of wireframes. I translated these goals into layout and hierarchy decisions, using existing page structure components, shared design system elements, and custom elements where necessary to support clarity, flexibility, and scalability.

Key Decision:
Information Architecture

Updating our navigation to support separate use cases

Pre-flight and in-flight scorecards shared the same UI, but research showed they served different users and workflows. Users scored pre-flight assets first, then validated live assets in in-flight reports. We decided to separate them in the navigation to match these workflows, while keeping shared UI patterns for consistency and reduced cognitive load.


Key Decision:
Report Setup & Criteria Selection

What criteria are scorecard assets scored against?

A core discovery question was how scoring criteria should be applied, especially for in-flight reports that do not have a traditional setup flow.

Early assumptions favored manual criteria selection. Research, however, showed that criteria are mandated at the organization level to ensure consistent evaluation standards. I explored early concepts that allowed users to manually select criteria in order to test that assumption, and we learned that users preferred having all criteria included in the report by default, while still retaining some filtering ability.

If users can’t choose which criteria are applied, how might they still understand and interact with them?

I explored patterns that preserved organizational consistency while giving users flexibility, such as:

  • Automatically including all mandated criteria

  • Allowing users to toggle criteria visibility on and off within the report

  • Providing clear definitions and contextual detail for each criterion

A critical consideration here was that pre-flight scorecards were manually created, while in-flight scorecards were automatically generated. So how might I accommodate both use cases while maintaining design consistency?

Criteria visibility during manual Pre-flight setup

By reframing the problem from user-controlled selection to organizational consistency with user transparency, I was able to pinpoint what users actually wanted: visibility into which criteria would be included in the report based on their report settings.

Criteria control across Pre-flight and In-flight Scorecards

While criteria inclusion needed to remain consistent, users still needed flexibility when reviewing scores. To support this, I designed a shared criteria filter panel that allows users to toggle criteria visibility on and off within the report. This gave users temporary, exploratory control to test and compare criteria without permanently altering the underlying scoring model.