Final design (anonymized)

Validation Tool

Auditing tool for Data Scientists
5.15.20 - 7.15.20

Problem

As a Data Scientist for a martech company, I need to check the output from the new transaction-data "Cleaner" so that its logic can be validated and the business can reduce the "dirty data" backlog.

Team

  • Product Manager
  • UX Lead
  • Dev Lead
  • Data Science team
  • Product Owner

Outcome

  • Increased user task efficiency by 75%
  • Decreased the "dirty data" backlog by 45% in just 1 month

My Process


Discover

  • Architecture Diagram
  • Stakeholder Interviews
  • User Interviews
  • Contextual Inquiries

Define

  • Journey Map
  • Card Sort
  • Impact v Effort Matrix

Design

  • Ideation
  • Review & Critique
  • Wireframing
  • Usability Testing

Deliver

  • Visual Specs
  • Functional Spec

Determine

  • User Interviews
  • Survey

Background

This company builds a digital ad platform for banks, analytics tools for advertisers, and service design tools for its employees. The foundation rests upon accurate consumer transaction data, referred to as transaction “strings”. The Validation Tool ("Val") is a way to manually check the output of an automated string cleaner.
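
To ground the terminology, here is a minimal sketch of the kind of record Val surfaces for review. The raw string and every field name below are hypothetical, since the actual schema is proprietary.

    # Hypothetical example only; the real transaction schema is proprietary.
    # A raw transaction string as it might arrive from a bank feed, and the
    # structured record the automated Cleaner could derive from it. Val lets
    # a Data Scientist compare the two and approve or reject the result.
    raw_string = "POS DEBIT 4417 SQ *BLUE BOTTLE COFFEE OAK CA"

    cleaned_record = {
        "merchant": "Blue Bottle Coffee",  # normalized merchant name
        "category": "Coffee Shops",        # inferred spend category
        "location": "Oakland, CA",         # parsed location
        "status": "pending_review",        # awaiting validation in Val
    }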

This was my first project with the company, and one that another designer had seemingly brought to a close. However, there were many knowledge gaps that needed to be addressed. One of the first artifacts I created was an architecture diagram revealing how the system worked.

In this role I needed to own the entire UX process, including research, strategy, interaction design, and visual design.

One challenge was convincing my new team that a "rewind" was necessary. To address this, I quickly set up a usability test with the actual users and presented my findings along with a time-bound action plan.

Discover

I also needed to build an understanding of the end users and the problem space, so I structured the user interviews so that half of each session was spent understanding the daily lives of data scientists (their hopes, dreams, and frustrations) and half was spent in contextual inquiry (a walk-through of the current cleaning tool). The highlights included:

  • The existing way of cleaning strings was extremely frustrating; there were so many low points in the process that any automation would be a significant improvement. See the journey map below...
  • Users had to jump between 4 different systems to clean strings, though it was shown that some of that switching could be eliminated through API integrations.

Define

We conducted a set of ideation sessions aimed at drawing a clear boundary between Val and another tool ("TQ"). The first session was a card sort to bucket functionalities. The second was a wireframing activity in which teams mocked up the tools and discussed how they might work. Both activities included 3 members of the Data Science team, the Product Manager, the Dev Lead, and myself. We successfully aligned on the distinctions, and from that we drafted and prioritized the user need statements.

As a Data Scientist, I need to... (see the workflow sketch after this list)

  • review strings submitted from the Cleaner
  • update approval status of submitted strings
  • flag/comment on errant/questionable fields
  • send approved strings on to main database
  • send rejected/flagged strings to TQ
Collaborative card sort from the initial ideation session
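
Read together, these need statements describe a small review-and-routing workflow. Below is a minimal sketch of that flow; the status values and routing targets are my own assumptions, not the team's actual implementation.

    # Hypothetical sketch of the workflow implied by the need statements;
    # status names and destination systems are assumptions.
    from enum import Enum

    class ReviewStatus(Enum):
        PENDING = "pending"    # submitted by the Cleaner, not yet reviewed
        APPROVED = "approved"  # validated by a Data Scientist
        REJECTED = "rejected"  # failed review
        FLAGGED = "flagged"    # errant/questionable fields commented on

    def route_string(status: ReviewStatus) -> str:
        """Return the destination system for a reviewed string."""
        if status is ReviewStatus.APPROVED:
            return "main_database"  # approved strings go to the main database
        if status in (ReviewStatus.REJECTED, ReviewStatus.FLAGGED):
            return "TQ"             # rejected/flagged strings go to TQ
        raise ValueError("string is still pending review")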

Design

During the Ideation phase, I first wanted to facilitate sessions around Val design concepts that might have been overlooked. The previous designer had delivered a single design without exploring other options on the path to that solution. I involved everyone who had taken part in the earlier wireframing activity, as well as the Directors of UX and Product. I showed the system architecture diagram and created stickies for each piece of functionality so the group could start generating ideas. We used InVision Freehand for almost all of our ideation sessions so everyone could contribute easily.

Initial concept that contained a list of strings (left) and a comparison window (right)

The next day, I used the session output and feedback to create a lo-fi, clickable prototype in Figma for the team to evaluate.

Deliver

We arrived at the final design by iterating on the Data Science team's feedback. These were the key rationale points:

  • Users absolutely loved the idea of a "counter" to help them understand how they were progressing through their tasks
  • Users gravitated to the industry norm for comparing data: side-by-side vertical 'stacks'

Working closely with the development team, we composed a set of deliverables for build:

  • Visuals for select use cases the team agreed to support
  • An updated design system
  • A comprehensive functional spec describing multiple states (error, loading, empty, max, etc.; see the sketch after the use cases below)

Example use cases:

1. Approve the first match
2. Approve the second match
3. Reject all matches
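
To give a flavor of the state coverage in the functional spec, here is a loose illustration. The state names mirror the spec's list (error, loading, empty, max), but the structure and messaging are my own sketch, not the shipped copy.

    # Illustrative only; state names follow the spec, but the messaging
    # here is a sketch rather than the actual product copy.
    VAL_STATES = {
        "loading": "Fetching strings from the Cleaner...",
        "empty": "No strings are awaiting review.",
        "max": "Review batch limit reached for this session.",
        "error": "Strings could not be loaded. Please retry.",
    }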

Determine

A post-launch check on the product revealed the following:

  • Increased user task efficiency by 75%
  • "Dirty data" backlog has decreased by 45% in just 1 month

In Retrospect...

I learned that it's important not to simply take a handoff from a fellow designer and run forward. You must ask, "Is there anything missing from our evidence that would inform a better design?"

In addition, getting to know the company's people, processes, and structure was new territory for me. I'm happy to say that not only was the project successful, but I also established good relationships along the way.
