Project IBM


My Work at IBM

Designing Data & AI tools for the people who build the future

Davon Larson: Product Designer

The Setting

In 2019 I joined IBM's Data and AI division, working on Cloud Pak for Data — IBM's unified platform for data management, machine learning, and AI deployment. My users weren't everyday consumers. They were data scientists, ML engineers, system admins, and CIOs at some of the largest enterprises in the world. The stakes were high, the complexity was real, and the expectation was that the software would be as capable as the people using it.

What I found when I arrived was a product that had grown powerful but unwieldy. Features were siloed, workflows were fragmented, and the people doing mission-critical work were spending too much time fighting their tools. My job was to fix that — one hard problem at a time.

FIG. 1: Overview of the Cloud Pak for Data project suite — three distinct design challenges, one platform

FIG. 2: Cloud Pak for Data home — the command center for enterprise data teams

Project Satellite: Taming the Distributed Cloud

The first major challenge was IBM Satellite — a product that let enterprises run Cloud Pak for Data anywhere: on-prem, at the edge, or across multiple cloud providers simultaneously. The dream was a unified hybrid cloud. The reality was that configuring and managing these distributed environments was a nightmare for admins.

The domain was deeply technical. I had to get fluent in Kubernetes, cloud infrastructure ownership models, and the difference between what IBM managed vs. what the customer owned. I ran research sessions with three distinct personas — Admins who wanted granular control, Data Scientists who cared about speed above all else, and Engineers who needed to debug at 3am without calling anyone.

I designed a UI that let admins configure satellite locations across AWS, Azure, and GCP from a single pane of glass — with clear status indicators, ownership labeling, and the right level of control without overwhelming complexity.
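To make the ownership and status concepts concrete, here is a minimal sketch of how a satellite location might be modeled behind such a UI. All names and fields here are hypothetical, for illustration only — this is not IBM's actual Satellite API.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical data model for a satellite location — illustrative only,
# not IBM's actual API. Field names and defaults are assumptions.

class Provider(Enum):
    AWS = "aws"
    AZURE = "azure"
    GCP = "gcp"
    ON_PREM = "on-prem"

class Status(Enum):
    PROVISIONING = "provisioning"
    READY = "ready"
    DEGRADED = "degraded"

@dataclass
class SatelliteLocation:
    name: str
    provider: Provider
    status: Status = Status.PROVISIONING
    # Ownership labeling: who is responsible for what.
    ibm_managed: list = field(default_factory=lambda: ["control plane", "updates"])
    client_managed: list = field(default_factory=lambda: ["hosts", "network"])

def status_summary(locations):
    """Roll up location health for a single-pane-of-glass overview."""
    summary = {}
    for loc in locations:
        summary[loc.status.value] = summary.get(loc.status.value, 0) + 1
    return summary

locs = [
    SatelliteLocation("us-east-prod", Provider.AWS, Status.READY),
    SatelliteLocation("eu-west-edge", Provider.AZURE, Status.DEGRADED),
]
print(status_summary(locs))  # {'ready': 1, 'degraded': 1}
```

The point of the rollup is the same as the UI's: an admin should see health across all providers at a glance, then drill into a single location only when something needs attention.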

The impact was tangible. One CTO reported that the integration of Cloud Pak for Data with Satellite reduced scoring latency on time-critical ML models from 10 seconds to under 1 second.

FIG. 3: Satellite location entry point — where admins configure their distributed cloud environments

FIG. 4: Satellite location management — prebuilt and custom locations across AWS, Azure, and GCP

FIG. 5: Delivery specifications — defining ownership boundaries between IBM and the client

Project OSM: Making Open Source Safe

The second project was sparked by a real fear in the market. In 2017, a single vulnerable open source package — Apache Struts — led to the Equifax breach, exposing the personal data of 143 million people. CIOs and dev team leads were nervous about what packages their engineers were pulling into production. But banning open source wasn't an option — it's the backbone of modern software.

The challenge: how do you preserve the convenience of open source while giving enterprises control?

I designed the Open Source Package Manager (OSM) — a governance layer built into Cloud Pak for Data that let organizations curate, approve, and monitor the open source packages used by their data teams. The system centered on three pillars: Governance (approval workflows and vulnerability tracking), Community (ratings, comments, and internal usage data so teams could learn from each other), and Reporting (package usage across projects and teams).

I redesigned the MVP in approximately two weeks, running rapid lo-fi testing, iterating based on feedback, and shipping a polished hi-fi design. Key research finding: admins wanted the import step and the approval step to be separated — they needed more information before committing to an approval. That insight drove the final information architecture.
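That separated import-then-approve flow can be sketched as a small state machine. This is a hypothetical illustration of the workflow described above, not the actual OSM implementation; the state names and transitions are assumptions.

```python
from enum import Enum

# Hypothetical sketch of a separated import -> review -> approve flow.
# Not the actual OSM implementation; names are illustrative.

class PackageState(Enum):
    IMPORTED = "imported"          # pulled into the catalog, not yet usable
    UNDER_REVIEW = "under_review"  # admins inspect vulnerabilities, licensing
    APPROVED = "approved"
    REJECTED = "rejected"

# Importing and approving are distinct steps, so there is always a
# review stage in between where admins can gather more information.
TRANSITIONS = {
    PackageState.IMPORTED: {PackageState.UNDER_REVIEW},
    PackageState.UNDER_REVIEW: {PackageState.APPROVED, PackageState.REJECTED},
    PackageState.APPROVED: set(),
    PackageState.REJECTED: {PackageState.UNDER_REVIEW},  # re-review after a fix
}

def advance(current, target):
    """Move a package to a new state, enforcing the allowed transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move {current.value} -> {target.value}")
    return target

state = PackageState.IMPORTED
state = advance(state, PackageState.UNDER_REVIEW)
state = advance(state, PackageState.APPROVED)
print(state.value)  # approved
```

Encoding the transitions explicitly is what makes the design decision visible: there is no edge from `IMPORTED` straight to `APPROVED`, which is exactly what the admins asked for.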

The result: package approval time went from a weeks-long, ad-hoc spreadsheet process to as little as a few hours — sometimes minutes.

FIG. 6: OSM browse view — 567 packages organized by approval status, vulnerabilities, and community ratings

FIG. 7: Package catalog — filtering, requesting, and managing the approved open source library

FIG. 8: Package detail view — vulnerabilities, community reviews, licensing, and usage all in one place

Project Operator View: One Dashboard to Rule Them All

The third project tackled a problem that had been quietly frustrating system admins for years. Cloud Pak for Data had jobs and workloads running across Watson Studio, DataStage, Data Virtualization, and more — but there was no single place to see what was happening. Admins were jumping between tools, losing context, and flying blind when something broke in production.

My user for this project was "Oscar" — a composite persona of the system admins I interviewed. Oscar had been on the job 7–10 years. He was running data pipelines in production, monitoring deployments across spaces, and responsible for uptime he couldn't always control. His quote stuck with me: "I've got to have visibility into thousands of environments, infrastructure activity, and deployment status all at the same time."

I designed the Operator View: a universal jobs and workloads dashboard that aggregated scheduled jobs, active runs, deployments, and completion history into a single page. The design went through multiple iterations — from lo-fi table explorations to a final interface with histogram visualizations, time-based filters, and space-level drill-downs — each round informed by usability research with real admins and release engineers.

The final design gave operators like Oscar everything they needed: a scheduled jobs timeline for the next 8 hours, live status of active runs, a completion log with success/failure breakdowns, and the ability to stop, restart, or collect logs without leaving the page.
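The aggregation behind a view like that can be sketched in a few lines. Everything below is illustrative — the service names, fields, and functions are hypothetical, not Cloud Pak for Data's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of a unified jobs/workloads rollup.
# Service names and fields are hypothetical assumptions.

@dataclass
class JobRun:
    service: str          # e.g. "Watson Studio", "DataStage"
    name: str
    status: str           # "scheduled" | "running" | "succeeded" | "failed"
    scheduled_for: datetime

def upcoming_jobs(runs, now, window_hours=8):
    """Jobs scheduled within the next N hours, across all services."""
    horizon = now + timedelta(hours=window_hours)
    return sorted(
        (r for r in runs
         if r.status == "scheduled" and now <= r.scheduled_for <= horizon),
        key=lambda r: r.scheduled_for,
    )

def completion_breakdown(runs):
    """Success/failure counts for the completion log."""
    return {
        "succeeded": sum(r.status == "succeeded" for r in runs),
        "failed": sum(r.status == "failed" for r in runs),
    }

now = datetime(2021, 6, 1, 9, 0)
runs = [
    JobRun("Watson Studio", "train-model", "scheduled", now + timedelta(hours=2)),
    JobRun("DataStage", "etl-nightly", "failed", now - timedelta(hours=1)),
    JobRun("Data Virtualization", "refresh", "succeeded", now - timedelta(hours=3)),
]
print([r.name for r in upcoming_jobs(runs, now)])  # ['train-model']
print(completion_breakdown(runs))                  # {'succeeded': 1, 'failed': 1}
```

The design insight maps directly onto the code: the dashboard's value is not any one query but that the same normalized `JobRun` shape covers every service, so one timeline and one breakdown work for all of them.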

What I Took Away

Working at IBM taught me three things I carry with me into every project:

Treat engineers like your users — with empathy. They have frustrations, mental models, and pain points just like any other user. The only difference is they'll tell you exactly what's wrong if you ask them the right way.

With enough patience, you can figure out anything. I came in knowing nothing about Kubernetes, data science workflows, or distributed cloud architecture. By the time I left, I was designing confidently in all three domains. The learning curve is real, but it's not a wall — it's a ramp.

Ambiguity is an opportunity. When I arrived, nobody fully understood how Satellite should work from a UX perspective. That blank space was scary and exciting in equal measure. Some of my best work happened in that space between "we don't know" and "we shipped it."

Full Case Study

Download the complete deck for a deeper look at the process, research, and final designs.

Download PDF