Three Acts
My time at Google has spanned three distinct chapters — each one pushing further into territory I didn't know existed when I arrived. I've designed the infrastructure millions of businesses run on, shaped how people interact with AI through hardware, and now work at the bleeding edge of machine learning research tooling. Each chapter has required a different version of me as a designer.
Act I — Core Compute: The Infrastructure Behind Everything
My first two years at Google were spent on Google Cloud Platform's Core Compute team. If you've ever spun up a virtual machine on GCP, you've used what my team built. The Console is the primary control surface for millions of cloud customers — from solo developers to Fortune 500 engineering orgs — and the VM creation flow is one of the most-used workflows in the entire product.
The complexity here is enormous. A single VM creation form might surface dozens of configuration options: machine type, CPU platform, boot disk, networking, service accounts, GPU attachments, sole-tenant nodes. The challenge was making all of that power accessible without making it terrifying. Every design decision had to work for a startup founder configuring their first server and a staff engineer at a bank who's done it ten thousand times.
One of the biggest projects I tackled was the Future Reservations flow — a new product that let customers reserve compute capacity in advance for a specific future date. For companies running massive ML training jobs, the ability to guarantee access to thousands of GPUs on a specific date is worth a significant premium. This was a net-new workflow with no prior art in the console, which meant I had to work closely with PMs and engineers to define not just the UI but the entire user mental model from scratch. What does a "reservation" mean? When does it activate? What happens if your needs change? All of that had to be answered in the design before a line of code was written.
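Those mental-model questions map directly onto a data model. Here's a minimal, purely illustrative sketch in Python — the field names and states are my own shorthand, not the real GCP API — showing how "when does it activate?" becomes a simple lifecycle:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

class ReservationState(Enum):
    PENDING = auto()   # purchased, but the start date hasn't arrived
    ACTIVE = auto()    # capacity is held for you right now
    EXPIRED = auto()   # the reserved window has ended

@dataclass
class FutureReservation:
    machine_type: str      # e.g. a GPU machine shape
    gpu_count: int
    start: datetime        # when guaranteed capacity begins
    duration: timedelta    # how long it's held

    def state(self, now: datetime) -> ReservationState:
        """Answer 'what is my reservation doing right now?'"""
        if now < self.start:
            return ReservationState.PENDING
        if now < self.start + self.duration:
            return ReservationState.ACTIVE
        return ReservationState.EXPIRED
```

The design work was essentially making this lifecycle legible in the UI: a user should always know which of these three states they're in, and what changing their needs would mean in each one.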
I also owned the Committed Use Discounts (CUD) flows — the mechanism by which GCP customers commit to a certain spend level in exchange for deep discounts. These flows are financially significant: a single committed use contract can represent millions of dollars. The design had to be crystal clear about what users were agreeing to, with zero ambiguity about terms, duration, resource scope, and cancellation implications. I worked closely with legal, finance, and enterprise sales to ensure the experience was not only usable but defensible.
Act II — Pixel Buds: Designing for Voice and AI
After two years in the cloud, I moved to the Pixel Buds team — a completely different world. Pixel Buds sit at the intersection of hardware and software, and the design challenges are unlike anything in traditional UI work. There's no screen. The interface is sound, touch, and motion. And increasingly, it's AI.
My focus was on AI voice interactions — designing the end-to-end experience of how users invoke, converse with, and receive responses from Google Assistant through the buds. This meant thinking about interaction latency (a 200ms delay that's imperceptible on a phone feels like an eternity when it's happening in your ear), audio feedback design, gesture grammar, and the edge cases that emerge when your interface is worn on a human body during real life.
The work pushed me to think about design in entirely new dimensions. Prototyping meant building audio demos, not Figma screens. User testing meant watching people walk around, exercise, and cook while interacting with the device. Success metrics were things like natural language comprehension rates and re-activation frequency — not click-through rates or time-on-page.
Act III — Machine Learning Tools: Designing for the Frontier
I now work on some of the most technically complex products I've ever encountered: internal tools for Google's machine learning researchers and engineers. There are two main focus areas.
The first is LLM profiling tooling — software that helps ML engineers understand exactly where time and compute are being spent during large language model training and inference. When you're training a model with hundreds of billions of parameters, inefficiency isn't just slow — it's astronomically expensive. A 5% improvement in utilization across a training cluster can translate to millions of dollars saved and weeks of iteration time recovered. I design the interfaces that make those inefficiencies visible and actionable.
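The back-of-the-envelope math behind that claim is straightforward. With hypothetical numbers (a $1M/day cluster over a 90-day run — illustrative, not real figures), a 5% utilization gain looks like this:

```python
def utilization_savings(cluster_cost_per_day: float,
                        days: int,
                        utilization_gain: float) -> float:
    """Dollars of compute recovered by a utilization improvement.

    A cluster that is `utilization_gain` more efficient does the
    same work for proportionally less spend.
    """
    return cluster_cost_per_day * days * utilization_gain

# Hypothetical: a $1M/day training cluster over a 90-day run,
# with a 5% utilization improvement.
saved = utilization_savings(1_000_000, 90, 0.05)  # ~$4.5M recovered
```

At that scale, a profiling interface that surfaces even one stall pattern pays for itself many times over.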
The second is fleet management for large-scale Reinforcement Learning training clusters on TPUs. Google operates some of the largest TPU fleets in the world, and orchestrating RL training across thousands of chips simultaneously is a discipline unto itself. The engineers running these clusters need to monitor training health, catch divergence early, manage preemptions and restarts, and reason about job state across distributed systems — all in real time.
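To make "reason about job state" concrete, here's a hypothetical sketch of the kind of state machine such a dashboard sits on top of. The states, events, and transitions are illustrative inventions for this post, not Google's actual scheduler:

```python
from enum import Enum, auto

class JobState(Enum):
    RUNNING = auto()
    PREEMPTED = auto()    # chips reclaimed by a higher-priority job
    RESTARTING = auto()   # reloading from the last checkpoint
    DIVERGED = auto()     # loss blew up; needs human attention
    FAILED = auto()

# Which events are legal from which state -- the kind of invariant a
# monitoring UI has to encode so it never renders an impossible status.
TRANSITIONS = {
    (JobState.RUNNING, "preemption"): JobState.PREEMPTED,
    (JobState.RUNNING, "loss_divergence"): JobState.DIVERGED,
    (JobState.PREEMPTED, "chips_reacquired"): JobState.RESTARTING,
    (JobState.RESTARTING, "checkpoint_loaded"): JobState.RUNNING,
    (JobState.RESTARTING, "checkpoint_corrupt"): JobState.FAILED,
}

def advance(state: JobState, event: str) -> JobState:
    """Apply an event, rejecting transitions the model doesn't allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state.name}")
```

The design problem is the visual analogue of this table: when a job bounces from RUNNING to PREEMPTED to RESTARTING across thousands of chips, the engineer watching needs to see that as one coherent story, not a wall of flickering statuses.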
Designing for this audience — researchers and engineers who are among the most technically sophisticated people on earth — is its own craft. They have zero patience for tools that obscure information in the name of simplicity. The goal isn't to dumb things down. It's to make the right information findable at exactly the right moment, with enough context to act on it decisively.
What Ties It Together
Three very different products. Three very different users. But the throughline is the same: design at Google means operating at a scale where the decisions you make matter in ways that are hard to fully comprehend.
When I changed a label in the VM creation flow, it affected how millions of businesses understood their infrastructure choices. When I designed a voice interaction pattern for Pixel Buds, it shaped how people talked to AI in moments of their real lives. When I design an LLM profiling interface, I'm influencing how the people building the next generation of AI models understand and optimize their work.
That weight never goes away. And honestly, I wouldn't want it to.
