
Matterport's flagship $5K professional camera: one button, a 256×256 display, and a phone app had to do the work of an entire interface. I designed the system across all three surfaces. $13.6M year-one revenue, $28.8M year two.
Company
Matterport (3D capture platform, acquired by CoStar for $1.6B)
Role
Product Designer (only designer on the project)
Team
Firmware engineer, mechanical engineer, industrial design agency, packaging designer
Impact
$13.6M year-one revenue, $28.8M year two. Shipped to John Deere and other major enterprise customers.
First Matterport camera where users never leave the app to connect
Pro3 was Matterport's biggest hardware leap: first LiDAR, first outdoor capture, first swappable batteries. I owned the whole experience: state machine, out-of-box experience (OOBE), app rewrite, packaging. I started by auditing the Pro2, mapping every state on the existing camera against its app before pushing pixels. The bar the project kept coming back to: a $5K professional camera should be learnable without documentation.

Pro2 connection flow

Pro3 connection flow
The Pro2 broadcast its own WiFi network. To connect, users had to leave the app, open Settings, find the camera, join it, then come back. Every setup, every site, every time.
I pushed to move the initial connection to BLE: pair once like AirPods, then the camera hands the phone its WiFi credentials so heavy data still moves over WiFi. Our CEO wasn't sold (WiFi worked, BLE meant new firmware on a tight timeline). I made the case on user time saved and support tickets removed.
The decision held. First-time setup went from a Settings detour to seconds.
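
The mechanics, sketched in Swift with CoreBluetooth. Everything here is a placeholder, not Matterport's actual protocol: the service and characteristic UUIDs, the `CameraProvisioner` class, and the credential payload format are all illustrative assumptions.

```swift
import CoreBluetooth

// Hypothetical UUIDs; the real GATT layout is Matterport-internal.
let provisioningService = CBUUID(string: "FFF0")
let wifiCredentialsChar = CBUUID(string: "FFF1")

final class CameraProvisioner: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private var central: CBCentralManager!
    private var camera: CBPeripheral?  // keep a strong reference, or CoreBluetooth drops it

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // 1. Scan as soon as Bluetooth is ready: no trip to Settings.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [provisioningService])
    }

    // 2. Connect to the first camera advertising the provisioning service.
    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        camera = peripheral
        central.stopScan()
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([provisioningService])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        for service in peripheral.services ?? [] where service.uuid == provisioningService {
            peripheral.discoverCharacteristics([wifiCredentialsChar], for: service)
        }
    }

    // 3. Read the camera's WiFi credentials over BLE...
    func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        for ch in service.characteristics ?? [] where ch.uuid == wifiCredentialsChar {
            peripheral.readValue(for: ch)
        }
    }

    // 4. ...then join that network, so heavy scan data moves over WiFi, not BLE.
    func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard characteristic.uuid == wifiCredentialsChar,
              let payload = characteristic.value else { return }
        // Parse SSID + passphrase from the payload, then join via NEHotspotConfiguration.
        print("Received \(payload.count) bytes of WiFi credentials")
    }
}
```

BLE is only the handshake. The moment credentials land, the phone joins the camera's network and everything heavy rides WiFi.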

I started with a state diagram: twelve states, from Suspend through Capturing to Error, and for each one a clear expression on button, display, and app. Whichever surface a user looked at, the story had to match. The clearest example: tap-to-cancel during a scan. The LiDAR head spins to capture a sphere of data, and there's a real second or two where the hardware is still spinning down after a button press. The display had to acknowledge the press immediately or users would press again, thinking the first tap hadn't registered. That's the kind of decision you only make standing next to a spinning camera.
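
A minimal sketch of how that diagram becomes code, in Swift. The state names, the `SurfaceExpression` type, and the handler are illustrative stand-ins, not the shipped firmware.

```swift
// Hypothetical state set; the real camera had twelve states.
enum CameraState {
    case suspended, ready, capturing, cancelling
    case error(String)
}

// One row of the state table: what each surface says for a given state.
struct SurfaceExpression {
    let display: String     // what the 256×256 display shows
    let appStatus: String   // what the app's status line says
    let ledPattern: String  // how the button LED behaves
}

// One table, three surfaces: whichever one a user looks at, the story matches.
func expression(for state: CameraState) -> SurfaceExpression {
    switch state {
    case .suspended:
        return .init(display: "", appStatus: "Camera asleep", ledPattern: "off")
    case .ready:
        return .init(display: "Ready", appStatus: "Ready to scan", ledPattern: "solid")
    case .capturing:
        return .init(display: "Scanning / tap to cancel", appStatus: "Scanning", ledPattern: "pulse")
    case .cancelling:
        // Acknowledge the press before the head finishes spinning down,
        // or users press again thinking the first tap didn't register.
        return .init(display: "Cancelling", appStatus: "Cancelling scan", ledPattern: "fast pulse")
    case .error(let message):
        return .init(display: "!", appStatus: message, ledPattern: "blink")
    }
}

// Button press during a scan: the state flips immediately; the transition
// back to .ready waits for the hardware to report it has spun down.
func handleButtonPress(state: inout CameraState) {
    if case .capturing = state {
        state = .cancelling  // immediate, user-visible acknowledgment
        // firmware later reports head stopped, then state = .ready
    }
}
```

Tap-to-cancel lives in that last function: the state, and with it every surface, flips the instant the button is pressed, even though the LiDAR head takes another second or two to physically stop.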

The display was 256×256, 2fps, viewed at arm's length in everything from dim interiors to direct sun. The work was a back-and-forth with Allen, our firmware engineer: I'd design an animation, he'd send a video of it running on hardware, and we'd see what fell apart at 2fps that looked fine at 60. Spinners read as static frames. Battery animations looked like glitches. For outdoor readability I cut the palette to high-contrast black, white, and a single accent. The constraint forced a better split: the display became "what's happening right now" and the app carried context. Three surfaces, each doing what it was best at.
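
What that iteration loop converged on, sketched in Swift with illustrative names (`Frame`, `busyFrame`, and `batteryFrame` are stand-ins, not firmware APIs): at 500ms per frame, every frame has to read on its own.

```swift
// Stand-in for a 256×256 bitmap; strings keep the sketch self-contained.
struct Frame { let pixels: String }

// A spinner is useless at 2fps: it reads as a static frame. Cycle a few
// clearly different glyphs instead, so each 500ms tick shows visible change.
let busyFrames = [Frame(pixels: "●○○"), Frame(pixels: "○●○"), Frame(pixels: "○○●")]

func busyFrame(at tick: Int) -> Frame {
    busyFrames[tick % busyFrames.count]
}

// Battery: don't animate the fill (a fill that jumps 500ms at a time looks
// like a glitch). Quantize to a handful of high-contrast states instead.
func batteryFrame(level: Double) -> Frame {
    let bars = max(0, min(4, Int((level * 4).rounded())))
    return Frame(pixels: String(repeating: "█", count: bars) +
                 String(repeating: "░", count: 4 - bars))
}
```

The pattern is the same in both: discrete, high-contrast states instead of smooth motion.
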
The right design decisions on hardware sit at the architecture layer, not the pixel layer. BLE wasn't a technical decision; it was a UX decision with a technical implementation.
And severe constraints don't limit design; they force it: the 256×256 display couldn't carry rich UI, so it didn't try, and the system got clearer because of it.
The bar I held the whole project to: if a user needs the manual, we failed.