
Object Mode

Object capture was a real threat to the business. Users scanning objects churned at higher rates, gave lower NPS scores, and fared worse in user research than Space users, and we couldn't scale upmarket until we fixed it. I redesigned the experience around guided feedback, and free trial graduation went from 7.89% to 22%.

Company
Polycam (3D scanning platform)

Role
Product Designer + Product Manager

Team
1 iOS engineer

Impact
Free trial graduation: 7.89% → 22%
Guided Mode became default capture experience

What I did

Object scanning looked like a feature, but it was actually a liability. Every metric told the same story: users capturing objects churned faster, rated the app lower, and fared worse in research than those scanning Spaces. They didn't know what to scan, how to move around the object, or when they were done, and when captures failed they blamed the product.

Polycam needed object capture to work if we were going to scale upmarket. I restructured the IA, explored multiple approaches to real-time feedback, and found an ARKit primitive that was designed for something else entirely. I worked with engineering to repurpose it, shipped it as an option, saw low adoption, and then made the case to flip it to the default.


The wrong approaches


I tried hacking LiDAR and Photo Mode together, but performance tanked on older devices. I tried UI overlays like progress indicators and coverage maps, but they added cognitive load without actually helping. The constraint I kept hitting was that anything useful required more processing than we could spare mid-capture. Those explorations clarified what wouldn't work, which was just as valuable.

The explorations ranged widely: text-based instructions (too easy to ignore), small directional arrows (ambiguous about what they were pointing at), and dot indicators (no clear relationship to the object). The point cloud won because it's spatial: you watch coverage build in 3D space as you move, and paired with haptic feedback cycles that fire as you capture, users get continuous confirmation without having to look at the UI.
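To make the haptic cycles concrete, here's a minimal sketch of that loop, assuming UIKit's UIImpactFeedbackGenerator; the class name and milestone threshold are hypothetical, since this write-up doesn't include implementation details.

```swift
import UIKit

// Illustrative only: fire a light haptic "tick" each time the number of
// captured points crosses a milestone, so users get confirmation by feel
// instead of watching the UI. `coverageMilestone` is a made-up threshold.
final class CaptureHaptics {
    private let generator = UIImpactFeedbackGenerator(style: .light)
    private var lastMilestone = 0
    private let coverageMilestone = 250 // points per haptic cycle (hypothetical)

    func capturedPointCountDidChange(_ count: Int) {
        let milestone = count / coverageMilestone
        guard milestone > lastMilestone else { return }
        lastMilestone = milestone
        generator.prepare()        // warm the Taptic Engine to cut latency
        generator.impactOccurred() // one tick per coverage milestone
    }
}
```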


The ARKit discovery

I found a primitive in Apple's documentation that was designed for a different purpose but could show spatial coverage without the performance hit. I worked with engineering directly in Xcode, not "design and hand off" but pairing on feasibility and implementation. The result was visual feedback showing what you've captured and what you're missing, intuitive enough that it needs no explanation.
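The case study doesn't name the primitive, so treat this sketch as an assumption: ARKit's ARFrame.rawFeaturePoints (an ARPointCloud that Apple documents for debugging tracking quality) fits the description of something built for another purpose that can show spatial coverage nearly for free, since ARKit computes the points on every frame anyway.

```swift
import UIKit
import ARKit

// Hypothetical reconstruction, not Polycam's actual code. ARKit already
// extracts feature points for world tracking; rendering them back to the
// user doubles as a live coverage map of what's been captured.
final class CoverageViewController: UIViewController, ARSessionDelegate {
    private let sceneView = ARSCNView()
    private var seen = Set<UInt64>()        // feature-point IDs already counted
    private let haptics = CaptureHaptics()  // from the earlier sketch

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.session.delegate = self
        // Cheapest possible visualization: ARKit's built-in overlay of the
        // same points, originally intended as a tracking-quality debug aid.
        sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let cloud = frame.rawFeaturePoints else { return }
        // Track unique point identifiers as a rough signal of new coverage,
        // then drive the haptic cycles off that running total.
        seen.formUnion(cloud.identifiers)
        haptics.capturedPointCountDidChange(seen.count)
    }
}
```

Because the feedback rides on data ARKit already produces each frame, the mid-capture processing cost stays near zero, which is exactly the budget the earlier overlay explorations kept blowing.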

 

Making it default

We launched it as an option and adoption was low, not because users didn't want guidance but because they didn't know it existed. This is where most features quietly die: technically shipped but practically invisible.

I pushed to make it the default, and graduation rate hit 22%.

The Outcomes

Graduation rate hit 22%, nearly triple where we started. Object capture went from a liability that was dragging down every metric to a viable path for new users. One of our largest enterprise customers expanded their contract after Guided Mode let their teams produce consistent results without extensive training, and what used to require onboarding and custom support became self-serve.