Fully agreed.

Once worked in a place that hired a design researcher, and they were amazing: they took the time to dig into what you're talking about here, the actual issues, not the words that came out of people's mouths. They had a lot of good insights, which were ultimately wasted on that particular business, but that's for different reasons.



The most successful company I ever worked at figured out a way to run low-cost user experiments with a quick turnaround and get high-quality feedback from them.

About 4 in 10 failed, 5 in 10 did OK, and with 1 in 10 they would absolutely hit it out of the park - usually on something that looked about as promising as the other nine. They did this frequently enough that they clobbered the competition, who were mostly just doing normal product management stuff like user surveys, etc.

The most interesting thing was that most people, if you'd asked them beforehand, would have said "I don't think you can turn this into an experiment" and immediately adopted a different strategy. Somehow they always figured out a creative way to inject a low-cost experiment where most people would have assumed it wasn't feasible to run any experiment at all. It required quite a lot of pressure/leadership from above.


Concrete examples would be interesting here, insofar as you're able to share them.


One example was when they decided to test the idea of sending a particular kind of price-change notification email to users. They created one checkbox on a form our users could click and set up a trigger that notified somebody in the Philippines, who would construct the emails manually from the database. They then iterated on those emails, creating them by hand the whole time.
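
As a rough sketch of that pattern (not their actual code - the table, form field, addresses, and SMTP host are all invented here for illustration), the "checkbox triggers a human" setup might look something like this in Python:

    # Hypothetical sketch: the checkbox just records intent and pings a human
    # operator, who writes the price-change email by hand.
    import smtplib
    import sqlite3
    from email.message import EmailMessage

    DB = sqlite3.connect("app.db")
    OPERATOR = "ops-team@example.com"  # the person composing emails manually

    def on_checkbox_ticked(user_id: int, opted_in: bool) -> None:
        """Called by the form handler when a user toggles the new checkbox."""
        DB.execute(
            "CREATE TABLE IF NOT EXISTS price_alert_optins (user_id INTEGER, opted_in INTEGER)"
        )
        DB.execute(
            "INSERT INTO price_alert_optins (user_id, opted_in) VALUES (?, ?)",
            (user_id, int(opted_in)),
        )
        DB.commit()
        if opted_in:
            notify_operator(user_id)

    def notify_operator(user_id: int) -> None:
        """Tell a human that a user wants price-change emails; they do the rest."""
        msg = EmailMessage()
        msg["Subject"] = f"User {user_id} opted in to price change notifications"
        msg["From"] = "noreply@example.com"
        msg["To"] = OPERATOR
        msg.set_content("Please compose and send their notification manually.")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

The point being that the trigger is the only code written up front; the email itself stays a manual job until the idea is validated.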

Once the email had been iterated on and the idea proven (i.e., that users actually wanted it in the form it had taken), it was handed back to me to automate properly, which in that case took about 7-10 days of dev work.

Or not, if it turned out the customers simply weren't interested. There were plenty of experiments like that where I created a checkbox or something and then we quietly removed it 2 weeks later.

It was a refreshing change from past jobs, where I'd spend weeks working on a feature users had asked for, one we were sure would be a game changer, only to see one or two people actually use it.


It's the same advice as building an MVP: do everything manually first, then automate once you've proven the market - basically Paul Graham's advice of doing things that don't scale. The DoorDash founders hand-delivered food before building an app.


Obviously, that does not apply if the market already exists, which it usually does. In that case you have to be sufficiently different from, and better than, the alternatives.


Why doesn't it apply? Even if there were a lot of food delivery companies, it would still be good to gain traction by doing it manually instead of (or before) spending months building out a software solution. Even better, because you'd be personally delivering the food, you could talk to customers and ask what problems they currently have with other food delivery competitors, learning which differentiating advantages would work well when building the software.

In other words, doing things that don't scale isn't just useful for validating the market; it's arguably even more useful for validating your specific implementation of a product, and for de-risking the business model as a whole rather than taking on high capex up front.


If you're building in an established category, you should bring product intuition, honed by deep knowledge of the existing products, and confirmation of their weaknesses from their users. You are not going to win by rapidly cornering the market ("blitzscaling") because it's already cornered. You want to disrupt it with something better and different. I think the focus on "things that don't scale" takes away from this more important requirement.

Of course, you should try to validate assumptions as quickly and cheaply as possible. If "doing things that don't scale" means temporarily not automating things involved in validating assumptions, then fine. But don't manually do things that are not part of that calculus.


Thanks! Seems very pragmatic. It sounds like the secret ingredient is having a responsive userbase big enough to reach statistical significance, though - something we've struggled with in a recent startup.
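
(As a rough rule of thumb for what "big enough" means, here's a back-of-the-envelope sample-size calculation for a two-proportion test; the 5% and 6% conversion numbers are made-up placeholders, not figures from the thread:)

    # Back-of-the-envelope sample size per variant for detecting a lift in a
    # conversion rate with a two-proportion z-test.
    from statistics import NormalDist

    def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
        z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
        z_b = NormalDist().inv_cdf(power)          # desired statistical power
        variance = p_base * (1 - p_base) + p_target * (1 - p_target)
        return (z_a + z_b) ** 2 * variance / (p_base - p_target) ** 2

    # Detecting a lift from 5% to 6% conversion needs roughly 8,200 users per arm.
    print(round(sample_size_per_variant(0.05, 0.06)))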


Would love to hear the examples and how they were turned into experiments.



