One example was when they decided to test the idea of sending a particular kind of price-change notification email to users. They created a single checkbox on a form our users could click, and set up a trigger to notify somebody in the Philippines who would construct the emails manually from the database. They then iterated on those emails, creating them by hand the whole time.
Once the email had been refined and the idea proven (i.e. users actually wanted it in the form that existed), it was handed back to me to automate properly, which in that case took about 7-10 days of dev work.
Or not, if it turned out the customers simply weren't interested. There were plenty of experiments like that where I created a checkbox or similar and we quietly removed it two weeks later.
It was a refreshing change from past jobs where I'd spent weeks building a feature users had asked for, one we were so sure would be a game changer, only to see one or two people actually use it.
It's the same advice as building an MVP: do everything manually first, then automate once you've proven the market. Basically Paul Graham's advice of doing things that don't scale. The DoorDash founders hand-delivered food before building an app.
Obviously, that does not apply if the market already exists, which it usually does. In that case you have to be sufficiently different from, and better than, the alternatives.
Why doesn't it apply? Even with plenty of food delivery companies already around, it would still be worth gaining traction manually instead of (or before) spending months building out a software solution. Better still, because you're personally delivering the food, you can talk to customers and ask what problems they currently have with other food delivery competitors, thereby learning what differentiating advantage would work well when you do build the software.
In other words, doing things that don't scale is not just useful for validating the market; it's arguably even more useful for validating your specific implementation of a product, and for de-risking the business model as a whole rather than committing to high capex up front.
If you're building in an established category, you should bring product intuition, honed by deep knowledge of the existing products, and confirmation of their weaknesses from their users. You are not going to win by rapidly cornering the market ("blitzscaling") because it's already cornered. You want to disrupt it with something better and different. I think the focus on "things that don't scale" takes away from this more important requirement.
Of course, you should try to validate assumptions as quickly and cheaply as possible. If "doing things that don't scale" means temporarily not automating things involved in validating assumptions, then fine. But don't manually do things that are not part of that calculus.
thanks! Seems very pragmatic. It seems the secret ingredient is having a responsive userbase large enough to reach statistical significance, though. Something we've struggled with in a recent startup.