You don't have interesting taste if you write articles like this.
People are just now figuring out that taste matters for product; at this pace, in 10 years they'll figure out that having novel taste that isn't just a distillation of the echo chamber you live in matters just as much.
No one is sleeping on nano-banana/Gemini Flash; it's heavily over-tuned for editing at the expense of novel generation and maxes out at a pretty low resolution.
Seedream 4.0 is somewhat slept on for being 4K at the same cost as nano-banana. It's not as good at perfect 1:1 edits, but its aesthetics are much better and it's significantly more reliable in production for me.
Models with LLM backbones/omni-modal models aren't rare anymore; even Qwen Image Edit is available as open weights.
Most designers I've worked with don't want Tailwind slop as a starting point.
At most, AI prototypes and images serve the role that a whiteboard drawing or wireframe did before: that's a win, but it's not a monumental change in efficiency.
Ironically, I think AI is already capable of more; no one has built the right harnesses for it yet.