> every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric
Must be nice. In my career, all of it on webapps, I've seen a few leaders pop in to ask us to fix a particularly egregious performance issue when the right customers complain, but aside from those finely targeted, limited-attention-span drives to "improve performance", the answer for the past decade or so seems to be: assume everyone is on at least a gigabit connection, stick fingers in ears, and keep adding more node modules. If the developers' disks fill up because node_modules got too big, buy a bigger SSD and keep going. (OK, that last part is slight hyperbole, but I also don't think frontend devs would be deterred from their ravenous appetite for libraries by a full disk.)