
This puts all the computational load on the server.

Imagine tens of thousands of clients requesting millions of HTML fragments, all assembled by a single server maintaining all the state, while the powerful high-end hardware at the end user's fingertips goes completely to waste.

Not convinced.



How is that fundamentally any different from tens of thousands of clients requesting JSON or whatever other serialized data format?


Insofar as retrieving data and returning it as JSON is way less work for the server than retrieving the data plus rendering it.


For the same data, why should serializing it to JSON be particularly faster than rendering it as HTML? The server is converting the data directly to a byte string in either case, which is about the same amount of work.

HTML is more verbose, so I would guess that JSON serialization is slightly faster, but I doubt there's an order of magnitude difference. (I could be proven wrong though)
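A quick back-of-the-envelope in Python makes the point (the data is made up, and a real template engine adds some overhead, but both paths boil down to building a byte string):

    import json
    import time

    # Hypothetical row data, purely for illustration.
    rows = [{"id": i, "title": f"Item {i}"} for i in range(10_000)]

    def to_json(rows):
        return json.dumps(rows)

    def to_html(rows):
        # Rendering a fragment is also just string building.
        return "<ul>" + "".join(
            f'<li data-id="{r["id"]}">{r["title"]}</li>' for r in rows
        ) + "</ul>"

    for fn in (to_json, to_html):
        start = time.perf_counter()
        out = fn(rows)
        print(fn.__name__, len(out), f"{time.perf_counter() - start:.4f}s")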

I agree that taking HTMX to the extreme where _all_ interactions require a request to the server is too much overhead for interactive web apps. But there's likely a good middle ground where I can have mainly server side rendering of HTML fragments with a small amount of client side state/code that doesn't incur particularly more or less server load.
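A rough sketch of what that middle ground can look like, assuming Flask 2+ on the server and htmx on the client (the route, data, and markup in the comment are invented for illustration):

    from flask import Flask, render_template_string, request

    app = Flask(__name__)

    # Hypothetical in-memory data, standing in for a real store.
    TODO_TITLES = ["buy milk", "write report", "fix bug"]

    FRAGMENT = "<ul id='results'>{% for t in items %}<li>{{ t }}</li>{% endfor %}</ul>"

    @app.get("/search")
    def search():
        q = request.args.get("q", "").lower()
        items = [t for t in TODO_TITLES if q in t]
        # Only this fragment goes over the wire; an input like
        #   <input name="q" hx-get="/search" hx-target="#results"
        #          hx-trigger="keyup changed delay:300ms">
        # swaps it into the page, while trivial UI state (focus,
        # open panels, etc.) stays on the client.
        return render_template_string(FRAGMENT, items=items)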


Converting data to an HTML string is not a performance bottleneck you'll be worrying about. I wasn't worrying about it much in 2000, and you really shouldn't need to in 2023.


> by a single server maintaining all the states

HTTP is stateless. This is the whole point of the hypermedia paradigm.

If you have a page with many partial UI page changes over htmx, then yes, this paradigm puts increased load on the server, but your DB will almost certainly be your bottleneck before this will be, just as in the SPA case.


I'm not talking about network state, but app state.

Yes, with HTMX the server is handling client app state, even something as small as whether a todo is in read or edit state.

That just seems absurd to me, let the client take care of that.


Not really? The server in your example serves the read & the edit components statelessly; the component which the user is viewing exists only on the client.
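A sketch of what that looks like, assuming Flask 2+ and htmx (routes and data are hypothetical). Neither handler remembers which view a given user has open; that "state" is just whichever fragment currently sits in their DOM:

    from flask import Flask, render_template_string, request

    app = Flask(__name__)
    TODOS = {1: "buy milk"}  # stand-in for a real database

    READ_VIEW = (
        '<div id="todo-{{ id }}">{{ title }} '
        '<button hx-get="/todos/{{ id }}/edit" hx-target="#todo-{{ id }}" '
        'hx-swap="outerHTML">edit</button></div>'
    )
    EDIT_VIEW = (
        '<form id="todo-{{ id }}" hx-put="/todos/{{ id }}" '
        'hx-target="#todo-{{ id }}" hx-swap="outerHTML">'
        '<input name="title" value="{{ title }}"><button>save</button></form>'
    )

    @app.get("/todos/<int:id>")
    def read_todo(id):
        return render_template_string(READ_VIEW, id=id, title=TODOS[id])

    @app.get("/todos/<int:id>/edit")
    def edit_todo(id):
        return render_template_string(EDIT_VIEW, id=id, title=TODOS[id])

    @app.put("/todos/<int:id>")
    def save_todo(id):
        TODOS[id] = request.form["title"]
        return render_template_string(READ_VIEW, id=id, title=TODOS[id])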


This avoids unnecessary computation on the client, and it does not substantially add to the server's burden: the state would need to be reconciled there regardless of the markup format used over the pipe. Alpine is available for local flair.


I don't know, but adapting the UI to reflect the edit state of a todo seems like a classic client responsibility imho, not unnecessary.

What's unnecessary to me, however, is sending bytes thousands of miles across the wire to some server to do the same.


Batching in an SPA could alleviate some of that work, but it could be done with Alpine instead, as needed, with a significant cut in overhead, download size, developer ramp-up, etc. It depends, but I think the reduction in complexity is significant.
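For the purely cosmetic toggles, a fragment along these lines keeps the switch entirely in the browser and only talks to the server when something actually needs persisting (hypothetical markup, written as a Python template string to match the server-side sketches; Alpine's x-data / x-show / @click are doing the work):

    # The read/edit flip never leaves the browser; Alpine handles it locally.
    ALPINE_TODO = """
    <div x-data="{ editing: false }">
      <span x-show="!editing" @click="editing = true">{{ title }}</span>
      <input x-show="editing" @keydown.enter="editing = false" value="{{ title }}">
    </div>
    """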


Most users these days are probably using phones, not high end computers.


Most phones have more computing power than all of NASA did in the 80s.

I was specifically thinking of modern smartphones in fact, which are pretty damn fast at executing a little bit of JS.

(Though I agree that some of the bloated bundles resulting from modern frameworks, or their poor usage, definitely go too far.)


The processor on my phone is better than the one in a 2015 MacBook Pro 13" (i5).


The processor on an average phone does not.


Most phones have an awful lot of computational power.



