
I'm a fan of anything that allows me to build with javascript that doesn't require a build step.

Modern HTML/CSS with Web Components and JSDoc is underrated. Not for everyone but should be more in the running for a modern frontend stack than it is.


Why does the React development team keep investing its time in confusing features that only reinvent the wheel and cause more problems than they solve?

What do server components do so much better than SSR? What minute performance gain do they achieve over client-side rendering?

Why won’t they invest more in fixing the developer experience that took a nosedive when hooks were introduced? They finally added a compiler, but instead of going the Svelte route of handling the entire state, it only adds memoization?

If I could send a direct message to the React team, it would be: abandon all your current plans and work on allowing users to write native JS control flow in their component logic.

sorry for the rant.


From some dystopic device log:

    [alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
    [debug] Perceived injustice, from authority, in-person
    [info]  Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
    [debug] Generate positive vague memory of past encounter
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine that argues with you or reminds you of things, but consider how much worse it'd be if it derailed your chain of thought before you're even aware you have one.

The argument here is that React has permanently won because LLMs are so heavily trained on it and default to it in their answers.

I don't buy this. The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.

Compilation and bundling made a lot more sense before browsers got ES modules and HTTP/2. Today you can get a long way without a bundler... and in a world where LLMs are generating code that's actually a more productive way to work.

Telling any LLM "use Vanilla JS" is enough to break them out of the React cycle, and the resulting code works well and, crucially, doesn't require a round-trip through some node.js build mechanism just to start using it.

Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way. The problems React solves are mostly about helping developers write less code and avoid implementing their own annoying state-syncing routines. LLMs can spit out those routines in a split second.
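To illustrate the kind of hand-rolled state-syncing routine meant here, a hypothetical minimal store (all names invented, not from any framework) that an LLM can emit on request:

```javascript
// A tiny observable store: subscribers re-render on every state change.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    get: () => state,
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      fn(state); // render once immediately with current state
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// Usage: any DOM node (or anything else) can subscribe and stay in sync.
const store = createStore({ count: 0 });
store.subscribe((s) => console.log("count is", s.count)); // logs: count is 0
store.set({ count: 1 }); // logs: count is 1
```

That is roughly all the "framework" many internal tools need, and it is small enough to regenerate on demand.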


Here's a recent interview with David Letterman (yes, if you're under 40, you think he's overrated) about this suspension:

https://youtu.be/mpAHFlZqIKw


This is fantastic work. The focus on a local, sandboxed execution layer is a huge piece of the puzzle for a private AI workspace. The `coderunner` tool looks incredibly useful.

A complementary challenge is the knowledge layer: making the AI aware of your personal data (emails, notes, files) via RAG. As soon as you try this on a large scale, storage becomes a massive bottleneck. A vector database for years of emails can easily exceed 50GB.

(Full disclosure: I'm part of the team at Berkeley that tackled this). We built LEANN, a vector index that cuts storage by ~97% by not storing the embeddings at all. It makes indexing your entire digital life locally actually feasible.

Combining a local execution engine like this with a hyper-efficient knowledge index like LEANN feels like the real path to a true "local Jarvis."

Code: https://github.com/yichuan-w/LEANN

Paper: https://arxiv.org/abs/2405.08051


> At some point I hope it becomes obvious that well-engineered SSR webapps on a modern internet connection are indistinguishable from a purely client side experience.

I dunno; some webapps really are better done mostly client-side with routine JSON hydration (webmail, for example, or maps), but my recent experimentation with making the backend serve only static files (HTML, CSS, etc.) and dynamic data (JSON) turned out a lot better than a backend that generates HTML pages from templates.

Especially when I want to add MCP capabilities to a system, it becomes almost trivial in my framework, because the backend endpoints that serve dynamic data serve all the dynamic data as JSON. The backend really is nicer to work with than one that generates HTML.

I'm sure in a lot of cases it's the front-end frameworks that leave a bad taste in your mouth, and truth be told, I don't really have an answer for that other than looking into creating a front-end framework to replace the spaghetti pattern most front-ends have.

I'm not even sure it's possible to have non-spaghetti logic in the front-end anymore - surely a framework that achieved that would have been adopted en masse by now?


For what it's worth, I rarely do use the mouse in my terminal, but on those few occasions when I do want to, Bubbletea/Lipgloss applications have had a history of being pretty infuriating for me as a user.

P.S. Keep up the great work! The world needs more IRC.


I have a problem that I test new UI frameworks on; generating celtic knotwork. I've written three of these now (React, VueJS, and Go manipulating SVGs). The first was hard because I had to learn how to solve the problem. The others were hard because I had to learn how that solution changed because of the framework.

There's a joy in rewriting software, it is obviously better the second time around. As the author says, the mistakes become apparent in hindsight and only by throwing it all away can we really see how to do it better.

I also sketch (badly) and the same is true there; sketching the same scene multiple times is the easiest way of improving and getting better.


Ah, but go doesn't have union types.

Yeah, this still doesn't answer: can I use this with AGPL, GPL, LGPL, Apache-2.0, or others? The summary on top seems to imply that it's not AGPL/GPL compatible, since it talks about granting specific types of organisations some monetization privileges. It doesn't simplify what the rules are (what qualifies as use?) enough to spare you from reading the licence itself.

On the first read it's basically: compatible with BSD-like and PPL... and not much else?

It actually seems to break the licensing itself - the colorize gem is GPLv2, which seems incompatible with the PPL. https://github.com/fazibear/colorize/blob/master/LICENSE https://codeberg.org/skinnyjames/hokusai/src/commit/5380728d...


I haven’t seen any cognitive / mood difference for me when taking magnesium supplements in any form.

But then I went to one of those sensory deprivation chambers, where they use magnesium salts to change the water’s buoyancy.

I felt the most content and happy in my life for a week after. It was really bizarre - I would “just not care” about pressures at work, failing personal relationships, any stress really.

And I remained effective, just with a higher EQ because I wouldn’t overthink things.

Tried it some more times with similar effect. So now when I end up in a situation where I build up stress and can’t seem to get to a chill state - just go for a deprivation tank and align myself back, though I try not to get into a situation where I need one altogether.


Could not agree more.

In fact, we’re tackling this exact problem with Hypership (https://hypership.dev) but in the React/Next.js and JavaScript space. Infra, auth, events, analytics, forms, database, API, everything you need to ship a product, all configurable in minutes with no glue code.

Laravel is 100% going all in on their cloud, tightly integrating their entire ecosystem. I mean, have you seen how many products they have?!

The trade-off is clear: speed vs lock-in. I'm betting on flexibility without the setup overhead. Agencies will gobble this service offering up in no time.


There's a bit of an answer to that from Charlie Marsh here: https://hachyderm.io/@charliermarsh/113103579845931079

> What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.

> An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]


How I wish andlabs/ui (actually native widgets) hadn't been abandoned in a half-done state. Now, to build any GUI in Go that doesn't look abominable, I have to use Wails; this is no exception.

The "too ugly" has been my reaction as well.

That said, I can't even elaborate as to why I feel that way.

I'm totally fine with Material design, Fluent design or Apple's HIG, kde or gnome default looks. But this is just... not for me.

I wonder if a ui/ux expert could put into words why so many others feel the same.


For regular Swift developers who make iOS apps, yes it's niche.

C++ interop is probably more important for Apple itself.

Swift compiler speed and error reporting are abysmal and improving them would have a much bigger impact. So far it's not getting any better at all.

If I could give up some level of type inference and get the build time from 3 minutes down to 20 seconds, I would.


Those kinds of mechanisms can't create the kind of correlations that are observed in real particles, so we know that something else is going on.

You can play around with this by trying to design the pair of devices that were described in my second link (https://news.ycombinator.com/item?id=35905284).

To recap, you want to design a pair of devices that each have 3 buttons labeled A, B, and C, a red LED, a green LED, and a counter. The counter starts at 1000. When you press any one of the buttons one of the LEDs flashes and the counter decrements. When the counter reaches 0 the device stops responding.

You should also specify a way that if the devices are brought together the pair of them can be reset.

You can specify any kind of non-quantum hardware you want in the devices. As much computing as you need, as much RAM and ROM and disk as you want, and physical sensors. Include clocks if you need to. You can include true random number generators. It doesn't have to be limited to current technology--it just has to be limited to known physics and not use quantum entanglement.

What you need to achieve with that hardware and whatever algorithms you specify is:

1. Suppose someone has used one of the devices, and recorded the results of a very large number of interactions.

Suppose that a statistician is given a list of 5-tuples (P, F, n, R, t) of those interactions with one of the devices, where P is which button was pressed, F is which LED flashed, n is the value on the counter when the button was pressed, and R is how many times the device has been reset (i.e., R = 0 the first 1000 times the device is used, then when it and the other device are reset R = 1 for the next 1000 uses and so on), and finally t is the time at which the button was pressed.

It should not be possible using any known statistical test on that list of 5-tuples for the statistician to distinguish the device from a device whose algorithm is simply:

  if any_button_pressed():
    r = uniform_true_random_from_0_to_1()
    if r < 0.5:
      flash(GREEN)
    else:
      flash(RED)
2. If the lists of 5-tuples from both devices are matched up by n and R, we should find that (1) if the same button was pressed on both, the same color LED flashed on both; (2) if B was pressed on one and A or C on the other, then 85.355% of the time the same color flashed on both; and (3) if A was pressed on one and C on the other, then 50% of the time the same color flashed.

A couple things to note.

1. The above has to hold even if the users take the devices very far apart from each other before they start pressing buttons.

In particular the users might choose to take the devices so far apart before they start pressing buttons that each has finished their run of 1000 before any possible communications from their device could reach the other.

2. The users might wait a long time before starting a run of 1000, and they might wait a long time between presses within a run.

3. The users are determining when to press independently, so you can't count on them alternating. You can't even count on them overlapping: one user might do all 1000 presses before the other starts.

4. The users might use a true random number generator to determine which buttons to press.
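A sketch (my own illustration, not from the original comment) of why no classical hardware can hit those targets. Any local strategy reduces to each device carrying a pre-agreed color per button for each press n; for every such assignment the "different color" indicators obey a triangle inequality, so any random mixture of assignments obeys it too - but the target statistics violate it:

```javascript
// Check all 8 possible shared color assignments (cA, cB, cC).
const colors = ["R", "G"];
for (const cA of colors)
  for (const cB of colors)
    for (const cC of colors) {
      const dAB = cA !== cB ? 1 : 0; // 1 if A and B would flash differently
      const dBC = cB !== cC ? 1 : 0;
      const dAC = cA !== cC ? 1 : 0;
      console.assert(dAC <= dAB + dBC); // triangle inequality, all 8 cases
    }

// Target mismatch rates from the spec above:
const mAB = 1 - 0.85355; // B pressed on one device, A on the other
const mBC = 1 - 0.85355; // B on one, C on the other
const mAC = 1 - 0.5;     // A on one, C on the other

// Any classical mixture must satisfy mAC <= mAB + mBC, but:
console.log(mAC <= mAB + mBC); // false: 0.5 > 0.2929
```

This is the Bell-inequality obstruction in miniature: the per-pair rates are individually easy to fake, but no local assignment strategy produces all three at once.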


This looks really nice, congrats!

1) I see Kamal was an inspiration; care to explain what differs from it? I'm still rocking custom Ansible playbooks, but I was planning on checking out Kamal after version 2 is released soon (I think alongside Rails 8).

2) I see databases are in your roadmap, and that's great.

One feature that IMHO would be a game changer for tools like this (and is lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.

Even for side projects, a periodic SQL dump stored in S3 is generally not enough nowadays, and any project that gains traction will need to implement some sort of streaming backup, like Litestream (for SQLite) or Barman with streaming backup (for Postgres).

If I may suggest this feature: having this tool provision a Barman server in a different VPS and automate the process of having Postgres stream to it would be a game changer.

One Barman server can actually accommodate multiple database backups, so N projects could do streaming backups to one single Barman server.

Of course, there would need to be a way to monitor whether the streaming is working correctly, and maybe even help the user with the restoration process. But that effectively brings RPO down to near zero (so essentially no data loss) and can even allow point-in-time restoration.
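For a sense of what such a provisioned setup involves, a sketch of a Barman streaming-backup config (section name, hosts, and user names are placeholders; this assumes Barman's standard streaming setup, not anything the tool ships today):

```ini
[myproject]
description = "Streaming backup for myproject"
; regular connection for backups and checks
conninfo = host=db.example.internal user=barman dbname=postgres
; replication connection used by pg_receivewal
streaming_conninfo = host=db.example.internal user=streaming_barman
backup_method = postgres
streaming_archiver = on
slot_name = barman
```

The point is that this is mostly boilerplate per database, which is exactly what makes it a good candidate for automation.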


I did this! I founded EUI [0] at Elastic and helped teams adopt it. I attribute success to a few factors:

1. Publish components, not styles. This has been rehashed in other HN threads, but the idea is to provide Lego blocks to help engineers build UIs more quickly. This might be out of fashion with Tailwind adherents but we found success by treating the framework's primary interface as React/Vue/JS, not CSS.

2. Be transparent and receptive. Share your goals with consuming teams and ask how you can help them. They might ask for specific components or might ask you to help them convert a UI to use the library. By demonstrating that you and the library are there to serve (as opposed to dictate) you'll earn goodwill and your work will have more impact.

3. Have high-quality components. This applies to the UI design (both how the components look and how they behave) and the software design. Are the props intuitive? Do the components compose well? We were fortunate enough to have a strong design and engineering team that did well on these points.

4. Seed the consuming codebase. Patterns in codebases propagate because engineers like to copy/paste. For example, start by migrating all buttons in the consuming codebase over. Then migrate over all forms, modals, and so on. It also helps to take a vertical approach, by migrating a single view entirely over to the library, or building one from scratch. This can get people excited about the library because they can see and experience the end result.

5. Ensure compatibility with the existing codebase. Make sure styles don't collide, make sure the underlying JS libraries are compatible. I chose React for EUI, but the consuming codebase (Kibana) was Angular. We made the decision to convert Kibana to React too, but there was a temporary period where we mounted React components inside of Angular. We needed simple guidelines to support the engineers who were doing that.

[0] https://elastic.github.io/eui


I am definitely target audience for this.

I am often in need of building internal tools, dashboards - simple apps with simple UI that doesn't need to be unique, drive engagement, or whatever. It needs to get the job done and let me move on.

Streamlit is close, but the peculiar approach they take (rerun the script) makes it unwieldy for more complex apps (say, a few related pages with a few dozen components each).

I've been looking for a way to just let me do "GUI app in Python" that get delivered over HTTP and rendered in browser, and Rio is exactly what I was hoping for.

Yeah, no chance Meta will rewrite the FB frontend to use Rio. Also pretty sure I won't be doing any fancy websites in it. But if I can skip dealing with React/Vue/HTMX/whatever on the frontend for some internal thingy, that's a win.

I only tried doing some simple stuff but so far I really like what I see!


I have always used Gradio, and it still comes up short on slick UI, even with themes. So it's nice to see solutions like this that have a slicker UI.

That’s really smart. I was previously a data scientist who only knew python and made the transition last year to become a full stack web dev and I honestly had no idea what I was in for. I probably won’t use your tool, but I’ll happily recommend other data scientists check it out so they don’t have to go through what I did.

I spend 95% of my work time doing backend Python microservices for internal tools, 5% on Terraform for the infrastructure, and 0.0001% of my time building frontends for these tools (I just use plain HTML and JS, and only add a frontend when absolutely necessary). I've built a React app for fun in the past just to learn how that works, but if I had to do it again for work I would basically have to go through the entire learning process again.

So, something like this where I'm writing pure python for my web components could really save me a lot of that churn time, not to mention that many of my coworkers have absolutely no JS experience. I have an upcoming task to build a new frontend and am going to add in a couple days to try this out to see if it meets our needs.


Have read the first few chapters, and it expects that you either read the accompanying source code or implement your own and pass the tests. The pseudocode presented in the book often looks like function calls with the inner details not there in the book. Furthermore, as already pointed out in another comment, the available implementation is in OCaml, which is probably not something many C programmers have experience with.

Nevertheless, I think I'm learning more from this book than most other books I've tried before that are far more theoretical or abstract. I'm still eager to reach the chapter on implementing C types. I think it's a good book, but it requires more effort than something like Crafting Interpreters or Writing a Compiler/Interpreter in Go, while also covering topics not in those books.


I want a way to embed a terminal (it doesn't have to support a myriad terminal emulations, only one) inside a graphical program. MacOS first, but other platforms would be nice.

So, imagine a normal GUI window, but one of the components in it is a terminal window. Is there something like that?

Or should I just use mono font text view?


The game Silicon Zeroes ( https://store.steampowered.com/app/684270/Silicon_Zeroes/ ) teaches this. It starts out with components computing their outputs instantaneously, but then introduces the concept of microticks such that the output of a component is unstable until its inputs are stable enough for some time, so the clock speed must be adjusted according to the largest delay in all circuit paths. The game starts off with simple circuits but very quickly becomes about making a CPU, although the ISA is hard-coded into the game and very small.

Another game, Turing Complete ( https://store.steampowered.com/app/1444480/Turing_Complete/ , https://news.ycombinator.com/item?id=38925307 ), lets you build a CPU from basic gates and a much larger (and customizable) instruction set. It also has the concept of gate delays, though it doesn't visually show the unstable output as Silicon Zeroes does.


Sometimes you do just want to use off-the-shelf libraries for things and not need to re-implement them yourself. If you already have a web app you’re building around, it’s annoying to need to rewrite bits of it - especially rock solid 3rd party libraries - unnecessarily.

Another consideration is that any webview<->native bridge is going to impose a bridge toll in the form of serialization and message-passing (memcopy) overhead. You can call into WASM from JS synchronously, and share memory between JS and WASM without copying. Sync calls from the webview to Tauri native code don't appear to be possible according to https://github.com/tauri-apps/wry/issues/454
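A minimal sketch (my own, runnable in Node or a browser) of both points: a WASM call is a plain synchronous function call, and a typed-array view over WASM linear memory is zero-copy. The byte array is a hand-assembled module exporting add(a, b) = a + b:

```javascript
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
console.log(instance.exports.add(2, 3)); // 5 -- synchronous, same thread, no bridge

// Shared memory: JS writes through a view over the same buffer WASM
// would see, with no serialization or memcopy toll.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
new Uint8Array(memory.buffer)[0] = 42; // visible to any WASM importing this memory
```

Contrast this with a webview bridge, where every call crosses a process or serialization boundary.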

Finally, the security posture of native code running in the main process is quite different from WASM code in the webview. I maintain a WASM build of QuickJS, and I would never run untrusted JS in real native QuickJS, whereas I think it's probably safe enough to run untrusted JS in QuickJS inside a WASM container.


I enjoyed this talk at the DAFx17 conference by Avery Wang, co-founder of Shazam. It goes a little into the theory behind the algorithm, and looks at some of the more practical issues (background noise, etc.): https://www.youtube.com/watch?v=YVTnj3OIhwI

If the site has no users and it's just you, it's like $20/mo.

If you have enough traffic that your bill is > $1,000, put some time into switching.

But I can have my entire site deployed with CI/CD on Github to Vercel in less than an hour. If I'm doing client work, my clients can go preview new work immediately. I can test and build deployments on different branches and send test builds to stakeholders. It's got a lot to like and it ends up saving you a lot of money.

What is right for just starting out is rarely right for scaling up. Too many people wasting too much time on AWS instead of shipping their app first.

