The future of AI is Ruby on Rails (seangoedecke.com)
42 points by daviducolo 9 months ago | 35 comments


Hard disagree. The best language for AI is one with strong compiler soundness (i.e. a strong type system) that can reject incorrect code and provide feedback on the correctness of refactors. One can substitute property-based tests, but it won't be as good as soundness + PBT. Speaking from experience here… https://ghuntley.com/oh-fuck && https://ghuntley.com/stdlib
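
For the unfamiliar, here's a minimal sketch of what property-based testing looks like in Go with the standard testing/quick package (the Reverse function and the round-trip property are illustrative, not taken from the linked posts):

    package reverse

    import (
        "reflect"
        "testing"
        "testing/quick"
    )

    // Reverse returns a new slice with the elements of xs in reverse order.
    func Reverse(xs []int) []int {
        out := make([]int, len(xs))
        for i, x := range xs {
            out[len(xs)-1-i] = x
        }
        return out
    }

    // Property: reversing twice is the identity. quick.Check throws
    // randomly generated slices at it, rather than hand-picked cases.
    func TestReverseRoundTrip(t *testing.T) {
        prop := func(xs []int) bool {
            return reflect.DeepEqual(Reverse(Reverse(xs)), xs)
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Error(err)
        }
    }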

I do, however, agree that there are languages that are less suitable for LLMs (Java) due to their verbosity (and engineers sort classes onto the filesystem like a filing drawer; LLMs work best when everything for a domain, including tests, is in a single file).

Increased verbosity in the grammar does lead to token wastage and ineffectiveness.

Having said all that, compile time matters HEAPS. The faster your compile times, the more loops (throwing pancakes at the wall) you can do. Here's where it gets crazy, though: you can loop verbose compilation output, with metrics, back into the LLM and ask it to propose improvements to make the compile faster. It works…
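
To make that loop concrete, here's a rough sketch, assuming a Go project and with sendToLLM standing in for whatever model client you use:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Compile, capturing wall-clock time and compiler output.
        start := time.Now()
        out, err := exec.Command("go", "build", "./...").CombinedOutput()
        elapsed := time.Since(start)

        // Feed the metrics back and ask for proposals.
        prompt := fmt.Sprintf(
            "The build took %s and printed:\n%s\nbuild error: %v\n"+
                "Propose concrete changes to make compilation faster.",
            elapsed, out, err)
        fmt.Println(prompt) // sendToLLM(prompt) in the real loop
    }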


I’ve recently ported some legacy code to Golang with AI assistance, and the process was painless. The compiler caught a few issues along the way, which were easily fixed. I did a 1:1 port, module by module, and gave the model context on intent. Of course, I had to test and audit the code, but I had a pretty good integration test that was easily reused. I wouldn’t have felt nearly as confident without the compiler.


I’m not sure there’s a clear winner here beyond using languages and libraries that are very well represented in the training data. While types can let the LLM try to solve certain problems on its own, they’re also, as you mention, a source of verbosity that provides more surface area for hallucinations.

> LLMs work best when everything for a domain, including tests, is in a single file

Context management goes a long way toward getting better results from LLMs (it’s why I still like Aider over more agentic tools), and the single-file approach gives you less flexibility to compose the context yourself based on what’s important to a given task.


FWIW, I’ve been doing a lot of work with text-to-SQL, and in that space verbosity in naming tables and columns matters a lot, because it adds context about what data is in the table and how it can be used. Think “subs.id” versus “subscriptions.stripe_customer_id”.
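
As a hypothetical illustration (these schemas are made up), descriptive names carry their own documentation into the prompt:

    package schema

    // Terse: the model has to guess what "cid" means.
    const terseSchema = `CREATE TABLE subs (id INT, cid TEXT);`

    // Descriptive: the names alone explain the data and how to join on it.
    const descriptiveSchema = `CREATE TABLE subscriptions (
        subscription_id INT,
        stripe_customer_id TEXT
    );`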


Funny enough, this trick also works on humans!


Completely agree. I’d add one more - if the compiler can describe errors in simple, plain English it’s far more likely that the LLM can fix the problem.

Some languages do this much better than others.


Nailed it. These LLMs do not understand code. They are silly string lookup services. Compilers with soundness tend to be INCREDIBLY verbose with their error messages, which LLMs ingest to get better outcomes on the re-attempt…


Given that I think of myself as roughly a string lookup service, I think there are much stronger places to argue for the incapacity of LLMs than deferring to the ill-defined and generally inapplicable "understand".


This.

I'd also add that code readability goes a long way too. The Rust compiler has better checks than Go's, but when LLMs make mistakes, it's a lot easier to identify and fix generated Go code than Rust.

So the balance between verbosity and readability is important too, in addition to soundness checks. Java and Go are both verbose, but Go is intentionally designed for readability.

And compile time as already mentioned. Go wins there too.

All of these together help you iterate faster on generated code that arrives quickly and is often subtly wrong. LLMs will continue to get less wrong as the available training samples grow over time; what wins out is something for the future to show.


So Ada / SPARK? It has a pretty neat type system.


I’ve stumbled across folks using TLA+ / Lean / Coq. Instead of ramming TypeScript/whatever into the context window, they just stuff the formal proofs into the context window, then generate code off ’em…

That’s the most Chad pattern I’ve discovered so far…
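
For a flavour of what gets stuffed into the window, here's a toy Lean proof (illustrative only; List.reverse_reverse is a stock lemma from Lean's library, nothing domain-specific):

    -- A formal fact the model can rely on instead of re-deriving it
    -- from thousands of tokens of application code.
    theorem reverse_twice (xs : List Nat) : xs.reverse.reverse = xs :=
      List.reverse_reverse xs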


This reminds me of that midwit meme,

Far left: Just build better models

Midwit: Nooo, you need succinct languages to maximise token efficiency; succinctness is a form of compression so that you can ...

Far right: Just build better models


By this logic, the future of AI is golf languages.


the future is perl


My preferred language for building apps is Ruby, and I'm still more likely to pick up RoR as a backend. But I can't see it winning in the AI generation.

I never recommend it with LLMs, because there is a definite context-window and attention problem with a lot of languages. But type safety, plus models being pre-trained on strongly typed code, makes any issues with context size moot. The latest generation of AI dev tools is getting really good at solving problems using the type errors it creates.
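
As a contrived sketch of that loop (the function and values here are invented for illustration):

    package main

    import "fmt"

    // renderPrice formats an amount in cents as dollars.
    func renderPrice(cents int) string {
        return fmt.Sprintf("$%.2f", float64(cents)/100)
    }

    func main() {
        fmt.Println(renderPrice(1999))
        // Had the model generated renderPrice("1999") instead, `go build`
        // would reject it with a type-mismatch error before anything ran,
        // a precise signal an agentic tool can feed straight back into
        // the prompt for the re-attempt.
    }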

Also, a lot of Rails niceties can be achieved in languages like TypeScript with patterns such as decorators, which do an amazing job of DRYing things up and reducing that context.


Why would working with AI need Rails?

I would prefer to bet on Mojo[0].

> The Mojo programming language was created by Modular Inc, which was founded by Chris Lattner, the original architect of the Swift programming language and LLVM, and Tim Davis, a former Google employee. The intention behind Mojo is to bridge the gap between Python’s ease of use and the fast performance required for cutting-edge AI applications.

[0] https://en.wikipedia.org/wiki/Mojo_(programming_language)


There is very little Ruby has going for it. People despair at Python moving with the speed of a sloth on tranquilizers, and Ruby is even slower, with a community the size of Lua's. When people consider languages for anything, it's usually not even on the list.

If anything, using LLMs means we can use less language abstraction for more speed, and have them write C++ or Rust directly without having to deal with their verbosity by hand. Then we can repeal Wirth's law and have our complexity too.


Rails is one of the most battle tested, opinionated, and productive frameworks there is. If you aren’t at least considering it, depending on what you’re trying to do, you’re doing yourself a huge disservice.


So the only argument is that the token window is a disadvantage for languages that use more tokens for an equivalent program. But token windows are only getting bigger, and there's also no need to fit the entire codebase in a single prompt.

If anything, it sounds like Ruby on Rails should've been _the past_ of AI. But it clearly wasn't.


The article never explains the logical leap from Ruby being a concise language to why you'd build AI apps in *Rails*.


Drawing from my experience, modern technologies like Hotwire and Turbo Frames, when integrated with background jobs and Action Mailbox for handling inbound email, alongside mature tooling, make a surprisingly effective (and enjoyable) platform for developing AI applications.


Isn't that just tooling around integrating with any API?


I'm not sure what you mean.


The premise is wrong. You hardly need your entire codebase in the context window to generate code. If your code is properly modularized, all you need in the window is the specific module you're working on, maybe with some interfaces from other modules.



... because the author considers Ruby and Rails to be very concise. And then the theory is that fewer tokens take up less space in the attention window.


Rails isn't a language. Ruby is. Rails is just glue for other libraries in Ruby.


The posted link has an anchor to the bottom of the page. Presumably a mistake?


The future of AI is Rust.


I actually agree with this. It will be true as long as Rust doesn’t break its language semantics. Zig keeps making breaking changes, and each time it does, it essentially means all the pre-trained data is incorrect.

Going forward, it is incredibly important for language designers not to break things. It always was, but now the stakes are higher…


Clojure is quite strong in that respect, but probably too niche for AI.


I have been able to create a brand-new programming language purely based on specs and the eval-loop technique. If I can build an esoteric language using a foundational model (i.e. with no pre-existing training data) and program in it, then Clojure is not too niche.


The "future of AI" is an unending sea of half code and half language in heavily marketed products that claims heaven and earth, with premium VIP stealth access to hell, such that heaven never knows. Which is of course nonsense, but will be a huge hit with all the religious, and will continually mint billionaires to the uncanny dismay of anyone with a brain.


Lol


Jesus, from the quagmire of Python to the frying pan of Ruby.

Can we not just use a language with a decent type system and compiler story? Heck, at this point I'd take C# or TypeScript over Ruby.



