I've been using a development server for about 9 years and the best thing I ever did was move to a machine with a low-power Xeon D for a time. It made development painful enough that I quickly fixed the performance issues I was able to overlook on more powerful hardware. I recommend it, even just as an exercise.
For similar reasons, in the Google office I worked in you had the option to connect to an intentionally crappy wifi network that simulated a 2G connection.
> I've been coding for decades already, but if I need to put something together in an unfamiliar language? I can just ask AI about any stupid noob mistake I make.
So you aren’t still learning foundational concepts or how to think about problems, you are using it as a translation tool. Very different, in my opinion.
It's not, though. Processes can be supervised, and crashes can just lead to "restart with good state" behavior. It's not that you don't try to handle any errors at all; it's that you can be confident that anything you missed won't bring the system down (minimal sketch below).
And Elixir is strongly typed by most definitions. Perhaps you mean static?
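For anyone who hasn't seen it, here's roughly what that supervision setup looks like. This is only a sketch, and the module names (MyApp.Application, MyApp.Worker) are made up:

    defmodule MyApp.Application do
      use Application

      def start(_type, _args) do
        # If the worker crashes, the supervisor restarts it from a
        # known-good initial state; the rest of the system never notices.
        children = [MyApp.Worker]
        Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
      end
    end

    defmodule MyApp.Worker do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      # init/1 is the "good state" the process comes back to after any crash.
      def init(_opts), do: {:ok, %{}}

      # An unhandled error here kills only this process; the supervisor
      # restarts it and everything else keeps running.
      def handle_call(msg, _from, state), do: {:reply, msg, state}
    end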
You can be more confident. But remember that time an Ericsson switch crashed upon handling a message that it sends to adjacent switches every time it restarts? That crashed the whole network, and you could still do that in Erlang.
LiveView uploads are baked in, previews and all. Everything else you list is included in the Flop library, if you want something off the shelf. In rails you are still including Kaminari or whatever other gems for all this too, so this is really no different.
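For the curious, the upload piece is roughly this much LiveView code. A minimal sketch only: the module name, the :avatar upload name, and the destination path are placeholders, and the form/validate wiring is omitted:

    defmodule MyAppWeb.ProfileLive do
      use MyAppWeb, :live_view

      def mount(_params, _session, socket) do
        # allow_upload/3 gives you accepted extensions, max entries,
        # progress tracking, and client-side previews out of the box.
        {:ok, allow_upload(socket, :avatar, accept: ~w(.jpg .jpeg .png), max_entries: 1)}
      end

      def handle_event("save", _params, socket) do
        # Consume the finished uploads, e.g. copy them somewhere permanent.
        consume_uploaded_entries(socket, :avatar, fn %{path: path}, _entry ->
          dest = Path.join("priv/static/uploads", Path.basename(path))
          File.cp!(path, dest)
          {:ok, dest}
        end)

        {:noreply, socket}
      end
    end

The template side is basically just <.live_file_input upload={@uploads.avatar} /> inside the form.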
That's so disappointing to hear. I have an intern who hadn't touched Elixir 4 weeks ago and who is already making meaningful PRs. She's done the PragProg courses and leans a bit on Copilot/Claude, but she's proving how quickly one can get up to speed on the language and contribute. To hear that a major company couldn't bring resources up to speed shows, to me, a failure of the organization, not of the language or ecosystem.
Yeah it's irritating enough when humans do it, it's so transparently insincere. Just help me with my problem.
I guess I am just old now but I hate talking to computers, I never use Siri or any other voice interfaces, and I don't want computers talking to me as if they are human. Maybe if it were like Star Trek and the computer just said "Working..." and then gave me the answer it would be tolerable. Just please cut out all the conversation.
I agree it seems transparently insincere, yes, but it's done because it works on some people, who either don't detect it or expect it as a politeness norm, while the ones who see it as insincere just ignore it and move on. So, on net, you win by doing this: it rarely if ever costs you, so you only have upside.
> The ones who see it as insincere just ignore it and move on.
Except I "just move on" to another product.
The only person I know who doesn't find this pretension annoying is my 90-year-old mother. I don't have time to waste on any company that wastes my time with pointless cut-and-paste babble. And any company intentionally catering to my 90-year-old mother as a primary target customer is clearly signaling they aren't for me.
A decade from now such blatant condescension from an AI will be a trope: "OMG, that's so mid-2020s AI it's painful."
It will be a trope eventually. But like I said, the cost-benefit analysis puts it generally in the benefit camp. And if every new product also does this, are you actually going to stop using the product? In most cases I think people put up with it and just minimize the interactions that lead to it (another benefit for the support team of wording things this way, since they have to field fewer support requests).
It's also impossible to turn off in my experience. I have like 5 lines in my ChatGPT profile to tell it to fucking cut off any attempts to validate what I'm saying and all other patronizing behavior. It doesn't give a fuck, stupid shit will tell me that "you are right to question" blah-blah anyway.
Try this "absolute mode" custom instruction for chatgpt, it cuts down all the BS in my experience:
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes.
Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias.
Never mirror the user's present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered - no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking.
Model obsolescence by user self-sufficiency is the final outcome.
What's more likely to be a problem is the request to be concise.
For some reason, this still seems not to be widely known even among technical users: token generation is where the computation/"thinking" in LLMs happens! By forcing it to keep its answers short, you're starving the model of compute, making each token do more work. There's a small, fixed amount of "thinking" an LLM can do per token, so the more you squeeze it, the less reliable it gets, until eventually it can't "spend" enough tokens to produce a reliable answer at all.
In other words: all those instructions to "be terse", "be concise", "don't be verbose", "just give the answer, no explanation" - or even asking for the answer first, then explanations - are all just different ways to dumb down the model.
I wonder if this can explain, at least in part, why there are so many conflicting experiences with LLMs - in every other LLM thread, you'll see someone claim they're getting great results at some task, and someone else say they're getting disastrously bad results with the same model on the same tasks. Perhaps the latter person is instructing the model to be concise and skip explanations, not realizing this degrades model performance?
(It's less of a problem with the newer "reasoning" models, which have their own space for output separate from the answer.)
If that's correct, then it's a significant problem with LLMs that needs to be addressed. Would it work to have the agent keep the talky, verbose answer to itself and only return a final summary to the user?
That's what the "reasoning" models do, effectively. Some LLM services hide or summarize that part for you, other return it verbatim, and ofc. you get the full thing if you're using a local reasoning model.
I imagine they design these AIs to condescend to you with the "you're right to question..." language to increase engagement.
That said, they probably also do this because they don't want the model to double down, start a pissing contest, and argue with you like an online human might if questioned on a mistake it made. So I'm guessing the patronizing language is somewhat functional in influencing how the model responds.
This is straight out of the movie "Her", when OS1 said something like this. And the voice and intonation are eerily similar to Scarlett Johansson's. As soon as I heard this clip, I knew it was meant to mimic that.
I don't know, man. It makes me inclined to shut off that conversation, because it sounds like something a nitpicky, “nose all over your business”, tut-tutting Karen would say. It doesn't convey competence; it conveys someone trying to manage you using a playbook.
Look at it this way—if someone were trying to sabotage the entire tech support industry, convincing companies to ditch all their existing staff and infrastructure and replace them with our cheerfully unhelpful and fault-prone AI friends would be a great start!
> Performance of what, exactly? Hard to beat the concurrency model and performance under load of elixir.
The performance of my crummy web apps. My understanding is that even something like ASP.NET or Spring is significantly more performant than either Rails or Phoenix, but I'd be very happy to be corrected if this isn't the case.
I appreciate the BEAM and its actor model are well adapted to be resilient under load, which is awesome. But if that load is substantially greater than it would be with an alternative stack, that seems like it mitigates the concurrency advantage. I genuinely don't know, though, which is why I'm asking.
Some of the big performance wins don’t come from the raw compute speed of Erlang/Elixir.
Phoenix templates are significantly faster than Rails's because they're compiled and leverage Erlang's IO lists, so you will basically never think about caching a template in Phoenix.
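Rough idea of the IO list trick (not Phoenix's actual generated code): the compiled template boils down to a nested list where the static chunks are shared binaries and only the dynamic parts are new data, and the VM can write that straight to the socket without ever building one big string. Something like:

    name = "world"

    # Roughly what a compiled template evaluates to: static chunks plus
    # the dynamic values, as a (possibly nested) iolist. No concatenation.
    rendered = ["<h1>Hello, ", name, "!</h1>"]

    # The BEAM writes iolists directly to sockets/files as-is...
    IO.puts(rendered)

    # ...and only flattens to a single binary if you explicitly ask.
    IO.iodata_to_binary(rendered)
    #=> "<h1>Hello, world!</h1>"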
Most of the Phoenix “magic” is just code/configuration in your app and gets resolved at compile time, unlike Rails, which has layers and layers of objects to resolve on every call.
Generally Phoenix requires way less RAM than Rails and can serve orders of magnitude more users on the same hardware.
The core Elixir and Phoenix libraries are polished and quite good, but the ecosystem overall is pretty far behind Rails in terms of maturity. It's manageable, but you'll end up doing more things yourself. For things like API wrappers that can actually be an advantage, but for others it's just annoying.
ASP.NET and Spring Boot seem to only have theoretical performance; I'm not sure I've ever seen it in practice. Rust and Go are better contenders IMO.
My general experience is Phoenix is way faster than Rails and most similar backends and has good to great developer experience. (But not quite excellent yet)
Go might be another option worth considering if you're already open to something like Java or C#.
Thank you, I really, really appreciate the thoughtful answer.
I've written APIs in Rust; they were performant, but the dev experience is needlessly painful, even after years of experience with the language. I'm now using Rails for a major user-facing project, and while the dev experience is all sunshine and flowers, I can't shake the feeling that every line I write is instant tech debt. Refactoring the simplest Rails-flavoured Ruby code is a thousand times more painful than refactoring even the most sophisticated system in Rust. I yearn for some kind of sensible mid-point.
Elixir seems extremely neat, but I've been blocked from seriously exploring it by (a) a sense that it may not be any more performant than Ruby, so why give up the convenience of the latter, and (b) not having seen any obvious improvement on Ruby's hostility to gradual typing / overuse of runtime metaprogramming, which is by far my biggest pain point. I'm chuffed to hear that the performance is indeed better, that the magic in Phoenix happens at compile time, and that gradual types are being taken seriously by the language leadership.
There are three reasons to choose Elixir, or perhaps any technology:

1. The community and its values.
2. Because you enjoy it.
3. Because the technology fits your use case.

Most web apps fit. 1 and 2 are personal, and I'd take a 25% pay cut to not spend my days in ASP or Spring, no offense to those who enjoy it.
I'm sure they meant an actual statically typed language. I agree that dynamic languages are fun and productive ... until the codebase becomes big and complex, and then not knowing the shape of any of your data quickly makes everything a nightmare to understand and debug.