> The comment I responded to hand waved away consuming almost 50% of your pps on heartbeats every 30 seconds as "no big deal".
> The network saturation is just a necessary cost of running such a massive cluster.
I think this actually answers it perfectly.
1. If you are running 1K distributed nodes, you have to accept that a cluster of that size comes with some overhead. No one is hand-waving this away; it's just being acknowledged that this level of complexity has a cost.
2. If heartbeats are almost 50% of your pps, you are trying to run a 1K-node cluster over 1GbE. No one would do this in production and no one is claiming you should.
3. If your system can tolerate it, change the heartbeat interval to whatever you want (see the sketch after this list).
4. Don't use distributed Erlang if you don't have to. Erlang/Elixir/Gleam work perfectly fine for non-distributed workloads, as do most languages that can't distribute in the first place. But if you do need a distributed system, you are unlikely to find a better way to do it than the BEAM.
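As a rough sketch of point 3: the tick interval for distributed Erlang is the kernel application's net_ticktime parameter (60 seconds by default; a node is considered unreachable if nothing has been heard from it within that window). If slower failure detection is acceptable, it can be raised:

```elixir
# Raise net_ticktime to 120 seconds at runtime; connected nodes
# transition to the new value over a few tick cycles.
:net_kernel.set_net_ticktime(120)

# Inspect the currently configured value.
:net_kernel.get_net_ticktime()

# For a release, the same thing can be set up front in vm.args:
#   -kernel net_ticktime 120
```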
Basically, it seems you are taking issue with something that 1) is that way because that's how things work, and 2) is not how anyone would actually use it.
Do you though? M4 is what is on the market now and this chip is just coming out. Maybe they are on different processes, but you still have to compare things at a given point in time.
That's one reason I like the stainless steel (now titanium) models. They have sapphire crystal instead of glass and I haven't had scratches in years. Scratches on the body are just patina.
Sorry you are having so many issues. That has not been my experience with elixir-ls, either locally or in Codespaces. Just want to say that when you do get it working, it does indeed have that and many more features. If you are interested, the Elixir Slack is very active and helpful, and there is a #language-server channel.
Thank you! It seems the Elixir LSP does provide useful features... I'm worried that even after getting it working on my Mac, the same troubles will happen again when I move to remote machines. I'll try the Elixir Slack!
> except if the tokenizer or whatever doesn't follow a particular format but in that case you just upload it to some free web service and make a PR with the result and reference that version hash specifically and it'll work.
Out of the box, Phoenix applications respond to simple HTTP requests in times measured in microseconds. What appreciable improvement over that do you get with Python? And considering how much of your total request time is spent outside the language (db calls, network latency, etc.), why would you decide on a language purely on a minor speed improvement in a small part of the overall picture? I’ll gladly trade what might amount to a few ms of request time for the concurrency model, scalability, and latency characteristics of Elixir and the BEAM.
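For a sense of the baseline being described, here is a minimal sketch (module names are hypothetical, not from the thread) of the kind of trivial endpoint those microsecond figures refer to in a generated Phoenix app:

```elixir
# Hypothetical controller in a standard generated Phoenix app.
defmodule MyAppWeb.PingController do
  use MyAppWeb, :controller

  # Almost no time is spent in Elixir here; for real endpoints the
  # dominant costs are the database and the network, not the language.
  def ping(conn, _params) do
    text(conn, "pong")
  end
end

# Wired up in router.ex with:
#   get "/ping", PingController, :ping
```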
Yeah, I don't get this argument either. It's like people are still stuck in micro-benchmark-land. Sub-1ms responses are the norm in Phoenix, and I tend to never see anything above 100µs for LiveView messages. Sure, they're no-ops, but calling it slow is... well, strange.
Sure, you don't want to do number crunching in pure Elixir, but I'm always curious about the actual needs people have, having once been one of the "performance is everything" kind of developers myself.
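To make "LiveView messages" concrete, a minimal sketch (hypothetical module, standard Phoenix.LiveView callbacks) of the kind of near-no-op event round trip being measured:

```elixir
defmodule MyAppWeb.PingLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="ping">ping (<%= @count %>)</button>
    """
  end

  # The server-side work for a click like this is a tiny state update;
  # this is the sort of handler that finishes in well under a millisecond.
  def handle_event("ping", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end
end
```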
For number-crunching there's Nx. As far as I can tell it's a solved problem, with easy handling of things like batching over clusters of GPUs as a bonus.
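A small sketch of what that looks like, assuming `{:nx, "~> 0.7"}` in the deps (and optionally EXLA configured as the compiled backend):

```elixir
defmodule Crunch do
  import Nx.Defn

  # defn compiles this into a numerical routine; with a backend such as
  # EXLA it runs outside the BEAM, on CPU or GPU.
  defn mean_squared_error(a, b) do
    diff = Nx.subtract(a, b)

    diff
    |> Nx.multiply(diff)
    |> Nx.mean()
  end
end

Crunch.mean_squared_error(
  Nx.tensor([1.0, 2.0, 3.0]),
  Nx.tensor([1.0, 2.5, 2.0])
)
# => a scalar Nx.Tensor (about 0.42 for this input)
```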
I built this last March. It captures audio from a live HLS stream and transcribes and translates it into 18 languages on the fly. Used by a customer with about 25K international employees for their internal events. Works surprisingly well.
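The thread doesn't name the tooling, so purely as a sketch: the capture step could shell out to ffmpeg to pull mono 16 kHz PCM from the HLS playlist and hand fixed-size chunks to whatever transcription/translation service is in use (the transcribe_and_translate callback below is hypothetical):

```elixir
defmodule HlsCapture do
  @moduledoc """
  Sketch: stream audio out of a live HLS playlist with ffmpeg and feed
  fixed-size chunks to a (hypothetical) transcription callback.
  """

  # ffmpeg reads the HLS playlist, drops video (-vn), and writes raw
  # 16 kHz mono signed 16-bit PCM to stdout.
  def stream(hls_url, transcribe_and_translate) do
    port =
      Port.open({:spawn_executable, System.find_executable("ffmpeg")}, [
        :binary,
        :exit_status,
        args: ["-i", hls_url, "-vn", "-ac", "1", "-ar", "16000",
               "-f", "s16le", "pipe:1"]
      ])

    loop(port, <<>>, transcribe_and_translate)
  end

  # Accumulate roughly 5 seconds of audio per chunk
  # (16_000 samples/s * 2 bytes * 5 s).
  @chunk_bytes 16_000 * 2 * 5

  defp loop(port, buffer, fun) do
    receive do
      {^port, {:data, data}} ->
        buffer = buffer <> data

        if byte_size(buffer) >= @chunk_bytes do
          <<chunk::binary-size(@chunk_bytes), rest::binary>> = buffer
          fun.(chunk)
          loop(port, rest, fun)
        else
          loop(port, buffer, fun)
        end

      {^port, {:exit_status, _status}} ->
        :ok
    end
  end
end
```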