Rikudou's comments

Nah, I'm pretty sure we invented it. Otherwise I'm not sure what costs all these companies so much money.

Granted, I only managed to read two and a half paragraphs before deciding it's not worth my time, but the argument that we didn't teach it irony is bullshit: we did exactly that by feeding it text with irony.


Gaming GPUs enabled it. That's random serendipitous connective tissue that was presaged by none of the people who wrote the first papers fifty years ago.

Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.

Now that the salient has emerged, everyone wants to control it.

Capital battles it out for the chance to monopolize it.

There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.

The battle to become the first multi-trillionaire is why so much money is being spent.


What if I no good in English?

Jokes aside, my English is passable and I'm fine with it when writing comments, but I'm very aware that some of it doesn't sound native due to me, well, not being a native speaker.

I use AI to make it sound more fluent when writing for my blog.


As long as your bullet points+prompt are shorter than the output, couldn't you post that instead? The only time I think an LLM might be ethically acceptable for something a human has to read is if you ask it to make it shorter.

I write the full article in my Czenglish (English influenced by Czech sentence structure). Then I let it rewrite it in proper English.

So it's me doing the writing and GPT making it sound more English.


> What if I no good in English?

It would still sound more human coming from you.


I wish to one day be brave enough to let a tool I clearly don't understand* ssh into a production server with root access**.

* calling it a god-level programmer kinda gave away that they have no idea what's actually going on

** to restart docker containers you either have to be root or part of the docker group, which effectively gives you root privileges


The consensus is that nobody should have root SSH to a production server.

"I'm not a fan of regulating extremely huge companies, except for the way I'd regulate them."

We must have regulation, and I support that fully. It also seems healthy to me to have an independent view on the specifics of said regulations. I mostly agree with the vision and direction of the DMA, but in my opinion it lacks specificity and clear boundaries for what is unacceptable.

That lack of specificity, to me, is why Apple has been able to engage in malicious compliance. At the same time, the lack of specifics risks companies leaving the EU market entirely due to regulatory uncertainty combined with high fines.


There's a difference between malicious compliance and noncompliance. The EU has generally ruled that the lack of specificity you allude to does not exist; Apple has misinterpreted things that provide specific requirements to mean something other than what they legally mean. Fines have been levied and it seems that the situation has not yet been resolved; the fines will likely grow if Apple doesn't comply.

https://ec.europa.eu/commission/presscorner/detail/en/ip_24_...


Wow, imagine living in a world that isn't black and white. Crazy!

People make exceptions sometimes, what’s your point?

You can use an Android phone without a Google account.

For the average person, including buying apps, this simply isn't a reality.

And Google will now be throwing up massive "OMG! You're going to install an app that isn't from the Play Store?!" warnings to anyone that tries, including requiring some degree of technical skill to do so.

https://news.ycombinator.com/item?id=45908938

You can nitpick this, but the truth is my comments are about the average user, and from that perspective they are factually accurate.


The AOSP exists. You're just wrong, regardless of what arbitrary goalpost the average person considers accessible.

From my post:

An Apple account or a Google account is required to use an iPhone or Pixel in its default config, with all the features that entails.

Are you suggesting Google is selling Pixels with pure AOSP? Context counts.


>> regardless of what arbitrary goalpost the average person considers accessible.

You stated I was wrong. I am not, and was not. This is because I stated, in context, that I am referring to the average person's reality.

I was specific on this point, because yes, AOSP exists. If you want to discuss conditions outside of those I mentioned, that does not make me wrong.

Instead, that means you are discussing something else.

AOSP existing does not mean the average person will, or even can, use it. This matters, because consumer protection is aimed at the 99.9%, not the 0.1%.

One sad example: many banking apps won't work without Firebase and Google Play. You cannot, as an average user, even find such apps without the Play Store.

A Play account or an Apple account has serious gatekeeping ramifications for the average person.

Pretending otherwise is ignoring reality.

It lets them win.


Not for long. Android phones (with Google Play Services) will soon require some degree of authentication to sideload applications; once that happens, those phones will only have the barest of features available without a Google account.

I do.

Not an iCloud user, but I use Immich on my NAS.

Didn't they say they're not accepting any new proposals for error handling?

I kinda got used to it eventually, but I'll never ever consider not having enums a good thing.


I think the Go part is missing a pretty important thing: the easiest concurrency model there is. Goroutines are one of the biggest reasons I even started with Go.
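
A toy sketch of what I mean (the values and names are made up, just a minimal producer/consumer):

    package main

    import "fmt"

    func main() {
        nums := make(chan int)

        // The producer runs concurrently; "go" is the entire ceremony.
        go func() {
            for i := 0; i < 5; i++ {
                nums <- i
            }
            close(nums)
        }()

        // The consumer just ranges over the channel until it's closed.
        for n := range nums {
            fmt.Println(n)
        }
    }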


Agreed. Rob Pike presented a good talk "Concurrency is not Parallelism" which explains the motivations behind Go's concurrency model: https://youtu.be/oV9rvDllKEg

Between the lack of "colored functions" and the simplicity of communicating with channels, I keep surprising myself with how (relatively) quick and easy it is to develop concurrent systems with correct behavior in Go.


It's a bit messy to do parallelism with it, but it still works, it's a consistent pattern, and there are libraries that add it for processing slices and such. It could be made easier IMO; they are trying to dissuade its use, but it's actually really common nowadays to want to process N things distributed across multiple CPUs.


True. But in my experience, the pattern of just using short-lived goroutines via errgroup or a channel-based semaphore will typically get you full utilization across all cores, assuming your limit is high enough.

Perhaps less guaranteed in patterns that feed a fixed, limited number of long-running goroutines.
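
Roughly the short-lived-goroutine pattern I have in mind, sketched with golang.org/x/sync/errgroup (the process function is a made-up stand-in for real work):

    package main

    import (
        "fmt"
        "runtime"

        "golang.org/x/sync/errgroup"
    )

    // process stands in for whatever per-item work you actually do.
    func process(item int) error {
        fmt.Println("processed", item)
        return nil
    }

    func main() {
        items := []int{1, 2, 3, 4, 5, 6, 7, 8}

        var g errgroup.Group
        g.SetLimit(runtime.NumCPU()) // cap concurrent goroutines at the core count

        for _, item := range items {
            item := item // capture for the closure (pre-Go 1.22 semantics)
            g.Go(func() error { return process(item) })
        }

        if err := g.Wait(); err != nil {
            fmt.Println("error:", err)
        }
    }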


Just the fact that you can prototype with a direct solution and then just pretty much slap on concurrency by wrapping it in "go" and adding channels is amazing.
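
For example, a toy sketch (slowSquare is just a stand-in for real per-item work): the sequential prototype and the concurrent version differ by little more than the go keyword and a channel.

    package main

    import "fmt"

    // Stand-in for some expensive per-item computation.
    func slowSquare(n int) int { return n * n }

    // Sequential prototype.
    func sumAll(inputs []int) int {
        total := 0
        for _, n := range inputs {
            total += slowSquare(n)
        }
        return total
    }

    // Same logic with concurrency bolted on: wrap the call in "go", add a channel.
    func sumAllConcurrent(inputs []int) int {
        results := make(chan int)
        for _, n := range inputs {
            n := n // capture for the closure
            go func() { results <- slowSquare(n) }()
        }
        total := 0
        for range inputs {
            total += <-results
        }
        return total
    }

    func main() {
        inputs := []int{1, 2, 3, 4}
        fmt.Println(sumAll(inputs), sumAllConcurrent(inputs))
    }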


I'll disagree with you there. Structured concurrency is the easiest concurrency model there is: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...


But how does one communicate and synchronize between tasks with structured concurrency?

Consider a server handling transactional requests, which submit jobs and get results from various background workers, which broadcast change events to remote observers.

This is straightforward to set up with channels in Go. But I haven't seen an example of this type of workload using structured concurrency.
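
For reference, a compressed sketch of the channel-based shape I mean (all names are invented): handlers submit jobs on one channel, a pool of workers replies to each request and emits change events, and a broadcaster fans those events out to observers.

    package main

    import "fmt"

    type Job struct {
        ID    int
        Reply chan string // per-request reply channel
    }

    func worker(jobs <-chan Job, events chan<- string) {
        for j := range jobs {
            result := fmt.Sprintf("job %d done", j.ID) // stand-in for real work
            j.Reply <- result                          // answer the submitting request
            events <- result                           // also broadcast a change event
        }
    }

    func broadcaster(events <-chan string, observers []chan string) {
        for e := range events {
            for _, o := range observers {
                o <- e
            }
        }
    }

    func main() {
        jobs := make(chan Job)
        events := make(chan string, 16)
        observers := []chan string{make(chan string, 16)}

        for i := 0; i < 3; i++ {
            go worker(jobs, events)
        }
        go broadcaster(events, observers)

        // A "request handler" submitting a job and waiting for its result.
        reply := make(chan string)
        jobs <- Job{ID: 1, Reply: reply}
        fmt.Println(<-reply)
        fmt.Println(<-observers[0])
    }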


You do the same thing, if that's really the architecture you need.

Channels communicating between persistent workers are fine when you need decoupled asynchronous operation like that. However, channels and detached coroutines are less appropriate in a bunch of other situations, like fork-join, data parallelism, cancellation of task trees, etc. You can still do it, but you're responsible for adding that structure, and ensuring you don't forget to wait for something, don't forget to cancel something.


You can accomplish fork-join, data parallelism, and cancellation of task trees with `errgroup` in Go (which provides a way to approach structured concurrency).

So at least those are a subset of Go's concurrency model.
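
A rough sketch of that with errgroup.WithContext (the task function is made up): the first failure cancels the shared context, and Wait joins the whole tree.

    package main

    import (
        "context"
        "errors"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    func task(ctx context.Context, id int) error {
        if id == 2 {
            return errors.New("task 2 failed") // first failure cancels ctx for the others
        }
        select {
        case <-ctx.Done():
            return ctx.Err() // a sibling already failed, give up early
        default:
            fmt.Println("task", id, "done")
            return nil
        }
    }

    func main() {
        g, ctx := errgroup.WithContext(context.Background())

        // Fork: one goroutine per task, all sharing the cancellable context.
        for i := 0; i < 4; i++ {
            i := i // capture for the closure
            g.Go(func() error { return task(ctx, i) })
        }

        // Join: Wait blocks until every task has returned and reports the first error.
        if err := g.Wait(); err != nil {
            fmt.Println("group failed:", err)
        }
    }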


> So at least those are a subset of Go's concurrency model.

That's why the article about structured concurrency compared it to goto. Everything is a subset of goto. It can do everything that structured programming can do, and more! With goto you can implement your own conditions, switches, loops, and everything else.

The problem is not the lack of power, but lack of enforced structure. You can implement fork-join, but an idiomatic golang implementation won't stop you from forking and forgetting to join.

Another aspect of it is not really technical, but the conventions that fell out of what the language offers. It's just way more common to DIY something custom from a couple of channels, even if it could be done with some pre-defined standard pattern. To me, this makes understanding the behavior of Go programs harder, because instead of seeing something I already know, like list.par_iter().map().collect(), I need to recognize such behavior across a larger block of code, and think twice about whether each channel-goroutine dance properly handles cancellations, thread pool limits, recursive dependencies, whether everything is correctly read-only/atomic/locked, and so on.


The point of structured concurrency is that if you need to do that in code, then there should be a predefined, structured way to do it. Safely, without running with scissors the way channel usage tends to be.


It would be good to see an example of what that looks like.


But how does one actually do that? What does the architecture and code look like?


> the easiest concurrency model there is

Erlang programmers might disagree with you there.


Erlang is great for distributed systems. But my bugbear is when people look at how distributed systems are inherently parallel, and then look at a would-be concurrent program and go, "I know, I'll make my program concurrent by making it into a distributed system".

But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.


Have you ever deployed an Erlang system?

The biggest bugbear for concurrent systems is mutable shared data. By being inherently distributable you basically "give up on that", so for concurrent Erlang systems you mostly don't even try.

If for no other reason than that Erlang is saner than Go for concurrency.

For example, goroutines aren't inherently cancellable, so you see Go programmers build out kludgey context handling for those situations, and debugging can get very tricky.
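
For readers who haven't seen it, a sketch of the context plumbing being referred to; cancellation is cooperative, so the goroutine only stops because it polls ctx itself:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // worker stops only because it explicitly checks ctx.Done();
    // nothing in the runtime cancels it for you.
    func worker(ctx context.Context) {
        for {
            select {
            case <-ctx.Done():
                fmt.Println("worker: cancelled:", ctx.Err())
                return
            case <-time.After(100 * time.Millisecond):
                fmt.Println("worker: still going")
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go worker(ctx)

        time.Sleep(300 * time.Millisecond)
        cancel() // request cancellation; the goroutine must notice it itself
        time.Sleep(100 * time.Millisecond)
    }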


The new (unreleased right now, in the nightly builds) std.Io interface in Zig maps quite nicely to the concurrency constructs in Go. The go keyword maps to std.Io.async to run a function asynchronously. Channels map to the std.Io.Queue data structure. The select keyword maps to the std.Io.select function.


One other thing I think it misses is how easy it is to navigate a massive code base, because everything looks the same. In a large team this is crucial, and I value legibility over cleverness (I really dislike metaprogramming).

Really the only thing I found difficult is finding the concrete implementation of an interface when the interface is defined close to where it is used, and when interfaces are duplicated everywhere.


See, you assume this is made for queer people. It's not. Most queer people are just people - as in people first, queer second.

This is for people who are queer first, people second. This is for the loud minority of a minority.


Codex also has the shortcut --yolo for that, which I find hilarious.

