Hacker News | kikimora's comments

Last time I checked, history books said Britain donated land to the Jews. At the time Britain took that land there was no state and no nation called Palestinians, just tribes. Since then the Palestinians have formed as a nation.

So what do you want Israel to do, disappear? Or negotiate, but with whom? The only power there is Hamas, which is non-negotiable. I'm really interested in seeing any realistic solution to the problem, however far-fetched it is.


> Britain donated land to Jews

Land it didn't own. Most people can be very generous with what they don't have.


Agreed, but my point is the question of how to untangle the mess we have today.


If you start from made up premises, the conclusion is also made up.

Try reading a history book that isn't Zionist fantasy…


There is no conclusion on my part. There is a request for reasonable ideas on how to untangle the mess between Jews and Palestinians.


If you start from made up premises, you will not be able to judge "reasonable ideas".


So I'm not good enough for you to share your ideas with, did I get that right? You realize this is not how people reach consensus? If you cannot give me a compelling argument, what makes you think Jews and Arabs would be happy with your ideas?


You are arguing in favour of the land allocations in 1948?


I'm asking for realistic ideas on how to deal with Jews and Palestinians occupying the same land, hating each other, and having nowhere else to go.


Quite a few, to be honest. The USA's debt-to-GDP ratio is high, but not catastrophically high.


Thanks for the great article, this is much needed to understand how to properly use LLMs at scale.

You mentioned that the LLM should never touch tests, then followed up with an example of a refactoring changing 500+ endpoints completed in 4 hours. This is impressive! I wonder whether those 4 hours included the test refactoring as well, or just the prompting time?


that didn't include the testing, that def took a lot longer but at least now my devs don't have an excuse for poorly written tests lol


Writing code that would not fragment memory over time is arguably much harder than writing GC-friendly code.


I haven't found that to be the case in my experience: just for example, in Java you tend to end up with essentially a lot of `Vec<Box<Thing>>`, which causes a lot of fragmentation. In Rust you tend to end up with `Vec<Thing>`, where `Thing`s are inlined (and replace `Vec` with the stack for the common case). I find it's more like Java is better at solving a problem it created by making everything an object.
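The layout difference is observable directly. A minimal sketch (the `Thing` struct is made up for illustration): in a `Vec<Thing>` consecutive elements sit exactly `size_of::<Thing>()` bytes apart in one allocation, while a `Vec<Box<Thing>>` only holds pointers, with each `Thing` in its own heap allocation — the Java-like layout that scatters objects across the heap.

```rust
use std::mem::size_of;

#[derive(Clone, Copy)]
struct Thing {
    a: u64,
    b: u64,
}

// Distance in bytes between consecutive elements of a Vec<Thing>:
// elements are stored inline, back to back, in one allocation.
fn inline_stride() -> usize {
    let v: Vec<Thing> = vec![Thing { a: 1, b: 2 }; 4];
    (&v[1] as *const Thing as usize) - (&v[0] as *const Thing as usize)
}

// Size of one element of a Vec<Box<Thing>>: just a pointer; the
// Things themselves live in separate heap allocations.
fn boxed_elem_size() -> usize {
    let v: Vec<Box<Thing>> = vec![Box::new(Thing { a: 1, b: 2 })];
    std::mem::size_of_val(&v[0])
}

fn main() {
    // Inline: contiguous, stride equals the struct size.
    assert_eq!(inline_stride(), size_of::<Thing>());
    // Boxed: the Vec itself only stores pointer-sized handles.
    assert_eq!(boxed_elem_size(), size_of::<usize>());
    println!("ok");
}
```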


Yeah, and cooking food in your kitchen is much harder than having it delivered from a restaurant to your doorstep.

Reasonable people will check whether the cost makes it worthwhile.


I don't understand how this can work. Given the probabilistic nature of LLMs, the more steps you have, the more chances something goes wrong. What good is a dashboard if you cannot be sure it was not partially hallucinated?


> What good is a dashboard if you cannot be sure it was not partially hallucinated?

A lot of the time the dashboard contents don't actually matter anyway; it just needs to look pretty...

On a serious note, the systems being built now will eventually be "correct enough most of the time" and that will be good enough (read: cheaper than doing it any other way).


>On a serious note, the systems being built now will eventually be "correct enough most of the time"

I don't believe this would work. File a "good enough" tax return one year and enjoy a hefty fine 5 years later. Or constantly deal with customers not understanding why one amount is on the dashboard and another is in their warehouse.

The probability of error increases rapidly when you start layering one probabilistic component onto another. Four 99%-reliable components sequenced one after another have a combined error rate of about 4% (1 − 0.99⁴ ≈ 3.9%).
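The compounding is easy to check numerically — a minimal sketch of the arithmetic behind that 4% figure, assuming independent failures:

```rust
// Combined failure probability of n independent sequential steps,
// each succeeding with probability p: 1 - p^n.
fn chain_error_rate(p: f64, n: i32) -> f64 {
    1.0 - p.powi(n)
}

fn main() {
    // Four 99%-reliable steps in sequence: ~3.9% combined error.
    let e = chain_error_rate(0.99, 4);
    assert!((e - 0.0394).abs() < 1e-4);
    // Twenty such steps already fail more than 18% of the time.
    assert!(chain_error_rate(0.99, 20) > 0.18);
    println!("4 steps: {:.2}% error", e * 100.0);
}
```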


We're all just going to be glorified debuggers trawling through reams of generated code we've never seen before to root out the hallucinations.


The probabilistic nature means nothing on its own. An LLM that can solve your deterministic task will easily assign 100% to the correct answer (or 99%; the noise floor can be truncated with a sampler). If it doesn't do that and your reply is unstable, it cannot solve the task confidently. That happens to all LLMs on a sufficiently complex task, but it's not related to their probabilistic nature.

Of course, that still doesn't mean you should do it that way. If you want to maximize the model's performance, offload as much distracting stuff as possible to code.
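The "truncate the noise floor with a sampler" point can be sketched concretely. This is a hypothetical min-p-style cutoff, not any particular library's API: drop tokens whose probability falls below some fraction of the top token, then renormalize — with a peaked distribution the output becomes deterministic.

```rust
// Truncation sampling sketch: keep only tokens whose probability is
// at least `cutoff` times the top token's, then renormalize.
fn truncate_and_renormalize(probs: &[f64], cutoff: f64) -> Vec<f64> {
    let top = probs.iter().cloned().fold(0.0, f64::max);
    let kept: Vec<f64> = probs
        .iter()
        .map(|&p| if p >= top * cutoff { p } else { 0.0 })
        .collect();
    let sum: f64 = kept.iter().sum();
    kept.iter().map(|&p| p / sum).collect()
}

fn main() {
    // A confident model: 97% on the right answer, 1% noise elsewhere.
    let probs = [0.97, 0.01, 0.01, 0.01];
    let out = truncate_and_renormalize(&probs, 0.1);
    // Everything below 10% of the top token is cut;
    // the correct answer now gets probability 1.0.
    assert_eq!(out[0], 1.0);
    assert_eq!(out[1], 0.0);
    println!("{:?}", out);
}
```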


But doesn’t it also waste a few seconds of your time here and there when it fails to autocomplete and writes bad code you have to understand and fix?


Typically you have to confirm additions, and cancellation is just a press of the ESC key. Ctrl+Z is available too.

Even when the code is not 100% correct, it's often faster to select it and make the small fix myself than to type all of it out. It's surprisingly good at keeping your naming patterns and using recent edits as context for what you're likely to do next around your cursor position, even across files.


>2023 MacBook Pro, Apple M3 Max chip.

Then you are likely measuring how fast PG updates its in-memory buffers rather than actual writes to disk. I cannot find the links, but there have been discussions where people mentioned that desktop OSes and consumer-grade SSDs can delay writes to get more performance. This is what ChatGPT has to say about it:

Historically, and even in recent versions, macOS has had issues where fsync() does not guarantee data is truly flushed to persistent storage (e.g., SSD or spinning disk). This is due to:

• Disk write caching: macOS may acknowledge fsync() even if the data is just in the drive's volatile cache (not flushed to physical storage).

• APFS quirks: the Apple File System (APFS) has been reported to coalesce or delay flushes more aggressively than other filesystems (like ext4 or xfs).

• SSD controller behavior: even if macOS passes the flush through, the SSD itself may lie unless it supports FLUSH_CACHE and has power-loss protection.
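For what it's worth, a truly durable write on macOS requires fcntl(F_FULLFSYNC) rather than a plain fsync(2) (PostgreSQL exposes this as `wal_sync_method = fsync_writethrough`). A minimal sketch of a durable write; note that recent Rust versions issue F_FULLFSYNC inside `sync_all()` on macOS, which is the slow path the benchmark may be skipping:

```rust
use std::fs::File;
use std::io::{Read, Write};

// Write a record and force it to stable storage. On macOS a plain
// fsync(2) may only push data to the drive's volatile cache; recent
// Rust versions issue fcntl(F_FULLFSYNC) in sync_all() there, which
// also asks the drive to flush its cache (and is much slower).
fn durable_write(path: &str, data: &[u8]) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(data)?;
    f.sync_all()?; // fsync everywhere; F_FULLFSYNC on macOS
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("durability_demo.txt");
    let path = path.to_str().unwrap().to_string();
    durable_write(&path, b"committed")?;
    let mut s = String::new();
    File::open(&path)?.read_to_string(&mut s)?;
    assert_eq!(s, "committed");
    println!("ok");
    Ok(())
}
```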


The author ditched Raft because it can only have one leader. But a Raft deployment can have many leaders, one per partition. After reading the article I'm not sure the author knows what they are doing.


Emphasis on "one per partition", which if I understand correctly as "network partition", means that in the absence of network partition, there is one leader.

I only have a surface understanding of Raft, and I'm learning while doing, yes.


Not a network partition: the state space is split into data partitions, and each partition gets its own leader (in effect, each partition runs its own Raft group). For example, in a cluster of 3 nodes and 65536 partitions, each node is the leader for 1/3 of the partitions, with the other two acting as replicas. This way each node is simultaneously a leader for some partitions and a replica for others.
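A minimal sketch of that multi-Raft ownership layout, with a made-up assignment rule (real systems typically use a placement service or consistent hashing, not a plain modulo): partition p is led by node p % n, and the next replication_factor − 1 nodes act as its replicas.

```rust
// Hypothetical multi-Raft layout: each partition is its own Raft
// group; the leader is chosen by a simple modulo for illustration.
fn leader_of(partition: u32, nodes: u32) -> u32 {
    partition % nodes
}

// The remaining (rf - 1) group members, as replica node ids.
fn replicas_of(partition: u32, nodes: u32, rf: u32) -> Vec<u32> {
    (1..rf).map(|i| (partition + i) % nodes).collect()
}

fn main() {
    let nodes = 3;
    // With 65536 partitions over 3 nodes, leadership spreads evenly:
    let led_by_0 = (0..65536u32)
        .filter(|&p| leader_of(p, nodes) == 0)
        .count();
    assert_eq!(led_by_0, 21846); // ~1/3 of all partitions

    // Every node is a leader for some groups and a replica for others.
    assert_eq!(replicas_of(0, nodes, 3), vec![1, 2]);
    println!("node 0 leads {} partitions", led_by_0);
}
```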


Gotcha, thank you for the clarification.

I'd add, though, that the "one leader" thing was not the only reason I ditched Raft. The Go library hashicorp/raft was quite complex to use, and I had a lot of situations where the cluster failed to elect a leader, ending up with a corrupted state.

This might be a PEBKAC issue of course.


You describe new ways of feeding information into the model and new ways the model presents outputs. Nothing has radically changed in how the model transforms inputs into outputs.


It comes down to solving this: given instruction X, find out how to change the training data such that X is obeyed and no other side effects appear. Given the amount of training data and the complexities involved in training, I don't think there is a clear way to do it.


I'm slightly less sceptical that they can do it, but we presumably agree that changing the prompt is far faster, so you change the prompt first, and the prompt effectively serves in part as documentation of issues to chip away at while working on the next iterations of the underlying models.

