> because you could use regular find/grep to analyze it
They were meant to be analyzable in some ways: count lines, extract headers, maybe sed-replace some words. But operating on or analyzing multiline strings was never a strong point of unix tools.
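To illustrate the line-oriented limitation, here is a small sketch (the sample text and pattern are made up for demonstration): grep and sed see one line at a time, so a pattern that spans a line break never matches, while a scripting language can match across lines trivially.

```python
import re

text = """def greet():
    return "hello"

def farewell():
    return "bye"
"""

# Line-oriented processing (the grep/sed model): each line is matched
# in isolation, so a pattern spanning the function body never matches.
line_hits = [ln for ln in text.splitlines()
             if re.search(r'def greet.*return', ln)]

# Multiline matching: re.DOTALL lets '.' cross newlines, so the whole
# function definition is captured in one match.
block_hit = re.search(r'def greet.*?return "hello"', text, re.DOTALL)

print(len(line_hits))        # 0 — no single line contains the whole pattern
print(block_hit is not None) # True
```

(GNU grep's `-Pzo` and awk's record separator tricks can get partway there, but it stops being the simple one-liner workflow these files were designed around.)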
Calling something "evil" implies an external force (often linked to religion), which I don't think should be invoked. It shifts responsibility away from the person committing the "evil" acts.
I think the missing ingredient is simply not caring about the outcome. It could be because they don't have empathy (sociopaths), or because society has trained them into normalising obedience to the cause (fascism / communism) or dehumanized their targets (concentration camps).
Acts can certainly be described as "evil", but I don't agree that "evil" is some type of force that affects people.
Not caring about the outcome doesn't make sense; people who are driven by something care about the outcome.
To go back to my original point, the simplistic equation falls apart if you spend a second looking for counter examples.
Sikhs give free food to any who ask, without expecting anything in return. They are deluded (they do it to please god), and need power and conviction to do so.
A good point. You can for sure accidentally do good things by being deluded and having conviction and power. One could also say that they have a small delusion (god) that gives them a bigger truth (being nice to people is good). So like their total delusion level in this regard is low.
I'm not a native speaker, and I see that I may have been unclear.
I was thinking of the human consequences. In my language they are almost synonyms.
They of course care about the outcome, but not the effect it has on the target group
It's strange how one can normalize cruelty. Just think of how prison rapes are joked about in media and movies, as if it is an accepted consequence of committing a crime. It is a cruel and evil act that many choose to simply ignore because it is so common
> Another example that comes to mind is a backend rewrite of a multi million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
Their next example was exactly what you asked for: a two-year rewrite.
Bonus points from me because they didn't wait for the whole rewrite to be done, and instead started using the new project by replacing only parts of the original one.
If you are not architected around the bridge, it is really hard to add it later. Probably the biggest advantage of microservices is that they have built-in bridges. Monoliths often have no obvious way to break things up; every time you think you want to, you discover some other headache.
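The proxy-through approach from the PHP-to-Java example boils down to one routing decision in front of both backends. A minimal sketch, assuming hypothetical names (`NEW_ROUTES`, `route_backend` are illustrative, not from the original comment):

```python
# Strangler-fig routing sketch: migrated routes go to the new backend,
# everything else falls through to the legacy API. As routes are
# reimplemented, they are added to NEW_ROUTES until the old API is empty.

NEW_ROUTES = {"/users", "/orders"}  # endpoints already reimplemented

def route_backend(path: str) -> str:
    """Decide which backend should serve a request path."""
    # Match on the first path segment so /users/42 also hits the new backend.
    prefix = "/" + path.lstrip("/").split("/", 1)[0]
    return "new" if prefix in NEW_ROUTES else "legacy"

print(route_backend("/users/42"))  # new
print(route_backend("/reports"))   # legacy
```

In practice this decision usually lives in a reverse proxy (nginx location blocks, an API gateway) rather than application code, but the shape is the same: the "bridge" is the routing table itself.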
There are no serious sources about people wanting to fire 100% of [insert title here] for LLMs. It's more about reducing head-count by leveraging LLMs as a productivity multiplier.
I haven't heard of companies successfully doing that at scale though.
If you nominally have an audience (if you feel like you do, regardless of the reality), you'll perform differently, same as with speaking. This may be a good thing.
It's weird that the first sentence of the abstract is different:
> In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting.
I think that's more aligned with selective forgetting
Maybe someone knows: what's the usual recommendation regarding big context windows? Is it safe to use them to the max, or will performance degrade, meaning we should adapt the maximum to our use case?