In the end there are plenty of stories, but they're the ones that are relevant. The story the LLM gave feedback on was about flipping a raft in the Grand Canyon; its advice was that the story felt unrelated to the point I was trying to make. That made me realize I had it in there more because I wanted to talk about rafting the Grand Canyon than because it was useful or entertaining to readers.
Thanks for posting this, it's a very interesting case study. Considering that this type of writing is the thing they seem to excel at, it's interesting that they're still only OK at it if you're trying to produce a serious, genuinely useful output. This fits with my experience, though yours is much more extensive and thorough. In particular I fully concur on voice/tone, on the need to verify everything (always the case anyway), and on "Never abdicate your role as the human mind in charge" -- sometimes the suggestions it makes are just not that good.
Question is, do you think this process was faster using the various LLMs? Could two (or N) sufficiently motivated people produce the same thing in the same time? (And if so, what is N?) I'm wondering if the caveats and limitations end up costing as much time as they save. Maybe you're 2x faster; if so, that would be significant and good to know.
In the abstract, this is similar to my experience with AI-produced code. Except for very simple, contained code, you ultimately need to read and understand it well enough to make sure that it's doing all the things you want and not producing bugs. I'm not sure this saves me much time.
I think it was faster in that I would have never written the book without the LLMs. Essentially they unlocked the swirl of thoughts and notes that lived somewhere between my head, TextEdit, emails to myself, and anywhere else I stashed things.
It's like it unblocked the "hard part" (getting the words into a coherent form for others), while letting me focus on the "value parts" (my unique perspective / ideas).
It might not have saved me time overall, but it made the process a hell of a lot more fun, so in the end I completed it. Maybe AI helping us see things through to completion is where we'll see a big unblock in human potential.
"Briefly, the US has the capacity to decisively win on one or two fronts at a time, so its strategic logic leads it to want to wrap up conflicts in order: put an end to the Ukraine war, and address Iran next, to preserve its ability to respond to a Chinese invasion of Taiwan. The logic of its rivals is then the opposite: to tightly coordinate and threaten to expand conflicts on each front so that the US can’t effectively respond to any. This is a path to a world war."
So, if the US hits Iran, we have to watch out for escalation from Russia and China.
They would just arm Iran and give her satellite intel to enable the Mullahs to kill as many invading American troops as possible.
That's not an escalation.
How? Your response feels low-effort, and not much more valuable than a strawman either.
Russia does not want to get into a 1:1 shooting war with the US, especially now that it has a puppet in the WH. Russia has always done what was laid out in the GP; it has been doing exactly this in Iran, Syria, etc.
I'll give the flaggers the benefit of the doubt and say that the article has a lot of assertions but little substance backing them up. I've seen comments (indeed on some of the flagged articles) with more substance than this article. Let's see some facts backing up these assertions (or articles with such) because I agree, this is super important.
Insiders have little say. The NSF is probably the most merit-based system in the entire US government. Literally any other program (defense?) is less merit-based.
Also, if nepotism and favoritism are the criteria for removal, let's start with the Executive branch.
This is one of the core problems that many on the "left" will not understand.
The problem is that the people who have seen and experienced this will never tell you. I, and many like me I've talked to, will simply never reveal our actual beliefs to a colleague who believes like this.
I can't tell you how many meetings I've been in where I had a differing opinion and said nothing for fear of backlash and loss of soft power.
The truth is that huge numbers of your coworkers, bosses, and employees hold thoughts that don't align with the current ideology. These people have learned to say nothing. I myself am one of them.
I have on multiple occasions just straight-up lied to a liberal coworker about my beliefs, because telling them what I actually think would make it very difficult to work with them.
This is false. The grants were not approved based on race. The grants were approved based on merit toward the goals of the scientific field to which they were submitted. Showing how your work had broader impacts toward a more diverse, equitable, and inclusive society was one item in a long list of criteria, recently updated here: