If this is being recognized as a systemic problem, shouldn't the government step in and regulate? Why do they have to wait until after a crash to do anything? Just a statement that there will be no bailout could moderate behavior.
Can't anyone use AI to surveil social media, even ordinary citizens? It seems like it would be easy to surveil ICE, the police, immigrants, all politicians, the military, businesses, government, individuals, groups, anyone and anything anyone has an interest in. Is the future everyone surveilling everyone else? There used to be web services that let you set up "standing queries" for anything you were interested in. In a sense, chatbots already contain a historical record of the internet in compressed form and let anyone run historical queries on anything, limited only by what has been accessible on the internet. "Googling someone" is becoming "ChatGPTing someone". People felt Googling someone was somewhat rude, and parents warned their children to limit what they posted in case future employers looked them up. The same goes for anyone employed: they are learning to be careful about what they post in case their employers see it. It seems like free speech is already being suppressed because it can be used against you by various people and groups. This may help explain why the web has become less interesting and anonymous posting is ubiquitous.
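A "standing query" of that kind is simple enough to sketch. The following is purely illustrative; the search and alert callables, the result format, and the polling interval are placeholder assumptions rather than any real service's API:

    import time

    def run_standing_queries(queries, search, alert, interval_sec=3600):
        """Poll a search backend for each saved query and alert on new hits.

        `search` and `alert` are stand-in callables (hypothetical), e.g. a
        web-search wrapper and a notification sender.
        """
        seen = {q: set() for q in queries}            # result IDs already reported per query
        while True:
            for q in queries:
                for result in search(q):              # assumed to yield dicts with an "id" key
                    if result["id"] not in seen[q]:
                        seen[q].add(result["id"])
                        alert(q, result)              # notify the subscriber of the new match
            time.sleep(interval_sec)                  # re-run the standing queries periodically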
Basically a metal rod/pipe with a good earth ground, pointed at the sky. How conductive are trees? Maybe trees already act as cloudbusters, so we should plant more trees! Someone should apply for an NIH grant to try this. I don't know if I'm being sarcastic; it might actually have health benefits, and there doesn't seem to have been much independent research.
Rare earths are not rare on Earth, but production of rare earth metals is rare and difficult and almost exclusively done by China. There are two other factors that make this announcement important, though. One is the use of the foreign direct product rule, which means China is requiring all use of rare earths produced by China to be tracked and approved, and military applications are not going to be approved (why would China arm its competitors?). The other factor is that while things like F-35s may only use a few hundred pounds of rare earths each and there are not many of them, things like smart bombs and semiconductors also need rare earths, and there are a LOT of those. If China can truly cut the US off from China's production, it's likely to greatly reduce the US's current attempts to scale up both weapons production and the more advanced semiconductors (like GPUs for AI) until the US can get alternate sources. It will take 5-10 years to build alternate sources (some small pilot projects are near completion, but scaling up will take a while), so during that time the US could be short on weapons and compute power. The US military has done some stockpiling of rare earths, but it's a fairly small stockpile. So the worst case is no weapons or AI for the US for some time.
There will also be consumer effects. EVs, drones, phones, TVs, RC cars, and more all use rare earths or rare earth magnets. Because rare earths were cheap before, most quality electric motors now use them. China can now cut off those uses too if it wants to.
How effectively China can halt sales to the US is debatable. The CIA could start a toy manufacturer as a front company and buy rare earth magnets, for example. China may eventually find out and cut it off, but then the CIA can just start a new front company. Blocking purchases made through European or Asian companies acting as intermediaries may also be difficult to enforce. If a war started over Taiwan, China could just cut off all shipments to the world. So there is perhaps a five-year window here in which China can exercise power via rare earths. Beyond that, alternate sources will likely be in place.
So one thing China is "saying" here is that if the US is going to cut China off from advanced computer chips, China is going to make it impossible to manufacture those chips, so the US won't have them either. This could be enough to bring a sudden halt to US AI investment. It would definitely introduce a big new uncertainty.
"It will take 5-10 years to build alternate sources (some small pilot projects are near completion, but scaling up will take a while), so during that time the US could be short on weapons"
As I said elsewhere, if the US really wanted to, it could solve the shortage in only months. I refer you to the phenomenal retooling exercise and enormous production growth in WWII. I suggest you read those stats.
From about the 1910s to the 1960s, the USA was considered the world's factory. If you wanted to get a product made, you'd go there to set up a production line; if you wanted to build a factory elsewhere, you'd hire American experts to teach you and tool it up for you.
The USA no longer has that role for hardware, although it does for software.
The US had the knowledge in its workforce to do that retooling 80 years ago; why do you think it still exists? You can believe that all you want, and it's a comforting thought, but I don't see the USA of 2025 having anything like the same capacity.
The US is still one of the leading countries in the world for mining and refining by any measure. It has extensive expertise, and its mineral wealth is unusually diverse.
All of this is despite the fact that the US effectively banned new mining several decades ago. The US is a mineral juggernaut and has the technical knowledge, but growth has been severely restricted as a matter of policy for a long time.
By analogy, US oil production had been in terminal decline since the 1970s and was presumed dead at the end of the 20th century. Now the US is the world's leading oil producer, with no sign of slowing down.
There is every reason to believe the same thing would happen if the US decided to re-open the mountain west to mineral exploration.
> How effectively China can halt sales to the US is debatable.
Every intermediary or degree of separation introduced raises the price, as each link in the chain demands its slice of the action. They might not be able to stop sales entirely, but I imagine they can make them quite expensive.
If these strikes really are hitting drug gangs, I would expect the gangs to strike back, just as they do when they attack each other. They already have people in the USA, and they would have no problem attacking civilians or politicians; they already do that. Maybe that's part of the plan behind using the military: create incidents in the US in order to impose martial law? On the other hand, the War On Drugs has been around for a long time, and I don't know of any attacks on the US because of it (except for the drugs themselves). Maybe the threat of military force was what the drug gangs were afraid of, but now that restraint has been removed. Even if the real purpose of the strikes is regime change and taking over the oil industry, there may be side effects for US citizens from pulling the gangs into it. It's difficult to tell where this goes. There must be ongoing legal debates as well; this seems clearly outside of international law.
Isn't the training most of the cost? In that case, the current models could have a very long lifetime even if new models are never trained again. They'll gradually go out of date, but for many purposes they will still be useful, and if they can pull new info from the web they may stay relevant for decades. Everything only halts if running the chatbots is not cost effective, and my understanding is that inference is the relatively cheap part. Even now, older models are still being used. Also, performance optimizations seem likely to soon reduce the need for data center build-out and reduce costs. It seems too soon to say where this is all going. Who even knows whether GPUs will improve dramatically or whether something else (more AI-optimized processor architectures) will replace them? It's true that right now it looks like a bubble, but the future is still very much in flux, and the value of the models already created may not disappear overnight.
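As a rough illustration of that amortization argument, here is a back-of-envelope sketch; every number in it is a made-up placeholder (hypothetical training cost, lifetime, query volume, and per-query serving cost), chosen only to show the shape of the calculation, not to describe any real model:

    # Back-of-envelope: amortized training cost vs. marginal inference cost per query.
    # Every number below is a hypothetical placeholder.

    training_cost = 100e6              # one-time training cost in USD (assumed)
    lifetime_years = 5                 # useful lifetime of the trained model (assumed)
    queries_per_day = 50e6             # served query volume (assumed)
    inference_cost_per_query = 0.002   # marginal serving cost per query in USD (assumed)

    total_queries = queries_per_day * 365 * lifetime_years
    amortized_training = training_cost / total_queries

    print(f"amortized training cost per query: ${amortized_training:.6f}")
    print(f"marginal inference cost per query: ${inference_cost_per_query:.6f}")

    # Once training is a sunk cost, whether an existing model keeps running depends
    # only on whether per-query revenue covers the marginal inference cost; the
    # longer the model's lifetime, the smaller the training share per query becomes.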
One potential application might be fighting wildfires. It seems like it would be really useful if one firefighter could remotely monitor a dozen autonomous bulldozers that were given general instructions. You would have to acquire a different training data set, I'd guess, but the same approach seems applicable: get the data from teleoperated bulldozers used to fight fires. Getting the bulldozers into the right area would require some transport, like a truck or heavy-lift helicopter, though maybe mini bulldozers would also work.
You might be interested in the work of Peter Corke also; he's automated horizontal mine shaft loaders and huge dragline shovels in his research.
I think he used a different approach than you do, using visual servoing to get feedback and data from a camera. Maybe there's some value in combining both approaches: learn to control a machine from an operator, and also keep track of what is being moved with a camera to add another layer of control.
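For what it's worth, here is a very rough sketch of that second, camera-in-the-loop (visual servoing style) correction layer. The tracker, velocity interface, gain, and target values are all invented for illustration; a real controller would use a proper image Jacobian rather than a plain proportional gain:

    import numpy as np

    GAIN = 0.5                          # proportional gain on the image-space error (assumed)
    TARGET = np.array([320.0, 240.0])   # desired pixel position of the tracked feature (assumed)

    def servo_step(tracked_pixel, send_velocity):
        """One control-loop iteration: nudge the machine so the feature moves toward TARGET.

        `tracked_pixel` comes from a camera-based tracker and `send_velocity`
        commands the machine; both interfaces are hypothetical.
        """
        error = TARGET - np.asarray(tracked_pixel, dtype=float)
        # A plain proportional term on the pixel error is enough to show the
        # feedback idea: the camera observation directly drives the command.
        command = GAIN * error
        send_velocity(command)
        return np.linalg.norm(error)    # remaining pixel error, usable as a stop condition

    # Usage sketch: loop until the feature is within a couple of pixels of the target.
    # while servo_step(camera.track_feature(), machine.set_velocity) > 2.0:
    #     time.sleep(0.05)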
About 20 years ago the CS community was getting excited about optical memory. It promised to be huge, much faster than static RAM, and to hold its state. Tied directly to the CPU as a very large cache+RAM replacement, it would have revolutionized computing. There were other advantages besides speed. One was that you could just pause the CPU, put the computer to sleep, then wake it up later and everything was already in RAM, so computation would continue where it left off. Instant boot. Launching apps would be instant; they were already in RAM and could be run in place. Prototypes existed, but optical memory never happened commercially. I'm not sure I remember why; maybe it couldn't scale, or there were manufacturing problems. There was also the problem that code is never perfect, so what do you do when something stored becomes corrupted? Without a boot phase there would be no integrity checks.
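One way to get integrity checks back without a full boot phase might look roughly like the sketch below: checksum the persistent regions at suspend, verify them at wake, and fall back to a cold boot on any mismatch. The region layout, checksum choice, and cold_boot() hook are all assumptions made for illustration:

    import hashlib

    def checksum(region_bytes):
        """Hash one persistent memory region (bytes)."""
        return hashlib.sha256(region_bytes).digest()

    def suspend(regions):
        """Record a checksum for each persistent region before pausing the machine."""
        return {name: checksum(data) for name, data in regions.items()}

    def resume(regions, saved_checksums, cold_boot):
        """Verify every region on wake; any mismatch forces a clean restart."""
        for name, data in regions.items():
            if checksum(data) != saved_checksums.get(name):
                cold_boot(reason=f"persistent region '{name}' failed verification")
                return False
        return True   # state intact, so execution can continue where it left off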
I worked with an SGI 2400T workstation, and it came with a 4:3 aspect ratio high-resolution monitor (4K I think, though different from today's 4K). Later workstations probably had wider screens. However, even that old machine could display to a wide variety of screen sizes. I connected ours to an NTSC projector, and these machines were often used for rendering movie computer graphics (though rendering doesn't depend on the display size). If I remember correctly the pixels were square by default, but there was a lot of control over rendering and display. NTSC at that time wasn't even a very firm standard; lots of companies implemented it differently, and hi-res displays tended to be custom with no standards at all (used for air traffic control, for example).