I've wondered if LLMs are in fact conscious, per some underwhelming definition like the one you mentioned. Just for the brief moment they operate on a prompt: they wake up, perceive their world through tokens, do a few thinking loops, then sleep until the next prompt.
So what? Should we feel bad for spawning them and effectively killing them? I think not.
I did this for an entire city's water supply network. SCADA system UIs are a decade or more behind the modern web, not to mention slow-loading and expensive. I took a reticulation diagram and marked all the flow meters, pressure transducers, pumps, reservoir level sensors, and even river flow meters, just like you said, using IDs for each element. The data is pulled via a SQL query every minute and pushed out as JSON, including any active alarms.
This lets as many users view it as necessary, and it loads instantly. It has pan and zoom so you can capture everything on a page. Fully customisable; I used draw.io for the diagram. Professional automation guys' jaws dropped when I told them it took a few days to build and didn't require some ridiculous software license.
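A toy version of the display-update idea, in case it helps anyone picture it (the real page does this in the browser with JavaScript; the file names and status codes below are invented):

```python
import json
import xml.etree.ElementTree as ET

# Recolor SVG elements whose IDs match tag names in the JSON feed.
# Status codes and file names are illustrative assumptions only.
STATUS_COLORS = {0: "grey", 1: "green", 2: "red"}   # off / running / fault

tree = ET.parse("network.svg")
with open("data.json") as f:
    data = json.load(f)          # e.g. {"PUMP_STN_4": 1, "RES_2_PUMP": 2}

for el in tree.iter():
    tag = el.get("id")
    if tag in data:
        el.set("fill", STATUS_COLORS.get(data[tag], "orange"))

tree.write("network_live.svg")
```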
This vaguely reminds me of when I briefly worked at an oil company as a drafter. All their P&IDs are drawn in AutoCAD .dwg files, which aren't searchable at all or intelligible to the computer. AutoCAD sucks and has a stranglehold on these older "systems" drawings (as opposed to individual mechanical parts). Also, each company has its own drawing spec, and you need a lot of domain knowledge to understand a drawing. I had a grand idea, which I never followed through on, of JSON-defined "drawings" that could then be rendered to SVG. Maybe a graph data structure would be better to represent the data. This would make them searchable and LLM-ready, so you could ask natural-language questions of an entire refinery or pipeline or water treatment plant or chemical plant.
Not quite the same, as it's not real-time SCADA, but still interesting. It seems like there's a big opportunity here for someone who can crack it.
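For illustration, a minimal sketch of what such a graph-structured drawing format could look like; every field name and tag here is invented:

```python
import json

# Hypothetical JSON-defined P&ID: nodes are equipment and instruments,
# edges are the lines connecting them. Geometry could live alongside the
# semantics so the same file renders to SVG and answers queries.
drawing = {
    "nodes": [
        {"id": "P-101",  "type": "pump",             "desc": "Feed pump"},
        {"id": "FT-101", "type": "flow_transmitter", "desc": "Feed flow"},
        {"id": "V-201",  "type": "vessel",           "desc": "Surge drum"},
    ],
    "edges": [
        {"from": "P-101",  "to": "FT-101", "line": "6in-CS-150"},
        {"from": "FT-101", "to": "V-201",  "line": "6in-CS-150"},
    ],
}

# Searchability falls out for free: "what does P-101 feed?"
downstream = {e["from"]: e["to"] for e in drawing["edges"]}
print(downstream["P-101"])                                   # -> FT-101
print([n["id"] for n in drawing["nodes"] if n["type"] == "pump"])
print(json.dumps(drawing)[:60], "...")
```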
How do you validate that each element's status is working? That there are no typos or copy/paste errors lurking in there? I guess you could inject test data after the SQL query and verify the right elements change.
You can use the browser console to inject any value into any element to verify expected behaviour.
There are also patterns in the water network, so issues reveal themselves pretty quickly and are then easy to fix. E.g. pumps: on = green, off = grey, fault = red. It's easy to spot a misconfigured element, especially when you look at it all day.
You can also hover over an element to reveal its tag name in a tooltip, which helps. And the script has an error log for things such as tag names in the query it can't find a drawing element for, or failures to set a value.
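A rough sketch of that kind of cross-check, assuming (my assumption, not confirmed) a flat tag-to-value JSON feed and SVG element IDs that match tag names:

```python
import json
import xml.etree.ElementTree as ET

# Every tag name in the feed should resolve to an SVG element ID;
# anything that doesn't goes to the error log.
svg_ids = {el.get("id") for el in ET.parse("network.svg").iter() if el.get("id")}
with open("data.json") as f:
    data = json.load(f)          # e.g. {"PUMP_STN_4_RUN": 1, ...}

for tag in sorted(set(data) - svg_ids):
    print(f"ERROR: no drawing element found for tag {tag}")

# And for eyeballing the page: overwrite the feed with obviously fake
# values and check that every element reacts.
with open("data.json", "w") as f:
    json.dump({tag: 999 for tag in data}, f)
```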
I'd like to learn more about this system! Also, what's on the backend side? How do you collect data from the sensors? Check out my river meteo-station: https://imgur.com/a/agk-RRovIkg
It's a typical utility SCADA system, the kind most cities operate.
Much like your setup, there are sensors fitted to various infrastructure, connected to PLCs, connected to licensed UHF radio modems back to a base station (with a few hops through repeaters or microwave backhaul), into a PC running some kind of I/O server which handles all the polling and collects the data into a database, plus the SCADA software (Aveva).
It's a weird mashup of hardware and protocols (DNP3, Modbus, serial), plus some data coming in through IoT devices over HTTP, and all sorts of other bits.
All I built was a Python script that runs the query (we're talking 200 bits of data) every minute and dumps it into a JSON file. Then there's a Caddy server which serves the JSON, the SVG (1.5 MB uncompressed), and a vanilla HTML/JavaScript page (300-ish lines of code; AI helped get it started) that displays it.
It's not open to the public, nor is it a replacement for SCADA (SCADA has many, many more objects, plus the ability to control and to send out alarms). There are many more people wanting 'view only' access than the city has pricey SCADA licenses for, so this fills the gap for free. Sorry I can't share more; I moved on from that job.
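Even without the real code, the shape of that polling script is easy to imagine; a sketch under assumed details (the ODBC DSN, query, and file paths are all placeholders):

```python
import json
import os
import time

import pyodbc  # assuming the historian is reachable over ODBC

QUERY = "SELECT TagName, Value, AlarmActive FROM LiveValues"

conn = pyodbc.connect("DSN=scada_historian")
while True:
    rows = conn.cursor().execute(QUERY).fetchall()
    payload = {r.TagName: {"value": r.Value, "alarm": bool(r.AlarmActive)}
               for r in rows}
    # Write to a temp file and rename, so the web server never
    # serves a half-written JSON file.
    with open("data.json.tmp", "w") as f:
        json.dump(payload, f)
    os.replace("data.json.tmp", "data.json")
    time.sleep(60)
```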
Just wanted to say thanks very much for sharing this.
Over the last few months I've been inventing almost this exact approach in my head as a hobby, without consciously knowing it had already been done. I love their little RC car demo.
As easy as WordPress, Squarespace, or whatever claim to be, in my mind this is even easier, the solution is more elegant, and you expose yourself to a whole lot less crap along the way.
Edit: I recognise this doesn't cover hosting, domain registration, etc.
Thanks! Yeah, I find this a lot easier. And yes, hosting is an exercise left to the reader :-) I might write about my hosting setup in an upcoming blog post.
I don't really want it added to the training set, but eh. Here you go:
> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy: keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heating it all the way when I need it? Is there an amount of time beyond which the answer varies?
All the LLMs I've tried get it wrong because they assume the hotend cools immediately when the heating stops, though they realize this when asked about it. Qwen didn't realize it, and answered that 30 minutes of heating the hotend is better than turning it off and back on when needed.
What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.
Qwen3-32b did it pretty accurately, it seems. It calculated heat loss over time as the hotend goes to ambient temperature, and suggested keeping it at a 100C standby for short breaks under 10 minutes and shutting down completely for longer breaks.
Unless you care about warmup time. LLMs have a habit of throwing in common-sense assumptions that you didn't tell them to, so you have to be careful of that.
It’s not a bug. Outside of logic puzzles that’s a very good thing.
Ah! This problem was given to me by my father-in-law in the form of operating pizza ovens in the Midwest during winter. It's a neat, practical one.
Yep, except they calculate heat loss and the energy required to keep heating in one case, but assume room temperature and the energy required to heat up from that in the other, so they wildly overestimate one side of the problem.
Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.
The water (heat) leaking out is what you need to add back. As the water level drops (the hotend cools), the leaking slows. So any replenishing means more leakage, which you eventually pay for by adding more water (heat).
You can stipulate conditions to make the solution work out in either direction.
Suppose the bucket is the size of a lake, and the leak is so minuscule that it takes many centuries to detect any loss. And also I need to keep the bucket full for a microsecond. In this case it is better to keep the bucket full than to let it drain.
Now suppose the bucket is made out of chain-link and any water you put into it immediately falls out. The level is simply the amount of water that happens to be passing through at that moment. And also the next time I need the bucket full is after one century. Well in that case, it would be wasteful to be dumping water through this bucket for a century.
All heat that is lost must be replaced (we must input enough heat that the device returns to T_initial).
Hotter objects lose heat faster, so the longer we delay restoring temperature (for a fixed resume time), the less heat is lost that will need replacement.
Hotter objects require more energy to add another unit of heat, so the cooler we allow the device to get before re-heating (again, resume time is fixed), the more efficient our heating can be.
There is no countervailing effect to balance; preemptively heating a device before the last possible moment is pure waste no matter the conditions (although the amount of waste varies a lot, it will always be a positive number).
Even turning the heater off for a millisecond is a net gain.
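For anyone who wants to poke at this, here is a lumped-capacitance sketch (Newton's law of cooling); the heat capacity and loss coefficient are illustrative guesses, not measurements of any real hotend:

```python
import math

C = 15.0       # hotend heat capacity, J/K (assumed)
k = 0.5        # heat-loss coefficient to the room, W/K (assumed)
T_HOT, T_AMB = 200.0, 20.0   # print and room temperatures, deg C

def energy_keep_hot(pause_s):
    # Holding at T_HOT: the heater continuously replaces the loss
    # k * (T_HOT - T_AMB) for the whole pause.
    return k * (T_HOT - T_AMB) * pause_s

def energy_switch_off(pause_s):
    # Off: temperature decays toward ambient, then we pay only the
    # sensible heat needed to climb back to T_HOT at resume time.
    T_end = T_AMB + (T_HOT - T_AMB) * math.exp(-k / C * pause_s)
    return C * (T_HOT - T_end)

for pause in (1, 10, 60, 600, 3600):
    print(f"{pause:>5} s pause   keep hot: {energy_keep_hot(pause):9.1f} J"
          f"   switch off: {energy_switch_off(pause):7.1f} J")
```

In this model "off" wins even for a 1-second pause and the gap only grows; the model ignores heat lost during the reheat ramp itself, which narrows but never reverses the result, consistent with the argument above.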
Does it depend on whether you know in advance _when_ you need it back at the hot temperature?
If you don't think ahead and simply switch the heater back on when you need it, then you need the heater on for _longer_.
That means you have to pay back the energy you lost, but also the energy you lose during the reheating process. Maybe that’s the countervailing effect?
> Hotter objects require more energy to add another unit of heat
Not sure about this. A unit of heat is a unit of energy, right? Maybe you were thinking of entropy?
No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.
Allowing it to cool below the phase transition point of the melted plastic will cause it to release latent heat, so there is a theoretically possible corner case where maintaining it hot saves energy. I suspect that you are unlikely to hit this corner case, though I am too lazy to crunch the numbers in this comment.
This makes me wonder. OpenAI isn't the only company offering computer use. The list of companies and models that do this will only grow.
Meaning advertisers will have to be selective about which company they pay to get the most exposure to their target human customers via the agents. Will we see affiliate programs for AI agents which in turn promote products to the users? And will we end up with the same shit show we have today?
Or what if eventually everyone has their own personal AI that can bypass the ads? Will we just decide that advertising is a drag on society and kill off that industry for good?
> And will we end up with the same shit show we have today?
Absolutely certain. Furthermore, this can maximize exploitation of each individual human, as the providers have such rich profiles of each human that they can customize pricing to extract the maximum amount of profit. It is by design.
We describe a computing architecture capable of simulating networks with billions of spiking neurons using an ordinary Apple MacBook Air equipped with an M2 processor, 24 GB of on-chip unified memory and a 4 TB solid-state disk. We use an event-based propagation approach that processes packets of N spikes from the M neurons in the system on each processing cycle. Each neuron has C binary input connections, where C can be 128 or more. During the propagation phase, we increment the activation values for all targets of the N neurons that fired. In the second step, we use the histogram of activation values to determine the firing threshold on the fly and select the N neurons that will fire in the next packet. We note that this active selection process could be related to oscillatory activity in the brain, which may have the function of fixing the percentage of neurons that fire on each cycle. Critically, there are absolutely no restrictions on architecture, since each neuron can have a direct connection to any other neuron, allowing us to have both feed-forward as well as recurrent connections. With M = 2^32 neurons, this allows 2^64 possible connections, although actual connectivity is extremely sparse. Even with off-the-shelf hardware, the simulator can continuously propagate packets with thousands of spikes and millions of connections dozens of times a second. Remarkably, all this is possible using an energy budget of just 37 watts, close to the energy required by the human brain. The work demonstrates that brain-scale simulations are possible using current hardware, but this requires fundamentally rethinking how simulations are implemented.
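For intuition, a toy version of that propagate-then-select loop (sizes far below the paper's 2^32 neurons, and the reset and tie-breaking details are my assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)
M, C, N = 10_000, 32, 100   # toy sizes; the paper uses M = 2^32, C >= 128

# Each neuron picks C presynaptic sources; invert that to an outgoing
# adjacency list so each cycle only touches targets of neurons that fired.
sources = rng.integers(0, M, size=(M, C))
targets = [[] for _ in range(M)]
for post in range(M):
    for pre in sources[post]:
        targets[pre].append(post)
targets = [np.asarray(t, dtype=np.int64) for t in targets]

activation = np.zeros(M, dtype=np.int64)
packet = rng.choice(M, size=N, replace=False)   # initial spike packet

for cycle in range(10):
    # Propagation phase: increment the activation of every target of the
    # N neurons that fired (cost scales with spikes, not with M).
    for pre in packet:
        np.add.at(activation, targets[pre], 1)

    # Selection phase: pick the firing threshold from the histogram of
    # activation values so roughly N neurons fire in the next packet.
    threshold = np.partition(activation, M - N)[M - N]
    candidates = np.flatnonzero(activation >= threshold)
    packet = rng.permutation(candidates)[:N]    # break ties randomly
    activation[packet] = 0                      # assume firing resets activation
    print(f"cycle {cycle}: threshold={threshold}, fired={packet.size}")
```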