In 55 years I've never managed to do that, nor has anyone else I know. Plugs normally stay in the wall socket because the socket has a switch - each wall socket for general use must have one. The switch is quite hefty and very obviously off or on, with a red stripe. You get satisfying audible and tactile click feedback when it is switched.
Recently a person brought in a laptop that had apparently been accidentally brushed off a desk, whilst closed, and fallen on an upturned plug. The plug had managed to hit the back of the screen, leaving quite a dent and spider cracking on the screen. The centre of the cracking did not match the dent ...
I'll have to do some trials, but even if a plug is left on the ground, will it actually lie prongs upwards? I'll have to investigate lead torsion and all sorts of effects. It's on the to-do list but not very high.
Don't leave them unplugged. The standard requires all modern sockets to have switches, so there is no reason to have the plugs lying around on the floor.
I've never had an experience in any house or office where anything has ever been unplugged other than to put it away (a kitchen appliance that doesn't need to live on a counter, or a hair dryer, for example).
Buy a fused extension cord with more sockets and you have now turned one socket into 4, 6, or 8. You can even get some that have USB built in, so you don't use a socket up for a phone or tablet charger. They're not even very expensive.
And in an office, I'm pretty sure all equipment (computers, lights, controls for adjustable desks if you have them) is meant to remain permanently plugged in anyway in a properly installed desk setup. What is going on in your office where you're constantly choosing what is plugged in and what isn't? And why can't your office manager spring £20 for an extension cord with multiple sockets?
I've never stepped on a plug myself, so I agree it's not a major problem.
However, some older houses in the UK have far fewer sockets than more modern properties - sometimes only one or two per room.
And sure, if you need to use a hairdryer and a hair straightener, a person with an orderly lifestyle might return them both to a cupboard afterwards - but some people don't mind clutter and just leave them wherever.
When it comes to multiway extension leads - people in the UK are sometimes told it's bad to "overload" sockets but have only a vague understanding of what that means, so some people are reluctant to use them.
"When it comes to multiway extension leads - people in the UK are sometimes told it's bad to "overload" sockets but have only a vague understanding of what that means, so some people are reluctant to use them."
To be fair, most people work on the assumption that if the consumer unit doesn't complain, then it is fair game. They are relying on modern standards, which nowadays is quite reasonable - I suppose it is good that we can rely on them these days.
However, I have lived in a couple of houses with fuse-wire boards, in one of which the previous occupants had put in a nail for a circuit that kept burning out.
Good practice is to put a low-rated fuse - e.g. 5A (red) - into extension leads for most devices. A tuppenny part is easy and cheap to replace, but if a few devices not involved with room heating/cooling blow a 5A fuse, you need to investigate. A hair dryer, for example, should not blow a 5A fuse.
Because sometimes you unplug it and leave it lying around. Unless you live like a king, sometimes there are 2 sockets and you have 5 devices to plug in at different times. European and other plugs will lie on their side, so stepping on them is no problem, but UK ones will be pointy end up.
I have one too! 3 out of its 6 sockets stopped working! I have 2 sockets outside of the mini kitchen area and I have a laptop, phone charger, camera charger, 2 IKEA lamps, .......
There are no UK plugs here so I'm not complaining :)
If keeping everything plugged in works for you, awesome!
The NT kernel dates back to 1993. Computers didn’t exceed 64 logical processors per system until around 2014. And doing it back then required a ridiculously expensive server with 8 Intel CPUs.
The technical decision Microsoft made initially worked well for over two decades. I don’t think it was lame; I believe it was a solid choice back then.
Linux had many similar restrictions in its lifetime; it just has a different compatibility philosophy that allowed it to break all the relevant ABIs. Most recently, dual-socket 192-core Ampere systems were running into a hardcoded 256-processor limit. https://www.tomshardware.com/pc-components/cpus/yes-you-can-...
Tom's Hardware is mistaken in their reporting. That's raising the limit without using CPUMASK_OFFSTACK. The kernel already supported thousands of cores with CPUMASK_OFFSTACK, and has at least since the 2.6.x days.
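For anyone curious, the relevant knobs are ordinary Kconfig symbols, and you can see how your own distro was built. A rough sketch (assuming your distro ships its config at /boot/config-<release>, which not all do):

    # Minimal sketch: report CONFIG_NR_CPUS and CONFIG_CPUMASK_OFFSTACK for the
    # running kernel. Assumes the config is installed at /boot/config-<release>.
    import os
    import platform

    config_path = f"/boot/config-{platform.release()}"
    wanted = {"CONFIG_NR_CPUS", "CONFIG_CPUMASK_OFFSTACK"}

    if os.path.exists(config_path):
        with open(config_path) as f:
            for line in f:
                if line.split("=")[0].strip() in wanted:
                    print(line.strip())  # e.g. CONFIG_NR_CPUS=8192
    else:
        print(f"no kernel config found at {config_path}")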
> Computers didn’t exceed 64 logical processors per system until around 2014.
Server systems were available with that since at least the late 90s. Server systems with >10 CPUs were already available in the mid-90s. By the early-to-mid 90s it was pretty obvious that was only going to increase and that the 64-CPU limit was going to be a problem down the line.
That said, development of NT started in 1988, and it may have been less obvious then.
"Server systems" but not server systems that Microsoft targeted. NT4 Enterprise Server (1996) only supported up to 8 sockets (some companies wrote their own HAL to exceed that limit). And 8 sockets was 8 threads with no NUMA back then, not something that would have been an issue for the purposes of this discussion.
That was what stuck, but supporting the big servers was also part of their multifaceted strategy. That's why the Alpha, Itanium, PowerPC, and MIPS ports existed.
> And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.
If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.
For the same reasons it would on macOS or Windows: most people just aren't writing software which needs to worry about having a single process running many hundreds of threads across 8 sockets efficiently, so it's fine not to be NUMA aware. It's not that it won't run at all - a multi-socket system is still a superset of a single-socket system - it's just that it will run much more poorly than it could in such scenarios.
The only difference with Windows is that a single processor group cannot contain more than 64 cores. This is why 7-Zip needed to add processor group support - even though a 96-core Threadripper presents as a single NUMA node, the software has to request assignment to 2x48 processor groups, the same as if it were 2 NUMA nodes with 48 cores each, because of the KAFFINITY limitation.
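For a sense of what "processor group support" means on the API side, here's a minimal sketch using the documented kernel32 calls via ctypes (Windows 7 or later assumed; this only enumerates the groups, it's not how 7-Zip does its scheduling):

    # Enumerate Windows processor groups. Each group's affinity mask is a
    # 64-bit KAFFINITY, which is where the 64-logical-processor limit comes from.
    import ctypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    ALL_PROCESSOR_GROUPS = 0xFFFF  # documented "all groups" sentinel

    groups = kernel32.GetActiveProcessorGroupCount()
    total = kernel32.GetActiveProcessorCount(ctypes.c_ushort(ALL_PROCESSOR_GROUPS))
    print(f"{groups} processor group(s), {total} logical processors total")

    for g in range(groups):
        count = kernel32.GetActiveProcessorCount(ctypes.c_ushort(g))
        print(f"  group {g}: {count} logical processors")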
Examples of common NUMA-aware Linux applications are SAP HANA and Oracle RDBMS. On multi-socket systems it can often be helpful to run postgres and such via https://linux.die.net/man/8/numactl too, even if you're not quite at the scale where you need full NUMA awareness in the DB. You generally also want hypervisors to pass the correct NUMA topologies to guests as well. E.g. if you have a KVM guest with 80 cores assigned on a 2x64 Epyc host setup, then you want to set the guest topology to something like 2x40 cores or it'll run like crap, because the guest sees it can schedule one way but reality is another.
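To make the topology point concrete, here's a rough sketch that reads the same NUMA layout `numactl --hardware` reports, assuming Linux and the usual sysfs paths:

    # List NUMA nodes and the CPUs that belong to each one, straight from sysfs.
    import glob
    import os

    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node_dir, "cpulist")) as f:
            cpulist = f.read().strip()  # e.g. "0-63" or "0-39,80-119"
        print(f"{os.path.basename(node_dir)}: CPUs {cpulist}")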
There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.
I absolutely stand by the fact that Intel and AMD didn't pursue high core count systems until that point because they were so focused on single-core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand - and Microsoft's processor group hack.
Do you have anything to say regarding NUMA for the 90s core counts though? As I said, it's not enough that there were a lot of cores - they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993; to go past that they started to introduce NUMA designs.
Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process (particularly on Windows, which encourages single-process, high-thread-count software archs) that spans NUMA nodes. It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or where you take the high hardware thread count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.
NT 6.1 (2009) - Processor Groups make the KAFFINITY limit per NUMA node
Xeon E7-8800 (2011) - an x86 system exceeding 64 total cores becomes possible (10 cores x 8 sockets -> requires Processor Groups)
Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where you need to split groups more granularly than NUMA
If x86 had actually hit a KAFFINITY wall, then something like the E7-8800 would have appeared years before processor groups, because >8-core CPUs are desirable regardless of whether you can stick 8 in a single box.
The story is really a bit the reverse of the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86, but it wouldn't have mattered until the 2010s. Ultimately KAFFINITY wasn't an annoyance until the 2020s.
> other systems had been exceeding 64 cores since the late 90s.
Windows didn’t run on these other systems, why would Microsoft care about them?
> x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it
For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.
> Windows didn’t run on these other systems, why would Microsoft care about them?
Because it was clear that high core count, single system image platforms were a viable server architecture, and NT was vying for the entire server space, intending to kill off the vendor Unices.
> For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.
Linux wasn't the only OS. Solaris and AIX were NT's competitors too back then, and supported higher core counts.
That doesn't mean every platform was or would have been profitable. Once x86 became 'good enough' to run your mail or web server, it doomed other architectures (and commonly their OSes), as the cost of x86 was vastly lower than the Alphas, PowerPCs, and so on.
Cockies are the pranksters of the bird world. They're smart and they think it's hilarious to mess with each other and anyone else. They also tear everything to pieces. So it's no surprise really that if any bird worked out how to operate a drinking fountain it'd be these hilarious little jerks.
I was visiting a place that takes in rescue animals, in this case they had a lot of birds.
In their typical speech to people about NOT keeping birds as pets they described some of the birds as "highly curious, the maturity of a human 5 year old, with an intense desire to be destructive".
My wife always jokes about how parrots sound like a fun pet until you consider the phrase "Flying eternal toddlers, that cannot be diapered or potty-trained, with can-opener mouths."
On top of that, they have one tool, and it's a pair of boltcutters you can't take away. And the most clever of them have a good chance to outlive their owners.
There's one at a wildlife sanctuary in Tasmania reported to be 110 or so ("Fred", Bonorong Wildlife Sanctuary). Original owner is long dead, obviously.
I aspire to one day befriend a local murder of crows. Not to keep as pets or to make dependent on me, but maybe to bribe to clean up trash or steal quarters for me... or to defend my honor should the need arise.
We had a galah chewing our hosepipe the other day. I pointed and said "oi!" and the little scamp stopped, straightened up, looked me right in the eye and ... did it again.
Oh, and not to forget the kookas. I heard a pop and a noise like water a few weeks ago, and ran into our living room. Outside the main window there's a hose reel mounted on the wall, and it was spraying freely against the glass. A kookaburra had somehow pulled the Hozelock end off and was taking a shower.
I will never forget watching a kookaburra swoop down as my grandmother went to take a bite out of a bacon sandwich and steal a piece of bacon out of it without touching her or the bread. It then sat on a branch whacking the bacon against it to "kill it" before eating it.
Same with me, but I was camping as a kid. One took the snag out of my mate's bread just as he was about to bite it. It made sure it was dead by hitting it on the tree it landed in.
It seems a standard childhood memory! I had a chicken and salad sandwich downgraded to a salad sandwich while I held it in my hands as a child. A couple of decades later, an almost identical thing happened to my own kid.
The most accurate representation of "Chaotic Neutral" - the cheeky bastards love stealing ANYTHING, and when there's nothing to steal they'll start ripping the rubber off your car door seals (or windshield wipers).
They are amazing birds, very deserving of the name "Clown of the Mountains".
Seagulls, magpies and ibises (I'm not being funny or joking here) have evolved to exhibit cooperative traits and behaviours to get food, including tricking, diverting, cooperating and, most annoyingly, literally staunching people.
I was having a burrito on Manly Wharf a long while back when a seagull just landed on the table and death-stared me... I felt uncomfortable and moved, because I know they will try and take my food off me!
I saw an ibis and a magpie work on opening a McDonald's bin, take out the black rubbish bag, tear it, splay its contents and fish out the paper McDonald's bags!
When I lived in Australia we had an elevated, full-length wooden porch, and where we lived in the hills outside Melbourne we could easily have 20-30 cockatoos hang out on it in the morning. They were mercifully not loud, but they absolutely destroyed the deck rails, and we had to replace them with heavier-duty industrial plastic decking.
Or gangsters. We had a bird feeder, which we occasionally let run dry. A cockatoo got pissed with this, and concocted a scheme. When the feeder was empty he sat on the outside fridge and screeched. Once he got your attention, he made sure he was in full view and started destroying things. He only stopped when you put out more feed.
Amused by this I mentioned it at a neighborhood BBQ, and was greeted by a chorus of "oh yes, that happens at my place too". The guy holding the BBQ held up his BBQ tools and said: "See, brand new, this is the 3rd set". It was a neighborhood wide protection racket run by one bird.
Indeed. My father spent a lot of time bellowing at cockatoos that’d land in his fruit trees and tear them to pieces. He’d storm about and wave a broom at them until they took off. Classic old man yelling at clouds.
When he was on the other side of the house in the garage, they’d take fruit from the trees and drop them on the sloping driveway so they rolled down into the garage. Come play old fella.
The Bellmac-32 was pretty amazing for its time - yet I note that the article fails to mention the immense debt that it owes to the VAX-11/780 architecture, which preceded it by three years.
The VAX was a 32-bit CPU with a two stage pipeline which introduced modern demand paged virtual memory. It was also the dominant platform for C and Unix by the time the Bellmac-32 was released.
The Bellmac-32 was a 32-bit CPU with a two stage pipeline and demand paged virtual memory very like the VAX's, which ran C and Unix. It's no mystery where it was getting a lot of its inspiration. I think the article makes it sound like these features were more original than they were.
Where the Bellmac-32 was impressive was in its success in implementing the latest features in CMOS, while the VAX was languishing in the supermini world of discrete logic. Ultimately the Bellmac-32 was a step in the right direction, and the VAX line ended up adopting LSI too slowly and became obsolete.
You might want to be more specific by what you mean by "modern", because there were certainly machines with demand-paged virtual memory before the VAX. It was introduced on the Manchester Atlas in 1962; manufacturers that shipped the feature included IBM (on the 360/67 and all but the earliest machines in the 370 line), Honeywell (6180), and, well... DEC (later PDP-10 models, preceding the VAX).
My impression of the VAX is, regardless of whether it was absolutely first at anything, it was early to have 32-bit addresses, 32-bit registers and virtual memory as we know it. You could say machines like 68k, the 80386, SPARC, ARM and such all derived from it.
There were just a lot of them. My high school had a VAX-11/730, which was a small machine you don't hear much about today. It replaced the PDP-8 the school had back when I was in elementary school and would visit to use that machine. Using the VAX was a lot like using a Unix machine, although the OS was VMS.
In southern NH in the late 1970s through mid 1980s I saw tons of DEC minicomputers, not least because Digital was based in Massachusetts next door and was selling lots to the education market. I probably saw 10 DECs for every IBM, Prime or other mini or micro.
In all those respects, the VAX was just following on to the IBM 360/67 and its S/370 successors -- they all had a register file of 32-bit general purpose registers which could be used to index byte-addressed virtual memory. It wasn't exactly an IBM knockoff -- there were a bunch of those, too (e.g., Amdahl's) -- but the influence is extremely clear.
The article says the Bellmac-32 was single-cycle CISC. The VAX was very CISC and very definitely not single cycle.
It would have been good to know more about why the chip failed. There's a mention of NCR, who had their own NCR/32 chips, which leaned more towards emulation of the System/370. So perhaps it was orders from management and not so much a technical failure.
Single-cycle doesn't mean that everything is single cycle, but that the simple basic instructions are. As a rule of thumb, if you can add two registers together in a single cycle, it's a single-cycle architecture.
I should have said "supermini". While mainframes had tried a variety of virtual memory schemes, the VAX was the first supermini to adopt the style of demand paged flat address space virtual memory which pretty much set the style for all CPUs since then. A lot of VAX features, like the protection rings etc., were copied to the 80386 and its successors.
Yeah, 1972 - "Nord-5 was Norsk Data's first 32-bit machine and was claimed to be the first 32-bit minicomputer".
The Wikipedia record: https://en.wikipedia.org/wiki/Nord-5
Well in this case it wouldn’t be abuse. I hope they do convict him if he’s guilty of perjury. It will set an example for the other weasels. “Percentage of executives of Fortune 500 companies who do time for real crimes they committed” should be a big KPI for the DOJ in my book.
I really hope you're right, but I think we're more likely to see the case swept under the rug in exchange for totally voluntary donations from Apple to various organisations with 'Trump' in their name.
That's one vision, but it's probably not the most likely one. People like privately owning cars, and as long as they're more convenient than hiring taxis it'll probably stay that way.
Here's another vision of the future - gradually everyone's cars become self-driving, and now cars are more accessible to a wider range of people. 30% of the population currently can't drive due to age or disability, but if cars drive themselves the elderly, disabled, and even children can now own and operate vehicles. And now you have 30% more cars on an already congested road system. That should be enough to make traffic jams the norm everywhere.
But in case that wasn't bad enough, consider this - now people can do other things while they travel, because they don't have to be driving. So, in turn, they can live further and further away from their workplaces in cheaper, larger houses and do more of their work on the go. And while they do this they're spending more time on the roads, and - you guessed it - causing more congestion.
And because parking will always be expensive and hard to find in busy city centers, people will set their cars to loiter while they visit, rather than parking. Just going round and round while their owners shop. Causing - you guessed it - even more congestion.
TL;DR - the most likely result of autonomous vehicles is out of control congestion.
When teleportation becomes a thing society will force supercommuters to teleport in from farther and farther-out to maximize shareholder value while remaining in compliance with their respective companies' hybrid work policies. That you arguably die and are recreated every time you pass through the portal will finally end all discussions around whether your life is worth more than productivity.
This is a very cool project but I feel like the claim is overstated: "PyXL is a custom hardware processor that executes Python directly — no interpreter, no JIT, and no tricks. It takes regular Python code and runs it in silicon."
Reading further down the page, it says you have to compile the Python code using CPython, then generate binary code for its custom ISA. That's neat, but it doesn't "execute Python directly" - it runs compiled binaries just like any other CPU. You'd use the same process to compile for x86, for example. It certainly doesn't "take regular Python code and run it in silicon" as claimed.
A more realistic claim would be "a processor with a custom architecture designed to support Python".
Not related to the project in any way, but if the hardware is running CPython bytecode, I'd say that's as far as it can get for executing Python directly – AFAIK running Python code with the `python3` executable also compiles it into bytecode `*.pyc` files before it runs it. I don't think anyone claims that CPython is not running Python code directly…
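For anyone who hasn't poked at it, that first stage is easy to see with the dis module (the add function below is just a toy example):

    # CPython compiles source to bytecode before its interpreter loop runs it.
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)  # prints LOAD_FAST ... then BINARY_ADD (BINARY_OP on 3.11+)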
I agree with you - if it ran pyc code directly I would be okay saying it "runs Python".
However, it doesn't seem like it does; the pyc still has to be further processed into machine code. So I also agree with the parent comment that this seems a bit misleading.
I could be convinced that that native code is sufficiently close to pyc that I don't feel misled. Would it be possible to write a boot loader which converts pyc to machine code at boot? If not, why not?
Fair point if you're looking at it through a strict compiler-theory lens, but just to clarify—when I say "runs Python directly," I mean there is no virtual machine or interpreter loop involved. The processor executes logic derived from Python ByteCode instructions.
What gets executed is a direct mapping of Python semantics to hardware. In that sense, this is more “direct” than most systems running Python.
This phrasing is about conveying the architectural distinction: Python logic executed natively in hardware, not interpreted in software.
Micropython does run directly on the hardware, though. It's a bare-metal binary, no OS. Which is a different claim to running the Python code you give it 'directly'.
Well, running Python on Raspbian, you could toggle a pin at a maximum of a couple of kHz, nowhere near the 2 MHz you can do with this project. It also claims predictability, so I assume the timing jitter is much lower, which is a very important parameter for real-time applications.
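For context, the kind of loop behind that "couple of kHz" figure looks roughly like this; RPi.GPIO and pin 18 are just illustrative assumptions, and the exact rate depends on the Pi model and library:

    # Bit-bang a GPIO pin from CPython and measure the toggle rate. Every
    # iteration pays interpreter plus library overhead, which is what caps it.
    import time
    import RPi.GPIO as GPIO

    PIN = 18
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIN, GPIO.OUT)

    N = 100_000
    start = time.perf_counter()
    for _ in range(N):
        GPIO.output(PIN, GPIO.HIGH)
        GPIO.output(PIN, GPIO.LOW)
    elapsed = time.perf_counter() - start

    print(f"~{N / elapsed / 1000:.1f} kHz toggle rate from Python")
    GPIO.cleanup()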
Yeah, that was my first thought. Wait a minute, you run a compiler on it? It's literally compiled code, not direct. Which is fine, but yeah, it's overselling what it is/does.
Still cool, but I would definitely ease back the first claim.
I was going to say, it does make me wonder how much of a pain a direct processor like this would be in terms of having to constantly update it to adapt to new syntax/semantics every time there's a new release.
Also - are there any processors made to mimic ASTs directly? I figure a Lisp machine does something like that, but not quite... Though I've never even thought to look at how that worked on the hardware side.
EDIT: I'm not sure AST is the correct concept, exactly, but something akin to that... Like building a physical structure of the tree and process it like an interpreter would. I think something like that would require like a real-time self-programming FPGA?
PyXL deliberately avoids tying itself to Python’s high-level syntax or rapid surface changes.
The system compiles Python source to CPython ByteCode, and then from ByteCode to a hardware-friendly instruction set. Since it builds on ByteCode—not raw syntax—it’s largely insulated from most language-level changes. The ByteCode spec evolves slowly, and updates typically mean handling a few new opcodes in the compiler, not reworking the hardware.
Long-term, the hardware ISA is designed to remain fixed, with most future updates handled entirely in the toolchain. That separation ensures PyXL can evolve with Python without needing silicon changes.