macOS 11 Virtualization Framework to Run Linux in a VM (developer.apple.com)
172 points by _venkatasg on July 23, 2020 | 181 comments


Notably, Apple still has not released an Apple silicon chip that can do virtualization. So as far as I can tell, this framework is currently not testable on their systems using the A12Z.


What were they doing with the A12Z when they demoed Linux running in a VM at WWDC, then?


It appeared to be Ubuntu for ARM64. If it wasn't virtualized, could it have been running under QEMU? Or maybe some sort of WSL-type shim into the Darwin kernel?


Pardon my ignorance, but how is QEMU not virtualization? There have been macOS builds of KVM/QEMU for a couple of years now, and I distinctly remember seeing that it had either been added or was being added to the base OS (google-fu is failing me because of so many results about the reverse: running macOS under KVM/QEMU).


QEMU can either be used for virtualization (e.g. running x86 Linux on your x86 Mac) or emulation (e.g. running a PPC Mac on your x86 Mac).


QEMU is an emulator, not virtualization. It allows the user to run an OS designed for one type of CPU on a host system of a different type, e.g. running Raspbian (ARM Linux) through QEMU on an x64 Linux host running on an x64 machine.

Virtualization, by contrast, allows the user to run a guest OS designed for the same architecture as the physical hardware, e.g. running an x64 Linux guest on an x64 machine that runs Windows as the host OS.


QEMU can do both the complete emulation you describe and accelerated virtualization similar to that offered by VMware.


But to add to the complexity, many virtualization systems use QEMU as a support library, e.g. for device emulation.


That's hardly the point. Many virtualization systems do indeed use QEMU for device emulation, but they don't use it to emulate the entire CPU itself. A key point about virtualization is that most code runs natively while privileged instructions are trapped and handled by the hypervisor, allowing multiple guest OSes to coexist. This gives each guest OS the illusion that it is in sole control of the hardware.


I'm not nearly smart enough on that stuff. I fear I will need to get deeply into it in order to set up tooling that will run on the new Macs for embedded development.


They must not have been using an A12Z for that.


One can certainly write software that virtualizes another machine. Sure, performance might not be great. Sure, it's better if you have an implementation in silicon. But that doesn't prevent making a demo of a software VM running another kernel. What am I missing?


Not sure what you mean by "software VMs," but if you mean CPU emulators, they're fundamentally different from VMs. Emulation is not "virtualization."


No judgement for not knowing, but para-virtualization is not emulation, and it is not a new or niche technology either. I think IBM was doing it in the 60s (I might be off base with that factoid, since modern-ish operating systems like the RC4000 weren't even a thing until the late 60s). It has simply been superseded by hardware-assisted virtualization that allows for multiple "ring-0" guests.

There is greater overhead and more limited scope when using paravirtualization. But that doesn't mean it is a relic; in fact, you can try it right now in VirtualBox. I also believe that Linux KVM has para-virtualization optimizations if the guest is Linux and the bare metal doesn't support hardware VT.


I'm confused, are you sure you've responded to the right comment? No one is talking about paravirtualization in this thread. As I take it, the parent comment was suggesting that Apple was faking virtualization with some kind of software trick in the demo, which isn't what paravirtualization is about.

Paravirtualization is a general optimization technique for VMs that allows the guest OS to better communicate its intent to the hypervisor through hypercall APIs, sparing the guest OS from having to issue long series of privileged CPU instructions that each have to be trapped and handled by the hypervisor. It saves time because going back and forth between the hypervisor and the guest OS is an expensive operation, and reducing the number of times it happens helps a lot. It's not a simple replacement for hardware-assisted virtualization or the other way around.
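
To make the batching point concrete, here is a hypothetical C-style sketch. set_pte() and hypercall_mmu_update() are made-up names for illustration, not a real API (Xen's actual mmu_update hypercall batches in a similar spirit):

  /* Fully virtualized guest: each privileged page-table write traps,
     so updating n entries costs n guest->hypervisor transitions. */
  for (size_t i = 0; i < n; i++)
      set_pte(pte[i], val[i]);

  /* Paravirtualized guest: one hypercall describing the whole batch,
     costing a single transition. */
  hypercall_mmu_update(updates, n);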


Perhaps I misunderstood. Here is the logic flow I followed:

(1) Apple has a virtualization framework that can "...boot and run a Linux-based operating system on an Apple silicon or Intel-based Mac computer."

(2) The virtualization framework for ARM-based Macs will virtualize ARM Linux, and x86-based Macs will virtualize x86 Linux. There may be a semantic issue here if you believe that this API would also allow the "virtualization" of x86 Linux on ARM. That would not be considered virtualization, to my best understanding of the definition.

(3) Looking at Apple's VZVirtualMachineConfiguration, VZVirtualMachine, and "Virtualization Constants", there is nothing apparently exposing the underlying virtualization mechanism. So we don't know if Apple uses hardware virtualization (ring -1 in x86 parlance) or software virtualization (almost always para-virtualization) under the hood. Likely it uses both depending on the context.

(4) We know that Apple's current dev kit hardware (A12Z) doesn't support hardware virtualization. Hardware virtualization is commonly referred to as the "Virtualization Host Extension" in arm64 parlance.

(5) We know that Apple demoed virtualization on ARM with what appeared to be an arm64 Ubuntu guest.

(6) Your original message said that you believed they weren't using an A12Z for the demo, likely because of its aforementioned lack of hardware virtualization.

(7) The comment below yours suggests that "One can certainly write software that virtualizes another machine". They were more than likely referring to para-virtualization in their comment. I don't know any other blanket type of software virtualization.

(8) You responded saying you didn't know what software virtualization was; it is almost always para-virtualization. Para-virtualization does not require a hardware hypervisor and predates all hardware virtualization technologies.

While there is full-hardware virtualization, it's basically never a thing that happens anymore because of the rather large overhead. I don't know of any modern virtualization software that offers full software virtualization, especially if your guest is Linux.


> (6) Your original message said that you believed they weren't using an A12Z for the demo

I never claimed this

> (7) The comment below yours suggests that "One can certainly write software that virtualizes another machine". They were more than likely referring to para-virtualization in their comment.

Well, yes and no. Virtualization (in the context of VMs) is about sharing the hardware between multiple OSes, so the statement about "virtualizing another machine" didn't even remotely make any sense. This led to my original comment, "emulation isn't virtualization."

And one small thing, it was the comment above mine, not below.

> (8) You responded saying you didn't know what software virtualization was; it is almost always para-virtualization.

Where can I find references to this term that support this statement? I looked, and found the term "software virtualization" being used to refer to containerization and programming language runtimes, but not VMs.

> While there is full-hardware virtualization

I couldn't find anything about this either.


You are correct about (6)! Apologies, that was a different user!

7/8 together: a Google Scholar search for "software virtualization" or "software virtualization para virtualization" returned the following (plus many more), which refer to software virtualization in the same way I meant:

https://dl.acm.org/doi/abs/10.1145/1168918.1168860

https://ieeexplore.ieee.org/abstract/document/4709159/

https://ir.library.oregonstate.edu/dspace/handle/1957/9907

http://www.cs.toronto.edu/~demke/2227/S.14/Papers/p2-adams.p...

Finally, I dopishly wrote "full hardware virtualization" when I meant "full virtualization", but for posterity here is VMware's doc on it vs paravirtualization vs hardware-assisted virtualization: https://www.vmware.com/content/dam/digitalmarketing/vmware/e...


Paravirtualization is relevant if the CPU does not support full virtualization. Xen was originally designed to use PV because it was very difficult to make x86 VMs fast: unmodified guest OSes required slow emulation. See section 2 of https://www.cl.cam.ac.uk/research/srg/netos/papers/2003-xens...


You may be confusing the concept of hardware-assisted virtualization with full virtualization. Moreover, the famous Xen paper you've linked to never claimed that x86 didn't support full virtualization; it did, thanks to VMware.


VMware didn't do full virtualization - it couldn't because x86 did not support it at that time! There were privileged instructions that did not cause a trap, and so which could not be virtualized using the normal trap-to-monitor technique. VMware used dynamic translation to JIT the guest kernel so that it did not execute privileged instructions, and so that it would run much faster than a simple emulator. This is explained in the Xen paper and also in https://inst.eecs.berkeley.edu//~cs252/sp17/papers/vmware.pd...


From the Xen paper:

> VMware [10] and Connectix [8] both virtualize commodity PC hardware, allowing multiple operating systems to run on a single host. All of these examples implement a full virtualization of (at least a subset of) the underlying hardware, rather than paravirtualizing and presenting a modified interface to the guest OS.

And no, I haven't forgotten about binary translation. As you mention, it was only used to replace privileged instructions, not as a full-blown CPU emulator. VMware VMs still ran native CPU instructions, and the overhead incurred by translation is a whole different matter, unrelated to my original point.


A virtual machine does not imply virtualization, and can use an emulated CPU.


If you look at the original post, it says "virtualization" in big bold letters. The precise definition of the term "VM" may perhaps be debatable, but I don't think it's fair to market your system as supporting Linux VMs when, in fact, you're emulating the CPU instead of virtualizing it.

More importantly, I still haven't got the slightest idea what a "software VM" means, either. It's a term that I've never seen before. I even did an online search and found nothing.


Visit Wikipedia, in the search field type "virtual machine" but do not hit enter or search. Notice the text in the immediate results says "software that emulates an entire computer." Now, visit the page[1]: "...a virtual machine (VM) is an emulation of a computer system." This says nothing about whether the virtualization is entirely software, assisted by hardware, or entirely hardware.

A "software virtual machine" is a disambiguation that I chose indicating that the "machine" is implemented entirely in software with no help from special silicon (contrast with [2]). I can't fathom why that would be so controversial.

The entire thread comes down to this: the demo of x86 Linux running on Apple Silicon could very easily have been running in a virtual machine made entirely of software. No one claimed, as I recall, that Silicon implemented any hardware assistance for executing x86 code. There might even be IP issues doing that (IP - intellectual property, not "internet protocol".)

See also [3]

1 - https://en.wikipedia.org/wiki/Virtual_machine

2 - https://en.wikipedia.org/wiki/Hardware_virtualization

3 - https://en.wikipedia.org/wiki/Comparison_of_platform_virtual...


> Visit Wikipedia, in the search field type "virtual machine" but do not hit enter or search.

Wikipedia is a useful tool, but it's wrong to rely on it for precision or as the absolute source of truth, especially on highly technical topics.

> This says nothing about whether the virtualization is entirely software, assisted by hardware, or entirely hardware.

Again, what does this even mean? What's your specific example for an "entirely software" virtualization or "entirely hardware" virtualization?

> A "software virtual machine" is a disambiguation that I chose indicating that the "machine" is implemented entirely in software with no help from special silicon (contrast with [2]). I can't fathom why that would be so controversial.

You can't just invent a new term without any explanation and wonder why people wouldn't just "get it."

> The entire thread comes down to this: the demo of x86 Linux running on Apple Silicon could very easily have been running in a virtual machine made entirely of software

Are you sure of this? I was assuming it was ARM Linux.

> No one claimed, as I recall, that Silicon implemented any hardware assistance for executing x86 code.

No one claimed that you claimed such a thing either.


Where can I read about the true definition of virtualization?


If you really want a formal definition, you could read this:

https://profsandhu.com/cs6393_s14/popek-goldberg-1974.pdf

Though some details may arguably be outdated, the general concept applies.


Thanks, the first section is pretty simple and covers it well.


It was ARM Linux, one of the demos confirmed this.


I would consider that a fraudulent demo, since it is selling something different from what it is showing.


No? There was nothing in the context of that video to suggest it was running on an A12Z. They were showing you the features that will be available on the chip they're going to bring to market sometime this year. This chip must already exist in a near-production form in order to meet that deadline.


I’m not saying they did; I’m pretty sure they were running it on an undisclosed chip. What would be fraudulent would be emulating it on the A12Z and claiming they were virtualizing.


Apple isn't selling anything at the moment. It was only an announcement, and it remains to be seen if what they deliver matches what they announced.


They already said they were using the Mac mini with the A12Z for the whole presentation demo.


They also mentioned pre-release hardware in the “undisclosed location” — if I recall correctly it was before they did a demo of virtualization.

Well, after Googling, I can’t find the exact quote that referenced new hardware in live blogs, but it’s telling that just before they did a demo of virtualization, Maya and so on, they said, “All of the Big Sur features demonstrated earlier were being run on the development platform” according to Anandtech — meaning the earlier Big Sur demo was on the A12Z but the next part would be possibly newer hardware.


Parallels clearly has something significant under wraps: https://www.parallels.com/blogs/apple-silicon-wwdc/

My guess: Parallels on Apple Silicon will support virtualizing AArch64 VMs, and also x64 VMs through Rosetta 2. Support for AArch64 alone doesn't seem interesting enough to keep under wraps, and Apple did commit to supporting x64 JITs, so x64 VMs seem like a natural extension.


We'll see, but I doubt it.

A user-level emulator is a completely different beast performance-wise from a full-system emulator. With a user-level emulator, the kernel and possibly even the shared libraries are native code, and the code that needs to be emulated is relatively easy to translate from one architecture to another.

For a full-system emulator, not only does the kernel need to be emulated too, but all the system-level instructions have to be emulated as well (unlike user-space instructions, which can be JITed).

Just compare the performance of qemu-system-aarch64 with qemu-aarch64 running on amd64, and you will see a MASSIVE difference. It's very likely Rosetta will be more optimised than QEMU, but still there is a fundamental problem here.

I'm sure qemu-system-x86_64 will run on AArch64 Macs, but I doubt Apple/Parallels/VMware will touch this space.


Fair points, and on reflection you are probably correct.

But Apple has a history of weird but successful systems that nobody else tried, like a 64-bit user space with a 32-bit kernel, or mixed-endian processes sharing memory. And there's pointed coyness around "Apple Silicon", and Parallels is keeping mum. So I want to believe.


I need to believe because it makes a massive difference for me as an embedded developer. I can't use Macs anymore if I can't run multiple x64/x86 Linux VMs on them.


When did Apple do the mixed-endian processes? Was this with PowerPC Macs?


It would be interesting to see a blend of Rosetta2 for translating and running user-space x64 code and a WSL-style kernel "shim" to provide direct access to the native aarch64 Darwin kernel.


They said a Mac with Apple silicon, not necessarily the developer transition kit. If they were running the demos on the DTK, they would probably have shown the device and said so explicitly.


Where?


In the presentation itself, it was towards the end


They said the demos were running on Apple silicon. And they said the DTK runs an A12Z. They never said the demos ran on the DTK or on an A12Z.


I'd argue that very little is actually known about the A12Z hardware implementation details, because there are not many out there and they are largely in the hands of folks trying to build their commercial software as fast as they can. I'd wager the macOS 11 build for these developer platforms is largely focused on just making sure Xcode runs, not on extensibility in the system software.


> I'd argue that very little is actually known about the A12Z hardware implementation details

It shipped on a couple million iPads, we know a decent amount about it already.


> Notably, Apple still has not released an Apple silicon chip that can do virtualization

Even if Apple's ARM chips are not classically virtualizable (and I don't know whether that is the case), that was indeed the case for x86 (pre-VT-x), where two fairly straightforward solutions were applied: binary translation of non-virtualizable instructions (the VMware approach) and paravirtualization (the Xen approach). This sparked the x86 virtualization revolution even before hardware support was available from Intel and AMD.

Other approaches that could potentially work on a "non-virtualizable" ARM CPU include user-mode Linux, emulation, microkernels (similar to paravirtualization; recall XNU contains Mach and presumably still has the ability to outsource system calls and paging), and containers.


AFAIK this is an ARM IP module they can just add.


It's not; it has to be heavily integrated into the processor core.

In particular, the main piece of virtualization is an added level of indirection in the page-table-walking hardware.

There's some IOMMU stuff that can be bought off the shelf (and ARM's SMMU for this is really good actually), but I imagine Apple would build their own (or acquire it) looking at the rest of their IP blocks.

Edit: Actually I'm not even sure if they could pull an SMMU in. Do we have confirmation that they're using AXI/CHI or do they have something else for their NoC protocol?


(Disclaimer: I know much more about x86 virtualization than ARM.)

Apple isn’t really in the server business, and the kinds of high performance VMs that want direct hardware access seem unlikely to run on Apple silicon in the near future. It seems to me that virtualization could work just fine without an IOMMU in this scenario. (Certain GPU workloads would be an exception.)

That being said, I would expect Apple to have an IOMMU at launch for a different reason: Thunderbolt or any other external PCIe connection. Doing this without an IOMMU is a security catastrophe, in contrast to doing it with an IOMMU, which is merely an enormous attack surface that no one secured properly.


Apple Silicon Macs should not only have an IOMMU, but apparently each device should have its own. They talk about this in the "Explore the new system architecture of Apple Silicon Macs" video [0] (starting at ~9:14).

[0] https://developer.apple.com/videos/play/wwdc2020/10686


I wouldn't count Apple out of the server business for long.

After they have a couple generations of laptop silicon under their belts, there's nothing stopping them from dogfooding a real server macOS for a while and booting up an Apple Service Cloud.

No special insight into whether they actually will, but it's a natural play.


IIRC the old macOS Server had atrocious performance. Building a product that can compete with Linux or FreeBSD for general server workloads is a lot of work. Apple could do it, but the investment might be hard to justify.

There's also an issue of margins. Apple sells attractive hardware and provides a software ecosystem, and they charge high margins for it. Big server users run a large number of servers, and they want a lot of bang for their buck. This is not a game that Apple has historically played very well, nor do I see why they would want to.


Thinking of the same. I was skeptical of an ARM Mac Pro, but now that Apple is going all in, I thought they might as well use those for their iCloud.

Darwin, macOS Server. This sounds fun.


They may have built something based on the A13 for their own cloud needs, which may be why the iPad CPU hasn't changed and skipped a year.


Apple is really bad at building products that their leadership doesn't want to use personally. Ping and iAd come to mind; server would be the same.

Yet Apple has quite the powerful chip family. If they were to spin off a subsidiary without the consumer-focused mission...well, that's what I would do!


I can read the English, I can Google the lingo, but I'm just a software guy still, and this only parses to me as "there's a bunch of memory management hardware on a CPU that is really important to efficient virtualization, some of it standardized by ARM, some which might make it trivial to support virtualization, but no one knows which standards Apple has settled upon in their silicon".

Is there a more ELI5'ish accessible walk through the context and why's behind this part of the discussion? It sounds really fascinating to me, but I'm not yet equipped to understand it well.


The simple answer is that Apple builds their own CPU cores, so they have to build their own virtualization. Virtualization is not something you tack on around a CPU core, it's something that's part of a CPU core. Since Apple aren't using ARM CPUs, they can't use ARM virtualization. Anyone suggesting otherwise is confused :-)

(Apple might be able to leverage off the shelf ARM technologies that might help with virtualization, but not the core feature of virtualization itself)


Thanks so much, that watered it down for me just right.

So are there any tells/indications that Apple's silicon took this into consideration from the beginning, or is virtualization something they had to retrofit into the core? It would be pretty amazing if say, way back in A1 days, we could point to something in the core that indicated they already started laying the groundwork to make virtualization feasible later.


A1 is not a thing, and the first 64-bit Apple Silicon was the A7, so I don't expect anything to have appeared before then.

Virtualization isn't terribly hard to add to a core, so they could've started thinking about it at any time. It's possible that some of their cores already support it and we just don't know; that would be a very Apple thing to do. The way virtualization on ARM works (at least the way ARM themselves implemented it; Apple could've done something differently) is that there are three execution levels: EL2 (hypervisor), EL1 (guest kernel), and EL0 (guest userspace). So a device that supports EL2 but drops immediately to EL1 on boot to run a normal kernel (without virtualization active) would not necessarily have obvious "tells" that it supports virtualization, unless you broke into the boot process early enough to catch it in EL2.

It would be interesting to break into an A11 device using the checkm8 exploit and see if there is any evidence of EL2/virtualization support on that core.
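
If anyone tries that, the check itself is small: the current exception level is readable from the CurrentEL system register, though the read is UNDEFINED at EL0, so it has to run from kernel or early-boot code (i.e. after a checkm8-style break-in), not from a normal app. A minimal AArch64 sketch in C:

  #include <stdint.h>

  /* Bits [3:2] of CurrentEL hold the exception level.
     UNDEFINED at EL0: must run at EL1 or above. */
  static uint64_t current_el(void) {
      uint64_t el;
      __asm__ volatile("mrs %0, CurrentEL" : "=r"(el));
      return (el >> 2) & 0x3; /* 0, 1, 2, or 3 */
  }

Catching the boot chain while current_el() still returns 2 would be the tell that the core implements EL2.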

Here's a fun one though: Apple CPUs did at least at one point support EL3 (that's one level higher, TrustZone), which they used for KPP:

https://xerub.github.io/ios/kpp/2017/04/13/tick-tock.html

Which suggests they might support EL2 and virtualization too. Honestly, I can't find any trustworthy reference claiming that existing Apple Silicon supports virtualization, nor that it doesn't. For all we know, it does.


No shipping Apple CPUs support EL2. KPP/WatchTower was inherently racy/bypassable and has been dead for years, replaced with KTRR, which is baked into the silicon itself.


They’re using ARM CPUs but not arm’s CPUs ;)


They are using the ARM ISA on their own custom silicon.


The A14 is rumored to have this already; it's not that they don't know how to do it, but rather that they had no reason to until now.


Parallels Desktop demoed running Linux on Apple Silicon at WWDC; maybe they have an internal developer kit based on the A14, but it cannot be made public.


What's the reason to do it now?


They’re shipping Macs that need it by the end of the year.


Macs need virtualization to run guest OSes, since Macs are big in the pro and developer market. There is no current use for VMs on a phone, so they probably just left that module out of past chips.


Developers are something absurd like 20% of their install base, so Docker gets a first class seat at the table from a product requirements perspective? Albeit in aarch64 mode.


Developers != someone using macOS as a pretty Linux to deploy on Linux.

Also, I have yet to see any benefit from Docker in the Apple platform ecosystem.


WSL


Rather than use Intel CPUs in laptops with an ARM touchbar, maybe they'll use ARM CPUs with Intel in the touchbar.

Then virtualization will just occur in the touchbar using emojis as feedback. ;)


Is virtualization that hard to do on ARM? With the chip talent Apple has, I’d be surprised if they don’t pull this off at least as well as Intel.


No, it’s not particularly difficult; it’s just that they didn’t have a reason to do it earlier, so they didn’t support it.


Looks like this is just the "Virtualization Extensions" (Apple silicon support) for the Hypervisor framework (which has been around since 10.10 and is used by xhyve, and I believe the macOS Docker port?): https://developer.apple.com/documentation/hypervisor

Is that correct?


No, this looks separate. You'll notice the APIs have to do with virtualized networks and filesystems, rather than lower-level hypervisor tasks (creating and running virtual machines, servicing exits, etc.)


Possibly it was needed for no-driver virtualization while preserving the same functionality as https://blogs.vmware.com/teamfusion/2020/07/fusion-big-sur-t... — note there’s a newer beta if you view the community link, I think, and I can’t find an equivalent public beta from Parallels yet, but presumably they’re working hard on it...


Parallels has had a shipping product that uses Hypervisor.framework for a couple of years - it's how they ship Parallels Lite (or whatever it's called) in the MAS.

In the non-MAS version there's a per-VM option to use either Parallels' or Apple's hypervisor.


That VMware tech preview already ran in Beta 2, which didn’t have this framework yet.

No kernel extension required


I’ll admit that though I ran it in Beta 2, I’m not aware of the details of which release had which API, or which features VMware is using or not. It’s possible VMware, having inside access to Apple docs, started working with private APIs that Apple has now made public? I really don’t know the details; I’d be happy if others chimed in, though. :)


If by “no-driver” you mean “no kernel extensions”, that’s just part of Hypervisor.framework which shipped years ago. Perhaps they just added support for that?


My understanding is the Hypervisor framework didn’t easily support file system connectivity like FUSE, or different networking scenarios with virtio, but perhaps the new, linked APIs here in Big Sur now offer that? I presume Apple asked VMware and Parallels why neither used the new Hypervisor framework and this was the resulting collaboration, but it might also have come about simply because Apple wanted to demo virtualization on ARM, of course...


Parallels has been able to use Hypervisor.framework since version 12, AFAIK. That was not yesterday.


And it reportedly had limitations, including performance: https://communities.vmware.com/thread/551842

This is something Apple and VMware are now obviously working on together for Big Sur.


I believe Docker For Mac uses Hyperkit now, an offshoot of xhyve.

https://github.com/moby/hyperkit


xhyve uses Hypervisor.framework internally. It's just a port of bhyve's drivers to macOS's native hypervisor.


Right. For clarification purposes, Hyperkit is a fork (I think) of xhyve. Both Hyperkit and xhyve use Hypervisor.framework.


Maybe we can view it as a high-level API on top of the Hypervisor framework?


High-level ObjC APIs vs lower-level C APIs
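
For a sense of the high-level side, here's a rough sketch pieced together from the class names and header comments in the beta SDK. Untested, and the paths are placeholders:

  #import <Virtualization/Virtualization.h>

  // Sketch: boot a Linux kernel + initrd in a VM.
  static void startLinuxVM(void) {
      VZLinuxBootLoader *boot = [[VZLinuxBootLoader alloc]
          initWithKernelURL:[NSURL fileURLWithPath:@"/path/to/vmlinux"]];
      boot.initialRamdiskURL = [NSURL fileURLWithPath:@"/path/to/initrd"];
      boot.commandLine = @"console=hvc0";

      // Machine description: CPUs, memory (and, normally, devices too).
      VZVirtualMachineConfiguration *config =
          [VZVirtualMachineConfiguration new];
      config.bootLoader = boot;
      config.CPUCount = 2;
      config.memorySize = 2ULL << 30; // 2 GiB

      NSError *error = nil;
      if (![config validateWithError:&error]) {
          NSLog(@"invalid configuration: %@", error);
          return;
      }

      VZVirtualMachine *vm =
          [[VZVirtualMachine alloc] initWithConfiguration:config];
      [vm startWithCompletionHandler:^(NSError *err) {
          // err == nil means the guest kernel is booting.
      }];
  }

Note there's no exit-handling loop anywhere; the framework runs the vCPUs for you, which is exactly the "higher level" distinction.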


Ironically the "Year of Desktop Linux" seems to be running GNU/Linux inside VMs on macOS, Windows and even ChromeOS.


Year of "virtualized Linux on Desktops!" - growing strong since 2010, powered by Docker, k8s and Cloud.


Thing is, it all proves the point that most users of such solutions don't really care about Linux per se; rather, they want a POSIX CLI.

Otherwise they would be paying GNU/Linux OEMs.


Exactly. BlinGyUI on the streets, POSIX under the sheets - is what many really want.


The real question here is how documented/open the boot loader is gonna be and if (how) it'll be possible to just boot whatever ARM64 code instead of going down the hypervisor way.


You might be interested in this video, specifically the parts of how boot security works: https://developer.apple.com/videos/play/wwdc2020/10686/


Yeah, I’m aware of that process. The question here is whether disabling secure boot (which in that video would allow running “even old macOS versions”) would also permit unsigned blobs to boot and set up the silicon.

We are still not sure about ASi’s memory map and if/how there are other limitations that might prevent it from booting anything other than macOS.


Isn’t this just the HV FW that exists already? Would you really expect the API to change between x86 and Apple ARM chips?


Well, hopefully not; however, given that x86 and ARM are different CPUs, the API will likely change.


This is a _much_ higher level API - hypervisor.framework is documented here: https://developer.apple.com/documentation/hypervisor


Yeah, hypervisor.framework is a very thin passthrough to the Intel VMCS page manipulation. It's so low level it wouldn't even make sense on an AMD CPU.
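
To illustrate just how thin: with Hypervisor.framework you map guest memory, program VMCS fields, and decode every VM exit yourself. A skeletal sketch (heavily elided; a real guest needs far more VMCS setup before hv_vcpu_run() does anything useful):

  #include <Hypervisor/hv.h>
  #include <Hypervisor/hv_vmx.h>
  #include <stdlib.h>

  int main(void) {
      hv_vm_create(HV_VM_DEFAULT); // one VM per process

      // Guest "physical" memory is just page-aligned host memory.
      size_t size = 1 << 20;
      void *mem = valloc(size);
      hv_vm_map(mem, 0, size,
                HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

      hv_vcpuid_t vcpu;
      hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

      // You program the VMCS yourself (entry point, segment state,
      // control fields, ...) before the vCPU will run.
      hv_vmx_vcpu_write_vmcs(vcpu, VMCS_GUEST_RIP, 0);

      // Run, then decode and handle each VM exit in userspace.
      hv_vcpu_run(vcpu);
      uint64_t reason;
      hv_vmx_vcpu_read_vmcs(vcpu, VMCS_RO_EXIT_REASON, &reason);

      hv_vcpu_destroy(vcpu);
      hv_vm_destroy();
      return 0;
  }

The new Virtualization framework sits several layers above this; you never touch a VMCS field or an exit reason.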


But they will be available on ARM?


Yes, it’s already documented (separately for Intel and Apple Silicon):

https://developer.apple.com/documentation/hypervisor


Not without significant changes. The Hypervisor.framework API was quite specific to x86.


Seems like the core API is the same to create and manage virtual machines, it’s just that Intel-specific features like VMCS aren’t available on Apple silicon.


I'm assuming Big Sur on ARM will be virtualizing ARM, not emulating Intel, meaning a VM will need to run an ARM Linux distro.

In the short term I expect that to be problematic. First party packages in the distro's package manager will be fine, but I expect it might be hard to find some third-party software compiled for ARM Linux. And any Docker containers in the Linux VM will need to use multi-arch or ARM Docker images.


I don't think it'll be all that bad: one advantage of Raspberry Pis having been out for several years is that there's already been some demand for ARM and ARM64 builds of software.

Closed source, commercial software might be a bit more of a crapshoot, of course, but this should provide a fair bit of impetus.


Actually, the rPi foundation has been extremely lazy about ARM64 support. The Pi 3 and newer support it (and I run 64-bit Arch on mine), but Raspbian does not. I'd wager 95% of Pi users are stuck with 32-bit ARM software, as you have to go out of your way for ARM64 support.


There is a beta build of Raspberry Pi OS (née Raspbian) for ARM64 available: https://www.raspberrypi.org/forums/viewtopic.php?p=1668160


This. Plus, Fedora, Arch/Manjaro, and many others have great AArch64 support. Pinebook Pro user here.


If I am counting correctly, nixpkgs (NixOS) Hydra currently builds 22369 packages successfully on AArch64. IIRC, 98% of Debian unstable also builds on AArch64.

So even though the rPi foundation may be slow (I don't know), the larger community has been working hard on making software build on AArch64.


To be fair, Raspbian is meant to support first-time users. They have explicitly said that they want to produce something that works for the widest range of hardware.


There might be nothing available on this page right now, but wow is this useful. I recently started doing some macOS dev and realized that this is a pretty big limitation when it comes to my normal CI workflows. I know there are some projects that dockerize macOS but they did not seem simple to work with.


> wow is this useful.

I agree.

I liked that you could run Docker on OS X, but there was no support for something like FROM macosx:10.6

Maybe it will be possible? If Apple put 1/1000th of the effort into virtualization/containers that they put into emojis...


Many hosted CI solutions (e.g. Travis, GitHub Actions) support OS X natively.


I don’t understand the headline. I already run Ubuntu, which is Linux, under VirtualBox, on a Mac.


I suppose the idea is to not need VirtualBox, like in WSL under Windows.


It’s the same kind of emulation as VirtualBox. (And WSL2.)


BTW AFAIK in Win10, both the Windows kernel and the WSL2 Linux kernel run under the same hypervisor. Apparently performance implications are minimal.


Apple’s CPUs have historically not been able to do this. Apple is adding APIs to make this possible.


Is it possible yet to virtualize an OSX install without breaking the EULA?


"(iii) to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using macOS Server; or (d) personal, non-commercial use."

https://www.apple.com/legal/sla/docs/macOSCatalina.pdf


That's an improvement, but still severely limiting. Real shame.


It has been this way since Mac OS X 10.7 (Lion), released in 2011. I recommend keeping up with current events.


I lost interest in Apple over a decade ago. Too many restrictions, and overpriced hardware. Plus, they've never once made a serious play in the server space (nobody uses OSX Server unless they really have to).


Yes, the EULA allows running macOS on macOS. (I realize this isn't what you want.)


Technically I believe you're still "covered" if you were to, e.g., run a macOS VM under some non-macOS "host" on Mac hardware.

I'd imagine the most common scenario for that would be macOS guests running on ESXi on Mac hardware, but my understanding is that, e.g., a macOS guest under say VirtualBox or KVM with a Linux host OS should also be "ok" in terms of the EULA.


Someone quoted the actual EULA below. I won't repost it since I don't want to steal their thunder, but it's very clear you can virtualize up to two instances of macOS on your Mac computer.

Extremely limiting, and a real bummer.

Here in 2020, Apple invests a lot of resources to allow other OSes to be virtualized on macOS, but you still can't virtualize macOS on other OSes.


> but you still can't virtualize macOS on other OSes

The EULA does not say "you must virtualise macOS as a guest VM on a macOS host". It says you must do it on Mac hardware.

As I said above - ESXi on Mac with macOS guests is very much a thing that happens, at reasonable scale.


I doubt it. VMware would need to develop special drivers just for macOS, since there seems to be practically zero cooperation from Apple here. Even with some compatibility shim or custom drivers, just like Hackintoshes, these VMs would be subject to breaking on every OS update.

And the EULA clearly states it only allows up to 2 instances of macOS to be virtualized on a Mac computer already running Apple software, i.e. macOS.[1]

[1] https://news.ycombinator.com/item?id=23924200


ESXi runs on Macs really well; some models are even officially supported by VMware: https://www.vmware.com/resources/compatibility/search.php?de....

And yes, you can run macOS in ESXi, on Mac hardware.


You can doubt it all you want. Some people doubt the earth is a globe too.

https://www.macstadium.com/vmware


I'm not sure where the confusion is coming from. Have you read the linked EULA?

Of course you can run ESXi on Mac _hardware_. You can run nearly any OS or software on the Mac _hardware_. It's, until recently, pretty standard Intel x86 hardware.

You cannot, however, run macOS on Dell hardware as an ESXi guest. It very clearly violates Apple's EULA for macOS, even if you could trick it into doing so.

You can look around the internet for how to install a macOS guest on non Apple hardware running ESXi. It's not easy, and requires patching ESXi, and more.

Plus, the hosting company you linked to is clearly running things on Mac hardware. They even say it.

Doing otherwise would clearly violate Apple's EULA, which is what all this was originally about.


> Have you read the linked EULA?

Have you read what I wrote? Or even what you asked originally?

You asked if macOS can be virtualised. The answer is yes, it can, on Mac hardware.

If you want to know if it can be virtualised on non-mac hardware, perhaps that's what you should have asked.


> You asked if macOS can be virtualised. The answer is yes, it can, on Mac hardware.

I guess that's fair enough. It used to be you couldn't virtualize OSX/macOS on anything, period. So this is a step in the right direction.

I suppose the historical lack of virtualization provisions in the license agreement led to the rise of insane concoctions like Imgix[1] - literally custom fabricating racks to hold a bunch of Mac Pros in a data center - absolutely insane, but necessary if you wanted a macOS/OSX stack.

I guess it's implied that virtualization would be hardware agnostic... since that's a primary reason to virtualize an OS.

Artificial limitations of only two (2) instances, and only on Apple hardware, are absurd and barely useful at all.

From kbutler's post[2]:

> "(iii) to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using macOS Server; or (d) personal, non-commercial use."

This seems to imply you can only virtualize macOS on Apple hardware that is already running macOS. Since ESXi is a Type-1 hypervisor and includes its own kernel, etc., it seems dubious to wipe the OS and install ESXi on Apple hardware. Perhaps you'll never be caught doing this... but it seems like it would still violate the EULA.

[1] https://photos.imgix.com/racking-mac-pros

[2] https://news.ycombinator.com/item?id=23924200


> Is it possible yet to virtualize an OSX install without breaking the EULA?

Has a single court of law yet made a ruling to set a precedent saying they are legally binding?

If no, I’ll just keep treating EULAs like what they are: a corporate wish list of unlawful restrictions they want to impose on their customers.

Why should anyone care about that?


Why would it not be binding? It's a legal contract containing terms of use and prohibited activities. You give consent, too...

Many of the use cases for virtualizing OSX are for business purposes. It's not a great idea to build a business on pirated software and trampled software licenses.


> Why would it not be binding?

By requesting and reading this reply you hereby grant me 50% of your future income for the next 5 years.

By your logic, this statement should be legally binding too, just because someone somewhere wrote it and put it on your screen.

Obviously it's not, though, so why should a EULA be different? It's literally the same thing.



Yes, shrinkwrap EULAs have been upheld in the US as a contract.

At the very least: ProCD, Inc. v. Zeidenberg, 86 F.3d 1447 (7th Cir. 1996).


“No overview available.”


Documentation has been my biggest frustration with Apple development.


Their documentation had been stellar until right around the new focus on Swift. The C, C++, and Objective-C APIs were documented well, complete with guides on the preferred way to use them. Actually, I think it might have been well before the Swift switch that they headed downhill; I have a vague memory that some big new features came along on OS X where the documentation was lacking and then never caught up to the quality of the older stuff.


Linux is just an app that runs in macOS and Windows.


Ignoring the "just" (it is much more than just that), running one OS under another is scarcely a new thing.

In early 1977, Unix V6 was an app that ran on Interdata OS/32 [1]. (Within a few months, it had gained the ability to run directly on the Interdata 7/32 hardware, without the need for the rather primitive OS/32 operating system to sit under it.)

AT&T's 1979-1980 port of Unix to IBM mainframes ran it on top of the TSS/370 operating system [2]. (TSS was IBM's original, and ultimately unsuccessful, attempt to deliver timesharing for S/360 mainframes – TSO under MVS, and VM/CMS, succeeded where TSS largely failed. But TSS hung on for a few more years, sustained by the handful of customers using it, and AT&T decided it was the best base for their mainframe Unix port, and IBM was willing to support them in that.)

[1] http://bitsavers.informatik.uni-stuttgart.de/bits/Interdata/...

[2] https://www.bell-labs.com/usr/dmr/www/otherports/ibm.pdf


There's an app called iSH on TestFlight that gives you precisely that for iOS as well ;)


iSH is actually pretty great, considering the compromises that must be made to get a local shell on an iPhone.

Just wish it was Debian-based.


> No overview available.

Cool, why is this even posted?


Because it shipped as part of the SDK in the Big Sur beta today.


It’s just a bunch of undocumented class names though.


That's better than Hypervisor.framework was until recently.


That’s basically all you can get from Apple at this time. They aren’t that motivated when it comes to documentation.


Most of the time when I face a problem on my Mac or iPhone, I am surprised to find an official support article by Apple for that problem. But the answer is almost never in that article, and then I have to hunt for the answer on the Internet anyway.


Is there a Docker release for Apple Silicon yet?



Not that I know of. It wouldn't run anyway, for the above reason.


The header file likely has comments that didn’t make it into these docs.


It does! (I just picked the first one that I found, the rest also have comments.)

  $ cat /Applications/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/
  System/Library/Frameworks/Virtualization.framework/Headers/VZBootLoader.h 
  //
  //  VZBootLoader.h
  //  Virtualization
  //
  //  Copyright © 2019 Apple Inc. All rights reserved.
  //

  #import <Virtualization/VZDefines.h>
  
  NS_ASSUME_NONNULL_BEGIN
  
  /*!
   @abstract Base class of boot loader configuration.
   @discussion
       VZVirtualMachineConfiguration requires a boot loader defining how to start the virtual machine.
       VZBootLoader is the abstract base class of boot loader definitions.
  
       Don't instantiate VZBootLoader directly, instead you should use one of its subclasses.
  
   @see VZLinuxBootLoader.
   */
  VZ_EXPORT API_AVAILABLE(macos(11.0))
  @interface VZBootLoader : NSObject <NSCopying>
  
  + (instancetype)new NS_UNAVAILABLE;
  - (instancetype)init NS_UNAVAILABLE;
  
  @end
  
  NS_ASSUME_NONNULL_END


I put a line break in your file path to prevent the comment from borking the page layout. Sorry. It's a bug we're working on.


Right? It doesn’t look like any of it is documented?


It's usually like that at first. It'll change later on.


I'm not holding my breath. I have recently been working with Core Bluetooth (years old), and Swift Package Manager (also long in the tooth), and the documentation for them is still pretty awful. SwiftUI is really bad. In fact, someone has even written a documentation sidecar app for it[0].

I'm spoiled, though. I used to have the full 6-volume set of Inside Macintosh[1].

They don't write 'em like they used to...

[0] https://apps.apple.com/us/app/a-companion-for-swiftui/id1485...

[1] https://en.wikipedia.org/wiki/Inside_Macintosh


Swift Package Manager at least is open source. Unfortunately, digging into the source to find out how to do some incredibly basic thing that any package manager obviously must support and they just haven't documented usually just reveals that they really haven't implemented it.


Hopefully.


So, you create a problem... and then you create a solution. Windows 10 with Terminal, winget, and WSL2 (with X Window support coming soon) beats macOS as a development platform.


Well, no. What’s the problem they created?


Windows has first-class support for Docker with WSL2. It has a package manager and a multi-tab terminal. Microsoft is also a huge contributor to open-source projects. Should I go on? Why would I pick a desktop tailored solely toward easing mobile development when I'm writing server software or using my laptop not for mobile computing? Not to mention, Windows has first-class touch screen support - unlike macOS!


It's fine that you prefer Windows, but it seems like you are posting in a thread about an Apple API just to tell everyone about your platform preference. I wish you would not attempt to derail an interesting discussion into a platform war.

"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents." - https://news.ycombinator.com/newsguidelines.html


I just gave a few examples of why Windows is becoming better for non-iOS developers.


The fact that it is very unlikely that you will be able to run Linux (or any other OS, Unix-based or otherwise) on a computer that you PAY for (potentially more than 1,500 dollars), and all you get is some "Boot Camp" spaghetti Windows on the hardware besides macOS. I'm sure the ones in the Apple cult will still pay for these ARM-based machines, but Apple really is shooting themselves in the foot with all this lockdown. And anybody who thinks this makes their platform "secure" or "less prone to attacks" is full of it. I'm kind of interested in seeing the clusterfuck this will turn into when Apple comes out with their own proprietary firmware for EVERYTHING on their glorified Apple Silicon.


Why so angry?

There will be some inconveniences, but the reign of desktop x86 can't come to an end fast enough.

> The fact that it is very unlikely that you will be able to run linux

The jury is still out. There's some problem with booting other operating systems natively (BIOS support?). If people can get that to boot up, Linux on ARM runs fine.

Apple has never supported Linux on Macs, and yet people have run it successfully, myself included.

Also note that many (maybe most?) people get Apple computers in order to run OSX. Linux users are a tiny minority - even Windows users are quite niche.


I’m not angry about anything. I was just offering my POV. My point was, given Apple’s recent history, they are going to lock down Apple computers even further now. Last I checked, you could not run anything other than what was “approved” by Apple because they GLUED an SSD to the motherboard. Now that they call the shots on the CPU (you know, the actual computer), they might require people to log in with their iCloud-crap account just to play Candy Crush on a computer. And not that I care about karma, downvote away. But you all need to chill.


> Last I checked, you could not run anything other than what was “approved” by Apple because they GLUED an ssd to a motherboard.

This isn’t really accurate at all.


Apple seems committed to supporting running Linux in a VM at least. Federighi said:

> We’re also introducing new virtualization technologies in macOS Big Sur. So for developers who want to run other environments like Linux or tools like Docker, we have you covered.


You’ve got to acknowledge that people have been saying this for 30+ years and it keeps working for them. If anything, the history of the company shows that when they use other people’s tech it burns them.

ESR wrote The Cathedral and the Bazaar how many years ago?


They have been relying on "other people's tech" like the Linux kernel, Apache Mesos, and now Kubernetes to power their own systems for decades now; it doesn't seem to burn them in the slightest. It's all about controlling users for them.


I guess I should have been specific and said hardware. Obviously Apple uses open-source software. In fact, I saw a claim on here that they are one of the world's largest upstream OSS contributors.


The parent comment you replied to was specifically talking about the software lockdown that Apple unnecessarily forces on their hardware, and will go even further down that road with their own chips. Being an open-source contributor and opposing user freedom are two different things.


All the rants and objections aside, isn’t it exciting that we get to see something new and complicated to tear apart and burn down? Because otherwise we’ll just be paying $250/yr for the new Core i2000-100006000086 for the coming decades.


Except that AMD is killing it lately: 7nm process, large L3 caches, chiplet design. Even better: everyone can get their hands on them. At least it seems that AMD's resurgence has already had the effect of Intel increasing core counts.

Mac on Apple CPUs is exciting, but Apple doesn't sell their CPUs or designs to third parties, so the impact is limited to Mac users.

Of course, it may have the side effect of making AArch64 more popular on servers as well.



