This means that your VMs won't be able to get IP addresses on your LAN via DHCP. Kind of a fundamental omission for this kind of product I would think. Curious that it's not a high priority issue for them.
Also, what's the IPv6 story here? I didn't see anything in the docs addressing that, unless I'm missing something.
Maybe we have very different use cases, but I almost never use bridged networking. Typically what I want is all outgoing connections to be NATed through the host.
That said I agree, it should be an option. There are certainly use cases for it.
They're marketing this as a "mini-cloud". If it can only network through NAT, that eliminates all use cases where the VM instances would act as a server.
I'm a sysadmin, so I like tools like this to test out provisioning of servers with configuration management such as Ansible or Puppet.
Running tests at the end where I actually test the endpoints of the deployed services would be really nice to have, but impossible to do through NAT.
I guess that's a niche use case for this because Vagrant had the same issue for a long time, where setting up a bridged network was not possible or required some hacks.
> They're marketing this as a "mini-cloud". If it can only network through NAT, that eliminates all use cases where the VM instances would act as a server.
Think about this as if you want to carry around 3-4 hosts running different services and interacting with each other... on your laptop.
You don't really want bridged networking, because you connect to a different network and screw up the entire environment with different numbering, etc.
This way you can be on a plane without network access and still get work done, or carry a complete demonstration environment to a customer site, or...
For a DevOps tool this seems way more developer-oriented than ops-oriented.
This streamlines the use cases where I need a _local_ VM that I can access from my workstation; I don't need (or even want) a tool like this to generate VMs that are externally accessible.
Vagrant — as you pointed out — already does the thing you want it to do. It's more powerful, more flexible, and more complicated.
I'm missing something - even without bridged networking, the VMs should still be able to network with each other, and the VM host should also be able to reach each VM. So I don't see how the lack of bridged networking prevents you from testing the deployed VMs. Do you need to control the tests from somewhere outside the VM host?
NAT networking does not imply that the host running Multipass can access ports exposed on the VMs; quite the opposite. Host-only networking would imply that, but typical NAT in a virtual machine does not. Not saying it's not possible with Multipass, just saying it shouldn't be assumed.
Generally if you are on a router performing NAT, you have routes to the hosts behind the NAT. Whenever I've used VMs with NAT I've been able to interact with the NAT'd network from the actual hypervisor host.
Yes, VirtualBox is an exception, because it does its own weird NAT.
VMware:
> The host computer has an adapter on the NAT network (identical to the host-only adapter on the host-only network). This adapter allows the host and the virtual machines to communicate with each other for such purposes as file sharing. The NAT never forwards traffic from the host adapter.
Libvirt/KVM:
> By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.
Hyper-V lets you connect from host to NAT'd guests, though the documentation doesn't explicitly say this. Parallels works this way too. Xen is a weird one, because it doesn't really do the NAT itself; if you follow the Linux instructions it'll work the way I describe.
While I agree that some hypervisors act differently, my original comment stands: since at least one major hypervisor doesn't allow direct host access to NAT'd VMs, you can't assume it works without more context.
Either way, thanks for the research. I stopped after checking VMware.
At home I have a Hyper-V-based Ubuntu VM on a Windows workstation that runs almost all my LAN services. Not having bridged networking would be a complete deal breaker.
My primary desktop is an iMac. I built the Windows workstation to very good specs though and only use it for the occasional cross-platform development on Windows or gaming, so all the compute power is idle most of the time. Hosting services on it is a good use of the resources.
I haven't tested this, but I'm thinking that you could use cloud-init to configure IPv6 on the guest, as long as your host and Hyper-V configuration supports it.
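Something along these lines might be a starting point (completely untested; the 2001:db8:: addresses and the eth0 interface name are placeholders, and it assumes the Hyper-V switch actually routes IPv6):

    # ipv6.yaml -- cloud-init user data (the file must start with the #cloud-config line)
    #cloud-config
    runcmd:
      - ip -6 addr add 2001:db8::10/64 dev eth0
      - ip -6 route add default via 2001:db8::1

    # then launch a guest with it:
    multipass launch --name ipv6-test --cloud-init ipv6.yaml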
Since the target audience is workstations, using bridging as a default is a monumentally bad idea. The proliferation of easy-to-use tools to host and run instances on workstations has been a huge boon for engineers, but it comes at a cost - those instances need to be maintained, or they become a source of vulnerabilities.
By forcing users to be intentional about how those instances are exposed to inbound network traffic and the internet in general, it drastically reduces the attack surface of virtual machines on the workstation. It would be great if there were an option for it, though (I'm not familiar enough with this tool to know if there is).
This actually looks great, installing it now to try out.
But the title of this HN post doesn't really explain what it is, since a "mini-cloud" implies a lot more than just Ubuntu VMs. The actual headline of the target page is: "Instant Ubuntu VMs", with a subtitle of: "A mini-cloud on your Mac or Windows workstation." And the title of the page is: "Multipass orchestrates virtual Ubuntu instances" which makes much more sense.
This really is great - after installing it's as simple as "multipass launch" to create and start a new instance, and then "multipass shell" to get a shell prompt. In the background it uses the native Hyper-V hypervisor to run the VMs.
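For the curious, the basic flow is roughly (the instance name is whatever you pick):

    multipass launch --name demo   # create and start a fresh Ubuntu LTS instance
    multipass shell demo           # get a shell prompt inside it
    multipass list                 # see what's running
    multipass delete demo --purge  # throw it away when done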
I've been playing around with it for a bit, but I don't really see what it has to offer that Vagrant doesn't already do.
"multipass launch" and "multipass shell" do the same as "vagrant up" and "vagrant ssh".
Vagrant has been around since 2010 and is super mature by now. Multipass seems to be limited to LTS Ubuntu releases, for now at least. There are Vagrant boxes not just for all Ubuntu releases but also for Debian, CentOS, or whatever else you would want to run.
The advantage over Vagrant is that it's a lot less complex. I've tried Vagrant on Windows before and really struggled with it, when all I wanted was a quick and easy way to launch lots of VMs.
> after installing it's as simple as "multipass launch" to create and start a new instance, and then "multipass shell" to get a shell prompt. In the background it uses the native Hyper-V hypervisor to run the VMs.
I appreciate the Fifth Element reference, but what does running a mini-cloud have to do with an identification card? Shouldn't 'Multipass' be the name of some kind of OAuth library or something? Not a great name for a personal cloud.
Or is it 'MultiPASS, multiple platforms as a service'? That still doesn't make sense; presumably this is a personal PaaS, since it isn't multiple platforms as a service - it's a single platform-as-a-service provider, different from other PaaS providers in that you can run it on your laptop.
So besides the fact that this tool has exactly the same "problems" you describe in the first two points (see another subthread for macOS vs Linux hosts running this tool - different images are available; and you're just trusting that "bionic" is a "solid, reliable" image to use), the rest of your complaint is: "I don't want a file that defines my one- or 100-VM setup so I can commit it to the project. I want to run a command and define the same attributes over and over again every time any person on the project needs to use it."
Also I don’t know why you mentioned “vagrant destroy” and then went on about “toil through your provisioning”.
If you want to use cloud-init, there is a Vagrant provisioner that supports it. If you don't, you can use another one like shell scripts or Chef or Salt or Puppet or Ansible or whatever.
If you destroy the machine, the provisioning will need to run again - yes. But why would you destroy the machine if you don't want it to start from scratch? And what toil is there? Once the provisioners are defined, you just let them run - how is that different from a cloud-init-provisioned VM?
My main point is simplicity wins. No need to mess with providers and plugins and tooling/provider/plugin specific provisioning logistics.
Cloud-init is becoming the de facto machine-provisioning format; it's great to be able to hack on it locally and get the same results elsewhere. Sharing it is great as well: install Multipass and point it at the cloud-config. Done.
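Concretely, that's just (assuming a cloud-config.yaml you've already written; the instance name is arbitrary):

    multipass launch --name web --cloud-init cloud-config.yaml
    multipass exec web -- cloud-init status --wait   # block until provisioning finishes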
> If you destroy the machine, the provisioning will need to run again - yes. But why would you destroy the machine if you don't want it to start from scratch? And what toil is there? Once the provisioners are defined, you just let them run - how is that different from a cloud-init-provisioned VM?
Fair enough! I agree with you here; it was probably a bad example on my part.
> Once the provisioners are defined, you just let them run - how is that different from a cloud-init-provisioned VM?
Which provisioner for which provider? And what plugins does it need? cloud-init is just simpler and more portable to use, IMO.
Not true. The moment you need to do anything too complex for a simplistic tool, simplicity loses. Multipass doesn't do CentOS, for example. Therefore, for some of my use cases, Multipass's simplicity loses.
Furthermore, with multipass, as soon as you need to do anything beyond launching a default image, it's no longer simple. It's just as complicated as Vagrant. Someone has to write your cloud-config.yaml file, just as someone has to write an Ansible playbook.
Multipass on Linux does CentOS and any other cloud-init enabled image.
It currently does not support this on macOS for some reason, which really sucks! Hopefully it will eventually.
I also want to say that I enjoy using Vagrant as well. It certainly has its advantages for certain use cases. For my most common personal use case, I prefer Multipass. That's all! Glad to see there are multiple options continuing to evolve in this space!
Completely agree. I do not see any advantage here.
To add: Virtualbox is supported on Windows/Mac/Linux, is trivial to install, and is the default Vagrant provider. If I want top performance, I can choose other options. Vagrant+Parallels on Mac is nice. So is Vagrant+KVM on Linux.
A Vagrantfile is 4 lines for a basic config, and vagrant will create one for you, and your config is saved should you need to edit it. It's trivially easy. Multipass has the same needs the moment you need anything beyond the defaults. So I'd call that a wash.
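For comparison, the whole Vagrant flow is just (the box name below is only an example):

    vagrant init ubuntu/bionic64   # writes a minimal Vagrantfile for you
    vagrant up                     # create and boot the VM
    vagrant ssh                    # shell into it
    vagrant destroy                # tear it down when done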
How is "vagrant up" harder than "multipass launch"?
What does "vagrant-specific provisioning" means exactly? Vagrant provisions with the same toolchain you are probably using for your production servers. It supports ansible, chef, and puppet. How is that "vagrant-specific"?
When I want to launch a development vm, I want it provisioned in specific ways that match the production instances I will eventually be launching. Vagrant+Ansible gives me that in spades, and it's easy.
The Hashicorp toolchain and ecosystem also provides other advantages: Packer for easily building my own base images, and targeting them at multiple hypervisors, including EC2. Not to mention that all distros are supported. Packer also answers the GP's provisioning complaint. Provision a VM, and use it as the basis for a new box in Packer. Problem solved.
This reminds me of people who like MySQL "because it's easier than Postgres." Not easier. Just not the same.
I tend to agree; Vagrant offers way better configuration possibilities and flexibility. However, there is one significant benefit: on macOS, Multipass uses hyperkit/xhyve to run the VM, which in my experience performs _much_ better than VirtualBox ("Docker for Mac" also uses hyperkit). I'd love to see a Vagrant hyperkit provider (or even a multipass provider! :-), but sadly I have found nothing beyond a few alpha-level sketches so far.
This is true. Virtualbox is a pig pretty much everywhere.
Veertu used to ship Veertu Desktop (using Hypervisor.framework, and able to ship via the App Store) and claimed to be supporting Vagrant at some point, but then they changed their approach completely and now focus purely on virtualising macOS for CI build environments.
Personally I always use either VMware Fusion or Parallels with vagrant anyway (well unless I'm debugging some weird vagrant/vbox issue for someone else).
It'd be nice to have an actively maintained provider that's backed by the built-in framework, but the only one I'm aware of stopped development 3+ years ago.
Great to see this project maturing! I love testing out random command-line utilities and programs with this. IMO, this is a much nicer tool for evaluating CLI tools than Docker containers, because it's a full VM and behaves a lot more like my laptop than a Docker container does.
Docker containers can be as much like your laptop as you wish them to be in terms of userland. You can install all the usual stuff present on your laptop into the container.
Currently, you can only use WSL1 in Windows. WSL1 is a syscall-translation layer and has way too few features compared to a proper virtual machine. WSL2 is currently available only through the Windows Insider Program, which is something that you will NOT set up on a work machine.
WSL2 is still a work in progress. It is currently missing many features, and the provided Linux kernel does not include many kernel modules (several for networking are missing).
For example, WSL2 currently does not support bridge networking.
In my experience, WSL1 is also very, very slow and cannot run all programs. A while ago, no Haskell programs would run, for instance. Not sure about the current status though.
Another limitation of WSL1 is that it cannot run Docker. And so on.
It still might be an OK way to obtain *nix-based software, and it can «see» your Windows directory tree without any mounting/configuration. And no VM is needed.
I'm running Multipass alongside WSL2 with no ill effects (other than my not being able to ping/SSH directly to Multipass instances from inside WSL2, which is probably due to their being on a separate HyperV "switch"), and I spotted three things right off the bat:
* WSL2 sees all my CPU cores (Multipass might be able to do that, but I haven't figured out how to yet)
* Home directory integration seems to be there, but requires a manual mount (might be just me, I am running this on an Insiders build)
* I was able to run multipass.exe from inside WSL2 and get a shell to its instances without even thinking twice about it :)
I do use WSL1 on my main machine (and have done so for a long while without any significant issues), and might set up Multipass on it and my Macs to work with Docker containers without the slow, pokey Docker Desktop UI (all I really need is a quick way to get a VM running on any of the native hypervisors, and I'm good).
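Untested sketch of what I have in mind (the instance name is arbitrary, and snap vs. apt for Docker is just one choice):

    multipass launch --name dockerhost
    multipass exec dockerhost -- sudo snap install docker
    multipass exec dockerhost -- sudo docker run --rm hello-world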
Is there something comparable to this that is offered as a native $distro (Debian in particular) package, rather than a Snap? I'm slightly allergic to running parallel package managers.
Looks fairly comparable, but the writeup at https://docs.cumulusnetworks.com/cumulus-vx/Development-Envi... makes me think it might not quite treat "standard Linux-world solutions" as first class citizens. (Not to mention "vagrant plugin install ..." also looks a lot like a parallel package manager.)
I’ve used Multipass extensively and I can honestly say, while it’s a great start, it’s not even remotely ready for prime time on MacOS. I have lost tons of VMs that get stuck shutting down and are never able to come back up due to corruption. Still, it’s worth checking back in at a later date. Docker continues to be my go-to for now.
Multipass was the only way I could get a local k8s install that didn't shred my laptop. Admittedly I/O perf was bad, but I would build containers on the host and put them in a spot where containerd could pick them up in the VM. Worked pretty well, but a bit hairy to set up.
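Roughly, from memory, one way to do that kind of handoff looks like this (a rough sketch, not necessarily exactly my setup; instance and image names are placeholders):

    docker save -o myapp.tar myapp:dev                    # build/export the image on the host
    multipass transfer myapp.tar k8s-vm:/tmp/myapp.tar    # copy it into the instance
    multipass exec k8s-vm -- sudo ctr -n k8s.io images import /tmp/myapp.tar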
If by chance someone is looking for a VM management software around hyperkit, I've been working on https://github.com/bensallen/hkmgr in my spare time.
On MacOS, Multipass uses the native VM software provided by the operating system. And you use Multipass to launch Ubuntu virtual machines. I do not think that Multipass would be able to directly launch custom qcow2 images.
However, your question is actually whether on MacOS you can have nested virtualization. Because if MacOS (and Multipass) support it, then this is what you need.
> I do not think that Multipass would be able to directly launch custom qcow2 images.
It absolutely does on Linux (which uses the QEMU driver). <url> is a custom image URL in http://, https://, or file:// format.
As long as the image is a "cloud" image with cloud-init installed, it works fine. I tested this with a Fedora image on Linux.
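For example (the path below is just a placeholder; any cloud image with cloud-init baked in should work):

    # on a Linux host with the qemu backend
    multipass launch file:///var/tmp/Fedora-Cloud-Base-30.qcow2 --name fedora
    multipass shell fedora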
> However, your question is actually whether on MacOS you can have nested virtualization. Because if MacOS (and Multipass) support it, then this is what you need.
This was not my question, but I don't believe nested virt is supported on macOS with Multipass. Someone correct me if I'm wrong, but I think nested virt is only available via VMware Fusion on macOS; Multipass uses Hypervisor.framework via hyperkit.
Right, "multipass find" only lists the curated ubuntu LTS images on macos. Attempting to launch a qcow2 or img on macos shows "launch failed: http and file based images are not supported" which leads back to my original question.
multipass supports two hard-coded remotes, `release:` and `snapcraft:`. It should be feasible to launch any of the VM images from LXD (i.e. https://us.images.linuxcontainers.org/), as long as there were a way to add a new _remote_.
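For reference, the existing remotes are used roughly like this (from memory, so the exact syntax may be slightly off):

    multipass find                     # images from the default (release:) remote
    multipass find snapcraft:          # images from the snapcraft: remote
    multipass launch release:bionic    # remote:image syntax at launch time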
This is like what Docker Desktop did for VirtualBox and docker-machine. Lean UX, faster hypervisor. I don't need the complexity of Vagrant when one command gets me what I need - a fresh Ubuntu VM, with or without my custom cloud-init.
I just don't get what's closed source and what's open, or why they had to write it in C++.
Don't containers only use one process?
I.e. you would need some orchestration tool to use nginx and python at the same time for example when using containers.
Whereas this would allow both to run on the same instance due to being a VM?
Am I conceptualizing the differences correctly?
How does this compare to WSL 2 on Windows?
Launching WSL-based Linux distros is a breeze, and they are lightweight due to the lack of full virtualization overhead. Since these use Hyper-V/VBox, I'm guessing they would be heavier.
Looks pretty nice and reminds me of Docker Desktop. Are there any plans to make it easy to consume other, non-Ubuntu VMs? We built something similar for Mac for Bitnami VMs, and it would be great to extend it to Windows and Linux.
Depends what you are doing. Docker is just containers, so it shares the host kernel and isn't quite as secure as a VM. This actually spins up guest VMs running their own kernel, isolated via a hypervisor.
If you are running anything that requires kernel changes, containers won't really work (there are ways around it, but I feel they're hacky; I could be convinced otherwise, however). If you're running potentially evil software, a VM is also much more the way to go.
This runs on Mac and Windows, where you cannot run lxc containers (although I understand now that Docker supports some sort of native Windows containerization).
Docker Desktop for Mac and Windows is not free software; it is proprietary and closed source. I am glad to see something like this (which is free software) being made available as an alternative.
You get a virtual machine, so you can build and run untrusted software. If you have very specific old software (for example, something that runs on Ubuntu 12.04 LTS), then you can install LXD inside a Multipass VM, and from LXD create a system container with Ubuntu 12.04.
While Multipass supports LTS versions of Ubuntu and recent development versions of Ubuntu, with LXD you get container images for many more versions and other distributions. See the list at https://us.images.linuxcontainers.org/
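A rough sketch of that nesting (untested here; the exact 12.04 image alias on the image server may differ, so check the list first):

    multipass launch --name lxd-host
    multipass shell lxd-host
    # then, inside the VM:
    sudo lxd init --auto
    sudo lxc launch ubuntu:12.04 legacy
    sudo lxc exec legacy -- bash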
Make sure your hypervisor is up to date. It's uncommon, but vulnerabilities that allow applications in the guest VM to escape the hypervisor and infect the host do happen sometimes. For the most part, as long as your hypervisor is up to date, this would be a great way for you to run untrusted software. Just keep in mind that the "primary" instance gets special connections to the host, which can include file sharing, hardware sharing, etc. Run untrusted software in an instance without those things for increased safety.
At first glance it seems to be much simpler and less complex, and therefore has fewer features, than Vagrant. Depending on your requirements, Vagrant may be a better option, but for me Multipass looks great.
https://github.com/canonical/multipass/issues/118