The demo looks really appealing. I have a real-world use case in mind: analyzing an Excel file and asking questions about its contents. The current approach (https://github.com/pydantic/pydantic-ai/blob/main/mcp-run-py...) seems limited to running standalone scripts—it doesn't support reading and processing files. Is there an extension or workaround to enable file input and processing?
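One workaround I can imagine (just a sketch; I don't know whether mcp-run-python supports any of this directly) is to embed the file's bytes in the script you hand to the sandbox, assuming the sandbox can pull in a pure-Python wheel like openpyxl:

    import base64

    # Read the spreadsheet on the host; "report.xlsx" is a placeholder.
    with open("report.xlsx", "rb") as f:
        payload = base64.b64encode(f.read()).decode()

    # Generate the script the sandbox will actually execute. It carries
    # the file's bytes inline, since the sandbox can't see the host FS.
    sandbox_script = f'''
    import base64, io
    import openpyxl  # assumes the sandbox can install pure-Python wheels

    data = base64.b64decode("{payload}")
    wb = openpyxl.load_workbook(io.BytesIO(data))
    print(wb.sheetnames)
    '''
    # sandbox_script is what you would then submit to the sandbox to run.

Clunky for big files, but it sidesteps the "standalone scripts only" limitation.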
Crosvm (our original Google project) and its child projects, Firecracker and Cloud Hypervisor, are all built on top of "/dev/kvm", i.e. the Linux virtualization stack.
Apple's equivalent is the Virtualization framework, which exposes KVM-like functionality at a higher level.
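To make the "/dev/kvm" part concrete, here's a minimal sketch (Linux only, and assumes your user can open /dev/kvm) of the ioctl handshake that crosvm, Firecracker and Cloud Hypervisor all start from:

    import fcntl, os

    # KVM_GET_API_VERSION is _IO(0xAE, 0x00); 0xAE is the KVM ioctl namespace.
    KVM_GET_API_VERSION = 0xAE00

    fd = os.open("/dev/kvm", os.O_RDWR)
    try:
        # The stable KVM API has reported version 12 for many years.
        print(fcntl.ioctl(fd, KVM_GET_API_VERSION, 0))
    finally:
        os.close(fd)

Everything above this (creating VMs, vCPUs, mapping guest memory) is further ioctls on file descriptors the kernel hands back.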
I was interested in how this compares in a kind of absolute sense. For comparison, an optimized C hello world program gave these results using `perf` on my Dell XPS 13 laptop:
    0.000636230 seconds time elapsed
    0.000759000 seconds user
    0.000000000 seconds sys
That's 36,800% faster. Hand-written assembly was very slightly slower. Using the standard library for output instead of a syscall brought it down to 20,900% faster.
(Yes I used percentages to underscore how big the difference is. It's 368x and 209x respectively. That's huge.)
Begrudgingly, here are the standard Python numbers:
    real    0m0.019s
    user    0m0.015s
    sys     0m0.004s
About 1230% faster than the sandbox, i.e. 12.3x. About an order of magnitude, which is typical for these kinds of exercises.
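If you want to sanity-check those ratios against the C numbers above:

    # Back-of-envelope check using the timings quoted in this thread.
    c_time = 0.000636        # optimized C hello world, seconds (perf, elapsed)
    py_time = 0.019          # standard CPython, seconds (real)

    sandbox_time = c_time * 368        # ~0.234 s implied by the 368x figure
    print(sandbox_time / py_time)      # ~12.3, matching the 1230% claim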
I see. The way I would approach it is by running a client in a specific Python env on an incus instance, with the LLM hosted either on the host or on another, separate incus instance. Lately I've been addicted to sandboxing apps in incus, specifically for isolated VPN tunnels and automating certain web access.
I'm hoping some day to find a recipe I really like for running Python code in a WASM container directly inside Python. Here's the closest I've got, using wasmtime: https://til.simonwillison.net/webassembly/python-in-a-wasm-s...
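For anyone curious, a minimal sketch of that approach with the wasmtime Python package (assumes a WASI build of CPython on disk; "python-3.11.wasm" is a placeholder for whichever build you use):

    from wasmtime import Engine, Linker, Module, Store, WasiConfig

    engine = Engine()
    linker = Linker(engine)
    linker.define_wasi()  # wire up the WASI imports the interpreter expects

    # A WASI build of CPython, e.g. from a webassembly-language-runtimes release.
    module = Module.from_file(engine, "python-3.11.wasm")

    store = Store(engine)
    wasi = WasiConfig()
    wasi.argv = ["python", "-c", "print('hello from the sandbox')"]
    wasi.inherit_stdout()
    wasi.inherit_stderr()
    store.set_wasi(wasi)

    instance = linker.instantiate(store, module)
    # _start is the WASI entry point; calling it runs the interpreter.
    instance.exports(store)["_start"](store)

By default the guest sees no filesystem at all; WasiConfig's preopen_dir can hand specific directories in, which is also one answer to the Excel question upthread.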