
They definitely won't idle out: if they idled out, it would take on the order of 60 seconds to load the model back into VRAM, depending on the model.

That's an eternity for a request. I highly doubt they will timeout any model they serve.



  > That's an eternity for a request. I highly doubt they will timeout any model they serve.
That's what easing functions are for.

Let's say 10 GPUs are in use. You keep another 3 with the model loaded. If demand increases slowly you slowly increase your headroom. If demand increases rapidly, you also increase rapidly.

The correct way to do this is more complicated and you should model it on your usage history, but if you have sufficient headroom then very few GPUs should be left idle. Remember that these models serve requests in batches.
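Something like this toy headroom policy, to make it concrete (made-up constants and a hypothetical scale-out primitive, not anyone's real autoscaler):

  # Toy demand-proportional headroom with a simple "easing" term.
  # The 0.3 and 5 are illustrative knobs you'd tune against usage history.
  def target_replicas(current_demand: int, growth_rate: float) -> int:
      """Keep warm instances proportional to demand, eased by recent growth."""
      base_headroom = max(1, round(0.3 * current_demand))   # e.g. 10 busy -> 3 warm
      # If demand is ramping fast, pre-warm more aggressively.
      surge_headroom = max(0, round(growth_rate * 5))
      return current_demand + base_headroom + surge_headroom

  # Example: 10 GPUs busy, demand growing ~0.6 GPUs/min -> keep ~16 loaded.
  print(target_replicas(10, 0.6))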

If they don't timeout models, they're throwing money down the drain. Though that wouldn't be uncommon.


That's only if you're expecting 10 GPUs in use. They're dealing with ~1 GPU in use for a model, just sitting there. Alibaba has a very long tail of old models that barely anyone uses anymore, and yet they still serve them.

Here's a quote from the paper above:

> Given a list of M models to be served, our goal is to minimize the number of GPU instances N required to meet the SLOs for all models through auto-scaling, thus maximizing resource usage. The strawman strategy, i.e., no auto-scaling at all, reserves at least one dedicated instance for each model, leading to N = O(M)

For example, Qwen2 72B is rarely used these days. And yet it will take up 2 of their H20 GPUs (96 GB VRAM each) to serve, at the bare minimum, assuming they don't quantize the BF16 down to FP8 (and I don't think they would, although other providers probably would). And then there are other older models, like the Qwen 2.5, Qwen 2, Qwen 1.5, and Qwen 1 series models. They all take up VRAM if the endpoint is active!
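Rough napkin math on why it needs two cards, weights only, before you even count KV cache and activations:

  # Back-of-the-envelope weight memory for a 72B-parameter model in BF16.
  params = 72e9
  bytes_per_param = 2                    # BF16
  weights_gb = params * bytes_per_param / 1e9
  print(weights_gb)                      # ~144 GB of weights alone
  print(weights_gb / 96)                 # > 1 H20 (96 GB), so at least 2 GPUs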

Alibaba cannot easily time these models out of VRAM, even if they only get one request per hour.

That's the issue. Their backlog of models takes up a large amount of VRAM, and yet gets ZERO compute most of the time! You can easily use an easing function to scale up from 2 GPUs to 200 GPUs, but you cannot ever time out the last 2 GPUs that are serving the model.

If you read the paper linked above, it's actually quite interesting how Alibaba goes and solves this problem.

DeepSeek, on the other hand, solves the issue by just saying "fuck you, we're serving only our latest model and you can deal with it". They're pretty pragmatic about it at least.


The thundering herd breaks this scheme.


If I had to handle this problem, I'd do some kind of "split on existing loaded GPUs" for new sessions, and then when some cap is hit, spool an additional GPU in the background and transfer new sessions to that GPU as soon as the model is loaded.

I'd have to play with the configuration and load calcs, but I'm sure there's a low-param, neat solution to the request/service problem.
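Something like this, roughly (Gpu, SESSION_CAP, and spool_gpu_async are made up for illustration, not a real API):

  # Sketch of "pack sessions onto loaded GPUs, spool another one past a cap".
  from dataclasses import dataclass, field

  SESSION_CAP = 32  # max concurrent sessions per loaded GPU (illustrative)

  @dataclass
  class Gpu:
      name: str
      sessions: list = field(default_factory=list)

  def route_session(session, loaded_gpus, spool_gpu_async):
      # Prefer the least-loaded GPU that already has the model in VRAM.
      gpu = min(loaded_gpus, key=lambda g: len(g.sessions))
      gpu.sessions.append(session)
      # If everything is near the cap, warm another GPU in the background;
      # new sessions land on it once the model finishes loading.
      if all(len(g.sessions) >= SESSION_CAP for g in loaded_gpus):
          spool_gpu_async(on_ready=loaded_gpus.append)
      return gpu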


Why does it take 60 seconds to load data from RAM to VRAM? Shouldn't the PCIe bandwidth allow it to load fully in a few seconds?
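Napkin math for what I'd expect (assuming ~144 GB of BF16 weights and a PCIe 4.0 x16 link at roughly 32 GB/s usable):

  # Naive transfer-time estimate; real numbers are worse due to host paging,
  # deserialization, and allocator overhead.
  weights_gb = 144            # e.g. a 72B model in BF16
  pcie4_x16_gbps = 32         # rough practical bandwidth, GB/s
  print(weights_gb / pcie4_x16_gbps)   # ~4.5 s per GPU in the ideal case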


Because ML infra is bloatware beyond belief.

If it was engineered right, it would take:

- transfer model weights from NVMe drive/RAM to GPU via PCIe

- upload tiny precompiled code to GPU

- run it with tiny CPU host code

But what you get instead is gigabytes of PyTorch + Nvidia docker container bloatware (hi Nvidia NeMo) that takes forever to start.
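The lean path might look something like this (assuming the weights sit on local NVMe in safetensors format; the file name is made up):

  # Sketch of the "just copy bytes over PCIe" path. Skips the framework's
  # graph/compile machinery; the kernel side would be prebuilt.
  import time
  from safetensors.torch import load_file  # memory-maps and copies tensors directly

  start = time.time()
  # Hypothetical shard name; load_file can place tensors straight onto the GPU.
  weights = load_file("model-00001-of-00002.safetensors", device="cuda:0")
  print(f"loaded {sum(t.numel() for t in weights.values())/1e9:.1f}B params "
        f"in {time.time() - start:.1f}s")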


That's why DeepSeek only serves two models.



