
It would be bad for single-consumer-GPU inference setups.


Not an expert (no pun intended), but MoE where each expert is actually just a LoRA adapter on top of the base model gets me pretty excited. Since LoRA adapters can be swapped in and out at runtime, it might be possible to get decent performance without a lot of extra memory pressure.
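
To make the idea concrete, here's a minimal PyTorch sketch (illustrative names and a simple top-1 router, not any particular library's implementation) of a layer where the shared base weights stay frozen and each "expert" is just a low-rank (A, B) delta selected per token:

```python
# Sketch of "experts as LoRA deltas on a shared, frozen base layer".
# All names and shapes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpertLayer(nn.Module):
    def __init__(self, d_model: int, rank: int = 8, num_experts: int = 4):
        super().__init__()
        # Shared, frozen base projection (the dense weights everyone reuses).
        self.base = nn.Linear(d_model, d_model, bias=False)
        self.base.weight.requires_grad_(False)
        # Each "expert" is a low-rank (A, B) pair -- tiny compared to the base.
        self.A = nn.Parameter(torch.randn(num_experts, rank, d_model) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, d_model, rank))
        # Router scores each token and picks one expert (top-1 for simplicity).
        self.router = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        logits = self.router(x)                       # (batch, seq, num_experts)
        expert = logits.argmax(dim=-1)                # hard top-1 routing
        gate = F.softmax(logits, dim=-1).gather(-1, expert.unsqueeze(-1))
        # Gather the chosen expert's low-rank factors per token.
        A = self.A[expert]                            # (batch, seq, rank, d_model)
        B = self.B[expert]                            # (batch, seq, d_model, rank)
        delta = torch.einsum("bsdr,bsrk,bsk->bsd", B, A, x)  # B @ (A @ x)
        return self.base(x) + gate * delta

layer = LoRAExpertLayer(d_model=64)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```
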


While MoE-LoRAs are exciting in themselves, they are a very different pitch from full-on MoEs. If the idea behind MoEs is that you want completely separate layers to handle different parts of the input/computation, then it's unlikely you can get away with low-rank tweaks to an existing linear layer.
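
Rough parameter counts make the capacity gap concrete. The dimensions below are illustrative (roughly a 7B-class FFN), not taken from the thread:

```python
# Back-of-the-envelope: one full FFN expert vs. a LoRA delta on the same projections.
d_model, d_ff, rank = 4096, 11008, 16

full_expert = 3 * d_model * d_ff          # gate/up/down projections of one FFN expert
lora_delta = 3 * rank * (d_model + d_ff)  # low-rank (A, B) pairs for the same three projections

print(f"full FFN expert  : {full_expert / 1e6:6.1f}M params")
print(f"LoRA delta (r=16): {lora_delta / 1e6:6.1f}M params")
print(f"ratio            : {full_expert / lora_delta:.0f}x")
```
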


Could this work well with distributed solutions like Petals?

https://github.com/bigscience-workshop/petals

I don't understand how Petals can work, though. I thought LLMs were typically quite monolithic.


Petals does a layerwise split, I think. You could probably run separate experts on each system instead. I don't think this sort of tech is very promising, so I haven't looked closely.
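
For what it's worth, here's a toy sketch of the two partitioning strategies being contrasted. This is not the Petals API, just illustrative Python showing how layers vs. experts could be assigned to peers:

```python
# Illustrative only -- NOT the Petals API.
def layerwise_split(num_layers: int, hosts: list[str]) -> dict[str, list[int]]:
    """Pipeline-style split: each host serves a contiguous block of layers."""
    per_host = -(-num_layers // len(hosts))  # ceiling division
    return {h: list(range(i * per_host, min((i + 1) * per_host, num_layers)))
            for i, h in enumerate(hosts)}

def expertwise_split(num_experts: int, hosts: list[str]) -> dict[str, list[int]]:
    """Hypothetical alternative: each host holds a subset of the experts."""
    return {h: [e for e in range(num_experts) if e % len(hosts) == i]
            for i, h in enumerate(hosts)}

hosts = ["peer-a", "peer-b", "peer-c"]
print(layerwise_split(32, hosts))   # contiguous layer blocks per peer
print(expertwise_split(8, hosts))   # experts striped across peers
```
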


It could be good if the relevant expert(s) can be loaded on demand after reading the prompt? If the MoE is, say, 8x8B params, then you could get good speed out of a 12GB GPU, despite the model being 64B params in size. Or am I misunderstanding how this all works?
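
As a rough sanity check on those numbers (illustrative arithmetic only; it ignores shared attention weights, the KV cache, and activations):

```python
# Memory needed for all experts vs. only the active one, at common quantizations.
def memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 2**30

total_params_b = 8 * 8   # 8 experts x 8B params ~= 64B total
active_params_b = 8      # top-1 routing: one 8B expert resident at a time

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: all experts {memory_gb(total_params_b, bpp):6.1f} GB, "
          f"active expert {memory_gb(active_params_b, bpp):5.1f} GB")
```
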



