
Along with what feels like billions of others right now, I'm building a pet project as a learning exercise involving RAG agents, MCP, and locally hosted LLMs, working against a 15-year-old pile of proprietary wiki data and a large 20-year-old codebase.
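
To make the retrieval half of a setup like this concrete, here's a toy sketch of the "retrieve relevant wiki snippets for a query" step, using plain word-count vectors instead of a real embedding model. The document snippets are hypothetical stand-ins for the wiki data:

```python
import math
from collections import Counter

def bag_of_words(text):
    """Lowercase and count words (a stand-in for a real embedding)."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical wiki snippets standing in for the real data
docs = [
    "the build server config lives in the deploy wiki page",
    "legacy auth module uses the old session token format",
    "deploy steps build then test then push to the build server",
]
print(retrieve("how do I deploy to the build server", docs, top_k=1))
```

In a real pipeline you'd swap the word-count vectors for embeddings from a local model and feed the retrieved snippets into the LLM's prompt, but the ranking logic is the same shape.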


What kind of hardware are you using to run the LLMs locally?


An RTX 5090 GPU in a system with 64 GB of RAM. I'm comfortably running 8B-parameter models.
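
A quick back-of-envelope check on why 8B models fit comfortably: weight memory is roughly parameter count times bytes per parameter (this ignores KV cache and runtime overhead, and the quantization levels shown are just common choices, not what the commenter necessarily uses):

```python
# Rough VRAM needed for model weights alone (excludes KV cache and overhead).
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

# An 8B-parameter model at common precisions:
for label, bytes_pp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_vram_gb(8, bytes_pp):.1f} GB")
```

Even at fp16 (~15 GB) the weights fit in a 5090's 32 GB of VRAM, and quantized variants leave plenty of headroom for context.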


Any pointers?



