zaptrem on Dec 1, 2022 | on: OpenAI ChatGPT: Optimizing language models for dia...
I'm able to run a 22B-parameter GPT-Neo model on my 24GB 3090, and can fit a 30B-parameter OPT model when combining my 3090 and my 12GB 3080.
mdda on Dec 1, 2022

Could you point to any resources online about how to do this? For example, is this using 8-bit quantisation?
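For reference, a minimal sketch of one common way to do this at the time, assuming Hugging Face transformers with accelerate and bitsandbytes installed; the model id, memory limits, and prompt are illustrative, and this is not necessarily the setup zaptrem used:

  # Sketch: load a large causal LM with int8 weights, sharded across two GPUs.
  # Assumes: pip install transformers accelerate bitsandbytes
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "facebook/opt-30b"  # illustrative; any causal LM on the Hub

  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      device_map="auto",        # accelerate places layers across available GPUs
      load_in_8bit=True,        # bitsandbytes int8 weights, ~1 byte per parameter
      max_memory={0: "22GiB", 1: "10GiB"},  # e.g. a 24GB 3090 plus a 12GB 3080, with headroom
  )

  inputs = tokenizer("The answer is", return_tensors="pt").to(0)
  output = model.generate(**inputs, max_new_tokens=32)
  print(tokenizer.decode(output[0], skip_special_tokens=True))

With int8 weights taking roughly one byte per parameter, a ~20B model fits (tightly) on a single 24GB card, while a 30B model has to be sharded across the two GPUs, which is broadly consistent with the numbers above.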