kaliqt on March 30, 2024 | on: Qwen1.5-MoE: Matching 7B Model Performance with 1/...
Something tells me that because image models are sufficiently small, it's easier to just keep your differently tuned models sitting side by side, so you can easily swap between them and run inference on whichever you need, rather than compiling them into one model.
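As a rough illustration of the side-by-side approach described above (all names and paths here are hypothetical, not from any real project), it could look like a small registry that lazily loads whichever fine-tuned checkpoint a request asks for, instead of merging the tunes into one model:

```python
# Hypothetical sketch: keep several small fine-tuned checkpoints side by side
# and swap between them at inference time, rather than compiling them into
# a single mixture-of-experts model.

class ModelRegistry:
    def __init__(self, checkpoints):
        self.checkpoints = checkpoints  # fine-tune name -> checkpoint path
        self.loaded = {}                # cache of already-loaded models

    def _load(self, path):
        # Stand-in for a real weight loader (e.g. reading a checkpoint file).
        return {"weights_from": path}

    def get(self, name):
        # Lazily load and cache the requested fine-tune; subsequent calls
        # for the same name reuse the cached model object.
        if name not in self.loaded:
            self.loaded[name] = self._load(self.checkpoints[name])
        return self.loaded[name]


registry = ModelRegistry({
    "anime": "/models/sd-anime.safetensors",
    "photo": "/models/sd-photo.safetensors",
})

model = registry.get("anime")  # loads only the checkpoint actually needed
```

Swapping is then just calling `get` with a different name; checkpoints that are never requested are never loaded, which is the appeal when each tuned model is small.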