Hacker News | fzzzy's comments

- Reinforcement learning with verifiable rewards (RLVR): instead of using a grader model you use a domain that can be deterministically graded, such as math problems.
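A minimal sketch of what "deterministically graded" means in practice, assuming the model is prompted to end its completion with an "Answer: <value>" line (the prompt format and exact-match rule here are illustrative, not any particular lab's implementation):

```python
# Sketch of a verifiable reward for RLVR: grade a completion against a
# known ground-truth answer deterministically, with no grader model.

def extract_final_answer(completion: str) -> str:
    # Assumes the model is prompted to finish with "Answer: <value>".
    for line in reversed(completion.strip().splitlines()):
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return ""

def verifiable_reward(completion: str, ground_truth: str) -> float:
    # Binary reward: 1.0 for an exact match, 0.0 otherwise.
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

print(verifiable_reward("2 + 2 = 4\nAnswer: 4", "4"))  # 1.0
print(verifiable_reward("2 + 2 = 5\nAnswer: 5", "4"))  # 0.0
```

Because the reward is a pure function of the output, there is no grader model to drift or be gamed.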

Tulip mania is a mania because it short circuits thought.

They also have a vscode extension that compares with github copilot now, just so you know.

Gpt-5 is a “model router”

Yeah they are. llama.cpp has had good performance on cpu, amd, and apple metal for at least a year now.

The hardware is not the issue. It's the model architectures leading to cascading errors.

9 has been possible on that board for years now. No internal speaker but the headphone jack works.

When did that happen? Do you have a link to the exact CD image you used?

You can find a 9 image for it at macos9lives.com

How does it know the letters in the token?

It doesn't.

There's literally no mapping anywhere of the letters in a token.


There is a mapping. An internal, fully learned mapping that's derived from seeing misspellings and words spelled out letter by letter. Some models make it an explicit part of the training with subword regularization, but many don't.

It's hard to access that mapping though.

A typical LLM can semi-reliably spell common words out letter by letter - but it can't say how many of each are in a single word immediately.

But spelling the word out first and THEN counting the letters? That works just fine.
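The spell-then-count trick works because once the word is expanded letter by letter, the counting step is a trivial per-symbol tally rather than a question about opaque tokens. A toy sketch of the two steps in plain Python (no model involved, purely to illustrate the decomposition):

```python
from collections import Counter

def spell_out(word: str) -> str:
    # Step 1: expand the word letter by letter -- the step an LLM
    # can usually do semi-reliably from its learned mapping.
    return " ".join(word)

def count_letters(spelled: str) -> Counter:
    # Step 2: counting over the spelled-out form is a simple tally,
    # one symbol per item, with no tokenizer in the way.
    return Counter(spelled.split())

spelled = spell_out("strawberry")
print(spelled)                      # s t r a w b e r r y
print(count_letters(spelled)["r"])  # 3
```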


If it did frequency analysis then I would consider it having a PhD level intelligence, not just a PhD level of knowledge (like a dictionary).

How does denial of reality help you?

Calling people out is extremely satisfying.

You wouldn't know anything about it considering you've been wrong in all your accusations and predictions. Glad to see no-one takes you seriously anymore.

:eyes: Go back to the lesswrong comment section.

Running in cpu ram works fine. It’s not hard to build a machine with a terabyte of RAM.


Admittedly I've not tried running on system RAM often, but every time I've tried it's been abysmally slow (< 1 T/s) on something like KoboldCPP or ollama. Is there any particular method required to run them faster? Or is it just "get faster RAM"? I fully admit my DDR3 system has quite slow RAM...
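A large part of the slowdown is memory bandwidth: each generated token has to stream the model's active weights through RAM roughly once, so an upper bound on speed is bandwidth divided by model size. A back-of-envelope sketch (the bandwidth and model-size figures below are ballpark illustrations, not measurements):

```python
def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Rough upper bound: every generated token reads all active
    # weights from RAM once, so speed <= bandwidth / model size.
    return bandwidth_gb_s / model_size_gb

# Ballpark peak bandwidths (vary with channel count and speed):
ddr3_dual = 20.0   # GB/s, dual-channel DDR3
ddr5_dual = 80.0   # GB/s, dual-channel DDR5
model = 40.0       # GB, e.g. a large model at ~4-bit quantization

print(tokens_per_sec(ddr3_dual, model))  # 0.5
print(tokens_per_sec(ddr5_dual, model))  # 2.0
```

So on DDR3 the sub-1 T/s figure is about what the bandwidth ceiling predicts; faster or more RAM channels (or a smaller quantized model) is the main lever.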


How is a gpt browser useful for that?


