
Also if you run it twice, is it gonna be a carrot again?


It's subject to randomness, but you're ultimately in control of the LLM's hyperparameters -- temperature, top_p, and seed -- so you can get deterministic outputs if that's what you need. However, this kind of deterministic tweaking has downsides, because of the LLM's inherently autoregressive nature.
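
For example, here's a minimal sketch of pinning those knobs with the OpenAI Python client (the model name is a placeholder, and seed only buys best-effort reproducibility on their end):

  from openai import OpenAI

  client = OpenAI()
  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder model
      messages=[{"role": "user", "content": "Map each fruit to a vegetable: apple, banana"}],
      temperature=0,  # near-greedy decoding
      top_p=1,
      seed=42,        # best-effort reproducibility
  )
  print(resp.choices[0].message.content)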

For instance, with temperature 1 there *could be* a path that satisfies your instruction which otherwise gets missed. There's interesting work at the intersection of generative grammars and LLMs, where you cast the problem as an FSM/PDA and only sample tokens that stay inside the grammar (using something like logit_bias to suppress unwanted tokens and keep only those that can continue a valid derivation). You can define grammars with libraries like Lark or parsimonious, and this is how people solved JSON output with LLMs -- JSON is a formal grammar.
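
To make the grammar idea concrete, here's a toy sketch (no real model involved; a uniform sampler stands in for the LLM's logits) that can only ever emit strings matching '[' digits (',' digits)* ']':

  import random

  # States for the toy grammar: '[' digits (',' digits)* ']'
  def allowed_next(state):
      if state == "start":
          return ["["]
      if state == "item":   # must start a number
          return list("0123456789")
      if state == "more":   # extend the number, separate, or close
          return list("0123456789") + [",", "]"]
      return []             # "done": nothing allowed

  def step(state, tok):
      if state == "start":
          return "item"
      if state == "more" and tok == ",":
          return "item"
      if state == "more" and tok == "]":
          return "done"
      return "more"         # consumed a digit

  def sample():
      state, out = "start", []
      while state != "done":
          # A real decoder would mask the LLM's logits down to
          # allowed_next(state) (e.g. via logit_bias); we pick uniformly.
          tok = random.choice(allowed_next(state))
          out.append(tok)
          state = step(state, tok)
      return "".join(out)

  print(sample())  # always well-formed, e.g. [4,71,9]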

Contracts alleviate some of this through post-validation, *as long as* you can find a way to semantically encode your deterministic constraint.
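
A hedged sketch of what that can look like -- generate, the vegetable whitelist, and the retry policy are all placeholders:

  VEGETABLES = {"carrot", "potato", "spinach", "broccoli", "pea"}  # toy whitelist

  def contract(items):
      # The deterministic constraint, encoded as a checkable predicate.
      return all(x in VEGETABLES or x in {"cat", "dog"} for x in items)

  def map_with_contract(generate, prompt, retries=3):
      # generate: any callable wrapping an LLM call that returns a list.
      for _ in range(retries):
          result = generate(prompt)
          if contract(result):
              return result
      raise ValueError("output never satisfied the contract")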


Since these seem like short prompts, you can send data that was verified correct on past prompts as context, as in the sketch below.
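
Something like this, reusing previously-verified pairs as few-shot examples (the message format assumed here is the usual chat-completion shape):

  # Previously-verified input/output pairs become few-shot context.
  history = [
      {"role": "user", "content": "convert all fruits to vegetables: ['apple']"},
      {"role": "assistant", "content": "['carrot']"},  # verified on a past run
  ]
  messages = history + [
      {"role": "user", "content": "convert all fruits to vegetables: ['banana', 'cherry']"},
  ]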

You can create a test suite for your code that verifies results against another prompt or a dictionary check:

  t.test(
      Symbol(['apple', 'banana', 'cherry', 'cat', 'dog']).map('convert all fruits to vegetables'),
      "list only contains vegetables plus cat and dog"
  )
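
If you're wondering what a t.test like that could do under the hood, one hedged sketch (everything here is hypothetical, not from the library) is to hand the assertion to a second "judge" prompt or a plain dictionary check:

  def semantic_test(output, assertion, judge):
      # judge: any callable that sends a prompt to an LLM and returns text
      verdict = judge(f"Answer yes or no: does {output!r} satisfy '{assertion}'?")
      assert verdict.strip().lower().startswith("yes"), (output, assertion)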



