Thanks for clarifying; for the record, I generally agree with you. I think we just disagree about the snippets and how in-depth they need to be. Our library is built on HF libraries (we don't implement the training code ourselves), which are popular and widely used by researchers, and people already know how to build good models with them. The package is simply meant to provide an easier interface for creating the complex multi-stage LLM workflows that are becoming common at ML research conferences, and to reduce boilerplate around common functions (caching, tokenizing).
But I hear you that it would be useful to also have some examples showing a proper, reliable model being trained with the library, rather than just toy models. The project is still pretty early, and we'll work on adding more examples.