
> I'm curious why we seem convinced that this is a task that is possible or something worthy of investigation.

There's a huge industry around time series forecasting, used for all kinds of things like engineering, finance, climate science, etc., and many modern forecasting methods incorporate some kind of machine learning because they deal with very high-dimensional data. Given the very surprising success of LLMs in non-language fields, it seems reasonable that people would work on this.



Task-specific time series models, not time series “foundation models” - we are discussing different things.


I don't think we are. The premise here is that a foundation model can learn some kind of baseline ability to reason about forecasting that generalizes across different domains (each of which needs fine-tuning). I don't know if it will find anything, but LLMs totally surprised us, and this kind of thing seems totally worthy of investigation.
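To make that premise concrete, here is a minimal toy sketch (my own illustration, not from the thread and not any specific paper's method, assuming PyTorch): one small forecaster is "pretrained" on windows drawn from several synthetic domains, then briefly fine-tuned on a new domain. Every name and hyperparameter below is made up for illustration.

    # Toy sketch: pretrain one forecaster across several synthetic
    # "domains", then fine-tune it on a held-out domain.
    import torch
    import torch.nn as nn

    CONTEXT, HORIZON = 32, 8

    class Forecaster(nn.Module):
        # Simple MLP that maps a context window to a forecast horizon.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(CONTEXT, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, HORIZON),
            )
        def forward(self, x):
            return self.net(x)

    def make_series(freq, n=256):
        # One synthetic "domain": a sine wave of a given frequency plus noise.
        t = torch.arange(n, dtype=torch.float32)
        return torch.sin(freq * t) + 0.1 * torch.randn(n)

    def windows(series):
        # Slice a series into (context, horizon) training pairs.
        xs, ys = [], []
        for i in range(len(series) - CONTEXT - HORIZON):
            xs.append(series[i:i + CONTEXT])
            ys.append(series[i + CONTEXT:i + CONTEXT + HORIZON])
        return torch.stack(xs), torch.stack(ys)

    def train(model, series_list, steps=500, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        data = [windows(s) for s in series_list]
        for _ in range(steps):
            for x, y in data:
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(x), y)
                loss.backward()
                opt.step()

    model = Forecaster()
    # "Pretraining" across several domains (different frequencies).
    train(model, [make_series(f) for f in (0.05, 0.1, 0.2)])
    # "Fine-tuning" on a new domain with far fewer steps.
    train(model, [make_series(0.15)], steps=50)

The bet behind foundation models is just that the fine-tuning stage needs much less data and compute than training a per-domain model from scratch, because the pretrained weights already encode something general about forecasting.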


Foundation time series models have been around since 2019 and show performance competitive with task-specific models.

https://arxiv.org/abs/1905.10437



