Hacker News

I'm not an expert in this, but...

> BNNs bring the following advantages over GPs: First, training large GPs is computationally expensive, and traditional training algorithms scale as the cube of the number of data points in the time series. In contrast, for a fixed width, training a BNN will often be approximately linear in the number of data points. Second, BNNs lend themselves better to GPU and TPU hardware acceleration than GP training operations.

If I'm not mistaken, Hilbert-space Gaussian processes (HSGPs) are O(mn + m) (where m is the number of basis functions, often something like m = 30, 60, or 100), which is also a huge improvement over conventional GPs' O(n^3). I know there are some constraints on HSGPs (e.g. they require stationary kernels, and they're not quite as accurate, flexible, or readily interpretable or tunable as conventional GPs), but what would be the argument for an AutoBNN over an HSGP? Is it mainly the lack of a need for domain-expert input?
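For anyone unfamiliar with the trick: an HSGP replaces the exact kernel matrix with a low-rank expansion in m Laplacian eigenfunctions, weighted by the kernel's spectral density, so you work with an n×m feature matrix instead of an n×n Gram matrix. A minimal NumPy sketch (1-D squared-exponential kernel; the function name, boundary L, and parameter values here are just illustrative, not from any particular library):

```python
import numpy as np

def hsgp_features(x, m, L, lengthscale, sigma):
    """Hilbert-space GP features on [-L, L]: K ~= F @ F.T with F of shape (n, m)."""
    j = np.arange(1, m + 1)
    sqrt_lam = j * np.pi / (2 * L)  # square roots of Laplacian eigenvalues on [-L, L]
    # Dirichlet eigenfunctions evaluated at the inputs, shape (n, m)
    phi = np.sqrt(1 / L) * np.sin(sqrt_lam * (x[:, None] + L))
    # spectral density of the 1-D squared-exponential kernel at those frequencies
    S = sigma**2 * np.sqrt(2 * np.pi) * lengthscale * np.exp(-0.5 * (sqrt_lam * lengthscale) ** 2)
    return phi * np.sqrt(S)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
F = hsgp_features(x, m=30, L=2.0, lengthscale=0.5, sigma=1.0)
K_approx = F @ F.T  # low-rank, O(n m^2) to form, vs O(n^3) exact-GP training
K_exact = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.5) ** 2)
print(np.max(np.abs(K_approx - K_exact)))  # small when m and L are chosen sensibly
```

The point is that once F is built, likelihood evaluations only ever touch the n×m features, which is where the roughly O(mn + m) per-evaluation cost comes from; accuracy degrades if L is too close to the data range or m is too small for the lengthscale.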


