atschantz's comments | Hacker News

It's remarkable how much this still rings true today.

The idea that we should try and understand life as a battle against the second law of thermodynamics has had a significant influence on many of today's great thinkers - such as Friston [1] and Dennett [2], as well as countless others.

[1] https://royalsocietypublishing.org/doi/full/10.1098/rsif.201...

[2] https://www.youtube.com/watch?v=iJ1YxR8qNpY


I've heard this called the "every refrigerator is a miracle" theory.


I don't see anything about this phrase on Google Search.


Would phenomena like rocks crystallizing or stars forming also be a battle against the second law of thermodynamics?


Not sure.

Carl Friedrich von Weizsäcker, in his Aufbau der Physik, argues/shows that the 2nd law is often misunderstood as leading to "goo". His take is that this is not so; rather, it leads to skeletons.

So rocks crystallising and stars forming would actually be going with the flow of the 2nd law, rather than battling against it.


Stars die, so I'm not sure why your example includes stars forming -- that's not a steady state.

Still, rocks and crystals are not quite an end state either, in a non-expanding universe -- eventually they collide and crumble to dust.


Thermodynamics and heat dissipation also play a central role in Jeremy England’s work on abiogenesis.


I previously tried to sign up via a Google account but it required a 'company Google account' - I was slightly confused about the reasoning behind this.


It's designed for teams; Google auth is used to avoid having to manage invites etc. If you could sign in with any old Google account then it would introduce a class of problems around invites / user management / accidental duplicate teams.

Having said that, I do think it would be cool if signing in with your personal gmail account gave you a personal non-team wiki.


I used Notion for a significant period but ended up switching to Nuclino [1] - which is identical in many respects, but without the various add-ons that are unnecessary if you're working with text/images.

I've found it to be more responsive and, to my taste, it has a better UI. I'm not a big fan of the emoji/blank file image that every Notion entry requires.

[1] - https://www.nuclino.com/


I love Nuclino too. Their editor uses the wonderful ProseMirror (https://discuss.prosemirror.net/t/nuclino-a-lightweight-real...) by Marijn Haverbeke (https://marijnhaverbeke.nl/) of CodeMirror and Tern fame.


I mean it's difficult to 'observe' gradient descent; there are no characteristic properties you can identify without specifying the relevant objective function. But most of the process theories from computational neuroscience are based on some form of gradient descent. Even if it's only implicit, you'll be able to describe the variables of the system as moving against the gradient of some function.

But yes, it's extremely unlikely that nature implements backpropagation directly, as it relies on non-local gradients.
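
To make the 'gradient of some function' point concrete, here is a minimal, purely illustrative sketch (the objective is arbitrary and not tied to any particular neural model): the system's state simply follows dx/dt = -df/dx, so f decreases along the trajectory whether or not anything in the dynamics 'looks like' training.

    # Minimal sketch: a dynamical system whose variables move against the
    # gradient of an (arbitrary, illustrative) objective function f.
    import numpy as np

    def f(x):
        # hypothetical objective; any smooth function would do
        return 0.5 * np.sum(x ** 2) + np.sum(np.cos(x))

    def grad_f(x):
        # analytic gradient of f
        return x - np.sin(x)

    x = np.array([2.0, -1.5])
    for _ in range(100):
        x = x - 0.1 * grad_f(x)   # Euler step down the gradient
        # f(x) is non-increasing along this trajectory (for a small enough step)

    print(x, f(x))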


As a general answer, the theory suggests that organisms maximize a quantity known as model evidence, which is just a way of saying 'how much evidence does some data provide for my model of the world?'

There are two complementary ways to maximize this - change your model or change your world.

If we now grant that actions also maximize model evidence, then actions can either be taken to sample data that improve the model's fit (exploration), or to sample observations that are consistent with the current model (exploitation).
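
Here is a toy numerical illustration of the two routes (the two-state 'on/off' world, likelihoods and priors below are all made-up numbers, just to show how log evidence moves):

    # Toy sketch: log model evidence log p(o|m) for a hypothetical two-state
    # world ('on'/'off') with noisy binary observations. Numbers are invented.
    import numpy as np

    def log_evidence(observations, prior_on, p_obs_given_on=0.9, p_obs_given_off=0.1):
        # p(o|m) = sum_s p(o|s) p(s|m), accumulated over independent observations
        total = 0.0
        for o in observations:
            p_o = (p_obs_given_on if o else 1 - p_obs_given_on) * prior_on \
                + (p_obs_given_off if o else 1 - p_obs_given_off) * (1 - prior_on)
            total += np.log(p_o)
        return total

    data = [1, 1, 1, 0, 1]                        # the world mostly says 'on'
    print(log_evidence(data, prior_on=0.2))       # a poor model of this world
    print(log_evidence(data, prior_on=0.8))       # route 1: change your model
    print(log_evidence([0, 0, 0, 0, 0], 0.2))     # route 2: sample observations that fit the model

Both routes raise log p(o|m): the first changes the model to fit the world, the second changes which observations get sampled.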


And the optimization process itself would determine whether updating the model or changing the world is optimal, I guess. Thanks.


The idea is that you learn a model by calculating the derivative of free energy with respect to your model parameters.
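
As a concrete toy example (a linear-Gaussian model o = g*s + noise of my own choosing, not anything from the papers), learning the parameter g by descending dF/dg looks roughly like this:

    # Toy sketch: learning a generative-model parameter g by gradient descent
    # on variational free energy, for o = g*s + noise with a Gaussian prior on s.
    import numpy as np

    sigma_o2, sigma_s2 = 1.0, 1.0     # observation-noise and prior variances (assumed)
    g_true, g = 2.0, 0.5              # true and initial parameter
    rng = np.random.default_rng(0)

    for _ in range(2000):
        s = rng.normal(0.0, np.sqrt(sigma_s2))               # hidden cause
        o = g_true * s + rng.normal(0.0, np.sqrt(sigma_o2))  # observation

        # recognition density q(s) = N(mu, nu): here the exact Gaussian posterior
        nu = 1.0 / (g ** 2 / sigma_o2 + 1.0 / sigma_s2)
        mu = nu * g * o / sigma_o2

        # dF/dg for this model: (-(o - g*mu)*mu + g*nu) / sigma_o2
        dF_dg = (-(o - g * mu) * mu + g * nu) / sigma_o2
        g -= 0.01 * dF_dg                                    # descend free energy

    print(g)   # should drift toward g_true (here ~2.0)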


Yes, but you have to specify a generative model (or at least put boundaries on it). Then you learn the params of that model.

I was talking about learning the model structure also.


Some attempts have been made in the form of Bayesian model reduction [1].

The idea is to 'carve' out the structure of your model using free energy minimization.

[1] https://arxiv.org/abs/1805.07092


It's worth noting that 'free energy' is just the (negative) 'evidence lower bound' that is optimized by a large portion of today's machine learning algorithms (e.g. variational auto-encoders).

It's also worth noting that 'predictive coding' - a dominant paradigm in neuroscience - is a form of free energy minimization.

Moreover, free energy minimization (as predictive coding) approximates the backpropagation algorithm [1], but in a biologically plausible fashion. In fact, most biologically plausible deep learning approaches use some form of prediction error signal, and are therefore functionally akin to predictive coding.

Which is all just to say that the notion of free energy minimization is somewhat commonplace in both neuroscience and machine learning.

[1] https://www.ncbi.nlm.nih.gov/pubmed/28333583
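
For anyone who wants to see what that looks like in code, here is a minimal sketch of Gaussian predictive coding (linear mappings, unit precisions, made-up sizes and learning rates; an illustration, not any published scheme). Free energy reduces to a sum of squared prediction errors, inference is gradient descent on it with respect to the hidden state, and the weight update only needs the local product of a prediction error and an activity, which is the sense in which it is 'biologically plausible':

    # Minimal sketch of Gaussian predictive coding with local updates.
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_hidden = 4, 3
    W = 0.1 * rng.normal(size=(n_obs, n_hidden))   # generative weights (learned)
    mu_prior = np.zeros(n_hidden)                  # prior expectation on the hidden state

    def free_energy(o, x, W):
        eps_o = o - W @ x        # sensory prediction error
        eps_x = x - mu_prior     # prior prediction error
        return 0.5 * (eps_o @ eps_o + eps_x @ eps_x)

    for _ in range(500):
        o = rng.normal(size=n_obs)          # toy observation
        x = mu_prior.copy()
        for _ in range(20):                 # inference: gradient descent on F w.r.t. x
            eps_o = o - W @ x
            eps_x = x - mu_prior
            x += 0.1 * (W.T @ eps_o - eps_x)
        # learning: gradient descent on F w.r.t. W -- a local, Hebbian-like update
        W += 0.01 * np.outer(o - W @ x, x)

    print(free_energy(o, x, W))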


The particular study cited in the article is [1]; for a more general review of the links to reinforcement learning, see [2].

[1] https://www.biologicalpsychiatrycnni.org/article/S2451-9022(... [2] https://journals.plos.org/plosone/article?id=10.1371/journal...


Cheers.


In terms of the free energy 'principle', it makes no predictions about how free energy is minimized. But there have been multiple process theories suggested, most notably predictive coding (which is a dominant paradigm in neuroscience) [1] and variational message passing [2].

[1] https://en.wikipedia.org/wiki/Predictive_coding [2] http://www.jmlr.org/papers/volume6/winn05a/winn05a.pdf


Isn't variational message-passing the algorithmic-level theory about where predictive coding comes from?


I think you might be right; a quote from Friston on the relationship (in reference to belief propagation):

"We turn to the equivalent message passing for continuous variables, which transpires to be predictive coding [...]"

It could be that belief propagation applies in the context of discrete variables, whereas predictive coding applies in the context of continuous variables, both of which are forms of (variational) message passing.


Well, a significant portion of empirical neuroscience works under the assumption that parts of the brain operate according to a predictive coding scheme, and there are countless studies that support this notion.

As predictive coding is a form of free energy minimization (under Gaussian assumptions), this implicitly provides empirical evidence.

As for the request to test the idea on live neurons, see "In vitro neural networks minimise variational free energy" [1]

[1] https://www.biorxiv.org/content/early/2018/05/16/323550

