Pet peeve of mine: solving practical problems often involves capitalism in our system, but that doesn't mean practicality is capitalism. If you're writing ad code, you're doing work that exists only to serve capitalism. If you're writing railway logistics code, you're doing practical work that any society needs, capitalist or not. And yes, this might involve understanding customers (or, as a different system might call them, "people").
The important question would be "is this work useful," not "does this work involve capitalism."
They're adjusting the integration method (used to calculate positions given forces) to take into account the properties of the thermostat (used to maintain a roughly-constant temperature in the simulation). This allows bigger timesteps.
In particular, they studied the Langevin thermostat, in which all the atoms are subject to small random forces that smooth out their average temperature. By adding a term to the integrator that includes the magnitude of the random forces, it is possible to widen the stability bounds of the integration.
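For context (this is a generic illustration, not the paper's modified scheme), here is a minimal sketch of a standard BAOAB-style Langevin integrator step on a 1D harmonic potential. The friction coefficient gamma sets the magnitude of the random thermostat forces via fluctuation-dissipation, and the paper's idea, as I read it, is to fold knowledge of that noise term into the integrator's stability analysis. All names and parameters below are illustrative.

```python
import numpy as np

# Minimal sketch (NOT the paper's scheme): one BAOAB step of Langevin
# dynamics on a 1D harmonic potential U(x) = 0.5 * k * x**2.
# gamma is the friction coefficient; by fluctuation-dissipation the same
# gamma also fixes the size of the random "thermostat" kicks.

def baoab_step(x, v, dt, k=1.0, m=1.0, gamma=1.0, kT=1.0, rng=None):
    rng = rng or np.random.default_rng()
    force = lambda x: -k * x           # F = -dU/dx
    v += 0.5 * dt * force(x) / m       # B: half kick from the potential
    x += 0.5 * dt * v                  # A: half drift
    c = np.exp(-gamma * dt)            # O: exact Ornstein-Uhlenbeck update
    v = c * v + np.sqrt((1 - c**2) * kT / m) * rng.normal()
    x += 0.5 * dt * v                  # A: half drift
    v += 0.5 * dt * force(x) / m       # B: half kick
    return x, v
```

In this splitting, the noisy "O" step is handled exactly rather than by finite differences, which is one common way thermostat-aware integrators buy extra timestep headroom.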
The caveat is that the proof is only for simple potential energy surfaces. They haven't given a lot of evidence, even empirically, that this works for the potential energy surfaces we really care about, like protein binding calculations. We already have many empirical tricks for increasing timesteps for these simulations, like freezing bond lengths for hydrogen. Any new method has to beat these empirical methods, not just beat a naive approach.
Yeah, I've worked a lot with this and was curious about the same thing: the thermostat distribution is just one part of the problem. You also have a lot of short-range forces that explode if you increase the timestep too much, and I didn't see any tricks around that (except the usual ones, like freezing the fastest degrees of freedom, completely removing hydrogens with virtual sites, etc.).
This is a great example of why the high-level description of a work should be written by someone other than its author. Without the baggage of what was hard, fun, easy, or interesting (and the desire to sell or market), an outsider can focus on the importance and utility of the work.
Molecular dynamics simulations are usually O(N log(N)), because that's the time complexity of the Fast Fourier Transform, which allows computation of the N^2 charge interactions in less than N^2 time (e.g. Ewald Summation https://en.wikipedia.org/wiki/Ewald_summation).
For short-range interactions like Van der Waals forces, MD is O(N) like you said, because you only have to evaluate a finite radius around each atom. But charge interactions, decaying as 1/r, don't converge in any finite radius.
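To make the convergence point concrete, here's a toy numerical check (my own illustration, not from the article): summing pair energies around an atom at the origin of a simple cubic lattice out to a growing cutoff radius R. The 1/r^6, van-der-Waals-like sum settles down almost immediately, while the 1/r, Coulomb-like sum keeps growing roughly like R^2.

```python
import numpy as np

# Toy illustration: lattice sums of 1/r (Coulomb-like) vs 1/r^6
# (van-der-Waals-like) around the origin of a simple cubic lattice.
# The short-range sum converges inside a small cutoff; the 1/r sum
# grows without bound as the cutoff radius R increases.
for R in (5, 10, 20, 40):
    g = np.arange(-R, R + 1)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2).ravel()
    r = r[(r > 0) & (r <= R)]          # drop the origin, apply cutoff R
    print(f"R={R:3d}  sum 1/r = {np.sum(1/r):10.1f}   sum 1/r^6 = {np.sum(1/r**6):.4f}")
```

(In a real, charge-neutral system the Coulomb sum is conditionally convergent rather than divergent, which is exactly the situation Ewald summation is designed to handle.)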
I like the core description of programming, but the framing - that programming is tougher than manual labor - is childish. Life is not a competition for whose job is hardest, and if it were, no one on Hacker News would win it.
I found this title subtly wrong, and it took me a little while to realize why: my accent doesn't have the Mary–marry–merry merger, so "Be Wary" and "Be Merry" don't rhyme for me.
> Like all hypotheses, it cannot be proven true, only disproven by contradictory evidence.
This is true, but not helpful, because it's a framework that treats all probabilities except 0 and 1 as equivalently vague. More than 1℃ of anthropogenic climate change is extremely likely, with 99%+ probability. It could be wrong, but it's not useful skepticism to just say "it could be wrong" about everything. The IPCC is putting out good probabilistic forecasts. Scientists are doing the work of testing the hypothesis. As new evidence comes out, we should respond to it. In the present, we have to act using the probabilities we have.
"99% probability is nothing" is innumeracy. A demand for five sigma before taking action - treating probabilities between 0 and 99.9999% as equivalent - is defensible in a high-energy physics experiment, where adding another sigma just means collecting data for another month. But in most of life, demanding that level of certainty is a guarantee of mistakes: getting the extra certainty takes time, and inaction during that time is itself a decision that can be wrong.
I don't know if you consider chemistry a real science, but my employer makes decisions based on probabilities less certain than 99% every day! We accept that sometimes we will be wrong, and plan so we can tolerate that too. This is how you use science in the real world.
Breaking a cryptosystem is not a good metric for whether quantum computers have become useful. Calculating energy levels of molecules, for example, is a much easier problem in the near term, and is of benefit to humanity. Breaking a cryptosystem takes vastly more and higher-quality qubits, and the end result is just that everyone upgrades to different math.
As a computational chemist, I assign 50% probability to the idea that quantum computers will be able to do useful molecular calculations cheaper than classical computers (for large batches) within 18 years. For small batches, classical computers will remain cheapest much longer because of overhead.
Peloton has almost a billion dollars a year in revenue, and it's rising fast. I don't have a dog in this fight, but I don't see that a valuation of 8 billion is obviously wrong.
You can have infinite revenue if you sell dollar bills for $0.50. What matters is profits, not revenue, something the majority of these bullshit VC-funded startups overlook because they don't actually have a viable business model and hope for a miracle.