When I'm explaining logarithms, I find it helps to relate them to the number of digits. This code is a good example of the concept: you don't need log, just convert the int to a string and check its length. A string with 1-3 digits is bytes, 4-6 is kb, etc.
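A minimal sketch of the digit-count trick in Python (the function name and unit labels are mine, just for illustration):

```python
def human_size(n: int) -> str:
    """Approximate a byte count's magnitude from its digit count.

    Every 3 decimal digits is roughly a factor of 1000, so the
    digit count stands in for log10(n) without calling math.log.
    """
    units = ["bytes", "kB", "MB", "GB", "TB"]
    digits = len(str(max(n, 1)))           # digit count ~ floor(log10(n)) + 1
    idx = min((digits - 1) // 3, len(units) - 1)
    return f"{n / 1000 ** idx:.1f} {units[idx]}"

print(human_size(123))        # 1-3 digits -> bytes
print(human_size(45_600))     # 4-6 digits -> kB
print(human_size(7_890_000))  # 7-9 digits -> MB
```

Note this uses decimal (1000-based) units; for 1024-based units you'd need the bit length instead of the decimal digit count.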
There's a good video on this event, by [LindyBeige](https://www.youtube.com/watch?v=s6QhW5S8Gk4). Their videos are very informative and entertaining, but very focused on Britain.
These "statistics" are not credible. More than 0.1% of all Americans (340k) have already died of COVID-19. If the IFR were 0.15-0.2%, more than half of the population would have been infected already and the spread would be pretty much over. Instead, seroprevalence shows that about 20% of Americans have been infected, meaning we have 600k or so deaths to go in the absence of controls.
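The arithmetic behind that claim is simple enough to check directly (using the death and population figures stated above):

```python
# Back-of-the-envelope check of the claimed IFR against observed deaths.
deaths = 340_000          # US COVID-19 deaths, as stated above
population = 340_000_000  # approximate US population

for ifr in (0.0015, 0.002):  # the article's claimed IFR of 0.15-0.2%
    implied_infected = deaths / ifr  # deaths = infections * IFR
    share = implied_infected / population
    print(f"IFR {ifr:.2%}: implies {share:.0%} of the population already infected")
```

At an IFR of 0.2% the observed deaths imply 50% of the population has been infected, and at 0.15% about two-thirds, both wildly inconsistent with ~20% seroprevalence.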
As for "99.96% of nearly everyone who gets the virus lives" this is weaseling even given their low IFR estimates. It deliberately excludes those most likely to die! This doesn't look like good faith. An article about the harms of isolation doesn't have to, and shouldn't, minimize the facts in this way.
Scientists can verify that an AlphaFold-predicted structure is correct, or at least useful, without being able to get the structure experimentally. For instance, we could use the AlphaFold-predicted structure to do protein-ligand binding calculations for a bunch of known molecules. If these calculations agree with experimental protein-ligand binding (which they generally do for proteins with known structures), then we can say with high confidence that we've got a good structure.
The way computer scientists do it, yes, it is. In the CS situation you define an energy function (in this case representing the physical behavior of the protein in water) and find a heuristic to approximate the coordinates of the lowest energy configuration; done, problem solved.
In reality, that's not how it works at all. The energy functions we have are crappy and require too much sampling before we can find the lowest energy configuration. And more importantly, it doesn't look like proteins typically fold to their lowest energy configuration (with the exception of some small fast two-state folders), but rather explore a kinetically accessible region around there (or even somewhere else entirely, if the energy cost to transition is too high).
Methods like AF depend heavily on large amounts of correlated information from evolutionary data, which has historically been the most valuable signal for inferring protein structure.
The most accurate technique in computational drug discovery is protein-ligand binding prediction (https://blogs.sciencemag.org/pipeline/archives/2015/02/23/is...). Given the protein structure, you can predict which molecules will bind with it, even for molecules which have never been synthesized. Many protein targets have not been amenable to this because we don't know what the potential binding pockets look like. That set of proteins will now drastically shrink. We're going to have a lot of new drug candidates, and with any luck new drugs, come out of this.
CASP (Critical Assessment of protein Structure Prediction) is calling it a solution. To quote from the article:
"We have been stuck on this one problem – how do proteins fold up – for nearly 50 years. To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we’d ever get there, is a very special moment."
--Professor John Moult
Co-founder and chair of CASP
It's an improvement, and a big one, but not a solution to the problem. It mainly shows just how stuck the community had gotten with their techniques, and how recent improvements in DNNs and information-theoretic methods can be exploited if you have lots of TPU time.
Well, it's not. Nature does not have a committee, sorry. Proteins are delicate "machines": a change in the sequence (and thus the 3D structure) as small as a few amino acids can effectively change the structure and function of the protein. On top of that, proteins are dynamic beasts. In any case, it's a great advance, but DeepMind, like many companies, likes to toot its own horn a little too much.
I think that missed the mark, regardless of the rest of the discussion. It's like saying that the winner of the DARPA Grand Challenge for self-driving cars "solved" autonomous driving back in 2005.
This benchmark may be solved, but simultaneously, there remain other open problems relating to protein folding which are unsolved and which may not even have benchmarks yet :)
Said differently, there's vast space between having a great result on a specific benchmark (this) and solving all interesting problems in a scientific field.
This comes down to one of the subtler aspects of English.
"To see DeepMind produce a solution for this" does not imply something is solved. I can produce a bad solution. I can produce a really good solution. All without solving a problem.
This is a really good solution. Of course, there's still room for more research and better methods in the future, but now computational protein structure prediction can compete with experiments actually measuring the structure.
Laypersons often use the word "solution" in situations where an academic would say "method" or "approach": we did something useful, but it may not be the best possible way.
In pure math, "solution" means determining whether a logical statement is true or false. For example, in (asymptotic, worst-case) analysis of algorithms, the logical statements take the form "there exists an algorithm to compute X with asymptotic complexity O(f(n)), and no algorithm with lower complexity exists." These are crisp notions with no room for debate.
In this competition, they defined "solved" as achieving 90% accuracy. This is somewhere in between the two definitions. It's technically a valid problem statement, but it can become obsolete in a weird way. If someone else solves the problem of achieving 95% accuracy, then suddenly the 90% solution doesn't look so good. Compare to e.g. sorting. If we add the requirement of a stable sort, it becomes a new problem. Stable sorting algorithms are not automatically "better" than unstable ones.
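To make the stable-sort point concrete, here's a toy example in Python (whose built-in sort is guaranteed stable): equal keys keep their original relative order, which an unstable sort is free to scramble.

```python
# Stability: among records with equal sort keys, the input order survives.
records = [("bob", 2), ("amy", 1), ("cal", 2), ("dee", 1)]

# Python's sorted() is documented as stable, so within each key group
# the original order (bob before cal, amy before dee) is preserved.
by_key = sorted(records, key=lambda r: r[1])
print(by_key)  # [('amy', 1), ('dee', 1), ('bob', 2), ('cal', 2)]
```

An unstable sort solving the original "sort by key" problem could legally return ('cal', 2) before ('bob', 2); it's a different problem statement, not a worse solution.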
This is the key fact. Only 2% of the subjects tested antibody-positive at the end of the study, low enough to be near the false-positive rate of the home antibody tests. The mask group did have fewer cases (by PCR, they had zero), but the numbers are too small to conclude anything. If you ran the study in the US in the same time period, you would have had a better shot at a statistically meaningful result.
If people with no current symptoms couldn't spread COVID-19, that would be an extremely convenient fact for controlling spread! But it doesn't look like we're that lucky.
Only 6% of the subjects in that study were asymptomatic by their definition. I think this is a confusion of asymptomatic and pre-symptomatic: most cases will have a few days of pre-symptomatic infection regardless of the symptoms they have later.
I think it might be a good point but I’m not quite understanding.
Why does 6% matter if the statistical significance works out? Also I can’t seem to see how you’re saying the study has asymptomatic and presymptomatic confused.
(I don’t have a great background in all this so I could def be missing this.)
The study is fine, but the way it's using the word "asymptomatic" is not consistent with the conclusion you drew from it, and the 6% shows why. A case which is "asymptomatic" in the study has no noticeable symptoms through the whole course of infection. If you know in advance that you're that kind of case, then you're less likely to spread the disease. But 94% of cases are not that kind of case, and no test except waiting a couple of weeks can distinguish asymptomatic from pre-symptomatic. Information that's not available until 14 days from now can't be used to determine who should wear a mask today.
"What is the difference between people who are asymptomatic or pre-symptomatic? Don’t they both mean someone without symptoms?"
"Both terms refer to people who do not have symptoms. The difference is that ‘asymptomatic’ refers to people who are infected but never develop any symptoms, while ‘pre-symptomatic’ refers to infected people who have not yet developed symptoms but go on to develop symptoms later."
There is a good reason to develop at least one rhinovirus vaccine, though: so we have the expertise to deal with a bad rhinovirus strain if one should arise later. The coronavirus experience - SARS, MERS, and now COVID-19 - seems to suggest that such groundwork on common virus types would be a good idea.