
I think investigative journalism nonprofits like ProPublica are the best point of leverage for this problem. Big newspapers like the Washington Post & NY Times do good investigative work too, but pair it with lots of duplicate stories.


If we assume everyone has been infected? Why would we assume that? The estimates I've seen (including asymptomatic) are around 1/3 infected. The source you cited is from 2020, so it doesn't tell us much about today.


Yup, good catch about the old source; I removed that from my comment.

The source provided by animats puts the seroprevalence at 21.6% for the U.S. I think that should be a lower bound of the number of individuals who have been infected, given antibody levels wane over time (even though immunity does not necessarily wane).


On the one hand, peer review takes long enough already. On the other... I saw an influential paper that published obviously-wrong timing data, essentially saying that time(A) + time(B) < time(B). It seems they were including the Python interpreter startup time (~0.1s) in a profile of a ~0.01s algorithm.
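
To make the failure mode concrete, here is a minimal sketch (the ~10 ms workload is a stand-in, not the paper's actual algorithm): timing a fresh interpreter per run folds the ~0.1 s startup cost into the measurement, while timing in-process does not.

    # Sketch of how interpreter startup can dominate a timing measurement.
    import subprocess
    import sys
    import time

    SNIPPET = "sum(i * i for i in range(200_000))"  # roughly 10 ms of work

    # Misleading: launching a fresh interpreter per run includes ~0.1 s of
    # startup, which dwarfs the algorithm being measured.
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", SNIPPET], check=True)
    print("subprocess wall time:", time.perf_counter() - start)

    # Better: time the work in-process, after the interpreter is already up.
    start = time.perf_counter()
    eval(SNIPPET)
    print("in-process time:", time.perf_counter() - start)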


Random wiki pages are one of the things that give HN its pleasant Old Internet atmosphere to me. Sometimes I've thought about sampling over Wikipedia uniformly at random, submitting to HN, and observing the trends in what rises to the top.


Hahaha. Have you ever clicked the "Random Article" button on Wikipedia? I learned the world has a _lot_ of tiny villages no one has ever heard of.


What if there were a recommender system in the loop that looked at which articles tend to be voted up and then would be more likely to submit articles that it thought HN would find interesting — would a uniform distribution filtered by a "probability to be well-upvoted" be a good way to do this?
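
A rough sketch of what I have in mind, using Wikipedia's random-summary REST endpoint; the scoring function here is a made-up placeholder for whatever model you would actually train on past HN submissions of Wikipedia links:

    import requests

    RANDOM_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

    def predicted_upvote_probability(summary: dict) -> float:
        # Placeholder heuristic: extract length as a crude proxy for substance.
        return min(len(summary.get("extract", "")) / 2000, 1.0)

    def sample_candidates(n: int = 50, threshold: float = 0.5) -> list[str]:
        # Draw uniformly at random, then keep only articles the (hypothetical)
        # model thinks HN would upvote.
        picks = []
        for _ in range(n):
            summary = requests.get(RANDOM_SUMMARY, timeout=10).json()
            if predicted_upvote_probability(summary) >= threshold:
                picks.append(summary["content_urls"]["desktop"]["page"])
        return picks

    if __name__ == "__main__":
        for url in sample_candidates(10):
            print(url)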


I used to do that a lot. Lots of small villages and soccer/football players...


The rules are what matters, yes, but the naming of games is a classification problem over sets of rules. This is most obvious with competitive video games, which have complex rules that are tweaked often - each balance patch changes the rules a little, but it's still the same game. The boundaries are fuzzy, and even the players may disagree about what constitutes enough change to be a different game.

With Senet, we don't even know how much the rules have changed, so it's hard to say. But hopefully the reconstructors did well enough that an ancient player wouldn't say "what game is that?" but instead "that's a goofy way to play Senet."


Here is the reference Wikipedia gives for the two historians who have tried reconstructing the rules[0].

It seems that only "tomb images" and some game boards and pieces are used in the reconstruction. The article doesn't suggest that any rules have a basis in history. What rules we have are fabricated from nothing but the pieces, like reconstructing the English language from nothing but the alphabet.

0. http://www.gamecabinet.com/history/Senet.html


A video game with two hundred thousand lines of code doesn't seem like an apt comparison to a board game.


Pretty close to a physical sport, if we assume that most of that code is involved in simulating something resembling physics, or in running simple robot players.

Like, how much of a basketball game is concerned with the actual Rules Of Basketball vs. simulating a bunch of people playing basketball? And if you made Space Basketball with characters with superpowers who kept getting balance tweaks as the player community figured out holes in the rules, most of that simulation code would stay the same, as the tiny percentage of rules code evolved.


I don't think they do. The electrons are indistinguishable - there is no "which" and no need for selection.


I think that's what was meant by "Middle Ages/ early Renaissance" in the comment above. In this time period, literacy was a rare advantage. For instance, in early modern England, literate people were not subject to ordinary criminal courts on a first offense.

https://en.m.wikipedia.org/wiki/Benefit_of_clergy


Thank you.


It would be awful to write every test using Copilot, but there is potential there for a certain kind of test. If I'm writing an API, I want fresh eyes on it, not just tests written by the person who understands it most (me). For example, a fresh user might try to apply a common pattern that my API breaks. Copilot might be able to act like such a tester. By writing generic tests, it could mimic developers who haven't understood the API before they start using it (most of them).


If you can find an example of Copilot coming up with a test you wouldn't have thought of, I'd be very interested to see it.

Even if that happened, which I am not expecting, I think the need is better met by simpler and more effective means, e.g. a good tester writing up a list of things they test about APIs: https://www.sisense.com/blog/rest-api-testing-strategy-what-...


Copilot as defined would not be "fresh eyes"... it would be "the old, tired eyes of every code writer who uploaded stuff to GitHub, not knowing if they made one-off errors or mistakes in their code."


I mean fresh eyes with respect to my new API. Having seen a lot of other code is a benefit. I expect most tests that Copilot writes to fail, but I would hope some would fail in interesting ways. For example, off-by-one errors might encourage me to document my indexing convention, or to use a generator rather than indexing.
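
A toy illustration of the last point, with an invented PageStore class standing in for the API:

    class PageStore:
        def __init__(self, pages):
            self._pages = list(pages)

        # Index-based access: callers must know whether indices start at 0 or 1
        # and where the end is, which is where off-by-one errors creep in.
        def page_at(self, index):
            return self._pages[index]

        def page_count(self):
            return len(self._pages)

        # Generator-based access: callers just iterate, no indices to get wrong.
        def pages(self):
            yield from self._pages

    store = PageStore(["intro", "body", "appendix"])

    # Easy to write range(1, store.page_count()) by mistake and silently skip a page.
    for i in range(store.page_count()):
        print(store.page_at(i))

    # No boundary to reason about.
    for page in store.pages():
        print(page)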


> I want fresh eyes on it

Crucially, that's not what copilot is.


This is a statistically shoddy talking point. The average of all causes of death is right at the average life expectancy, by definition. Any cause of death weighted towards the elderly will have an average age above the average life expectancy.
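
With made-up but plausible numbers, the arithmetic looks like this:

    # Illustration with invented figures: a cause of death weighted toward the
    # elderly sits above life expectancy simply because the all-cause average
    # age at death is (roughly) the life expectancy.
    deaths = [
        # (cause, share of all deaths, average age at death) -- illustrative only
        ("elderly-weighted cause", 0.30, 82.0),
        ("everything else",        0.70, 76.3),
    ]

    life_expectancy = sum(share * age for _, share, age in deaths)
    print(f"all-cause average age at death: {life_expectancy:.1f}")  # ~78.0

    for cause, _, age in deaths:
        print(f"{cause}: average age {age:.1f} "
              f"({age - life_expectancy:+.1f} vs. the all-cause average)")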


This is precisely my point. Quite easy to miss the spread of a virus when:

1) Your population is by and large not susceptible to severe illness from the virus. Rates of asymptomatic & mild cases are much higher than in developed countries. Tragically, this is because most do not live long enough to be in the high risk group.

2) No widespread testing. Pneumonia deaths are just pneumonia.


In addition to the corrosion problem, extracting anything valuable from seawater means processing a huge volume of it. Expensive membrane * huge volume = huge expense.
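
Back-of-the-envelope, using the commonly cited figure of roughly 0.2 mg of lithium per litre of seawater (treat the numbers as illustrative, not as a process design):

    # Rough volume needed per tonne of lithium, assuming 100% recovery.
    LITHIUM_MG_PER_L = 0.2          # approximate seawater concentration
    TARGET_MG = 1.0 * 1e9           # one tonne of lithium, in mg

    litres_needed = TARGET_MG / LITHIUM_MG_PER_L
    cubic_metres = litres_needed / 1000
    print(f"{cubic_metres:,.0f} m^3 of seawater per tonne of lithium")  # ~5,000,000 m^3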


They mention sending wastewater downstream for desalination, so if this plant is built alongside an existing desalination plant, then the desalination plant can take care of the offshore piping and pumping, removing a lot of the complexity for this lithium extraction plant.

