Hacker News | ejlangev's comments

This article, while probably true and accurate, seems designed to make SBF the fall guy by portraying him as a uniquely bad actor in the crypto space. In reality, the crypto space is full of fraud that is probably criminal, and he is just the latest example. That doesn't excuse his behavior, and he should probably go to prison for this, but it would be a shame if people were taken in by thinking of him as just a bad apple.


He is not the "fall guy" considering he, along with Alameda's CEO, made the direct, conscious decision to embezzle customer funds to prop up Alameda. Just because he stole what looks to be a billion dollars doesn't mean there are no other bad actors.

I think this article is great. I've commented on this for a while, but I still can't fathom why most of the media isn't highlighting that this was plain old, straightforward theft. There is no gray area here, the lack of crypto regulations is pretty irrelevant, and I'm glad a journalist is saying clearly that this was just fraud and theft.


I totally missed the part where they say that there have been no other criminals in crypto. Could you point me to that?

Or are you just thinking that the PR narrative of well-intentioned-but-made-some-mistakes would be better at protecting people from future scams?


What an embarrassing article to publish. I can't believe someone was paid for writing this many words about this particular topic. Regardless of whether you agree with his politics, there is nothing opulent about having a net worth of $2 million at his age with his level of success. Much of his money was made through writing a book, and you know he would be willing to pay the increased taxes himself. Just stupid.


Fitting that, in a world where all the same old labor fights that were settled in the past are happening again, someone would unironically invent a new form of indentured servitude.

Makes me laugh at people who think that non-STEM subjects have no value. Perhaps opening a book in one of those non-STEM fields once in a while would help you avoid reinventing things consigned to the dustbin of history without realizing it.

Nearly but not quite as rich as when ride-sharing companies accidentally invent the city bus.


Oh come on! Quit it with the hyperbole about indentured servitude. Agreeing to pay a lender a cut of your paycheck is in no form slavery.


Who is reinventing indentured servitude? There is no obligation to work for any particular employer.

The public service student loan forgiveness program (and similar private programs, although the latter seem uncommon) is closer to indentured servitude, as you either have to work an eligible job for a particular employer (e.g., the government) or face a significant financial cost.


The belief that this kind of knowledge and these critical reasoning skills are exclusive to non-STEM domains is part of the reason non-STEM has a strained reputation.


Computer Science itself is rife with reinvention of its own concepts, so I don't think wider reading would change much.

Furthermore, banksters aren't coming up with these mortal-debt schemes due to not knowing history or ethics, but because owning people is quite lucrative.

As far as mortal debt schemes go, this actually seems nicer than student loans denominated in straight dollars. But of course it will be nicer to start off - longer term it will probably converge on a similar income siphon as traditional loans, just with a nice bonus payday when one of the subjects makes it big.


The man truly is his own worst enemy. Seemed clear that something was up with him when he called that cave diver a "pedo" for no apparent reason. I hope that being held accountable for his actions deflates his head a little bit. Smart guy who is working on some good stuff that most people support and he doesn't need to keep lying about it.


There's that line about how around 2pm is when we do stupid stuff at work: mistakes in hospitals, stupid tweets, etc.

Reference to "When" by Daniel Pink


> "Private by design"

Lol

It's me, the company that hoards your personal data and just got hacked in a huge way, please put a camera in your house that I control which I _swear_ won't be used to spy on you.

Sort of hilariously tone-deaf that they launched this right after getting hacked.


I recall hearing that they delayed the release, which was scheduled within days of the Cambridge Analytica incident (or one of the several others since).


Best news from them in months. I wish Instagram would do the same. The "smart" ordering of tweets and photos just makes my experience with both apps worse.


This idea always struck me as fairly misguided. It of course makes sense to try to muster evidence and data when making decisions but it's not some sort of panacea. The book The Tyranny of Metrics (https://www.amazon.com/Tyranny-Metrics-Jerry-Z-Muller/dp/069...) goes into the successes and failures of various attempts to use data for decision making in some detail. Found it to be an interesting read.

In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.


The underlying problem is politicization. You would think that if one political party wants something and another political party wants the opposite, you could find a set of impartial experts who would provide hard data and settle the question. In the real world, there are two sets of experts holding opposite views and providing contradictory data. Everybody makes a big noise, and eventually nobody knows what happened, just that the question got muddied and you aren't so sure about anything anymore.


The problem underlying politicisation is confidence. Science isn't binary; it's a set of circles of decreasing confidence that spreads out from a core of propositions we're very confident about - more or less what you'll learn on an undergrad physics course - to a set of increasingly tentative hypotheses.

A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.

Policy makers and the media are some combination of corrupt and clueless, so they're happy to go with the false equivalence this creates.

One way to depoliticise science would be to have an international science foundation, which was funded independently of any individual government.

Of course there would be squeals of disapproval from vested interests, but that would simply highlight the problem - the vested interests don't want independent criticism or oversight. Their entire MO is based on regulatory capture which gives them the freedom (for themselves only) to operate as they want with no personal or financial consequences.

Scientific accountability would set them on the path to democratic accountability, which is the last thing they'll accept.


> A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.

I think scale/proportion is also a problem. Humans seem to place a lot of value in narratives/stories, but we aren't so good with quantities (e.g. https://en.wikipedia.org/wiki/Conjunction_fallacy ). Pretty much everything (economics, climate, etc.) has factors pushing it in different directions, so we can always find a counterargument to any position (e.g. we can push back against climate change by pointing to solar cycles, CO2 causing extra plant growth, etc.); that's fine, but some factors are overwhelmingly more important than others, yet we seem to cling to these stories/narratives and weight them more equally than we should.

As a concrete example, a family member used to leave their lights on overnight, claiming that "they use more energy than normal when they're first switched on". Whilst true, the saving is cancelled out after seconds ( e.g. https://www.energy.gov/energysaver/when-turn-your-lights )
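
To make the scale mismatch concrete, here's a rough back-of-the-envelope sketch in Python. The bulb wattage and the size of the switch-on surge are assumed, illustrative figures, not numbers taken from the linked page:

    # Back-of-the-envelope check on the "leave the lights on overnight" claim.
    # Illustrative, assumed numbers -- not taken from the energy.gov page.
    BULB_WATTS = 15                # assumed steady-state draw of a CFL
    STARTUP_EQUIV_SECONDS = 5      # assumed: switch-on surge costs about as much
                                   # energy as 5 seconds of normal operation

    surge_wh = BULB_WATTS * STARTUP_EQUIV_SECONDS / 3600   # energy "saved" by not switching on
    overnight_wh = BULB_WATTS * 8                          # energy burned by leaving it on 8 hours

    print(f"Switch-on surge: {surge_wh:.4f} Wh")
    print(f"Left on overnight: {overnight_wh:.1f} Wh")
    print(f"Leaving it on wastes roughly {overnight_wh / surge_wh:.0f}x the surge energy")

Under those assumptions the surge pays for itself after a few seconds of the bulb being off, while an 8-hour overnight burn costs thousands of times more energy than the switch-on ever saves.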


There is also the issue of getting what you measure for, since humans game systems to their benefit. Look at standardized tests: they guided education from an early age, as opposed to actual educational outcomes. I remember vividly being in elementary school and doing multiple workbooks with pages of analogies, with occasional ambiguous answers. There wasn't any real learning, just a bunch of drilling that depended on existing knowledge. Then the SAT dropped analogies for a writing section and they practically disappeared off the face of the earth. They showed up four times a year at most - literally - usually because the quarterly state tests had one question with them.


There's a middle ground.

Practically no systemic analysis is done within government, at least in Chicago. Government systems are compartmentalized in ways that make interfacing with them for any worthwhile analysis impossible. Example: the only analysis Chicago's finance department had done on its parking tickets is a single, very high-level spreadsheet.

Some analysis is better than no analysis.


> Practically no systemic analysis is done within government

This is so far from being true, it undermines your point and your post.

The Federal Reserve does no systematic analysis? The U.S. Treasury does no systematic analysis? The Bureau of Labor Statistics does no systematic analysis? The Congressional Budget Office does no systematic analysis? The Centers for Disease Control do no systematic analysis?


Systemic, not systematic.


I don't even understand what you mean. I have read quite a few government papers (mainly federal) and they usually were well researched. I don't know how things are on a local level but the politicians in Congress have a lot of well researched data available if they want to listen (which they often don't).


Sorry - should have been more clear. I'm mostly talking about local policy, which is where most of my experience with government comes from, through many FOIA requests. Federal is much more calculated (read: slow) in comparison to local government. I've found local government to be very "we've checked the box, let's move on", which doesn't leave room for analysis, let alone the acknowledgement that analysis is even possible.


Makes sense. The way local governments deal with things like pensions is truly horrifying. Even the simplest analysis would quickly show that they are setting themselves up for disaster.


"We could add more money to the pension fund. Or we could assume a 10% market rate of return forever, and spend that money on new office chairs and computers instead. We can't get a tax increase just for those, but we could for the prospect of homeless old people eating cat food, and the next guy will get blamed for it."

They're actually setting other people up for disaster, hoping that they will already be gone when it hits.
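
A minimal sketch of that dynamic, with made-up figures, showing how the assumed rate of return drives what must be set aside today (Python; the $100M target and 30-year horizon are hypothetical):

    # Why an optimistic return assumption lets you skip contributions today.
    # Made-up figures: a pension must pay out $100M in 30 years.
    TARGET = 100_000_000
    YEARS = 30

    for assumed_return in (0.10, 0.06):
        # present value that must be set aside today to hit the target
        required_today = TARGET / (1 + assumed_return) ** YEARS
        print(f"Assumed return {assumed_return:.0%}: fund ${required_today / 1e6:.1f}M now")

At an assumed 10% return you "only" need about $5.7M today; at 6% you need about $17.4M. If real returns come in closer to 6% while the fund was financed as if they were 10%, the shortfall lands on whoever is in office when the benefits come due.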


If it is that easy to predict then it is probably career suicide to be the person who produces the analysis that proves a disaster is going to happen.


> Some analysis is better than no analysis.

I don't agree with that. Doing some analysis on a limited and possibly skewed data set can lull you into a false sense of understanding. It makes your ideas seem objective when they can in fact be completely baseless.


How is no analysis just as good?

It's hard to believe that the percentage of successes with no analysis is greater than or equal to the percentage of successes with some analysis.


I presume because faulty analysis or bad data can actually get you further off track than a back-of-the-napkin guess or hunch.

Additionally, you now have the pride/confidence thing in your even worse results because you did "analysis"...


Often the pride is constant.


As always, you need to take the limitations of the available information into account. That ought to be part of the analysis, though of course it often, or perhaps usually, isn't.


Not at all. Analysis of limited data gives you wide credible intervals, the exact thing that guards against unwarranted confidence.
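
A minimal sketch of that point, assuming Bernoulli trials with a flat Beta(1, 1) prior (the counts are illustrative):

    # How the width of a 95% credible interval shrinks as data accumulates.
    # Minimal sketch: Bernoulli trials with a flat Beta(1, 1) prior.
    from scipy.stats import beta

    for n, successes in [(5, 3), (50, 30), (500, 300)]:
        a = 1 + successes          # posterior alpha
        b = 1 + (n - successes)    # posterior beta
        lo, hi = beta.ppf([0.025, 0.975], a, b)
        print(f"n={n:4d}: 95% credible interval ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")

With 5 observations the interval stays wide; with 500 it narrows to a few percentage points, which is exactly the signal telling you how much confidence the data actually supports.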


Are you saying that zero middle ground exists?


Obviously a middle ground exists, but it's possible that in part of the scale you get worse results as you head toward the middle.


I didn't downvote you :(


Sorry - edited. Still didn't answer the question, though :p


Once you resolve questions of values and fairness, you should be able to leverage data and metrics to achieve outcomes consistent with those values. Say you’re designing a jet engine. You decide what output parameters you care about optimising (maximum thrust versus fuel efficiency, for example). You know that you can manipulate certain input parameters to influence those outputs. And you can use data to verify and iterate on your design. But all you can do is change inputs to a very complex system. The unbending rules of the system itself decide how those affect the outputs.

The problem is that most in government simply are not systems thinkers. They are focused on values and fairness, and believe that once you’ve identified those values, you can directly legislate those into outcomes. This thinking leads to spectacular failures (the war on drugs, the war on poverty, tough on crime, etc).


> The problem is that most in government simply are not systems thinkers.

I don't think the main problem is that "people in the government are stupid". Real life and societies are much more complex than a jet engine. There are thousands of value goals, too many to be able to put numerical targets on each one.

What is the optimal ratio of potholes to unsolved murders?

And when you forget to include any of the value goals in your model, then you get a paperclip AI scenario with terrible effects somewhere else.


I didn’t say they were stupid. I said they are not systems thinkers. There are lots of very smart people who don’t view the world in terms of cause and effect, cost and benefit, action and reaction. Irrational and harmful, I might say instead...


> In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.

Sure, but once you decide, then you should use data to optimize it. Seems pretty straightforward, and I don't think anything in this article would disagree with that.


Even after I know what I'm optimizing for, I cannot use data to optimize for it. I know from experience that unintended results will happen. Some of them will be very bad, which will force me to come up with a whole new set of things to optimize for.

You are also assuming we can agree on what to optimize for. In fact we do not, and will not.


> Even after I know what I'm optimizing for, I cannot use data to optimize for it. I know from experience that unintended results will happen. Some of them will be very bad, which will force me to come up with a whole new set of things to optimize for.

You can't use data to optimize anything? Google and Facebook must be wasting their time collecting all that user data, then.

> You are also assuming we can agree on what to optimize for. In fact we do not, and will not.

We agree on lots of things to optimize for. There are cases of disagreement, but very very broad agreement as well.


>You can't use data to optimize anything?

No, I can use data; the problem is I don't know what all the effects of that optimization will be. So I constantly have to change what I'm optimizing for.

> We agree on lots of things to optimize for.

Broadly, but there are limited resources and each thing affects the others. So even though we agree that A and B are worth optimizing for, we will disagree on which is more important. Worse, in many cases we will agree on A and B, but the data shows you cannot optimize for one without pessimizing the other.

That is, the broad agreement isn't really enough to do anything with; we need the details, and there we disagree.


> No, I can use data; the problem is I don't know what all the effects of that optimization will be. So I constantly have to change what I'm optimizing for.

What are you trying to say here? Optimizing things with data is hard?

> Broadly, but there are limited resources and each thing affects the others. So even though we agree that A and B are worth optimizing for, we will disagree on which is more important. Worse, in many cases we will agree on A and B, but the data shows you cannot optimize for one without pessimizing the other.

I think we even agree broadly enough on the relative weights of many things to optimize for them. And I think we at the very least agree enough on things to partially optimize them, or Pareto-optimize them. In many cases there is low-hanging fruit to be picked that can optimize a metric we all agree is good without sacrificing another.


> In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.

Not to mention that matters of value and fairness also change the availability of, and interest in, research itself. For example, if some people don't like the results of your research (despite presenting no challenge to the facts or methodology), they can now apparently replace it without notice after publishing it in a journal (and do so in a manner which is intended to make it difficult to publish it in other journals).

The academy is the wrong place to direct politics, because politics already direct the academy.

From my perspective, a more likely overall improvement in the ongoing quality of policy would be a requirement that all policies sunset reasonably soon by default, even if they don't seem divisive at the time they're passed. As it regards controlled substances, this would lower the bar for repeal to "nobody particularly cares to renew it" from "nobody particularly cares to do the work to repeal it".


This is a great point. I wish more people with an interest in policy/politics also had an interest in intellectual history.

Many of the most important debates of the last 100+ years have been about exactly this.

Rationalists like Ayn Rand (sort of), Rosa Luxemburg (Red Rosa), and others on one side, and the likes of Karl Popper and Hayek on the other.

When people talk about the "testability" of a theory to decide if it's a scientific theory... they are borrowing from Popper's criticism of Freudianism, Marxism, and the concept of metric-based "government science."


He doesn't seem to have done well on a lot of these predictions. These were slated to happen in the early 2000s:

> Translating telephones allow people to speak to each other in different languages.

This is sort of possible, but not really fluently.

> Machines designed to transcribe speech into computer text allow deaf people to understand spoken words.

This one basically exists.

> Exoskeletal, robotic leg prostheses allow the paraplegic to walk.

I think there are some prototypes of this type of stuff but nothing in widespread use.

> Telephone calls are routinely screened by intelligent answering machines that ask questions to determine the call's nature and priority.

Not really; we have IVRs, but they are in no sense "smart".

> "Cybernetic chauffeurs" can drive cars for humans and can be retrofitted into existing cars. They work by communicating with other vehicles and with sensors embedded along the roads.

Not even close

Seems like Ray Kurzweil might not be the best predictor of what is actually going to happen in the future. Seems like a bit of a snake oil salesman to me.


Szilard features prominently in Richard Rhodes' "The Making of the Atomic Bomb." Definitely worth the read if you have an interest in early nuclear science. Really filled in a lot of details about scientists that I remember learning about in school but never knew much about personally. Highly recommend it!


Superb book, perhaps the finest non-fiction book I've had the pleasure of reading. I love the way he is able to create a narrative that goes all the way back to H. G. Wells to find its roots.

Rhodes' follow-up, "Dark Sun: The Making Of The Hydrogen Bomb" [1], basically continues the story where TMAB left off. The narrative gets a bit more fractured and factual, focused on the question of how to safeguard nuclear weapons and what their political/military/diplomatic purpose is. But it does wrap up the J. Robert Oppenheimer story.

Once one has read these two, I strongly recommend "American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer" [2] by Bird and Sherwin. Also extremely well written.

[1] https://www.amazon.com/Dark-Sun-Making-Hydrogen-Bomb/dp/0684...

[2] https://www.amazon.com/American-Prometheus-Triumph-Tragedy-O...


We have the same bookshelf, apparently! Have you read any other non-fiction greatest-hits tomes, like The Power Broker?


As we all have the same bookshelf here, the next brick over on mine is Daniel Yergin's "The Prize". An amazing work on the history of oil.


"The Prize" is fantastic, as is the associated (7 hour!) PBS-produced documentary that's available in its entirety on youtube. https://www.youtube.com/watch?v=n1stQW6i1Ko The actual in-person interviews with oil execs who were in the room during nationalization are amazing!


The Power Broker is a great, huge read.

Another good one that I got recently is Arabia Felix [1], a rather obscure Danish book from 1962, reissued by NYRB. A minor classic.

I'm not a war buff by any stretch, but I can recommend Antony Beevor. Sometimes his books devolve into exhausting, never-ending play-by-plays of tank and troop movements, but both Stalingrad [2] and The Fall of Berlin [3] were fascinating just for his ability to conjure up the time and place. Inside the Third Reich was similarly interesting, even though it's known to be a flawed narrative.

I also recently read Bad Blood, about Theranos, which was excellent. Literary-wise not quite on the same level, though.

Got any recommendations?

[1] https://www.npr.org/2017/06/17/531929925/in-the-refrains-of-...

[2] https://www.amazon.com/Stalingrad-Fateful-1942-1943-Antony-B...

[3] https://www.amazon.com/Fall-Berlin-1945-Antony-Beevor


I got the impression from Richard Rhodes's books that Szilard, for all his ability and achievements, was a pretty pretentious know-it-all with a (possibly justified) powerful sense of his own importance. This article kind of hints at this nature as well. I think it's largely this that has prevented his name from being as widely known as Oppenheimer, Teller, Fermi, etc. when one thinks of the Manhattan Project, as those who retell these events often appeared not to like the man particularly much. Fermi of course chose never to work with Szilard again.


Really great visualization here, kudos to the author. Really makes it clear what's going on. Made me think that things like this could be useful in textbooks as well if they were more digital. Maybe in the next few years there will be a better way to integrate this kind of stuff into courses; I guess today it could always happen in lectures.


Take a look at distill.pub (which was also featured on the HN front page today). It's essentially a machine learning journal with a focus on clarity and visualization, and is therefore interactive and visual. Colah is around on HN and might comment explaining it better.


The Young Lady's Illustrated Primer, pending next century's Kickstarter equivalent.

Integrating Explained Visually into digital coursework flows would be amazing.

(Disclaimer: I have not taken any online courses recently, so my memories of horrendous archaic information management are wildly outdated, I hope.)


Yeah that'd be a really neat application for AR as well, interactive examples displayed around the textbook for the current topic etc.

