Hacker News | robnagler's comments

People have been saying Buffett is a lucky outlier for decades. Is his success in the following decades survivorship bias or a valid ex post facto test of success? If you believed in Buffett's investment philosophy in the 1980s based on decades of results, and had a thousand bucks to buy a Berkshire A share, would it have been a lucky pick?

I didn't believe you could beat the market until about 2000, and I'm happy I started investing in Berkshire then. If I had bought Apple instead, it would have been a lucky pick imho. I think of investing in Berkshire as being based on ex post facto results, not survivorship bias.

A decade ago I did an analysis of the top 25 on the Forbes list[1], and the TL;DR is that they had 1) inherited their wealth, 2) bet on a single company, or 3) been value investors. Nobody in the top 25 is a "trader" or "quant" (e.g. Paulson or Soros). That analysis is still true, btw.

[1] https://www.robnagler.com/2009/06/13/Objectively-Rich.html


Few people who analyse Buffett think he's just been lucky. He's more like a chess master who is just better at his game than most he competes with.


It's interesting that the 20y compound returns are pretty close. I don't understand the STDev value in the tables, and it is not explained. Is that the standard deviation of the annual returns?


This Freakonomics-style argument doesn't work. The only price that matters is the market clearing price. Appraisers don't know that any better than anybody else. The only thing an independent appraiser would do is use (well-known) comps to anchor the price, which is not something a seller wants. Rather, a seller wants competition among buyers.

I used to think that real estate agents were a waste of time until I lived through several difficult transactions. Each real estate transaction involves negotiations for loans, inspection objections, and various legal details. For example, a good real estate agent will make sure the title commitment documents are in hand early enough that you have time to review them, because they often have a lot of mistakes. This can kill a closing and/or require post-transaction work to correct.

I bought a house in the middle of a flood (Boulder 2013). The inspection objection had already been negotiated before the flood occurred. Getting the deal done was a huge task, because the seller was lying about the house being affected by the flood. I still wanted the house, but I needed proper compensation for the remediation. My real estate agent did an amazing job of bringing the deal together by tossing in some money and getting the other broker and seller to contribute a bit.

I'm now living with the above real estate agent, and she's on the phone right now at 7:45p working on a deal. I have watched her pull together deals, and the important thing is that all her clients are happy. I have met many of them, and they are very grateful for her work. I know there are many other agents out there like her (maybe not quite as good :-).


You seem to have missed the point of the appraiser.

The market clearing price is not set in stone. It is determined partly by the actions of the agents.

The appraiser's estimate of the market clearing price would set a baseline against which the agent's performance can be measured. Agents are then paid according to how much their performance exceeds that expectation.

Without the appraiser's estimate, the performance of the agents cannot be measured. There is no way to reward good performance or punish bad performance. The agents are simply collecting their cut of the transaction.
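
For concreteness, one hypothetical way that pay scheme could look (my numbers and formula, not anything specified above):

  # illustration only: bonus tied to beating the appraiser's estimate
  appraisal=500000     # appraiser's estimate of the market clearing price
  sale_price=520000    # what the agent actually negotiated
  bonus_rate=10        # percent of the amount above the appraisal
  echo $(( (sale_price - appraisal) * bonus_rate / 100 ))   # 2000: the performance-based pay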


A bad measurement is worse than no measurement. The appraiser's estimate may or may not (likely not in hot or cold markets) offer a useful benchmark.


In this case, bad measurements are OK. They average out over time. If the bias is one way, then one type of agent (seller or buyer) gets a bit more pay than the other, but this is not a disaster and is compensated by agents competing to work on the higher-paying side of the transactions.


More bikeshedding for the new year:

  #!/bin/bash
  # ((n % 3)) exits non-zero when n is divisible by 3, so || runs the assignment
  for n in {1..100}; do
    f=
    ((n % 3)) || f=Fizz
    ((n % 5)) || f+=Buzz
    echo "${f:-$n}"   # fall back to the number when f is empty
  done


I disagree there are no consequences. Broken ribs have been known to occur.[0]

[0] https://www.viarob.com/my/page/Snowtorturing


RadiaSoft | Research Software Engineer | Boulder, CO | FULL-TIME | ONSITE | http://rsl.link/rse

RadiaSoft is an open-source software company dedicated to improving scientific workflows through our state-of-the-art Science-as-a-Service platform, Sirepo. We are seeking a Research Software Engineer for our Boulder, Colorado office. Our small software team develops innovative solutions using modern technologies for legacy physics codes. Programmers also assist our physicists in optimizing codes for HPC, improving computational science workflows, supporting reproducible research, and presenting our innovative results to the scientific community at conferences and in publications.

Desired skills and experience

The successful applicant will have a BS or higher in computer science, physics, or a closely related field, strong programming skills, and significant experience with JavaScript and/or Python. Other desirable skills and experience include parallel computing, CAD, and devops.

About the employer

RadiaSoft conducts contract R&D for academic, corporate and federal customers, specializing in the use and development of parallel scientific software. We have particular expertise in modeling relativistic charged particle beams, intense radiation pulses, and the interaction of both with plasmas and dielectric structures. We are developing open source browser-based interfaces for cloud-based scientific computing.

RadiaSoft offers a challenging, team-oriented work environment, competitive salaries and an excellent benefit package, including health insurance and a 401k with employer matching. Applicants must be authorized to work for any employer in the United States. RadiaSoft is an equal opportunity employer.


goalkicker.com is related to codeday.top afaict. They are the only two sites on Google that match some of the snippets. CodeDay is a very sophisticated machine-translation blog. The PDFs appear to be a reorganization of those blog entries.

Very interesting. Well-written by machines.


Something is wrong with the math:

* Video encoding is 70% of Netflix's computing needs

* Stated cost savings on encoding is 92%

This equals a 64% savings on "Netflix's computing needs", which must be a good chunk of their budget.
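
Spelling out that arithmetic (a back-of-the-envelope check, just multiplying the two figures above):

  # 70% of total compute, 92% saved on that slice
  echo '0.70 * 0.92' | bc -l   # ~.64, i.e. roughly 64% of total compute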


What's wrong with the math? They saved a significant amount of money doing this, which is why it was worth doing.


I would think it would be a significant shareholder event to save 64% of their total computing budget. Their 2016 10-K

https://ir.netflix.com/secfiling.cfm?filingID=1628280-17-496...

says their 3rd party cloud computing costs increased by $23M in 2015 and a further $27M in 2016. More importantly, their total technology and development cost increased from $650M in 2015 to $850M in 2016. That includes engineers, so it's a bit tricky to figure out what their costs really are, but they didn't go down appreciably in 2016, so this effect had to be in 2017. Looking at their first three 10-Qs shows their Technology and Development budget keeps going up. I don't see how a streaming service could save 64% of its total computing needs without it showing up on any of its SEC filings.

In 2016 they completed a seven year move of all their computing infrastructure to AWS. When they sized the system, they bought reserved instances (probably at a significant discount). From the math in this article, they bought 64% more than they needed if they just discovered they could save 64% of their EC2 budget. That seems like a big math error.


I'm a former insider, so I can't get into details, but if you look you'll see that almost the entirety of their cost is content acquisition. The next biggest cost is salary.

Servers barely even register as far as costs go, and also, it's a 92% savings over what the cost would have been without the system, not an absolute 92%. The farm keeps growing all the time as movies become higher resolution and more encodings are supported.


Technology and Development (T&D) is directly related to computing, not content acquisition or sales. Computing costs are significant in this category, since they point them out explicitly in the notes. 64% savings would show up, since we know that 3rd party cloud costs increased by $50M over the last two years. For the number of instances they must be using, the costs must be running well over $100M.


The number you don't know is how much the encoding workload grew in the time it took them to develop the system.

Let's use your numbers. Say that two years ago computing costs were $50M, and encoding was $46M of that. Now say that their costs are currently $100M, but the encoding workload grew 6X. Under the old system, that would have cost $276M, but under the new system it is only $22M. That would be a 92% savings, and would totally be in line given that in the last few years they have drastically increased their machine learning output, which would have overtaken encoding work.
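
To put those made-up numbers in one place (a sketch, nothing from actual filings):

  # hypothetical figures from the paragraph above
  old_encoding=46                          # $M of encoding spend two years ago
  growth=6                                 # encoding workload grew 6x
  old_system=$(( old_encoding * growth ))  # $276M if nothing had changed
  new_system=22                            # $M under the new farm
  echo "scale=4; ($old_system - $new_system) / $old_system" | bc   # ~.92, i.e. the 92% savings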


1. Keep in mind "potential savings" is different from "actual savings".

If the EC2 hosts have already been reserved, then Netflix's "cloud costs" would be relatively stable. What you might see is a reduced rate of increase y/y.

2. Keep in mind Amdahl's Law (or rather a slight variant). The absolute reduction has to be weighed against the percentage of resource usage that has been reduced.

(All numbers are made up.) If Netflix was previously paying 3MM for encoding and is now paying 0MM, then compared to an annual 30MM on streaming with ~25% y/y growth, you wouldn't notice the missing 3MM unless it was pointed out.
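
A quick sketch of why the saving disappears in the totals (same made-up numbers):

  # hypothetical figures from the comment above
  streaming=30; encoding=3                            # $MM last year
  echo "last year: $(( streaming + encoding ))"       # 33
  echo "this year: $(echo "$streaming * 1.25" | bc)"  # 37.50, even with encoding at 0
  # total spend still rises y/y, so the missing 3MM never stands out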


Their capacity keeps growing, so the savings are relative to what the same amount of computing would have cost, but overall the total amount of computing they do keeps going up as shows move from being shot in 4K to 8K and they acquire more and more shows to broadcast.


It’s 64% they didn’t need most of the time, but would need during peak usage. The “spot market” uses this buffer capacity only when it isn’t serving customers, preventing slowdowns and outages.


In addition to what everyone else has said, I think the wording is quite important. My reading is that encoding takes up 70% of their CPU hours. A 92% saving on that is obviously very good, but there are a lot of other ways you can spend money on AWS.


In the HPC world, there are many of these: SLURM, Torque, PBS.


Don't forget Globus and Condor!


How could you overlook (son of) Grid Engine?


Excellent article, thanks.

