> *I'm not seeing what makes SpaceX government funded beyond just that it provides services to the government*

Take away all of SpaceX's government contracts. You imagine SpaceX would still be in business?

As you said, every launch provider is basically dependent on government contracts to stay in business because the government is the only entity that has a legitimate need for launch capability such that it’s willing to pay for its development. There are no sufficiently profitable private contracts out there to sustain a launch provider.


Do you have any evidence for any of your claims beyond not liking the idiot that owns the company?



It’s true of all private launch providers, not just SpaceX.


Do you disagree that cultivating the launch provider industry in this way has strategic value?


Nope, but that wasn’t what I was responding to, either.


Yes, this. And the reason congressional appropriations plummeted was that no one saw any need to maintain such high expenditures. There hasn't been an actually coherent vision of what NASA is supposed to be working towards since the Apollo Program. Everything after that has been lurching from one project to another, justified by short-term possibility rather than commitment to a longer-term goal the agency is supposed to be achieving. Just look at Shuttle. It accomplished some nice things, but it was always a dead end. Everybody in NASA knew it. ISS: accomplished some nice things, dead end. Sure, you can talk about how these were steps along the way to learning about long-term human habitation in space, but we've never had a coherent vision for it that everyone is aligned with. What they really were: make-work projects that were at least short-term justifiable, executed in order to preserve NASA's capacity to do anything at all.


Nah, that’s false. Miniaturization was already underway before the Space Race. The space program absolutely benefited from it, yes. But NASA wasn’t at the forefront of those developments.


I was talking about rapid miniaturization, not just miniaturization in general, which I agree was underway before any space development.

NASA literally had departments and budgets dedicated to miniaturization.


I’ll give you an example: the technology in the Instrument Unit on the Saturn V, which was the computer that controlled the Saturn V during launch, was largely derived from System/360. By technology here I mean things like the Unit Logic Devices (ULDs) out of which the logic boards in the Launch Vehicle Digital Computer (LVDC) were made. No surprise, I suppose, given that it was contracted to IBM’s Federal Systems Division.


Sure, but compare the density of an S/360 mainframe with that of the Apollo Guidance Computer - they pumped a lot of money into integrated circuits just as they were becoming viable, in order to hit their size, mass, and power targets.

Sure, this would likely have happened anyway, but possibly later, with all the related knock-on effects.


> Sure, this would likely have happened anyway, but possibly later, with all the related knock-on effects.

What are we missing because they did that, though? Or what came later? There's no way to answer this. It's easy to see what happened because of an effort, but not what you didn't get (or got later) because they focused on something else.


Minuteman III perhaps.


Dude’s been arguing with people since at least 2012 that systemd is a good thing. It took me less than a minute to figure that out by searching his blog.


It was very odd to start a “review” of a book from 1992 by criticizing it for lacking all the things you think a book published in 2025 should have. And then to search GitHub for code related to it, as if TFA expects it to be something widely read as an introduction. TFA never considers who the target audience for the book was—in 1992, hardly a year when books about compilation techniques were looking to reach a wider audience (like Nystrom's Crafting Interpreters, say).


Beowulf really doesn’t stand up to the Marvel franchise.

Alternatively, OP might have a bit of learning still to do.


It’s almost like what really matters when something goes wrong is who responds to the incident. There are individual human beings who genuinely give a shit about customer service, and will move heaven and earth in order to help customers. And then there are other individual human beings who want to do as little as possible, when confronted with an issue, and blaming the customer is often the shortest route to minimal work.

It really doesn’t matter what the organization’s policies and procedures are. At most, an organization’s culture may affect this, by nudging marginal cases to align with the culture. But in the end, it always comes down to individual human beings.


No, because the first one isn't talking about writing documentation. It's talking about knowledge discovery as a learned skill that eroded when web searching replaced how knowledge used to be sought. They actually say that even in the new-fangled domain of web searching, which you would think web natives would be better at, it was people who had learned the skills and techniques of knowledge discovery pre-web who were better at finding what they were looking for. Now, why they think that is the case is a bit harder to grok, having to do with their object-oriented (sorry, sorry) view of understanding/knowledge.

Contrast that with the second quote. Good documentation could be in a dusty book in the library or in a SPA. What makes the documentation good isn’t, however, related to people’s ability to navigate information spaces.


> What makes the documentation good isn’t, however, related to people’s ability to navigate information spaces.

Then what's the point? If nobody can use the documentation properly, then the term "good documentation" is meaningless.


I think the article is saying that good documentation is objective, and is not defined by ease of use. You will ingest this difficult documentation and you will like it, because it is good for you.

You might reasonably ask "in what way".

> this is how documentation is, because this arrangement is part of its integrity, and this is how you must learn to use it and work with it.

The word "integrity" comes up six times. Something about integrity.


Yeah, this isn't something you put your name on. It's something the company pays you to do, to make the product better. Good documentation significantly improves a product. Which means making that information accessible to web natives.

Luckily, unlike web natives, LLMs have read lots of documentation cover to cover. Likely a good way to teach LLMs about your product is to write good documentation.


CVE statistics are also pretty hard to interpret in light of the kernel team's willingness to assign CVE numbers to most any bugfix.


This has to do with their policy on assigning CVE numbers, which is that pretty much any bugfix might be security-related because it’s the kernel, so it doesn’t take much to get a number assigned. See https://docs.kernel.org/process/cve.html.


I seem to recall that Linus Torvalds is of the opinion that he doesn't treat security bugs much differently than regular kernel bugs. Perhaps this is why?


Yes, but it became more than just Linus and Greg's view that couldn't be overcome by outside argument: it became formal kernel policy once they became a CVE Numbering Authority.


Many other statutory schemes were enacted afterwards that placed additional restrictions on the tariff authority Congress gave the President. You can’t read one section of one statute and just assume it alone applies. Just look at the variety of crap you can find in Titles 19 and 50 having to do with trade policy.

