
The problem with that logic is that it's clear now that Tesla is run by someone who has directly said that we can wait 50-100 years to set up a sustainable energy economy, and who is currently working against federal scientific bodies.

Combined with the fact that there are many EVs on the market of similar quality, it is clear you do not need to purchase from Tesla.

So you can avoid enriching someone who is in direct opposition to climate action while also making your own move to EVs. You can have your cake and eat it too.

I also take issue with "petty grievance". I do not know and will never meet Musk, there is no personal feud here.

I simply do not believe he is doing good things for the world, and the quality of the product offered does not force my hand in any way, so I will simply choose to take my money elsewhere. No petty grievance involved.


Revolutionized the battery industries and the electric car industries.

How much faster than 50 years do you think the global economy can be turned sustainable?

And what is your source for Elon working against federal scientific bodies?


I think the idea is that this keyword list will be expanded over time to include climate change or some similar topic - uncontroversial scientifically, but controversial socially/religiously - in a way that will reduce the USA's scientific output in areas of genuine global competition. E.g. climate change research to plan locations for future farming enterprises.

We shall see if true.


China doesn't care about climate change either.


They are building vast amounts of solar, wind, hydro, and nuclear power, and for fossil fuels they can't yet get rid of they are replacing old equipment with newer more efficient equipment.

New car sales there are over 50% EV or PHEV, up from 7% a few years ago. They currently have the 4th highest percent of EVs in use at 7.6% in 2023, behind Netherlands at 8.3%, Sweden at 11.0%, and Norway at 29.0%. The US is at 2.1%.


I'd be surprised if they weren't using climate change data in their modelling/planning.

Whether you are trying to reduce emissions or ignoring them is a separate concern from finding appropriate projected climates for agriculture etc.


They care about building solar panels, etc…


I know it's overused, but I do find myself saying YAGNI to my junior devs more and more often, as I find they go off on a quest for the perfect abstraction and spend days yak shaving as a result.


Yes! I work with many folks objectively way younger and smarter than me. The two bad habits I try to break them of are abstractions and what ifs.

They spend so much time chasing perfection that it negatively affects their output. Multiple times a day I find myself saying 'is that a realistic problem for our use case?'

I don't blame them, it's admirable. But I feel like we need to teach YAGNI. These days I feel like a saboteur, polluting our codebase with suboptimal solutions.

It's weird because my own career was different. I was a code spammer who learned to wrangle it into something more thoughtful. But I'm dealing with overly thoughtful folks I'm trying to get to spam more code out, so to speak.


I’ve had the opposite experience before. As a young developer, there were a number of times where I advocated for doing something “the right way” instead of “the good enough way”, was overruled by seniors, and then later I had to fix a bug by doing it “the right way” like I’d wanted to in the first place.

Doing it the right way from the start would have saved so much time.


This thread is a great illustration of the reality that there are no hard rules, judgement matters, and we don't always get things right.

I'm pretty long-in-the-tooth and feel like I've gone through 3 stages in my career:

1. Junior dev where everything was new, and did "the simplest thing that could possibly work" because I wasn't capable of anything else (I was barely capable of the simple thing).

2. Mid-experience, where I'd learned the basics and thought I knew everything. This is probably where I wrote my worst code: over-abstracted, using every cool language/library feature I knew, justified on the basis of "yeah, but it's reusable and will solve lots of stuff in future even though I don't know what it is yet".

3. Older and hopefully a bit wiser. A visceral rejection of speculative reuse as a justification for solving anything beyond the current problem. Much more focus on really understanding the underlying problem that actually needs solving: less interest in the latest and greatest technology to do that with, and a much larger appreciation of "boring technology" (aka stuff that's proven and reliable).

The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time. There are judgements all the way through that: sometimes deciding to invest in more foundational code, but by default sticking to YAGNI. Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers.

I still have a deep fascination with exploring and understanding new tech developments and techniques. I just have a much higher bar to adopting them for production use.


We all go through that cycle. I think the key is to get yourself through that "complex = good" phase as quickly as possible so you do the least damage and don't end up in charge of projects while you're in it. Get your "Second System" (as Brooks[1] put it) out of the way as quick as you can, and move on to the more focused, wise phase.

Don't let yourself fester in phase 2 and become (as Joel put it) an Architecture Astronaut[2].

1: https://en.wikipedia.org/wiki/Second-system_effect

2: https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


Heh, I've read [2] before but another reading just now had this passage stand out:

> Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!

> I’m not saying there’s anything wrong with these architectures… by no means. They are quite good architectures. What bugs me is the stupendous amount of millennial hype that surrounds them. Remember the Microsoft Dot Net white paper?

Nearly word-for-word the same thing could be said about JS frameworks less than 10 years ago.


Both React and Vue are more than 10 years old at this point. Both are older than jQuery was when they were released, and both have a better backward-compatibility story. The only two real competitors aren't that far behind. It's about time for this crappy frontend meme to die.

Even SOAP didn't really live that long before it started getting abandoned en masse for REST.

As someone who was there in the "last 12 months" Joel mentions, what happened in enterprise is like a different planet altogether. Some of this technology had a completely different level of complexity that to this day I am not able to grasp, and the hype was totally unwarranted, unlike actual useful tech like React and Vue (or, out of that list, Java and .NET).


> Some of this technology had a completely different level of complexity that to this day I am not able to grasp

Enterprise JavaBeans mentioned?


That's another great example!


> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.

I think this takes a kind of humility you can't teach. At least it did for me. To learn this lesson I had to experience in reality what it's actually like to work on software where I'd piled up a bunch of clever ideas and "general solutions". After doing this enough times I realized that there are very few general solutions to real problems, and likely I'm not smart enough to game them out ahead of time, so better to focus on things I can actually control.


> Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers

Also later in my career, I now know: change begets change.

That big piece of new code that “fixes everything” will have bugs that will only be discovered by users, and stability is achieved over time through small, targeted fixes.


> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.

Thank you for putting so eloquently my own fumbling thoughts. Perfect explanation.


Here is an unwanted senior tip: in many consulting projects, without “the good enough way” first, there isn't anything left for doing “the right way” later on.


Why inflict that thinking on environments that aren’t consulting projects if you don’t have to? That kind of thinking is a big contributor to the lack of trust in consultants to do good work that is in the client’s best interests rather than the consultants’. We don’t need employers to start seeing actual employees in the same way too.


The important bit is figuring out whether the times where "the right way" would have helped outweigh the time saved by defaulting to "good enough".

There are always exceptions, but there are typically order-of-magnitude differences between globally doing "the right thing" vs "good enough" and going back to fix the few cases where "good enough" wasn't actually good enough.


Only long experience can help you figure this out. All projects should have at least 20% of their developers with more than 10 years at the organization, so they have the background context to figure out what you will really need. You then need at least another 30% who are intended to be long-term employees but have less than 10 years. In turn, that means never more than 50% of your project should be short-term contractors. Nothing wrong with short-term contractors - they can often write code faster than the long-term employees (who end up spending a lot more time in meetings) - but their lack of context means they can't make those decisions correctly and so need to ask (in turn slowing down the long-term employees even more).

If you are on a true green-field project - one your organization has never done before - good luck. Do the best you can, but beware that you will regret a lot. Even if you have those long-term employees you will do things you regret - just not as much.


I don’t like working in teams where some people have been there for much longer than everyone else.

It’s very difficult to get opportunities for growth. Most of the challenging work is given to the seniors, because it needs to be done as fast as possible, and it’s faster in the short term for them to do it than it would be for you to do with their help.

It’s very difficult for anyone else to build credibility with stakeholders. The stakeholders always want a second opinion from the veterans, and don’t trust you to have already sought that opinion before proceeding, if you thought it was necessary to do so (no matter how many times you demonstrate that you do this). Even if the senior agrees with you, the stakeholder’s perception isn’t that you are competent, it’s that you were able to come to the right conclusion only because the senior has helped you.


> then later I had to fix a bug

How much later? Is it possible that by delivering sooner your team was able to gain insight and/or provide value sooner? That matters!


In many cases, we didn’t deliver sooner than we could have, because my solution had roughly equivalent implementation costs to the solution that was chosen instead. In some cases the bug was discovered before we’d even delivered the feature to the customers at all.


Ah, but that’s assuming the ‘right way’ path went perfectly and didn’t over-engineer anything. In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.

Having witnessed first-hand over-engineering waste millions of dollars and years of time, on more than one occasion, by people advocating for the ‘right way’, I think tallying the time wasted upgrading an under-engineered solution is highly error prone, and that we need to assume that some percentage of time we’ll need to redo things the right way, and that it’s not actually a waste of time, but a cost that needs to be paid in search of whether the “right way” solution is actually called for, since it’s often not. The waste might be the lesser waste compared to something much worse, and it’s not generally possible to do the exact right amount of engineering from the start.

Someone here on HN clued me into the counter acronym to DRY, which is WET: write everything twice (or thrice) so the 2nd or 3rd time will be “right”. The first time isn’t waste, it’s necessary learning. This was also famously advocated by Fred Brooks: “Play to Throw One Away” https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/plan...


> In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.

The “right way” examples I’m thinking of weren’t over-engineering some abstraction that probably wasn’t needed. Picture replacing a long procedural implementation, filled with many separate deprecated methods, with a newer method that already existed and already had test coverage proving it met all of the requirements, rather than cramming another few lines into the middle of the old implementation that had no tests. After all, +5 -2 without any test coverage is obviously better than +1 -200 with full test coverage, because 3 is much smaller than 199.


You make a strong case, and you were probably right. It’s always hard to know in a discussion where we don’t have the time and space to share all the details. There’s a pretty big difference between implementing a right way from scratch and using an existing right way that already has test coverage, so that’s an important detail, thank you for the context.

Were there reasons the senior devs objected that you haven’t shared? I have to assume the senior devs had a specific reason or two in each case that wasn’t obviously wrong or idiotic, because it’s quite common for juniors to feel strongly about something in the code without always being able to see the larger team context, or sometimes to discount or disbelieve the objections. I was there too and have similar stories to you, and nowadays sometimes I manage junior devs who think I’m causing them to waste time.

I’m just saying in general it’s healthy to assume and expect imperfect use of time no matter what, and to assume, even when you feel strongly, that the level of abstraction you’re using probably isn’t right. By the Brooks adage, the way your story went down is how some people plan for it to work up front, and if you’d expected to do it twice, then it wouldn’t seem as wasteful, right?


Everything in moderation, even moderation.


This isn't meant to be taken too literally or objectively, but I view YAGNI as almost a meta principle with respect to the other popular ones. It's like an admission that you won't always get them right, so in the words of Bukowski, "don't try".


Agreed. I’ve been trying to dial in a rule of thumb:

If you aren’t using the abstraction on 3 cases when you build it, it’s too early.

Even two turns into a higher bar than I expected.


Your documentation will tell you when you need an abstraction. Where there is something relevant to document, there is a relevant abstraction. If it's not worth documenting, it is not worth abstracting. Of course, the hard part is determining what is actually relevant to document.

The good news is that programmers generally hate writing documentation and will avoid it to the greatest extent possible, so if one is able to overcome that friction to start writing documentation, it is probably worthwhile.

Thus we can sum the rule of thumb up to: If you have already started writing documentation for something, you are ready for an abstraction in your code.


It's more case by case for me. A magic number should get a named constant on its first use. That's an abstraction.


C++ programmers decided against NULL and, for well over a decade, recommended using a plain 0. It was only recently that they came up with a new name: nullptr. Sigh.


That had to do with the way NULL was defined, and the implications of that. The implication carried over from C was that NULL would always be a null pointer as opposed to 0, but in practice the standard defined it simply as 0 - because C-style (void*)0 wasn't compatible with all pointer types anymore - so stuff like:

   void foo(void*);

   void foo(int); 

   foo(NULL);
would resolve to foo(int), which is very much contrary to expectations for a null pointer; and worse yet, the wrong call happens silently. With foo(0) that behavior is clearer, so that was the justification to prefer it.

On the other hand, if you accept the fact that NULL is really just an alias for 0 and not specifically a null pointer, then it has no semantic meaning as a named constant (you're literally just spelling the numeric value with words instead of digits!), and then it's about as useful as #define ONE 1

And at the same time, that was the only definition of NULL that was backwards compatible with C, so they couldn't just redefine it. It had to be a new thing like nullptr.

It is very unfortunate that nullptr didn't ship in C++98, but then again that was hardly the biggest wart in the language at the time...


As with almost anything in IT: if you are good at using a hammer, you treat everything like a nail.


if you're good at using a hammer, the nail will be hammered in perfectly and without faults.

The only problem is if you're bad at using the hammer, and you only know how to use a hammer.


You certainly can legislate freedoms. The biggest news of last year was enabling legislating away access to abortion in many states for example.

Internet freedom is harder to legislate, but with enough backing you can legislate things in a way to make it difficult for the common person to access certain websites, which is certainly impactful.

It hasn't happened yet but there is nothing to say it wouldn't happen with enough time and backing of certain politicians.


I think that rather avoids one of the best parts of hacker news - getting the vision/justifications from the creator directly.


While most people are no doubt doing it to save money, I have bought Switch games and then emulated them just to try graphics at 4K 60fps that aren't possible on the console.

That's the part I'm sad about - that we won't get emulated games that look and feel better due to faster hardware in the future. Money isn't an issue for me.


I think they were referring to CIFS. Not the innocence project.


Exactly.

"Kate Judson is a lawyer who often deals with crimes that did not occur. As the executive director of the Wisconsin-based Center for Integrity in Forensic Sciences (CIFS), her job is to examine ostensible scientific evidence to see whether it backs up prosecutors' claims.

"Some people who died were classified as victims of homicide when they were really the victim of illness, or accident, or suicide, or medical error—that kind of thing," says Judson. "We had a case of a family that lost their child. The caregiver was accused of attacking her. It was later discovered, based on new medical evidence, that the child had been really ill with a disease she was probably born with."

Evidence can't bring a child back, obviously. But it can get an innocent person out of jail. And it can give a grieving family some peace of mind. To learn that your child "was held and comforted in their last moments, instead of attacked," says Judson, "would be important to know."

When the center was founded four years ago, Judson left her job as a public defender to become its first employee. Now a staff of four works to keep bad science out of the courtroom. "


To be fair, it's definitely true that shows get a better chance of funding based on what is popular and how they fit into that perceived potential popularity.

So TBBT is a part of the overall zeitgeist, an example of a show of its type that was very successful.

You can bet that if a sci-fi show were number 1, other sci-fi shows would be getting made and better funded.


What is sad is that popular comedy shows on Netflix and elsewhere are old network shows.

America doesn't know how to make comedy anymore. I blame Twitter.


The UK has been going through a comedy golden age over the last few decades. I attribute it to the BBC doing an excellent job of giving chances to younger comedians on panel shows like Mock the Week, QI, 8 Out of 10 Cats, I'm Sorry I Haven't a Clue, etc., as well as the prominence of the Edinburgh Fringe.

I think this sort of talent development really is just about giving chances to new folks; it's the risk-averse large networks only re-hiring the same older folks that stifle an artistic sector, be it movies, shows, music, games, comedy, etc.


I don't see anything on Netflix that is trying to compete with TBBT, which is a CBS show firmly targeted at Middle America.

If anything, Netflix comedies are the opposite of TBBT, there's no audience laughter, it's all single-camera like Arrested Development, The Office and other 2000s-era neo-sitcoms.


Doping is probably still common, but the more endurance- or speed-based the sport, the more drugs help.

Soccer values attributes drugs exaggerate - but it has a skill component that is more important vs cycling. Soccer also has breaks and substitutes and ways to slow the game down to close the gap.

I'm not ignorant of the tactics and technique required in cycling, but it's a smaller part of the sport compared to tactics and skill in football.

So it's not as important to dope. But yes, still happens.


As I understand it, the main benefit of PEDs is improved recovery times. Regardless of the extent to which it affects the first race / game / training session, it permits your body to get back to peak condition more rapidly. The incremental gains from spending more time in the optimal performance window quickly compound.


There is a huge universe of PEDs, legal and otherwise. Some improve recovery times. Some accelerate muscle growth. Some, like EPO, directly boost aerobic capacity.


PEDs aren't just for endurance. Prescription painkillers, stimulants and corticosteroids are easy to get prescriptions for or even TUEs. The typical player in the NHL gets tested 2-3 times a year, and almost never in the off season. Some sports barely test more than once a year. In cycling it's closer to once a month for an average rider but it can get a lot higher if you're successful as most orgs will test their top 3 after every stage or race.


> In cycling it's closer to once a month for an average rider but it can get a lot higher

Extreme example (https://www.cyclingnews.com/news/jumbo-visma-and-uae-team-em...):

“Jonas Vingegaard has had no fewer than four blood tests in the last 48 hours. We are happy to participate in this.”


No, the cardio/fitness aspect of soccer absolutely cannot be overlooked, especially at the top levels where players are at the margin and need that edge, and where the amount of competition and pay and global attention is two orders of magnitude higher than cycling. And you have stuff like EPO, which basically erases recovery time, but which you'll never catch unless you test daily.

>> Soccer also has breaks and substitutes

Kiiiinda, but not really at the pro level.

