I wanted that for the Switch 1, and then Nintendo started selling a Lite with no dock instead. Probably most other people use their Switches differently than I do. I guess you could get a Switch with a broken screen, maybe for $100 less.
The elephant in the room here is that, like so many issues, AI has become political for some people. In this case your "AI skeptic friends" are your "defund the police" and "abolish ICE" friends. For 95% of people AI is just a new technology to be loved and feared.
There are some real technical debates to be had about the capabilities of LLMs and agents but that's not what's driving this "AI skepticism" at all. The "AI skeptics" decided AI is bad in advance and then came up with reasons, the weakest of which is the claim that it's all or mostly hype.
> The elephant in the room here is that, like so many issues, AI has become political for some people. In this case your "AI skeptic friends" are your "defund the police" and "abolish ICE" friends. For 95% of people AI is just a new technology to be loved and feared.
I really don’t think there is this “elephant” at all. If you reduce critique, skepticism and fear people might have to political slogans of mainly fringe online activists, I wonder how objective your assessment of those attitudes can actually be.
Fear of AI is universal but skepticism is not. It's a (primarily) far left meme that AI is just hype and should be compared to scams like cryptocurrencies.
I linked you research showing the opposite of what you claim. I'd recommend not looking at topics through this culture war lens; it tends to corrupt people's ability to be objective and thoughtful.
Maybe you could consider that some of us are not AI skeptics "in advance" and have already witnessed its downsides, which in some particular conditions outweigh its upsides.
To me anything less than true level 4 should remain with the driver.
I also believe that anything marketed as FSD should be held liable and scrutinized as a level 4 system, because when the public hears FSD, they naturally expect the abilities of level 4, arguably even level 5.
Until the car requests intervention and the timer runs out, levels 3 and 4 are supposed to have the same behavior. If that process has not happened, why should the driver's level of responsibility be any different?
(Though a consequence is that levels 3 and 4 are very close together in difficulty. We might not see many level 3 cars.)
Volvo was visionary and dismissed Level 3 in about 2014 for being too dangerous. Basically the car drives until it doesn't, and you may suddenly die because the time to grasp the situation and react is too short. Level 3 was purely for managers to claim it would be a linear progression, whereas it is pretty much THE gorge of automated driving. If you look at the SAE table it's just a little blue wart in a green column, but it's a lethal one.
> because the time to grasp the situation and react is too short
The time is up to the manufacturer, isn't it?
Mercedes uses 10 seconds right now and that seems pretty good to me. At that point I know it can't be too dire or the car would have already emergency stopped.
The time depends on how quickly an event unfolds in traffic. You can't guarantee 10s notice for an event that is imminent in 2s and the system might not be able to handle or can't detect.
The car could become temporarily "blind" for some reason with just 4-5s to brake before a collision. It's enough for a human driver even considering reaction time. But it's impossible to guarantee a minimum time without the ability to predict every issue that will happen on the road.
If there isn't a guaranteed minimum time, then it's not level 3, it's advanced level 2. Level 3 needs to be able to handle very rapid events by itself.
If it becomes "blind" because of an unexpected total system failure, that's an exception to the guarantee just like your transmission suddenly exploding is an exception. It had better be extremely rare. If it happens regularly then it needs a recall.
When dealing with unpredictable real life events there are no guarantees, unless we're considering the many carveouts to that definition from a legal perspective. A blind car (fluke weather, blown fuse, SW glitch, trolley problem) can no longer guarantee anything. Giving the driver 10s, or assuming the worst and braking hard could equally cause a crash.
> your transmission suddenly exploding is an exception
As long as the brakes or steering work a driver could still avoid a crash. The driver having a stroke is closer to a blind car.
> When dealing with unpredictable real life events there are no guarantees
The guarantee here is that the human isn't obligated to intervene for a moment.
If you call that guarantee impossible, then what about level 4 cars? They guarantee that the human isn't obligated to intervene ever. Are level 4 cars impossible?
Is this a wording issue? What would you say level 4 cars promise/provide? Level 3 cars need to promise/provide the same thing for a limited time. And that time has to be long enough to do a proper transfer of attention.
> The guarantee here is that the human isn't obligated to intervene for a moment.
Ah, understood. So the guarantee is that the driver is not legally responsible for anything that happens in those 10s. I always took that as a guarantee of safety rather than from legal consequences.
It's more about safety than legality. But with the understanding that nothing is perfect.
The guarantee is that you will be very safe and you can go ahead and look away from the road and pay attention to other things. But at most this is as good as a level 4 or 5 car, not an impossibly perfect car.
Yes but there is a minimum time (if a bit under-specified)
> "At Level 3, an ADS is capable of continuing to perform the DDT (Dynamic Driving Task) for at least several seconds after providing the fallback-ready user with a request to intervene."
I feel like that lack of standardization is part of the problem. Some manufacturers may pick different times to avoid nuisance braking, but that translates to higher risk to the driver. I’d like to see some core parameters like this standardized (whether by an industry body or regulator).
I've had several different cars from a few different manufacturers with different levels of ADAS systems and have used them on many long road trips and short trips. I haven't used any Tesla ADAS system for very long though.
The highest level of ADAS system I use regularly has facial attentiveness tracking. If you spend too much time drinking coffee or even looking out the sides of the car it will alert you and eventually turn off. So you're not spending a ton of time drinking coffee or reading emails.
It's really nice having the car just want to stay in the center of the lane and keep the following distance all on its own. It's less fatiguing on your hands and arms having the car feel like it's in a groove following all the curves for you instead of resisting your input all the time for hours and hours. It's incredibly nice not having to switch between the brake and the gas over and over in stop and go traffic. Instead, the only thing I need to focus on are the drivers around me and be ready to brake.
I've driven between Houston, Dallas, and Austin dozens of times with ADAS systems and another dozen or so times with only basic cruise control. It's way nicer when the only time I have to touch the gas and brake are getting on and off the highways. I'm considerably more relaxed and less exhausted getting to my destination.
Let's assume all these options are either the same price or an immaterial difference to the price of your next car. If you had an option for a car with basic cruise control or no cruise control, which one would you take? If the option was basic cruise or adaptive cruise which kept pace with traffic and operated in stop and go conditions, which would you choose?
You're right, cruise control or whatever you want to call it is definitely great on long journeys on the highway. Maybe car makers should concentrate on those systems?
They are, other than Tesla. GM's SuperCruise, Ford's BlueCruise, and Mercedes DrivePilot are the few actually hands-free driving systems made by the legacy automotive companies and they're all largely locked to only fully operate on mapped and approved highways.
I actually think it's worse than driving yourself. Humans are OK at doing a repetitive task non-stop. They're terrible at sitting still doing nothing, waiting to quickly spring into action. They fall asleep or their mind wanders. This is something a computer is good at, yet we've got it reversed. The car drives along doing mundane things but then hands it over to a groggy human right when things really get hairy.
And then there's the skill atrophy. How do you learn to perform in stressful situations? By building up confidence and experience with constant repetition in more mundane ones, which this robs you of.
And the part of my sentence that you cut off was all about the circumstances of intervention.
Level 2 requires the driver to choose whether to intervene at all times. This is an unreasonable task for humans.
Level 3 puts the car in charge of when intervention is needed, and even once it wants intervention it still has to maintain safe control for several seconds as part of the system spec.
Level 4 puts the car in charge of when intervention is wanted, but you can refuse to intervene and it has to be able to park itself.
So I will double down on my claim. Until the car requests intervention AND the timer runs out, level 3 and 4 are the same. They require the same abilities out of the car. And that section of time, between wanting intervention and getting intervention, is the hardest part of level 3 driving by far. If you can solve that, you're 90% of the way to level 4.
A level 3 car has to be able to handle emergencies several seconds long, and turning it into level 4 is mostly adding the ability to park on the shoulder after you get out of the initial emergency.
The gap between 4 and 5 is a bunch bigger. A level 4 car can refuse to drive based on weather, or location, or type of road, or presence of construction, or basically anything it finds mildly confusing. 5 can't.
I edited a bit for clarity, but also I'll append a thought experiment as an extra edit:
A level 3 car with an hours-long driver intervention timer is basically identical to a level 4 car.
If you have a 0 second intervention timer, you're barely better than a level 2 car.
How long does the timer have to be before developing your level 3 system is almost as difficult as a level 4 system?
I agree with your thought experiments and also agree that overall it's a valid, technically accurate interpretation. So, this may be where we agree to disagree.
I still stand by level 3 != level 4 in terms of real world liability.
Level 3 allows too much wiggle room and sloppiness to be able to legally shift liability away from the driver. At that point you're playing with that "intervention period" length: manufacturers claiming Level 3 will want to lower it as much as possible, regulators to raise it. To me, Level 3 simply shouldn't exist.
Only at Level 4 is the expectation, without a doubt, that the machine is in control. A person in the driver's seat is optional because the steering wheel and pedals are as well. When people bought "Full Self Driving" they seriously believed it meant "when can I go to sleep?" levels of ability, which always put the expectation at Level 4.
Saying level 3 shouldn't exist makes sense. But I don't think the liability gets very blurry as long as the intervention period is properly documented.
It looked like the Mercedes system is 10 seconds which seems like plenty to me.
And while it would be nice to sleep I'll be pretty happy just looking away from the road.
"A lie", FSD as it stands right now is a lie. A few cars might be able to drive a few geofenced places, but no car anywhere can drive anywhere, even with perfect weather and visibility and I'd even wager no traffic or even no other cars at all. Our Subaru gives up steering if there's no olcar in front, on "suburban" and rural roads about 35% of the time. More on some roads, less on others. I cannot determine, while driving, the cause for half of the self driving disable occurrences. No fog line and a broken center for an intersection on a 1 lane road it'll shut off nearly every time. It's surprising when it doesn't.
I've clocked nearly a half million miles on the road (I'll be there sometime in the next 9 months), and the range of technical ability you need to drive in just the US, no, scratch that, any given state or even county varies so much and potentially so often that FSD is just a lie to sell cars. I'm willing to upload a full hour drive touring a few parishes around here in my quite heavy Lexus, front and rear cameras, just to prove my point. I'd do it in the Subaru but the dashcam isn't very good, and also its lineage is rally so it exaggerates how poor the roads are. My YouTube has dashcam footage of drives that I'm willing to bet no automated system could handle, even if it claimed to be "level 5". Driving after a storm or hurricane is another issue. I know the hazards in general and specifically for the areas I'd need to travel during or after an emergency. I cannot fathom the amount of storage and processing that would take, to have that for every location with roads. On board, in the car? Maybe in 20 years.
> I cannot fathom the amount of storage and processing that would take, to have that for every location with roads. On board, in the car? Maybe in 20 years.
Doing some napkin math, with 4 million miles of road in the US, if you wanted to store 1KB of data per meter of road, hundreds of data points, you'd only need 7TB for the entire database.
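A rough sketch of that napkin math (assuming ~4 million miles of road and a flat 1KB per meter):

```go
package main

import "fmt"

func main() {
	const miles = 4_000_000.0    // rough figure for total US road mileage
	const metersPerMile = 1609.34
	const bytesPerMeter = 1024.0 // the assumed 1KB of data per meter

	totalBytes := miles * metersPerMile * bytesPerMeter
	fmt.Printf("%.1f TB\n", totalBytes/1e12) // ~6.6 TB, i.e. on the order of 7TB
}
```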
And the processing to make it shouldn't be anything special, should it? Collection would be hard.
Currently that would probably cost ~$500 per car to implement based on retail pricing of 8TB SSDs. It would need to be updated constantly, too, with road closures, potholes, missing signage, construction. External GPS units like a TomTom had radio receivers in the power cord that tuned to traffic frequencies, if available, and could route you around closures, construction, and the like, so you'd need a nationwide network to handle this. Cellphone won't cut it. Starlink might, but regardless, you need to add that radio and accoutrements to the BOM for each car.
And I'm not talking about the processing of the dataset that gets put onto the 8TB SSD in the car; I am talking about the processing of the data on the 8TB SSD in the car while at speed.
Furthermore, I am fairly certain that it would take, on average, more than 1.6MB per mile to describe the road, road condition, hazards, etc. A shapefile of all roads in the US - the thing that gets one closer to knowing where the lanes are, how wide the shoulders are, etc. - is 616MB. And it's incomplete - I put in two roads near me with fairly unique names and neither is in the dataset. So your self-driving car using these GIS datasets won't know those roads.
I had an idea to put an Atomic Pi in my car, with two cameras. It has a Bosch 9-DOF sensor on the board; coupled with the cameras you can map road surface perturbations, hazards, and the like, which I believe will be much more than 1KB per meter, especially as you need "base" conditions, updates, and current conditions (reported by the cars in front of you, ideally).
The CSV GIS dataset looks like this:
And I ran, for example, `awk -F, '/PACIFIC COAST/ {sum += $4} END {print sum}' NTAD*.csv` and it spat out 79.04, which I think is a bit shorter than reality. Looks like the dataset I pulled is only "major roads" as well - but that doesn't explain 79.04 as the sum of lengths of all rows with "PACIFIC COAST" in them. It does show the total length of Interstate 10 as 3986.55, which is roughly double the actual length (2460 mi), so perhaps I'm just not understanding this dataset.
Anyhow 600+ MB for just that sort of information (plus shapes) for only a really quite small subset of roads in the US.
Anyhow, my thoughts are scattered, this input box is too small, and I'm not really arguing. Maybe it is possible, but it will raise the price thousands of dollars per auto, you need infrastructure (Starlink would work) to update the cars, and so on. I'm prepared to admit I am wrong, but your comment didn't move the needle for me.
If you want such constant updates that's tricky to distribute and hard to collect, but let's put that aside for a bit. I want to focus on the amount of data and how the car would use it. With $500 of SSD being nice and cheap.
> i am talking about the processing of the data on the 8TB SSD on the car while at speed.
I'm not worried about that. The actual driving takes such powerful computers that even if there was a petabyte of total data, the amount the car would have to process as it moves would be a trickle in comparison to what it's already doing. Max 50KB per 10 milliseconds. And obviously the data would be sorted by location, so there's very little extra processing required.
But you tell me, how many data points do you think you need per meter of road?
I really don't think you need millimeter-level surface perturbations all the way across. Mapping the precise edges of the road and lanes should only need dozens of data points, 4 bytes each. And then you can throw a few more dozen at points inside the lanes to flesh it out. You can throw a hundred data points at each pothole without breaking a sweat. Measuring the surface texture in various ways and how it responds to weather is only going to take a handful of bytes per square meter, in a way that repeats a lot and is easy to compress.
That's an extremely inefficient format. Unnecessary object ids, repeating metadata over and over, way too many decimal places, and all stored as text.
But even then, your database is so tiny compared to the size I suggested that I don't think we can extrapolate anything useful. Even if we 4x it or whatever to compensate for a lack of rural roads.
Suffice to say that the 600MB just lets you draw the roads on a plane; it's like comparing an ASCII art drawing of the road (from .csv/.shp) to a digital still of the road (the amount of information you'd actually need). You absolutely cannot rely on "a couple of sensor [types]". I mentioned I have nearly a half million miles on the road. All of that prior experience influences my driving when I am driving someplace new. In that 8TB disk, you have to find a way to produce that "experience", except instead of my 0.5MM miles, you are talking about the aggregate "experience" of 0.5MM miles per road per unit of time (a day for some places like I-10 through Los Angeles, a month for others, maybe a year for some "rural" roads).
None of this has to do with vision or proprioception. It's knowing "every inch" of road. It's knowing how far I can leave the center of the lane if someone else crowds me or goes over the center divider, because the shoulder is soft through here, because logging trucks have been exiting the forest onto the highway. It's knowing what part of I-605 floods - not the whole thing, some lanes, some places, and "flood" means 2+ inches of water on the road surface; hitting it at speed makes a tidal wave flying into other lanes. If someone hits that in front of you, you're blind for a couple of seconds minimum. If we want semi trucks to be "FSD", they need to know, for the traffic and other conditions, how fast to go and what gear to be in to climb each hill, and then the hazards that are over the hill - the things a trucker would know. Where's the gravel bed on the more mountainous passes? Or more simply, what time of day neighborhoods are more likely to have people approaching or going through / out of intersections, blind or otherwise. How many "bytes" is that information, times every neighborhood? If many cars brake at the same place, there's probably a reason, and that needs to be either in the dataset or updated somehow if conditions change. Have you ever used Waze and had a report of something on the road or a cop parked somewhere, and it's nowhere to be seen? And that's updated much more frequently than the radio info on the GPS systems I referred to earlier. Some roads become impassable in the rain, some roads ice more readily.
If this were easy/simple/solved, Waymo et al. would be bragging about it, about the tech in their cars. Waymo (or the other one) specifically, because they cover less than 0.1% of road surfaces in the US, in some of the most maintained and heavily traveled corridors in the world. So, if anyone from a robotaxi company happens by and knows roughly how much storage is needed for <0.1% of the road surfaces in the US, then we could actually start to have this dialog in a meaningful way. Also, I am unsure how much coverage robotaxis actually have in their service area. A "grid system" of roads makes mapping and aggregate data "simple", for sure.
This reminded me a bit of the idea that somewhere in the US there's a database of every SMS sent to or from US cellular phones. "It's just text; it'll compress well" belies how much text there is there.
For reference, the map in my Lexus is ~8GB for the US. And that's just "shapes" and POI and knowing how the addressing works on each road. It doesn't know what lane I'm in, it doesn't track curves in the road effectively (the icon leaves the road while I'm driving quite often), and overpasses and the like confuse all GPS systems I've ever used - like in Dallas, TX, where it's 4 layers high with parallel roads stacked. Furthermore, just the road data on Google Maps for the nearest metro area to my house is 20MB. I have a recollection it goes real quick into hundreds of MB if you need to download maps for the swaths of areas where there is no cellphone reception, like areas in western Nevada. Given 20MB for my metro, that's 40GB of just road shapes and addresses for the US, which is much more than the 600MB incomplete GIS files I downloaded.
So we've moved from fencing over 600MB of "text" data to the actual data needed by a GPS to give directions, 8000MB. Your claim is that a mere 1000x more data is enough to autonomously self-drive anywhere in the US, at any time of day or year, etc...
You know who actually has this data and would know how big it is? Tesla.
The part of the computer that knows how to drive is completely separate from the 7TB database of the exact shape and location of every lane and edge and defect.
> knowing how far i can leave the center of the lane if someone else crowds me or goes over the center divider
Experience, not in the database.
> knowing what part of I-605 floods
> Where's the gravel bed on more mountainous passes?
That goes in the database but it's less than one byte per meter.
> How many "bytes" is that information, times every neighborhood?
I don't know why you would want that data - you should be wary of blind traffic at all times - but that's easy math. There are fewer than a million neighborhoods, and time-based activity levels for each neighborhood would be about a hundred bytes. So: less than 1 byte per meter and less than 100MB total.
> If this was easy/simple/solved, waymo et al would be bragging about it
This doesn't happen for two reasons. One, they are collecting orders of magnitude more data than road info; two, like I keep saying, the collection is extremely difficult, and I'm only defending the storage and use as being feasible.
> This reminded me a bit of the idea that somewhere in the US there's a database of every sms sent to or from US cellular phones. "it's just text; it'll compress well" - belies how much text there is, there.
Well we know how many meters of road there are. So it's basic multiplication.
I can tell you how many hard drives you need to store a trillion texts. It's five hard drives.
Google thinks the human race sends almost ten trillion text messages per year. So I guess you could store them all very easily? Why do you think it's not doable?
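For what it's worth, the arithmetic behind "five hard drives", assuming 140 bytes per message and ~30TB drives:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const messages = 1e12         // one trillion texts
	const bytesPerMessage = 140.0 // the classic SMS payload size
	const driveTB = 30.0          // large drives available at retail today

	totalTB := messages * bytesPerMessage / 1e12
	fmt.Printf("%.0f TB -> %.0f drives\n", totalTB, math.Ceil(totalTB/driveTB)) // 140 TB -> 5 drives
}
```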
> Your claim is that a mere 1000x more data is enough to autonomously self-drive anywhere in the US, at any time of day or year, etc...
My claim is that 1000x is enough for utterly exhaustive road maps. Figuring out how to drive is another thing entirely.
Ohhhh, we're arguing past each other. I am unsure how to reconcile.
An SMS isn't just "140 characters/bytes" or whatever (I honestly don't care what your definition of "SMS" is). Of course you could fit 140 characters * 1e12 onto 5 hard drives. Where are you going to put the 1PB (for 1e12, but your own cite says it's 1e13, so 10PB) of metadata, minimum? The most barebones amount of metadata you need to actually have actionable "intelligence" is 1KB per message (technically I was able to finagle it to ~1016 bytes). And that's for every message, even an SMS that is the single character "K".
You need the metadata to derive any information from the SMS. "Lunch?" "yeah" "where?" "the place with the wheel" "okay see you in 25, bring Joel" - this is what you propose to save. (Quick math shows you went off something like ~32TB of SMS data per 1e12 messages.)
In the same way, you propose that the shapes of a road and its direction and distance, "plus 1KB of metadata per meter", are enough to derive the ability to drive upon those roads.
It's pretty obvious that just using sensors is not going to get FSD. Maybe in the next 20 years we will develop sensor technology (and swarm networking and whatever else) that will allow us to dispense with the "7TB" of metadata. My argument is that we need much more "metadata" than 1KB per meter to "know the road baseline, current conditions, hazards", much in the same way a text message is more than 140 bytes. Driving with "only sensors" and rough GPS has killed people. It does not matter if human drivers have more deaths per million miles or whatever, because I am strictly talking about FSD, what other people are calling level 5 (I'd even concede level 4, although I wouldn't be able to use a level 4 car where I live for roughly a quarter of the year - and other areas would have more than that).
Obviously you can reduce this, but there's a minimum viable amount of metadata, that's my claim, and it's more than 1KB per meter. That snippet is ~1800 bytes as is. The "current conditions" would not be part of the dataset on the "7TB" disk; that would need to be relayed or otherwise ingested by the car as it drives - the way my 2012 Lexus tells me that I'm about to drive into a wild storm, but that's all the extra information I get out of its infotainment system. Waze is a better example of the sort of realtime updates I expect an FSD to need, although I expect many times more points of information than Waze has, maybe dozens, maybe hundreds more. And each "trick" you do to reduce the size of the metadata necessarily implies more CPU needed to parse and process it.
> the most barebones amount of metadata you need to actually have actionable "intelligence" is 1KB per message (technically i was able to finagle it to ~1016 bytes.) And that's for every message, even an SMS that is the single character "K".
How did you reach that number?
I figure the most important metadata is source and destination phone numbers and a timestamp, and I guess what cell tower each phone was on. A phone number needs 8 bytes, and timestamp and cell tower can be 4 bytes, so that's 28 bytes of important metadata.
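A rough sketch of that 28-byte layout (all names hypothetical):

```go
package main

import "fmt"

// smsMeta is a hypothetical fixed-width record for the metadata described above.
// Go pads this struct to 32 bytes in memory, but a packed on-disk encoding only
// needs the 28 bytes of actual field data.
type smsMeta struct {
	From, To  uint64 // each phone number packed into 64 bits
	Timestamp uint32 // seconds since epoch
	FromTower uint32 // cell tower ID the sender's phone was on
	ToTower   uint32 // cell tower ID the recipient's phone was on
}

func main() {
	fmt.Println(8+8+4+4+4, "bytes of metadata per message") // 28
}
```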
> (quick math shows you went off something like ~32TB of sms data per 1e12 messages)
I was going for a full 140TB of data. 20-30TB hard drives are available.
I did consider metadata, but I figured you could probably put that in the savings from non-full-length messages.
> Where are you going to put the 1PB (for 1e12, but your own cite says it's 1e13, so 10PB) of metadata, minimum?
Well, for just the US it would be closer to 1PB. But, uh, I'd store it in a single server rack? (Ideally with backups somewhere.) As of Backblaze's last storage pod post, almost three years ago, it cost them $20k per petabyte. That's absolutely trivial on the scale of telecoms or governments or whatever.
> My argument is that: we need much more "metadata" than 1KB per meter to "know the road baseline, current conditions, hazards", much in the same way a text message is more than 140 bytes.
I mean, I agree with you about needing extra information.
But that's why the number I gave is 10000x larger than your CSV. My number is supposed to be big enough to include those things!
> note: the metadata for a meter of road could be:
I really appreciate the effort you put into this. I have two main things to say.
A) That's less than a kilobyte of information. Most of the bytes in the JSON are key names, and even without a schema for good compression, you can replace key names with 2-byte identifier numbers. And things like "critical" and "Active roadwork zone with lane closure" should also be 1-byte or 2-byte indexes into a table. And all the numbers in there could be stored as 4 byte values. Apply all that and it goes down below 300 bytes. If you had a special schema for this, it would be even lower by a significant amount.
B) Most of those values would not need to be repeated per meter. Add one byte to each hazard to say how long it lasts, 0-255 meters, instant 99% savings on storing hazard data.
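A hypothetical sketch of what one of those entries could shrink to, using made-up field names in the spirit of the JSON quoted above:

```go
package main

import "fmt"

// hazard is a hypothetical packed version of one entry from the JSON above,
// applying points A and B: strings and enum-like values become indexes into
// shared tables, numbers become fixed-width fields, and Length lets a single
// record cover a run of meters instead of being repeated per meter.
type hazard struct {
	Kind     uint16 // index into a table of hazard kinds (e.g. "roadwork zone")
	Severity uint8  // index into a severity table (e.g. "critical")
	Length   uint8  // how many meters this hazard spans (0-255), per point B
	Lat, Lon int32  // fixed-point coordinates, 1e-7 degrees per unit
}

func main() {
	fmt.Println(2+1+1+4+4, "bytes of field data per hazard") // 12, vs. hundreds of bytes of JSON
}
```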
> each "trick" you do to reduce the size of the metadata necessarily implies more CPU needed to parse and process it.
CPUs are measured in billions of cycles per second. They can handle some lookup tables and basic level compression easily. Hell, these keys are just going to feed into a lookup table anyway, using integers makes it faster. And not repeating unchanged sections makes it a lot faster.
And again - if you use clever tricks to reduce this, you increase the overhead to actually use the data.
Get a cell tower snooper on your phone and watch the data it shows - that's the metadata for your phone. An SMS dragnet would need that for both phones, plus the message itself.
It's not an integer. But you can store it inside 64 bits. You can split it into country code and then number, or you can use 60 bits to store 18 digits and then use the top 4 bits to say how many leading 0s to keep/remove. Or other things. A 64 bit integer is enough bits to store variable length numbers up to 19 digits while remembering how many leading zeros they have.
If you want really simple and extremely fast to decode you can use BCD to store up to 16 digits and pad it with F nibbles.
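A minimal sketch of that BCD idea (the phone number here is made up):

```go
package main

import "fmt"

// packBCD packs up to 16 decimal digits into a uint64, one nibble per digit,
// padding unused low nibbles with 0xF as described above. Leading zeros survive
// because a real 0 digit (nibble 0x0) is distinct from the 0xF padding.
func packBCD(digits string) uint64 {
	var v uint64
	for i := 0; i < 16; i++ {
		nib := uint64(0xF) // padding nibble
		if i < len(digits) {
			nib = uint64(digits[i] - '0')
		}
		v = v<<4 | nib
	}
	return v
}

func main() {
	fmt.Printf("%016X\n", packBCD("07911123456")) // 07911123456FFFFF
}
```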
> JSON
Most of this is unimportant. Routing path, really? And we don't need to store the location of a cell tower ten million times, we can have a central listing of cell towers.
I don't think we really need both phone number and IMEI but fine let's add it. Two IMEI means another 16 bytes. And two timestamps sure.
Phone number, IMEI, timestamp, cell tower ID, all times two. That's still well under 100 bytes if we put even the slightest effort into using a binary format.
> and again - if you use clever tricks to reduce this, you increase the overhead to actually use the data.
No no no. Most of the things I would do are faster than JSON.
Removing the steering wheel and pedals from the robotaxi is Tesla embracing culpability, whether they like it or not. If they are negligent and cannot claim human error they will face huge damage awards.
It seems clear to me, at least, that Elon did a major pump of FSD, realized he was full of shit, and so got into politics to try to hack the system in his favor and hide the truth.
I think it's fairly easy to get 80+ percent of the way to FSD, and then it looks like you're on the verge of being able to moat your company with actual FSD. He should have, and probably did, know better - although I've seen lots of videos/articles about how he isn't actually that proficient technically.
Even if that 80% were 99%, the last 1% will be the cause of some mishaps.
My Subaru is within a few percent of 80% FSD if everything is turned on. I still technically have to hold the wheel, but the steering only shuts off about 20% of the time when that condition is met.
This is the same attitude that people used to try and avoid any culpability for Boeing in the 737 MAX crashes. Even if there was a technical way to avoid a crash, it doesn't excuse negligent or blatantly bad engineering practices. There's a reason why engineers are expected to have an ethical duty to the public. Automakers get an industrial exemption on the assumption that their internal processes are sufficient to address the risk… What are we supposed to do when they aren't?
Relative amateurs assuming that the people who work on Go know less about programming languages than themselves, when in almost all cases they know infinitely more.
The amateur naively assumes that whichever language packs in the most features is the best, especially if it includes their personal favorites.
The way an amateur getting into knife making might look at a Japanese chef's knife and find it lacking. And think they could make an even better one with a 3D printed handle that includes finger grooves, a hidden compartment, a lighter, and a Bluetooth speaker.
FWIW, I have designed several programming languages and I have contributed (small bits) to the design of two of the most popular programming languages around.
I understand many of Go's design choices, I find them intellectually pleasing, but I tend to dislike them in practice.
That being said, my complaints about Go's error-handling are not the `if err != nil`. It's verbose but readable. My complaints are:
1. Returning bogus values alongside errors.
2. Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled.
Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error.
> Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled
I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
> I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
Let me detail my claim.
Broadly speaking, in programming, there are three kinds of errors:
1. errors that you can do nothing about except crash;
2. errors that you can do nothing about except log;
3. errors that you can do something about (e.g. retry later, stop a different subsystem depending on the error, try something else, inform the user that they have entered a bad url, convert this into a detailed HTTP error, etc.)
Case 1 is served by `panic`. Case 2 is served by `errors.New` and `fmt.Errorf`. Case 3 is served by implementing `error` (a special interface) and `Unwrap` (not an interface at all), then using `errors.As`.
Case 3 is a bit verbose/clumsy (since `Unwrap` is not an interface, you cannot statically assert against it, so you need to write the interface yourself), but you can work with it. However, if you recall, Go did not ship with `Unwrap` or `errors.As`. For the first 8 years of the language, there was simply no way to do this. So the entire ecosystem (including the stdlib) learnt not to do it.
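For concreteness, here is roughly what that case-3 plumbing looks like today; `QueryError` is just an illustrative name:

```go
package main

import (
	"errors"
	"fmt"
)

// QueryError is an illustrative custom error carrying context a caller can act on.
type QueryError struct {
	Query string
	Err   error
}

func (e *QueryError) Error() string { return fmt.Sprintf("query %q: %v", e.Query, e.Err) }
func (e *QueryError) Unwrap() error { return e.Err }

func run(q string) error {
	// Deeper layers can keep annotating with %w without breaking the caller's handling.
	return fmt.Errorf("running job: %w", &QueryError{Query: q, Err: errors.New("syntax error")})
}

func main() {
	err := run("SELECT *")
	var qe *QueryError
	if errors.As(err, &qe) { // finds the QueryError even through the extra wrapping
		fmt.Println("caller can act on the failing query:", qe.Query)
	}
}
```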
As a consequence, take a random library (including big parts of the stdlib) and you'll find exactly that. Functions that return with `errors.New`, `fmt.Errorf` or just pass `err`, without adding any ability to handle the error. Or sometimes functions that return a custom error (good) but don't document it (bad) or keep it private (bad).
Just as bad, from an (admittedly limited) sample of Go developers I've spoken to, many seem to consider that defining custom errors is black magic. Which I find quite sad, because it's a core part of designing an API.
In comparison, I find that `if err != nil` is not a problem. Repeated patterns in code are a minor annoyance for experienced developers and often a welcome landscape feature for juniors.
Again, you don't need to define a new error type in order to allow callers to do something about it. Almost all of the time, you just need to define an exported ErrFoo variable, and return it, either directly or annotated via e.g. `fmt.Errorf("annotation: %w", ErrFoo)`. Callers can detect ErrFoo via errors.Is and behave accordingly.
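For example, a minimal sketch of that pattern (ErrNotFound and lookup are made-up names):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound is an exported sentinel so callers can match it with errors.Is.
var ErrNotFound = errors.New("not found")

var table = map[string]string{"a": "1"} // stand-in for some real lookup source

func lookup(key string) (string, error) {
	v, ok := table[key]
	if !ok {
		return "", fmt.Errorf("lookup %q: %w", key, ErrNotFound)
	}
	return v, nil
}

func main() {
	if _, err := lookup("b"); errors.Is(err, ErrNotFound) {
		fmt.Println("caller can branch on ErrNotFound:", err)
	}
}
```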
`err != nil` is very common, `errors.Is(err, ErrFoo)` is relatively uncommon, and `errors.As(err, &fooError)` is extraordinarily rare.
You're speaking from a position of ignorance of the language and its conventions.
Indeed, you can absolutely handle some cases with combinations of `errors.Is` and `fmt.Errorf` instead of implementing your own error.
The main problem is that, if you recall, `errors.Is` also appeared 8 years after Go 1.0, with the consequences I've mentioned above. Most of the Go code I've seen (including big parts of the standard library) doesn't document how one could handle a specific error. Which feeds back to my original claim that "errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled".
On a more personal touch, as a language designer, I'm not a big fan of taking an entirely different path depending on the kind of information I want to attach to an error. Again, I can live with it. I even understand why it's designed like this. But it irks the minimalist in me :)
> You're speaking from a position of ignorance of the language and its conventions.
This is entirely possible.
I've only released a few applications and libraries in Go, after all. None of my reviewers (or linters) have seen anything wrong with how I handled errors, so I guess they're in the same position? Which suggests that everybody writing Go in my org shares that ignorance. Which... I guess brings me back to the previous points about error-fu being considered black magic by many Go developers?
One of the general difficulties with Go is that it's actually a much more subtle language than it appears (or is marketed as). That's not a problem per se. In fact, that's one of the reasons for which I consider that the design of Go is generally intellectually pleasing. But I find a strong disconnect between two forms of minimalism: the designer's zen minimalism of Go and the bruteforce minimalism of pretty much all the Go code I've seen around, including much of the stdlib, official tutorials and of course unofficial tutorials.
> Indeed, you can absolutely handle some cases with combinations of `errors.Is` and `fmt.Errorf` instead of implementing your own error.
Not "some cases" but "almost all cases". It's a categorical difference.
> Most of the Go code I've seen (including big parts of the standard library) doesn't document how one could handle a specific error. Which feeds back to my original claim that "errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled".
First, most stdlib APIs that can fail in ways that are meaningfully interpret-able by callers, do document those failure modes. It's just that relatively few APIs meet these criteria. Of those that do, most are able to signal everything they need to signal using sentinel errors (ErrFoo values), and only a very small minority define and return bespoke error types.
But more importantly, if json.Marshal fails, that might be catastrophic for one caller, but totally not worth worrying about for another caller. Whether an error is fatal, or needs to be introspected and programmed against, or can just be logged and thereafter ignored -- this isn't something that the code yielding the error can know, it's a decision made by the caller.
> Not "some cases" but "almost all cases". It's a categorical difference.
Good point. But my point remains.
> First, most stdlib APIs that can fail in ways that are meaningfully interpret-able by callers, do document those failure modes. It's just that relatively few APIs meet these criteria. Of those that do, most are able to signal everything they need to signal using sentinel errors (ErrFoo values), and only a very small minority define and return bespoke error types.
>
> But more importantly, if json.Marshal fails, that might be catastrophic for one caller, but totally not worth worrying about for another caller. Whether an error is fatal, or needs to be introspected and programmed against, or can just be logged and thereafter ignored -- this isn't something that the code yielding the error can know, it's a decision made by the caller.
I may misunderstand what you write, but I have the feeling that you are contradicting yourself between these two paragraphs.
I absolutely agree that the code yielding the error cannot know (again, with the exception of panic, but I believe that we agree that this is not part of the scope of our conversation). Which in turn means that every function should document what kind of errors it may return, so that the decision is always delegated to client code. Not just the "relatively few APIs" that you mention in the previous paragraph.
Even `json.Marshal`, which is probably one of the most documented/specified pieces of code in the stdlib, doesn't fully specify which errors it may return.
And, again, that's just the stdlib. Take a look at the ecosystem.
> I absolutely agree that the code yielding the error cannot know (again, with the exception of panic, but I believe that we agree that this is not part of the scope of our conversation). Which in turn means that every function should document what kind of errors it may return, so that the decision is always delegated to client code.
As long as the function returns an error at all, then "the decision [as to how to handle a failure] is always delegated to client [caller] code" -- by definition. The caller can always check if err != nil as a baseline boolean evaluation of whether or not the call failed, and act on that boolean condition. If err == nil, we're good; if err != nil, we failed.
What we're discussing here is how much more granularity beyond that baseline boolean condition should be expected from, and guaranteed by, APIs and their documentation. That's a subjective decision, and it's up to the API code/implementation to determine and offer as part of its API contract.
Concretely, callers definitely don't need "every function [to] document what kind of errors it may return" -- that level of detail is only necessary when it's, well, necessary.
> Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error
But you have to return something to satisfy the function signature's type, which often feels bad.
>> Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled
> I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
I agree to a point, but if you look at any random Go codebase, they tend to use errors.New and fmt.Errorf which do not lend themselves to branching on error conditions. Go really wants you to define a type that you can cast or switch on, which is far better.
> Go really wants you to define a type that you can cast or switch on, which is far better.
Go very very much does not want application code to be type-asserting the values they receive. `switch x.(type)` is an escape hatch, not a normal pattern! And for errors especially so!
> they tend to use errors.New and fmt.Errorf which do not lend themselves to branching on error conditions
You almost never need to branch on error conditions in the sense you mean here. 90% of the time, err != nil is enough. 9% of the time, errors.Is is all you need, which is totally satisfied by fmt.Errorf.
Returning an error -- or, more accurately, identifying an error and returning an annotation or transformation of that error appropriate for your caller -- is a way of handling it. The cases where, when your code encounters an error, that it can do anything other than this are uncommon.
This goes completely against the golang error-handling mindset.
Error handling is so important, we must dedicate two-thirds of the lines of every golang program to it. It is so important that it must be made a verbose, manual process.
But there's also nothing that can be done about most errors, so we do all this extra work only to bubble errors up to the top of the program. And we do all this work as a human exception handler to build up a carefully curated manual stack trace that loses all the actually-useful elements of a stack trace, like filenames and line numbers.
Handling errors this way is possible in only very brittle and simplistic software.
I mean, you're contradicting your very own argument. If this was the primary/idiomatic way of handling errors... then Go should just go the way of most languages with Try/Catch blocks. If there's no valuable information or control flow to managing errors... then what's the point of forcing that paradigm to be so verbose and explicit in control flow?
> Go very very much does not want application code to be type-asserting the values they receive. `switch x.(type)` is an escape hatch, not a normal pattern! And for errors especially so!
A type assert/switch is exactly how you implement errors.Is [^0] if you define custom error types. Sure, it's preferable to use the interface method in case the error is wrapped, but the point stands. If you define errors with errors.New you use string comparison, which is only convenient if you export a top-level var of the error instead of using errors.New directly.
> You almost never need to branch on error conditions in the sense you mean here. 90% of the time, err != nil is enough. 9% of the time, errors.Is is all you need, which is totally satisfied by fmt.Errorf.
I'd argue it's higher than 9% if you're dealing with IO, which most applications will. Complex interfaces like HTTP and filesystems will want to retry on certain conditions such as timeouts, for example. Sure most error checks by volume might be satisfied with a simple nil check, it's not fair to say branching on specific errors is not common.
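A hedged sketch of that kind of branching - retrying only when the failure looks like a network timeout (net.Error and errors.As are stdlib; the retry policy itself is just illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"time"
)

// getWithRetry retries only when the failure is a network timeout; any other
// error is returned to the caller immediately.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil // caller is responsible for closing resp.Body
		}
		lastErr = err
		var ne net.Error
		if errors.As(err, &ne) && ne.Timeout() {
			time.Sleep(time.Second) // transient: back off and try again
			continue
		}
		return nil, err // not a timeout: give up immediately
	}
	return nil, lastErr
}

func main() {
	resp, err := getWithRetry("https://example.com", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```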
> If you define errors with errors.New you use string comparison.
With `errors.New`, you're expected to provide a human-readable message. By definition, this message may change. Relying on this string comparison is a recipe for later breakages. But even if it worked, this would require documenting the exact error string returned by the function. Have you _ever_ seen a function containing such information in its documentation?
As for `switch x.(type)`, it doesn't support any kind of unwrapping, which means that it's going to fail if someone in the stack just decides to add a `fmt.Errorf` along the way. So you need all the functions in the stack to promise that they're never going to add an annotation detailing what the code was doing when the error was raised. Which is a shame, because `fmt.Errorf` is often a good practice.
I was actually referring to the implementation of errors.Is, which uses string comparison internally if you use the error type returned by errors.New and a type cast or switch if you use a custom type (or the cases where the stdlib defines a custom error type).
> A type assert/switch is exactly how you implement Error.Is [^0]
errors.Is is already implemented in the stdlib, why are you implementing it again?
I know that you can implement it on your custom error type, like your link shows, to customize the behavior of errors.Is. But this is rarely necessary and generally uncommon.
> If you define errors with Errors.New you use string comparison, which is only convenient if you export a top level var of the error instead of using Errors.New directly.
What? If you want your callers to be able to identify ErrFoo then you're always going to define it as a package-level variable, and when you have a function that needs to return ErrFoo then it will `return ErrFoo` or `return fmt.Errorf("annotation: %w", ErrFoo)` -- and in neither case will callers use string comparison to detect ErrFoo, they'll use errors.Is, if they need to do so in the first place, which is rarely the case.
This is bog-standard conventional and idiomatic stuff, the responsibility of you as the author of a package/module to support, if your consumers are expected to behave differently based on specific errors that your package/module may return.
> Complex interfaces like HTTP and filesystems will want to retry on certain conditions such as timeouts, for example. Sure most error checks by volume might be satisfied with a simple nil check, it's not fair to say branching on specific errors is not common.
Sure, sometimes, rarely, callers need to make decisions based on something more granular than just err != nil. In those minority of cases, they usually just need to call errors.Is to check for error identity, and in the minority of those minority of cases that they need to get even more specific details out of the error to determine what they need to do next, then they use errors.As. And, for that super-minority of situations, then sure, you'd need to define a FooError type, with whatever properties callers would need to get at, and it's likely that type would need to implement an Unwrap() method to yield some underlying wrapped error. But at no point are you, or your callers, doing type-switching on errors, or manual unwrapping, or anything like that. errors.As works with any type that implements `Error() string`, and optionally `Unwrap() error` if it wants to get freaky.
> Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error.
Ah yes the classic golang philosophy of “just avoid bugs by not making mistakes”.
Nothing stops you from literally just forgetting to handle an error without running a bunch of third-party linting tools. If you drop an error on the floor and only assign the return value, Go does not care.
I know..! Ignoring an error at a call site is a bug by the caller, that Go requires teams to de-risk via code review, rather than via the compiler. This is well understood and nobody disputes it. And yet all available evidence indicates it's just not that big of a deal and nowhere near the sort of design catastrophe that critics believe it to be. If you don't care or don't believe the data that's fine, everyone knows your position and knows how dumb you think the language is.
Indeed, while not being a fan of this aspect of Go, I have to admit that it seldom causes issues.
It is, however, part of the reason why you cannot attach invariants to types in Go, which is how my brain works, and probably the main reason why I do not enjoy working with Go.
Yeah, I mean, Go doesn't see types as particularly special, rather just as one of many tools that software engineers can leverage to ship code that's maintainable and stands the test of time. If your mindset is type-oriented then Go is definitely not the language for you!
To be fair, there are lots of people who have used multiple programming languages at expert levels who complain about Go - in the same ways - as well! They might not be expert programming language designers, but they have breadth of experience, and even some of them have written their own programming languages too.
Assuming that all complainants are just idiots is purely misinformed and quite frankly a bit of gaslighting.
"To be fair there are lots of pilots who have flown multiple aircraft at an expert level that complain about the Airbus A380 - in the same ways - as well! They might not be expert airplane designers, but they have a breadth of experience, and even some of them have created their own model airplanes too."
Yes, non-experts can have valid criticisms but more often than not they're too ignorant to even understand what trade-offs are involved.
See, there you go again, assuming. I'm talking about people who have written programming languages that are used in prod with millions of users, not people with toy languages.
Is the entire Go community this toxically ignorant?
"they’re smarter than me" feels like false humility and an attempt to make the medicine go down better.
1. Thomas is obviously very smart.
2. To be what we think of as "smart" is to be in touch with reality, which includes testing AI systems for yourself and recognizing their incredible power.
I haven't paid close attention. Why can't people make money with MCP-based APIs? Why can't providers require API keys / payment to call their functions?
Sure they can - they're just another API interface tailored for LLMs. I think parent and OP are in fact ranting about that (many APIs being locked behind signups or paywalls). Not sure I agree with the criticism though. In my view, web 2.0 was a huge success: we went from a world with almost no APIs to one where nearly every major website or app offers one. That's real progress, even if we didn't turn every business into an open data non-profit.
OpenAI and these companies hire inexperienced people with zero operational experience, and this is how they run things. It would almost be funny if you didn't see how unreliable the end result was.
Postgres is powerful but just not suited for this role. But if your only tool is a hammer...
Based on your replies I have zero reason to believe you know any better at all, considering the false statements you've already made (trivially proven, no less!) and lack of any meaningful critique of the source material.
I hate how "cracked" went from fun gamer lingo to joining the "gm fellow kids, any ninja rockstar whizzkid coder prodigies ready to get on their grindset and hustle to the moon?" pantheon seemingly overnight.