I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course correct at the literal last second.
pmarreck 2 hours ago [-]
Interestingly, I think that similar types of arguments are made against "agentic coding".
If you don't pay constant attention, you will never notice when it slips in a bug or security issue.
ownagefool 2 hours ago [-]
Sure, but you can do that in a diff after the event, rather than live.
catlikesshrimp 2 hours ago [-]
Car crash deaths are better known than software bug caused deaths. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.
Symmetry 2 hours ago [-]
SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.
JumpCrisscross 2 hours ago [-]
I think you mean SAE Level 3. SAE Level 2 is “lane centering” and “adaptive cruise control” [1]. (Level 3 is “when the feature requests, you must drive.”)
Treat it like a driver assistance system. I treat FSD the same as I treat Augmented Cruise Control and Lane Keep Assist in my CRV. I keep my hands on the steering wheel and follow along with the decision making.
grvdrm 2 hours ago [-]
Reminds me of a situation not long ago.
I’m in the left lane on the highway. Tesla ahead of me, but quite a ways away.
I realize as I’m driving that the Tesla is moving quite slowly for left-lane driving. And before you say it: yes, there are lots of people speeding in highway left lanes too.
So I passed on the right rather than tailgate. I look over and see a guy leaning back in his seat. No hands on the wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you are in, unlike my example.
x187463 2 hours ago [-]
One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now, you can only select a profile. You're either going the exact speed limit, 2-3mph over, or essentially 'with the flow of traffic' which can lead to speeding +15 over the limit.
grvdrm 35 minutes ago [-]
Didn't know about that feature. Thanks for the illumination. On verge of going full electric and looking at BMW, Lucid, Porsche, Rivian, Tesla.
I wonder what's taught to new drivers about this sort of situation. My intuitive feeling (driving for almost 30 years) is that you drive with the flow of traffic when traffic is present. I don't see too many left-lane drivers glued to speed limits, but it's obvious when someone is notably fast or slow.
throwanem 2 hours ago [-]
Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
x187463 2 hours ago [-]
Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turned green and FSD didn't accelerate. I figured it was somehow confused and was starting to move toward hitting the accelerator myself when a car came flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
grvdrm 29 minutes ago [-]
Glad you're ok!
I was watching the Tesla display on my way back home from LaGuardia airport last week (passenger, not driver).
No accidents or close calls, but it was obvious that I might be focused on 1 or 2 things in that very busy and chaotic environment whereas the car (FSD or otherwise) sees more than 2 things and possibly avoids something on my behalf.
throwanem 1 hours ago [-]
Huh.
I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.
(Jumping a light in the dark? Not thinking, or learning to navigate by verbal instructions from your satnav or phone instead of compromising the primary sense you must constantly use to drive without risking manslaughter? I'm sorry, but if this is the standard, I really can't describe it as other than what it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.)
XorNot 2 hours ago [-]
Which is just worse.
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me have to try and guess what the car is going to do at any given time is adding complexity to the process: am I changing lanes now, oh I guess I am because the autonomy thinks we should etc.
grvdrm 25 minutes ago [-]
Not sure about your car but the car I have with augmented cruise requires hands on wheel. Turns off otherwise. (Volvo XC90)
I agree that there are situations where what I do as a trained driver is different from augmented cruise.
A good example (or perhaps I'm wrong) is this: in a lane, a car pulls in front of me, between me and the car further ahead. Now I don't have enough space between me and that new entrant. But instead of using the brakes (unless it's egregious), I bleed speed until I've made the space I want. Augmented cruise doesn't do that: it hits the brakes.
So, from behind, I think it looks like I'm using my brakes a lot more than I am when on augmented cruise. And excessive brake use distracts the driver behind me.
x187463 2 hours ago [-]
Sure, but the practical experience is that FSD is fairly predictable. It's just a matter of personal preference that comes from experience. I wouldn't impose a system like FSD on everybody.
IgorPartola 2 hours ago [-]
A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.
ghaff 2 hours ago [-]
I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that you can't reasonably expect AFAIK to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, they're not going to be able to handle everything.
butlike 2 hours ago [-]
I think their point was "it's not ready yet."
catlikesshrimp 2 hours ago [-]
"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"
Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.
sobellian 2 hours ago [-]
Would it be a vote of no confidence in Full Self Flying?
tclancy 2 hours ago [-]
No, it would be an acknowledgement of the lack of perfection in human systems so far.
saalweachter 2 hours ago [-]
I mean, they kinda are.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
tclancy 2 hours ago [-]
That presents an interesting failure mode challenge.
singpolyma3 2 hours ago [-]
Well we don't have any self driving cars outside of San Francisco. Only cars with advanced driver assistance.
How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.
d1sxeyes 2 hours ago [-]
To be fair, that report says
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
idop 2 hours ago [-]
It's an insane reversal of roles. In a standard level 2 ADAS, the system detects a pending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of pumping the brakes. It's pure insanity.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
IDK, this has the same unethical energy as police turning off body cameras.
In the best case, this is a confluence of coincidences: engineering knows about this and leaves it as "low-prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally or through intentional omission.
JumpCrisscross 2 hours ago [-]
> the "right thing to do" is NOT turn off the cameras just before a collision
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
onemoresoop 2 hours ago [-]
This is a policy that Tesla put in place, period. Handing control to the driver suddenly, at a weird moment, can make the whole situation even more dangerous: the driver is not primed to handle it on the spot, and it's all too unexpected.
scott_w 2 hours ago [-]
Yep, your comment reminds me of a time my mother was about to hit a bird in the road. She was too busy arguing with the passenger to notice, and her driving was already starting to become erratic. I decided not to tell her because I knew the shock could cause her to do something more drastic, like crashing the car trying to avoid it.
boringg 2 hours ago [-]
I guess i'll step in for the counter.
How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
onemoresoop 2 hours ago [-]
Maybe the car should not have this dangerous feature in the first place? Or maybe train drivers thoroughly and frequently, so that when this situation arises it becomes less dangerous.
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
scott_w 2 hours ago [-]
The few Tesla post-mortems I’ve read early on stated that FSD turned off before impact and used this as a defence to their system. If they shared that this happened 1 second before impact (so far too late for a human to respond), I’d have sympathy. I have never read a Tesla statement that contained this information.
For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.
x187463 2 hours ago [-]
This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event rather than requiring self-driving be activated at the exact moment of impact.
kombookcha 2 hours ago [-]
Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
x187463 2 hours ago [-]
> It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
bena 2 hours ago [-]
So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.
That's still not a good look.
And it does mean that FSD shouldn't be trusted as much as it is, because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
plqbfbv 1 hours ago [-]
It's been well known for a while now, and it's not to avoid recording being active; it's to keep a possibly damaged computer from continuing to control the car in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?
AEB should still be working to apply the brakes AFAIK, but auto-steer and cruise control will be disabled, even while the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.
> It's been well known for a while now, and it's not to avoid recording being active; it's to keep a possibly damaged computer from continuing to control the car in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?
That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.
plqbfbv 28 minutes ago [-]
I'm just stating what I remember, I'm not trying to defend Tesla.
For 1) it's the first time I hear it from a technical point of view - Tesla's dashcam records continuously for the last 10m, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible AFAIR (I'm an owner). IIRC it's not the first case though where Tesla claimed the data wasn't available or corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.
2) I often hear it as FUD, as in: AP/FSD was off, the user just did it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had the data released and showed some of the following: user touched steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), user was pressing accelerator pedal by accident, user was pressing accelerator instead of brake, etc etc
Bluestrike2 2 hours ago [-]
Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of around 1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver-assistance systems return control to the driver, and its impact on driver response times and overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus that is now barreling down on them.
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be an instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive, during normal braking.
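To put rough numbers on why the window size matters: here's a back-of-envelope sketch of how far a car travels during the ~1.5-second perception-response time versus the 5-second attribution window quoted above. The speeds chosen are arbitrary illustrations, not anything from Tesla's methodology.

```python
# Distance covered during a reaction window: d = v * t.
# 1.5 s is the rule-of-thumb perception-response time mentioned above;
# 5 s is Tesla's stated attribution window. Speeds are illustrative.
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

for mph in (30, 65):
    v = mph * MPH_TO_MS
    prt_dist = v * 1.5     # ground covered before a typical driver reacts
    window_dist = v * 5.0  # ground covered across the full 5 s window
    print(f"{mph} mph: {prt_dist:.0f} m in 1.5 s, {window_dist:.0f} m in 5 s")
```

At 65 mph that's roughly 44 m gone before a typical direct response even begins, which is why any added re-engagement time on top of that eats so quickly into the 5-second margin.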
It's the Swiss national radio/TV service; they probably have the article in 4 languages or more.
pmarreck 2 hours ago [-]
Are these still accidents where the driver was not paying attention, though?
singpolyma3 2 hours ago [-]
Of course. But the argument is that the nature of FSD causes them to not pay attention.
adev_ 2 hours ago [-]
So...For a bit of context on the video and the article:
- The documentary is from the RTS. The RTS is the main publicly owned media outlet in Switzerland. They are not the typical European publicly owned media: they are generally pretty well funded (unlike most), they tend to produce good, high-quality content, and they tend to be independent and rather neutral (leaning slightly to the left politically speaking).
- The video is in French because, in Switzerland, the media are divided into three groups associated with the regional languages: RTS for French, SSR for German and RSI for Italian. That's why you get a German translation.
- They are generally pretty cooperative and open-minded. If one of you wants to submit English subtitles, just contact them; they might accept it (I do not promise anything).
limbero 2 hours ago [-]
Sorry, but you seem to be implying that European public owned media outlets are not normally to be trusted. Why?
I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
outime 2 hours ago [-]
I've lived for many years in two large European countries and in both cases I found them hard to trust. Perhaps you have deep, first-hand knowledge of multiple European countries but in my experience they take too much money and are heavily biased. For that reason I'd prefer there to be no public broadcast companies - at least so my tax money doesn't support manipulation. In over 30+ years of life, I've never encountered a truly neutral public broadcaster in Europe, though I'm sure there may be exceptions.
spiderfarmer 2 hours ago [-]
In my country I judge them purely by what they do and say in the sectors I know a lot about, and the facts they bring are mostly correct.
Also, they don't tout a single party line.
u_sama 2 hours ago [-]
They have left-leaning biases; RTVE is basically a propaganda channel for the PSOE at this point, and France Info/France 2 have center-left biases, which makes them neither neutral nor representative of society as a whole.
They are all well-funded though.
adev_ 2 hours ago [-]
> Sorry, but you seem to be implying that European public owned media outlets are not normally to be trusted. Why?
The quality of European publicly owned media is highly country-specific and varies quite a lot:
- Some of them are critically underfunded, and it shows (a tendency toward cheap sensationalism, superficial investigation or recycled content).
- Some of them are politically rooted (left or right) or controlled due to direct/indirect government involvement.
But all considered, I would say that the average is still an order of magnitude better in terms of content quality and independence than the average privatized media.
paganel 2 hours ago [-]
The national broadcaster here in Romania has been politically leaning on whoever was paying the bills, hence on who’s holding political control over the country.
I can say the same about the foreign bureaus of State-owned media thingies like Deutsche Welle and Radio France Internationale, both of these entities actively rooting for the Romanian political candidate that was seen as closer to German and French interests (I’m talking the last couple of rounds of Romanian presidential elections).
spiderfarmer 2 hours ago [-]
You're probably responding to Swiss person that lives in the USA.
RoxiHaidi 2 hours ago [-]
One day an AI will obviously be infinitely better at driving than a human will be but that day is not yet here.
baq 2 hours ago [-]
it is finitely better today and will be better still. this doesn't mean it's better at everything a human driver can do, it's just better on average. the jagged frontier is real and a very important safety consideration; nevertheless, the averages matter, too.
JumpCrisscross 2 hours ago [-]
> that day is not yet here
Have you been in a Waymo? SAE Level 4 is here, and it’s safer than humans [1].
Not the OP, but I have! I also have FSD v14 in my Tesla
Vastly VASTLY prefer Waymo. It's very good at its mission and is, at minimum, infinitely better than being in an Uber rideshare. I'd rather wait 20 minutes for a Waymo than 5 for any Uber or 0 to use my own car.
Ironically, Waymo got me much more interested in using my city's public transportation offering which is much better than I previously thought.
That said, Tesla FSD v14 is the best autonomous option for a supervised system that you can actually use.
gloosx 1 hours ago [-]
This is Waymo saying Waymo cars are safer than humans. Obviously the "it’s safer than humans" claim is a selection-biased, statistically underpowered, apples-to-oranges comparison with a limited sample size.
JumpCrisscross 24 minutes ago [-]
> Obviously the "it’s safer than humans" claim is selection biased, statistically underpowered apples-to-oranges comparison with limited sample size
I haven't seen a good criticism of their methodology. If you have one, I'd be curious to hear it.
On a more-direct measure, Waymos have had starkly lower fatalities and at-risk incidents than human drivers on average and, I think, near their best.
bluefirebrand 2 hours ago [-]
Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
pmarreck 2 hours ago [-]
> to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, in the case of a ratio of 2 for example, you would require 2 nonfamiliar deaths to justify losing one of your own kids
senordevnyc 2 hours ago [-]
Yeah, you also have to consider that your kids can be on either side of the equation too.
catlikesshrimp 2 hours ago [-]
I honestly don't know if by the other side of the equation you mean your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.
CrazyStat 2 hours ago [-]
We can take the AI out of the question entirely and ask how many other humans you personally as a driver would be willing to mow down to avoid your own death—driving off a bridge, say.
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
bluefirebrand 2 hours ago [-]
> You mean deaths to multiple other people, do you not
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
AlotOfReading 2 hours ago [-]
I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant and moreover, actually illegal to have that code in some situations.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
JumpCrisscross 2 hours ago [-]
> deaths the AI predicts for other people
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
XorNot 2 hours ago [-]
It was an entire media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.
There's plenty we could talk about: i.e. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR etc.
Instead we got "what if the car calculates the trolley problem against you?"
And, observationally, proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where slamming on the brakes happens far too late, but you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up at: assuming an AI is fast enough to detect a situation, either it's sufficiently fast that it would always be able to stop the car with the brakes, or no level of aggressive manoeuvring would avoid the collision.
Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake sooner and harder, if it's smart enough to even consider other options.
saalweachter 1 hours ago [-]
> Which is of course what the road rules are: you slam on the brakes.
Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".
You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you -- the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".
If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it will be clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65mph takes around 5 seconds to stop, so if it's halfway through its stopping distance, you've got around 2.5 seconds to maneuver out of the way).
That does leave you all sorts of room to come up with realistic trolley problems.
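The "halfway through its stopping distance" figure above can be sanity-checked with simple kinematics. This sketch assumes constant deceleration and takes the comment's 65 mph / ~5 second stopping time at face value; the interesting wrinkle is that how much time you have depends on whether the truck is actually braking:

```python
import math

# Back-of-envelope check of the "truck halfway through its stopping
# distance" scenario, assuming constant deceleration throughout.
MPH_TO_MS = 0.44704

v0 = 65 * MPH_TO_MS   # initial speed, ~29.1 m/s
T = 5.0               # quoted stopping time, seconds
a = v0 / T            # implied deceleration, ~5.8 m/s^2
d = v0 * T / 2        # stopping distance, ~72.6 m

# If the truck IS braking: covered distance is s(t) = v0*t - a*t^2/2,
# which reaches d/2 at t = T*(1 - 1/sqrt(2)), leaving T/sqrt(2) seconds.
t_left_braking = T / math.sqrt(2)   # ~3.5 s remaining

# If the truck is NOT braking (the scary case): it covers the last d/2
# at full speed, leaving only (d/2)/v0 seconds.
t_left_coasting = (d / 2) / v0      # ~1.25 s remaining

print(f"stopping distance ~{d:.0f} m")
print(f"time left if braking: {t_left_braking:.2f} s")
print(f"time left if coasting: {t_left_coasting:.2f} s")
```

So "around 2.5 seconds" sits between the two cases: roughly 3.5 seconds if the truck is decelerating the whole way, but only about 1.25 seconds if it isn't braking at all, which is exactly the situation where an AV watching its mirrors would help most.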
JumpCrisscross 29 minutes ago [-]
> That does leave you all sorts of room to come up with realistic trolley problems
But all require a human (or malicious) driver on one hand. The more rule-following AVs on the road, the fewer the opportunities for such trolley problems.
And I'd still argue that debating these ex ante is, while philosophically fascinating, not a practical discussion. I'm not seeing a case where one would code anything further than collision avoidance and e.g. pre-activating restraints.
Timon3 2 hours ago [-]
The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?
What is the lowest likelihood of your own death you'd find acceptable in this situation?
JumpCrisscross 2 hours ago [-]
> not sure I would trust it
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
bluefirebrand 2 hours ago [-]
Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
JumpCrisscross 2 hours ago [-]
> what happens when the tech gains market capture
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I’m willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
maxerickson 2 hours ago [-]
I find it interesting that you don't give other drivers any consideration in your analysis.
bluefirebrand 2 hours ago [-]
Other drivers should take public transit if they don't want to / are afraid to operate their own vehicles
As for me I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
maxerickson 2 hours ago [-]
No, I mean that they are not prioritizing you and many make poor choices.
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
watwut 2 hours ago [-]
They are not afraid to operate their own vehicles. They are afraid you will kill them.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
qsera 2 hours ago [-]
Appreciate the honesty.
occamofsandwich 2 hours ago [-]
Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
bluefirebrand 2 hours ago [-]
Feel free to minimize your own risk by staying home and never leaving
occamofsandwich 2 hours ago [-]
Feel free to minimize both our risks by not polluting public space with your personal crap.
senordevnyc 2 hours ago [-]
“Infinitely” is a high bar, but Waymo is already demonstrably better than the majority of human drivers.
qsera 2 hours ago [-]
But only in very controlled environments...
rimliu 2 hours ago [-]
Was it 2015 when HN was full of prediction we won't be driving in five years?
From what I see the serious accidents with human drivers are caused by deliberately doing the dangerous thing (in my corner of the world - mostly overtaking at the wrong place or time, or both). Besides that humans drive very safely. Outside of the tightly controlled environment I don't see self-driving getting any better till systems have a proper world-model. So, maybe never.
Geee 2 hours ago [-]
This is about the old autopilot, not FSD, and there doesn't seem to be anything new in the article. This is based on the same leaked data which has been public since 2023. The title seems to be inaccurate, as there's nothing to indicate that they hid fatal accidents.
Look I don't like Tesla as much as the next person, I think it is wildly over-hyped and over-valued. But this article is just slop.
The headline says - "How Tesla hid accidents to test its Autopilot" but the actual article has no explanation as to (1) how Tesla hid anything or, for that matter, (2) who did Tesla hide this information from
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
Lerc 2 hours ago [-]
>Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
This is something I find frequently as well, moreso with Musk related things than Tesla. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, a smaller number of allegations backed by concrete claims serves both the public and faith in media better than large quantities of vague ones.
I admit if you want to sway public opinion, the latter is more effective, but is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
HFguy 2 hours ago [-]
After you wrote this, I went and read the article. I also didn't see much there, and I wonder why you are getting downvoted. And TBC, I'm also not a Tesla fan (the truck is dumb).
tiberriver256 2 hours ago [-]
Thanks
kotaKat 2 hours ago [-]
Hot take but I feel like Tesla owners (hell, anyone with 'autonomous driving' vehicles) need to see some kind of modern lecture based on the Children of the Magenta talk on automation dependence in aircraft. Mandatory, before you can trigger the system on.
FSD has built this generation's newest children of the magenta line.
>Tesla owners (hell, anyone with 'autonomous driving' vehicles)
Or LLM users.
oblio 3 hours ago [-]
Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
gchamonlive 3 hours ago [-]
[flagged]
Forgeties79 2 hours ago [-]
That’s certainly the myth musk and his compatriots repeat whenever they’re slightly inconvenienced by consideration for the broader public, yes.
Forgeties79 2 hours ago [-]
You would be surprised how passionately people defend Tesla on HN sometimes, especially when safety records come up.
friendzis 2 hours ago [-]
Otherwise number go down
lotsofpulp 2 hours ago [-]
Liability insurance pricing tells the whole story, without clickbait articles or emotion.
If there was a significant problem, my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not.
JumpCrisscross 2 hours ago [-]
> my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not
You’re correct inasmuch as we have no evidence there is “a significant problem.” But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
lotsofpulp 2 hours ago [-]
Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault. The insurance company still has to pay, which means the Tesla owners have to pay.
JumpCrisscross 2 hours ago [-]
> Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault
Why? They only pay out if you’re at fault. And if there aren’t final judgements in a deep pipeline of cases, premiums wouldn’t have a reason to adjust yet.
lotsofpulp 2 hours ago [-]
I am assuming Tesla has been around long enough and driven enough miles to have a sufficiently representative data set for insurance companies to know. I cannot imagine the pipeline of cases to be so deep as people are waiting on payments from collisions from years ago.
I am also assuming that a collision involving a Tesla has at fault determinations that are more accurate than other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence insurance companies would be charging higher liability only insurance premiums.
edit to respond to Forgeties79:
> The issue is they are potentially lying. It’s why we are even having this discussion. The numbers could be fraudulent
When your vehicle gets into a collision, no one contacts the auto manufacturer about who was at fault. Suppose two cars collide. The police write a report, collect evidence, maybe the drivers submit their video recordings to the insurer.
But no one is calling Tesla and asking them to determine who was at fault. And if they did, Tesla would say they never agreed to be liable and the driver should have been paying attention. There is no escaping the fact that if it were costing insurers more to insure liability for a Tesla, they would be asking for higher premiums.
Whether or not Tesla is lying to the government or whoever is irrelevant for the goal of determining if Teslas cause more damage than other vehicle brands.
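The pricing logic being appealed to here can be sketched in a few lines. The numbers below are invented purely for illustration, not actual actuarial data:

```python
# Toy illustration of the argument: liability premiums track expected
# payouts per policy, regardless of which party (driver or software)
# caused the crash. All inputs are hypothetical.
def liability_premium(claims_per_1000_policies, avg_payout, load_factor=1.3):
    """Expected annual loss per policy, marked up by the insurer's load."""
    expected_loss = (claims_per_1000_policies / 1000) * avg_payout
    return expected_loss * load_factor

# If brand A's policyholders generate more at-fault claims, its premium
# rises mechanically -- no input from the manufacturer is needed.
brand_a = liability_premium(claims_per_1000_policies=60, avg_payout=12_000)
brand_b = liability_premium(claims_per_1000_policies=40, avg_payout=12_000)
assert brand_a > brand_b
```

The counterargument in the replies is that the claims-frequency input lags reality while litigation is still in the pipeline, so the premium signal arrives late.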
JumpCrisscross 2 hours ago [-]
The entire point of these articles about mounting lawsuits is those assumptions may be wrong. The liabilities involved are higher. And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over-indexing on how much work liability insurers do. I have an umbrella policy. It absolutely doesn’t take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped, and it’s usually easier to weed out risk by claims history than by running models on small premiums.
lotsofpulp 2 hours ago [-]
> The entire point of these articles about mounting lawsuits is those assumptions may be wrong.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as any other auto brand’s. I don’t think Ford/Toyota/anyone else’s software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
JumpCrisscross 25 minutes ago [-]
> my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks
Fair enough. I agree with you in the long run. I just don't think we've seen the litigation that will define liability play out yet.
> Does the data from Tesla even come into play for an insurer?
Directly? No. At least, not unless AI actuaries make the work worth the while.
For juries calculating damages? Plaintiffs weighing whether to bring a case? Sure. That, in turn, plays into liability. And that is something insurers care about.
> In the meantime, a collision with a Tesla is the same as any other auto brand’s
In the meantime, yes. If collisions with Teslas predictably result in larger damages than with other brands, you'd expect to see more litigation when a Tesla is involved/suspected at fault, and with that, higher costs.
> No auto brand picks up the liability for the driver
The issue is they are potentially lying. It’s why we are even having this discussion. The numbers could be fraudulent.
And unfortunately, musk has earned people’s default skeptical stance towards him.
belter 2 hours ago [-]
[dead]
philipallstar 3 hours ago [-]
Yes, all companies sold leaded gasoline.
post-it 3 hours ago [-]
Usually when people provide examples, they're intended to serve as a representative sample of a larger trend, and not an exhaustive list. Hope that helps.
cj 2 hours ago [-]
Their point still stands.
Not all companies do illegal things.
IMO it’s also a distraction to blame it on “capitalism” or some “larger trend” rather than just pointing directly at the company and people responsible.
“The system is broken” line hasn’t worked for years now. Maybe if we stop blaming the system and start blaming the people?
tonyedgecombe 2 hours ago [-]
>Not all companies do illegal things.
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
ModernMech 2 hours ago [-]
No one claimed all companies do illegal things.
philipallstar 2 hours ago [-]
All of this is a crazy overgeneralisation of the hundreds of millions of companies in the world:
> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
sumeno 2 hours ago [-]
If I say "Ted is the Unabomber" do you think I'm saying everyone named Ted is the Unabomber? This is basic reading comprehension stuff
philipallstar 33 minutes ago [-]
"Corporations" in the passage I quoted would be equivalent to you saying "Yeah there's no way people named Ted would send bombs by mail".
ModernMech 2 hours ago [-]
Saying "corporations have lied in the past for their own self interest" and then pointing to two very well known examples does not imply or over generalize that all corporations do that.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
chneu 2 hours ago [-]
Or pushed beef that destroys the environment and gives people GI cancers while claiming the opposite.
dangus 3 hours ago [-]
To pile on to this pathetic excuse for a company: anyone considering buying a Tesla should know that they are the #1 brand for fatal accidents in the United States, with over twice the accident rate of a typical automaker: https://www.roadandtrack.com/news/a62919131/tesla-has-highes...
This terrible statistic can’t just be explained by aggressive driving owners or some other factor like that. Dodge has plenty of aggressive drivers buying their 700HP V8 rear wheel drive vehicles but they have better fatal accident rates than Tesla.
I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only “ahead” of the competition because the competition is unwilling to sell an unsafe system.
infecto 2 hours ago [-]
Your link only suggests driver and road conditions are to blame. Considering the amount of power coming from even a base model, I would lean toward the driver. What they do with FSD stats is terrible, and it would be refreshing to have some unbiased looks at it. Your narrative, though, is too biased, and the link makes no connection between Tesla and responsibility for the fatalities.
philipallstar 3 hours ago [-]
> Tesla vehicles have a fatal crash rate of 5.6 per billion miles driven, according to the study; Kia is second with a rate of 5.5,
Basically the same as Kia. Why are Kias so bad?
xutopia 3 hours ago [-]
2 reasons I can see.
Kia makes far smaller and cheaper cars with fewer safety features. Tesla, meanwhile, had front-page news at one point claiming it made the safest car ever produced.
Tesla gives people driving its cars a false sense of security.
philipallstar 1 hours ago [-]
But the article doesn't say that at all - quite the opposite:
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
estearum 2 hours ago [-]
Until recently, Kias were sub-entry level shitboxes
This would affect both driver selection and performance during impact
Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla
infecto 2 hours ago [-]
I am sure there is a component of safety systems in a Kia but I would bet the bigger weighting is on driver profile.
dangus 2 hours ago [-]
You’re so close to understanding!
Tesla stans tell us they’re the most luxurious and best-built cars on the road; in reality they’re as poorly built as an economy brand with a reputation for low quality, aimed at people who don’t want to pay for a Toyota.
philipallstar 2 hours ago [-]
> You’re so close to understanding!
Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
infecto 2 hours ago [-]
You’re missing the obvious explanation here. Driver profile. You could have the safest car around but if it’s being driven by unsafe drivers it will lead to higher accidents and fatalities.
senordevnyc 2 hours ago [-]
I can get on board with the rationale that Tesla drivers are idiots.
maxcan 2 hours ago [-]
that study was pretty thoroughly debunked. Also, I believe it was put out by a lobbying group representing auto dealerships who see the Tesla DTC model as a mortal threat. There is a lot of legitimate criticism to be directed towards Tesla but the ISeeCars study "aint it".
mzl 2 hours ago [-]
I've heard people saying the study is bad, but whenever I've asked about why the answers have been pretty bad. Do you have a good source for why we should disregard it?
dangus 2 hours ago [-]
Find a link that shows it’s debunked then? All they did was analyze federal crash data.
I don’t know what’s so hard to believe about the study. Tesla’s numbers are pretty similar to other low-performing brands.
Still looking for more. tl;dr is that NHTSA publishes accident rates but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at mileage for Teslas in the period in question. Their methodology was not released, and their mileage figure was a fraction of the total mileage Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
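The methodological objection is easy to see numerically: a per-mile fatality rate is just fatal crashes divided by miles driven, so an underestimated mileage denominator inflates the rate proportionally. A toy sketch with invented figures (not real crash or mileage data):

```python
import math

def fatal_rate_per_billion_miles(fatal_crashes, miles_driven):
    """Crude per-mile fatality rate: crashes per billion miles driven."""
    return fatal_crashes / (miles_driven / 1e9)

# Invented figures purely to show the sensitivity, not real data:
crashes = 100
estimated_miles = 18e9   # what a study might infer from partial data
actual_miles = 36e9      # what the fleet might actually have driven

rate_est = fatal_rate_per_billion_miles(crashes, estimated_miles)
rate_act = fatal_rate_per_billion_miles(crashes, actual_miles)

# Halving the mileage estimate doubles the apparent rate.
assert math.isclose(rate_est, 2 * rate_act)
```

This is why an undisclosed mileage methodology matters so much more for a ranking like this than the crash counts themselves.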
post-it 2 hours ago [-]
For a while they were the safest car in crash tests, weren't they? Was there an inflection point where they were dropping like a rock? Or is this a case of measuring different things (crash tests vs fatal accident rates)?
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
mzl 2 hours ago [-]
Dan Luu had some interesting analysis about car safety, comparing how different auto-makers fared on newly introduced crash tests: https://danluu.com/car-safety/
The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced which feels indicative of a bad safety culture.
iugtmkbdfil834 3 hours ago [-]
I am admittedly not a fan, but I note that no one in my social circle is considering one, the one person who has one wants to sell it, and one vendor has one (the truck), but that is clearly for marketing purposes, so at least it makes sense.
jeffbee 3 hours ago [-]
How do we know it can't be explained by self-selecting driver population? That sounds like the most likely explanation, and it's the only explanation advanced by the article you provided.
post-it 2 hours ago [-]
I guess there's something to be said for "hey, if you're considering buying a Tesla, you may be the kind of person who's likely to kill themselves in a car crash. Consider buying a safer car or taking the bus!"
Forgeties79 2 hours ago [-]
Reminds me of the first episode of Mad Men, where the guy pitches appealing to everyone’s “inherent death wish” when selling cigarettes haha
“That’s it? If you’re gonna die, die with us?”
dangus 2 hours ago [-]
Who would have guessed that a vehicle with no turn signal stalk or physical control to shift gears is unsafe!
Tesla sells too many vehicles for it to be a “self selecting driver population” thing anymore. They sell almost as many Model Ys as Honda CRVs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
infecto 2 hours ago [-]
They don’t; these are the anti-Tesla folks. No level of reasoning is available in discussions like this.
I don’t like Elon but I also don’t think fiction and misleading stats serve anyone.
ymolodtsov 2 hours ago [-]
We're talking about a brand whose every car has at least 350HP, and most of them have more.
It's not an apples-to-oranges comparison.
dangus 2 hours ago [-]
So why is Dodge better on the list? Most Dodge models sold are rear wheel drive performance cars. They basically only sell the Challenger/Charger and the Hornet SUV that nobody’s buying.
The lengths people will go to defend Tesla continue to astound me. Can’t we just say that they suck without making excuses for them?
friendzis 2 hours ago [-]
> I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
rvz 2 hours ago [-]
The Tesla fans fell for it again.
The Fools Self Driving (FSD) contraption once again revealed as a scam and continues to be pushed onto their fans as a "self-driving" capability.
If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?
x187463 2 hours ago [-]
This article specifically mentions "Autopilot", not FSD. I'll call out Tesla for BS as much as the next person and I own no stock, but FSD (Supervised) is exactly what it says. There's no aspect of vehicle operation that isn't controlled by FSD, but it must be supervised.
dham 2 hours ago [-]
Here we go again. Autopilot != FSD. Autopilot is not "autonomous" driving. It's lane keeping with adaptive cruise control, the same system Honda, Toyota, etc. have. Yes the naming is wrong, the marketing is bad, but I don't see it as much worse than Toyota Safety Sense. If you use it to be "safe" you're going to swerve off the highway into a ditch. I used Super Cruise from GM in my friend's SUV; as soon as the lane markers went away on a bridge, I almost hit the railing.
I'll get downvoted but just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name because autopilot in planes can't see and avoid obstacles either.
idop 1 hours ago [-]
Elon himself uses both terms interchangeably[1], and the two reportedly use the same stack, so why shouldn't we conflate the terms?
Can you explain why that makes it ok to cover up accidents and lie about the recordings of the event being corrupted?
dv_dt 2 hours ago [-]
The news isn't necessarily of the effectiveness of the particular tech stack, but the integrity, or lack thereof of the manufacturer in reporting incidents. If that is in question, assessing the effectiveness of any of Tesla's tech stacks fsd or autonomy, or taxis for driving is in doubt.
Glemllksdf 2 hours ago [-]
I don't get it?
If autopilot was misleading, full self driving is too?
Rohansi 2 hours ago [-]
Autopilot is completely different software from FSD. If you think FSD is stupid then Autopilot is worse because it won't do anything other than stay in the same lane and adjust speed to the car in front of you.
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
x187463 2 hours ago [-]
The difference is FSD is properly annotated as (Supervised) and does exactly that. Autopilot does not 'autopilot' the vehicle by any reasonable measure.
Glemllksdf 1 hours ago [-]
Supervised self driving would be correct. I don't think I was aware of the (Supervised) before your comment tbh.
freejazz 47 minutes ago [-]
FSD (not)
estimator7292 2 hours ago [-]
How about the fact that Tesla is killing people and covering it up?
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
x187463 1 hours ago [-]
It's not a semantic issue, FSD is a completely different system, but many people mix up the terms when discussing these systems due to poor naming. Autopilot is just cruise control and lane keep. FSD handles navigation and full vehicle control. Articles discussing the dangers of Autopilot are making perfectly reasonable claims about a system which was poorly named/marketed, but they are not meaningfully relevant to conversations about FSD.
buellerbueller 2 hours ago [-]
Here we go again; Musk fanboy to the rescue!
trymas 2 hours ago [-]
IMHO you're shifting goal posts (and I am not downvoting).
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It's selling "Autopilot" for $8k (now with a subscription AFAIK), with a pinky promise that "soon" or "next year" or "after two weeks" (jk) you essentially will set a destination, go to sleep and wake up at destination[1].
It's the same as saying that "LLM != AI" and arguing that "ChatGPT is not AI - it's a glorified statistics model that is good at creating human-sounding text". Yeah - you and I understand this - but the average guy most likely does not and will get burned, because dozens of tech-bros are burning billions of dollars trying to convince everyone it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, though I won't spend time digging for quotes, but my main point is that's what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what levels of driving assistance there are.
x187463 2 hours ago [-]
"Autopilot" is not $8K, that's FSD. Autopilot was the default cruise control/lane keep software and was renamed "Traffic Aware Cruise Control" a few months ago. The original name was ridiculously misleading.
[1] https://www.ncdd.com/images/blog/diagram.png
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed for the lane you are in, unlike my example.
I wonder what's taught to new drivers about this sort of situation. My intuitive feeling (driving for almost 30 years) is you drive with the flow of traffic when traffic is present. I don't see too many left lane drivers glued to speed limits, but it's obvious when someone is fast or slow.
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue, and it becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
I was watching the Tesla display on my way back home from LaGuardia airport last week (passenger, not driver).
No accidents or close calls, but it was obvious that I might be focused on 1 or 2 things in that very busy and chaotic environment whereas the car (FSD or otherwise) sees more than 2 things and possibly avoids something on my behalf.
I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.
(Jumping a light in the dark? Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter? I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.)
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me have to try and guess what the car is going to do at any given time is adding complexity to the process: am I changing lanes now, oh I guess I am because the autonomy thinks we should etc.
I agree that there are situations where what I do as a trained driver is different from augmented cruise.
A good example (or perhaps I'm wrong) is this: in a lane, a car pulls into the lane in front of me, between me and the car further ahead. Now I don't have enough space between me and that new entrant. But instead of using the brakes (unless it's egregious), I bleed speed until I make the space I want. Augmented cruise doesn't do that - it hits the brakes.
So, from behind, I think it looks like I'm using my brakes a lot more than I am when on augmented cruise. And excessive brake use distracts the driver behind me.
Too bad that project failed.
https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
https://support.google.com/waymo/answer/9059119?hl=en
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
That's still not a good look.
And it does mean that FSD isn't to be as trusted as it is because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control will be disabled while the computer and electronics are still perfectly operational, to make the car more secure for the passengers and first responders after the event.
EDIT: IIRC the threshold for disengagement is 1s.
> It's well known for a while now, and it's not to avoid recording being active, it's to prevent a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and wheels keep spinning at full speed while first responders try to secure the car?
That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.
For 1) it's the first time I hear it from a technical point of view - Tesla's dashcam records the last 10 minutes continuously, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible AFAIR (I'm an owner). IIRC it's not the first case though where Tesla claimed the data wasn't available or was corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but I also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.
2) I often hear it as FUD, as in: AP/FSD was off, the user just did it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had the data released and showed some of the following: user touched steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), user was pressing accelerator pedal by accident, user was pressing accelerator instead of brake, etc etc
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver assistance systems return control to the driver, and its impact on driver response times and overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception–response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus that they're now barreling down on.
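The re-engagement penalty is easy to put rough numbers on. A back-of-envelope sketch in Python, where the 1.5 s "alert driver" and 2.5 s "re-engaging driver" figures and the 70 mph speed are illustrative assumptions, not measured data:

```python
# Back-of-envelope: how much road a response-time difference eats at highway
# speed. The 1.5 s vs 2.5 s figures and 70 mph are illustrative assumptions.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def distance_during_response(speed_mph: float, response_s: float) -> float:
    """Meters traveled before any braking even begins."""
    return speed_mph * MPH_TO_MS * response_s

for label, t in [("alert driver", 1.5), ("re-engaging driver", 2.5)]:
    d = distance_during_response(70, t)
    print(f"{label}: {d:.0f} m of travel before braking starts")
# alert driver: 47 m of travel before braking starts
# re-engaging driver: 78 m of travel before braking starts
```

One extra second of re-engagement at highway speed is roughly 31 m of road, before any braking distance is counted at all.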
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver has slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive during normal braking.
0. https://www.tesla.com/fsd/safety#:~:text=within five seconds
Individual tragic anecdotal incidents aside, the vagueness of the article really dilutes the merit of the claims.
- The documentary is from the RTS. The RTS is the main publicly owned broadcaster in the French-speaking part of Switzerland. They are not the typical European publicly owned media: they are generally pretty well funded (contrary to most), they tend to produce good (high-quality) content, and they tend to be independent and rather neutral (leaning slightly to the left politically speaking).
- The video is in French because, in Switzerland, the public media are divided into three groups matching the regional languages: RTS for French, SRF for German and RSI for Italian. That's why you do get a German translation.
- They are generally pretty cooperative and open minded. If one of you wants to submit English subtitles, just contact them; they might accept it (I do not promise anything).
I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
Also, they don't tout a single party line.
The quality of European publicly owned media is highly country-specific and varies quite a lot:
- Some of them are critically underfunded, and it becomes visible (a tendency toward cheap sensationalism, superficial investigation or recycled content).
- Some of them are politically rooted (left or right) or controlled through direct/indirect government involvement.
But all considered, I would say the average is still an order of magnitude better in terms of content quality and independence than the average private media.
I can say the same about the foreign bureaus of state-owned media outfits like Deutsche Welle and Radio France Internationale; both actively rooted for the Romanian presidential candidate seen as closer to German and French interests (I'm talking about the last couple of rounds of Romanian presidential elections).
Have you been in a Waymo? SAE Level 4 is here, and it’s safer than humans [1].
[1] https://waymo.com/safety/impact/
Vastly VASTLY prefer Waymo. It's very good at its mission and is, at minimum, infinitely better than being in an Uber rideshare. I'd rather wait 20 minutes for a Waymo than 5 for any Uber or 0 to use my own car.
Ironically, Waymo got me much more interested in using my city's public transportation offering which is much better than I previously thought.
That said, Tesla FSD v14 is the best autonomous option for a supervised system that you can actually use.
I haven't seen a good criticism of their methodology. If you have one, I'd be curious to hear it.
On a more-direct measure, Waymos have had starkly lower fatalities and at-risk incidents than human drivers on average and, I think, near their best.
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What ratio between "bodies of your own kids" and "other human bodies you have no connection with" should a "proper" AI controlling a car YOU purchased be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, in the case of a ratio of 2 for example, you would require 2 nonfamiliar deaths to justify losing one of your own kids
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
There's plenty we could talk about: i.e. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR etc.
Instead we got "what if the car calculates the trolley problem against you?"
And observationally, it's proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where slamming on the brakes happens far too late, but you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up at: assuming an AI is fast enough to detect the situation, either it's fast enough that it can always stop the car with the brakes, or no level of aggressive manoeuvring would avoid the collision anyway.
Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and that's even more true when an AI can brake sooner and harder, if it's smart enough to even consider other options.
Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".
You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you -- the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".
If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it will be clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65mph takes around 5 seconds to stop, so if it's halfway through its stopping distance, you've got around 2.5 seconds to maneuver out of the way).
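Those rough numbers check out against a constant-deceleration model. A sketch under assumptions (constant braking force; the 65 mph speed and ~5 s stop are taken from the comment above, everything else is derived):

```python
# Constant-deceleration sketch of the "truck barreling toward you" numbers.
# Assumptions: braking is constant deceleration; 65 mph with a ~5 s stop.

from math import sqrt

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def stop_profile(v0_ms: float, stop_time_s: float, distance_fraction: float):
    """Return (total stopping distance, time elapsed, time remaining) once the
    truck has covered `distance_fraction` of its stopping distance."""
    total_dist = v0_ms * stop_time_s / 2  # area under the speed-vs-time triangle
    # Distance covered: d(t) = v0*t - (v0/T)*t^2/2. Setting d(t) = f*D and
    # solving the quadratic gives t = T*(1 - sqrt(1 - f)).
    t_elapsed = stop_time_s * (1 - sqrt(1 - distance_fraction))
    return total_dist, t_elapsed, stop_time_s - t_elapsed

v0 = 65 * MPH_TO_MS  # ~29.1 m/s
dist, elapsed, remaining = stop_profile(v0, 5.0, 0.5)
print(f"stopping distance ~ {dist:.0f} m")        # ~ 73 m
print(f"time left to react ~ {remaining:.1f} s")  # ~ 3.5 s
```

Interestingly, because the truck sheds most of its speed late, being halfway through the stopping *distance* leaves about 3.5 s of the 5 s stop, a bit more margin than the comment's 2.5 s guess.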
That does leave you all sorts of room to come up with realistic trolley problems.
But all require a human (or malicious) driver on one hand. The more rule-following AVs on the road, the fewer the opportunities for such trolley problems.
And I'd still argue that debating these ex ante is, while philosophically fascinating, not a practical discussion. I'm not seeing a case where one would code anything further than collision avoidance and e.g. pre-activating restraints.
What is the lowest likelihood of your own death you'd find acceptable in this situation?
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
[1] https://www.cdc.gov/nchs/fastats/accidental-injury.htm
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
As for me I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
The headline says - "How Tesla hid accidents to test its Autopilot" but the actual article has no explanation as to (1) how Tesla hid anything or, for that matter, (2) who did Tesla hide this information from
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
This is something I find frequently as well, moreso with Musk related things than Tesla. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, a smaller number of allegations backed by concrete claims serves both the public and faith in media better than large quantities of vague ones.
I admit if you want to sway public opinion, the latter is more effective, but is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
FSD has built this generation's newest children of the magenta line.
https://www.youtube.com/watch?v=5ESJH1NLMLs
Or LLM users.
It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
If there was a significant problem, my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not.
You’re correct inasmuch as we have no evidence there is “a significant problem.” But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
Why? They only pay out if you’re at fault. And if there aren’t final judgements in a deep pipeline of cases, premiums wouldn’t have a reason to adjust yet.
I am also assuming that a collision involving a Tesla has at fault determinations that are more accurate than other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence insurance companies would be charging higher liability only insurance premiums.
edit to respond to Forgeties79:
> The issue is they are potentially lying. It’s why we are even having this discussion. The numbers could be fraudulent
When your vehicle gets into a collision, no one contacts the auto manufacturer about who was at fault. Suppose two cars collide. The police write a report, collect evidence, maybe the drivers submit their video recordings to the insurer.
But no one is calling Tesla and asking them to determine who was at fault. And if they did, Tesla would say we never agreed to be liable, and the driver should have been paying attention. There is no way to escape that if it was costing insurers more to insure liability for a Tesla, they would be asking for higher premiums.
Whether or not Tesla is lying to the government or whoever is irrelevant for the goal of determining if Teslas cause more damage than other vehicle brands.
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over indexing how much work liability insurers do. I have an umbrella policy. It absolutely doesn’t take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped and it’s usually easier to weed out by claims history than running models on small premiums.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as any other auto brand’s. I don’t think Ford/Toyota/anyone else’s software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
Fair enough. I agree with you in the long run. I just don't think we've seen the litigation that will define liability play out yet.
> Does the data from Tesla even come into play for an insurer?
Directly? No. At least, not unless AI actuaries make the work worth the while.
For juries calculating damages? Plaintiffs weighing whether to bring a case? Sure. That, in turn, plays into liability. And that is something insurers care about.
> In the meantime, a collision with a Tesla is the same as any other auto brand’s
In the meantime, yes. If collisions with Teslas predictably result in larger damages than with other brands, you'd expect to see more litigation when a Tesla is involved/suspected at fault, and with that, higher costs.
> No auto brand picks up the liability for the driver
Tesla has been assigned liability already [1].
[1] https://law.marquette.edu/facultyblog/2025/08/jury-awards-24...
And unfortunately, musk has earned people’s default skeptical stance towards him.
Not all companies do illegal things.
IMO it’s also a distraction to blame it on “capitalism” or some “larger trend” rather than just pointing directly at the company and people responsible.
“The system is broken” line hasn’t worked for years now. Maybe if we stop blaming the system and start blaming the people?
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
This terrible statistic can’t just be explained by aggressive driving owners or some other factor like that. Dodge has plenty of aggressive drivers buying their 700HP V8 rear wheel drive vehicles but they have better fatal accident rates than Tesla.
I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only “ahead” of the competition because the competition is unwilling to sell an unsafe system.
Basically the same as Kia. Why are Kias so bad?
Kia markets way smaller and cheaper cars with fewer safety features. Tesla had front-page news at some point saying they were the safest cars ever produced.
Tesla is giving people driving their cars a false sense of security.
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
This would affect both driver selection and performance during impact
Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla
Tesla stans tell us they're the most luxurious and best-built cars on the road; in reality they're built as poorly as an economy brand with a reputation for low quality, for people who don't want to pay for a Toyota.
Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
I don’t know what’s so hard to believe about the study. Tesla’s numbers are pretty similar to other low-performing brands.
https://en.wikipedia.org/wiki/ISeeCars.com#Partnerships
https://x.com/larsmoravy/status/1860100416819855492
Looking for more. tl;dr is that NHTSA publishes accident rates but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at mileage for Teslas in the period in question. Their methodology was not released, and their mileage estimate was a fraction of the total mileage that Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced which feels indicative of a bad safety culture.
“That’s it? If you’re gonna die, die with us?”
Tesla sells too many vehicles for it to be a “self selecting driver population” thing anymore. They sell almost as many Model Ys as Honda CRVs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
I don’t like Elon but I also don’t think fiction and misleading stats serve anyone.
It's not an apples-to-oranges comparison.
The lengths people will go to defend Tesla continue to astound me. Can’t we just say that they suck without making excuses for them?
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
The Fools Self Driving (FSD) contraption once again revealed as a scam and continues to be pushed onto their fans as a "self-driving" capability.
If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?
I'll get downvoted but just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name because autopilot in planes can't see and avoid obstacles either.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
If autopilot was misleading, full self driving is too?
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It's selling "Autopilot" for $8k (now with a subscription AFAIK), with a pinky promise that "soon" or "next year" or "after two weeks" (jk) you essentially will set a destination, go to sleep and wake up at destination[1].
It's the same as saying that "LLM != AI" and arguing that "ChatGPT is not AI - it's a glorified statistics model that is good at creating human-sounding text". Yeah - you and I understand this - but the average guy most likely does not and will get burned by this, because a dozen tech-bros are burning billions of dollars trying to convince everyone that it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that's what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what levels of driving assist there are.