Cousin Jack wrote: ↑Mon Apr 19, 2021 7:31 pm
Horse wrote: ↑Mon Apr 19, 2021 8:57 am
So what if the driver isn't expected to take back control
instantaneously? What if the system were designed to operate on its own for several - 'many' even - seconds? What if it won't commit itself to a situation that it can't cope with?
The only way to be certain of "not committing itself to a situation that it can't cope with" is to refuse to drive. Period.
Indeed. As Spin will happily remind you (if he can stop himself from typing 'Tesla'), there's no such thing as 'safe' (the MSF courses I taught in the 1990s made exactly the same point). Which means that, as you know, it's all about choosing an acceptable level of risk.
Funnily enough, there have been people working on whether an increase in risk might be temporarily acceptable. The only example I can remember from the presentation (I heard it years ago) was an AV travelling in lane 1 moving into lane 2 to make room for a merging vehicle, but having to fit between two vehicles already in lane 2, temporarily accepting a shorter headway.
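To make that idea concrete, here's a toy sketch of a 'temporary risk budget' - entirely my own invention, not from the presentation; the names and thresholds are illustrative. The planner normally rejects any gap shorter than its standard headway, but during a cooperative merge it accepts a shorter one for a bounded time:

```python
# Hypothetical sketch of a temporarily relaxed headway rule.
# All names and numbers are illustrative, not from any real AV stack.
from typing import Optional

NORMAL_HEADWAY_S = 2.0    # seconds of headway normally required
RELAXED_HEADWAY_S = 1.2   # temporarily acceptable while a merge completes
RELAX_WINDOW_S = 5.0      # how long the relaxed limit may be held

def min_headway(time_since_merge_started: Optional[float]) -> float:
    """Minimum acceptable headway in seconds.

    If a cooperative merge started less than RELAX_WINDOW_S ago,
    accept the shorter headway; otherwise insist on the normal one.
    """
    if (time_since_merge_started is not None
            and time_since_merge_started < RELAX_WINDOW_S):
        return RELAXED_HEADWAY_S
    return NORMAL_HEADWAY_S

def gap_acceptable(gap_s: float,
                   time_since_merge_started: Optional[float] = None) -> bool:
    """True if the gap meets the currently applicable headway limit."""
    return gap_s >= min_headway(time_since_merge_started)
```

So a 1.5 s gap that would normally be rejected becomes acceptable for a few seconds while the merge plays out, then the normal limit reasserts itself.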
Cousin Jack wrote: ↑Mon Apr 19, 2021 7:31 pm
Unexpected situations are unexpected. And there are an infinite number of them too.
Yes, but ...
I don't know about cars, but let's look at bikes. Riders typically have three main 'gotcha' crashes:
- 'right of way violations', car turns out or across
- bends, loss of control, usually running wide
- overtaking (including filtering as a sub-set)
If you want more detailed figures, they're available.
And, if you watch that video I posted, you'll see that some of the work being done uses crash investigation information to test the AV software before it gets anywhere near the road. Not just that: the virtual testing allows umpteen variations of each scenario to be run. Using crash information for learning is something the 'Safety II' team will emphasise as good practice. And FWIW, I was using a 'top ten' breakdown of crashes in my 1988 advanced course theory session.
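A toy illustration of what 'umpteen variations' means in practice - my own sketch, not from the video; the scenario fields and parameter ranges are invented. Take one investigated crash as a template and sweep its parameters to generate a whole family of virtual test cases:

```python
# Toy illustration: generate scenario variations from one crash template.
# Scenario fields and parameter ranges are invented for illustration.
from itertools import product

def variations(base, speeds_kmh, gaps_m, frictions):
    """Yield one test scenario per combination of swept parameters."""
    for speed, gap, friction in product(speeds_kmh, gaps_m, frictions):
        yield dict(base, speed_kmh=speed, gap_m=gap, road_friction=friction)

# One of the classic 'gotcha' crashes as the template
base_crash = {"type": "right-of-way violation", "car_turns_across": True}

cases = list(variations(base_crash,
                        speeds_kmh=range(30, 71, 10),  # 5 approach speeds
                        gaps_m=range(10, 31, 5),       # 5 closing gaps
                        frictions=[0.4, 0.7, 1.0]))    # 3 road surfaces
# 5 * 5 * 3 = 75 virtual test cases from a single investigated crash
```

One real crash report becomes 75 simulated tests; sweep a few more parameters and the numbers climb fast, which is exactly why the virtual testing happens long before the road does.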
So 'infinite variation', perhaps. But if that were totally the case, it would scupper Spin's 'No Surprise' campaign. Which, of course, it doesn't, because the main elements are fairly easy to identify. NS even has its rhyming reminders, like 'gaps = traps' and 'can go = will go'. If a person can learn them, then a machine probably can too.
Cousin Jack wrote: ↑Mon Apr 19, 2021 7:31 pm
The only way to be certain of "not committing itself to a situation that it can't cope with" is to refuse to drive.
I recently heard someone say about their AV: "the problem wasn't stopping it, the problem was keeping it going". Yup, the system was so 'careful' that it would stop at just a hint of trouble.