Mussels wrote: ↑Mon Nov 01, 2021 7:33 pm
but I'm dubious about how they selected which accidents to reproduce.
A census of fatal, human-involved collisions was examined for years 2008 through 2017 for Chandler, AZ, which overlaps the current geographic ODD of the Waymo One fully automated ride-hailing service. Crash reconstructions were performed on all available fatal collisions that involved a passenger vehicle as one of the first collision partners and an available map in this ODD to determine the pre-impact kinematics of the vehicles involved in the original crashes.
What about non-fatal crashes? Just because a crash DIDN'T result in a fatality is no reason to exclude it!
No idea (you could contact them and ask).
However, in the UK, fatals are investigated more thoroughly - as per the video, where the simulations were based on scenarios from the DfT 'RAIDS' files. The level of detail required may not be provided by STATS19 forms.
Also, at the other end of the severity scale, there are systems operating on many vehicles which compare how real drivers and AVs would each deal with the same driving situations.
Cousin Jack wrote: ↑Mon Nov 01, 2021 10:56 pm
I have never been to Phoenix, Arizona, but I have been to Arizona. Empty is the word that springs to mind.
A 'successful' autonomous system in Phoenix may be useless in London (and perhaps vice versa)
Well, if I wanted to test any sort of automated system that was intended to cope with a range of environments, I'd probably start with simpler tasks in a less complicated environment. And actually, I have (well, I was there when it was demonstrated).
My link ^ above - the problem with that approach tends to be that it all has to be mapped first, and revised for e.g. road changes. With AV 2.0, you upscale the visuals to deal with the world as-is, rather than as it should be.
Tasks are not necessarily tied to mapped environments. Certainly the system I rode in wasn't.
It is an interesting approach, and much like the way humans drive in a strange environment.
I would be interested to see how it coped with an extreme rural (narrow roads, etc.) environment.
Cousin Jack wrote: ↑Tue Nov 02, 2021 9:07 am
It is an interesting approach, and much like the way humans drive in a strange environment.
I would be interested to see how it coped with an extreme rural (narrow roads, etc.) environment.
One proposed location for an autonomous bus was, I think, Alderney, or another smaller island.
That's the real test, alright - where local knowledge plus custom and practice make a system work for human drivers.
And where, potentially, V2V communication could inform the whole fleet.
DW. wrote: ↑Tue Nov 02, 2021 10:54 am
So, I've never seen this shit over here.
If the cars "self driving", does that mean you can have a few beers and not get done for DD ?
Going all Dan Dare etc., a full L5 vehicle might not have any easily accessible controls, so it might be difficult to be done for being in control of the vehicle.
I don't know whether there has been any suggestion that the autonomous driving system would have AI to block those "hold my beer" ideas.
If it could open my beer for me as well, I'd be really impressed.
If you're that pished, you'll need the hose-clean stainless steel interior
The 'Mr Creosote' trim level. With waffer-thin meent.
In this post, we discuss the results of a recent multi-city generalisation test, conducted to explore how we are building the most scalable approach to autonomous driving.
To build autonomous driving technology that can easily scale to new markets, we are pioneering a data-driven approach to self-driving. At the heart of our AV2.0 platform is a fully learned end-to-end motion planner that can quickly and safely adapt to complex driving environments, anywhere in the world. Our full AV2.0 platform consists of a camera-led sensor suite, an end-to-end neural motion planner and an autonomous driving system designed with safety and redundancy in mind.
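For anyone wondering what a "fully learned end-to-end motion planner" looks like in code, here is a minimal sketch, assuming a single camera input and a fixed number of output waypoints. It is purely illustrative: Wayve's actual architecture is not public, and the layer sizes here are invented.

```python
import torch
import torch.nn as nn

class EndToEndPlanner(nn.Module):
    """Toy end-to-end planner: camera frames in, trajectory waypoints out."""

    def __init__(self, n_waypoints: int = 10):
        super().__init__()
        self.n_waypoints = n_waypoints
        # Convolutional encoder: camera image -> compact scene embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head: scene embedding -> (x, y) waypoints of a motion plan.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, n_waypoints * 2),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) -> waypoints: (B, n_waypoints, 2)
        z = self.encoder(image)
        return self.head(z).view(-1, self.n_waypoints, 2)
```

The key property is that there are no hand-engineered perception or prediction interfaces in between: the whole mapping from pixels to plan is one differentiable function.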
Limits of the traditional approach
It is interesting to contrast AV2.0 with what is widely used in the AV industry today, which we call AV1.0. The traditional approach is a modular perception-prediction-planner stack derived from classical robotics principles. AV1.0 is, broadly speaking, motivated by the general principle that if perception is solved then motion planning is easy. Unfortunately, this has yet to be proven, despite years of engineering effort and billions in investment by numerous companies.
Well-engineered AV1.0 stacks that follow this modular design principle have the benefit that each of the perception, planning, and control modules can be engineered independently and in parallel. However, these stacks are very expensive to design, adapt and maintain, and are reliant on expensive hardware, HD mapping, and localization systems. They are also brittle, as they place extremely high demands on the sensing and perception modules. Furthermore, the interfaces between the fundamental modules need constant adaptation, and errors propagate throughout the stack. These planners, although evolving from classical algorithms to more data-driven ones, still suffer from perception and localization errors.
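To make the contrast concrete, here is a toy sketch of the AV1.0 hand-offs described above. The function names and stubbed bodies are hypothetical, for illustration only; the point is the rigid chain of interfaces through which errors propagate.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str                        # e.g. "car", "pedestrian"
    position: tuple[float, float]    # (x, y) in the map frame

def perceive(sensor_frame) -> list[DetectedObject]:
    # Stub: a real stack fuses lidar/camera/radar against an HD map.
    return [DetectedObject("car", (12.0, 3.5))]

def predict(objects: list[DetectedObject]) -> list[tuple[float, float]]:
    # Stub: forecast where each detected object will be next.
    return [(x + 5.0, y) for (x, y) in (o.position for o in objects)]

def plan(forecasts: list[tuple[float, float]], hd_map) -> str:
    # Stub: pick a manoeuvre on the mapped lanes that avoids the forecasts.
    return "follow_lane" if forecasts else "proceed"

def drive(sensor_frame, hd_map) -> str:
    # The rigid chain: a pedestrian missed by perceive() simply does not
    # exist for predict() or plan(), and no later stage can recover it.
    return plan(predict(perceive(sensor_frame)), hd_map)
```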
Our alternative vision
We reframe the driving problem as one that can be solved fully with machine learning, i.e., jointly learning to represent the driving scene and to plan motion using a deep neural network trained on large quantities of human driving demonstrations. This approach enables us to build an autonomous mobility platform that can quickly and safely adapt to new cities, use-cases, and vehicle types, which is a core promise of AV2.0. Achieving this is game-changing for scaling autonomy: it means we can deploy in new markets faster and at substantially lower cost, moving us closer to our goal of being the first to bring AVs to 100 cities.
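As a rough illustration of what training on human driving demonstrations can look like, here is a behavioural-cloning sketch reusing the hypothetical EndToEndPlanner above. The dataset stand-in is fabricated; a real system would train on logged drives, and Wayve's actual training pipeline is not public.

```python
import torch

# Stand-in for a DataLoader over logged human drives: each batch pairs
# camera frames with the (x, y) waypoints the human driver then followed.
demo_loader = [(torch.randn(8, 3, 96, 96), torch.randn(8, 10, 2))
               for _ in range(100)]

model = EndToEndPlanner(n_waypoints=10)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, human_waypoints in demo_loader:
    predicted = model(images)                  # (8, 10, 2)
    # Behavioural cloning: regress predicted waypoints onto the human's.
    loss = torch.nn.functional.l1_loss(predicted, human_waypoints)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```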
Developing AVs that can easily drive in new places is significantly different from current AV industry methods, which require time-consuming and expensive city-specific adaptations such as building and maintaining highly detailed, customized HD maps for every road driven.
To deploy in a new location, an AV1.0 team starts by manually driving sensor-equipped vehicles down every street to paint a 3D picture of the environment, down to the centimeter. They process this data into a detailed map with additional context such as speed limits, lanes, and traffic light locations. The maps are then tested and verified before being deployed, and constantly updated as city streets change. With the environment mapped, next comes the challenge of adapting the behavior planning to the new driving culture and environment. This is notably difficult, even when moving between two seemingly similar environments. In practice, the solution looks like an engineering team redesigning components of a large, complex planner, a process that takes months.
In contrast, at Wayve, we are building AVs that generalise: ones intelligent enough not to need these cumbersome HD maps. The world is constantly changing, so we need to be able to adapt and drive anywhere. What we mean by this is that we can train our AV2.0 system to drive autonomously on, say, London roads, and it can then apply this acquired driving skill to new, unseen places and cities without any place- or city-specific adaptation.
How did we test this?
To demonstrate this capability, we recently conducted a multi-city generalisation test, taking our best-performing AV2.0 model to five cities across the UK that we had never previously visited. The goal was to see whether our AV2.0 model, trained in London, could generalise its driving intelligence to new cities, with no prior data collection to influence model performance there.
Cousin Jack wrote: ↑Mon Nov 01, 2021 10:56 pm
I have never been to Phoenix, Arizona, but I have been to Arizona. Empty is the word that springs to mind.
...
A 'successful' autonomous system in Phoenix may be useless in London (and perhaps vice versa)
It can drive in a strange city, but can it manage really narrow roads, where reversing to a gateway/wider bit is the norm? Can it tell the difference between a solid verge/gateway and a soft/muddy one? Does it know how far back the last suitable passing place was? Can it recognize that, although the last passing place behind it is 1/2 mile back, the oncoming car and caravan will make a pig's ear of reversing 100 yards, so it's best to reverse the 1/2 mile? Can it negotiate a give-way junction where the sign is in a holly bush and the line wore away 10 years ago? Can it tell the difference between grass/soft weeds and Cornish hedges that are made from granite? All rather normal around here.
In some places, like northern Scotland, the roads are very narrow and there are extra problems. Can it detect that oncoming car a mile away, count the passing places (which may or may not be marked with a sign), and pull into a suitable one and wait? Or not, if the oncoming car pulls in and flashes its lights? Can it recognize an HGV coming and pull into a passing place on the wrong side of the road so that the vehicles pass nearside to nearside, because the passing places are too short for the HGV? All day-to-day situations 2 months ago when I was there.
And of course (and this happened to me about 4 weeks ago), can it recognize a police car blocking a road with its blues on, re-route itself with no diversion signage, and use intelligence/local knowledge to re-route far enough to avoid coming back onto the closed road, whilst not taking the alternative 'main' road that would add 20 miles to the journey? And will it remember that police car when returning 1/2 hour later, and forget it again the following day?
Don't get me wrong, I would actually like a fully autonomous vehicle that I could drive to the pub, and it could drive me home again, but I am not at all convinced that we are 'nearly there'.