When Two Worlds Collide – Almost
OK, not quite two worlds, but two autonomous vehicles did almost collide.
Autonomous vehicles are getting their fair share of the press of late, but this time it was for less positive reasons. Apparently two autonomous vehicles had a ‘near miss’ when an Audi Q5 crossover, equipped for autonomous driving with human backup, almost met a Lexus crossover with the same idea. The Audi, one of two prototype vehicles from Delphi Automotive Inc, was about to change lanes when it had to take evasive action to avoid a collision with the Google-owned Lexus.
Some reports say that this is exactly the type of incident this technology sets out to avoid. However, these are early days for the technology, and even with the best testing facilities in the world there is nothing like real life for ironing out software bugs.
That does give rise to an interesting situation that happens on the roads every day: two drivers may spot a gap to move into and both make the decision to go at the same time. This is a normal process for humans to deal with, and I am sure it will become normal for autonomous vehicles as the technology matures. One thing did spring to mind though, and that is legacy Ethernet collision detection.
“Everything worked reasonably well until the traffic reached a critical level and then everything fell apart.”
On legacy half-duplex Ethernet systems, the protocol that allowed data traffic to flow fairly relied on an algorithm to force devices to back off and wait to see if another device started to send. Everything worked reasonably well until the traffic reached a critical level and then everything fell apart.
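The back-off behaviour in question is the truncated binary exponential backoff of classic CSMA/CD Ethernet: after each collision a station waits a random number of slot times, and the range it picks from doubles with every repeat collision. A rough sketch of that rule (a simplified model for illustration, not the actual IEEE 802.3 implementation):

```python
import random

SLOT_TIME_BITS = 512       # one slot time on 10 Mb/s Ethernet, in bit times
MAX_BACKOFF_EXPONENT = 10  # the random range stops doubling after 10 collisions
MAX_ATTEMPTS = 16          # after 16 collisions the frame is abandoned

def backoff_delay(collision_count: int) -> int:
    """Wait time (in bit times) after the n-th successive collision,
    following truncated binary exponential backoff."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("Too many collisions: frame dropped")
    exponent = min(collision_count, MAX_BACKOFF_EXPONENT)
    slots = random.randrange(2 ** exponent)  # pick k in [0, 2^n - 1]
    return slots * SLOT_TIME_BITS
```

On a quiet network, collisions are rare and the random delays resolve them quickly; but as traffic climbs, repeat collisions push stations into ever-longer waits, which is exactly the “fell apart” behaviour described above.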
It made me wonder what would happen on the roads during heavily congested periods. Would decision time delays mount up to the point where the roads became completely gridlocked?
Although this is a very interesting and challenging problem, it pales into insignificance when you consider the decisions autonomous vehicles will inevitably have to take if they are to be let loose on our roads. At some point in the future an autonomous vehicle will make a decision as to whom to risk in the event of an impending accident. If a collision is inevitable there may still be choices as to which way to steer the vehicle. If all choices lead to potential loss of life, the decision as to who will die will lie with the vehicle. A very sobering thought.
“…machines that can make a decision as to who will live and who will die?”
In the aftermath of an accident where there have been fatalities, who is responsible? The autonomous vehicle is a machine that has been programmed by humans. How far back will the investigation go? I am sure this is going to cause many headaches for developers, standards bodies and governments in the future. Who is going to sign off a system with machines that can make a decision as to who will live and who will die?