Driverless Car Investment
In November 2017 the UK government announced it was committing £100 million to an industry-matched investment fund designed to encourage the testing and development of driverless car technology on its roads. The UK currently has two testing centres, and the fund will work to combine them into a central hub.
This is a response to the numerous private companies that have invested heavily in bringing driverless cars to market and will need to carry out extensive testing before that can be achieved. Volvo has already tested on UK roads, and the move is intended to attract more manufacturers in future.
It may have something to do with the projected value of the driverless car market that things are picking up pace so rapidly. One report has stated the global market for these cars will be worth £907 billion by 2035, and nobody wants to miss out on their slice of that colossal pie.
Car manufacturers are spending vast sums investing in firms with expertise in artificial intelligence, automation and tracking systems, with many partnerships and joint ventures taking place. As an example, Ford, seen as one of the leaders in the race, has recently invested $1 billion in Argo AI, alongside other significant outlays.
In Europe, Daimler/Mercedes, the Volkswagen Group and the Renault-Nissan Alliance are also at the forefront of bringing driverless cars to market, all of whom have made significant progress in taking the technology mainstream through successful real-world testing.
It is now very likely we will see driverless cars on our roads before the end of the decade, and perhaps even sooner. Waymo, Google’s self-driving car project, plans to launch a robot taxi service by the end of 2017, and it holds a distinct advantage over other firms in being able to manufacture its Lidar tracking systems, widely seen as necessary, in house. These sensors are notoriously expensive, costing as much as $80,000 per vehicle, and are seen as a real obstacle to driverless cars becoming accessible to the wider public. It is a very good thing, then, that Waymo can build them itself for around 10% of that cost.
Despite many successful tests and thousands of miles travelled without incident, there remains a distinct lack of trust in the technology in general. People’s reluctance to travel aboard trains and planes without a driver or pilot present is well documented, despite automation being safer there and those vehicles being very suitable platforms for it, as they operate in far more predictable environments than roads.
What testing has revealed is that the technology is so good at doing the job nearly all of the time that, on the rare occasion the car meets a situation it cannot resolve itself and needs to cede control back to the human operator, that person is usually so distracted (sleeping, eating, or talking to someone else) that they are completely unprepared to act.
This has already resulted in a fatal accident involving a Tesla Model S. The Model S has a feature called Autopilot that can take over driving on the highway, where less driver input is needed. It is not a comprehensive self-driving technology, and is better understood as a highly advanced cruise control.
In this instance a truck pulled out across the vehicle’s path, and its white trailer against a bright sky left the Tesla’s sensors blind to its presence. The car continued at full speed underneath the trailer, losing its roof in the process, before carrying on and crashing into a power pole.
Beyond the obvious failing in the technology, this tragic accident highlights a dilemma it poses. Having a driver present in this case was not enough: even though he was there, he did not react to prevent the accident, trusting the car to respond in time.
Should he not have trusted the car? But then what? If that is the lesson, the technology becomes redundant, as we would constantly intervene before ever allowing the car to act. For the system to work we surely have to trust it to some extent, but the level of trust needed is a delicate balance to maintain, and incidents like these plant doubts in our minds that are hard to forget.
The problem is that the technology struggles to anticipate the unpredictable – which is us. The human element is the biggest obstacle a driverless car has to overcome. It is more than capable of operating the car, but second-guessing what other (human) drivers are going to do, often in violation of the very traffic laws the system bases its decision making on, is extremely difficult.
The obvious solution would be to make all cars driverless, removing the unpredictability the systems struggle with, but an overnight switch just isn’t possible. An incremental rollout is the best we can hope for, assuming the technology becomes accessible enough to be adopted by the masses.
So are we simply to accept that these sorts of accidents will continue to happen while the technology is phased in and improved? History suggests so: most technology is introduced gradually over time, with plenty of mishaps along the way.
The question then is: in the event of an accident, who would be liable if the vehicle was driving itself? Can a driver really be held responsible if he or she wasn’t actually driving the vehicle at the time? And how would you determine whether they should have intervened?
These are not easy questions to resolve.
If we say yes, the driver is responsible, we are also saying that responsibility rests with an owner in general, regardless of whether they had any control over or involvement in a given situation. This throws up all sorts of legal conflicts in other areas, and creates a potential nightmare for health and safety adherence, regulation and liability.
If we say no, you might find it hard to convince your car manufacturer to cough up for your repair bills – and if the accident results in something more serious, the uncertainty could have grave repercussions for the unfortunate individuals involved.