Software developers will soon be seen as the new mechanics, as car manufacturers increasingly use artificial intelligence (AI) to create personalised user experiences in connected vehicles. And while the opportunity this technology presents is clearly enormous, so too are the potential pitfalls. In the quest to deliver the ultimate user experience, vehicle safety and regulation need to be scrutinised.
An announcement by Google’s Waymo in March of this year revealed that driverless ride-hailing services were being offered in San Francisco, highlighting how the entry of Google, Amazon, Apple, Microsoft, and others into the automotive segment has accelerated the use of AI technology.
Across the pond, the UK Government claims that driverless cars will be on UK roads by 2025. The short timescale raises concerns over the speed of regulatory change, and uncertainty over how autonomous vehicles (AVs) will develop. As consumer expectations for vehicle software updates evolve to resemble those for mobile phones today, with ever-increasing demand for new features, there is a heightened risk of vehicles becoming vulnerable through their source code, as seen in 2015, when security researchers shook the automotive industry by hacking into a Jeep being driven by a (consenting) tech journalist.
The pursuit of fast, global deployment of autonomous vehicles raises fears that security issues, along with premature regulation, could threaten continued innovation and investment in this sector.
AI Is the Driving Force Behind Connected Vehicles
AI is primarily being deployed in vehicles to improve the user experience: making vehicles safer, enhancing speech recognition, and improving cloud-based navigation and awareness of weather and surface conditions. But we also see original equipment manufacturers (OEMs) making use of AI technology outside of the vehicle itself, to improve supply chain management, manufacturing, vehicle design, and testing.
A deeper dive into vehicle testing reveals that the deployment of AI and machine learning (ML) has a significant impact. Not only must the base software be tested, but every variation of the datasets learned during vehicle operation needs testing as well. And in the case of Waymo’s self-driving vehicle, a complex system which connects enormous datasets from Google, real-time sensors, and external GPS mapping information, this is a meticulous process.
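To make the idea of testing "the base software plus every learned-data variant" concrete, here is a minimal, purely illustrative sketch. The physics function and the surface-condition datasets are hypothetical stand-ins, not any real OEM or Waymo test suite; the point is only that the same check must be re-run for each dataset the vehicle may operate with.

```python
# Hypothetical sketch: one base-software test repeated across several
# learned-data variants, since each deployed dataset must pass the same
# safety checks as the base software itself. All names are illustrative.

def brake_distance_m(speed_ms: float, friction_coeff: float) -> float:
    """Toy braking-distance model standing in for the software under test."""
    return speed_ms ** 2 / (2 * 9.81 * friction_coeff)

# Each entry mimics a surface-condition dataset learned during operation.
DATASET_VARIANTS = {"dry": 0.9, "wet": 0.5, "ice": 0.15}

def all_variants_pass(max_allowed_m: float) -> bool:
    """Run the identical requirement check against every dataset variant."""
    return all(
        brake_distance_m(30.0, mu) <= max_allowed_m
        for mu in DATASET_VARIANTS.values()
    )
```

A requirement that the base software meets on dry roads can silently fail once the "ice" dataset is in play, which is exactly why each learned variation needs its own pass through the test suite.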
In many of today’s AI application areas there is not necessarily a safety concern, as they are largely convenience features such as speech recognition and navigation. But even here, a navigation failure could potentially result in a collision causing injuries or fatalities. In any case, it is essential to have at least an underlying advanced driver assistance system (ADAS) safety layer to keep the AI/ML systems in check.
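The "safety layer keeping AI/ML in check" pattern can be sketched in a few lines. This is an assumption-laden illustration, not a real ADAS implementation: a deterministic, rule-based envelope clamps whatever command the ML component produces, so a misbehaving model can never exceed hard limits.

```python
# Hypothetical sketch of a rule-based safety envelope gating an ML
# planner's output. Limits and function names are illustrative only.

MAX_SPEED_MS = 33.0   # hard speed ceiling (~120 km/h)
MIN_GAP_M = 10.0      # minimum acceptable following distance

def safety_envelope(ml_speed_cmd: float, gap_to_lead_m: float) -> float:
    """Clamp the ML system's speed command to deterministic limits."""
    speed = min(ml_speed_cmd, MAX_SPEED_MS)  # never exceed the ceiling
    if gap_to_lead_m < MIN_GAP_M:
        speed = 0.0  # fall back to a conservative stop when too close
    return speed
```

The design choice is that the envelope is simple enough to verify exhaustively, so its guarantees hold regardless of what the learned components do.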
Connected Vehicle Safety in the Hands of Software Developers, Standards and Regulations
Technological developments around AI in vehicles have historically been held back by regulation, which, rather than ensuring this technology is introduced safely and securely, has typically prevented its use outright. But consumer demand means this is now changing, with the volume of connected vehicles growing 270% over the last five years. Rather than debating whether AI technology should be used, the conversation has shifted to how code complexity, regulation, and the lifespan of the implemented code affect vehicle security. And even though the primary use of AI is to improve the overall user experience, code security must not be overlooked.
Consumer demand for new features has increased the pressure to implement AI in vehicles. With this, however, comes a constant need for software updates, and without complete access to the source code, a developer can be left with areas of weakness and unaddressed security concerns. It is of utmost importance to have access to the source code for the tools and runtime software used in the development process, and equally important to have visibility into the projects you create for use in the vehicle.
Cloud-Based Technologies and Data Are the Future of Automotive Cybersecurity
Regardless of the deployment of AI/ML technologies, vehicles will continue to be increasingly reliant on cloud-based technologies and data. The cloud-based aspects of autonomous driving are important in the longer term, especially once all vehicles on the road are connected. Of course, local ADAS-like safety mechanisms based on LiDAR, radar and cameras will certainly be required to ensure system safety, from testing through to deployment.
Consumer expectations for software updates and the distribution of extended feature sets in software-defined vehicles will continue to rise. And, in light of the aggressive adoption of open-source software, the possibility of local hacking remains. Developers, the mechanics of the future, can prevent this with the correct toolkit and appropriate regulations at their disposal.
Commercially licensed software of an open nature will inevitably be on the rise. Commercial licensing and IP protection are an absolute necessity, even while the underlying source code is made available to the development community.