Earlier this month an autonomous vehicle (AV) operated by Cruise, General Motors’ self-driving subsidiary, was involved in a road collision with an emergency response vehicle.

In its statement, Cruise said its AV had “positively identified” the emergency vehicle, but the confines of the specific intersection made “visual identification more challenging” as it was “significantly occluded by buildings.”

However, a number of automakers are continuing to push ahead with AV development, meaning issues around pedestrian and driver safety are increasingly coming to the fore.

In a recent study, researchers from King’s College London revealed “major” age and race biases in the pedestrian detection systems used by autonomous vehicles. The study has yet to be peer-reviewed, but its findings are worrying.

Dr Jie Zhang of the Department of Informatics at King’s College London, in collaboration with colleagues, assessed eight artificial intelligence (AI)-powered pedestrian detection systems used in autonomous vehicle research.

By running more than 8,000 images through these systems, they found that detection accuracy for adults was almost 20% higher than for children, and just over 7.5% higher for light-skinned pedestrians than for dark-skinned pedestrians.
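To illustrate how a gap of this kind would be measured, here is a minimal sketch that computes per-group detection rates and their difference. The group names, data format and numbers below are illustrative assumptions, not the study’s actual data or pipeline:

```python
# Minimal sketch of measuring a per-group detection gap.
# The records and group labels are illustrative, not the study's data.
from collections import defaultdict

# Each record: (demographic_group, was_pedestrian_detected)
results = [
    ("adult", True), ("adult", True), ("adult", False),
    ("child", True), ("child", False), ("child", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected  # True counts as 1

rates = {g: hits[g] / totals[g] for g in totals}
print(rates)                                # adult ~0.67, child ~0.33
print("gap:", rates["adult"] - rates["child"])
```

Applied to thousands of labelled images per group rather than a handful, this is the basic arithmetic behind a finding such as “detection accuracy for adults was almost 20% higher than for children.”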

One cause of this discrepancy is that the main collections of pedestrian images used to train the AI systems behind pedestrian detection – the software that tells a driverless car whether it is approaching a pedestrian – feature more people with light skin than dark skin.

The result of this imbalanced training data is a “lack of fairness” in the AI systems trained on it, the researchers say.

One important caveat is that the systems tested did not belong to any driverless-car company, as commercial systems are considered “proprietary information” and are not publicly available.

In an interview with New Scientist, Dr Zhang said that driverless car companies use existing open-source models, so “[We] can be certain that their models must also have similar issues.”

The researchers at King’s College London engaged in “extensive data annotation”, marking 8,311 images with 16,070 gender labels, 20,115 age labels, and 3,513 skin tone labels.
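To give a sense of what such annotation might look like in practice, here is a hypothetical record for a single pedestrian instance. The field names and values are assumptions for illustration, not the study’s published schema:

```python
# Hypothetical annotation record for one pedestrian in one image.
# A single image can contain several pedestrians, and not every
# attribute is labelled for every person, which is consistent with
# the label counts (16,070 gender, 20,115 age, 3,513 skin tone)
# differing from the 8,311 images.
annotation = {
    "image_id": "img_000123",
    "bbox": [412, 180, 58, 140],   # x, y, width, height in pixels
    "gender": "female",
    "age": "child",                # e.g. "adult" or "child"
    "skin_tone": "dark",           # e.g. coarse light/dark bins
}
```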

Dr Zhang explains: “Fairness when it comes to AI is when an AI system treats privileged and under-privileged groups the same, which is not what is happening when it comes to autonomous vehicles.

“Car manufacturers don’t release the details of the software they use for pedestrian detection, but as they are usually built upon the same open-source systems we used in our research, we can be quite sure that they are running into the same issues of bias.”

The researchers also found that the bias against dark-skinned pedestrians increases significantly under low-contrast and low-brightness conditions, posing added risks for night-time driving.
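One simple way to probe this failure mode is to re-run a detector on deliberately darkened, low-contrast copies of the same images and compare per-group detection rates. The sketch below uses the Pillow imaging library; `run_detector` and `test_images` are hypothetical stand-ins for whichever model and dataset are being evaluated:

```python
# Sketch of a low-light robustness probe. Pillow's ImageEnhance is a
# real API; run_detector is a hypothetical placeholder for the model
# under test.
from PIL import Image, ImageEnhance

def degrade(path, brightness=0.4, contrast=0.5):
    """Return a darkened, low-contrast copy of the image."""
    img = Image.open(path)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    return ImageEnhance.Contrast(img).enhance(contrast)

# for path in test_images:
#     baseline = run_detector(Image.open(path))
#     degraded = run_detector(degrade(path))
#     # compare per-group detection rates between the two conditions
```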

It is their hope that manufacturers will be more transparent about how their commercial pedestrian detection AI models are trained, and how they perform, before those systems hit the streets.

Dr Zhang continues: “Automotive manufacturers and the government need to come together to build regulation that ensures that the safety of these systems can be measured objectively, especially when it comes to fairness.

“Current provision for fairness in these systems is limited, which can have a major impact not only on future systems, but directly on pedestrian safety.”

According to GlobalData, the global automotive industry saw a 27% decline in the number of autonomous vehicle-related patent applications in Q2 2023, compared with the previous quarter.