Can AI See Everyone? Examining Bias in Skin Detection Technology


By Yvette Schmitter, Co-Founder & Managing Partner – Fusion Collective

At the most recent Consumer Electronics Show (CES), it became clear that autonomous
vehicles — a vision once limited to science fiction — are swiftly becoming nonfiction. In fact,
according to Jensen Huang, CEO of NVIDIA, they are already here. During a fireside chat for
financial analysts at CES, after making autonomous vehicle technology a centerpiece of his
keynote speech, Huang said, “Every single car company will have to be autonomous, or you’re
not going to be a car company,” adding that, “With Waymo’s success…it is very, very clear
autonomous vehicles have already arrived.”

Imagine New York City, known for the symphony of sounds that characterize its streets, free of
the incessant buzz of car engines. Instead, the clamor would be replaced by the nearly silent
glide of self-driving cars. Streets, once choked with traffic, would become safe havens for
pedestrians.

But not for all pedestrians. As studies have shown for years, the AI-driven systems that
autonomous vehicle designers have come to rely on have a vision problem that might have a
disproportionately negative effect on the safety of Black and Brown people.

The dangerous bias in pedestrian detection technology

Skin detection is an essential part of the navigation framework that guides autonomous
vehicles. To keep pedestrians safe, the vehicles must be able to differentiate humans from other
objects that might be lining roads. By detecting skin tones, the vehicle’s AI brains can identify
pedestrians even when they are partially obscured by other objects.

However, studies show that the systems being used for skin detection have blind spots.
According to joint research from King’s College London and Peking University, the AI-driven
software used to detect pedestrians performs worse on pedestrians with dark skin tones than on
those with light skin tones. Worse still, the finding is not new: a 2019 study by the Georgia
Institute of Technology reached the same conclusion.

Why is dark skin harder for artificial intelligence to detect than light skin? The problem has
everything to do with the data used to train the AI. Because the training data sets contain far
more images of light skin than dark skin, the resulting models are biased toward light skin when
they scan for pedestrians.
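
To make the gap concrete, here is a minimal, hypothetical Python sketch of the kind of disaggregated audit the cited studies performed: it measures how often a pedestrian detector finds the people in a labeled test set, broken out by annotated skin-tone group. The boxes, group labels, and numbers below are invented for illustration; they are assumptions, not data from the research.

from collections import defaultdict

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def recall_by_group(ground_truth, detections, iou_thresh=0.5):
    # ground_truth: list of {'box': (x1, y1, x2, y2), 'group': str} annotations.
    # detections: list of boxes predicted by the pedestrian detector.
    found, total = defaultdict(int), defaultdict(int)
    for gt in ground_truth:
        total[gt['group']] += 1
        if any(iou(gt['box'], det) >= iou_thresh for det in detections):
            found[gt['group']] += 1
    return {group: found[group] / total[group] for group in total}

# Invented example: two annotated pedestrians, one missed by the detector.
annotations = [
    {'box': (10, 10, 50, 120), 'group': 'lighter skin'},
    {'box': (200, 15, 240, 130), 'group': 'darker skin'},
]
predicted_boxes = [(12, 12, 48, 118)]
print(recall_by_group(annotations, predicted_boxes))
# -> {'lighter skin': 1.0, 'darker skin': 0.0}

A gap between the per-group recall numbers is exactly the disparity the King’s College and Georgia Tech teams reported, and exactly what the testing and transparency standards discussed below should require developers to measure and disclose.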

These studies reveal that the shiny promise of autonomous vehicles — clean, efficient
transportation that lowers environmental impacts and improves the quality of life for everyone —
conceals a grave concern. The AI that drives autonomous cars has a blind spot that calls the
technology’s integrity into question, and for Black and Brown residents of the cities where those
vehicles would operate, the risk it poses is a matter of life and death.

How to ensure AI sees everyone

Five years is far too long for these findings to go unaddressed. We must confront them now to
ensure we develop technology that genuinely serves and protects everyone. If we don’t, we’re
giving AI the authority to value certain skin tones above others, which could lead to nightmarish
results.

AI won’t fix exclusion — unless we make it

Manufacturers can’t claim progress while continuing to exclude entire demographics from
decision-making. AI is already shaping the next era of manufacturing, but whether it drives
inclusion or cements exclusion is a choice. As leaders, your responsibility extends beyond the
balance sheet and profits; it’s about creating a future that benefits everyone.

Manufacturers must demand diverse datasets, unbiased testing, and transparency in AI
decision-making. This can no longer be voluntary; we need regulations governing the ethical
development and use of AI across every industry and company that deploys it.

Manufacturing leadership must hold every AI developer accountable for building systems that
serve everyone, not just their own preferences, their own experience, or a privileged few.

Most importantly, the manufacturing industry must include diverse human voices in the rooms
where AI is built, trained, and deployed. If AI is designed only by those who have always had
power, it will only reinforce the world they built for themselves rather than the world we all
deserve.

Here are some steps we need to take to build a future in which roads are safe for everyone
who uses them:

● Establish inclusive testing standards: Closed-track testing does not produce the
performance levels the real world demands. To keep people safe, developers must
commit to extensive testing of autonomous vehicles in real-world scenarios that
represent the racial, cultural, and environmental diversity of our cities.
● Enforce accountability and transparency: Concerns can’t be concealed. Too much is
at stake. Regulators must establish and enforce controls that require autonomous vehicle
manufacturers to disclose their systems’ limitations, including any biases detected in
pedestrian recognition and how those biases are being addressed.
● Demand diversity in development: It is very difficult — perhaps impossible — for
technology to effectively address diverse needs if it is not developed by diverse groups.
The blind spots in pedestrian detection technology, as well as many other AI-driven
innovations, reflect this and will remain unless we push for more inclusive teams within
AI development, including researchers, engineers, and testers from various
backgrounds.
● Create community partnerships: The value added by increasing diversity in
development can be multiplied exponentially by engaging with Black and Brown
communities to gather input, test technologies, and ensure their needs and safety
concerns are understood and addressed.

The very streets AI-driven cars are supposed to make safe will become a minefield if we don’t
demand bias-free development. Moving forward, the focus must be on engineering autonomous
cars that genuinely see, hear, and protect everybody. We need a future in which everyone is
included in the vision of progress.

About the Author:

Yvette Schmitter, Co-Founder and Managing Partner of Fusion Collective, is a trailblazer
reshaping the future of technology, breaking down barriers, and building bridges where walls
once stood. A former Digital Architecture Partner at PwC, Yvette leads with a bold and inclusive
vision: technology must serve everyone, regardless of gender, race, culture, or socioeconomic
background. For Yvette, innovation isn’t about shiny new tools; it’s about unlocking potential,
leveling playing fields, and ensuring underrepresented voices have a seat at the table where
decisions are made.


