Why Workers Are Losing Trust in AI-Driven Automation

Herbert Post
[Image: worker monitoring AI safety system]

Key Takeaways

  • While automation has significantly reduced workplace injuries, AI-driven safety tools introduce new risks, especially when systems fail to accurately detect or respond to real-world hazards affecting diverse workers.
  • Trust in AI safety systems is declining due to false positives, missed detections, and perceived surveillance, leading workers to override alerts, avoid engagement, and underreport incidents.
  • Current regulatory frameworks lack mandatory standards for transparency, testing, or bias audits in AI safety systems, resulting in inconsistent protections across industrial settings.
  • Restoring worker trust requires inclusive design practices, explainable AI alerts, manual overrides, and direct worker involvement in system development and evaluation.

 

Fifty years ago, American workers faced injury rates more than four times higher than they do today. According to OSHA, workplaces averaged 10.9 injuries per 100 workers in 1972. By 2023, that number had dropped to 2.4.

Automation is a central reason for that shift. Across U.S. factories, more than 310,000 industrial robots now operate on assembly lines, in packaging systems, and within inspection processes. These machines carry out hazardous and repetitive tasks once assigned to people, reducing human exposure to physical risk.

The academic research I found backs this correlation. A one standard deviation increase in robot exposure (about 1.34 robots per 1,000 workers) corresponds to 1.2 fewer injuries per 100 full-time workers. Within manufacturing specifically, that reduction reaches 1.75 injuries per 100 workers. The financial benefit is also clear, as robot-driven safety improvements save an estimated $1.69 billion each year, measured in 2007 dollars.
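To put those figures in perspective, here is a rough back-of-the-envelope sketch of that relationship, assuming (as a simplification) that the reported correlation scales linearly. The constants come from the study cited above, but the calculation itself is only illustrative.

```python
# Illustrative calculation only: these constants summarize the reported correlation;
# this is not a predictive model for any particular workplace.

INJURY_DROP_PER_SD = 1.2        # fewer injuries per 100 full-time workers, per 1 SD of robot exposure
ROBOTS_PER_1000_PER_SD = 1.34   # one standard deviation of robot exposure

def estimated_injury_reduction(extra_robots_per_1000_workers: float) -> float:
    """Estimate fewer injuries per 100 full-time workers, assuming linear scaling."""
    standard_deviations = extra_robots_per_1000_workers / ROBOTS_PER_1000_PER_SD
    return standard_deviations * INJURY_DROP_PER_SD

# Example: a sector adding roughly 2.68 robots per 1,000 workers (about 2 SDs of exposure)
print(f"{estimated_injury_reduction(2.68):.1f} fewer injuries per 100 full-time workers")  # ~2.4
```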

Yet automation is not risk-free. Douglas Parker, Assistant Secretary of Labor for Occupational Safety and Health, put it plainly:

“Robot use will continue to expand, and employers have a responsibility to assess the hazards these new applications may introduce, and implement appropriate safety controls to protect the workers who operate and service them.”

 

Are AI Safety Systems Putting Workplace Safety at Risk?

Industrial automation has helped reduce injury rates, but it has not eliminated risk. It has changed where risk shows up and who carries it.

New safety tools use computer vision and AI-driven machine monitoring to detect violations and flag hazards in real time. But studies show these systems often miss key details. Research on facial recognition and machine vision has found that these systems frequently fail to detect darker skin tones, with some algorithms misidentifying darker-skinned women as often as 35% of the time. This creates blind spots that increase injury and fatality risk for Black, Indigenous, and People of Color (BIPOC) workers.
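One way safety teams can surface these blind spots before deployment is a per-group detection-rate audit of annotated validation footage. The sketch below is illustrative only; the group labels, sample events, and 5-percentage-point threshold are hypothetical, not drawn from any specific vendor or study.

```python
from collections import defaultdict

# Hypothetical annotated events: (worker_group, hazard_present, system_detected)
detection_log = [
    ("group_a", True, True),
    ("group_a", True, True),
    ("group_b", True, False),
    ("group_b", True, True),
    # ...many more annotated events from validation footage
]

def detection_rates(log):
    """Detection rate per group, counting only events where a hazard was truly present."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, hazard_present, detected in log:
        if hazard_present:
            totals[group] += 1
            hits[group] += int(detected)
    return {group: hits[group] / totals[group] for group in totals}

rates = detection_rates(detection_log)
gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # example threshold: flag gaps wider than 5 percentage points
    print("Bias audit flagged unequal detection rates:", rates)
```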

Safety managers working with these systems have also reported real-world issues. One shared with me that their site’s system failed to recognize hand gestures from darker-skinned operators, resulting in no alerts when safety procedures were violated. The vendor told them to wait for the next patch. During that delay, injuries continued.

Trust in these tools is eroding. A 2024 survey showed 58% of engineers mistrust AI-generated safety alerts. They cite problems such as unpredictable false positives, lack of explanation for triggers, and uneven results across shifts. Meanwhile, serious injuries remain concentrated in sectors where automation is most prevalent. This study found that hospitalizations among manufacturing workers far exceeded those in construction.

[Image: worker hospitalizations in manufacturing vs. construction]

 

What Happens When Workers Stop Trusting Safety Systems?

Inside manufacturing sites piloting new safety automation, the feedback from the floor is mixed. On-screen dashboards show high compliance rates and low incident counts, but these figures don’t always match what workers experience day to day.

Engineers and operators have reported:

  • Disabling or overriding alerts they find inaccurate or disruptive
  • Avoiding interaction with safety tools that seem inconsistent
  • Feeling watched, not protected, by surveillance-driven systems

On one factory floor, I overheard a technician say the system flags things that aren’t real problems but misses when someone is actually in danger. Another operator mentioned that the technology seems more focused on monitoring than protecting. This breakdown in confidence affects behavior. When trust erodes, workers become:

  • Less likely to report near-misses
  • More likely to bypass prompts
  • Slower to react to real hazards

This disconnect isn’t simply behavioral. It also affects how safety performance is documented. Internally, systems may appear reliable and effective. But what’s recorded often masks what workers are actually dealing with on the floor.

| Safety Metric / Input | How It Appears in Reporting | What Workers Actually Do or Say |
| --- | --- | --- |
| System Accuracy | "99% detection rate" | Operators override or ignore false flags |
| PPE Compliance | "Full compliance logged by cameras" | Workers say the system misses real violations |
| Engagement with Safety Prompts | "High interaction rates" | Some workers avoid interacting at all |
| Feedback Loop | "No major complaints submitted" | Workers hesitant to report issues |
| Alert Response Time | "Quick acknowledgment in logs" | Many alerts seen as noise, not action |

 

Industry Response: Fragmented and Cautious

Manufacturing organizations are not ignoring the challenges posed by safety automation, but their responses are uneven, cautious, and often reactive.

Two major associations have begun formal initiatives:

  1. The National Association of Manufacturers (NAM) released a report in May 2024 stating that technology is helping improve workplace safety, strengthen supply chains, and expand workforce training. It called for a risk-based approach to regulation rather than broad, prescriptive oversight.
  2. The American Society of Safety Professionals (ASSP) has created a task force to explore the role of AI in the future of work and safety, with the goal of better understanding how AI affects the workplace and workforce, and how the Society can remain a trusted source on occupational safety and health.

Companies are experimenting as well. Siemens, in partnership with NVIDIA, has introduced industrial PCs that deliver up to 25 times faster AI inference performance for factory-floor applications. This allows real-time safety and process decisions in complex environments, but it also raises questions about transparency, oversight, and failure handling.

Despite these advances, no federal body currently mandates transparency in how safety AI systems are trained or validated. As a result:

  • Vendors are not required to disclose whether their models were tested on diverse demographics.
  • Bias audits are not uniformly conducted across the industry.
  • No shared certification process exists for AI safety tools in industrial environments.

The regulatory silence has created a patchwork of corporate experiments, voluntary pledges, and pilot programs, none of which guarantee consistent protection for workers.

The momentum is clear, but the framework isn’t. For now, the burden of ethical implementation rests mostly with individual employers and tech vendors.

 

How Safety Tech Can Regain Worker Trust

So where do we go from here?

Some manufacturers are starting to shift their approach. In parts of Europe, pilot projects are being tested that treat automation not as a finished product, but as a system shaped alongside workers. An EU-OSHA case study documented how engineers and frontline employees jointly evaluated an AI-powered product inspection system, reviewing alert interfaces, detection performance, and overall usability in real production settings.

Another Industry 5.0 report highlighted manufacturing pilot lines designed to prioritize human factors. These programs included ergonomic testing, co-validation with operators, and iterative feedback to refine systems before deployment. Rather than relying solely on executive-level testing or lab trials, they brought workers directly into the loop.

These early efforts are not aimed specifically at addressing bias. But they provide a foundation for confronting deeper flaws in automated safety, such as the failure of some vision systems to detect darker skin tones. If detection tools are evaluated across a wider range of users and environments, including differences in appearance, posture, and body type, these visibility gaps can be identified and corrected before deployment.

Some vendors are beginning to address bias more directly. A report by AI Ethics Lab notes that Intenseye’s system design considers skin tone, body shape, and gender as part of its bias mitigation process. The report outlines design review practices and visual annotation standards intended to reduce exclusion and bias in safety detection systems.

Among safety professionals, I have also noticed several best practices emerging as recommendations to improve trust and accountability in AI-based systems:

  • Add manual override controls, allowing operators to confirm or dismiss system alerts.
  • Use explainable alert systems that clarify why a detection was triggered.
  • Conduct bias checks as part of internal model evaluations to monitor for unequal performance across groups.

These features are not widely implemented yet, but they are gaining traction as part of discussions around safer, more equitable workplace automation.
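As a rough illustration of the first two recommendations, the sketch below shows how an alert pipeline might pause for operator confirmation and keep a record of the outcome. The class and field names are hypothetical, not taken from any vendor’s product.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SafetyAlert:
    rule: str                           # e.g. "restricted-zone entry"
    trigger: str                        # what the model reports it detected
    confidence: float
    operator_decision: str = "pending"  # becomes "confirmed" or "dismissed"
    decided_at: Optional[datetime] = None

def review_alert(alert: SafetyAlert, operator_confirms: bool) -> SafetyAlert:
    """Record the operator's call instead of acting on the model output alone."""
    alert.operator_decision = "confirmed" if operator_confirms else "dismissed"
    alert.decided_at = datetime.now()
    return alert

# Dismissals are kept rather than discarded: a rising dismissal rate is itself a
# signal that the detection model needs recalibration or retraining.
alert = SafetyAlert(
    rule="restricted-zone entry",
    trigger="person detected in zone B during press cycle",
    confidence=0.62,
)
review_alert(alert, operator_confirms=False)
print(alert.operator_decision)  # "dismissed"
```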

 

What Is Explainable AI?

One of the most widely discussed improvements in safety technology today is explainability. In simple terms, explainable AI (XAI) refers to systems that make it clear why an alert was triggered or a decision was made. This is especially relevant in workplace safety, where workers may be flagged for violating rules without understanding what action was recorded or why it mattered.

Instead of simply displaying a generic warning, explainable systems offer details: the type of risk detected, the condition that triggered it, and any context around the decision. This can help prevent unnecessary confusion and reduce the chance that workers ignore alerts altogether.
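To make that contrast concrete, here is a minimal sketch of the difference between a generic alert and one that carries its own explanation; the field names and values are hypothetical and only illustrate the idea.

```python
# A black-box system might emit nothing more than this:
opaque_alert = {"message": "Zone violation detected"}

# An explainable system attaches the context a worker needs to judge the alert:
explainable_alert = {
    "message": "Zone violation detected",
    "risk_type": "restricted-area entry",
    "trigger": "person within 1.5 m of press during active cycle",
    "evidence": "camera 3, 10:42:07, bounding box overlaps the no-go zone",
    "confidence": 0.91,
}

def render_alert(alert: dict) -> str:
    """Show whatever explanation is available; fall back to the bare message if none."""
    details = ", ".join(f"{key}: {value}" for key, value in alert.items() if key != "message")
    return alert["message"] + (f" ({details})" if details else "")

print(render_alert(opaque_alert))       # "Zone violation detected"
print(render_alert(explainable_alert))  # message plus the reasoning behind it
```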

The difference between black-box and explainable systems is not just technical. It changes how workers interpret alerts, respond in real time, and view the system’s credibility:

| Feature | Black-Box AI System | Explainable AI System |
| --- | --- | --- |
| Alert Detail | Generic notification | Shows what triggered the alert |
| Worker Understanding | Often unclear or disputed | Clear reasoning improves response accuracy |
| Trust Level | Lower over time | Reinforced by visible logic |
| Feedback Opportunity | Limited or ignored | Can be challenged or verified by operators |
| Example in Safety Context | “Zone violation detected” | “Entered restricted area during no-go interval” |

Several sources I gathered support the use of XAI in manufacturing environments:

  • Fero Labs describes how explainable outputs help engineers understand what factors drive AI recommendations, improving decision-making around quality, scheduling, and resource use.
  • IBM states that transparent AI models help users detect bias, trace decisions, and meet accountability standards, especially in high-stakes applications where auditability is required.
  • An arXiv study showed that participants using explainable visual tools (like saliency maps) made up to five times fewer errors than those using opaque systems in image-based classification tasks.

Clear, interpretable outputs help close the gap between people and systems. When workers can see why alerts appear or what factors shaped a recommendation, they respond with greater confidence and accuracy. This level of transparency builds a stronger foundation for safety and reinforces the credibility of the tools used on the factory floor.

To rebuild trust, safety systems need to reflect the experience of the people using them. Involving workers early in the design and testing process ensures the technology fits the realities of their environment. When tools offer clarity, fairness, and room for feedback, they are far more likely to earn acceptance and deliver meaningful improvements.

 

FAQs

How does AI bias affect machine learning models?

AI bias occurs when training data reflects historical inequalities, missing representation, or flawed labeling. This leads machine learning models to produce inaccurate or unfair outcomes, such as failing to detect certain populations, making uneven predictions, or reinforcing systemic patterns in hiring, surveillance, or safety enforcement.

How does AI potentially lead to discrimination?

When AI systems are trained on data that underrepresents specific groups or encodes biased decisions, they can produce outputs that disadvantage those groups. In workplace settings, this could mean safety alerts failing to recognize darker skin tones or performance assessments that penalize certain behaviors unequally.

Can AI be truly unbiased?

No system can be completely free of bias. However, bias can be reduced through careful data selection, diverse testing, transparent design practices, and ongoing auditing. The goal is not perfection but measurable fairness across different groups and conditions.

Is AI a threat to the workplace?

AI presents both opportunities and risks. It can improve efficiency and safety, but it also introduces concerns around job displacement, surveillance, and system errors. Poorly designed or unregulated AI can harm trust, increase inequality, or overlook critical human judgment.

What is a “human-in-the-loop” system?

A human-in-the-loop system is one where humans retain the ability to oversee, intervene, or correct automated decisions. In workplace safety, this means operators can confirm alerts, override faulty detections, and remain involved in real-time decision-making. It helps balance automation with accountability.

 

TRADESAFE provides premium industrial safety equipment, such as Lockout Tagout Devices, Eyewash Stations, Absorbents, and more; precision-engineered and trusted by professionals to offer unmatched performance in ensuring workplace safety.


The material provided in this article is for general information purposes only. It is not intended to replace professional/legal advice or substitute government regulations, industry standards, or other requirements specific to any business/activity. While we made sure to provide accurate and reliable information, we make no representation that the details or sources are up-to-date, complete or remain available. Readers should consult with an industrial safety expert, qualified professional, or attorney for any specific concerns and questions.

Herbert Post

Born in the Philadelphia area and raised in Houston by a family predominantly employed in heavy manufacturing, Herb took a liking to factory processes and later to safety compliance, where he has spent the last 13 years facilitating best practices and teaching updated regulations. He is married with two children and a St. Bernard named Jose. Herb is a self-described compliance geek. When he isn’t studying safety reports and regulatory interpretations, he enjoys racquetball and watching his favorite football team, the Dallas Cowboys.
