AI Bias Is Not an Ethics Problem. It’s a Safety Crisis.
The AI systems designed to protect miners in Australia and South America are built on limited training data by teams that lack diversity. The result is measurable blind spots, documented in peer-reviewed research and still unaddressed at active mine sites.
There’s a saying in engineering: “Garbage in, garbage out.” It emphasises that the quality of a system’s output depends entirely on the quality of its inputs. In safety-critical AI used in mines, this principle is a matter of life or death. The AI systems in use at mining operations in Australia and South America have a fundamental flaw, which is resulting in documented, measurable harm.
Last month, I presented this argument to the Peruvian-Australian Global Mining Alliance in Perth. The audience managed operations in two of the world’s richest mineral regions: Peru, which holds over 10% of global copper reserves and more than 21% of global silver, and Australia, which hosts the majority of critical mineral operations on Indigenous land.
The Architecture of a Bias-Driven Safety Failure.
Three well-documented categories of AI deployment in high-risk mining environments commonly exhibit biases that directly impact safety.
- Computer Vision and PPE Compliance Monitoring.
Research published in the Proceedings of Machine Learning Research (2018) by Joy Buolamwini and Timnit Gebru at MIT evaluated commercial facial-analysis systems, the same class of computer vision technology that underpins camera-based PPE detection, against different demographic groups:
| Demographic group | Error rate | Relative to baseline |
| --- | --- | --- |
| Light-skinned men | 0.8% | Baseline – system performs as intended |
| Light-skinned women | ~7% | ~8× baseline – partial safety gap |
| Dark-skinned men | ~12% | ~15× baseline – significant safety gap |
| Dark-skinned women | 34.7% | ~43× baseline – critical safety failure |
NIST documented the same pattern in its 2019 Face Recognition Vendor Test (FRVT) Part 3 report, which assessed 189 algorithms from 99 developers across more than 18 million images. It found false-positive rates 10 to 100 times higher for African American and Asian faces across most tested algorithms.
A system that reports 99.2% accuracy for light-skinned male workers may operate at only 65% accuracy for an Indigenous Australian woman doing the same job in the same location. These are, in effect, two different systems sharing one name.
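The baseline multiples and the implied per-group accuracy follow directly from the published error rates. A few lines of arithmetic, using only the Gender Shades figures quoted above, reproduce them (the group labels are shorthand for the study's categories, not vendor terminology):

```python
# Illustrative arithmetic using the error rates published in Gender Shades
# (Buolamwini & Gebru, 2018), as listed in the table above.
error_rates = {
    "light-skinned men": 0.008,    # 0.8% error – baseline group
    "light-skinned women": 0.07,
    "dark-skinned men": 0.12,
    "dark-skinned women": 0.347,   # 34.7% error
}

baseline = error_rates["light-skinned men"]
for group, err in error_rates.items():
    multiple = err / baseline      # how many times the baseline error rate
    accuracy = 1 - err             # implied accuracy for this group
    print(f"{group}: {err:.1%} error, {multiple:.1f}x baseline, {accuracy:.1%} accuracy")
```

The last line of output is the crux of the argument: a 0.8% baseline error becomes a 34.7% error, roughly 43 times worse, which is the same system delivering about 65% accuracy for dark-skinned women.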
- Voice Recognition and Emergency Shutdown Protocols.
Research published in the Proceedings of the National Academy of Sciences (2020) by Koenecke et al. at Stanford tested five major commercial automatic speech recognition (ASR) systems (Amazon, Apple, Google, IBM, and Microsoft). All five showed significant racial differences: an average word error rate of 35% for Black American English speakers compared to 19% for white speakers.
No studies have examined Filipino-accented English, Quechua-influenced speech, or the various regional Australian English accents common in Pilbara, Goldfields, and remote site workforces. The evidence does not support the default assumption that these systems will work equally well for all speakers. When an algorithm reflects narrow viewpoints, it creates blind spots. In consumer tech, such an outcome might mean a failed advertisement. On a mine site in the Pilbara or the Andes, it could lead to a tragedy.
- Fatigue Detection and Biometric Wearables.
Wearable technology that predicts fatigue by monitoring physiological data has been a key improvement in mining safety over the past decade. The technology’s effectiveness hinges on the quality of the underlying physiological model.
If the underlying data were gathered mainly from young, white male participants, the demographic majority in much biomedical and occupational health research, the model will predict fatigue accurately only for that group. Reports indicate that errors in similar systems have affected women, older workers, and individuals from diverse backgrounds at rates as high as 30%: nearly one in three assessments could be incorrect for a significant portion of your workforce.
The Structural Root: Homogeneity in the AI Pipeline.
Women constitute about 22% of the global AI workforce, with fewer than 15% in senior positions. During recent restructuring in the tech sector, women were roughly 1.6 times more likely to lose their jobs. Meanwhile, over 90% of engineering teams now use AI-assisted coding tools, creating a feedback loop in which biased models generate the training signals for future models.
Technology validated solely on datasets that do not match your workforce cannot be relied upon to protect it. This has direct implications for mining procurement.
Indigenous Data Sovereignty and the Social License Calculus.
Indigenous communities host approximately 60% of Australia’s active mines. Between 57.8% and 79.2% of critical mineral sites are on native title land. Peru recognises and enforces Indigenous territorial rights to mineral-rich areas through social licensing requirements.
AI systems used on these lands collect data from sources like autonomous drone surveys of ancestral lands, sensors in culturally important waterways, and biometric wearables that gather detailed health data on Indigenous workers. This information typically enters corporate systems without the informed consent or involvement of the communities it originates from.
CARE PRINCIPLES FOR INDIGENOUS DATA SOVEREIGNTY (Carroll et al., 2020)
- Collective Benefit: Data ecosystems must benefit the communities from which the data is collected, not just the collecting organisation.
- Authority to Control: Traditional Owners have the right to govern the collection, access, use, and disposal of data about them and their land.
- Responsibility: Data collectors have a duty to ensure that community data supports self-determination.
- Ethics: Community well-being must take precedence over corporate efficiency objectives in all data governance decisions.
The CARE Principles, developed by the International Indigenous Data Sovereignty Interest Group through the Research Data Alliance, have been adopted internationally. Australia’s National Indigenous Australians Agency has required their implementation by government agencies since January 2025, and mining companies that work with those agencies must comply.
The Regulatory Horizon: July 2026 and Beyond.
The APS AI Plan (November 2025) requires every Commonwealth agency to appoint a Chief AI Officer by July 2026. The Digital Transformation Agency updated Australia’s AI policy early in 2026, with the first mandatory requirement taking effect on June 15, 2026.
Two frameworks are key to meeting these standards:
- ISO/IEC 42001:2023. Published December 2023, this document is the world’s first international standard for AI management systems. It outlines certifiable structures that cover ethical policies, vendor oversight, and documentation of training data sources.
- NIST AI Risk Management Framework (RMF 1.0). Released on January 26, 2023, it emphasises agility, continuous monitoring, and the need to respond to algorithmic change, which matters because AI systems continue to learn from operational data after deployment.
Practical Imperatives for Mining Leaders.
Three specific recommendations fall within the authority of mining executives and do not require changes to the global AI industry:
- Conduct demographic bias audits before deploying safety-critical AI. Request that vendors provide evidence of system effectiveness for the demographic profiles in your workforce. Use NIST FRVT testing protocols as a benchmark. Do not accept systems validated on homogeneous datasets for safety-critical roles.
- Use procurement power to improve the pipeline. State in your vendor requirements that engineering teams creating safety systems must reflect meaningful demographic diversity. Mining companies represent a significant technology market, so your demands can drive market changes that individual responsibility cannot achieve.
- Formalise Indigenous data consultation in corporate AI governance. Adopt the CARE Principles as binding policy wherever operations are on or near traditional lands. Make consent, benefit sharing, and transparency about data use conditions of operation; these are ethical obligations and, increasingly, prerequisites for maintaining a social licence.
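The first of these recommendations can be made concrete in procurement tooling. The sketch below is a hypothetical audit helper, not any standard’s prescribed method: it flags demographic groups whose vendor-reported error rates exceed a chosen disparity threshold. The group names, figures, and the 2.0× threshold are all illustrative assumptions.

```python
# Minimal sketch of a pre-deployment demographic bias audit, assuming a vendor
# supplies per-group evaluation results. The 2.0x disparity threshold is an
# illustrative policy choice, not taken from any standard.

def audit_vendor_results(error_rates: dict[str, float],
                         max_disparity: float = 2.0) -> list[str]:
    """Flag groups whose error rate exceeds the best-served group's
    rate by more than `max_disparity` times."""
    baseline = min(error_rates.values())
    return [group for group, err in error_rates.items()
            if err > baseline * max_disparity]

# Hypothetical vendor-reported PPE-detection error rates per workforce group.
reported = {
    "group_a": 0.01,
    "group_b": 0.015,
    "group_c": 0.12,   # 12x the best-served group's error rate
}
flagged = audit_vendor_results(reported)
print(flagged)  # group_c fails the audit
```

Comparing each group against the best-served group, rather than against an absolute accuracy figure, mirrors how the research cited above reports disparities: a system can look excellent in aggregate while failing a specific part of the workforce.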
Conclusion: Leadership, Not Compliance.
The mining industry has a history of leading safety innovation in response to real risks. Leaders who recognised that inaction cost more than change drove the shift from a reactive to a proactive safety culture.
The AI safety challenge requires similar leadership: executives willing to ask tough questions of their technology vendors, procurement practices, and themselves about whose safety has traditionally been the priority.
Those organisations that arrive at the World Mining Congress 2026 in Lima with validated systems, clear governance, and true community partnerships will be the ones whose AI performs as expected.
References
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91. Full paper: proceedings.mlr.press/v81/buolamwini18a.html
- Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280 | Report overview: nist.gov/…/nist-study-evaluates-effects-race-age-sex-face-recognition-software
- Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C., Rickford, J. R., Jurafsky, D., & Goel, S. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689. https://doi.org/10.1073/pnas.1915768117 | Project: fairspeech.stanford.edu
- Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., Holbrook, J., Lovett, R., Materechera, S., Parsons, M., Raseroka, K., Rodriguez-Lonebear, D., Rowe, R., Sara, R., Walker, J. D., Anderson, J., & Hudson, M. (2020). The CARE principles for Indigenous data governance. Data Science Journal, 19(1), Article 43. https://doi.org/10.5334/dsj-2020-043 | Full principles: gida-global.org/care
- Department of Finance, Australian Government. (2025). Establishing Chief AI Officers for the APS. finance.gov.au/about-us/news/2025/establishing-chief-ai-officers-aps | GovAI explainer: govai.gov.au/aide/chief-ai-officers-who-are-they-and-why-they-matter
- Digital Transformation Agency, Australian Government. (2026). AI policy update: Strengthening responsible use across government. dta.gov.au/articles/ai-policy-update-strengthening-responsible-use-across-government
- International Organization for Standardization. (2023). ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management systems. iso.org/standard/42001
- Tabassi, E. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1 | Framework home: nist.gov/itl/ai-risk-management-framework
ACKNOWLEDGMENT
A sincere thank you to the Peruvian-Australian Global Mining Alliance (PAGMA) leadership for creating space for this essential conversation. Responsible AI in resources starts with the communities most affected, not just the companies most invested.
ABOUT THE AUTHOR
Eunice Sari
Director, CX Insight Pty Ltd · CEO, UX Indonesia · Co-Founder & Management Committee, WA AI Hub · Adjunct Associate Professor, UNSW & Research Fellow at CDU and ECU
Eunice Sari has spent 25 years at the intersection where new technology meets real people and fails them. In 2002 she co-founded UX Indonesia, the first insight-driven UX research and consulting company in Indonesia, and as its CEO built her career on a single observation: technology designed for its users gets used. That principle has since shaped every engagement she has led across industry, government, and academia in the USA, Europe, Australia, and Southeast Asia.
Today, she leads CX Insight Pty Ltd in Western Australia and co-founded the WA AI Hub, bringing that same humanity-centred lens directly to the challenge of responsible AI adoption. Her work answers the practical question that organisations across every sector are now asking: not whether to adopt AI, but how to do so in ways that work for everyone in the room, not just the majority.
Her research credentials are equally rooted in practical experience. As an adjunct associate professor at UNSW, research fellow at CDU and ECU, and industry fellow at the Australian Indonesian Centre, her work focuses on equity-grounded design, examining how technology systems can be built to include communities that have historically been excluded from the design process.
She was the first Asian female Google Developer Expert in Product Design and Strategy, a Google Certified Design Sprint Master, and a Google for Startups mentor (AI-First). She has mentored more than 500 startups through Google Launchpad, Murdoch University, and various accelerators across Australia and Indonesia.
wahub.ai · cxinsight.com.au · aigovsprint.online
Disclaimer: This article represents the author’s professional views on AI governance, diversity, and safety in the mining sector. It does not constitute legal, safety engineering, or investment advice. The peer-reviewed and government research cited above provide the statistical references. © 2026 WA AI Hub / CX Insight Pty Ltd.
You may contact the author at: https://www.linkedin.com/in/dr-eunice-sari/en
