The U.S. Department of Labor has issued new guidance on best practices for those developing or using employment-related artificial intelligence: Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers.
Among the agency’s key recommendations is that developers of such systems and the employers who use them should conduct audits – both before deployment and once the AI systems are in use – to ensure that those AI systems are not causing illegal discrimination:
Prior to deployment, developers and employers should audit AI systems for disparate or adverse impacts on the basis of race, color, national origin, religion, sex, disability, age, genetic information, and other protected bases, and should make the results public. Developers and employers using AI must maintain their compliance with anti-discrimination legal requirements. Developers can minimize disparate or adverse impacts in design by ensuring the data inputs used to train AI systems, and the algorithms and machine learning models, do not reproduce bias or discrimination. Employers should continue to routinely monitor and analyze whether the use of the AI system is causing a disparate impact or disadvantaging individuals with protected characteristics, and, if so, take steps to reduce the impact or use a different tool.
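For illustration only, one common statistical screen used in disparate impact audits is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any protected group that falls below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The Python sketch below shows what such a screen might look like in its simplest form; the group labels, column layout, and sample data are hypothetical, and a real audit would involve legal counsel and more rigorous statistical testing.

from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, selected) pairs, where selected is a bool.
    Returns each group's selection rate divided by the highest group's rate
    (the "impact ratio" compared against the four-fifths threshold)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected count, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes from an AI resume-ranking tool.
    sample = [("group_a", True)] * 50 + [("group_a", False)] * 50 \
           + [("group_b", True)] * 30 + [("group_b", False)] * 70
    for group, ratio in impact_ratios(sample).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")

In this hypothetical sample, group_b's selection rate is 60% of group_a's, which falls below the four-fifths threshold and would flag the tool for further review. Note that the four-fifths rule is a screening heuristic, not a legal safe harbor; regulators and courts may also consider statistical significance and other evidence.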
This best practice is consistent with state-level legislation that a growing number of states have enacted or are considering. In light of this, employers who are using or planning to use employment-related AI systems – as well as developers of such systems – may want to consider having AI bias audits conducted to assess those systems.