
Advancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine Learning
The U.S. Department of Defense (DoD) places a high priority on promoting diversity, equity, and inclusion at all levels throughout the organization. Simultaneously, it is actively supporting the development of machine learning (ML) technologies to assist in decisionmaking for personnel management. There has been heightened concern about algorithmic bias in many non-DoD settings, where ML-assisted decisions have been found to perpetuate or, in some cases, exacerbate inequities. This report aims to equip both policymakers and developers of ML algorithms for DoD with the tools and guidance necessary to avoid algorithmic bias when using ML to aid human decisions. The authors first provide an overview of DoD's equity priorities, which center on issues of representation and equal opportunity within personnel. They then provide a framework to guide ML developers in building equitable tools. This framework emphasizes that there are inherent trade-offs to enforcing equity that must be considered when developing equitable ML algorithms. The authors support the process of weighing these trade-offs with a software tool, called the RAND Algorithmic Equity Tool, that can be applied to common classification ML algorithms used to support binary decisions. This tool allows users to audit the equity properties of their algorithms, modify algorithms to attain equity priorities, and weigh the costs that attaining equity imposes on other, non-equity priorities. The authors demonstrate the tool on a hypothetical ML algorithm used to influence promotion selection decisions, which serves as an instructive case study.
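To illustrate the kind of equity audit described above, the sketch below compares selection rates and true-positive rates of a binary classifier across demographic groups on hypothetical promotion data. This is a minimal illustration of the general technique, not the RAND Algorithmic Equity Tool itself; all function names and data are invented for the example.

```python
# Minimal sketch of an equity audit for a binary classifier:
# compare selection rates (demographic parity) and true-positive
# rates (equal opportunity) across groups. Not the RAND tool.
from collections import defaultdict

def equity_audit(y_true, y_pred, groups):
    """Return per-group selection rate and true-positive rate."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1                                # group size
        s["selected"] += int(yp == 1)              # predicted positive
        s["pos"] += int(yt == 1)                   # actually positive
        s["tp"] += int(yt == 1 and yp == 1)        # true positives
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "tpr": s["tp"] / s["pos"] if s["pos"] else float("nan"),
        }
        for g, s in stats.items()
    }

# Hypothetical promotion-board data: 1 = promote / qualified.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

audit = equity_audit(y_true, y_pred, groups)
parity_gap = abs(audit["A"]["selection_rate"] - audit["B"]["selection_rate"])
```

A gap near zero in `selection_rate` indicates approximate demographic parity, while a gap in `tpr` flags unequal opportunity; enforcing one criterion generally affects the other, which is the trade-off the report emphasizes.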