The growing use of artificial intelligence brings more than technology challenges. Ethical questions are emerging as government agencies apply AI to everything from customer service chatbots to mission-critical tasks supporting military forces. This increasing reliance on AI raises concerns about bias built into the technology itself, especially around gender, race, socioeconomic status and age.
Bias can be built into AI algorithms by the humans who create them, whether intentionally or by accident, as a result of biases developers don’t realize they have. The consequence, of course, is biased outcomes.
Many agencies are aware of the situation. The Defense Department, for instance, released its AI strategy in February, charging the Joint AI Center with “the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner.” Programs under development include using AI to check contractors’ cyber hygiene and to increase situational awareness for warfighters, while the Defense Advanced Research Projects Agency researches how to overcome adversarial AI. The Department of Veterans Affairs, meanwhile, wants to use AI to speed the retrieval and delivery of information to veterans and caregivers who call its hotlines.
Related Document:
Summary of the 2018 DoD Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, 2019