Federal agencies must demonstrate that their artificial intelligence tools do not harm people, or stop using them, under new rules unveiled by the White House on Thursday. Vice President Kamala Harris said government agencies must verify that these tools do not endanger the rights and safety of the American people. By December, each agency must have concrete safeguards in place governing everything from facial recognition screening at airports to AI tools that help control the electrical grid or inform decisions on mortgages and home insurance.

The new directive, issued Thursday to agency heads by the White House’s Office of Management and Budget, is part of the sweeping AI executive order President Joe Biden signed in October. While that order focuses on commercial AI systems built by leading technology companies, such as those powering generative AI chatbots, Thursday’s directive targets tools that agencies have used for years to help make decisions on immigration, housing, child welfare, and a range of other services.

Harris offered an example: “If the Veterans Administration wants to use AI in its hospitals to help doctors diagnose patients, they must first demonstrate that the AI does not produce racially biased diagnoses.” Agencies that cannot put the safeguards in place “must stop using the AI system, unless agency leadership can justify why doing so would increase risks to safety or rights overall or create an unacceptable impediment to critical agency operations,” according to a White House announcement.

There are two other “binding requirements,” Harris said. One is that federal agencies must hire a chief AI officer with the “experience, expertise, and authority” to oversee all AI technologies the agency uses. The other is that agencies must publish an annual inventory of their AI systems, including an assessment of the risks they might pose. Some exceptions apply to intelligence agencies and the Department of Defense, which is the subject of a separate debate over the use of autonomous weapons.

The aim of the new rules is to keep AI tools used by federal agencies from introducing harm or bias into their decision-making. By putting safeguards and oversight measures in place, the government seeks to protect the rights and safety of the American people while still capturing the benefits of AI in areas such as healthcare, infrastructure, and financial services.

The requirements are part of a broader effort by the Biden administration to promote responsible AI use across society. By holding government agencies accountable for the AI tools they deploy, the administration is signaling a commitment to ethical and transparent practices in how the technology is developed and used.

Taken together, the new rules represent a significant step toward ensuring that AI is used responsibly and ethically in government operations. By requiring agencies to demonstrate the safety and fairness of their AI systems, the administration is moving to address concerns about bias, discrimination, and other potential harms before they occur. As AI takes on a larger role in public services, the guidelines are meant to ensure it is used in ways that benefit all Americans.
