
AI EO BS - What Does It Mean?

December 4, 2020

A recent executive order originating from the White House addresses “the Use of Trustworthy Artificial Intelligence in Government.” Setting aside the questionable assumption of the government’s inherent reliability and the notion that the issue lies with the software itself, the order largely lacks substantive impact.

Similar to other executive orders, its authority is restricted to directing federal agencies, and the practical extent of this influence is limited. This particular order “directs Federal agencies to be guided” by a set of nine principles, which immediately indicates the scope of its effect. Essentially, it asks agencies to consider these guidelines!

Notably, all military and national security operations are excluded from the order’s provisions, despite being areas where AI systems pose the greatest risks and require the most rigorous oversight. Concerns are not focused on AI applications within organizations like NOAA, but rather on the activities of intelligence agencies and the Department of Defense. (These entities typically operate under their own established regulations.)

The outlined principles function more as a set of aspirations. The executive order stipulates that AI utilized by federal entities must be: lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable.

It would be difficult to identify even a single significant AI implementation, anywhere globally, that fully embodies all of these characteristics. Any agency asserting that its AI or machine learning systems adhere to all principles detailed in the executive order should be viewed with considerable doubt.

The principles themselves are not inherently flawed or unproductive—it is undeniably important for agencies to assess potential risks when integrating AI and to establish monitoring procedures. However, an executive order alone is insufficient to achieve this. Meaningful AI accountability has already been demonstrated through robust legislation at the municipal and state levels, and while a federal law is not imminent, this order does not serve as a substitute for comprehensive legislation. It lacks the necessary specificity and binding force. Furthermore, many agencies have already adopted comparable “principles” in previous years.

The primary concrete action this executive order initiates is requiring each agency to compile a comprehensive list of all its AI applications, regardless of how “AI” is defined. However, it will likely be over a year before these lists become available.

The timeline nests several deadlines:

- Agencies have 60 days to determine the format for this AI inventory;
- 180 days after that, the inventory must be completed;
- an additional 120 days are allotted for review to ensure consistency with the principles;
- agencies must "strive" to align systems with these principles within a further 180 days;
- concurrently, the inventories must be shared with other agencies within 60 days of completion;
- and finally, they must be made public within 120 days of completion (excluding sensitive information related to law enforcement or national security).

Given the 60 days to settle on a format and the 180 days to complete the inventory, the earliest the inventories could appear is roughly eight months from now, and a realistic timeframe suggests approximately a year and a half. At that point, we will receive a record of AI tools from the previous administration, potentially with crucial details redacted at their discretion. Nevertheless, the resulting documentation could prove insightful, depending on its content.
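Chaining those deadlines together gives a rough sense of the calendar. Here is a minimal sketch of the cumulative day counts, assuming each clock starts when the previous deadline is met and using an illustrative start date (the actual effective date may differ):

```python
from datetime import date, timedelta

# Illustrative start date only -- an assumption for the sake of the math;
# swap in the order's actual effective date as needed.
start = date(2020, 12, 3)

format_chosen  = start + timedelta(days=60)            # inventory format determined
inventory_done = format_chosen + timedelta(days=180)   # inventory completed
review_done    = inventory_done + timedelta(days=120)  # consistency review finished
align_by       = review_done + timedelta(days=180)     # agencies "strive" to align

shared_by = inventory_done + timedelta(days=60)    # shared with other agencies
public_by = inventory_done + timedelta(days=120)   # public release deadline

earliest_public_days = 60 + 180        # inventories exist; could be posted at once
latest_public_days   = 60 + 180 + 120  # if the full publication window is used

print(f"Earliest public release: ~{earliest_public_days} days")
print(f"Latest public release:   ~{latest_public_days} days")
print(f"Full alignment chain:    {(align_by - start).days} days")
```

The arithmetic bears out the estimates above: the inventories cannot be public before about 240 days (roughly eight months), and the full chain through the "strive to align" deadline runs 540 days, about a year and a half.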

This executive order, like others of its kind, represents an attempt by the current White House to project an image of proactive leadership in an area largely beyond its direct control. The development and deployment of AI should undoubtedly be guided by established principles, but even if these principles could be imposed from the top down, this non-binding directive—which essentially asks agencies to seriously consider them—is not the appropriate approach.
