AI: Regulations Rising

Artificial intelligence (AI) is not just a technological construct. It is societal, industrial, political, structural, and economic as well. As such, it will be regulated to manage the upside opportunities and the potential problems arising from the implementation of the technology.

Regulation is on the way. In the US, the White House just announced an executive order to examine and address the current and long-term risks of AI; this order adds to the voluntary AI Risk Management Framework released by NIST in January 2023.

Preparing for regulations while exploring opportunities requires a balance of caution, curiosity, and smart counsel

  • Regulations are coming rapidly, with the executive order instructing federal agencies to draft guidelines for companies by mid-2024.
  • Boards and companies must prepare for the impending regulations but should not slow down exploration and implementation of AI.
  • Europe is also drawing up regulations with a focus on longer-term issues and bad actors, and these changes will have an impact on global organizations.

The administration announced that “this landmark executive order is a testament to what we stand for: safety, security, trust, openness, American leadership and the undeniable rights endowed by our creator that no creation can take away.”

The US measures include:

  • Protecting consumer privacy by creating guidelines that agencies can use to evaluate privacy techniques used in AI;
  • Helping prevent discrimination by AI algorithms and creating best practices on the appropriate role of AI, especially in the justice, health, and education systems;
  • Creating new safety and security standards for AI, including measures that require AI companies to share safety test results with the federal government;
  • Creating a program to evaluate potentially harmful AI-related health care practices and creating resources on how educators can responsibly use AI tools;
  • Ensuring competition in the AI marketplace and guarding against the concentration of power;
  • Guarding against undue workforce surveillance, the undermining of workers’ rights, and worsening job quality as a result of AI implementation; and
  • Working with international partners to implement AI standards around the world.

The executive order instructs federal regulators overseeing specific industries to evaluate AI risk and then publish guidelines for private companies to follow. For example, the Treasury Department has until March 2024 to publish a report on how the banking sector can manage cybersecurity risks arising from the use of AI tools, and the Commerce Department has until June 2024 to help reduce the risks posed by AI-based synthetic content (including “deepfakes”).

The order sets a high bar for other governments around the world.

Meanwhile, the UK hosted a two-day summit on AI safety at Bletchley Park, the World War II codebreaking center where Alan Turing did pioneering work in early computing. The summit was prompted by longer-term concerns that increasingly capable AI systems could aid the development of deadlier bioweapons or enable more far-reaching cyberattacks. Its purpose, with government officials, business leaders, and AI experts from around the world in attendance, was to work toward international consensus on mitigating the risks of AI.

For businesses and their boards, this accelerating regulatory activity reinforces the need to follow the guidance of legal counsel closely as they deploy AI systems across the enterprise. Regulation will arrive after deployment is already under way, and compliance will be enforced: the executive order already has “teeth,” and they will get sharper.

Corporations may also wish to make their voices heard in the regulatory process as federal agencies respond to the presidential directive.

The principles of transparency, honesty, integrity, and data privacy should guide every deployment. These principles should set boundaries without limiting creativity as companies leverage the possibilities of AI platforms to drive their business forward.

For further guidance, see:

AI in the Era of ESG: Nine Steps Boards Can Take Now

Opportunities and Challenges of AI and Its Impact on Cybersecurity

AI: The Next Transformation (a resource for insights across all elements of AI)
