01 November 2023 / Guide & Reference
Artificial intelligence (AI) is not just a technological construct. It is societal, industrial, political, structural, and economic as well. As such, it will be regulated to manage the upside opportunities and the potential problems arising from the implementation of the technology.
Regulation is on the way. In the US, the White House just announced an executive order to examine and address the current and long-term risks of AI; this order adds to the voluntary Risk Management Framework released in January 2023.
The administration announced that “this landmark executive order is a testament to what we stand for: safety, security, trust, openness, American leadership and the undeniable rights endowed by our creator that no creation can take away.”
The US measures include:

- The executive order instructs federal regulators overseeing specific industries to evaluate AI risk and then publish guidelines for private companies to follow.
- The Treasury Department has been given until March 2024 to publish a report on how the banking sector can manage cybersecurity risk flowing from the use of AI tools.
- The Commerce Department has been given until June 2024 to help reduce the risks posed by AI-based synthetic content (including “deepfakes”).
This regulation sets a high bar for other governments around the world.
Meanwhile, the UK hosted a two-day summit on AI safety at Bletchley Park—the site where Alan Turing famously worked as a codebreaker during World War II. The summit was prompted by longer-term concerns that increasingly capable AI systems could lead to the development of more deadly bioweapons or more far-reaching cyberattacks. The purpose of the summit, attended by government officials, business leaders, and AI experts from companies around the world, was to try to reach international consensus on mitigating the risks of AI.
For businesses and their boards, this accelerated activity around regulation reinforces the need to constantly follow the guidance of their legal counsel as they deploy AI systems across the enterprise. Regulations will come post hoc, and compliance will be enforced—the executive order already has “teeth” and they will get sharper.
Corporations may also wish to make their voices heard in the regulatory process as federal agencies respond to the presidential directive.
Companies should follow the principles of transparency, honesty, integrity, and data privacy. These principles should set boundaries without limiting creativity as companies leverage the undeniable possibilities of AI platforms to drive their business forward.
For further guidance, see:
AI in the Era of ESG: Nine Steps Boards Can Take Now
Opportunities and Challenges of AI and Its Impact on Cybersecurity
AI: The Next Transformation (a resource for insights across all elements of AI)