AI Large Language Models Under Fire: Update on Country Restrictions

May 12, 2023 | Report

AI-powered large language models (LLMs) such as OpenAI’s ChatGPT (built on GPT-3.5 and GPT-4), Google Bard, and Baidu’s ERNIE Bot have demonstrated remarkable capabilities in understanding and generating humanlike text. However, their disruptive potential has raised concerns among governments worldwide. The sheer power and versatility of these models can challenge traditional information-control and privacy norms and potentially contribute to the spread of misinformation. As a result, governments face the difficult task of balancing the benefits of AI innovation with the need to safeguard against its unintended consequences.

Insights for What’s Ahead

  • While some countries have implemented bans or restrictions on AI-powered LLMs such as ChatGPT and Google Bard, the effectiveness of these measures remains uncertain, given the rapid pace of technological advancement and users’ ability to bypass restrictions.
  • A more practical approach to addressing privacy, misinformation, and censorship concerns may involve focusing on AI’s ethical development and responsible use. As the global conversation surrounding AI-powered LLMs continues to evolve, stakeholders must work together to ensure that these tools are used safely and responsibly, ultimately benefiting society as a whole.

Fundamentals—Risks of Using LLMs like ChatGPT

  1. Privacy concerns: May unintentionally reveal sensitive or confidential information, especially in corporate and government environments, leading to potential data breaches (see the redaction sketch after this list)
  2. Misinformation: Can generate convincing but false information, which malicious actors can exploit to spread misinformation or disinformation campaigns, posing risks to national security and public trust
  3. Biased outputs: May inadvertently generate biased outputs based on its training data, potentially perpetuating harmful stereotypes or reinforcing existing biases, leading to unethical decision-making in corporate or government settings
  4. Lack of contextual understanding: May still misunderstand or misinterpret user input despite recent improvements, leading to incorrect or misleading responses
  5. Legal and regulatory compliance: May not adhere to specific legal or regulatory requirements, such as the EU’s General Data Protection Regulation (GDPR), which can lead to sanctions or bans
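
To make the privacy risk concrete, here is a minimal sketch of one common mitigation: scrubbing recognizable sensitive data from a prompt before it leaves a corporate or government environment. The patterns and the redact helper below are illustrative assumptions, not a production PII filter; real deployments typically rely on dedicated PII-detection tooling and policy review.

```python
import re

# Illustrative patterns for a few common kinds of sensitive data.
# A production system would use a dedicated PII-detection library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable sensitive values with placeholder tags
    before the text is sent to an external LLM service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this note from jane.doe@example.com, phone +1 (555) 123-4567."
    print(redact(prompt))
    # Summarize this note from [EMAIL REDACTED], phone [PHONE REDACTED].
```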

Why Do Governments Ban or Prohibit This Technology?

The reasons for banning or prohibiting ChatGPT and other LLMs vary by country. Some governments are concerned about privacy violations, while others fear the technology could be used to spread misinformation. Government censorship is also a common concern, particularly in China, where strict censorship laws prohibit the use of many foreign web platforms. In Italy, a temporary ban was motivated by a data breach and by concerns over the legal basis for using personal data to train the chatbot.

More broadly, AI chatbots such as ChatGPT have raised concerns among ethicists and regulators over potential negative societal implications, including privacy violations, bias, and misinformation. OpenAI has been working to address these concerns and to mitigate the potential adverse effects of AI chatbots, but there is still a long way to go before such technologies can be used safely and responsibly worldwide.

Countries with Bans or Restrictions (As of May 8, 2023)

Because ChatGPT is the frontrunner in the LLM space, it has already been banned or restricted in multiple jurisdictions: Russia, China, North Korea, Cuba, Iran, Syria, Hong Kong, and (until recently) Italy. Other LLMs with similar capabilities will likely receive similar treatment from governments worldwide.

The reasons for these bans vary:

  1. Russia: ChatGPT is banned in Russia primarily due to concerns about the US using the technology to spread misinformation. Additionally, the ongoing conflict with Ukraine contributes to restrictions on AI language models in general.
  2. China: China has banned ChatGPT due to concerns that the US could use the technology to spread misinformation and influence global narratives. Furthermore, ChatGPT does not comply with China’s strict censorship laws, making it inaccessible.
  3. North Korea: The North Korean government has banned ChatGPT, claiming that the US could use it to spread misinformation. Strict state control over information and technology also contributes to the ban.
  4. Cuba: In Cuba, ChatGPT is restricted due to heavy regulation of internet access. The government blocks many websites and services, including AI language models like ChatGPT.
  5. Iran: ChatGPT is inaccessible in Iran because of strict US sanctions that limit Iranian citizens’ access to certain technologies and services, including LLMs.
  6. Syria: The use of ChatGPT in Syria is restricted due to ongoing conflict and strict government control over information and technology.
  7. Hong Kong: ChatGPT is effectively banned in Hong Kong due to the government’s increasing control over the internet and restrictions on access to certain websites and services.
  8. Italy: Italy temporarily banned ChatGPT after the country’s data protection watchdog called on OpenAI to stop processing Italian residents’ data. The ban was prompted by concerns over privacy violations, a data breach, inaccurate information in ChatGPT responses, and the lack of age restrictions on the platform. OpenAI took measures to address these issues and worked with Italian regulators to resolve the situation, and the ban was lifted as of April 30, 2023, as reported by news sources such as msn.com.

In addition, ChatGPT cannot be used in Afghanistan, Bhutan, the Central African Republic, Chad, Eritrea, Eswatini, Libya, South Sudan, Sudan, and Yemen because those countries are omitted from OpenAI’s list of supported countries for its API.
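
To illustrate how a provider might enforce such an availability list, the sketch below shows a simple server-side country gate. The country codes and the is_request_allowed helper are assumptions made for illustration; they do not reproduce OpenAI’s actual policy or implementation.

```python
# Hypothetical country gate for an API service. The ISO codes below mirror
# the countries named in this report for illustration only; they are NOT
# OpenAI's actual enforcement list or code.
BLOCKED_COUNTRIES = {"AF", "BT", "CF", "TD", "ER", "SZ", "LY", "SS", "SD", "YE"}

def is_request_allowed(country_code: str) -> bool:
    """Decide whether an incoming API request may proceed, based on the
    caller's resolved country. In practice the country would be derived
    from the client's IP address via a geolocation database, possibly
    combined with billing-address checks."""
    return country_code.upper() not in BLOCKED_COUNTRIES

assert not is_request_allowed("SD")  # Sudan: rejected
assert is_request_allowed("DE")      # Germany: allowed
```

In practice, such gates are only as reliable as the geolocation behind them, which is one reason tech-savvy users can often route around country-level restrictions.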

Countries That Are Currently Investigating LLMs

In addition to the countries that have banned or restricted ChatGPT, several others are investigating the AI model over privacy concerns. Canada’s federal privacy commissioner has launched an investigation into OpenAI over allegations that it harvested personal information. Watchdogs in Germany, France, Ireland, and Spain are considering similar actions. These investigations highlight the growing global concern surrounding the ethical use and privacy implications of AI-powered language models like ChatGPT. As more countries scrutinize the technology, addressing these issues to ensure the responsible development and deployment of AI models becomes ever more important.

Effectiveness of Banning or Prohibiting—Is There a Better Way?

The effectiveness of banning or prohibiting LLMs such as ChatGPT remains questionable. As highlighted in a May 4 article from SemiAnalysis, major technology companies, including Google, acknowledge that they have “no moat,” and neither do their competitors. This implies that the rapid pace of technological advancements and the widespread availability of AI models make it increasingly difficult for governments to control access to such tools effectively. Furthermore, tech-savvy users can often find workarounds to bypass restrictions and access banned AI models. As a result, focusing on the ethical development and responsible use of AI, rather than outright bans, may be a more practical way to address concerns related to privacy, misinformation, and censorship in the age of advanced language models like ChatGPT.

Responsible, ethical use of AI can be achieved through various methods, such as:

  • Establishing industry-wide guidelines and best practices for AI developers, ensuring transparency, fairness, and accountability in designing and deploying AI systems.
  • Encouraging collaboration between developers, regulators, and users to create a shared understanding of AI’s potential risks and opportunities, fostering a proactive approach to addressing challenges.
  • Implementing robust data privacy and security measures to protect users’ information and comply with relevant regulations like GDPR.
  • Developing AI LLMs that are more resistant to misuse and manipulation; for example, by incorporating techniques to detect and prevent the generation of misinformation or biased content (see the filtering sketch after this list).
  • Investing in AI education and awareness programs to ensure that users understand the capabilities and limitations of AI-powered tools, promoting informed decision-making and responsible use.
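
As an example of the fourth point above, a deployment can screen generated text before showing it to users. The sketch below is a deliberately minimal keyword filter; the moderate_output helper and the flagged-phrase list are illustrative assumptions, and real systems would use a trained safety classifier or a dedicated moderation service instead.

```python
# Minimal post-generation output filter. The phrase list is illustrative
# only; production systems use trained classifiers, not keyword matching.
FLAGGED_PHRASES = [
    "guaranteed cure",        # health-misinformation cue
    "miracle investment",     # financial-scam cue
]

def moderate_output(text: str) -> str:
    """Return the model's text only if no flagged phrase appears;
    otherwise withhold it and log the event for human review."""
    lowered = text.lower()
    for phrase in FLAGGED_PHRASES:
        if phrase in lowered:
            print(f"[moderation] withheld output matching {phrase!r}")
            return "This response was withheld by a content filter."
    return text

if __name__ == "__main__":
    print(moderate_output("Our tonic is a guaranteed cure for every illness."))
    print(moderate_output("LLMs can summarize long documents quickly."))
```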

In future articles, we plan to explore alternatives to outright banning and prohibiting AI technologies such as ChatGPT. We will investigate potential pathways for mitigating the risks associated with these advanced language models while maximizing their societal benefits. This exploration will cover topics such as developing robust regulatory frameworks, establishing ethical guidelines for AI developers, fostering cross-sector collaboration, and promoting AI literacy and responsible use among end users. By examining these alternative approaches, we aim to provide a more nuanced understanding of the challenges and opportunities that AI-powered language models present.

AUTHOR

Christian Kromme

International Futurist & AI Visionary
Senior Fellow, ESF Center, Europe
The Conference Board

