If it moves, tax it. If it keeps moving, regulate it. GenAI keeps moving, and the regulations are a-comin'.
Recommended Reading
US voters are worried about AI risks and favor federal regulation over self-regulation by tech companies, according to the Artificial Intelligence Policy Institute's polling. A majority is concerned about AI's impact, supports slowing its development, and believes it could pose threats. Trust in tech companies for regulation is low. Similar sentiments exist globally.
AI lobbying has surged 185% as demands for regulation grow, CNBC reports, a spike that reflects rising concern about how artificial intelligence should be governed.
As businesses and governments race to make sense of the impacts of new, powerful AI systems, governments around the world are jostling to take the lead on regulation. Business leaders should focus on who is likely to win this race, more so than on how or even when AI will be regulated. Whether Congress, the European Commission, China, or even U.S. states or courts take the lead will determine both the speed and trajectory of AI’s transformation of the global economy, potentially protecting some industries or limiting the ability of all companies to use the technology to interact directly with consumers.
In 2024, AI regulation is expected to move from vision to concrete action. The United States will continue its friendly approach to AI regulation, while Europe will enforce its sweeping AI Act. China is working on an AI law with an ambitious scope, and other regions, like Africa, are likely to introduce AI regulations. Geopolitically, differences between democratic and authoritarian countries in fostering and weaponizing AI industries will become more apparent.
A draft article titled "A pro-innovation approach to AI regulation," created and last edited on February 22, 2024, explores AI regulation and advocates a pro-innovation approach.
The proposed Critical Algorithmic System Classification (CASC) is a new legal designation that would let federal agencies set and enforce rules over qualifying automated decision systems (ADSs) through the federal rulemaking process. A covered agency that demonstrates a category of ADS meets the legal criteria for the CASC could then set and enforce standards for the commercial use of that type of system. The CASC is a novel and potentially impactful approach to comprehensive governance of ADSs through sectoral regulatory agencies and application-specific rulemaking.
The extent to which generative AI will impact the legal market and broader economy will come down to not only what it is capable of, but also how and whether it can operate within the legal and regulatory frameworks of the day.
Big Tech is going to have to live with more regulation, but regulators have to be wary about killing the goose that laid the golden egg, said University of Michigan law professor Daniel Crane.
Technology leaders can mitigate AI risk by addressing eight areas of interest in the FTC request to OpenAI about ChatGPT.
Global data protection authorities caution social media firms on data-scraping due to rising AI training demand; companies must balance interests with regulatory compliance.
The EU has given final approval to the world's first artificial intelligence law, which aims to regulate the use of AI and ensure ethical, responsible practices.
The European Union is introducing “crash test” systems for artificial intelligence to ensure new innovations are safe before they reach the market.