We’re seeing more and more signs of regulatory approaches that could shape how AI is developed. These regulations may end up preventing certain AI initiatives, but they should also ensure greater transparency for consumers.
Ensuring that AI-generated material is safe is a positive step, but I’m not convinced we’ll do the due diligence needed to implement these tools in a way that’s both beneficial and protective.
The first limitation is data controls. Every company developing AI faces legal questions over the use of copyrighted material in its models.
Last week, a group of French publishers filed a lawsuit against Meta for copyright violation, joining an American collective in asserting ownership rights against the tech giant.
If either case results in a large payout, you can bet that other publishers will file similar suits, which could mean a huge fine for Zuck and Co. over how Meta’s Llama LLMs were initially trained.
It’s not only Meta. OpenAI, Google, and Microsoft are all facing lawsuits over their use of copyrighted materials, amid widespread concern about text being scraped without permission to train these AI models.
This could set a new legal precedent around data use, and social media platforms could emerge as the leaders in LLMs, since they hold the most proprietary data. Even so, their ability to sell such models would be limited by the user agreements and data clauses written into their contracts after the Cambridge Analytica controversy (and by EU regulations). Meta reportedly accessed pirated books and materials to develop its LLM because the data it already held, from Facebook and Instagram user posts, was not adequate.
This could be a significant hindrance to AI development in the U.S., since China’s cybersecurity rules allow the government to access and use Chinese companies’ data however it wants.
OpenAI, a U.S.-based company that advocates for looser restrictions on data usage, has directly called on the government to approve the use of copyrighted data for AI training.
This is why so many tech industry leaders have sought favor with President Trump: to help them win on issues like this. The argument goes that if U.S. firms face restrictions, Chinese companies will win the AI race.
There’s a paradox here: intellectual property rights matter, yet allowing your work to be incorporated into training systems could make your profession or art obsolete. There’s also money. You can count on corporations to chase revenue wherever it exists (see: attorneys jumping onto YouTube copyright claims). This will likely be the reckoning that defines the future of AI.
China, the EU, and the U.S. all implemented regulations last week pertaining to “labeling synthetic content”.
Facebook, Instagram, and Threads have all implemented AI disclosure rules, and Pinterest recently added its own. LinkedIn has AI labeling and detection in place (but no rules on voluntary tagging), while Snapchat labels AI images created with its own tools but has no rules for third-party content.
Note: X developed AI disclosure rules in 2020 but never implemented them.
This is a welcome development, but as with so many AI changes, it’s happening piecemeal, leaving responsibility with individual platforms rather than establishing universal guidelines and procedures.
That piecemeal approach is arguably better for innovation, in the “move fast and break things” Facebook sense, and it’s the more likely path given how many tech-aligned leaders are now in the White House.
But I feel that pushing innovation in the name of corporate success can lead to more harm. As people become more reliant on AI tools, mistakes become more likely, and AI visuals become more ingrained in our interactive processes.
Are we worried enough about the harms of AI?
For the most part, regurgitating web-based information is only a slight alteration to our normal processes, but there are still risks. People are forming relationships with AI-generated characters and outsourcing their critical thinking to them, and such characters are set to become more prevalent in social media apps. Meanwhile, millions of people have already been duped by AI-generated images of starving children, lonely elderly people, or inventive kids in remote villages.
Sure, we didn’t see the expected influx of politically motivated AI-generated content in the most recent U.S. election, but that doesn’t mean AI-generated content isn’t having a profound impact in other ways, swaying people’s opinions and even how they interact. We’re ignoring the dangers and ill effects already present because we don’t want to see other countries develop faster.
Social media has had a similar effect, giving billions access to tools associated with various types of harm. Only in the past ten years have we seen any serious effort to reduce those risks, and now we’re trying to slow things down, with some regions looking at banning teens from social media to protect them.
What have we learned from all this?
It seems not. Once again, the corporations that stand to gain the most from mass adoption are pushing a capitalist strategy of moving fast and breaking whatever gets in the way.
This is not to suggest that AI is bad, or that we shouldn’t use generative AI to automate various processes. But I do argue that in the White House’s proposed AI Action Plan, and in similar initiatives, these risks should be treated as important factors in AI development.
The answer, it seems, is no. In ten years, we will be looking for ways to limit the use of generative AI and curb its harms.
The major players are going to win, and I expect these copyright claims will eventually fade in favor of rapid innovation.
The AI hype is real, and the market is expected to grow to $1.3 trillion.
As a result, critical thinking, interpersonal skills, and mental health will be affected at scale.