Study says AI is far from ‘transparent’

Artificial intelligence (AI) models, whose number has doubled since the launch of ChatGPT, lack transparency and pose a risk to the applications built on top of them, according to a study by Stanford University published on Wednesday (19).

A new index designed and calculated by researchers at this California university indicates that the most transparent model among the ten models evaluated is Llama 2, an artificial intelligence system launched by Meta in July that can be freely reused.

However, it received a score of only 54%, which is still very inadequate, according to the study's authors.

The GPT-4 language model created by OpenAI – the Microsoft-funded company behind the famous ChatGPT bot – scored just 48% on transparency.

Other well-known models, such as Google’s PaLM 2 or Claude 2 from Anthropic (a company backed by Amazon), rank lower on the index.

All so-called “foundation” models should aim for 80% to 100% transparency, estimates Stanford researcher Rishi Bommasani.

The lack of transparency makes it harder for companies “to know whether they can build secure applications based on these models” and “for academics to trust these models in their research,” the study explains.

It also complicates matters for consumers who want to “understand the limitations of the models, or seek redress for any harm they cause,” the study adds.

Specifically, “most companies do not disclose the scope of copyrighted content used to train their models. Nor do they disclose the use of human labor to clean the training data, which can be a major problem.”

“No company provides information on how many users depend on its model, nor statistics on the countries or markets that use it,” Bommasani points out.

According to the authors, this transparency index could be used in the future by political and regulatory authorities.

The European Union, the United States, China, the United Kingdom and Canada have all declared their desire for greater transparency in the field of artificial intelligence.

“Artificial intelligence holds great promise of amazing opportunities, but it also represents risks to our society, our economy, and our national security,” US President Joe Biden told executives from companies in the sector in July.

The issue was highlighted during the G7 meeting in Japan in May, and the UK is preparing to host an international AI summit in November.

