Meta AI chief Yann LeCun fears a bleak AI future controlled by a few companies



Summary

What is the real risk of AI moving forward: regulation or openness?

Yann LeCun, head of AI research at Meta, comments on Twitter about the potential consequences of regulating open AI research and development. He warns that regulation could lead to a few companies controlling the AI industry. In his view, this is the most dangerous scenario imaginable.

LeCun criticizes AI research leaders

LeCun’s tweet was directed at AI pioneers Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have repeatedly and publicly expressed concerns about the potential negative impacts of AI.

According to LeCun, the majority of the academic community supports open AI research and development, with AI pioneers Hinton, Bengio, and Russell as notable exceptions.


He argues that their “fear-mongering” provides ammunition for corporate lobbyists. The real AI disaster would be if a few corporations took control of AI, LeCun says.

Specifically, LeCun accuses OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and OpenAI Chief Scientist Ilya Sutskever of massive corporate lobbying and of attempting to regulate the AI industry in their favor under the guise of safety.

“I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous,” LeCun writes.

Safe and open AI is possible – and desirable

LeCun advocates a combination of human creativity, democracy, market forces, and product regulation to drive the development of AI systems. He believes that safe and controllable AI systems are possible.

Meta AI’s chief scientist is researching a new autonomous AI architecture that can be safely controlled by objectives and guardrails. He believes the fuss about the dangers of current AI models, especially large language models (LLMs), is overblown.

