
Sutskever wants to create Safe Superintelligence

20 June 2024


This will end well

Ilya Sutskever, one of the founders of OpenAI, has started a new company called Safe Superintelligence.

The new venture is focused on building a powerful AI system within a dedicated research organisation, with safety as its central priority. Alongside Sutskever, the company is co-founded by investor Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, known for his work training large AI models at OpenAI.

For many years, experts have considered how to make AI systems safer, but practical solutions have been limited. The best approach so far involves combining human judgment with AI to guide the technology towards beneficial outcomes for humanity. However, the question of how to prevent an AI system from causing harm is still largely theoretical.

Sutskever has spent considerable time reflecting on these safety challenges and has developed several potential strategies, although Safe Superintelligence is not ready to share details yet.

"At its core, a safe superintelligence system should be designed to prevent widespread harm to humanity," explains Sutskever.

"Beyond that, we aim for it to contribute positively. We're considering building upon fundamental values that have supported liberal democracies for centuries, such as liberty, democracy, and freedom."

He also notes that while large language models have been central to AI development, Safe Superintelligence aims to create something much more advanced. Current AI systems are limited to simple interactions, but Sutskever envisions a more versatile and powerful system.

"Imagine a vast super data centre autonomously advancing technology. That's quite an extraordinary concept, and ensuring its safety is our goal," he says.
