Safe Superintelligence

OpenAI co-founder Ilya Sutskever’s new venture, Safe Superintelligence, secures $1 billion for safe AI. “Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence Inc. SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.” Link

This entry was posted in Artificial Intelligence.