Why Elon Musk Is Stepping Down from AI Safety Group He Co-Founded

Elon Musk speaks at the International Astronautical Congress on Sept. 29, 2017 in Adelaide, Australia, where the Tesla and SpaceX CEO detailed the long-term technical challenges that need to be solved in order to support the creation of a permanent, self-sustaining human presence on Mars. (Image credit: Mark Brake/Getty)

Elon Musk, entrepreneur and CEO of Tesla and SpaceX, may soon have a little more time on his hands: he is departing the board of the artificial-intelligence safety group OpenAI, according to a blog post from the organization.

The departure is likely the result of Tesla's move into the realm of A.I., which he said in 2017 would be the "best in the world" and would even be able to "predict your destination."

Musk will continue to "donate and advise the organization," OpenAI said in a blog post Feb. 20, adding that "As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon."

Musk and Y Combinator CEO Sam Altman co-founded the nonprofit venture in December 2015, with backing from the likes of Peter Thiel (an early backer of Facebook), Reid Hoffman (who co-founded LinkedIn), Jessica Livingston (founding partner of Y Combinator), Greg Brockman and computer scientist Ilya Sutskever, according to the OpenAI website.

OpenAI's mission is to develop safe AGI (artificial general intelligence) and ensure those developments are made public; its 60 or so researchers are tasked with long-term research, according to the company. On Tuesday (Feb. 20), OpenAI researchers published a paper on the pre-print site arXiv.org, detailing the possible security threats that come with "malicious" A.I.

In fact, Musk has sounded the "evil A.I." alarm several times. On Aug. 11, 2017, he tweeted that artificial intelligence poses a bigger threat to humanity than North Korea, even as the possibility of a nuclear missile attack escalated after President Donald Trump and North Korean leader Kim Jong-un exchanged threatening words. In July 2017, he told a gathering of state governors that the government needs to regulate A.I. before robots start "killing people."

Musk's departure from the OpenAI board could mean big things for Tesla. As Futurism reported, the move "could signal that Tesla is more deeply committed to their own AI projects than we thought."

The Futurism report added, "Those who have had their ears to the ground for any rumblings that Tesla is ready to deliver vehicles capable of Level 5 autonomy could take this new OpenAI development as a sign that the company is inching closer to that elusive goal."

No company has yet reached Level 5 autonomy, at which a driverless car could navigate any road, under any conditions a human driver could handle, with the human "driver" needing only to input a destination, according to Car and Driver.

Originally published on Live Science.


Jeanna Bryner is managing editor of Scientific American. Previously she was editor in chief of Live Science and, prior to that, an editor at Scholastic's Science World magazine. Bryner has an English degree from Salisbury University, a master's degree in biogeochemistry and environmental sciences from the University of Maryland and a graduate science journalism degree from New York University. She has worked as a biologist in Florida, where she monitored wetlands and did field surveys for endangered species, including the gorgeous Florida Scrub Jay. She also received an ocean sciences journalism fellowship from the Woods Hole Oceanographic Institution. She is a firm believer that science is for everyone and that just about everything can be viewed through the lens of science.