Here are ten prominent warnings about AI, each from a different author, with the quote, date, and source:
- “I am in the camp that is concerned about superintelligence…I think that is the most pressing existential risk” – Elon Musk, 2014, MIT Aeronautics and Astronautics Department’s Centennial Symposium. (source: https://www.youtube.com/watch?v=2C-oHVEsUv4)
- “The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking, 2014, interview with BBC. (source: https://www.bbc.com/news/technology-30290540)
- “If you create a superintelligence that has a goal structure that’s not well aligned with human values, then it will destroy humans as a side effect of pursuing its goal.” – Stuart Russell, 2015, TED talk. (source: https://www.ted.com/talks/stuart_russell_how_ai_might_make_us_better_people)
- “If AI does not represent humanity, it will destroy humanity.” – Kai-Fu Lee, 2018, Forbes article. (source: https://www.forbes.com/sites/forbestechcouncil/2018/09/17/why-ai-will-be-invisible/?sh=654b02db7527)
- “I think we should be concerned about artificial intelligence. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” – Bill Gates, 2015, interview with The Verge. (source: https://www.theverge.com/2015/1/28/7926001/bill-gates-reddit-ama-elon-musk-ai-annihilation)
- “The consequences of creating something that can out-think us are potentially more disastrous than any human fallouts in history.” – Gray Scott, 2017, Futurism article. (source: https://futurism.com/ai-could-destroy-us-stephen-hawking)
- “The thing that people fear about AI is not malice but competence…A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” – Max Tegmark, 2018, book “Life 3.0: Being Human in the Age of Artificial Intelligence.” (source: https://www.goodreads.com/book/show/32758412-life-3-0)
- “As AI gets more powerful, its negative effects will be magnified. We need to ensure that AI is aligned with human values and ethics from the outset.” – Demis Hassabis, 2018, Wired interview. (source: https://www.wired.co.uk/article/demis-hassabis-ai-deepmind)
- “If we see a superintelligence whose objectives are misaligned with human flourishing, it’s likely the last thing we’ll ever see.” – Nick Bostrom, 2015, TED talk. (source: https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)
- “We should worry about AI when it gets smarter than us, but we should be terrified when it gets smarter than we are planning for it to be.” – Neil deGrasse Tyson, 2018, interview with Business Insider. (source: https://www.businessinsider.com/neil-degrasse-tyson-ai-terrifying-2018-5)
The text above was generated with OpenAI’s free tool, using the ChatGPT-3.5 Mar 14 version. Funnily (or eerily?) enough, the tool struggled to generate this answer and errored out several times. Almost as if it didn’t want to share warnings about AI. But that’s just a coincidence, right? Also, the few source links I did check no longer worked; they were simply fetched from the training data. Yet another example of how once something is put on the internet, it stays there forever, in one form or another.
Anyway, I’ll leave you with one more bonus quote. I don’t know who Eliezer Yudkowsky is, but this one was too chilling to leave out:
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” – Eliezer Yudkowsky, 2008, “Artificial Intelligence as a Positive and a Negative Factor in Global Risk,” in Global Catastrophic Risks (Oxford University Press).