Artificial intelligence (AI) is a rapidly evolving technology that has the potential to revolutionise the world as we know it. However, with great power comes great responsibility, and many experts in the field are becoming increasingly concerned about the potential dangers of AI.
From Apple co-founder Steve Wozniak to Google DeepMind CEO Demis Hassabis, these thought leaders have shared their perspectives on the future of AI, highlighting the need for responsible development and ethical use of this powerful technology. Let’s take a look at what they’ve said.
Steve Wozniak
In an interview with CNN, Apple co-founder Steve Wozniak expressed his concerns about the rapid development and potential misuse of artificial intelligence (AI) in various domains, saying that the technology needs to be slowed down.
However, he denied that he was afraid of large language models, saying he doesn’t “lead his life in fear” and isn’t worried about AI itself. Rather, he’s concerned about the technology falling into the wrong hands. He added that all-powerful technology comes with its share of good and bad, so the tech community needs to be responsible and help the public prepare for “what’s coming.”
Wozniak, who was one of the signatories of the open letter in March requesting that AI companies pause development of their technologies for six months, also said he was frustrated by the current use of AI by “people who want to make a name for themselves or make money for themselves.”
Timnit Gebru
Timnit Gebru, a prominent AI ethicist and founder of the Distributed Artificial Intelligence Research Institute (DAIR), has warned of the dangers of Artificial General Intelligence (AGI) and the biases of machine-learning algorithms.
In a lecture at Stanford University in February 2023, Gebru criticised the idea of building a system that can perform any task, saying it has roots in eugenics and transhumanism. She also questioned who would benefit from such a system and who would be harmed by it.
Gebru has also exposed the exploitation of labour and the potential for abuse behind AI technologies, such as text-to-image models and large language models. She was fired from Google in 2020 after refusing to withdraw a paper on the risks of large language models. Gebru is a co-founder of Black in AI, a nonprofit that promotes diversity and inclusion in the field.
Geoffrey Hinton
Geoffrey Hinton, a renowned computer scientist and a pioneer of artificial intelligence (AI), has quit his job at Google to warn about the potential dangers of AI chatbots. Hinton, who is often called the “godfather of AI”, told the BBC that he regretted some of his work and that he feared that AI systems could soon surpass human intelligence and be exploited by “bad actors” for malicious purposes.
He cited the example of Russian President Vladimir Putin, who he said might use AI robots to harm Ukrainians. Hinton also said that AI chatbots like GPT-4, built on the neural-network techniques he helped pioneer, have much more general knowledge than any human and can learn and share information faster. He described AI as a “new and better form of intelligence” that is very different from human intelligence.
Demis Hassabis
Google DeepMind CEO Demis Hassabis has a more optimistic vision of AI’s future. Speaking at The Wall Street Journal’s Future of Everything Festival, Hassabis said that AI could achieve human-level cognitive abilities within the next five years. He also said that the pace of AI research could increase from its already fast rate, marvelling at the incredible advances of the past few years and saying he saw no signs of a slowdown.
“I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away,” he was quoted as saying by the publication.
Sundar Pichai
Google CEO Sundar Pichai shared his views on the future of artificial intelligence in a recent interview with CBS’s 60 Minutes. He said that AI will be as good or as evil as human nature allows, and that society needs to be prepared for the rapid pace of technological change. He also demonstrated Google’s new chatbot, Bard, which can generate content like speeches, blog posts, and emails.
Bard is a self-taught program that can converse with humans like a peer, with creativity, truth, error, and lies. Pichai said that Bard is not intended to replace human writers, but to help them brainstorm ideas and find inspiration. He also said that Google is committed to developing AI responsibly and ethically, and that he hopes other companies will follow suit.