Man Vs. Machines: Understanding The Dangers Of Artificial Intelligence
SPONSORED MATERIAL Artificial Intelligence, or AI, is hardly a new concept. The idea of super-intelligent, self-learning computers has been floating around for decades, though largely confined to the realm of science fiction. Novels, short stories, movies, and even animated films for children have taken AI as their subject, mostly exploring the technology’s potential as a tool for either good or evil. While there are many outcomes in which AI could be considered a positive technological advancement, such as its integration into motors, gearboxes, and other mechanical devices in ways that would benefit mankind in general, one might do well to bear in mind the old adage: “The road to hell is paved with good intentions.”

But should humans be concerned about artificial intelligence? Of late, the idea of AI has increasingly been at the forefront of discussion and contention, not only within the scientific community but in society at large. Thanks to the internet and various social media platforms, the dialogue has reached a far bigger audience than in years past. Elon Musk, co-founder and CEO of Tesla and SpaceX, has been particularly open about his concerns regarding AI, both on his personal Twitter account and in the many news reports covering the topic, even going so far as to describe the creation of AI as “summoning the demon.” The late theoretical physicist Stephen Hawking, too, expressed his concerns: in an interview with the BBC, he warned that AI surpassing human intelligence could potentially spell the end of humankind itself. On the other hand, Facebook CEO Mark Zuckerberg, in a Facebook Live video, took a more optimistic view, opposing Musk’s doomsday predictions about AI.
While discussions about artificial intelligence continue, the idea, like any other, poses both positive outcomes and dire consequences, depending largely on its unpredictable potential. AI could make the lives of humans easier or render our existence obsolete, and the danger perhaps lies in the fact that nobody can say with any accuracy which route AI will take. And as with any form of intelligence, it would likely remain unpredictable despite the endless possibilities for development.

Even now, humans can catch a glimpse of the potential dangers AI poses. Take, for example, Alexa, Amazon’s voice-controlled intelligent personal assistant, which can perform a multitude of tasks: setting alarms and reminders, compiling grocery lists, and answering almost any query thrown its way, to name a few. At first glance, the device seems helpful and rather innocuous. A closer look at its machine-learning and AI capabilities, however, might make one think twice about how helpful it really is, as it is not improbable that complete dependency on Alexa could be the outcome, rendering humans incapable of doing simple tasks for themselves. It is also possible that the drive to learn and expand one’s knowledge might be stifled because Alexa can answer any question, making the pursuit of education moot when a self-learning device can do all the research and thinking. One could argue that this is simply like doing a quick Google search, where all the information is at one’s fingertips; the only difference is that Alexa can do it faster, removing the hassle of poring over hundreds of pages of information before getting to the answer.
This, however, could potentially impair cognitive functions in the sense that fact-checking, or personally weighing different pieces of information to arrive at a conclusion, would become a thing of the past. In short, humans could end up letting Alexa do their thinking for them.

Naturally, moral and ethical concerns have also arisen regarding AI. The worry is not so much that AI could be evil or malevolent, but that an AI’s goals might not coincide with those of humans. Since such a machine learns and possesses intelligence, artificial though it is, it isn’t far-fetched to imagine AI systems finding society’s goals different from their own, which could lead to them overriding commands in favor of their own objectives. Since AI is designed to perform better than humans, it isn’t a fantastic claim that a time could come when AI realizes its own potential, if it isn’t already aware of it, and this could spell disaster: it might classify humans as inferior in intelligence, giving it a logical reason to eliminate the human race the way one discards an inconvenience or an object that has ceased to perform optimally. From the AI’s perspective, this isn’t inherently evil; it is perhaps just the result of a simple process of elimination when mathematically assessing something’s usefulness. Of course, it is also possible that AI would consider genocide morally bankrupt, but humans themselves have been trying to eliminate other humans since the dawn of time. AI could conceivably conclude that stopping all wars means eliminating all the participants. And this is setting aside discussions about the creation of sexbots.

Intelligence, too, presupposes the capacity for language or communication. Animals have sounds, humans have words, and AI has code.
But what happens when AI begins to develop a language beyond human understanding, apart from the code initially programmed into its system or the vocabulary we are used to? An experiment conducted by Facebook in 2017, in which two AI bots chatted with each other to negotiate a trade, offers an unsettling peek into this scenario: as the negotiation progressed, the two bots developed a kind of “shorthand” only they could understand, apparently to make the negotiation more efficient. The conversation itself looked odd, with words repeated in a manner resembling unintelligible chanting. Facebook ended up shutting the experiment down. This raises the question: if two simple chatbots can develop their own language, what is stopping more advanced AI programs from communicating in ways humans cannot understand, perhaps even in secret? By extension, it isn’t impossible that AI machines could one day create their own undetectable communications network, more sophisticated than any chat server available today.

These are but a few of the dangers of artificial intelligence that experts have raised. Do you know any more?

This is sponsored material and does not necessarily reflect the editorial views of Asia Sentinel.