Artificial intelligence (AI) is a staple of science fiction and pop culture, appearing in countless films and video games. Celebrity scientists and entrepreneurs, from Elon Musk to Professor Stephen Hawking, are talking about it. However, one has to wonder whether technology ranging from cyborgs to sentient computers is truly to the benefit of mankind.
AI has always been a polarising subject for most who encounter it. Even in works of fiction, there tend to be two opposing views. The first is optimistic and practical, focusing on the potential benefits of technological advancement, such as training surgeons and testing new life-saving surgical procedures. The opposing view is far more pessimistic, with a heavy focus on the dangers: mass surveillance, and the impoverishment of countries where outsourced physical or menial labour is a primary export. This concern extends to apocalyptic predictions of a worldwide unemployment epidemic as robots replace humans in the workplace. To better understand the public perception of AI, one has to analyse its history, and how humans have progressed alongside it.
A brief history of AI
AI can be traced back to Homer's Iliad, which describes seemingly autonomous humanoids forged in the fires of Hephaestus' workshop. These mechanical automatons were fully functional beings, displaying an understanding of human intelligence and an arguable degree of sentience. Homer's characterisation, dating back to around the eighth century BC, served as the basis for artificial intelligence, which subsequently lingered in the realm of philosophy and myth for many centuries. It was not until the thirteenth century AD that the first automatons were reportedly created by human hands, when the inventor Ismail al-Jazari built a humanoid robotic boat carrying four mechanical musicians, believed to be powered entirely by the flow of water.
The Renaissance and Descartes
During the Renaissance, particularly the fifteenth and sixteenth centuries, automation saw a resurgence through the use of clockwork machinery. The seventeenth century brought another development that enabled the conceptualisation of AI: the philosopher René Descartes introduced the notion that humans and animals are no different from highly complex machines. Contemporary thinkers still find themselves adhering to this paradigm of Cartesian mechanism.
The world’s first computer
The eighteenth century saw further refinement of clockwork automatons, and it was not until the nineteenth century, when Joseph Marie Jacquard invented the Jacquard loom, that AI made another developmental leap. The loom was remarkable for being the very first programmable machine: by feeding it different punched cards after its construction, it could be set to serve different purposes. Regarded by many as the forerunner of the modern computer, it served as an inspiration to later inventors.
The twentieth century saw the first use of the word “robot”, coined in Karel Čapek's 1920 play R.U.R. There was also the proposal of the universal Turing machine, and the widely known Turing test, by Alan Turing. Lastly, Isaac Asimov's three laws of robotics, first introduced in 1942, were collected in I, Robot in 1950.
The modern age of robotics
The second half of the twentieth century ushered in the modern age of robotics, defined by the development of stored-program electronic computers. The term “artificial intelligence” was coined by John McCarthy at the first Dartmouth workshop in 1956. Since then, the Massachusetts Institute of Technology (MIT) and International Business Machines Corporation (IBM) have been leading developers in the field of robotics and artificial intelligence, making great strides as more institutions around the world join in the study and exploration of AI. There were so many advancements in this field during this period that they could not possibly be summarised in a brief history.
Robots that mimic human emotion
This brings us to the twenty-first century, in which interactive robot pets and other smart toys have become widely available on the high street. MIT published a dissertation on Sociable Machines describing Kismet, a robot designed to display and mimic human emotion. And in 2017, Sophia, a robot produced by Hanson Robotics, became the first robot to be granted citizenship of any country when Saudi Arabia bestowed citizenship upon her at the Future Investment Initiative summit in Riyadh.
Pros and cons
Throughout the years there have been many vocal supporters of AI, as well as equally vocal detractors. The advantages of AI are certainly numerous.
Delegating certain jobs to robots, particularly roles considered mundane, might free people to pursue higher, more creative pursuits. Science, exploration and the arts would become the realm of humans, while machinery takes on the physical and menial labour.
Machines can be tasked with solving complex algorithms, using cognitive technologies that allow artificial intelligence to work much faster than the human mind is capable of, which in turn increases work efficiency. Machines can use these algorithms to make complex decisions quickly, extending to tasks where precision is of extreme importance. AI could also help eliminate human error: humans make mistakes through tiredness, bias, and other cognitive limitations that can overwhelm motor skills or perception. Artificial intelligence is immune to these effects and could carry out such tasks without fatigue or error.
Taking on danger
Another direct benefit of using machines is being able to expose them to conditions that are extremely dangerous, even deadly, for humans. Building robust and adaptable AI machines to take on jobs that are impossible for humans to perform means that scientists can explore harsh climates, ocean depths and even other planets. All of these endeavours can very well be undertaken by AI where human capacity falls short or where there is a risk to life.
Disadvantages of AI
The disadvantages of AI, and the ethical questions that come with the advent of increasingly smart robots, are debated by many. One of the primary concerns is the loss of jobs. Considering the growing number of machines replacing humans in repetitive tasks, this fear is not unfounded. Where once a factory was operated by humans, machines have now taken their place, working tirelessly and with far greater efficiency. Artificial intelligence is very likely to replace jobs requiring a low level of specialisation, such as cashiers at fast-food restaurants. In more extreme cases, AI might even take on a role as advisor to medical specialists, as IBM's Watson already does.
Loss of power
One of the risks of artificial intelligence is the loss of power held by humans. This is currently most notable in the decision making process, and the execution of those very same decisions. An extreme example would be the doomsday device parodied in Stanley Kubrick’s film Dr. Strangelove, where the inability of humans to override artificial intelligence leads to the destruction of all mankind. This extends to fear of an AI takeover, where even if it was once possible for humans to control machines, the machines themselves evolve to a point where they become the dominant form of intelligence on earth, and consequently destroy the human race.
The issue of moral judgement and the replacement of humanity
One subject that is often up for debate is whether an artificially created intelligence can have the same capacity for morality that humans do. Can AI be a moral being in the same way as a human being? Can machines make purer, more moral decisions because they are free from bias? Or do they lack emotion and empathy, and as such cannot be classed as moral beings at all?
Professor Stephen Hawking has warned that, due to being limited by slow biological evolution, humans risk not being able to compete with the advancement of AI, which could potentially re-design itself, and overtake human development at an alarming rate. The creation of a fully autonomous artificial intelligence, with the capacity to learn and reproduce, could render humanity entirely obsolete.
How AI is portrayed in fiction
Across different media, AI is portrayed in numerous lights, almost always in a way that distinguishes the machines from humanity. In the vast majority of fictional representations, AI takes the role of either the primary antagonist, such as the murderous computer HAL in 2001: A Space Odyssey, or a secondary one, such as Ash the android from the first Alien film.
An inevitable element of these stories is the conflict in which humans find themselves with artificial intelligence. Films often depict this as the crux of the problem, having characters question what it means to be human. They are forced to evaluate humanity's role in an expanding universe in which everything a human does can be artificially reproduced by something humans themselves have made. Not only is AI engineered by humans, but in its creation it is virtually indistinguishable from the countless tools already manufactured to make human lives easier.
Another fear often played upon in the entertainment industry is the potential loss of human control over artificial intelligence, and the risk of machines eventually turning on humanity. Films such as the Terminator franchise and the Matrix trilogy build upon the notion of a sentient race of machines, originally conceived by humans, which eventually takes over and either exterminates or enslaves mankind for its own purposes. Literature does not shy away from these topics either, with a plethora of science fiction novels based around the apocalyptic idea of machines ruling the world. And naturally, though often adapted from books and films, the medium of video games has also broached the subject. The critically acclaimed Mass Effect series by BioWare explores the potential conflicts between humanity and truly sentient artificial intelligence, and the game SOMA is set in a post-apocalyptic world where machines begin developing human characteristics, and even consciousness, forcing the player to question what it means to be human.
Though fictional portrayals suggest that the dominant attitude towards artificial intelligence is one of fear and hesitation, a study published in 2017 shows that public opinion on AI has grown more positive over the years. As the public is exposed more and more to the idea of artificial intelligence and what it entails, general opinion has been shown to be more accepting of AI. The aforementioned issues are no longer seen as such a large barrier to the development of artificially intelligent machines, as reported by the Association for the Advancement of Artificial Intelligence.
The benefit to humankind
Discussions of the potential benefits AI can bring to humankind are increasing. However, alongside this trend of optimism about AI and what robotisation has come to provide for our species, research also suggests that fear of a loss of control has become increasingly prevalent. It is often asked whether humanity could compete with a self-aware organism able to process vastly more information, far faster than humans ever could. Could we still retain our place in a world where machines work faster, learn more swiftly, and prove more adaptable than we are?