Artificial Intelligence and Symbiosis

There has been some debate recently about what the future of artificial intelligence will mean for human beings. Some believe that it will enrich our lives by saving us from the drudgery of meaningless work. Others believe that it will be our downfall, that our creations will turn on us and destroy us. I am not sure that either will come to pass; instead, I believe humans and AI will begin to merge.

In a way, this process has already begun. We use AI on a daily basis, and it uses us. For example, we use AI to direct us efficiently to information online and, at the same time, it is learning about us: all the information that we provide through the internet is now used to teach AI how we think and act.

Humans are inherently technological as well as biological. Obviously, being human has to do with being a particular species of organism, but it also means using tools. We are the supreme tool users of the world, so much so that tools are part of our very existence. From the beginning of human history, tools have been transforming what it means to be human. We have civilization because we developed technologies that enabled one person to feed many: once the ratio of calories produced to calories consumed began to increase, we also began to see the incredible specialization and diversification of the fields of knowledge we have today.


An early computing machine, the Pilot ACE. Image Copyright Wikipedia.

We will inevitably become even more entwined with this technology over time. For example, DARPA is working on neural interface technology – an implant about one cubic centimetre in size that would serve as an interface between the brain and future versions of the devices we use today. Such innovations would enable us to ‘talk’ to our devices and ‘read’ information in the same way that we now talk to ourselves or read silently.

This is more far-fetched, I know, but we may even begin to retrieve information without being aware that we are doing it. Imagine that you find yourself thinking about New York City: its history, its geography. Then you might ask yourself: is this something I learnt, or something I ‘looked up’? What would it mean to learn if we could simply think about something and ‘know’ it immediately?

The human mind will continue, however, to have its limitations. We may have access to more information than ever before, but this does not mean that we will be able to use it. Artificial intelligence is already being used to analyse data: there is, for example, IBM’s Watson, an AI that won Jeopardy! and whose ‘reading’ and analytical skills are now being used to help fight cyber-crime. There is also Google DeepMind’s Go-playing AI, ‘AlphaGo’, which recently did something that no other AI had ever done before: it beat a human professional of the highest (9-dan) ranking. AlphaGo may have been ‘taught’ to play by analysing human games, but it is clear that it will now begin to teach humans. Indeed, if its skills continue to increase, it will most likely go on to revolutionise the way the game is played.


Dr Alan Turing, whose theoretical ‘universal machine’ laid the foundations of modern computing. Every year, a test named after him is held to determine the best ‘chatbot’ AI: human judges chat ‘blind’ at computer terminals and must then ascertain whether they were speaking to a human or an AI. Image Copyright Wikipedia.

It is important to understand that there are different types of artificial intelligence. Watson and AlphaGo are not examples of what’s called ‘strong AI’ – intelligence with self-conscious awareness and intentionality. They are remarkably good at the tasks they perform, but they are not ‘people’ yet. Indeed, part of their strength may be that they are not people: it is an advantage, for example, that AlphaGo does not experience hunger or stress or sadness. Many applications of AI are like this – they are tasks for which consciousness and intentionality are simply not required.

I will conclude with this question: what would be the point of creating ‘strong AI’, apart from seeing whether or not it can be done? If we continue to see AI in an instrumental sense, then it would actually be morally wrong to create it: no ‘person’ should be an instrument. This is especially true when it comes to intentionality – are we going to create beings with wants and desires, and then prevent them from pursuing those desires?