I have been thinking about this for quite some time and have had discussions with a friend of mine who is a programmer, so I decided to write a little piece on it, in an effort to clear my own thoughts and maybe get some feedback from the people reading this.
We are now in the beginning age of artificial intelligence. There is no doubt about it. Those of us who use social media on a regular basis interact with it every day. I recently had a run-in with what I think was an AI when my Indiegogo campaign for my latest book project, Ismark: The Golden Bond, got flagged for “a negative experience.” I have no idea how that happened or whether it was a bot, but it sure felt like a random occurrence, so I chalked it up to ignorance and early AI rather than malicious people trying to stop me from reaching the people who might want to read my books.
Now this illustrates very well the problems that might occur as we move slowly down the road toward smarter and smarter AIs. Context and seeing the bigger picture may not always be a priority for those who build these AIs, and in building them they may tread on many people’s ability to express themselves.
You may say that flagging one campaign like this is not the biggest deal in the world, and I would actually agree with you. I was annoyed when it happened, but it did not stop my campaign from running and I could still post about it; it did, however, limit its reach, and that got me thinking. What if it happens on a grander scale? I have friends who have been banned from various social media platforms for discussing suicide or for trying to help people who were suicidal.
Just quoting “Go kill yourself!” in a conversation got one of my friends permanently suspended while he was trying to talk to someone about this. He was not telling the person to go kill themselves; he was debating the impact of such a statement. When this happens to someone of little social importance, they simply disappear. And what if this was a suicidal person trying to express their anguish to the world, screaming for help, listing all the things their brain was telling them?
This is one of many reasons why I am worried, because I don’t know if AI can ever manage to see that context. Does AI have the creativity to understand other people’s emotions? Can it manage empathy for other people? Especially in an age where we already seem to have so very little empathy for each other.
My friend, the programmer, said that you can program a lot of variables into an AI, but I am not sure that designing “a soul”, for lack of a better term, is even possible. Can we replicate what millions of years of evolution have created? The ability to be creative, tell stories and feel empathy seems like a uniquely human attribute, and I am not sure it can be created by us when we seem to have so little understanding of it ourselves, or time for it.
Empathy, context and creativity seem closely linked in my experience as well. Whenever I have discussed things with hypercreative or otherwise very creative people, they seem more willing to grant that some behavior may be explained differently under certain circumstances, and therefore more open-minded about other people’s motives, behaviors or opinions. One of my favorite people to discuss things with is someone very creative who disagrees with me on a few things. That person is always willing to listen to me, though, because they understand my reasoning.
Can an AI do that? Is it even possible? I always try to be an optimist when it comes to technology, and I do believe we can solve most problems by automating a lot of things, but I doubt we can do that with creativity and our social interactions.