Artificial Intelligence: Ethically Moral?

Every day we face decisions, but how can we tell if something we do is ethical? The Merriam-Webster dictionary defines ethics as the discipline dealing with what is good and bad and with moral duty and obligation. In this blog post, I will be debating whether companies’ use of artificial intelligence (AI) software is inherently ‘ethical’. AI software uses algorithms to teach a computer program how to react to certain circumstances and commands. Examples include Siri on the iPhone and Alexa on Amazon devices, among many others. These programs are quite advanced and require meticulous engineering to respond to human conversation, but they come with their own downfalls. The downfall I will be focusing on today is privacy, or the lack thereof.

Many AI assistants respond to commands such as “Hey Siri” and “Alexa, play…”, which requires them to be listening at all times in order to recognize those commands. If a device is listening at all times, what else is it hearing? AI software hears our every word, which gives the company behind it the ability to collect data on its users, much of it very personal. Where is the data they collect from us going? Companies such as Facebook have been in hot water lately for selling users’ personal information to third-party companies, so what keeps the companies developing AI software from doing the same? Some may not worry about where the personal information collected by AI software goes, but this can lead to some sticky situations. For example, a company’s executive board could be in a meeting discussing important, confidential figures. Most people nowadays carry a smartphone everywhere, so there are bound to be a few in that meeting, listening to every word being said. Information that was confidential beforehand is no longer a secret. And this isn’t confined to executive board meetings; it extends to our daily lives.
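The always-on behavior described above can be sketched as a simple loop. This is a hypothetical illustration, not any vendor’s actual API: the device processes every utterance it hears, but only *acts* on the ones that begin with a wake phrase — everything else still passes through the microphone.

```python
# Illustrative sketch of always-on wake-word detection (hypothetical;
# function and variable names are invented, not a real vendor API).

WAKE_PHRASES = ("hey siri", "alexa")

def process_stream(utterances):
    """Scan a stream of transcribed utterances for wake phrases.

    Returns (commands, overheard): commands are utterances that began
    with a wake phrase; overheard is everything else the microphone
    still heard along the way.
    """
    commands, overheard = [], []
    for text in utterances:
        lowered = text.lower().strip()
        if lowered.startswith(WAKE_PHRASES):
            commands.append(lowered)
        else:
            # Not a command -- but the device still "heard" it.
            overheard.append(lowered)
    return commands, overheard

commands, overheard = process_stream([
    "Alexa, play some jazz",
    "the Q3 numbers are confidential",
    "Hey Siri what's the weather",
])
```

The point of the sketch is that the `overheard` list exists at all: to catch the wake phrase, the device necessarily processes speech that was never addressed to it.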

Consequentialist, or teleological, ethics can be applied to this situation. Teleological ethics focuses on achieving the greatest good for the greatest number of people. AI software that listens in on users and collects data may benefit the companies that develop it (and possibly third-party companies, if the information is sold), but it harms users by invading their privacy. The users outnumber the companies by a long shot, so one could argue that this use of AI is inherently bad and should be abandoned.
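The “greatest good for the greatest number” reasoning above can be reduced to a toy tally. All the figures below are invented purely for illustration — the point is only that a small number of large gains can be outweighed by a vast number of small harms:

```python
# Toy consequentialist tally (all numbers invented for illustration).
# Teleological ethics weighs total benefit against total harm across
# everyone affected by an action.

def net_utility(stakeholders):
    """Sum utility across groups: headcount * per-person utility."""
    return sum(count * utility for count, utility in stakeholders)

# Hypothetical figures: a handful of companies each benefit a lot,
# while a million users each suffer a small privacy harm.
stakeholders = [
    (10, +1000),        # companies profiting from collected data
    (1_000_000, -1),    # users whose privacy is eroded
]

verdict = net_utility(stakeholders)  # negative => net harm overall
```

On these made-up numbers the tally comes out negative, which is the shape of the argument in the paragraph above: when users vastly outnumber the companies, even a small per-user harm can swamp the corporate benefit.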

Knowing that AI software could be listening at any given moment, would you be more wary of what you say in the presence of a device? With the use of smart devices ever on the rise, is there any way to escape the listening ears of AI?