AI ETHICS AND THE FUTURE

A tumbleweed rolls across a barren landscape.
Abandoned skyscrapers loom over streets overgrown with weeds. Throughout this city everything is silent except for an all-encompassing electric hum. As you walk through the ruins of what used to be a mall, you see an advertisement for what seems to be a new AI update. "The singularity in your palm," declares a smiling humanoid; the rest of the poster is torn.

As you make your way out of the mall you see billboards down a stretch of what used to be the highway. The billboards flash periodically, showing an emergency symbol. You decide to walk towards the highway, drawn by a light in the far-off distance.
Finally, you reach the outskirts of the city, and that's when you see it: a structure in the distance, impossibly large, with drones flying around it. It is the only activity you've seen since arriving here.
Something about the structure beckons you towards it, and you comply, striding across the dusty desert. When you reach it after what seems like hours, you find a cold, grey metallic tower reaching into the sky. As you admire it and the glowing lights that seem to travel across its body, you're interrupted by a loud bang. You turn quickly and are immediately blinded by a flash of light; a mechanical whirring can be heard as something moves towards you.
As the bright light fades away, you see two glowing lights moving towards you at high speed. It is the last thing you see before it all goes black.

While the dystopian future described above seems like the work of a hyper-intelligent Artificial Intelligence, it's unlikely that the dystopian scenarios put forward by popular sci-fi tropes will one day become reality. AI today is being developed at an ever-increasing rate and is pervasive, embedded in almost all our technology, and in the years to come it will probably make its way into every facet of our lives. Not so many years ago, AI under development wasn't considered a "threat" to human life, with much of the literature at the time quipping that a worm could be considered more intelligent. In this decade, however, advanced AI is a reality, capable of great feats of computation and showing at least some "intelligence". The literature often cites how IBM's Deep Blue beat chess grandmaster Garry Kasparov, or how Google DeepMind's AlphaGo beat eighteen-time world champion Lee Sedol 4-1 at Go, a game long thought to be the preserve of humans because of the intuition and strategy involved.

As AI develops further, smarter systems could revolutionize our industries and the way we live, but this also raises some concerns. Looking through the lens of a chess grandmaster: as AI continues to improve, does that make you inferior to a hunk of metal? Is AI superior to the average person at any number of tasks and activities? Look beyond what AI can do in games and we see a plethora of ethical dilemmas arising. These concerns don't come just from the AI itself but also from its impact on our lives, society and the economy. It becomes important that we formulate an ethical framework according to which AI is developed and implemented. Several firms and authorities are already developing ethical frameworks for AI, but as AI advances these frameworks will have to keep abreast of new developments.

In this article we take a look at a few scenarios where ethical concerns arise when it comes to AI: autonomous cars, the military, the media and financial services. These are some of the industries that will have to deal with these concerns sooner rather than later.

Autonomous Transportation

AI is an integral part of autonomous transportation. From a car's internal systems to the safety of the driver and their surroundings, AI is embedded in what makes an autonomous vehicle autonomous. The ethical dilemma here is not a new one if you're familiar with the famous "Trolley Problem". For those not in the know, it is a thought experiment: a trolley is barreling down a track that diverges at a certain distance; one branch has five people tied to it, the other just one. You are in control of the switch that diverts the trolley onto either track.

Which track do you choose? 

It's a dilemma that only gets more complicated as you add information to the problem, and it is a very real scenario for automated vehicles. In the event of an accident, the onboard AI would be capable of making decisions either to avert the crash or to minimize the damage. Suppose an autonomous vehicle, for whatever reason, loses control and veers off the road towards a pedestrian; the AI could decide to steer away from the pedestrian and into a wall, which may ensure the pedestrian's safety but not the driver's. Which decision should the AI make? This dilemma requires that the AI implemented in such vehicles operate under some sort of ethical constraints; the challenge is defining the constraints and the scenarios. Who's responsible for the crash? Rather, who is more responsible in the event of a difficult choice? As autonomous vehicles become the norm on the road we need to make an effort to develop AI with these ethical issues in mind. However, this is unlikely, as in the majority of these scenarios the available decisions cannot be objectively categorized as ethical or unethical. It's more probable we'll use autonomous vehicles with a willful ignorance of the ethical issues raised.
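To make the dilemma concrete, below is a deliberately naive sketch (in Python) of a "least expected harm" chooser: each available manoeuvre is scored by the expected harm to the pedestrian and to the occupant, and the option with the lowest score wins. Every probability and weight here is an invented assumption for illustration; real driving stacks do not expose an explicit trolley-problem utility like this, and choosing the weights is precisely the ethical question.

```python
# Hypothetical illustration only: a naive "least expected harm" chooser.
# Real autonomous-driving planners do not expose an explicit trolley-problem
# utility like this; every number and weight below is an assumption.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_pedestrian_injury: float  # assumed probability of injuring the pedestrian
    p_occupant_injury: float    # assumed probability of injuring the occupant

def least_harm(options, pedestrian_weight=1.0, occupant_weight=1.0):
    # The weights are the ethically loaded part: choosing them *is* the dilemma,
    # and there is no neutral setting.
    def expected_harm(m):
        return (pedestrian_weight * m.p_pedestrian_injury
                + occupant_weight * m.p_occupant_injury)
    return min(options, key=expected_harm)

options = [
    Manoeuvre("continue towards pedestrian", 0.9, 0.1),
    Manoeuvre("swerve into the wall", 0.05, 0.7),
]
print(least_harm(options).name)  # the answer flips if you change the weights
```

Shift the weights even slightly and the "ethical" answer flips, which is exactly why these constraints are so hard to define.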

Autonomous Weapons

Prominent scholars consider a weapon autonomous if it is capable of dealing damage without the operation, decision or confirmation of a human supervisor. Autonomous weapons systems are being integrated into militaries across the world, and this development comes with its fair share of controversy, with activists seeking to ban their development outright. As with any argument about morality there are two sides: one for the development of autonomous weapons systems and one against.

The main arguments in favour are that, thanks to their higher reliability and precision, these systems would ensure better compliance with international law and human ethical values, for example less collateral damage to civilians. Autonomous weapons systems also serve to protect a military's soldiers by effectively removing them from harm's way. Those opposed to their development generally argue, for one, that the technology is limited in its ability to operate within ethical norms and legal boundaries. The more interesting side of the argument against development concerns the universal ethical questions that are raised when autonomous weapons are used.

Chief among these are questions such as:

What limits on autonomy should be placed in order to retain human agency and intent in decisions that can lead to the loss of human life and other damages?

If autonomous weapons are implemented, how are human values respected? Since these weapons do not perceive ethical constraints and norms the way a human does, would there be a responsibility gap? Who would take responsibility for a decision that incurs the loss of human life? Thankfully, in most ethical debates it is widely acknowledged that human agency and intent should be retained when it comes to such decisions.

Is human dignity being undermined if autonomous weapons are used?

This argument proposes that it matters in what way a person is killed or injured, not just whether a person is killed or injured. There are several laws employed in times of war to uphold human rights and international law. The arguments here are many, and people often envision a future where human dignity is not respected and autonomous weapons are used indiscriminately, with no thought for the magnitude of force used or the extent of the damage.

What are the impacts of having humans distanced from decisions that may incur the loss of human life and other damages?

The main idea here is that if humans are distanced, physically and psychologically, to the point where the battlefield holds no emotional weight, would decisions that could lead to the loss of life become easier to make, and less controlled?

The questions posed above have serious implications for the future of defense, international law and human rights. Serious thought needs to be put into the decision to develop autonomous weapons, in order to ensure human agency and dignity.

Media and Journalism

From 2016 to 2020, anyone on Twitter or exposed to a news feed has heard the term Fake News. It's entirely coincidental that Donald Trump's presidency coincided with this period. But what does AI have to do with fake news? Well, in March 2018, reports came out that Cambridge Analytica, a data analytics firm, had used data harvested from social media platforms to influence the U.S. elections (okay, so not so coincidental). Many social media platforms like Facebook, Twitter and Instagram use big data analytics to better engage with customers through targeted advertising. This seems innocuous enough, and in fact small businesses and other e-commerce platforms benefit greatly from big data. It is when these advertisements become more akin to misinformation campaigns that fake news can wreak havoc, either by creating a false state of panic, inciting violence or, in this case, influencing the political processes of a country. AI can be painted as the instrument of a villain, but it can also be used to ascertain which media are reliable and which are not.

Another area where AI can be implemented in an unethical way is digital media, most notably the rise of Deepfakes: synthetic media in which a person's likeness is manipulated onto an existing video. Previously, only big studios in the film industry could afford the kind of manipulation you see today, but with the rise of AI and machine learning, Deepfake technology has become very accessible. The AI used in Deepfake tools employs machine learning methodologies such as Generative Adversarial Networks (GANs, more on these in a later blog). This kind of methodology requires only a moderately powerful computer and resources that are easily accessible online, such as Google's TensorFlow, an open-source machine learning platform.
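As a rough, hedged illustration of the adversarial idea behind such tools (not of any actual deepfake pipeline), here is a minimal GAN skeleton using TensorFlow/Keras on toy one-dimensional data: a generator learns to produce fakes from random noise while a discriminator learns to separate fakes from real samples, and each network's progress forces the other to improve. All layer sizes and the stand-in "real" data are assumptions made for the sketch.

```python
# Minimal sketch of the GAN idea behind deepfake tools (toy 1-D data, assumed sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, data_dim = 8, 16   # assumed toy dimensions

# Generator: random noise in, fake sample out.
generator = tf.keras.Sequential([
    layers.Dense(32, activation="relu"),
    layers.Dense(data_dim),
])
# Discriminator: sample in, "realness" score out (logit).
discriminator = tf.keras.Sequential([
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

# Stand-in "real" media: just samples from a fixed distribution.
real_data = np.random.normal(1.0, 0.5, (256, data_dim)).astype("float32")

for step in range(200):
    real = real_data[np.random.randint(0, len(real_data), 32)]
    noise = tf.random.normal((32, latent_dim))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise, training=True)
        real_score = discriminator(real, training=True)
        fake_score = discriminator(fake, training=True)
        # Discriminator: label real "1" and fake "0"; generator: make fakes look like "1".
        d_loss = bce(tf.ones_like(real_score), real_score) + bce(tf.zeros_like(fake_score), fake_score)
        g_loss = bce(tf.ones_like(fake_score), fake_score)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
# After training, generator(noise) produces samples the discriminator struggles to flag.
```

The accessibility the article mentions comes from the fact that a loop like this, scaled up to images and faces, can run on consumer hardware with freely available libraries.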

The ethical issues here are obvious: videos could be manipulated to show events that never occurred. These Deepfakes could be used for financial fraud, revenge porn, celebrity porn and misinformation campaigns. While there is little to no real legislation on the usage of Deepfakes themselves, many countries, such as Indonesia, Russia and Germany, have enacted laws to curb fake news. However, this legislation comes with its own criticisms, especially over its impingement on the freedom of expression.

Financial Services and Autonomous Policing

AI is increasingly being used to automate processes in the financial sector, everything from insurance premiums and loans to claims processing and fraud detection. It has been estimated that automation would save the financial sector around USD 512 billion in 2020. However, with the increased integration of AI and other autonomous technology, many industries face the risk of bias. When a loan application is denied due to a low credit score or some other reason, data is recorded; the algorithm that deems an application high risk, or chooses to deny it, is however opaque. AI algorithms trained to evaluate a loan application or any other financial service typically learn from historical data, so if a financial institution had a history of denying loan applications from minorities, that bias can be propagated further by the algorithm. Which brings us to a question: who should be in charge of developing AI? The government? Corporations? Maybe it should be the common man. As AI continues to be integrated into the financial sector we should be careful to ensure that harmful biases are not propagated on a much larger scale. Financial institutions should develop an ethical framework around which AI algorithms can be modeled.
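A toy sketch of how that propagation happens, using entirely synthetic data and a deliberately simple model (the scenario, groups and numbers are all assumptions for illustration): both groups have the same income distribution, but the historical approval labels carry a penalty against one group, and a classifier trained on that history reproduces the penalty.

```python
# Toy illustration of bias propagation (synthetic data, hypothetical scenario).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority (hypothetical)
income = rng.normal(50, 10, n)           # identical income distribution for both groups

# Historical decisions: purely income-based for group 0, but with an extra
# penalty applied to group 1 -- this is the "biased history" in the data.
approved = ((income - 15 * group) > 45).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

for g in (0, 1):
    mask = group == g
    rate = model.predict(np.column_stack([income[mask], group[mask]])).mean()
    print(f"group {g}: predicted approval rate {rate:.2f}")
# The model "learns" the historical penalty and keeps denying group 1 more often.
```

Nothing in the model is "malicious"; it simply learns the pattern it is shown, which is why auditing the training history matters as much as auditing the algorithm.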

Bias in AI is a fear that is often pointed out. As a gleaming future approaches in which AI helps us achieve utopian ideals, we should be quick to step in and point out that today's society is far from fair and equitable. As AI is integrated into our daily lives, what's to say the same biases that plague us today won't plague the societies of the future?

Look at policing today and the fact that police authorities around the world are adopting AI-driven predictive policing to patrol their jurisdictions more efficiently. There is reasonable doubt among experts and policymakers about whether AI policing is the best path forward, especially in today's political climate. A study released in San Francisco examined PredPol, a crime-mapping program, and its performance in mapping rates of drug use across various areas of Oakland. The program mapped this data using demographic statistics, police reports from Oakland and various historical data. It showed that drug use was mostly spread out across Oakland; however, it also suggested that drug use was concentrated in lower-income localities that were significantly non-white. The study's authors, William Isaac and Kristian Lum, state that the use of predictive policing could create a feedback loop: policing authorities respond to the predicted data and record crimes in those areas, thereby causing the algorithm to map even more crime there. Propagation of this kind of bias is dangerous for a number of reasons. It can heighten tensions between policing authorities and communities, entrench racial biases and lead to the loss of life. The question of who develops AI is one we must pose, repeatedly if required, to ensure that all parts of society are treated fairly.
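As a purely illustrative toy simulation of the feedback loop Lum and Isaac describe (every number here is an assumption): five areas have identical true crime rates, but the area with the most recorded crime receives the most patrols, patrols record a higher share of incidents, and the gap in recorded crime keeps widening.

```python
# Illustrative simulation of a predictive-policing feedback loop (assumed numbers).
import numpy as np

rng = np.random.default_rng(1)
n_areas = 5
true_rate = 10.0                               # identical true incident rate everywhere
recorded = np.array([12., 10., 10., 10., 10.]) # area 0 starts slightly over-recorded

for year in range(10):
    hotspot = int(np.argmax(recorded))          # model sends patrols to the "hottest" area
    record_prob = np.where(np.arange(n_areas) == hotspot, 0.9, 0.3)
    incidents = rng.poisson(true_rate, n_areas) # same true crime everywhere
    recorded += rng.binomial(incidents, record_prob)

print(recorded.astype(int))
# Area 0 keeps "winning" the patrols and accumulates far more recorded crime,
# even though the underlying rates are identical.
```

The "data" ends up confirming the model's prediction not because crime differs between areas, but because measurement followed the prediction.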

This article provides a rather superficial insight into some of the ethical dilemmas we face when using AI technologies. Look deeper into any of the questions posed here and you come across issues that are thought-provoking and more deeply rooted in philosophical pursuits. It's obvious that there is a variety of ethical concerns around developing, implementing and using AI, and some of them require serious discourse if we are to retain some semblance of morality in the future. But as AI and other technologies are developed at an ever-increasing rate, it's likely, from one perspective at least, that the benefits offered by these technologies far outweigh the detriments.

Mark Coeckelbergh

Discover the digipersonality of the week: Mark Coeckelbergh

He’s a Belgian philosopher & author of AI Ethics.

To round off our excursion into the ethics of AI this week, our digi-personality is Mark Coeckelbergh, a philosopher of technology. You might be asking yourself what the philosophy of technology actually covers; a fair question. Coeckelbergh runs a blog in which he writes about the links between philosophy, tech, art and the environment. His scope of research and writing is broad, and remarkably novel at present.

7 Industries being transformed by Spatial Computing

This week, we're focusing on a question that might have a lot of people confused: what is spatial computing? It can be any kind of software or hardware technology that allows us humans, virtual beings or robots to move and interact with the real or virtual world.

So, what kind of technology are we talking about?
The tech ranges from Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), Computer Vision and Sensor Technology to Automated Vehicles. In the last couple of years, we've been introduced to innovations in AR and VR by tech giants such as Google, Microsoft and Samsung.
In gaming, VR is breaking new ground in the player experience with headsets such as Facebook's Oculus Rift. In the automotive industry, we have players like Tesla and Google, who have been trying to develop a fully automated car for several years now.
It's apparent that there is global interest in developing and integrating spatial computing across several markets and industries, and the obvious question becomes: what kind of changes would we see, and where would we see them?

According to experts in these fields, we will most likely see dramatic changes in 7 industries:

Retail

Transportation

Healthcare

Media & Communication

Manufacturing

Banking & Trading

Education

Retail

In retail, the integration of spatial computing is already happening, with major retailers in furniture, fashion, food and more announcing AR- and VR-integrated services. AR technology at in-store locations has the potential to transform the customer experience: point a mobile phone camera at a product, for example, and have all of its information available on screen. In 2020, with the pandemic crisis and restrictions on in-store shopping, VR catalogs provided consumers with a safe alternative from the comfort of their seat or palm. Ikea, for example, has developed a virtual store where customers can view furniture in 3-d and in various settings and backgrounds. It's likely we're going to see more spatial computing integrated into the customer experience as VR and AR continue to provide customers with information in innovative ways.

Healthcare

Spatial computing has great potential to impact virtually everything involved with healthcare, from the waiting-room experience to surgery and surgery preparation. What does this mean for the industry at large? We would see a more efficient healthcare system, improved patient care and more personalized treatment. VR is already being used by surgeons to conduct a range of surgeries, often in combination with specialized equipment; in addition, AR is being used in medical schools to teach anatomy, surgery and other medical courses. We're already seeing spatial computing integrated into psychotherapy, notably in the treatment of PTSD, autism and depression. The healthcare industry is no stranger to adopting innovative technology, and spatial computing is no different.

Banking & Trading

While there is little to no spatial computing currently used in this industry, the possibilities are endless. A major reason for this lag in adoption is that the industry is conservative by nature; another is the numerous regulations concerning trading and customer interfaces. A current example in the banking industry is digital banks, real financial institutions that have no physical branch, a trend most major banks are following as footfall in physical branches declines. Digital banks can serve the same functions online as banking becomes increasingly digitized. VR and AR are currently being tested in virtual trading, 3-d visualization and security. In security, spatial computing is primed to play an important role, with AI and computer vision being used for facial recognition. This is important as facial recognition could reduce time spent on point-of-sale interactions, facilitate virtual banking and provide an extra layer of security.

Transportation

The transportation industry already uses spatial computing to a large extent and was one of the earliest adopters of the technology. However, in the years to come we will most likely see the extensive use of sensor technology such as Lidar, the use of AI in a car's internal and external systems and, eventually, fully automated vehicles on the road. A smart car that embodies the visions of the future we see in popular sci-fi franchises has not yet been realized, but we're making our way there. Tesla and other car manufacturers are developing self-driving cars with sophisticated AI and sensor technologies. What does this mean for our roads and cities? Fully automated vehicles would probably lead to the development of smarter roads and cities, lower transportation costs and increased safety on and off the road.

Media & Communication

The entertainment, gaming and telecommunications industries are all going to be heavily defined by spatial computing. From VR and AR games to films that explore storytelling through the lens of VR, integrating spatial computing probably has the most impact in this industry. VR and AR games are already part of popular culture, most notably Pokemon GO!, a marketing phenomenon that even took the Super Bowl by storm. As spatial computing makes strides in deepening the immersive experience, it's likely we'll see these changes carried over onto our mobile phone and AR experiences. A possible outcome would be AR maps: these maps contain 3-d information and are generally used by autonomous robots and drones today, but the technology may make its way into our daily lives, mapping our surroundings in tandem with whatever technology we're interacting with.

Education

In the education industry, spatial computing offers children the opportunity to learn in a more experiential fashion. With VR and AR classrooms, a whole world is opened up to students. Previously, getting a world-class education required resources some students may not have access to; VR and AR could help bridge that gap. This could be seen during the COVID-19 crisis, when education systems were forced to adopt distance-learning methodologies, implementing virtual classrooms, blackboards and so on, suggesting that VR and AR classrooms are here to stay. Spatial computing can change not only the medium through which we learn but also the way we learn. In STEM courses teachers place an emphasis on spatial awareness, yet concepts relating to spatial awareness can be hard to grasp, especially in 3-d; VR and AR provide a way to visualize objects in 3-d and gain a deeper understanding. Spatial computing is already transforming the education sector, especially in STEM-related courses.

Manufacturing

When we think of robotics and robots we often think of automatons like Transformers or C-3PO; in reality, the robots we use in manufacturing more closely resemble an arm. These robots are capable of complex tasks, can carry heavy loads and are often used in complex production lines, working in tandem with other moving parts. In automobile manufacturing, these robots work at large scale and at high efficiency. It's mind-boggling to think that Ford rolls an F-150 off the line every 53 seconds; that's almost 3 tons of steel, and that's with the technology we use today. Spatial computing could lead to the development of augmented manufacturing, where humans and robots, so-called "cobots" (collaborative robots), interact to complete a task. This could range from manual labour to more specialized tasks requiring supervision; a robot or a person could theoretically supervise a task through spatial computing technology such as AR and VR. Cobots could be especially useful in the production of complicated parts requiring more dexterity or precision. Spatial computing could revolutionize manufacturing on a large scale, leading to higher efficiency and reductions in production time.