Neural Networks are algorithms loosely modeled on the human brain. Although they consist of components analogous to our neurons and nerves, it must be stressed that Neural Networks don’t actually work like human brains; they simply process information in a similar fashion. Neural Networks can learn and model nonlinear, complex relationships, which makes them especially adept at finding patterns and inferring new relationships between various inputs and outputs. They are also capable of generalizing: because they learn from examples, they can make sense of incomplete or ambiguous data.
How do they work?
A Neural Network consists of several processors that operate in parallel and are arranged in tiers. Each tier receives input from the tier that precedes it and passes its output on to the next tier. Every tier consists of nodes that are interconnected with nodes in the tiers before and after it. Each node has a programmed function and accompanying rules, as well as relationships it has inferred on its own. What makes Neural Networks so special is their ability to learn and adapt quickly; they do so by assigning an ‘importance’ weight to the inputs received by each node. The inputs with the greatest weight are the ones that contributed most to producing the right output.
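The tiered, weighted arrangement described above can be sketched in a few lines of Python. This is a minimal illustration only; the layer sizes, the random weights and the ReLU activation are arbitrary choices for the sketch, not a recipe:

```python
import numpy as np

def relu(x):
    # Activation function: a node "fires" only for positive weighted sums
    return np.maximum(0, x)

# A tiny two-tier network: 3 inputs -> 4 hidden nodes -> 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # 'importance' weights between the input and hidden tiers
W2 = rng.normal(size=(4, 1))   # weights between the hidden and output tiers

def forward(x):
    hidden = relu(x @ W1)      # each hidden node sums its weighted inputs
    return hidden @ W2         # the output tier combines the hidden activations

x = np.array([1.0, 0.5, -0.2])
print(forward(x))              # a single output value
```

Learning, in this picture, is just the process of nudging `W1` and `W2` so that inputs which lead to the right answer end up with larger weights.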
We use Neural Networks to handle and analyze large amounts of data. In a 2017 Statista report, monthly global data volumes were estimated to reach around 230,000 petabytes (a petabyte is one million gigabytes). With such large amounts of data to be analyzed, many companies have begun to consider using Neural Networks to analyze and handle their data. Neural Networks have been implemented in many industries, ranging from engineering and medicine to business applications in financial prediction and targeted marketing.
Let’s have a look at some industrial applications of Neural Networks.
Neural Networks and Engineering
The engineering sector is perhaps where Neural Networks are most essential. They can be implemented in many engineering fields; flight control, automotive process control and quality control are just some of them. Neural Networks are being adopted by industries looking to automate their processes. Take the drone industry: William Koch, a Boston University computer scientist, has developed a quadcopter with machine learning capabilities. As of right now, most quadcopters are controlled through linear controllers and require significant human intervention to maneuver. However, Neuroflight, the drone developed by Koch and a team of collaborators, is able to maneuver through dynamic environments such as wind by using a machine learning neural network.
Many companies, notably General Electric and other industry players, are investing heavily in Neural Networks for drone technology. Another company that uses Neural Networks in its drone technology is Aeiou.tech. The company’s Dawn platform helps its drones navigate difficult, dynamic environments, avoid obstacles and much more; the platform is currently being developed so that its unmanned drones are also capable of inspection duties.
Neural Networks and Medicine
Because Neural Networks can model complex, nonlinear relationships and are adept at finding patterns, they are a revolutionary tool in the field of medical diagnosis. Today, Neural Networks are used in medicine to model various human body systems and to analyze various scans (CT, X-ray, PET etc.). They do not require a predefined algorithm to analyze these scans, which makes them well suited to recognizing patterns in them. A Neural Network learns by example, so it does not require explicit descriptors of a disease.
Currently, Neural Networks are being used to model and diagnose the cardiovascular system. Diagnosis, in this case, can be achieved by having a Neural Network build a model of a person’s cardiovascular system and monitor it; by comparing real-time changes in physiology against the model, a Neural Network could detect arising medical conditions at a much earlier stage. An added advantage of using Neural Networks for diagnosis is that they can provide sensor fusion, i.e. they can combine values from different sensors to obtain a more complete or accurate picture. Neural Networks in use today take input from a variety of biomedical sensors.
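Sensor fusion itself is older than Neural Networks; the classic version is an inverse-variance weighted average, which gives a feel for why combining sensors beats relying on any single one. The heart-rate readings and noise levels below are made-up values for illustration:

```python
# Two sensors measure the same quantity (e.g. heart rate) with different noise
reading_ecg, var_ecg = 72.0, 1.0      # precise sensor (low variance)
reading_ppg, var_ppg = 75.0, 9.0      # noisier optical sensor

# Inverse-variance weighting: trust each sensor in proportion to its precision
w_ecg = (1 / var_ecg) / (1 / var_ecg + 1 / var_ppg)
w_ppg = 1 - w_ecg

fused = w_ecg * reading_ecg + w_ppg * reading_ppg
fused_var = 1 / (1 / var_ecg + 1 / var_ppg)

print(fused)      # 72.3 -- pulled mostly toward the precise sensor
print(fused_var)  # 0.9  -- lower variance than either sensor alone
```

A Neural Network doing sensor fusion learns weights like these implicitly, and can additionally capture nonlinear interactions between sensors that a fixed weighted average cannot.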
Neural Networks and Business Applications
As we move towards a digital age where more and more retail activity occurs online, it stands to reason that more companies will need a way to handle large amounts of data in order to better target their customer base. Neural Networks can play a pivotal role in optimizing a company’s marketing strategy.
As online commerce becomes more prevalent, more customer metadata becomes available; this includes personal details, shopping patterns and any other relevant information. In 2021 this means analyzing millions of gigabytes of data, and a neural network can analyze and model relationships between variables that traditional computational approaches would miss. Neural Networks are also being used to automate email marketing. From cold sales mails to follow-up mails, a neural network can greatly optimize email marketing processes by segmenting customers into various categories. A more lucrative application of Neural Networks is financial forecasting (another blog, another day).
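Customer segmentation is often done with simple clustering before any Neural Network enters the picture. A toy k-means sketch shows the idea; the two features (normalized monthly spend and site visits) and every number here are invented for illustration:

```python
import numpy as np

# Each row is a customer: [monthly spend (normalized), site visits (normalized)]
customers = np.array([
    [0.1, 0.2], [0.2, 0.1], [0.15, 0.15],   # low-engagement group
    [0.9, 0.8], [0.8, 0.9], [0.85, 0.95],   # high-engagement group
])

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assign each customer to the nearest segment center
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned customers
        centers = np.array([points[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, centers

labels, centers = kmeans(customers, centers=np.array([[0.0, 0.0], [1.0, 1.0]]))
print(labels)   # [0 0 0 1 1 1] -- two clean customer segments
```

Each segment can then be targeted with its own campaign; a neural network extends the same idea to many more features and to segments that aren’t linearly separated.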
Neural Networks today have huge implications for big data. With increased processing power and algorithms that learn to see patterns over time, companies can leverage Neural Networks in big data analysis to further optimize their business models. Social media platforms use Neural Networks to filter for fraudulent or criminal activity on their platforms, and the same techniques can segment potential customers more accurately, giving companies the information to build a targeted marketing strategy.
Neural Networks and other Deep Learning technologies offer us a path to true AI, and as this field continues to innovate, businesses and other fields of interest should consider integrating AI technologies into their business models.
A tumbleweed rolls across a barren landscape. Abandoned skyscrapers and overgrown weeds dominate the view. Throughout this city everything is silent except for an all-encompassing electric hum. As you walk through the ruins of what used to be a mall, you see advertisements for what seems to be a new AI update. “The singularity in your palm,” declares a smiling humanoid; the rest of the poster is torn. As you make your way out of the mall you see billboards down a stretch of what used to be the highway, flashing periodically with an emergency symbol. You decide to walk towards the highway, seeing a light in the far-off distance. Finally, you reach the outskirts of the city, and that’s when you see it: a structure in the distance, impossibly large, with drones flying around it. It is the only activity you’ve seen since arriving here. Something about the structure beckons you towards it, and you comply, striding across the dusty desert. When you reach it after what seems like hours, you find a cold, grey metallic tower reaching into the sky. As you admire the structure and the glowing lights that travel across its body, you’re interrupted by a loud bang. You turn quickly and are immediately blinded by a flash of light; a mechanical whirring can be heard as something moves towards you. As the bright light fades, you see two glowing lights approaching at high speed. It is the last thing you see before it all goes black.
While the dystopian future described above seems like the work of a hyper-intelligent Artificial Intelligence, it’s unlikely that the scenarios put forward by popular sci-fi tropes will ever become reality. AI today is being developed at an ever-increasing rate. It is pervasive, embedded in almost all our technology, and in years to come it will probably make its way into every facet of our lives. A couple of years ago, the AI being developed wasn’t considered a “threat” to human life, with much of the literature quipping that, at that point in time, a worm could be considered more intelligent. In this decade, however, advanced AI is a reality, capable of great feats of computing and showing some semblance of “intelligence”. The literature often cites how IBM’s Deep Blue beat chess grandmaster Garry Kasparov, or how Google’s AlphaGo won 4-1 against eighteen-time champion Lee Sedol at Go, a game thought to be playable only by humans because of the intuition and strategy involved. As AI develops further, smarter systems could revolutionize our industries and the way we live, but this also raises concerns. Looking through the lens of a chess grandmaster: as AI continues to develop, does this mean you’re inferior to a hunk of metal? Is AI superior to the average person at any number of tasks and activities? Look beyond what AI is capable of in games and we see a plethora of ethical dilemmas arising. These concerns stem not just from the AI itself but also from its impacts on our lives, society and the economy. It becomes important that we formulate an ethical framework according to which AI is developed and implemented. Several firms and authorities are already developing ethical frameworks for AI, but as the technology advances these frameworks will have to keep abreast of new developments.
In this article we take a look at a few scenarios where ethical concerns arise with AI. Autonomous cars, the military, media and financial services are some of the areas where these dilemmas surface, and they are industries that will have to deal with these concerns sooner rather than later.
Autonomous Transportation
AI is an integral part of autonomous transportation. From the internal systems of the car to the safety of the driver and their surroundings, AI is embedded in what makes an autonomous vehicle autonomous. The ethical dilemma here is not a new one if you’re familiar with the famous “Trolley Problem”. For those not in the know: the problem is a thought experiment in which a trolley is barreling down a track that, at a certain distance, diverges; five people are tied to one branch of the track and a single person to the other. You are in control of the switch that diverts the trolley onto either track.
Which track do you choose?
It’s a dilemma that only gets more complicated as you add information to the problem, and it is a very real scenario for automated vehicles. In the event of an impending accident, the onboard AI would be capable of making decisions to either avert the accident or minimize the damage. Take an event where an autonomous vehicle, for whatever reason, loses control and veers off the road towards a pedestrian: the AI could decide to steer away from the pedestrian and into a wall, ensuring the pedestrian’s safety but not the driver’s. Which decision should the AI make? This dilemma requires that the AI implemented in such vehicles have some sort of ethical constraints. The challenge is defining the constraints and the scenarios. Who’s responsible for the crash? Rather, who’s more responsible in the event of a difficult choice? As autonomous vehicles become the norm on the road, we need to make an effort to develop AI with these ethical issues in mind. However, this is unlikely, as in a majority of these scenarios the available decisions cannot be objectively categorized as ethical or unethical. It’s more probable we’ll use autonomous vehicles with a willful ignorance of the ethical issues raised.
Autonomous weapons
Prominent scholars consider a weapon autonomous if it is capable of dealing damage without the operation, decision or confirmation of a human supervisor. Autonomous weapons systems are being integrated into militaries across the world, and this development comes with its fair share of controversy, with activists seeking to ban their development. As with any morality argument there are two sides: one for the development of autonomous weapons systems and one against.
The main arguments for developing autonomous weapons systems are that, due to their higher reliability and precision, these systems would ensure better compliance with international law and human ethical values, an example being less collateral damage to civilians. Autonomous weapons systems also serve to protect a military’s soldiers by effectively removing them from harm’s way. On the other hand, those opposed to their development generally argue, for one, that the technology is limited in its ability to operate within certain ethical norms and legal boundaries. The more interesting argument against development concerns the universal ethical questions that are raised when autonomous weapons are used.
Primarily among these are questions such as:
What limits on autonomy should be placed in order to retain human agency and intent in decisions that can lead to the loss of human life and other damages?
If autonomous weapons are deployed, how are human values respected? Since these weapons do not perceive ethical constraints and norms the way a human does, would there be a responsibility gap? Who takes responsibility for a decision that incurs the loss of human life? Thankfully, in most ethical debates it is widely acknowledged that human agency and intent should be retained in such decisions.
Is human dignity being undermined if autonomous weapons are used?
This argument proposes that it matters how a person is killed or injured, not just whether they are. There are several laws in force in times of war to protect human rights and uphold international law. The arguments here are many, and people often envision a future where human dignity is not respected and autonomous weapons are used indiscriminately, with no thought for the magnitude of force used or the extent of the damage.
What are the impacts of having humans distanced from decisions that may incur the loss of human life and other damages?
The main idea here is that if humans are distanced, physically and psychologically, to such an extent that the battlefield no longer holds emotional weight, would decisions that could lead to the loss of life become easier, and less controlled?
The questions posed above have serious implications for the future of defense, international law and human rights. Serious thought needs to be put into the decision to develop autonomous weapons, in order to ensure human agency and dignity.
Media and Journalism
From 2016 to 2020, anyone on Twitter or exposed to a news feed heard the term Fake News. It’s entirely coincidental that Donald Trump’s presidency coincided with this period. But what does AI have to do with fake news? In March 2017, reports emerged that Cambridge Analytica, a data analytics firm, had used analytics from various social media platforms to influence the U.S. elections (okay, so not so coincidental). Many social media platforms like Facebook, Twitter and Instagram use big data analytics to better engage customers with targeted advertising. This seems innocuous enough, and in fact small businesses and other e-commerce platforms benefit greatly from big data. But when these advertisements become more akin to misinformation campaigns, fake news can wreak havoc by creating a false state of panic, inciting violence or, in this case, influencing the political processes of a country. AI can be painted as the instrument of a villain, but it can also be used to ascertain which media are reliable and which are not.
Another scenario where AI can be used unethically is digital media, often referred to as the rise of Deepfakes: synthetic media in which a person’s likeness is manipulated onto an existing video. Previously, only big film studios could afford the kind of manipulation you see today, but with the rise of AI and machine learning, Deepfake technology has become very accessible. The AI used in Deepfake technology employs machine learning methodologies such as Generative Adversarial Networks (GANs, more on these in a later blog); this particular methodology requires only a moderately powerful computer and resources that are freely available online, such as Google’s TensorFlow, an open-source machine learning platform.
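The adversarial idea behind GANs fits in a surprisingly small sketch: a generator tries to produce samples that look like real data, while a discriminator tries to tell real from fake, and each learns from the other. The toy below is a deliberately simplified one-dimensional version (linear generator, logistic discriminator, hand-derived gradients); real Deepfake models use deep networks, but the game is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1 / (1 + np.exp(-v))

# "Real" data: samples from N(4, 0.5). The generator starts producing N(0, 1)
# and must learn, through the adversarial game, to mimic the real distribution.
wd, bd = 0.1, 0.0      # discriminator: D(x) = sigmoid(wd*x + bd)
wg, bg = 1.0, 0.0      # generator:     G(z) = wg*z + bg
lr, batch = 0.02, 64

for _ in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    fake = wg * rng.normal(size=batch) + bg

    # Discriminator ascends log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(wd * real + bd), sigmoid(wd * fake + bd)
    wd += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    bd += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends log D(fake) (the non-saturating objective)
    z = rng.normal(size=batch)
    fake = wg * z + bg
    d_fake = sigmoid(wd * fake + bd)
    grad = (1 - d_fake) * wd        # d/dx of log D(x), chained through G
    wg += lr * np.mean(grad * z)
    bg += lr * np.mean(grad)

fakes = wg * rng.normal(size=1000) + bg
print(np.mean(fakes))  # drifts toward the real mean of 4
```

Swap the scalars for convolutional networks over images and this same two-player loop is what makes face-swapping feasible on a moderately powerful computer.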
The ethical issues here are obvious: videos could be manipulated to show events that never occurred. Deepfakes could be used for financial fraud, revenge porn, fake celebrity porn and misinformation campaigns. While there is little to no real legislation on the usage of Deepfakes, many countries, such as Indonesia, Russia and Germany, have enacted laws to curb fake news. This legislation, however, comes with its own criticisms, especially regarding its impingement on the freedom of expression.
Financial Services and Autonomous Policing
AI is increasingly being used to automate processes in the financial sector, everything from insurance premiums and loans to claims processing and fraud detection. It’s estimated that automation saved the financial sector around USD 512 billion in 2020. However, with increased integration of AI and other autonomous technology, many industries face the risk of bias. When a loan application is denied due to a low credit score or some other reason, that decision is recorded, yet the algorithm that deems an application high-risk or denies it outright is opaque. AI algorithms trained to evaluate loan applications, or any financial service, typically learn from historical data. If a financial institution had a history of denying loan applications from minorities, that bias could be propagated further by the algorithm. This brings us to a question: who should be in charge of developing AI? The government? Corporations? Maybe the common man. As AI continues to be integrated into the financial sector, we should take care that harmful biases are not propagated on a much larger scale. Financial institutions should develop an ethical framework around which AI algorithms can be modeled.
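To see how historical bias leaks into a model, consider a deliberately contrived sketch: a plain logistic regression trained on past lending decisions that denied one group regardless of income. All of the data, features and thresholds below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: features are [income (0-1), minority-group flag (0/1)].
# Past officers approved on income alone for group 0, but denied group 1 outright.
n = 400
income = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)
approved = ((income > 0.5) & (group == 0)).astype(float)

X = np.column_stack([income, group, np.ones(n)])  # last column = bias term
w = np.zeros(3)

# Plain logistic regression fitted by gradient descent on the biased history
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n

def decide(inc, grp):
    return 1 / (1 + np.exp(-(w[0] * inc + w[1] * grp + w[2]))) > 0.5

print(decide(0.9, 0))  # high-income applicant, group 0: approved
print(decide(0.9, 1))  # identical income, group 1: denied -- the bias is learned
```

Nothing in the training code mentions discrimination; the model simply reproduces the pattern in its data, which is exactly why opaque models trained on historical decisions deserve scrutiny.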
Bias in AI is a commonly voiced fear. As a gleaming future approaches where AI helps us achieve utopian ideals, we should be quick to point out that today’s society is far from fair and equitable. As AI is integrated into our daily lives, what’s to say the same biases that plague us today won’t plague the societies of the future?
Police authorities around the world are adopting AI-driven predictive policing to police their jurisdictions more efficiently, yet there is reasonable doubt among experts and policymakers that AI policing is the best path forward, especially in today’s political climate. A study examined PredPol, a crime-mapping program, and its performance in mapping rates of drug use in various areas of Oakland. The program mapped this data using demographic statistics, police reports from Oakland and various historical data. It showed that drug use was mostly spread evenly across Oakland; however, it also suggested that drug use was concentrated in lower-income localities that were significantly non-white. The authors of the study, William Isaac and Kristian Lum, state that the use of predictive policing could create a feedback loop: policing authorities respond to the predicted data and record crimes in those areas, thereby causing the algorithm to map still more crime there. Propagation of this kind of bias is dangerous for a number of reasons: it can strain relations between policing authorities and communities, entrench racial biases, and lead to the loss of life. The question of who develops AI is one we must pose, repeatedly if required, to ensure that all parts of society are treated fairly.
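The feedback loop Lum and Isaac describe can be demonstrated with a toy simulation. The districts, rates and patrol rule below are all invented; the point is only the mechanism:

```python
# Two districts with IDENTICAL true crime rates. Each day the department sends
# its patrol to whichever district has the most recorded crime so far, and only
# a patrolled district generates new records. A tiny initial disparity snowballs.
true_rate = [5, 5]          # actual incidents per day, equal in both districts
recorded = [11, 10]         # historical records: district 0 is slightly ahead

for day in range(100):
    hotspot = 0 if recorded[0] >= recorded[1] else 1  # the "predicted" hotspot
    recorded[hotspot] += true_rate[hotspot]           # only patrols create records

share = recorded[0] / sum(recorded)
print(f"district 0 share of recorded crime: {share:.2f}")  # 0.98
```

After 100 days the data "shows" that nearly all crime happens in district 0, even though both districts were identical by construction; the prediction manufactured its own evidence.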
This article provides a rather superficial look at some of the ethical dilemmas we face when using AI technologies. Look deeper into any of the questions posed here and you come across issues that are thought-provoking and deeply tied to philosophical pursuits. There is clearly a variety of ethical concerns around developing, implementing and using AI, and some of them require serious discourse if we are to retain some semblance of morality in the future. But as AI and other technologies develop at an ever-increasing rate, it seems likely, from one perspective at least, that the benefits these technologies offer far outweigh the detriments.