4 developments in Robotics you should keep an eye on!

It’s 1961 and George Devol has just received his patent for “Programmed Article Transfer”. It reads as follows: “The present invention relates to the automatic operation of machinery, particularly the handling apparatus, and to automatic control apparatus suited for such machinery.” The Unimate 1900 series, the world’s first industrial robot, was actually born at a cocktail party in 1956, where George Devol had just unveiled his latest invention, the Programmed Article Transfer device. It was at this party that Devol met Joseph Engelberger, later known as the Father of Robotics. “Sounds like a robot to me,” Engelberger exclaimed. In 1957 Engelberger, who at the time was director of Consolidated Controls Corporation (a Condec subsidiary), convinced the CEO of Condec to develop Devol’s device. Two years later the first prototype, Unimate #001, was built. Soon after, the first industrial robot was installed at a General Motors plant in New Jersey. The Unimate 1900 series went on to automate many of the assembly lines of the era, with Japan being one of the earliest adopters of industrial robotics technology.

Today, with more than 373,000 industrial robots in operation and a global market expected to grow to $210 billion by 2025, it’s safe to say that robotics will play an integral role in the next industrial revolution. We use robots more than ever, across many different industries and businesses. From medicine to marketing to warehousing and logistics, robots are quickly becoming part of daily business operations, and in this blog we take a look at developments in the field of robotics in a few of these industries.

Medical Robots

The global market value for medical robots is expected to reach $12.7 billion by 2025, up from $5.9 billion in 2020. The key drivers of growth in this market are robot-assisted rehabilitation, robot-assisted surgery and the increasing adoption of robots across the healthcare industry. In this blog we take a look at robot-assisted surgery, and a major player in this area is Intuitive Surgical, the company behind the da Vinci surgical system. Maybe you remember the “did you know they did surgery on a grape” meme that circulated on Twitter in 2018. If you don’t: in 2018 a video from 2014 was shared widely on many platforms, showing surgeons peeling the skin off a grape and then stitching it back on. The instrument used in that video was the da Vinci surgical system, and the demonstration showcased the robot’s ability to perform minimally invasive surgery with great accuracy and dexterity.

Although the da Vinci surgical system has been around for more than 18 years, Intuitive Surgical has set the standard for robot-assisted surgery and continues to do so. While the da Vinci system has a limited scope of use at the moment, this is likely to change in the coming years as more patients opt for minimally invasive procedures. The shift towards robot-assisted surgery saw a more drastic uptick in 2020 with the COVID-19 pandemic. As hospitals struggled to deal with rising numbers of critical cases, many hospital staffers faced a shortage of Personal Protective Equipment, and many hospitals and clinics cancelled elective surgeries to prevent at-risk patients from catching the virus. Robot-assisted surgery offers a possible workaround for upholding these safety measures. As robots find their place in the surgical theatre, they are also increasingly being used in orthopaedics, prosthetics, psychotherapy and more. From patient experience to surgery itself, robotics will transform the healthcare industry in the space of a few decades.

Collaborative Robots

Collaborative robots, or cobots, are the next evolution in industrial robotics. First invented in 1996 by J. Edward Colgate and Michael Peshkin, the cobot has since developed into an integral part of Industry 4.0. These robots are designed for human-robot interaction within a shared space. How do they differ from the traditional industrial robot you’d find on any assembly line? While industrial robots are programmed to accomplish specific tasks, cobots are designed to interact and collaborate with humans, enhancing human ability. Industrial robots, on the other hand, are not designed to interact with humans and operate in ‘cages’, because they can be a safety hazard: they are capable of moving tons of material, can prove dangerous to humans navigating the assembly line floor, and so are often programmed to stop when a human enters their vicinity. This can lead to performance as well as safety issues. Collaborative robots, by contrast, can monitor their environment and continue to work without a decline in performance or safety. Cobots matter for businesses now more than ever as we continue to innovate into the future: as technologies such as the Internet of Things, 5G and Big Data mature, cobots find their place in industry with great synergy alongside them.

The global market for cobots was valued at $649 million in 2018 and is expected to grow at a rate of 44.5% from 2020 to 2025. The collaborative robot market is obviously still young: the first company to offer collaborative robots was Cobotics, founded by J. Edward Colgate and Michael Peshkin in 1997. The company offered cobots for automobile assembly lines, including models that used ‘hand-assisted control’. Today, Universal Robots has established itself as a market leader in the field, releasing cobots with applications in education, entertainment, manufacturing and research.

Automated Warehouses

In the last 20 years, warehousing and logistics have expanded exponentially. As industries innovate and adapt, so have warehouses and logistics; with the rise of online retail and e-commerce we’ve seen them become increasingly automated. Technologies such as the Internet of Things, Artificial Intelligence and sensor technology continue to develop at a fast rate, and this has supported the growth of the automated warehousing market: valued at $15 billion in 2015, it is expected to be worth $30 billion by 2026. A major reason more and more businesses are turning to automated solutions lies in the benefits of automating their processes. Integrating AI into warehousing gives businesses increased productivity and better accuracy; better sensor technology working in tandem with AI can improve working conditions in a warehouse; and integrating these technologies into warehouse management software can provide better oversight and tracking, ideally acting as the hub of warehouse operations.

What kind of robots would you find in an automated warehouse? Warehousing robots usually fall into three categories: Automated Guided Vehicles (AGVs), Autonomous Mobile Robots and aerial drones.

Ocado Supermarket storage robots by James Vincent for The Verge – 2018

Automated guided vehicles act as carts that carry inventory to different parts of the warehouse; they are usually guided by magnetic strips or move along tracks laid out on the warehouse floor. 2XL, a logistics company based in Belgium, has automated its warehouses with AGVs, replacing workers who previously spent most of their time walking from one part of the warehouse to another. Since the AGVs took over, the company has observed increased warehouse efficiency at lower cost, since an AGV can work through the night and on weekends at the same cost as operating during the day.

Autonomous Mobile Robots serve the same function as AGVs, but they do not need magnetic strips or tracks to navigate the warehouse: they carry the hardware and software to map their environment, plus sensors to navigate around obstacles. These robots are smaller, so they do not carry heavy payloads; instead they are an asset in sorting inventory, as their smaller size lets them navigate tight spaces and identify information on packages.

Aerial drones in warehousing and logistics generally bring to mind aerial delivery. While that may still seem like wishful thinking, drones can already be used to optimize warehouse processes: because they fly, they can move through the warehouse quickly and easily, scan for inventory, and interact with the warehouse management system. While today they require a human operator, the vision is for more automated drone integration.

Telepresence Robots

If any of our readers have binged Modern Family, they probably remember the episode where Phil Dunphy can’t make it to a family gathering and instead chooses to insert himself into the events via an iPad on wheels. Telepresence robots can be summarized as exactly that: operator-controlled robots equipped with a video camera, a microphone, a screen and speakers so that individuals can be ‘present’ in an environment. Telepresence robots have found their way into almost every working environment: schools, clinics, warehouses, corporate offices and more.

With the COVID-19 pandemic, countries went into lockdown, pushing many businesses to adopt work-from-home measures. As the lockdowns cut down on human contact, some sectors suffered more than others, especially the social and healthcare sectors. Many businesses, clinics and offices began looking for ways to bridge this gap by adopting a more immersive telepresence experience. While most of these robots are literally just tablets on wheels running Zoom or Portal, their contribution is not to be taken lightly: they have had a positive effect on patient morale, especially among elderly patients. There are also more sophisticated, humanoid telepresence robots often used in care homes, such as the Pepper robot from SoftBank. As businesses plan for a future with more distanced meetings and conferences, it makes sense to invest in telepresence robotics, giving participants a more physical presence in the meeting and helping establish more concrete relationships over a screen.

Neural Networks and Industry

Neural Networks are algorithms modeled to simulate the human brain. Although they consist of components loosely analogous to our neurons and nerves, it must be stressed that Neural Networks don’t actually work like human brains; they just process information in a similar fashion. Neural Networks are able to learn and model nonlinear and complex relationships, which makes them especially adept at finding patterns and inferring new relationships between various inputs and outputs. Neural Networks are also capable of generalizing: they can make sense of incomplete or ambiguous data, sometimes in combination with techniques such as fuzzy logic.

How do they work?

A Neural Network consists of many simple processing units that operate in parallel and are arranged in tiers (layers). These tiers are connected to each other: each tier processes the inputs it receives from the tier that precedes it and passes its output on to the next tier. Each tier consists of nodes that are interconnected with nodes in the tiers before and after them, and each node applies a simple programmed function to its inputs, along with relationships it has inferred by itself. What makes Neural Networks special is that they are able to learn and adapt quickly. They do so by placing an ‘importance’ weight on the connections between nodes; during training, the inputs that contributed the most to the right output end up with the most weight.
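As a concrete illustration of tiers and weights, here is a minimal sketch of a two-tier network in Python. The layer sizes and weight values are arbitrary choices for the example, not a trained model; training would adjust the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny network with two weighted tiers: 3 inputs -> 4 hidden nodes -> 1 output.
# The weight matrices hold the "importance" each node places on each input;
# here they are just random placeholders.
W1 = rng.normal(size=(3, 4))   # weights between the input tier and the hidden tier
W2 = rng.normal(size=(4, 1))   # weights between the hidden tier and the output tier

def forward(x):
    hidden = sigmoid(x @ W1)       # each hidden node sums its weighted inputs
    output = sigmoid(hidden @ W2)  # the output tier does the same with hidden values
    return output

print(forward(np.array([0.5, -1.0, 2.0])))  # a single value between 0 and 1
```

Each `@` is one tier passing its weighted outputs to the next; a real network would also include a training loop that nudges `W1` and `W2` towards weights that produce the right outputs.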

We use Neural Networks to handle and analyze large amounts of data. In a 2017 Statista report, monthly global data volumes were estimated to reach around 230,000 petabytes (a petabyte is one million gigabytes). With such large amounts of data to be analyzed, many companies have begun to consider using Neural Networks to analyze and handle their data. Neural Networks have been implemented in many industries, ranging from engineering and medicine to business applications in financial prediction and targeted marketing.

Let’s have a look at some industrial applications of Neural Networks.

Neural Networks and Engineering

The engineering sector is perhaps where Neural Networks are most essential. They can be implemented in many engineering fields: flight control, automotive control processes and quality control are just a few. Neural Networks are being adopted in industries looking to automate their processes. Take the drone industry: William Koch, a Boston University computer scientist, has developed a quadcopter with machine learning capabilities. Right now most quadcopters rely on linear controllers and require significant human intervention to maneuver. However, Neuroflight, the drone developed by Koch and a team of collaborators, is able to maneuver through dynamic environments such as wind by using a neural-network-based flight controller.

Many companies are investing heavily in Neural Networks for drone technology, notably General Electric and other industry players. Another company that uses Neural Networks in its drone technology is Aeiou.tech. The company’s Dawn platform helps its drones navigate difficult and dynamic environments, avoid obstacles and much more, and the platform is currently being developed so that the unmanned drones are also capable of inspection duties.

Neural Networks and Medicine

Neural Networks are capable of modeling complex and nonlinear relationships and are adept at looking for patterns, which makes them a revolutionary tool in the field of medical diagnosis. Today Neural Networks are used in medicine to model various parts of the human body and to analyze various scans (CT, X-ray, PET etc.). Because Neural Networks do not require a predefined algorithm to analyze these scans, they are well suited to recognizing patterns in them: a Neural Network learns by example, and as such does not require explicit descriptors of the disease.

Currently Neural Networks are being used to model and diagnose the cardiovascular system. Diagnosis, in this case, can be achieved by having a Neural Network build a model of a person’s cardiovascular system and monitor that model: by comparing it against real-time changes in physiology, a Neural Network could possibly detect arising medical conditions at a much earlier stage. An added advantage of using Neural Networks for diagnosis is that they can provide sensor fusion, i.e. they can combine values from different sensors to arrive at a more complete or accurate description of the model. Neural Networks used today take input from a variety of biomedical sensors.

Neural Networks and Business Applications

As we move towards a digital age where more and more retail activity occurs online, it stands to reason that more companies will need a way to handle large amounts of data in order to better target their customer base. Neural Networks can play a pivotal role in optimizing a company’s marketing strategy.

As online commerce becomes more prevalent, more customer metadata is generated: personal details, shopping patterns and any other relevant information. In 2021 this means analyzing millions of gigabytes of data, and a neural network can model the relationships between these variables far better than traditional computational approaches. Neural Networks are also being used to automate email marketing: from cold sales emails to follow-ups, a neural network can greatly optimize email marketing processes by segmenting customers into various categories. A more lucrative application of Neural Networks is financial forecasting (another blog, another day).

Neural Networks today have huge implications for big data. With increased processing power and an algorithm that learns to see patterns over time, companies can leverage Neural Networks in big data analysis to further optimize their business models. Social media platforms use Neural Networks to filter for fraudulent or criminal activity; the same techniques can be used to segment potential customers more accurately, giving companies the information to model a targeted marketing strategy.

Neural Networks and other deep learning technologies offer us a path towards true AI, and as this field continues to innovate, businesses and other fields of interest should consider integrating AI technologies into their business models.

AI Ethics and the Future

A tumbleweed rolls across a barren landscape.
Abandoned skyscrapers and overgrown weeds dominate the scenery. Throughout this city everything is silent except for an all-encompassing electric hum. As you walk through the ruins of what used to be a mall, you see advertisements for what seems to be a new AI update. “The singularity in your palm”, declares a smiling humanoid; the rest of the poster is torn.

As you make your way out of the mall you see billboards down a stretch of what used to be the highway. The billboards flash periodically, showing an emergency symbol. You decide to walk towards the highway, seeing a light in the far-off distance.
Finally, you reach the outskirts of the city, and that’s when you see it: a large structure in the distance, impossibly large, with what appear to be drones flying around it. The only activity you’ve seen since arriving here.
Something about the structure beckons you towards it, and you comply, striding across the dusty desert. When you reach it after what seems like hours, you find a cold, grey metallic tower reaching into the sky. As you admire the structure and the glowing lights that seem to travel across its body, you’re interrupted by a loud bang. You turn quickly and are immediately blinded by a flash of light; a mechanical whirring can be heard as something moves towards you.
As the bright light fades away, you see two glowing lights moving towards you at high speed. It is the last thing you see before it all goes black.

While the dystopian future described above seems like it was brought about by a hyper-intelligent Artificial Intelligence, it’s unlikely that the dystopian scenarios put forward by popular sci-fi tropes will one day be a reality. AI today is being developed at an ever increasing rate, and it is pervasive: embedded in almost all of our technology, and in years to come it will probably make its way into every facet of our lives. Not so long ago, the AI being developed wasn’t considered a “threat” to human life, with much of the literature noting that, at that point in time, a worm could be considered to have more intelligence. Today, however, advanced AI is a reality, capable of great feats of computing and displaying some “intelligence”. The literature often cites how Deep Blue beat chess grandmaster Garry Kasparov, or how Google’s AlphaGo won 4-1 against eighteen-time champion Lee Sedol at Go, a game thought to be playable only by humans because of the intuition and strategy involved. As AI develops further, smarter AI could revolutionize our industries and the way we live, but this also raises concerns. Looking through the lens of a chess grandmaster: as AI continues to develop, does this mean you’re inferior to a hunk of metal? Is AI superior to the average person at any number of tasks and activities? Looking beyond what AI is capable of in games, we see a plethora of ethical dilemmas arising. These concerns stem not just from the AI itself but also from its impacts on our lives, society and the economy. It becomes important that we formulate an ethical framework according to which AI is developed and implemented. Several firms and authorities are already developing ethical frameworks for AI, but as the technology advances these frameworks will have to keep pace with new developments.

In this article we take a look at a few scenarios where ethical concerns arise when it comes to AI: autonomous cars, the military, the media and financial services. These are some of the industries that will have to deal with these concerns sooner rather than later.

Autonomous Transportation

AI is an integral part of autonomous transportation. From the internal systems of the car to the safety of the driver and their surroundings, AI is embedded into what makes an autonomous vehicle autonomous. The ethical dilemma here is not a new one if you’re familiar with the famous “Trolley Problem”. For those not in the know: the problem is a thought experiment in which a trolley is barreling down a track that diverges some distance ahead; five people are tied to one branch of the track and one person is tied to the other. You are in control of the switch that diverts the trolley onto either branch.

Which track do you choose? 

It’s a dilemma that only gets more complicated as you add more information to the problem, and it is a very real scenario in the case of automated vehicles. In the event of an accident, the onboard AI would be capable of making decisions to either avert the accident or minimize the damage. Take an event where an autonomous vehicle, for whatever reason, loses control and veers off the road towards a pedestrian: the AI could decide to swerve away from the pedestrian and into a wall, which may ensure the pedestrian’s safety but not the driver’s. Which decision should the AI make? This dilemma requires that the AI implemented in such vehicles have some sort of ethical constraints, and the challenge is defining those constraints and the scenarios they apply to. Who’s responsible for the crash? Or rather, who’s more responsible in the event of a difficult choice? As autonomous vehicles become the norm on the road, we need to make an effort to develop AI with these ethical issues in mind. However, this is difficult, as in a majority of these scenarios the available decisions cannot be objectively categorized as ethical or unethical. It’s more probable we’ll use autonomous vehicles with a willful ignorance of the ethical issues raised.

Autonomous weapons

Prominent scholars consider a weapon autonomous if it is capable of dealing damage without the operation, decision or confirmation of a human supervisor. Autonomous weapons systems are being integrated into militaries across the world, and this comes with its fair share of controversy from activists seeking to ban their development. As with any morality argument there are two sides, one for the development of autonomous weapons systems and one against.

The main arguments for the development of autonomous weapons systems are that, owing to their higher reliability and precision, these systems would ensure better compliance with international law and human ethical values, an example being less collateral damage to civilians. Autonomous weapons systems also serve to protect a military’s soldiers by effectively removing them from harm’s way. On the other hand, those opposed to the development of autonomous weapons generally argue, for one, that the technology is limited in its ability to operate within certain ethical norms and legal boundaries. The more interesting argument against development concerns the universal ethical questions that are raised when autonomous weapons are used.

Primarily among these are questions such as:

What limits on autonomy should be placed in order to retain human agency and intent in decisions that can lead to the loss of human life and other damages?

If autonomous weapons are implemented, how are human values respected? Since these weapons do not perceive ethical constraints and norms the way a human does, would there be a responsibility gap? Who would take responsibility for a decision that incurs the loss of human life? Thankfully, in most ethical debates it is widely acknowledged that human agency and intent should be retained when it comes to such decisions.

Is human dignity being undermined if autonomous weapons are used?

This argument proposes that it matters in what way a person is killed or injured, and not just whether a person is killed or injured. There are several laws applied in times of war to uphold human rights and international law. The arguments here are many, and people often envision a future where human dignity is not respected and autonomous weapons are used indiscriminately, with no thought given to the magnitude of force used or the extent of the damage.

What are the impacts of having humans distanced from decisions that may incur the loss of human life and other damages?

The main idea here is that if humans are distanced, physically and psychologically, to the point where the battlefield no longer holds any emotional weight, would making decisions that could lead to the loss of life become easier or less controlled?

The questions posed above have serious implications for the future of defense, international law and human rights. Serious thought needs to be put into the decision to develop autonomous weapons, in order to ensure human agency and dignity.

Media and Journalism

From 2016 to 2020, anyone on Twitter or exposed to a news feed heard the term Fake News. It’s entirely coincidental that Donald Trump’s presidency coincided with this period. But what does AI have to do with fake news? Well, in March 2017, reports came out that Cambridge Analytica, a data analytics firm, had used analytics from various social media platforms to influence the U.S. elections (okay, so not so coincidental). Many social media platforms like Facebook, Twitter and Instagram use big data analytics to better engage customers with targeted advertising. This seems innocuous enough, and in fact small businesses and other e-commerce platforms benefit greatly from big data. It is when these advertisements become more akin to misinformation campaigns that fake news can wreak havoc, whether by creating a false state of panic, inciting violence or, as in this case, influencing the political processes of a country. AI can be painted as the instrument of a villain, but it can also be used to ascertain which media are reliable and which are not.

Another scenario where AI can be used unethically is in digital media, often referred to as the rise of Deepfakes: synthetic media in which a person’s likeness is manipulated onto an existing video. Previously, only big studios in the film industry could afford the kind of manipulation you see today. But with the rise of AI and machine learning, Deepfake technology has become very accessible. The AI used in Deepfake technology employs machine learning methodologies such as Generative Adversarial Networks (GANs, more on this in a later blog), which require only a moderately powerful computer and resources that are easily available online, such as Google’s TensorFlow, an open-source machine learning platform.
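To give a sense of the adversarial setup behind GANs, here is a heavily simplified 1-D sketch in Python: a generator learns to mimic samples from a target distribution while a discriminator learns to tell real from fake. All of the values, sizes and learning rates here are invented for illustration; real Deepfake models are vastly larger and operate on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: the generator is an affine map from noise to a sample,
# and the discriminator is a logistic regression on a single value.
g_w, g_b = 1.0, 0.0   # generator parameters (starts by passing noise through)
d_w, d_b = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal()                        # generator noise input
    real = rng.normal(loc=4.0, scale=0.5)   # a sample of "real" data
    fake = g_w * z + g_b                    # a generated sample

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                    # gradient of cross-entropy w.r.t. the logit
        d_w -= lr * grad * x
        d_b -= lr * grad

    # Generator update: adjust g_w, g_b so the discriminator calls fakes real.
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w                  # chain rule back through the discriminator
    g_w -= lr * grad * z
    g_b -= lr * grad

samples = g_w * rng.normal(size=1000) + g_b
print(round(float(samples.mean()), 2))  # mean of the generated samples after training
```

The two updates pull against each other, which is exactly the adversarial dynamic that, at far larger scale, lets a generator produce convincing faces or voices.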

The ethical issues here are obvious: videos can be manipulated to show events that never occurred, and these Deepfakes can be used for financial fraud, revenge porn, celebrity porn and misinformation campaigns. While there is little to no real legislation on the usage of Deepfakes, many countries, such as Indonesia, Russia and Germany, have enacted laws to curb fake news. However, this legislation comes with its own criticisms, especially concerning its impingement on freedom of expression.

Financial Services and Autonomous Policing

AI is increasingly being used to automate processes in the financial sector, everything from insurance premiums and loans to claims processing and fraud detection. It is estimated that automation saved the financial sector around USD 512 billion in 2020. However, with increased integration of AI and other autonomous technology, many industries face the risk of bias. When a loan application is denied due to a low credit score or some other reason, data is recorded; however, the algorithm that deems a loan application high risk, or chooses to deny it, is opaque. AI algorithms trained to evaluate a loan application or any other financial service often use historical data, so if a financial institution had a history of denying loan applications from minorities, that bias could be further propagated by the algorithm. This brings us to a question: who should be in charge of developing AI? The government? Corporations? Maybe it should be the common man. As AI continues to be integrated into the financial sector, we should be careful to ensure that harmful biases are not propagated on a much larger scale. Financial institutions should develop an ethical framework around which AI algorithms can be modeled.

Bias in AI is a commonly voiced fear. As a gleaming future approaches in which AI helps us achieve utopian ideals, we should be quick to point out that today’s society is far from fair and equitable. As AI is integrated into our daily lives, what’s to say the same biases that plague us today won’t plague the societies of the future?

Consider policing today: police authorities around the world are adopting AI-based predictive policing to police their jurisdictions more efficiently, and there is reasonable doubt among experts and policymakers that this is the best path forward, especially in today’s political climate. In San Francisco, a study was released examining PredPol, a crime-mapping program, and its performance in mapping rates of drug use in various areas of Oakland. The program mapped this data using demographic statistics, police reports from Oakland and various historical data. It showed that drug use was mostly spread out across Oakland; however, it also suggested that drug use was concentrated in lower-income localities that were significantly non-white. The authors of the study, William Isaac and Kristian Lum, state that the use of predictive policing could result in a feedback loop: policing authorities respond to the predicted data and record crimes in those areas, thereby causing the algorithm to map still more crimes there. Propagation of this kind of bias is dangerous for a number of reasons: it can strain tensions between police and communities, propagate racial biases, and lead to the loss of life. The question of who develops AI is one we must pose, repeatedly if required, to ensure that all parts of society are treated fairly.
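The feedback loop Isaac and Lum describe can be illustrated with a toy simulation in Python. This is an illustrative model of the dynamic only, not the PredPol algorithm; the district names, rates and historical counts are invented for the example.

```python
import random

random.seed(7)

# Two districts with the SAME underlying crime rate, but district A
# starts with more historical police records than district B.
true_rate = {"A": 0.1, "B": 0.1}
recorded = {"A": 20, "B": 10}

for day in range(100):
    # The "predictive" model sends patrols to the district with the most
    # recorded crime, which is A, purely because of the historical skew.
    patrolled = max(recorded, key=recorded.get)
    # Crime is only recorded where police patrol, so only the patrolled
    # district's count can grow, reinforcing the original skew.
    if random.random() < true_rate[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # only district A's count can ever grow; B stays at its historical 10
```

Even though both districts are identical, the model keeps directing patrols to district A, and the record gap widens rather than correcting itself; this is the feedback loop in miniature.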

This article provides a rather superficial look at some of the ethical dilemmas we face when using AI technologies. Looking deeper into some of the questions posed here, we come across issues that are thought-provoking and deeply related to philosophical pursuits. It’s clear that there are a variety of ethical concerns in developing, implementing and using AI, and some of them require serious discourse if we are to retain some semblance of morality in the future. But as AI and other technologies develop at an ever increasing rate, it’s likely, on balance, that the benefits offered by these technologies far outweigh the detriments.

Mark Coeckelbergh

Discover the digipersonality of the week: Mark Coeckelbergh

He’s a Belgian philosopher and the author of AI Ethics.

To round off our excursion into the ethics of AI this week, our digi-personality is Mark Coeckelbergh, a philosopher of technology. You might be asking yourself what the philosophy of technology actually covers; a fair question. Coeckelbergh writes a blog exploring the links between philosophy, tech, art and the environment. His scope of research and writing is broad, and remarkably timely at present.

Robert Scoble

Discover the digipersonality of the week: Robert Scoble

He’s written The Infinite Retina, a book highly regarded by the digitech industry.

Scoble has deep ties to innovation hubs such as Silicon Valley and has been at the forefront of tech development for over two decades. As a blogger, he maintains a gateway between famous tech founders and managers, their disruptive firms, and watchful members of society. You can find out more and stay up to date on all things tech and startup over at his blog, ‘Scobleizer’. With consumers’ trust in digital companies remaining fragile and driven largely by personal instinct, voices like Scoble’s act as a helpful medium between the industry and the public.

7 Industries being transformed by Spatial Computing

This week, we’re focusing on a question that might have a lot of people confused: what is spatial computing? Broadly, it is any software or hardware technology that allows humans, virtual beings or robots to move through and interact with the real or a virtual world.

So, what kind of technology are we talking about?
The tech ranges from Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), Computer Vision and sensor technology to automated vehicles. In the last couple of years, we’ve been introduced to innovations in AR and VR by tech giants such as Google, Microsoft and Samsung.
In gaming, VR is breaking new ground in the player experience with headsets such as Facebook’s Oculus Rift. In the automotive industry, players like Tesla and Google have been trying to develop a fully automated car for several years now.
There is clearly global interest in developing and integrating spatial computing across markets and industries, so the obvious question becomes: what kind of changes will we see, and where will we see them?

According to experts in these fields, we will most likely see dramatic changes in 7 industries:

Retail

Transportation

Healthcare

Media & Communication

Manufacturing

Banking & Trading

Education

Retail

In retail, the integration of spatial computing is already happening, with major retailers in furniture, fashion, food and other sectors announcing AR- and VR-integrated services. AR technology at in-store locations has the potential to transform the customer experience; for example, pointing a mobile phone camera at a product could bring up all of its information on screen. In 2020, with the pandemic and restrictions on in-store shopping, VR catalogs offered consumers a safe alternative from the comfort of home. Ikea, for example, has developed a virtual store where customers can view furniture in 3D, in various settings and backgrounds. We are likely to see more spatial computing woven into the customer experience as VR and AR continue to present product information in innovative ways.

Healthcare

Spatial computing has the potential to impact virtually everything in healthcare, from the waiting-room experience to surgical preparation and surgery itself. What does this mean for the industry at large? We would see a more efficient healthcare system, improved patient care and more personalized treatment. Surgeons already use VR to conduct a range of procedures, often in combination with specialized equipment, and medical schools use AR to teach anatomy, surgery and other courses. Spatial computing is also being integrated into psychotherapy, notably in the treatment of PTSD, autism and depression. The healthcare industry is no stranger to adopting innovative technology, and spatial computing is no different.

Banking & Trading

While little to no spatial computing is currently used in this industry, the possibilities are considerable. One major reason for this lag in adoption is that the industry is conservative by nature; another is the many regulations governing trading and customer interfaces. A current step in this direction is the digital bank: a real financial institution with no physical branch, a trend most major banks are following as footfall in physical branches declines. Digital banks can serve the same functions online as banking becomes increasingly digitized. VR and AR are currently being tested for virtual trading, 3D visualization and security. In security, spatial computing is primed to play an important role, with AI and VR being used for facial recognition. This matters because facial recognition could reduce the time spent on point-of-sale interactions, facilitate virtual banking and add a layer of security.

Transportation

The transportation industry already uses spatial computing extensively and was one of the earliest adopters of the technology. In the years to come, however, we will most likely see far wider use of sensor technologies such as lidar, the use of AI in a car’s internal and external systems and, eventually, fully automated vehicles on the road. The smart car that embodies the visions of popular sci-fi franchises has not yet been realized, but we are making our way there. Tesla and other car manufacturers are developing self-driving cars with sophisticated AI and sensor technologies. What does this mean for our roads and cities? Fully automated vehicles would probably lead to smarter roads and cities, lower transportation costs and increased safety on and off the road.

Media & Communication

The entertainment, gaming and telecommunications industries are all going to be heavily shaped by spatial computing. From VR and AR games to films that explore storytelling through the lens of VR, spatial computing arguably has more impact here than in any other industry. VR and AR games are already part of popular culture, most notably Pokémon GO, a marketing phenomenon that even took the Super Bowl by storm. As spatial computing makes strides in deepening the immersive experience, we are likely to see those advances carried over to our mobile phone and AR experiences. One possible outcome is AR maps: maps containing 3D information, currently used mainly by autonomous robots and drones, that may make their way into our daily lives to map our surroundings in tandem with whatever technology we are interacting with.

Education

In the education industry, spatial computing offers children the opportunity to learn in a more experiential fashion. With VR and AR classrooms, a whole world opens up to students. A world-class education used to require resources that some students simply did not have access to, and VR and AR could help bridge that gap. The COVID-19 crisis made this plain: education systems were forced to adopt distance-learning methods, implementing virtual classrooms, blackboards and so on, suggesting that VR and AR classrooms are here to stay. Spatial computing can change not only the medium through which we learn but also the way we learn. STEM teachers place great emphasis on spatial awareness, yet spatial concepts are often hard to grasp, especially in three dimensions; VR and AR offer a way to visualize objects in 3D and reach a deeper understanding. Spatial computing is already transforming the education sector, especially in STEM-related courses.

Manufacturing

When we think of robotics and robots, we often picture automatons like the Transformers or C-3PO; in reality, the robots we use in manufacturing more closely resemble an arm. These robots are capable of complex tasks, can carry heavy loads and are often deployed in complex production lines, working in tandem with other moving parts. In automobile manufacturing, these robots work at large scale and high efficiency. It is mind-boggling to think that a Ford F-150, almost three tons of steel, rolls off the line every 53 seconds, and that is with the technology we use today. Spatial computing could lead to augmented manufacturing, in which humans and collaborative robots, or “cobots”, interact to complete a task. This could range from manual labor to more specialized tasks requiring supervision; a robot or a person could, in theory, supervise a task through spatial computing technology such as AR and VR. Cobots could be especially useful in producing complicated parts that demand extra dexterity or precision. Spatial computing could revolutionize manufacturing at scale, raising efficiency and cutting production time.
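To put that 53-second cycle time in perspective, a bit of quick arithmetic helps. The cycle time comes from the text above; the eight-hour shift length is an assumption for illustration.

```python
# One truck comes down the line every 53 seconds (cycle time from the text).
CYCLE_SECONDS = 53
SHIFT_HOURS = 8  # assumed shift length, for illustration only

trucks_per_hour = 3600 / CYCLE_SECONDS
trucks_per_shift = trucks_per_hour * SHIFT_HOURS

print(f"~{trucks_per_hour:.0f} trucks per hour")        # ~68
print(f"~{trucks_per_shift:.0f} trucks per 8h shift")   # ~543
```

Roughly 68 trucks an hour from a single line, which is why even small gains in cycle time from cobots or spatial-computing-assisted supervision would compound into significant output.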