COVID-19

By Oriana Medlicott

…it is evident that video conferencing, digital contact tracing and facial recognition have a role to play in our current normal and in the normal of tomorrow. These technologies all came with evident ethical concerns before COVID-19, and this crisis has highlighted some of their clear faults…

Right now, the news and our social media timelines are flooded with information and statistics surrounding COVID-19. Naturally, it is an extremely worrying time for everyone and we are all feeling the impact. One of the main questions that keeps arising during this uncertain period is: what will life be like after COVID-19? Will we be able to get back to what we knew as normal, or will there be a shift in society?

The economic impact is set to be huge. Many businesses are already experiencing difficulties and have been forced to close temporarily, whereas some have been able to adapt quickly by moving their entire workforce to remote working. One industry that has evidently benefited is Tech. Apps such as Zoom, Uber Eats and Houseparty, to name a few, have seen a giant spike in users. If the world did not already rely massively on technology, it most certainly does now, and this reliance will continue post-COVID-19. As we move into a potential era of expedited dependence on technology, what will this mean for the ethical concerns we already had prior to COVID-19, and has this unprecedented time amplified them?

This article analyses three technologies that have come to the fore in this global pandemic: facial recognition, video conferencing and digital contact tracing (DCT). The expedited adoption of these three has highlighted key ethical concerns that need to be addressed to enable their beneficial use in the future.

Facial Recognition 

Over the past few years we have seen a huge increase worldwide in the use of facial recognition, which has drawn scrutiny over ethical consequences such as bias, where the lack of diversity among those programming AI is a recurring issue. Algorithms can only be as unbiased as the people who create them, and there have been many examples of Black people being wrongly identified. Clearview AI has been at the forefront of controversy over the ethical breaches involved in the use of its facial recognition technology. According to Kashmir Hill's reporting, and largely without public knowledge, over 600 law enforcement agencies across the US have started using Clearview AI's tool, which is used to identify suspects in cases of murder, theft, fraud and more.

Clearview poses a risk because sensitive data is uploaded to a server with no evidence of adequate data protection; this in turn led to its entire client database being hacked. Similar to the Cambridge Analytica data scandal, it was found that Clearview AI had scraped over 3 billion facial images from social media sites such as Facebook, YouTube and Twitter, again violating core ethical principles such as consent and privacy. In response to the backlash and outrage surrounding multiple data scandals, the European Union announced in January that it was considering a potential five-year ban on facial recognition while it worked to deliver tighter policies on its ethical use. San Francisco went as far as banning the use of facial recognition by city agencies due to privacy concerns.

As the COVID-19 crisis intensified worldwide, many governments were quick to implement facial recognition to track and monitor their citizens during lockdowns. Russia deployed widespread facial recognition surveillance to enforce strict lockdown policies in response to the spread of COVID-19. China used facial recognition alongside temperature monitoring on public transport, alerting authorities to people suspected of being infected. There has been a clear surge in the use of this technology to help fight the spread of the virus. In a post-pandemic world, facial recognition may be rolled out at an even more rapid rate in the name of safety. However, it is important not to let our concern for health and security override the imperative need for ethical safeguards.

Digital Contact Tracing 

Arguably, Tech is a huge component holding us together during this crisis as we all quickly transitioned to the 'new normal'. At the beginning of April, the World Health Organisation announced an overwhelming need for, and support of, digital technology, whilst tech companies raced to build health technology solutions to fight the virus. Digital contact tracing (DCT) has been the most prominent tool, one that tech giants such as Apple and Google will make available to health authorities worldwide. There are two types of DCT: centralised and decentralised. The tech giants are pursuing the decentralised model, whereas the UK government has opted for the centralised one, which has caused evident privacy concerns. The key difference between the two is that the centralised model uploads contact data to a single server, whereas the decentralised model allows for more user autonomy by keeping that data on the personal device. It is argued that the decentralised model offers stronger data protection, making it harder for hackers to infiltrate our personal data, because there is no single central database to breach.
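
To make that distinction concrete, the sketch below is a minimal illustration of where contact data lives under each model. It is not the Apple/Google Exposure Notification API or the NHS app's actual design; the class names, token sizes and matching logic are illustrative assumptions only.

```python
# Minimal illustrative sketch of decentralised vs centralised contact tracing.
# Not any real app's implementation; names and parameters are assumptions.
import secrets


class DecentralisedDevice:
    """Data stays on the handset; only anonymous tokens of confirmed cases are downloaded."""

    def __init__(self):
        self.my_tokens = [secrets.token_hex(16) for _ in range(14)]  # rotating daily IDs
        self.observed_tokens = set()  # tokens heard from nearby phones, kept locally

    def record_encounter(self, other_token: str):
        self.observed_tokens.add(other_token)  # never uploaded anywhere

    def check_exposure(self, published_infected_tokens: set) -> bool:
        # Matching happens on the device, so no server learns who met whom.
        return bool(self.observed_tokens & published_infected_tokens)


class CentralisedServer:
    """The health authority's server holds the contact graph and does the matching."""

    def __init__(self):
        self.contact_log = {}  # user_id -> set of user_ids they encountered

    def upload_contacts(self, user_id: str, contacts: set):
        self.contact_log.setdefault(user_id, set()).update(contacts)

    def notify_exposed(self, infected_user_id: str) -> set:
        # The server can reconstruct who was near whom - the privacy concern at issue.
        return self.contact_log.get(infected_user_id, set())


if __name__ == "__main__":
    # Decentralised: Alice's phone matches locally against published tokens.
    alice, bob = DecentralisedDevice(), DecentralisedDevice()
    alice.record_encounter(bob.my_tokens[0])
    print("Alice exposed?", alice.check_exposure(set(bob.my_tokens)))

    # Centralised: the server itself knows Alice met Bob.
    server = CentralisedServer()
    server.upload_contacts("alice", {"bob"})
    print("Exposed contacts of alice:", server.notify_exposed("alice"))
```

The contrast is the point: in the decentralised version the server only ever publishes anonymous tokens of confirmed cases, whereas in the centralised version the server can reconstruct the social graph, which is exactly the privacy worry discussed below.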

Rightly so, DCT has led to many debates among ethicists regarding data safety, and after so many data scandals we surely have every right to contest it. Is our trust in this technology blinded by our urgent need, consequently leading us into further privacy breaches with our data? We must ensure users have autonomy over their data: applications need clear guidelines on privacy and consent, guaranteeing that people know where their data is being stored and can have it erased once consent is no longer given. Another concern is that consent may simply be assumed when a person travels abroad, leading to personal data being used by different governing states and companies.

Video Conferencing 

As the threat of COVID-19 spread, companies started, where possible, encouraging employees to work remotely from home. Businesses had no choice but to trust and rely on technology to keep their teams connected and their operations running smoothly. Family and friends have been able to stay connected via 'the weekly Zoom quiz', new apps such as Houseparty allow users to play games together, and the UK government even hosted its cabinet meetings via Zoom. This is part of the 'new digital normal', where, in the name of health, human interaction happens via a screen.

As quickly as Zoom gained popularity during lockdown, news reports alerted the world to hackers infiltrating the app. The UK government was quickly advised by the National Cyber Security Centre to stop using Zoom due to fears of Chinese surveillance. The Citizen Lab at the University of Toronto published a report in early April outlining the security risks Zoom poses to users' data. The report notes that, although Zoom is a Silicon Valley-based company, it owns three companies in China where around 700 people are employed to develop its software. Why? The arrangement allows Zoom to avoid US wages and increase its profit margins, but it also leaves the company vulnerable to pressure from Chinese authorities, potentially giving them access to private data from users who are not based in China.

COVID-19 certainly increased Zoom's popularity, but it exposed privacy and security failings that were already present before this crisis unfolded. In early January, cybersecurity firm Check Point had already determined that Zoom's security was compromised: a valid meeting ID alone could be enough for an attacker to gain access. The firm recommended mitigations such as enforcing passwords, but Zoom did not act on them. That failure to act contributed to the more recent issues we have seen and shows how such negligence compromised user safety.
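
As a rough illustration of why unenforced passwords mattered, the back-of-the-envelope sketch below estimates how quickly an attacker might stumble on a live meeting by guessing numeric IDs. Every figure in it (ID length, number of concurrent meetings, guess rate) is an assumption chosen for illustration, not a measurement from Check Point's research or Zoom's actual parameters.

```python
# Back-of-the-envelope sketch: why a guessable numeric meeting ID is weak
# protection on its own. All figures are illustrative assumptions.
id_digits = 10                 # assume a roughly 9-11 digit numeric meeting ID
id_space = 10 ** id_digits     # total possible IDs
live_meetings = 1_000_000      # assumed number of meetings running at once
guesses_per_second = 100       # assumed rate at which an attacker tries IDs

hit_probability = live_meetings / id_space  # chance one guess hits a live meeting
expected_seconds = 1 / (hit_probability * guesses_per_second)

print(f"Chance a single random guess hits a live meeting: {hit_probability:.4%}")
print(f"Expected time to find one: about {expected_seconds / 60:.0f} minutes")

# A mandatory per-meeting password adds a second secret on top of the ID, so
# guessing the ID alone no longer grants access - the mitigation Check Point urged.
```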

Will the novelty of this new way of working and communicating wear off? The flaws that have been exposed need to be addressed, and private corporations must take accountability to ensure transparency is at the forefront of their business models. Privacy and security are core ethical principles and must be centred in the design and development of the technology at hand if users are to feel confident.

Tech and Our New Normal of Tomorrow?

No one can truly predict what a post-pandemic world will look like, what our new working habits will be or how we will use new technologies. However, it is evident that video conferencing, digital contact tracing and facial recognition have a role to play in our current normal and in the normal of tomorrow. These technologies all came with evident ethical concerns before COVID-19, and this crisis has highlighted some of their clear faults and why ethical guidelines are essential to mitigating such risks.

It is highly likely that many governments will mandate the use of contact tracing and facial recognition with the intention of stopping the spread of the virus. This is when transparency and accountability matter most: we must be aware of who is developing the technology, why, and with what intentions. We all share one common goal: to fight the spread of COVID-19 and keep our communities safe and healthy. Provided that ethics is at the forefront of business discussions and decisions, and that ethicists are able to play their key role in the innovation and rapid deployment of new technologies, we are headed in the right direction towards the safe and fair technology of tomorrow.

Originally published on LinkedIn

About the Author

Oriana Medlicott is an AI Ethics Strategist, writer, researcher & consultant; passionate about the future of technology, philosophy, and art.

[The views expressed in this article are those of the author and do not necessarily reflect the views of the Ayottaz.com]
