What are the ethical nightmares of AI?

With new and exciting AI products coming out every week, what ethical issues do companies need to consider before creating new AI devices or implementing them in the workplace?

Click here to listen to TBT on Air’s latest podcast, ‘The ethical nightmares of AI’ 

AI is a transformative technology that will become part of our everyday lives, from the office to our homes and cars. As we introduce AI into our lives, we need to recognise that building these systems is no longer purely a question of functional skill but also one of ethics. With AI spanning industries such as healthcare, retail and manufacturing, there are several ethical issues we need to stay vigilant about to ensure that AI is not doing more harm than good. So, what issues do we need to look out for to ensure AI remains a helpful part of our world?

Threatening human jobs

One of the biggest issues that needs to be dealt with is the fear of AI replacing human workers. AI has provoked mixed emotions, with many people worrying about it taking over their jobs; in most cases, however, that is not what happens. Companies need to be open and honest with their employees about how their responsibilities will change and how new categories of jobs can be created.

In research by Cognilytica analysts Kathleen Walch and Ronald Schmelzer, it was found that companies adopting augmented intelligence approaches, where AI augments and helps humans to do their jobs better rather than fully replacing them, see faster and more consistent ROI, and the technology is welcomed much more warmly by employees. People feel more comfortable working with machines than being replaced by them.

Misuse of AI

Another major issue that needs to be addressed is the misuse of AI for surveillance and the manipulation of human decisions. AI allows governments to keep tabs on what people are doing through technologies such as facial recognition. A 2019 Carnegie Endowment study of 176 countries estimated that at least 75 of them are actively using AI surveillance. Even tech giants have raised concerns about AI surveillance and the possibility of governments and companies abusing the technology. For example, Microsoft President Brad Smith came forward to the US Congress and said that "we live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology".

In the near future, we can expect a world where companies and governments know who we are, where we are and what we want. It is uncomfortable to think that they will hold massive amounts of knowledge about our lives and, in turn, may influence us by manipulating our decisions. Companies therefore need to look at how they implement AI surveillance and ensure they are not infringing on their employees' privacy. Unless there is a very good reason to access an employee's computer camera or emails, it simply should not be done.

AI-powered analytics have been in action for a few years now, giving companies an idea of what you will purchase, who you will vote for and what content you will read. Unfortunately, companies and governments have abused these analytics to manipulate the decisions people make. This is ethically indefensible, as the Cambridge Analytica case showed: American voter data was unlawfully obtained from Facebook to build voter profiles, and AI was then used to automate social media accounts that created and spread misinformation across the internet to manipulate voters' decisions at the ballot box.

Companies need to remain cautious about what they do with their employee and client information while protecting that information from malicious attacks. 

Malicious users

Malicious users are becoming a major issue that needs to be acknowledged. One way a user can cause harm is through deepfakes, which can have a major impact on the decisions we make and how we make them. Deepfakes are fabricated images or videos in which one person's likeness is replaced with another's. Malicious internet users may use this technology to misrepresent political leaders' speeches and actions.

The need for action is now

With technology constantly evolving, AI-powered threats are becoming harder to detect, more adaptable to different systems and environments, and more accurate at identifying vulnerable areas within a system. Companies and governments need to act now to build a strong, reliable digital infrastructure that can withstand these attacks. In the report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the authors warn that companies and governments can expect novel attacks exploiting human vulnerabilities, existing software vulnerabilities or the vulnerabilities of AI systems themselves. The use of AI to automate the tasks involved in carrying out attacks with drones and other physical systems may expand these threats, and the authors also anticipate novel attacks that subvert cyber-physical systems or involve physical systems that would be infeasible to direct remotely. Similarly, the use of AI to automate surveillance and deception may expand the threats of privacy invasion and social manipulation, alongside novel attacks that exploit an improved capacity to analyse human behaviours, moods and beliefs from available data. These concerns are most significant in the context of authoritarian states but may also undermine the ability of democracies to sustain truthful public debate.

Finally, with machines becoming more intelligent by the day, we need to understand, as a society, how they should be treated and viewed. At the moment, this issue raises only questions, as we simply don't know yet. For example, when machines start to replicate emotions while acting much like humans, how should they be governed? Should we consider machines as humans, animals or inanimate objects? Will we consider the feelings of machines?


With no answers to these questions yet, we will have to wait and see what happens over the coming years; perhaps machines will one day be seen as beings that deserve and require protection.





Amber Donovan-Stevens

Amber is a Content Editor at Top Business Tech
