6 Major Ethical Concerns with AI
Fuse AI Insights, Fri, 12 Nov 2021
https://insights.fuse.ai/6-major-ethical-concerns-with-ai/
There are many ethical considerations related to emerging technology, and the scale and application of AI bring with them unique and unprecedented challenges such as privacy, bias and discrimination, economic power, and fairness. Below are some of the major ethical concerns with AI:

Concerns over Data Privacy and Security 

A frequently cited issue is privacy and data protection. Several risks arise from AI-based Machine Learning: ML needs large data sets for training, and access to those data sets raises questions of its own.

An additional problem arises with AI and pattern detection. This capability can pose privacy risks even when the AI has no direct access to personal data. A study by Jernigan and Mistree demonstrated that AI can infer sexual orientation from Facebook friendships. The notion that individuals may unintentionally ‘leak’ clues to their sexuality in digital traces is a cause for worry, especially for those who do not want this information public. Likewise, Machine Learning capabilities enable the potential re-identification of anonymized personal data.

While most jurisdictions have established data protection laws, evolving AI still has the potential to create unforeseen data protection risks and, with them, new ethical concerns. The biggest risk lies in how some organizations collect and process vast amounts of user data in their AI-based systems without customer knowledge or consent, with real social consequences.

Treating Data as a Commodity

Much of the current discourse around information privacy and AI does not take into account the growing power asymmetry between the institutions that collect data and the individuals who generate it. Data is a commodity that can be traded, and for the most part, the people who generate it don't fully understand the implications.

AI systems that understand and learn how to manipulate people's preferences exacerbate the situation. Every time we search the internet, browse websites, or use mobile apps, we give away data, either explicitly or unknowingly. Most of the time, we legally allow companies to collect and process data when we agree to their terms and conditions, and these companies can then sell user data to third parties. There have also been many instances where third parties obtained sensitive user data via data breaches, such as the 2017 Equifax breach, which exposed sensitive data, including credit card numbers and Social Security numbers, of approximately 147 million people.

Ethical Concerns with AI over Bias and Discrimination 

Technology is not neutral; it is as good or bad as the people who develop it, and much of human bias can be transferred to machines. One of the key challenges is that ML systems can, intentionally or inadvertently, reproduce existing biases.

One example of AI bias and discrimination is the 2014 case in which a team of Amazon software engineers building a resume-review program found that the system discriminated against women applying for technical roles.

There is also empirical evidence of AI bias in the form of demographic differentials. Research by the National Institute of Standards and Technology (NIST) evaluated 189 facial-recognition algorithms from 99 developers, including Microsoft, Toshiba, and Intel, and found that contemporary face recognition algorithms exhibit demographic differentials of various magnitudes, with larger differentials in false positives than in false negatives. Another example is San Francisco's 2019 vote against the use of facial recognition, motivated by the belief that AI-enabled facial recognition software is error-prone when used on women and people with dark skin.

Discrimination is illegal in many jurisdictions. Developers and engineers should design AI systems, and monitor their algorithms, around inclusive design practices that take diverse groups into account.
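In practice, "monitoring algorithms" for bias starts with measuring it. The sketch below is a minimal illustration, with invented decisions and a hypothetical 0.1 audit threshold (not a universal standard), of the demographic parity difference: the gap in positive-outcome rates between two groups.

```python
# Minimal sketch of bias monitoring via demographic parity:
# compare the rate of positive decisions (e.g., "interview") across
# two groups. All decisions and the 0.1 threshold are illustrative.

def selection_rate(outcomes):
    """Fraction of positive (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in selection rates; 0.0 means parity."""
    return abs(selection_rate(outcomes_a) - selection_rate(outcomes_b))

# Hypothetical model decisions (1 = positive outcome) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375

# A common, but context-dependent, audit heuristic
if gap > 0.1:
    print("Selection-rate gap exceeds threshold; audit the model.")
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and which one applies depends on the domain.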

Ethical Concerns with AI over Unemployment and Wealth Inequality  

The fear that AI will impact employment is not new. According to a McKinsey Global Institute report, by 2030 about 800 million people could lose their jobs to AI-driven automation. However, many AI experts argue that jobs may not disappear so much as change, and that AI will also create new jobs. Some even argue that if robots take certain jobs, those jobs were too menial for humans anyway.

Another issue is wealth inequality. Most modern economic systems compensate workers for creating a product or offering a service. The company pays wages, taxes, and other expenses, and reinvests the left-over profits in production, training, and new business to further increase profits, and the economy grows in this environment. Introducing AI disrupts this flow: employers do not need to compensate robots or pay payroll taxes on them, and robots can work continuously at a low ongoing cost. CEOs and stakeholders can then keep the profits generated by an AI workforce, which leads to greater wealth inequality.

Concern over Concentration of Economic Power 

The economic impacts of AI are not limited to employment. Another concern is the concentration of economic (and political) power. Most, if not all, current AI systems rely on large computing resources and massive amounts of data, so the organizations that own or have access to such resources gain more of the benefits than those that do not. Big tech companies hold an international concentration of such economic power. Zuboff's concept of "surveillance capitalism" captures the fundamental shifts in the economy facilitated by AI.

The development of such AI-enabled concentrated power raises the question of fairness when large companies exploit user data collected from individuals without compensation. Not to mention, companies utilize user insights to structure individual action, reducing the average person’s ability to make autonomous choices. Such economic issues thus directly relate to broader questions of fairness and justice.  

Ethical Concerns with AI in Legal Settings 


Another debated ethical issue arises in legal settings: the use of AI can amplify biases in predictive policing and criminal probation services. According to a report on SSRN, law enforcement agencies increasingly use predictive policing systems to forecast criminal activity and allocate police resources. Yet these systems are built on data produced during documented periods of biased, flawed, and sometimes unlawful practices and policies.

At the same time, the entire process is interconnected: policing practices and policies shape how the data is created, raising the risk of skewed, inaccurate, or systematically biased data. If predictive policing systems ingest such data, they cannot break away from the legacies of unlawful or biased policing on which they were built. Moreover, the claims of predictive policing vendors do not provide sufficient assurance that their systems adequately mitigate these data problems.

Concerns with the Digital Divide 

AI can exacerbate another well-established ethical concern: the digital divide. Divides between countries, genders, ages, and rural and urban settings, among others, are already entrenched, and AI can deepen them. AI is also likely to affect access to other services, potentially further excluding segments of the population; lack of access to the underlying technology means missed opportunities.

In conclusion, the ethical issues that come with AI are complex. The key is to keep them in mind when developing and implementing AI systems; only then can we analyze the broader societal issues at play. There are many different angles and frameworks from which to debate whether AI is good or bad, and no one theory is best. Nevertheless, as a society, we need to keep learning and stay well-informed in order to make good decisions about the future.

You can also check our previous blog for more information on AI ethics: Ethics of Artificial Intelligence.

The Fuse.ai center is an AI research and training center that offers blended AI courses through its proprietary Fuse.ai platform. The proprietary curriculum includes courses in Machine Learning, Deep Learning, Natural Language Processing, and Computer Vision. Certifications like these will help engineers become leading AI industry experts. They also aid in achieving a fulfilling and ever-growing career in the field.

Ethics of Artificial Intelligence
Fuse AI Insights, Thu, 11 Nov 2021
https://insights.fuse.ai/ethics-of-ai-artificial-intelligence/
The great impacts of AI aren't without challenges. A steep learning curve invites mistakes and miscalculations, which can result in unanticipated harm. When designing, producing, and deploying AI models, data scientists, engineers, domain experts, and delivery managers should make ethics a priority.

What is the Ethics of AI? 

AI ethics is the set of techniques, values, principles, and accepted standards of right and wrong that guide moral conduct in the development and deployment of AI.

Robot ethics

Robot ethics, or roboethics, refers to the morality of how humans build, design, use and treat robots. This subset is concerned with the rules AI engineers and those involved in the creation and deployment of AI models should apply to ensure ethical robot behavior. Roboethics deals with moral dilemmas, such as concerns of robots posing threats to humans or using robots in wars.

The main principle is guaranteeing autonomous systems exhibit acceptable behavior in situations with humans, AI systems, and other autonomous systems such as self-driving vehicles. Robot ethics emerged out of engineering ethics and shares its origins with both Asimov’s Laws and traditional engineering concerns of safe tools. 

Machine ethics

Unlike roboethics, machine ethics, also known as machine morality, is a newer field focused on designing computer and robotic systems that demonstrate sensitivity to human values. In other words, machine ethics deals with building human value sensitivity into AI models so that they can make morally sound decisions. The field is thus concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally.

You can think of a simple robot's choices and actions as hard-wired, which is sometimes called "operational morality." As systems become more autonomous, there arises a need to build AI models with ethical routines, so that they can select appropriate behavior from among various courses of action. This is known as "functional morality," and it is what machine ethics is about. Functional morality can still fall far short of full moral agency.

Click here to read the blog about the ethical concerns of AI: 6 Major Ethical Concerns with AI.

Ethics of AI Principles 

Governments, the EU, large companies like Microsoft and Google, and many other organizations have drafted policy documents and ethical guidelines on the ethics of AI over the years. Their converging recommendations currently point to 11 major ethical principles:

Transparency

Transparency is the most prevalent principle in the current AI ethics literature. Common themes include increased explainability, interpretability, and other acts of communication and disclosure. After all, the impact of AI on people's daily lives will grow the more it is applied, potentially in life-or-death decisions such as diagnosing disease or a self-driving car's choices in complex traffic. This calls for high levels of transparency.

We can apply this principle to data use, human-AI interaction, and automated decisions. Transparency allows humans to see, understand, and verify whether models have been thoroughly tested, why an AI made a particular decision, and what data the model has ingested. It helps answer questions such as "What was that decision based on?" and "Why was it made the way it was?"

Transparency also helps minimize harm, improve AI responsibility, and foster trust. It makes underlying values explicit and encourages companies to take responsibility for AI-based decisions. Such decisions can then account for ethical considerations while staying aligned with the company's core principles.

Many policy documents suggest increased disclosure of information by AI developers and deployers, although specifications regarding what should be communicated vary from one policy to the next, with some asking for transparency regarding the AI source code, limitations of AI models, and investment specifics while others ask for transparency regarding the possible impacts of AI systems.   
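As a concrete, if simplified, illustration of explainability, the sketch below implements permutation importance: permute one input column and measure how much a model's accuracy drops. The toy "model" and loan-style data are invented for the example, and the column is rotated rather than randomly shuffled so the output is deterministic.

```python
# Minimal sketch of permutation importance, one common technique for
# explaining which inputs a model's decisions actually depend on.
# The model and data are illustrative, not any specific product.

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx):
    """Drop in accuracy when one feature's column is permuted: the
    bigger the drop, the more the model relies on that feature.
    (Real implementations shuffle randomly and average over repeats;
    here the column is rotated so the result is deterministic.)"""
    baseline = accuracy(model, rows, labels)
    column = [r[feature_idx] for r in rows]
    column = column[1:] + column[:1]  # deterministic permutation
    permuted = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return baseline - accuracy(model, permuted, labels)

# Toy "model": approve (1) if income (feature 0) exceeds 50;
# feature 1 is ignored by the model entirely.
model = lambda row: 1 if row[0] > 50 else 0
rows = [[60, 3], [40, 7], [80, 1], [30, 9], [55, 2], [45, 8]]
labels = [1, 0, 1, 0, 1, 0]

print(permutation_importance(model, rows, labels, 0))  # 1.0: relied on
print(permutation_importance(model, rows, labels, 1))  # 0.0: ignored
```

The importance scores make the model's reliance explicit: decisions hinge entirely on income and not at all on the second feature, which is exactly the kind of disclosure the transparency principle asks for.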

Justice, Fairness, and Equity

Many ethical guidelines call for justice and the monitoring of bias. There are some sources, such as the Position of Robotics and AI policy by Green Digital Working Group, that also focus on justice as respect for diversity, inclusion, and equality. This ethical principle of AI emphasizes the importance of fair access to AI and its benefits, placing a particular emphasis on AI’s impact on the labor market and the need to address democratic or societal issues. 

Non-maleficence

The principle of non-maleficence calls for safety and security, stating that AI should never cause foreseeable or unintentional harm. More considerations entail the avoidance of specific AI risks or potential harms, such as intentional misuse via cyber warfare or malicious hacking. Risk-management strategies also fall under this principle, such as technical measures and governance. Such strategies can range from interventions at the level of AI research and design to technology development and/or deployment. 

Responsibility and Accountability

Sources rarely define responsibility and accountability in AI, despite widespread references to “responsible AI.” Recommendations include acting with integrity and clarifying the attribution of responsibility and legal liability. This principle also focuses on the underlying reasons and processes that lead to harm. 

Privacy

Privacy is seen both as a value to uphold and as a right to be protected, often presented in relation to data protection and data security. Suggested modes of achieving it fall into four categories: technical solutions (such as differential privacy and privacy by design), data minimization, access control, and regulatory approaches.
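To make "differential privacy" less abstract, here is a minimal sketch of its classic building block, the Laplace mechanism: answer a counting query with noise calibrated to the query's sensitivity and a privacy budget epsilon. The records and epsilon value are invented for illustration.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy:
# add noise scaled to a query's sensitivity so that any one person's
# presence in the data barely changes the published answer.
# The records and epsilon below are illustrative.
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Differentially private count. A counting query has sensitivity 1
    (adding or removing one person changes it by at most 1), so the
    noise scale is 1 / epsilon; smaller epsilon means more privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

ages = [23, 37, 45, 29, 52, 41, 33, 60]  # hypothetical records
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5,
                      rng=random.Random(42))
print(f"True count: 4, private count: {noisy:.2f}")
```

Each noisy release spends part of the privacy budget, so repeated queries require composing epsilons; production systems use vetted differential privacy libraries rather than hand-rolled noise.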

Beneficence

The principle of beneficence comprises the augmentation of human senses and the promotion of human well-being, peace, and happiness. It focuses on the creation of socio-economic opportunity and economic prosperity. Strategies for implementing this principle include aligning AI with human values, advancing scientific understanding, and minimizing power concentration and conflicts of interest.

Freedom and Autonomy

This refers to the freedom of expression and the right to flourish with self-determination through democratic means. It also covers the freedom to use a preferred platform or technology. Transparent and predictable AI promotes freedom and autonomy.

Trust

This principle calls for trustworthy AI research and technology, trustworthy AI developers and organizations, and trustworthy design principles. The term also underlines the importance of customers’ trust. A culture of trust among scientists and engineers can support the achievement of other organizational goals. Furthermore, in order for AI to fulfill its potential, overall trust in recommendations, judgments, and AI use is indispensable.    

Education, reliability, and accountability are important to build and sustain trust. Engineers can also develop processes to monitor and evaluate the integrity of AI systems over time. Additionally, while some guidelines require AI to be transparent and understandable, others explicitly suggest that instead of demanding understandability, AI should fulfill public expectations.

Sustainability

Sustainability calls for developing and deploying AI in ways that improve the ecosystem, protect the environment, support biodiversity, and contribute to fairer and more equal societies. Ideally, AI can create sustainable systems whose insights remain valid over time, through increased efficiency in the design, deployment, and management of AI models, while minimizing its own ecological footprint.

Dignity

The principle of dignity is intertwined with human rights. It entails avoiding harm, forced acceptance, automated classification, and unacknowledged human-AI interaction. Artificial Intelligence should respect, preserve, and even increase human dignity, never diminish or destroy it.

Solidarity

Solidarity is an ethical principle mostly referenced in relation to the implications of AI for the labor market. It calls for a strong social safety net. This fundamental philosophy underlines the need for redistributing the benefits of AI to protect social cohesion and respect vulnerable groups.

Why do we need Ethics of AI? 

Ethical AI ensures that AI initiatives maintain human dignity and don’t cause harm. Data often reflects the bias in society, and when not corrected, can cause AI systems to make biased decisions. AI firms need to ensure that the choices they make, from the partners they work with and the composition of their data science teams to the data they collect, all contribute to minimizing bias. Furthermore, the adoption of ethical AI principles is essential for the healthy development of all AI-driven technologies. Self-regulation by the industry will also be much more effective than any legislative effort if engineers and developers uphold ethical principles during the creation and deployment process.  

You can also read our previous blog about how businesses benefit from AI: 10 Benefits and Applications of AI in Business.
