Ethics of AI: A Practical Guide for Professionals

The Moral Code of AI: a journey through the values that shape its destiny:

Transparency;

Equity;

Responsibility;

Privacy;

Security;

Respect for Human Rights.


Transparency:

Transparency is a key concept in the ethics of artificial intelligence, a principle that permeates every reflection on the proper use of these technologies. In essence, transparency translates into our ability to look into the internal mechanisms of AI systems and to understand the logic that guides them in formulating their decisions. In practice, this means having the possibility to observe the inner workings of what is often referred to as the “black box” of AI, an expression that emphasizes the difficulty of seeing how information is processed and transformed. The need for this access to the “thought” of the machine stems from a fundamental question: how can we, as human beings, place our trust in a system that operates opaquely, dispensing answers and recommendations without revealing the processes that shaped those predictions? Transparency thus emerges as an essential element for establishing a genuine relationship of trust between the user and the AI, a relationship based on awareness rather than blind acceptance. Conversely, a lack of transparency leaves us in the hands of a kind of digital oracle that may appear omniscient, but that reveals, in its opacity, a potential source of confusion and distrust. This lack of clarity can trigger a series of problems, often complex and with far-reaching consequences, especially when automated decisions have a direct and tangible impact on people's lives.

To overcome this challenge and shed light on the inner workings of AI, a specific and dynamic field of research has emerged: Explainable AI, abbreviated as XAI. This scientific field is dedicated to developing models and techniques designed to make AI systems more understandable and accessible to the human intellect. Within this area of innovation, a few key techniques deserve special attention.

One of these is LIME, which stands for Local Interpretable Model-agnostic Explanations. LIME serves as a tool to reveal the internal logic of an AI system in a specific context, by analyzing how subtle changes to the input data affect the final output of the model. In essence, LIME allows us to “perturb” an input, for example an image, to see which regions matter for its classification. It is as if LIME helped us understand how an AI system classified a particular image by running virtual experiments in which it obscures, highlights, and alters pixels of the source image. In this way, LIME provides a “local” explanation, focused on the single case under analysis.
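To make the intuition concrete, here is a minimal sketch of the LIME idea for tabular data, written from scratch rather than using the official `lime` library: we perturb a single instance, query the black-box model on the perturbed samples, weight them by proximity, and fit a small linear surrogate whose coefficients serve as the local explanation. The `black_box` function below is a hypothetical stand-in for whatever model is being explained.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box_predict, instance, num_samples=1000, scale=0.5):
    """LIME-style local surrogate: perturb `instance`, fit a weighted
    linear model, and return one importance weight per feature."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance with Gaussian noise around its feature values.
    perturbed = instance + rng.normal(0.0, scale, size=(num_samples, len(instance)))
    # 2. Query the black box for its predictions on the perturbed samples.
    predictions = black_box_predict(perturbed)
    # 3. Weight samples by proximity: closer perturbations count more.
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, predictions, sample_weight=weights)
    return surrogate.coef_  # local importance of each feature

# Toy "black box" whose output secretly depends mostly on feature 0.
black_box = lambda X: 3.0 * X[:, 0] + 0.2 * X[:, 1]
print(explain_locally(black_box, np.array([1.0, 1.0])))
```

The returned coefficients should show feature 0 dominating, which is exactly the kind of local insight LIME offers for a single prediction.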

Another noteworthy technique is SHAP, short for SHapley Additive exPlanations. SHAP draws inspiration from Shapley values, a concept from game theory that makes it possible to evaluate the individual contribution of each “player” (in this case, each input feature) to the final outcome of a “game” (the decision of the model). Applied to AI, SHAP allows us to quantify and assign an “importance” value to each of the characteristics that shaped the final decision of the model. Taking the example of image classification, SHAP would tell us whether, in a specific AI model, the “whiskers” or the “ears” of a cat exerted a greater influence on its classification compared to other features. Unlike LIME, which focuses on a single case, SHAP's attributions can also be aggregated into a “global” perspective, outlining the relative importance of each feature in the overall behavior of the AI system.
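As an illustration of the underlying game-theoretic idea, the sketch below computes exact Shapley values for a toy model by enumerating every coalition of features. This brute-force enumeration is only feasible for a handful of features; production libraries such as `shap` rely on efficient approximations instead. The `cat_score` "model" and its feature names are invented for this example.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_of, features):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over all coalitions with the classic Shapley weights."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for size in range(n):
            for coalition in combinations(others, size):
                s = len(coalition)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                marginal = value_of(set(coalition) | {f}) - value_of(set(coalition))
                phi[f] += weight * marginal
    return phi

# Toy "model": classifier score for "cat" given which features are visible.
def cat_score(visible):
    score = 0.1  # base rate with no features visible
    if "whiskers" in visible: score += 0.5
    if "ears" in visible:     score += 0.3
    if "tail" in visible:     score += 0.1
    return score

print(shapley_values(cat_score, ["whiskers", "ears", "tail"]))
```

Here the output attributes 0.5 to "whiskers" and 0.3 to "ears", quantifying exactly the kind of relative influence described above.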

It is important to keep in mind that XAI techniques are not exhausted by LIME and SHAP. There are additional approaches, such as Grad-CAM, which is mainly applied in the field of computer vision and makes it possible to visualize the most salient areas within an image, those to which a neural network pays the most attention when classifying it.
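For readers who want to see the mechanics, here is a minimal sketch of the Grad-CAM idea using PyTorch hooks on a pretrained ResNet from `torchvision`. It weights the last convolutional feature maps by their pooled gradients with respect to a class score; the layer name `layer4` is specific to ResNet architectures, and the preprocessing of the input image is assumed to happen elsewhere.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

# Capture the feature maps of the last convolutional block and their gradients.
def fwd_hook(module, inputs, output):
    activations["maps"] = output.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["maps"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image_batch, target_class):
    """Return a coarse saliency map showing where the network 'looked'."""
    scores = model(image_batch)
    model.zero_grad()
    scores[0, target_class].backward()  # gradient of the chosen class score
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)  # pooled grads
    cam = F.relu((weights * activations["maps"]).sum(dim=1))    # weighted sum
    return cam / (cam.max() + 1e-8)     # normalize to [0, 1]

# Usage sketch: heatmap = grad_cam(preprocessed_image, target_class=281)  # tabby cat
```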

The need for transparency, however, is not an imperative that is uniform and valid for every type of AI system. On the contrary, the requirement is modulated and varies significantly in relation to the context of the application and to the implications of the automated decisions. In other words, we can define a sort of spectrum of transparency, unfolding over a range of increasing levels of clarity and intelligibility.

At the low end of the spectrum, we find systems that are relatively simple, or applications in which the impact of a possible wrong decision is minimal. Consider, for example, an AI system that suggests products similar to those we purchased earlier on an e-commerce platform. In this case, the user is primarily interested in the capabilities and effectiveness of the system, rather than in a detailed analysis of its internal algorithms. The “black box” is not, in this scenario, a particularly critical problem.

As automated decisions acquire greater importance and impact our lives more significantly, the need for transparency becomes progressively more stringent. In AI systems that have a decisive influence on access to essential goods or services, such as the approval or denial of a bank loan application, the user acquires the right to know at least the main criteria taken into account by the system, in order to understand the logic that shaped the decision.

The maximum level of transparency, finally, imposes itself as an ethical imperative in high-risk environments, where algorithmic decisions exert a profound and potentially irreversible influence on people's lives. Consider, for example, the use of AI in the medical field for formulating diagnoses (particularly when invasive and difficult therapies are at stake), or in the legal field for determining judgments (such as the assessment of social dangerousness). In these scenarios, transparency ceases to be a simple recommendation and becomes a duty: the user not only has the right to understand, in detail, how the decision was made, but must also be able to dispute it, ask for a review and, if necessary, obtain a correction.

A lack of transparency, in fact, can trigger a series of non-negligible problems, often intricate and with far-reaching consequences.

Consider, for example, AI systems used for algorithmic judicial decisions. These systems, used in some contexts to estimate the probability that an inmate will reoffend, may conceal opaque internal decision-making mechanisms based on statistical variables that are hard to decipher and, at times, even questionable on ethical grounds (think of the use of socio-economic data or even the subject's area of origin). This can lead to judicial decisions that are grossly unfair and that, far from blunting them, are likely to amplify social inequalities that already exist. The lack of transparency, in these cases, leaves the defendant and his counsel without the ability to fully understand and, consequently, to effectively challenge the logic that shaped the decision.

The recommendations made by social media algorithms also represent a context in which the lack of transparency can prove extremely problematic. These algorithms often select and filter, in an invisible manner, the information to which we are exposed, shaping our informational bubbles based on our preferences and online activities. This phenomenon can lead us to lock ourselves into so-called “echo chambers”, restricted and self-referential information spaces in which we are exposed mainly to opinions that confirm our preexisting beliefs, limiting our openness to different perspectives and our ability to develop critical thinking. The opacity with which these algorithms operate makes it difficult to assess the true extent of their influence and the degree of manipulation to which, perhaps unconsciously, we are exposed.

Finally, in the context of human resources (HR) as well, the non-transparent use of AI can give rise to non-negligible ethical concerns. A growing number of companies, in fact, makes use of AI systems to automate some of the most crucial decisions related to staff management, such as the selection of candidates at the recruitment stage, the evaluation of work performance or the awarding of career advancement. In these scenarios, transparency proves to be an essential factor in ensuring that the evaluation criteria adopted by AI systems are correct, objective and impartial, and that they do not give rise to any form of discrimination or preferential treatment (nepotism, sexism, etc.).

Equity:

Equity stands as a fundamental pillar in the architecture of the ethics of artificial intelligence, a principle that calls on us to ensure that AI systems are not tools of discrimination or vehicles for unjustified favouritism. In this context, fairness is not limited to demanding formally equal treatment for all; it goes further, demanding substantive justice, able to recognize and respect the diversity, vulnerability and specific needs of each individual.

The challenge of pursuing equity in the context of AI has never been more difficult. AI systems, even when we do not intend it, can end up reproducing, or even amplifying, the prejudices and inequities that already exist in human society. This phenomenon, which often manifests itself through so-called “algorithmic bias”, is closely related to the systematic biases that creep into the data used to train AI models. Such distortions, becoming embedded in the data, end up shaping the functioning of the algorithms and influencing their decisions.

It is important to be aware that there are several types of bias, each with its own characteristics and implications:

  • Historical bias is rooted in the injustices of the past, when certain social groups were subjected to systematic discrimination or were excluded from critical opportunities. If an AI system is trained on data that reflect these historical iniquities, the risk of reproducing them in the present is very high.
  • Representation bias creeps in when the training data fail to capture the full diversity of the real-world population. If a group of people is under-represented in the dataset, it is likely that the AI system will be unable to operate as well for that group, and the results will be distorted.
  • Measurement bias, finally, is related to the distortions that may affect the collection or measurement of the data. If the tools and methods of measurement are inherently flawed, the AI systems trained on those data will inevitably inherit the distortions.

To make the impact of algorithmic bias concrete, we can consider some examples:

  • Facial recognition systems, for example, have often been shown to make far more errors in identifying people with darker skin, with potentially adverse consequences in the fields of security and surveillance.
  • Translation systems can sometimes perpetuate gender stereotypes, translating gender-neutral linguistic expressions in ways that assign certain professions or social roles mainly to men or to women.
  • AI-based human resources systems, used for candidate selection, can unwittingly favour candidates from the same university or the same cultural context as the recruitment team, reproducing dynamics of algorithmic “homophily”.
  • Even voice assistants, with their linguistic choices and the voices they use, can contribute, even if unintentionally, to conveying and reinforcing certain stereotypes about society and relations between human beings.

To address these challenges and ensure the fairness of AI systems, a series of strategies and techniques have been developed, including:

  • The adoption of approaches such as “fairness through awareness”, which takes explicit account of sensitive attributes in order to build models that mitigate their effect, and “fairness through blindness”, which, on the contrary, proposes to exclude all sensitive information from the model (an approach which, however, may not always be effective and may have unexpected results).
  • The use of diverse, representative training datasets that reflect the full range of relevant characteristics of the population.
  • The implementation of regular and systematic audits aimed at identifying any signs of bias in the performance of AI systems; a minimal sketch of such an audit appears after this list.
  • The development of algorithms that incorporate mechanisms to evaluate and quantify the impact of bias, enabling targeted corrections.
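As an illustration of what such an audit might measure, the sketch below computes two common fairness diagnostics over a model's decisions: the demographic parity gap (difference in positive-decision rates between groups) and the gap in false negative rates. The labels, decisions, and group attribute are invented for the example; a real audit would use the organization's own protected attributes and applicable legal criteria.

```python
import numpy as np

def fairness_audit(y_true, y_pred, group):
    """Report per-group selection rates and false negative rates,
    plus the largest gap between groups for each metric."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()
        positives = mask & (y_true == 1)  # members of g who truly qualify
        fnr = (y_pred[positives] == 0).mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "false_negative_rate": fnr}
    rates = [m["selection_rate"] for m in report.values()]
    fnrs = [m["false_negative_rate"] for m in report.values()]
    report["demographic_parity_gap"] = max(rates) - min(rates)
    report["fnr_gap"] = max(fnrs) - min(fnrs)
    return report

# Hypothetical audit data: true outcomes, model decisions, protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_audit(y_true, y_pred, group))
```

Run periodically on fresh decisions, gaps like these give an auditor a concrete, quantitative signal that a system deserves closer scrutiny.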

Responsibility:

Responsibility is a crucial aspect in the landscape of the ethics of artificial intelligence, because it raises fundamental questions about the attribution of fault and the definition of obligations in the context of the actions and decisions of AI systems. In essence, the principle of responsibility asks us “who must give an account” when an AI system makes a mistake, causes damage or acts in an unexpected way.

This question is far from simple, because the complexity of AI systems often blurs the traditional boundaries of responsibility. Consider, for example, a scenario in which a self-driving car is involved in an accident. In this case, responsibility may fall on a multiplicity of actors: the designer of the software controlling the car, the manufacturer of the car, the company that provided the training data, or even the passenger.

The difficulty of identifying a single responsible party is amplified by the “chain of responsibility” that characterizes the development and implementation of AI systems. These systems, in fact, are often the result of collective work involving many teams and different organizations, each with their own skills and responsibilities.

To bring clarity to this point, it is worth exploring different perspectives on responsibility in the field of AI:

  • Individual responsibility focuses on the role of the individuals involved in the design, development and use of AI systems. In this model, responsibility is understood as a moral and legal duty to act in a responsible way and to answer for one's actions.
  • Corporate responsibility moves the focus onto the obligation of companies that develop and deploy AI systems to ensure that they are safe, ethical and respectful of the law. In this context, companies can be held responsible for damage caused by their own AI systems, even if they did not act with intent or gross negligence.
  • State responsibility calls into question the role of public institutions in regulating and supervising the development and use of AI systems in order to protect the rights and interests of citizens.

It is clear that the theme of responsibility in the field of AI is destined to attract numerous in-depth debates, since there are no simple or universally accepted solutions. However, the search for a robust and well-defined ethical framework is essential to building a future in which AI is a positive force and not a source of risk or uncertainty.

Privacy:

In the complex scenario of the ethics of artificial intelligence, privacy emerges as an issue of fundamental importance, all the more so in an era in which the ability to collect, analyze and make use of personal data has reached unprecedented levels. In this context, the concept of privacy is based on the inalienable right of every individual to exercise full and conscious control over the fate of their personal information, defining how it will be collected, processed, shared and, ultimately, protected.

Artificial intelligence systems, in particular those based on powerful machine learning techniques, require by their very nature large amounts of data in order to be properly trained and to work effectively. These data often include extremely delicate information, such as demographics, geographical coordinates, logs of online behavior, biometric data, health information and financial data.

The use of such a large and sensitive body of data raises a series of intricate ethical challenges. On the one hand, access to personal data that is managed in a fair and transparent way can undoubtedly enable the development of artificial intelligence systems able to bring significant benefits to society, such as those that enhance the accuracy of medical diagnoses, refine pedagogical approaches in educational contexts, or optimize the efficiency of transport systems. On the other hand, the indiscriminate collection and processing of personal data concretely exposes the private sphere of individuals to a number of potentially serious risks, including:

  • Mass surveillance, made possible by the use of artificial intelligence systems for the continuous and extensive monitoring of individuals' activities, both online and in the physical world. This pervasive observation can establish a climate of constant scrutiny, which in turn undermines personal freedom and the ability to act without constraints.
  • Profiling, that is, the systematic analysis of data for the purpose of building detailed profiles of individuals, then used to make decisions in crucial areas, such as recruitment processes, the granting of bank loans, access to certain services, or the delivery of targeted advertisements. The real risk is that profiling will lead to discriminatory practices and a significant compression of the individual's opportunities.
  • Unauthorized secondary uses of personal data, namely the use of information originally collected for a specific purpose for radically different purposes, neither contemplated nor authorized by the individuals themselves.
  • Breaches of data security, which may result from external cyber attacks or from internal data leaks, exposing the personal information stored in artificial intelligence systems to serious risks and potentially causing irreparable damage to the individuals concerned.

To protect privacy in this complex and dynamic scenario, specific regulations have been developed, along with a wide range of techniques dedicated to the protection of data.

Among the most relevant rules at the international level stands the General Data Protection Regulation (GDPR), in force in the European Union. This regulation sets out a coherent framework of principles that should guide the collection, processing and management of personal data, including:

  • The principles of lawfulness, fairness and transparency, which require that the processing of data be lawful and conducted in a way that protects the data subjects, providing them with clear and accessible information.
  • The principle of purpose limitation, which requires that data be collected for specified, explicit and legitimate purposes, precluding their use for purposes incompatible with the original ones.
  • The principle of data minimisation, which requires that the data collected and processed be adequate, relevant and strictly necessary in relation to the purposes pursued.
  • The principle of accuracy, which requires that the information be accurate and, when necessary, kept up to date.
  • The principle of storage limitation, which establishes that data be kept only for the time strictly necessary to achieve the purposes for which they were collected.
  • The principle of integrity and confidentiality, which requires the adoption of appropriate security measures to protect the data from unauthorized access, unlawful processing, loss, destruction or accidental damage.
  • The principle of accountability, which identifies the data controller as the person responsible for ensuring compliance with the GDPR and for being able to demonstrate that compliance.

In parallel with the regulatory framework, various techniques have been developed to protect privacy in artificial intelligence systems, including:

  • The privacy by design approach, which proposes integrating privacy protection measures from the very early stages of system design.
  • Anonymization techniques, which remove identifying information from the data, making it impossible to trace them back to a specific individual.
  • Differential privacy techniques, which add a degree of “noise” to the data in order to protect the privacy of individuals without precluding aggregate analyses; a minimal sketch of this mechanism appears after this list.
  • Encryption techniques, which encode data in such a way as to make it unreadable to anyone who does not have the appropriate decryption keys.
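To give a sense of how differential privacy works in practice, here is a minimal sketch of the classic Laplace mechanism applied to a counting query: noise calibrated to the query's sensitivity and to a privacy budget epsilon is added to the true answer, so that any single individual's presence in the dataset changes the output distribution only slightly. The dataset and epsilon values below are invented for the example.

```python
import numpy as np

def private_count(values, predicate, epsilon):
    """Laplace mechanism for a counting query. A count has sensitivity 1
    (adding or removing one person changes it by at most 1), so noise is
    drawn from Laplace(0, 1/epsilon)."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.default_rng().laplace(0.0, 1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many users in the dataset are over 65?
ages = [23, 71, 45, 68, 34, 80, 52, 67]
for epsilon in (0.1, 1.0):  # smaller epsilon = stronger privacy, more noise
    print(f"epsilon={epsilon}: ~{private_count(ages, lambda a: a > 65, epsilon):.1f}")
```

The trade-off is explicit: a smaller epsilon yields stronger privacy guarantees but noisier aggregate answers.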

In conclusion, privacy is an unavoidable component of the ethics of artificial intelligence. The development and deployment of artificial intelligence systems must be driven by a profound respect for the privacy of individuals, which implies the adoption of a multifaceted approach that combines a robust regulatory framework with the effective use of technical privacy protections.

Security:

Security stands as a fundamental imperative in the ethics of artificial intelligence, a principle that transcends the simple protection of AI systems from external threats and embraces a broader concept of resilience and reliability. In the context of AI, security implies the need to ensure that systems are not only protected from cyber attacks, but also able to work in a predictable and reliable way, avoiding errors and unwanted behaviors.

One of the crucial aspects of security in AI is the vulnerability of the algorithms. Unlike traditional software, machine learning systems, especially those based on deep learning techniques, can be fooled by what are referred to as “adversarial attacks”. These are minimal perturbations, often imperceptible to the human eye, made to the input data (for example, very slight changes to the pixels of an image) that can induce the AI system to make classification errors. Imagine, for example, a facial recognition system that, because of an adversarial attack, swaps the face of one person for that of another: the consequences, in terms of safety and invasion of privacy, can be very serious.
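To illustrate how small such perturbations can be, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: it nudges every input pixel a tiny step in the direction that most increases the model's loss. The `model` here is assumed to be any differentiable PyTorch classifier; `epsilon` controls how visible the perturbation is.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb each pixel by +/- epsilon in the
    direction that most increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (model, image_batch and labels are placeholders):
#   adv = fgsm_attack(model, image_batch, labels, epsilon=0.03)
#   print(model(adv).argmax(dim=1))  # often differs from the true labels
```

With an epsilon this small, the perturbed image typically looks identical to a human observer, which is precisely what makes these attacks so insidious.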

In addition to protecting themselves from adversarial attacks, AI systems must demonstrate robustness, that is, the ability to operate correctly even in the presence of noise, errors or incomplete data. Consider, for example, an autonomous driving system that must interpret road images in adverse weather conditions or in low light. Robustness is essential to ensure that the system does not make mistakes that could put people's safety at risk.

Another important concept is resilience, which refers to the ability of an AI system to recover from a failure or an attack. A resilient system is able to continue operating, at least in a degraded mode, even when problems arise, and to quickly return to its normal state once the problem is resolved.
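As one concrete pattern for this kind of graceful degradation, the sketch below wraps a hypothetical primary model behind a fallback: if the primary fails, the system answers with a simpler, more conservative rule instead of failing outright. Names such as `flaky_model` and `safe_rule` are placeholders invented for this illustration.

```python
import logging

def resilient_predict(primary_model, conservative_rule, features):
    """Serve a prediction from the primary model, degrading gracefully
    to a simple rule-based fallback if the model fails."""
    try:
        return {"source": "model", "prediction": primary_model(features)}
    except Exception as err:
        # Log the failure for later diagnosis, but keep the service alive.
        logging.warning("primary model failed (%s); using fallback", err)
        return {"source": "fallback", "prediction": conservative_rule(features)}

def flaky_model(x):
    raise RuntimeError("GPU unavailable")  # simulate a failing primary model

safe_rule = lambda x: "defer to human review"
print(resilient_predict(flaky_model, safe_rule, {"speed": 42}))
```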

In conclusion, security in AI is a multidimensional concept, which encompasses both protection from external threats and the guarantee of reliability and resilience. The development of secure AI systems is a complex challenge, which requires diverse skills and an interdisciplinary approach.

Respect for Human Rights:

Respect for human rights is an unavoidable pillar of the ethics of artificial intelligence, a principle that should inform every phase of the design, development and deployment of these technologies. This means that AI systems may not be designed or used in ways that threaten, violate or compress the fundamental rights and freedoms of any individual, regardless of their origin, gender, ethnicity, sexual orientation, disability or any other characteristic.

This imperative has its roots in the founding documents of humanity, such as the Universal Declaration of Human Rights. Many of the articles of this declaration take on particular importance in the era of AI:

  • Article 2: The Declaration prohibits any form of discrimination. AI systems, however, may introduce or perpetuate discrimination if they are not designed with care and if they are trained on data that reflect or amplify the prejudices existing in society.
  • Article 12: This article establishes the right to a private life. Mass surveillance enabled by AI, invasive profiling and non-consensual data collection may violate this right in a profound way.
  • Article 19: Freedom of opinion and expression is a fundamental right. AI systems used to moderate online content must be designed to protect, not suppress, that freedom, even if this requires a delicate balance with the need to combat misinformation and incitement to hatred.

The challenge is to build AI systems that are not only technically advanced, but that also incorporate ethics “by design”, considering the implications for human rights from the earliest stages of the design process. This requires a multidisciplinary approach, involving experts in technology, ethics, law and the social sciences, and an open and inclusive dialogue with all stakeholders.

