Ethics as a compass:
reflections for Safer Internet Day

 

To mark Safer Internet Day, launched by the European Commission in 2004, experts from the Strategic Alliance of Catholic Research Universities have analyzed the ethical implications of digital technologies.

In 2004, the European Commission launched Safer Internet Day to promote positive and responsible internet use, especially among young people. The initiative has since broadened its scope, involving more than 150 countries and raising awareness of the challenges and opportunities presented by emerging digital technologies. More recently, as the Covid-19 pandemic accelerated the spread of technology by moving a range of experiences from the face-to-face to the digital world, the debate around the need for a humane dimension of technology has grown.

Inspired by its mission of global cooperation for the Common Good, the Strategic Alliance of Catholic Research Universities (SACRU), a network of eight Catholic Universities from four continents, has collected multidisciplinary insights from its experts on the ethical questions posed by digital technologies. The contributions represent the personal views of individual academics and are not intended as the official positions of SACRU and its partner Universities.

Contributions by experts – SACRU Universities

Università Cattolica del Sacro Cuore (Italy)

Written by Giuseppe Riva, Director of the Humane Technology Lab, and Ciro De Florio, Associate Professor of Logic and Philosophy

Ethics as a compass

Every technology reshapes the world. For this reason, technology is rarely value-neutral: it affects reality, and moral relevance accompanies every causal action. And there is no doubt that, for the past twenty years or so, the agenda of technology ethicists has been dominated by two words: artificial intelligence (AI). AI is generally associated with two powerful narratives that at times polarize the debate. On the one hand, AI is understood as the triumph of human reason, the creation of the very thing that sets us apart from the rest of the natural world. On the other hand, there is a more human-centric narrative that questions the impact of this technology on the various dimensions of our experience, from work to interpersonal relationships. These two narratives are independent and return different images of the digital revolution, yet reconciling them seems essential to governing the phenomenon. The task is not easy, because AI acts with a degree of autonomy and independence never before observed, bringing forth a new category of actors: "artificial agents."

The other major pillar of ethical research in AI is the transparency of algorithms. The more intelligent, autonomous, and adaptive software becomes, the more difficult it is to understand "from the outside" the mechanisms by which information is analyzed. Media reflection on the importance of ethics in AI concerns, in large part, the normative relevance of software systems and information management: selecting a candidate based on a prediction about his or her productivity, or diagnosing a certain disease, are actions that involve information processing. However, AI systems can do much more than that; they can act concretely in our world, harming or saving lives, relieving us of physical fatigue, or relegating humans to the role of spectators.

The union of robotics with AI opens up largely unexplored fields whose ethical, economic, political, and social consequences could be disruptive. Interaction with robots introduces a set of problems that software alone does not: a robot's agency, and a human's agency toward a robot, are inescapably mediated by physical interaction. But not all robots, that is, not all technological devices with AI systems, resemble humans. There are (semi-)automatic machines on the horizon whose operation is already under the lens of AI ethicists. Think, for example, of self-driving cars and autonomous weapons; here again, what is relevant for ethical consideration are different, perhaps new, concepts of agency, control, and autonomy. What the digital revolution and the advent of AI need are not narratives but rational looks at the world based on a "human," integrated, multidisciplinary approach, one that combines knowledge of the technical aspects with knowledge of the processes and contexts in which AI and robotics will be used. Without this double perspective, the risk of losing the ethical challenge is very high.

Written by Giovanna Mascheroni, Associate Professor of Sociology of Media and Communication

Children and AI

AI is embedded in many of the platforms, services, and objects that we, including children, use on a daily basis: at home, at school, in the workplace, on the move. And yet the role of AI in children's lives, let alone its problematic consequences for children's futures, remains almost invisible in the public debate, hidden behind industry hype and the powerful discourse of techno- or data-solutionism. Contrary to this rhetoric, however, AI systems are not really "artificial": rather, they depend heavily on data extraction and processing, on algorithmic automation, and on the legitimation of data as accurate, objective, and impartial representations of reality. In other words, AI does not only require the extraction of natural resources, huge amounts of computational power and energy, and the exploitation of human labour, as the recent case of OpenAI using underpaid Kenyan workers reminds us: AI is premised upon our submission to datafication, to turning our lives into profitable resources for surveillance capitalism.

Children's everyday lives, their contexts, practices, and emotions, are not exempt. From the recommendation systems of YouTube, streaming, or gaming platforms, to the voice-based agents embedded in domestic smart speakers, to the algorithms running on educational platforms or health apps, children's lives are routinely and systematically turned into digital data and increasingly governed by algorithmic classification and automated processes. The risks involve more than data breaches and privacy violations. In 2020, when A-level grades in the UK were decided by a controversial algorithm in place of the usual exams, thousands of British students were downgraded and saw their university admissions put at risk. As this example shows, data-driven decision-making carries biases just as human judgment does: whether it originates in the historical data used to train machine learning systems, in the (often manual) classification of data, or in the design and programming of the algorithm itself, algorithmic bias results in systematic discrimination and in "allocative" and "representational" harms.

These harms are, respectively, unequal access to resources (education, health, credit, job opportunities, etc.) based on presumably "impartial" algorithmic classifications, and the influence of stereotyped classifications on a child's self-representation, their understanding of the social world, and, ultimately, their agency. In order to repurpose AI for a better future, policy interventions should move beyond privacy to encompass questions of equity, transparency, and sustainability, and to address the longer-term harms that AI, if unregulated, may pose to children. Beyond data protection regulation, children, their parents, and educators should be given a voice in the automated decisions made for and about them by AI systems.
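To make the mechanism concrete, the following minimal sketch (illustrative only: the data are synthetic, the feature names hypothetical, and it does not reproduce the actual Ofqual grading algorithm) shows how a model trained on biased historical decisions keeps discriminating even when the protected attribute is deliberately left out of its inputs:

    # Illustrative sketch with synthetic data: historical bias survives in a
    # trained model even when the protected attribute itself is withheld,
    # because a correlated "proxy" feature leaks it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, n)             # protected attribute: 0 = A, 1 = B
    skill = rng.normal(0.0, 1.0, n)           # genuinely relevant, same for both groups
    proxy = group + rng.normal(0.0, 0.5, n)   # e.g. postcode, correlated with group

    # "Historical" hiring decisions that penalised group B regardless of skill.
    hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

    # Train WITHOUT the protected attribute; the proxy leaks it anyway.
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, hired)

    for g, name in ((0, "A"), (1, "B")):
        rate = model.predict(X[group == g]).mean()
        print(f"predicted hire rate, group {name}: {rate:.2f}")

Because the "postcode" proxy encodes group membership, the two groups receive markedly different predicted hiring rates despite identical skill distributions: a small-scale instance of the allocative harms described above.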

Universitat Ramon Llull (Spain)

Written by Xavier Vilasís, Full Professor at the La Salle-URL Engineering Department

The key to artificial intelligence governance

Artificial Intelligence is polarising. Some consider it a major threat to humankind, while others see it solving people's main challenges; some just look at the money-making opportunities it provides. In any case, the discussion rarely rests on sound technical grounds, but rather on the powerful storytelling invoked by attributing human features to algorithms.

Yet the facts are that large amounts of personal and contextual data are available, and more will be in the future, while computing capacity has increased dramatically. This has enabled the development of complex algorithms that accurately profile citizens, detect their presence, or generate coherent text.

None of these activities is new, but our dependence on the digital world, and the scale at which such analyses can be performed, mean they could be used to exploit our psychological weaknesses. Once again in technological development, it is not the technology that is to blame but its use. And again, three major players are required to ensure the best possible use of these new advancements. First, algorithm designers and users, who must do their best to uphold ethical and moral principles in their work. Second, individuals, who must exercise critical thinking about what is being proposed to them. And third, regulators, who must enforce laws ensuring that ethical and moral guidelines are followed.

What makes Artificial Intelligence different from other technological advances is its direct global social reach, combined with its technical complexity. Education becomes key to giving all players the ability to think critically, the proper guidelines to set ethical and moral principles, and, of course, the knowledge to grasp the reach of the technology. These requirements imply the need for comprehensive education at primary, secondary, and tertiary levels, breaking down the established divisions between STEAM and other disciplines.

Pontificia Universidad Católica de Chile (Chile)

Written by Gabriela Arriagada, Assistant Professor of AI & Data Ethics

We cannot develop good human-centred AI without teaching applied ethics

The discipline of human-centred AI (HCAI) focuses on aiding humans instead of replacing them, extending human abilities and capabilities in technified societies by designing new interactions between humans and AI. Two major events have recently incorporated this sub-discipline into their agendas. NeurIPS, in 2021, analysed the use of machine learning algorithms in healthcare, education, and government through the lens of the technical requirements, design approaches, efficacy metrics, and societal impact of HCAI systems. IBM, in 2022, organized a conference on intelligent user interfaces featuring cutting-edge innovations in human-computer interaction based on generative AI research (generating new images, text, code, video, and audio), including areas such as co-creative systems and explainability.

Despite the exponential growth of research initiatives in the public and private sectors, a critical element in the development of HCAI has been consistently overlooked. One of HCAI's goals is to align the optimization objectives of an AI with human decision-making. To advance toward this goal, however, it is essential to consider and integrate the contextual needs of the different affected social groups into AI design and development methodologies. This contextualization implies analyzing background information beyond the collected data points, namely the societal conditions affecting the individuals the AI is programmed to aid. Incorporating systematic ethical scrutiny, for example, can help prevent solutionist traps from blinding us to alternative answers to a socio-technical problem, which may not require AI as the default. After all, technical feasibility does not amount to moral desirability.

Thus, developing AI technologies that serve people requires an interdisciplinary perspective rooted in the moral questioning of real-life AI applications, one that fosters trust whilst respecting people's dignity. Yet the teaching of these methodological tools for moral deliberation is still underdeveloped. Many undergraduate and graduate courses keep ethics as a foreign element of their curriculum. We are educating highly trained developers while overlooking their role as professionals and citizens capable of critically engaging with concerns about fairness, equality, discrimination, transparency, and responsibility in AI.

A central dimension of an applied ethical approach to AI development is meaningful human control over the AI. The interpretation, understanding, and implementation of the technology's limitations depend on human decision-making, which is necessarily context-dependent. Accordingly, to continue the integration of human-centred AI into society, we cannot move forward without educating future developers to think and deliberate about the ethics of AI, not as an added feature but as a foundational element.
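As one concrete illustration of what meaningful human control can look like in practice, here is a minimal sketch (with hypothetical names and thresholds; it is not a design proposed in this contribution) of a decision gate that defers to a person whenever the system's confidence is low or the stakes are high:

    # Illustrative sketch: an AI suggestion is applied automatically only when
    # confidence is high AND the stakes are low; every other case is escalated
    # to a person, who retains final authority.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Suggestion:
        label: str          # the AI's proposed decision
        confidence: float   # model confidence in [0, 1]
        high_stakes: bool   # e.g. affects health, credit, or liberty

    def decide(s: Suggestion, ask_human: Callable[[Suggestion], str],
               threshold: float = 0.95) -> str:
        """Defer to a person whenever automation would be hard to justify."""
        if s.high_stakes or s.confidence < threshold:
            return ask_human(s)   # consequential or uncertain: human decides
        return s.label            # routine, low-risk case: automate

    # A loan rejection is high-stakes, so it is escalated even at 97% confidence.
    print(decide(Suggestion("reject", 0.97, high_stakes=True),
                 ask_human=lambda s: f"human review of '{s.label}'"))

The design choice here is that the human is not asked to rubber-stamp the machine's output; the human is the default decision-maker for any case that is consequential or uncertain, which is one way of keeping the technology's limitations under context-dependent human judgment.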

Universidade Católica Portuguesa (Portugal)

Written by Paulo Cardoso, Professor at Católica Lisbon School of Business and Economics and expert in digital innovation

How safe is the Internet?

The Internet has been around for more than 50 years already. Still, it only conquered the world after the advent of the World Wide Web, starting in the mid-90s. In those early days, many of us believed that freedom of communication would naturally foster freedom of speech, allowing participants to access as much information and knowledge as they wanted and possibly could. We sincerely believed we were building a better world.

For example, back in 1991, many of us used the communication tools already present on the Internet to spread the word about the Santa Cruz massacre in Indonesia, pushing for freedom in a movement that eventually led to the independence of Timor-Leste.

In Europe, the Internet was forbidden in most countries until 1994. At the time, adopting a USA-based protocol developed by the military, and supported by technologies and companies on the other side of the Atlantic, was utterly inconceivable. For those of us using the Internet against the law in Europe, the dream of free communication seemed shattered for good. And then, in October 1994, a surprisingly magical event happened: the G7 committed to adopting a standard protocol as the bedrock of a worldwide, free communication platform for all. By chance, the Web had just been born, and all the crucial elements lined up to spread the adoption of what is today the wonderful, previously unimaginable common communication space for most.

Reduced asymmetries of information and knowledge seemed to push the world towards safety and prosperity. It looked as if everybody connected could access the truth, no matter what. Still, we have been witnessing troublesome examples such as the rise of denialists and the attack on the US Capitol on the 6th of January 2021, when thousands of people shared death wishes against freedom's representatives in the USA, all in the name of freedom itself. And it turns out that many of these dark beliefs originate on that same Internet. How was this possible? Why are individuals using their freedom to opt for the dark side? How often do we stumble on misinformation, or even disinformation, in messages coming from those we trust and with whom we share the same values, only to find they were simply misled? The Internet has become unsafe because it can contribute to strengthening the darkest forces on earth. How can we fight for a safer Internet?

The ongoing discussion on how to influence or compel Big Tech's platforms to supervise their own content is as tricky as it is dangerous. The other option, then, is strengthening recipients' enlightenment, giving them room to rightly judge the good from the bad. A major contribution in this sense comes from handling biases, where education can be pivotal in preparing individuals for the communication jungle that the Internet has evolved into. Biases are unconscious and can be rewarding, because individuals feel an addictive joy in believing they are right. So, how can we help individuals' cognitive efforts to recognize their biases and mitigate the consequences through self-awareness?