We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is therefore a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking (PSRB). To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a 'vaccination strategy dilemma' to human participants and analyzes their responses. In this dilemma, one needs to decide whether to distribute Covid-19 vaccines only to members of a high-risk group (follow the rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. Results of the empirical study suggest a relationship between stakeholder utilities and PSRB, which neither deontological nor utilitarian ethics can completely explain. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm.
We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI. We conclude by highlighting a number of possible ways forward for the field as a whole, and we advocate for different approaches towards more value-aligned AI research.
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to facial recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research.