Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
(Papers) Aligning values
Time:
Thursday, 26/June/2025:
3:35pm - 4:50pm

Session Chair: Donovan van der Haak
Location: Auditorium 6


Presentations

Aligning technology with human values

Martin Peterson

Texas A&M University, United States of America

This talk aims to broaden the discussion of value alignment beyond artificial intelligence to technology in general: all technologies—not just AI systems—should be aligned with values and norms specified by humans. Call this the General Value Alignment Thesis (GVAT).

I will make two points about GVAT. First, I address its relevance. Once we recognize that every technology can be aligned with values and norms, there is no reason to claim that technological artifacts are inherently value-laden. Claims about the morality of technologies can and should be expressed in terms of value alignment. Consider, for instance, the low bridges over the parkways in Long Island designed by Robert Moses in the 1920s. These bridges were intentionally constructed to prevent buses from passing under them, thereby limiting access to beaches and other desirable destinations for low-income groups and racial minorities, who relied on public transportation more than other social groups. Langdon Winner argues that this shows how the bridges themselves embody social and moral values: the concrete and steel of the bridges “embody […] systematic social inequality” (1980: 124). According to GVAT, however, the bridges are morally neutral means to an end that align poorly with some of our values: racial justice and equity.

An additional benefit of GVAT is that it enables us to explain how value alignment changes over time without committing ourselves to the controversial idea that moral values change. According to GVAT, there is no genuine value change. Racial justice and equity are as important today as they were a hundred years ago, but because cars are now more affordable, the bridges are less misaligned now than they used to be.

My second point about GVAT addresses the problem of measuring value alignment. For GVAT to serve as a foundation for the ethics of technology, we must explain how value alignment can, at least in principle, be measured. In addition to using insights from the extensive literature on Value Sensitive Design, we can study the measurement problem by applying insights from social choice theory, particularly Arrow's impossibility theorem (including some recent generalizations) and Harsanyi’s aggregation theorem. The upshot is that it is sometimes, but not always, possible to measure value alignment. This is itself an interesting observation. I end the talk by showing how value alignment can, under some circumstances, be measured on a ratio scale by applying the theory of conceptual spaces to moral values.
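To make the measurement idea concrete, here is a minimal sketch, not taken from the talk, of how a ratio-scale alignment measure could work if a value and an artifact are both represented as points in a conceptual space, broadly in the spirit of Peterson and Gärdenfors (2023); the dimensions, coordinates, and normalization are all invented for illustration:

    # Hypothetical illustration: value alignment as proximity in a
    # conceptual space. All dimensions, points, and numbers are invented.
    import math

    def distance(p, q):
        # Euclidean distance between two points in the space
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def alignment(artifact, value, worst):
        # Ratio-scale score in [0, 1]: 1 means the artifact coincides with
        # the value; 0 means it is as far from the value as the worst case.
        return 1 - distance(artifact, value) / distance(worst, value)

    # Two invented dimensions: (accessibility, equal treatment)
    racial_justice = (1.0, 1.0)   # the value, as a point in the space
    worst_case = (0.0, 0.0)       # reference point for normalization

    bridges_1920s = (0.1, 0.2)    # buses excluded; access sharply limited
    bridges_today = (0.6, 0.7)    # same artifact, but cars are now affordable

    print(round(alignment(bridges_1920s, racial_justice, worst_case), 2))  # 0.15
    print(round(alignment(bridges_today, racial_justice, worst_case), 2))  # 0.65

On this toy model the value stays fixed while the artifact's position shifts with background conditions, mirroring GVAT's claim that it is alignment, not the values themselves, that changes over time.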

REFERENCES

Peterson, M., & Gärdenfors, P. (2023). How to measure value alignment in AI. AI and Ethics, 1–14.

Van den Hoven, J. (2013). Value-sensitive design and responsible innovation. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society (pp. 75–83).

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.



Aligning AI with ideal values: Comparing metanormative methods to the Social Expert Model

Erich Mark Riesen

Texas A&M University, United States of America

Autonomous AI agents are increasingly required to operate in contexts where human welfare is at stake, raising the imperative for them to act in ways that are morally optimal—or at least morally permissible. The value alignment research program seeks to create “beneficial AI” by aligning AI behavior with human values (Russell, 2019). In this paper, I compare two methods for aligning AI with ideal values. Ideal values are actual values idealized, where idealization involves correcting the content of actual values to account for distorting influences and imperfect conditions. For moral realists, ideal values reflect what is objectively valuable. Metanormative methods attempt to uncover ideal values while also dealing with moral uncertainty and disagreement by maximizing expected moral value over the moral theories one has credence in (top-down) (e.g., Bogosian, 2017). The Social Expert Model uses social choice theory to aggregate the judgments of moral experts about what AI agents ought to do in concrete cases (bottom-up).

I argue that we have strong reasons for favoring the descriptive Social Expert Model over metanormative methods when aligning the behavior of AI agents operating in morally complex domains (e.g., autonomous cars, care robots, autonomous weapons). The raison d'être of metanormative theories is to handle moral uncertainty and disagreement about which moral theory is correct. However, I argue that such theories do not actually solve the problem but merely push it up to the metanormative level. Introducing more theoretical machinery about which we disagree seems misguided, particularly in the context of value alignment, where we must balance getting the best answers eventually against getting decent rather than poor answers now. A bottom-up descriptive method that draws on moral expertise as embodied in the collective judgments of moral experts handles not only first-order uncertainty and disagreement about which moral theory is correct but also higher-order uncertainty and disagreement about which metanormative theory is correct, and we should favor it on this basis.
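As a rough illustration, not drawn from the paper, the contrast between the two methods can be sketched in code; the theories, credences, experts, actions, and scores below are all invented:

    # Top-down metanormative method: maximize expected moral value over
    # the moral theories one has credence in (cf. Bogosian, 2017).
    credences = {"utilitarian": 0.5, "deontological": 0.3, "contractualist": 0.2}
    moral_value = {  # each theory's (invented) valuation of each action
        "utilitarian":    {"swerve": 0.9, "brake": 0.4},
        "deontological":  {"swerve": 0.2, "brake": 0.8},
        "contractualist": {"swerve": 0.5, "brake": 0.7},
    }

    def expected_moral_value(action):
        return sum(credences[t] * moral_value[t][action] for t in credences)

    top_down = max(["swerve", "brake"], key=expected_moral_value)

    # Bottom-up Social Expert Model: aggregate expert judgments about the
    # concrete case with a social choice rule (here, simple majority).
    expert_judgments = ["brake", "brake", "swerve", "brake", "swerve"]
    bottom_up = max(set(expert_judgments), key=expert_judgments.count)

    print(top_down)   # "swerve" (expected moral value 0.61 vs. 0.58)
    print(bottom_up)  # "brake" (judged best by 3 of 5 experts)

Even in this toy case the two methods diverge, and the top-down answer depends on credences and an aggregation rule about which we can disagree at the metanormative level, which is the worry pressed in the paper.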



Aligning values: setting better agendas for technology development

Yunxuan Miao

TU Delft, the Netherlands

Agenda-setting in technology development is inherently value-laden, functioning as a moral compass for innovation. It reflects not only the priorities of agenda-setters but also the values they deem important for recipients. This dual role embeds agendas within sociotechnical systems, promoting what should be valued in design and shaping how these values are framed and interrelated. Through cognitive, affective, and behavioural dimensions, agenda-setting contributes to the formation of collectively shared values within the technology community, offering a mechanism for allocating attention and social resources critical to technological progress.

Despite its importance, the ethics of agenda-setting remains under-explored in normative terms. Traditional critiques often focus on specific cases of agenda misuse without offering frameworks for ethically improving agenda-setting. Given that agenda-setting is inevitable in structuring attention and allocating resources, it is important to move beyond critique to establish methods for navigating the ethical challenges it entails. This is particularly vital for resolving conflicts between competing agendas, such as individual versus collective priorities or divergent frames, which frequently arise in practice.

This paper argues that while universal rules for ethical agenda-setting may be elusive, situated reflection on conflicts provides a pathway to continually improving agendas in value-sensitive and responsible development of technology. Conflicts, far from being mere obstacles, can serve as opportunities for uncovering hidden assumptions and fostering collaboration. Situated reflection enables agenda-setters to critically examine and redesign agendas in a context-sensitive manner, ensuring they remain relevant and ethically grounded.

Furthermore, good agenda-setting requires good justification. Ethical dimensions of agendas cannot be verified in an absolute sense but must instead be validated through persuasive, appropriate, and context-sensitive reasoning. Proper justification does not restrict recipients’ intellectual autonomy; on the contrary, it can enlighten them. For example, addressing pressing societal challenges in technology through timely agenda-setting can inspire researchers and practitioners to innovate more responsibly.

By situating agenda-setting within the ethics of technology development, this paper highlights its potential to align attention with shared values while proposing actionable methods for improving its practice through reflection and justification.


