Intelligence over wisdom: the price of conceptual priorities
Anuj Puri
Tilburg University, The Netherlands
The researchers at the 1956 Dartmouth Conference decided "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy et al. 1955). It is worth pondering how different our world would be if we were as driven by the pursuit of wisdom as we are by the pursuit of intelligence in the development of technology. Our recent fixation on intelligence over wisdom is not merely a matter of chagrin; as the impact of Socially Disruptive Technologies reveals, this conceptual transition has come at significant moral and social cost. While most accounts of intelligence are context-specific, goal-oriented, and focused on optimization (Legg & Hutter 2007), wisdom is often associated with virtuousness and a general capacity for sound judgement arising out of lived experience. This is not to say that intelligence and wisdom are mutually exclusive, but rather to highlight that our quest for new data-driven technologies is driven by the former rather than the latter. In our search for human-like intelligent machines that can solve the problems of the day, we seem to have lost track of the embodied wisdom that derived from a historical recognition of our co-existence. This moral loss is reflected in some of the priorities for which AI systems are being developed and deployed, ranging from the propagation of deepfakes, to the development of autonomous weapons, to the adoption of a "statistical perspective on justice" (Littman et al. 2021). Some scholars have sought to overcome the limitations of Artificial Intelligence by promulgating a move towards Artificial Wisdom (Jeste et al. 2020; Tsai 2020). However, its feasibility remains uncertain. Wisdom is acquired through embodied experience and requires acknowledgment of our collective co-existence.
This is a skin-in-the-game argument, both in the phenomenological sense and as an acknowledgment of one's responsibility for the consequences of one's actions. If this argument holds, then our efforts are better focused on using wisdom to delineate those areas where artificial intelligence can make a positive contribution, such as cancer detection (Eisemann et al. 2025), rather than treating AI as a panacea for all our troubles. In our zealous pursuit of advancements in AI, we seem to have forgotten that while intelligence may help us achieve certain goals, wisdom lies in deciding whether those goals are worth pursuing in the first place.
References:
Eisemann, N., Bunk, S., Mukama, T., et al. (2025). Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nature Medicine. https://doi.org/10.1038/s41591-024-03408-6
Jeste, D. V., Graham, S. A., Nguyen, T. T., Depp, C. A., Lee, E. E., & Kim, H. C. (2020). Beyond artificial intelligence: Exploring artificial wisdom. International Psychogeriatrics, 32(8), 993–1001.
Legg, S., & Hutter, M. (2007). A Collection of Definitions of Intelligence (arXiv:0706.3639). arXiv. https://doi.org/10.48550/arXiv.0706.3639
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
Littman, M. L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M. C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., & Walsh, T. (2021). Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford University. http://ai100.stanford.edu/2021-report (accessed September 16, 2021).
Tsai, C.-H. (2020). Artificial wisdom: A philosophical framework. AI & Society, 35, 937–944. https://doi.org/10.1007/s00146-020-00949-5
Addressing challenges to virtue ethics in the application of artificial moral agents: From a Confucian perspective
Yin On Billy Poon
Hong Kong Baptist University, Hong Kong S.A.R. (China)
The rapid advancement of artificial intelligence (AI) offers both significant opportunities and potential risks to human society. Controlling the risks presented by AI and limiting its potential harm to human interests is therefore an urgent and pressing issue. One of the most discussed recent developments is "AI agents": systems "capable of autonomously performing tasks on behalf of a user or another system by designing their workflows and utilizing available tools" (Gutowska, 2024). The rise of such capable AI brings significant ethical implications. One possible solution is to nurture AI like a child, ensuring that it aligns with human values. The application of rule-based ethics to guide AI's actions has encountered numerous challenges, suggesting that a virtue-based approach could be a viable alternative.
However, there are obstacles to applying virtue ethics to AI systems, and in this paper I will articulate two of them. First, virtue ethics is an agent-centered ethics, so applying it to AI requires that AI be a moral agent. The question, then, is whether AI can be considered a moral agent. Second, some scholars, such as Roger Crisp (2015), have cast doubt on the practicality of virtue ethics as a guide for action.
In the current discourse in the philosophy of AI, some scholars argue that AI could be considered a moral agent (Floridi & Sanders, 2004; Anderson & Anderson, 2007; Misselhorn, 2018), while another camp presents opposing views (Brożek & Janik, 2019; Hallamaa & Kalliokoski, 2020). In my view, while AI can be recognized as a moral agent, it should be considered a functional moral agent rather than a full moral agent, owing to its lack of moral patiency.
I will evaluate this by using the distinction between a full moral agent and a functional moral agent (Misselhorn, 2018). I will then borrow concepts from animal ethics to distinguish moral agency from moral patiency. According to Tom Regan (2004), moral patients are sentient beings, and moral agents must also be moral patients; this means that without moral patiency, an agent cannot be considered a moral agent (pp. 151–155). Meanwhile, Confucianism offers a wealth of resources for justifying why an agent lacking moral patiency is not a full moral agent, and I will explain this from a Confucian perspective.
To address the second critique, I will also integrate viewpoints from Confucianism. Recognized by many as a form of virtue ethics, Confucianism contributes to the discourse on motivation and disposition, providing valuable insights that may fortify the foundation of a virtue-ethics approach to artificial intelligence.
In conclusion, I will argue that virtue ethics is a viable and preferable ethical theory for the design of AI.
References:
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.
Brożek, B., & Janik, B. (2019). Can artificial intelligences be moral agents? New Ideas in Psychology, 54, 101–106. https://doi.org/10.1016/j.newideapsych.2018.12.002
Crisp, R. (2015). A Third Method of Ethics? Philosophy and Phenomenological Research, 90(2), 257-273.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Gutowska, A. (2024). What are AI agents? IBM. Retrieved 3 January from https://www.ibm.com/think/topics/ai-agents
Hallamaa, J., & Kalliokoski, T. (2020). How AI systems challenge the conditions of moral agency? In M. Rauterberg (Ed.), Culture and Computing. HCII 2020. Lecture Notes in Computer Science, vol. 12215. Springer, Cham. https://doi.org/10.1007/978-3-030-50267-6_5
Misselhorn, C. (2018). Artificial morality: Concepts, issues and challenges. Society, 55, 161–169.
Regan, T. (2004). The Case For Animal Rights. University of California Press.