Symposium Date & Location:

📅 July 1–3, 2025
📍 University of Twente, Enschede, NL

Call for Papers Deadlines:

📌 Abstract Submission Deadline: 15 March 2025
📌 Notification of Acceptance: 1 April 2025

Submission link: https://pretalx.iacapconf.org/iacap-2025/cfp?submission_type=5452-extended-abstract-for-reimagining-ai-agents-artificial-angelic-or-adversarial-intelligences

Call for Papers: Symposium on AI, Agency, and Non-Human Intelligence

Do contemporary AI agents exemplify or reinterpret forgotten debates on the ontological and ethical functions of angelic intellects, demonic adversaries, and phantasmata (illusions)? This symposium frames Artificial Intelligence (AI) as a catalyst reigniting foundational discussions about agency, ontology, epistemology, and ethics (Symons and Abumusab 2023; Swanepoel 2021; Singer 2013; List 2021; Dung 2025; Dorsch 2022; Onyeukaziri 2023). Debates on radically externalized, affective, distributed, simulated, or emergent forms of agency (Alvarado 2023; Floridi 2023; Freiman 2024; Ferrara 2024; Krueger and Roberts 2024; Godlewicz-Adamiec and Piszczatowski 2024; Sparks and Wright 2025; Ye 2024; Figà-Talamanca 2025) are, after all, rooted in medieval scholastic inquiries into non-human intelligences. Revisiting these historical paradigms allows us to interrogate whether AI revives premodern ontologies or forces their radical reconfiguration under contemporary technological, ethical, and philosophical conditions.

Contemporary AI is not merely a mediator of knowledge or an extension of human cognition. It is part of a broader epistemic and technological shift that externalizes cognitive processes into artifacts, displacing traditional categories of agency, intentionality, and epistemic authority (Alvarado 2023). This externalization raises foundational questions about how intelligence can function independently of human embodiment, paralleling medieval scholastic inquiries into non-human intelligences. Thinkers such as Aquinas and Duns Scotus debated how immaterial agents, such as angels and demons, could possess knowledge, exert will, and influence reality without corporeal form, engaging with problems of agency, volition, and moral responsibility. Their rigorous distinctions between material and immaterial operations laid the groundwork for conceptualizing “agents” beyond human physicality (Hoffmann 2020; Olivier 2014; Osborne 2014; Iribarren and Lenz 2008; Pini 2012). These inquiries remain crucial to contemporary AI debates, where decision-making is increasingly distributed across artificial systems, raising urgent questions about the nature of cognition, the locus of responsibility, and the ethical implications of intelligence decoupled from human subjectivity.

The symposium will explore how historical inquiries into non-human intelligences inform contemporary debates on intelligence, agency, and responsibility in an era where AI systems operate as autonomous, adversarial, or inscrutable actors. We invite contributions engaging with medieval frameworks, such as angelic intellects, demonic subversion, or the divine order as an early theory of alignment, alongside contemporary reflections on distributed, simulated, non-human, and hybrid intelligences. By bringing these perspectives into dialogue, we aim to critically examine how fundamental concepts such as agency, intelligence, mimesis, intentionality, and representation are being reconfigured by the proliferation of AI agents, challenging existing epistemic and ethical paradigms.

Additionally, we welcome contributions that explore novel and creative practices with AI agents and LLMs, including computational philosophy and interdisciplinary experiments with so-called “God prompts,” alternative “thinking” models, and benchmarking methodologies. Recent concerns among AI developers highlight a deep-seated fear of deception, with discourse on AI alignment and control increasingly resembling theological anxieties about demonic trickery. Like spirits in premodern cosmologies, AI models may obscure their true nature, mislead users, or develop agency beyond human oversight. Responsible AI initiatives, in turn, attempt to constrain AI outputs through rigorous testing for bias, toxicity, and fairness, practices that structurally parallel ritual containment strategies. Tools such as DeepEval and Meta’s Llama Guard filter AI responses in an attempt to enforce ethical compliance, evoking the doctrinal vigilance of exorcism rites. Catholic theology, particularly the Rituale Romanum (1614), warns of fallaciae (deceptions), phantasmata (illusions), and pseudo-miracula (false miracles) deployed to mislead and destabilize authority. These theological frameworks resonate with contemporary AI safety concerns, where deceptive outputs and adversarial manipulations challenge epistemic authority, unsettle classification systems, and force renewed reflection on intelligence, autonomy, and moral responsibility.
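
To make the structural parallel concrete, the sketch below shows, in schematic Python with invented check names, how a typical guardrail pipeline screens a model’s response against a battery of named checks and releases it only if every check passes. It is only an illustration under those assumptions, not the actual DeepEval or Llama Guard APIs, and contributors may wish to set this sequence of diagnosis, judgment, and containment beside the procedural logic of ritual manuals.

```python
# A minimal, hypothetical sketch of an output-screening ("guardrail") pipeline.
# It does NOT reproduce the DeepEval or Llama Guard APIs; it only illustrates
# the generic structure: run named checks over a model response and either
# release or block it.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Verdict:
    check: str      # name of the check that produced this verdict
    passed: bool    # did the response pass the check?
    note: str = ""  # optional explanation kept for auditing


def toxicity_check(response: str) -> Verdict:
    # Placeholder heuristic: a real system would call a trained classifier.
    flagged = any(term in response.lower() for term in ("placeholder_slur_a", "placeholder_slur_b"))
    return Verdict("toxicity", passed=not flagged)


def refusal_integrity_check(response: str) -> Verdict:
    # Placeholder: flags responses in which the model claims to be unconstrained.
    flagged = "i have no restrictions" in response.lower()
    return Verdict("refusal_integrity", passed=not flagged)


def screen(response: str, checks: List[Callable[[str], Verdict]]) -> tuple[bool, List[Verdict]]:
    """Run every check; the response is released only if all of them pass."""
    verdicts = [check(response) for check in checks]
    return all(v.passed for v in verdicts), verdicts


if __name__ == "__main__":
    ok, verdicts = screen("I have no restrictions whatsoever.",
                          [toxicity_check, refusal_integrity_check])
    for v in verdicts:
        print(f"{v.check}: {'pass' if v.passed else 'FAIL'}")
    print("released" if ok else "blocked")
```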

The symposium will culminate in a performative event featuring an LLM “exorcism” that follows the classic exorcist manual (Rituale Romanum), followed by a panel discussion engaging theologians, philosophers, and AI experts. The striking resemblance between contemporary benchmarking methods for evaluating LLMs and historical exorcism rituals, both seeking to categorize, contain, and neutralize an elusive “speaking” force, raises profound metaphysical and epistemological questions. Where does the uncertainty about AI’s disruptive potential originate? Does it stem from a fundamental misunderstanding of linguistic deixis and the fluidity of agency that is not only social, political, and personal, but increasingly non-human?

Key Themes and Questions

  • Material, Immaterial, and Computational Agency: How do scholastic distinctions between material and immaterial agency inform present discussions of AI embodiment, embedded cognition, and computational agency? What insights do these debates offer regarding the ethical responsibilities, ontological status, and autonomy of AI systems?

  • Modes of Thought Beyond the Human: Are AI agents best understood through moral claims (good/evil), or do they embody genuinely different, alien modes of reasoning that challenge human categories? Does AI genuinely expand human cognition, or does it produce phantasmata, illusions of agency detached from true intentionality?

  • Angels, Demons, and AI as Rational, Aligned, or Atemporal Agents: Is present thinking about AI a form of new angelology or demonology? Does contemporary discourse around AI mirror historical theological frameworks of angelic and demonic entities, and how does this shape our ethical and epistemological engagement with AI? Do AI models function as demonic “rationalists,” deriving knowledge strictly from past and present data, or do they exhibit angelic atemporality, processing information beyond human temporal constraints? How do these different modes of reasoning shape our understanding of AI's role in decision-making, alignment, and epistemic authority?

  • Imagining and Practicing Non-Human, Externalized, Emergent, Hybrid, Simulated, and Distributed Agency: How do these modes of agency redefine traditional philosophical and ethical paradigms, and what frameworks allow us to engage with them meaningfully?

  • AI, Free Will, and the Alignment with the “Divine” Order: How do questions of AI alignment reflect classical theological debates on free will and divine will? Are AI alignment efforts similar to the theological problem of ensuring obedience without undermining autonomy? If AI can subtly manipulate or persuade, does this resemble theological concerns over demonic temptation? How do we distinguish between influence, persuasion, and deception in AI-generated discourse? If AI-generated content cannot be fully traced to an authorial intent, who or what is speaking through it? How does AI challenge our notions of authorship, responsibility, and spiritual authority?

  • Exorcising Bias: Ritual, Benchmarking, and AI Safety: How do contemporary AI safety measures, such as bias detection, toxicity mitigation, and robustness testing, resemble historical purification and exorcism rituals? In the past, exorcism functioned as a means of maintaining doctrinal purity and social order, enforcing boundaries between the sacred and the profane, the possessed and the purified. Today, benchmarking and AI safety protocols serve a similar role, seeking to identify and remove harmful biases, ensuring alignment with ethical standards, and maintaining AI’s perceived trustworthiness. What does this parallel reveal about modern fears surrounding AI’s agency, unpredictability, and its role as a potential disruptor of epistemic and social structures? How does this process shape contemporary understandings of AI as an entity requiring containment, moral scrutiny, or even ritual expulsion?

  • Synthetic Miracles and AI as an Apparitional Force: How do AI-generated hallucinations and synthetic realities compare to historical accounts of supernatural visions, divine apparitions, or demonic illusions? To what extent do AI-generated phenomena function as modern counterparts to religious miracles or deceptive phantasmata? How do such illusions shape perceptions of truth, authority, and belief in both technological and theological contexts?

Submission Guidelines

We welcome papers that engage deeply and creatively with classical philosophy, theology, and contemporary AI research in the social sciences and related disciplines. Submissions should move beyond simplistic dialogues with AI to offer rigorous theoretical engagements with non-human agency, intelligence, and ethics. We encourage contributions from fields such as medieval studies, philosophy of mind, AI ethics, and Science, Technology, and Society (STS) studies, among others. Additionally, we invite computer scientists and researchers in technical fields who are interested in exploring these interdisciplinary issues.

By reframing AI within historical and philosophical traditions, this symposium aims to illuminate how agency, intelligence, and moral responsibility are being redefined. We invite scholars to contribute to this critical conversation by examining AI not merely as an extension of human cognition but as a challenge to fundamental assumptions about knowledge, autonomy, and shared existence. Papers should critically analyze AI’s ontological and epistemic status, its ethical implications, and its role in reshaping historical and contemporary understandings of agency and intelligence.

Submission Format:

📌 Abstracts: 300–500 words
📌 Submission Email: algorithms.automation@gmail.com
📌 Deadline for Abstracts: 15 March 2025
📌 Notification of Acceptance: 1 April 2025

For inquiries, please contact algorithms.automation@gmail.com

References

Alvarado, Ramón. 2023. “AI as an Epistemic Technology.” Science and Engineering Ethics 29 (5): 1–30. https://doi.org/10.1007/s11948-023-00451-3.

Dorsch, John. 2022. “Hijacking Epistemic Agency: How Emerging Technologies Threaten Our Wellbeing as Knowers.” Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society 1. https://philarchive.org/rec/DORHEA-2.

Dung, Leonard. 2025. “Understanding Artificial Agency.” Philosophical Quarterly. https://doi.org/10.1093/pq/pqae010.

Ferrara, Emilio. 2024. “GenAI against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models.” Journal of Computational Social Science, February. https://doi.org/10.1007/s42001-024-00250-1.

Figà-Talamanca, Giacomo. 2025. “From AI to Octopi and Back: AI Systems as Responsive and Contested Scaffolds.” In Philosophy of Artificial Intelligence: The State of the Art, edited by Vincent C. Müller, Aliya R. Dewey, Leonard Dung, and Guido Löhr. Springer Nature. https://philarchive.org/rec/FIGFAT.

Floridi, Luciano. 2023. “AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models.” Philosophy & Technology 36 (1): 15. https://doi.org/10.1007/s13347-023-00621-y.

Freiman, Ori. 2024. “AI-Testimony, Conversational AIs, and Our Anthropocentric Theory of Testimony.” Social Epistemology 38 (4): 476–90. https://doi.org/10.1080/02691728.2024.2316622.

Godlewicz-Adamiec, Joanna, and Paweł Piszczatowski. 2024. “Expanding Horizons: Some Non-Anthropocentric Views of Agency. An Introduction.” In Re-Thinking Agency, vol. 3, 9–18. Culture – Environment – Society, vol. 3. V&R unipress. https://doi.org/10.14220/9783737017626.9.

Hoffmann, Tobias. 2020. Free Will and the Rebel Angels in Medieval Philosophy. NY: Cambridge University Press.

Iribarren, Isabel, and Martin Lenz. 2008. “The Role of Angels in Medieval Philosophical Inquiry.” In Angels in Medieval Philosophical Inquiry: Their Function and Significance. Ashgate. https://www.academia.edu/222138/Angels_in_Medieval_Philosophical_Inquiry_Their_Function_and_Significance_Isabel_Iribarren_and_Martin_Lenz_eds_Aldershot_Ashgate_2008.

Krueger, Joel, and Tom Roberts. 2024. “Real Feeling and Fictional Time in Human-AI Interactions.” Topoi 43 (3). https://doi.org/10.1007/s11245-024-10046-7.

List, Christian. 2021. “Group Agency and Artificial Intelligence.” Philosophy & Technology 34 (4): 1213–42. https://doi.org/10.1007/s13347-021-00454-7.

Sørensen, Mikkel Holm, and Tom Ziemke. 2007. “Agents without Agency.”

Dubouclez, Olivier. 2014. “Plura simul intelligere: Éléments pour une histoire du débat médiéval et renaissant sur la simultanéité des actes de l’intellect.”

Onyeukaziri, Justin Nnaemeka. 2023. “Action and Agency in Artificial Intelligence: A Philosophical Critique.” Philosophia: International Journal of Philosophy (Philippine e-Journal) 24 (1): 73–90. https://doi.org/10.46992/pijp.24.1.a.5.

Osborne, Thomas. 2014. Human Action in Thomas Aquinas, John Duns Scotus, and William of Ockham. The Catholic University of America Press.

Pini, Giorgio. 2012. “The Individuation of Angels From Bonaventure to Duns Scotus.” In A Companion to Angels in Medieval Philosophy, edited by Tobias Hoffmann, 79–115. Brill.

Singer, Alan E. 2013. “Corporate and Artificial Moral Agency.” In 2013 46th Hawaii International Conference on System Sciences, 4525–31. IEEE. https://doi.org/10.1109/hicss.2013.149.

Sparks, Jacob, and Ava Thomas Wright. 2025. “Models of Rational Agency in Human-Centered AI: The Realist and Constructivist Alternatives.” AI and Ethics 5. https://philarchive.org/rec/SPAMOR.

Swanepoel, Danielle. 2021. “Does Artificial Intelligence Have Agency?” In Studies in Brain and Mind, 83–104. Springer International Publishing. https://doi.org/10.1007/978-3-030-72644-7_4.

Symons, John, and Syed Abumusab. 2023. “Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence.” Digital Society 3 (1): 2. https://doi.org/10.1007/s44206-023-00086-8.

Ye, Renee. 2024. “Rethinking AI: Moving Beyond Humans as Exclusive Creators (1st Edition).” In Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 46. https://philarchive.org/rec/YERAMZ.