Soft submission deadline: January 10th at midnight Anywhere on Earth (AoE)
Generative AI models such as LLMs are transforming how we create and access information, while also raising concerns about manipulation, deception, and the integrity of public discourse at unprecedented scale.
The AI Manipulation and Information Integrity (AIMII) workshop will bring together researchers from computer science, cognitive science, philosophy, political science, and policy to clarify core concepts, evaluate the evidence on AI's persuasive and manipulative capabilities, and explore implications for society and democracy.
The workshop will feature three panel discussions with leading researchers as well as a poster/lightning talk session showcasing new work from the broader community.
Call for submissions
We are soliciting abstracts for posters or lightning talks at the first AI Manipulation and Information Integrity (AIMII) workshop at IASEAI'26.
For submissions received by January 10th, notifications will be sent by January 17th to allow time for registration and travel booking.
We may consider submissions sent after the soft deadline, but we cannot guarantee that they will be reviewed before the registration deadline for the main IASEAI conference (January 27th).
IMPORTANT: While the IASEAI organizers have assured us that authors of all accepted submissions will be able to register for the workshop day itself, registration for the main IASEAI conference is separate.
If you are interested in attending the remainder of the conference, please submit a statement of interest here and mention that you are submitting to the workshop.
Submission Guidelines
We welcome abstracts describing new work, work-in-progress, position statements, or summaries of recently published work. There are no formal proceedings.
Topics
We welcome submissions on topics including (but not limited to):
Conceptual & Philosophical Foundations
- Definitions and taxonomies of persuasion, manipulation, and deception
- Moral and epistemic dimensions of AI influence
- Autonomy, consent, and the ethics of personalized persuasion
- Boundary and edge cases (e.g., when does influence become manipulation?)
Measurement & Evaluation
- Benchmarks and evaluations of persuasive or manipulative capabilities
- Ecological validity of current measurement approaches
- Sycophancy, reward hacking, and training dynamics that produce manipulative behaviors
- Detecting deception, sandbagging, or strategic behavior in AI systems
- Human studies of AI persuasion (attitude change, belief updating, behavioral effects)
Psychology & Cognitive Science
- Human susceptibility to AI-generated persuasion
- Trust, overreliance, and calibration in human-AI interaction
- Cognitive and affective mechanisms of AI influence
- Individual differences in vulnerability to AI manipulation
Societal & Political Impacts
- AI and misinformation/disinformation
- Effects on journalism, media ecosystems, and information environments
- Implications for democratic deliberation and political discourse
- Manipulation in AI companions, chatbots, and productivity tools
- Targeted advertising, recommender systems, and algorithmic influence
Mitigations & Governance
- Technical approaches to reducing manipulative capabilities or behaviors
- Transparency, disclosure, and labeling interventions
- Regulatory frameworks (EU AI Act, DSA, etc.) and their effectiveness
- Red-teaming, auditing, and third-party evaluation
- Platform governance and content moderation
Broader Perspectives
- Historical and comparative perspectives on information manipulation
- Human-AI co-evolution in communication
- Manipulation in multi-agent and agentic AI systems
- Dual-use concerns and beneficial applications of persuasive AI
Organizers
- Beba Cibralic
- Tijl De Bie
- Fosca Giannotti
- Luke Hewitt
- Cameron Jones
- Anna Monreale
- Dino Pedreschi