About the Workshop
Generative AI (GenAI) is transforming the landscape of artificial intelligence, not just in scale but in kind. Unlike traditional AI, GenAI introduces unique challenges around interpretability, as its tens of billions of parameters and emergent behaviors demand new ways to understand, debug, and visualize model internals and their responses. At the same time, GenAI enables agentic systems—goal-driven, autonomous agents that can reason, act, and adapt across complex tasks, raising a critical question: Will AI agents eventually replace human data scientists, and if not, how might they best collaborate?
The VIS x GenAI Workshop brings together researchers, practitioners, and innovators exploring the intersection of generative AI, autonomous agents, and visualization. We focus on new challenges and opportunities, including interpreting and visualizing various aspects of large-scale foundation models, designing visual tools both for and with agents, and rethinking evaluation, education, and human–AI collaboration in the age of generative intelligence. Our workshop aims to address critical questions: How can visualization techniques evolve to collaborate with AI systems? What novel interfaces will emerge in agent-augmented analytics? How might generative AI reshape visualization authoring and consumption?
Call for Participants
We invite participation through two submission tracks: Short Paper and Mini-Challenge. Both are opportunities to showcase novel ideas and engage with the growing community at the intersection of visualization, generative AI, and agentic systems.
Track A: Short Paper
We invite short paper submissions (2–4 pages excluding references) that explore topics across theory, systems, user studies, and applications for GenAI interpretability and safety, or agentic VIS. Submissions must follow the VGTC conference two-column format, consistent with the IEEE VIS formatting guidelines. Areas of interest include, but are not limited to, the following:
- VIS for interpreting GenAI systems.
- GenAI interpretability and safety related work that highlights challenges or opportunities where VIS can fit.
- Position papers for VIS and GenAI researchers.
- Agent-augmented VIS tools.
- VIS tools for agents, i.e., tools that agents themselves can perceive, reason over, or act upon.
- Methods and benchmarks for assessing agent performance on VIS-related tasks.
- Case studies and demos of agent systems applied to real-world VIS problems.
- Position papers on agents in VIS education, immersive visualizations for embodied agents, or multi-agent coordination in visual reasoning.
Track B: Mini-Challenge
Inspired by challenges like ImageNet in computer vision and HELM in language models, our mini-challenge invites participants to build or adapt AI agents that can automatically analyze datasets and generate visual data reports.
The goal is to benchmark and accelerate progress in agent-based data analysis and communication. We will provide a starter kit and an evaluation setup to support participants. During the testing period, you will be able to iteratively improve your agents by submitting them to the evaluation server. Afterward, you will submit the report generated by your best-performing agent to the PCS system by the submission deadline.
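As a rough illustration only (not the official starter kit; the actual agent interface, dataset format, and evaluation API will be defined in the materials released in June), the sketch below shows the kind of input–output behavior a report-generating agent might have: it loads a tabular dataset, computes summary statistics, and writes a small HTML report with one chart per numeric column. A competitive submission would presumably add an LLM-driven planning loop to choose analyses and visualizations; this sketch only fixes ideas about the task.

```python
# Illustrative sketch only: a bare-bones "report agent" that profiles a
# tabular dataset and renders one histogram per numeric column. The real
# challenge interface, dataset format, and evaluation setup are defined by
# the forthcoming starter kit, not by this example.
import os

import pandas as pd
import matplotlib.pyplot as plt


def generate_report(csv_path: str, out_dir: str = "report") -> str:
    """Load a CSV, summarize it, and write a minimal HTML data report."""
    df = pd.read_csv(csv_path)
    os.makedirs(out_dir, exist_ok=True)

    sections = [
        f"<h1>Data report for {csv_path}</h1>",
        df.describe(include="all").to_html(),
    ]

    # One histogram per numeric column, saved next to the report page.
    for col in df.select_dtypes("number").columns:
        fig, ax = plt.subplots()
        df[col].plot.hist(ax=ax, bins=20, title=col)
        fig.savefig(os.path.join(out_dir, f"{col}.png"))
        plt.close(fig)
        sections.append(f'<h2>{col}</h2><img src="{col}.png">')

    report_path = os.path.join(out_dir, "index.html")
    with open(report_path, "w") as f:
        f.write("\n".join(sections))
    return report_path
```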
More details, including datasets, templates, and submission instructions, will be released in June!
Stay tuned!

Submissions—including both papers and challenge reports—must be anonymous and submitted through the PCS system. Each submission will be evaluated by at least two reviewers based on quality and topical relevance. Accepted papers will be invited to present as posters, demos, or lightning talks during the workshop, and will be published on the workshop website. Top-rated challenge participants will receive awards and be invited to present their solutions at the workshop.
Important Dates
All deadlines are in the Anywhere on Earth (AoE) time zone (UTC-12).
- May 30th, 2025: Call for Participation
- August 20th, 2025: Submission Deadline
- September 1st, 2025: Author Notification
- October 1st, 2025: Camera Ready Deadline
- November 2nd or 3rd, 2025: Workshop Day (exact date TBD)
Schedule
TBA soon!
Organizers
- Zhu-Tian Chen, University of Minnesota
- Shivam Raval, Harvard University
- Enrico Bertini, Northeastern University
- Niklas Elmqvist, Aarhus University
- Nam Wook Kim, Boston College
- Pranav Rajan, KTH Royal Institute of Technology
- Renata G. Raidou, TU Wien
- Emily Reif, Google Research & University of Washington
- Olivia Seow, Harvard University
- Qianwen Wang, University of Minnesota
- Yun Wang, Microsoft Research
- Catherine Yeh, Harvard University
Challenge Development Team
- Pan Hao, University of Minnesota
- Divyanshu Tiwari, University of Minnesota
- Chia-Lun (James) Yang, University of Minnesota
- Zhu-Tian Chen, University of Minnesota
- Qianwen Wang, University of Minnesota