Poster Sessions & Flash Talks
The Poster Sessions & Flash Talks of the AI Village take place in the Aare Foyer, offering an interactive environment for researchers and practitioners to present their work. While other workshops and presentations continue in dedicated rooms, attendees can explore various AI-related projects and innovations through poster displays and short flash talks.
For details, see the list of Poster Sessions & Flash Talks.
Presenters will be available at their posters to discuss their work, answer questions, and engage in detailed conversations about their research. Flash talks provide quick 5-minute overviews of key findings, allowing attendees to efficiently identify topics of interest for deeper discussion during the poster viewing sessions.
List of Poster Sessions & Flash Talks
These sessions run throughout the day in parallel with the main conference track, allowing participants to move freely between formats. Please check the schedule below for specific presentation times:
All poster stands are located in the Aare Foyer and are open from 13:05 to 16:10.

**Stand 1** — Prof. Dr. Ariane Trammell (Head of Information Security Research, ZHAW) and Maurice Amon (Research Assistant, ZHAW)
Small and Medium-sized Enterprises (SMEs) are often easy targets for attackers due to their limited budgets and lack of specialized security personnel. To bridge this capability gap, we are developing an AI Security Consultant as a cost-effective alternative to expensive human security consultants.

**Stand 2**
Fuzzing is a proven technique for uncovering vulnerabilities, but libraries remain hard to fuzz due to the need for specialized drivers. Manual drivers are costly and stall at coverage plateaus, while automated solutions often waste effort on invalid code paths. libErator builds API chains from static analysis, probes them, and, crucially, learns from rejection by avoiding invalid sequences in future attempts. This feedback-driven approach rapidly converges on valid, diverse drivers, balancing generation and testing. Across 15 C libraries, libErator uncovered 24 confirmed bugs.

**Stand 3** — Tomas Joaquin Anderegg (Master Student, EPFL)

**Stand 4** — Dr. Anastasiia Kucherenko (Scientific Collaborator, HES-SO Valais-Wallis (IEM))
As Large Language Models (LLMs) become embedded in products and services, their reliance on vast, often opaque training data raises pressing risks around safety, intellectual property, and trust. A central question for privacy, compliance, and safe deployment is: when is an LLM reproducing memorized sequences, and when is it generalizing?

**Stand 5**
Cybersecurity workflows often require processing long documents such as incident reports, threat intelligence, and compliance texts, where retaining full context is essential. Most NLP models are either resource-intensive (LLMs) or limited by 512-token caps (typical Hugging Face transformers), impairing document-level understanding. We present a lightweight, prompt-free approach that extends SetFit with a Longformer architecture to process up to 4096 tokens. Originally developed for automated essay scoring, the method transfers effectively to security applications while requiring far less GPU power, making it suitable for low-data or privacy-sensitive environments. Released on Hugging Face with 6,000+ downloads in the first month, our model demonstrates how small, specialized models offer scalable, cost-effective solutions for cybersecurity tasks including incident classification, compliance audits, and log analysis.

**Stand 6** — Alexander Sternfeld (Associate Researcher, HES-SO Valais-Wallis (IEM))
Large language models (LLMs) are increasingly used for code generation but still often introduce subtle vulnerabilities. This poses serious risks in security-critical contexts, where system failures can be catastrophic. We present TypePilot, an agentic AI framework that leverages the Scala type system to guide and verify LLM-generated code. By embedding type-driven constraints into the generation process, TypePilot mitigates issues such as input validation flaws and injection vulnerabilities. Our results show that structured, type-focused pipelines enhance the trustworthiness of automated code generation in high-assurance domains.

**Stand 7** — Dr. Loic Marechal (Scientific Collaborator, HES-SO Valais-Wallis (IEM)) and Dr. Sébastien Rouault (Co-founder and CTO, Calicarpa)
About the speakers
Prof. Dr. Ariane Trammell
Maurice Amon
Dr. Nicolas Badoux
Paul Bagourd
Tomas Joaquin Anderegg
Dr. Anastasiia Kucherenko
Dr. Elena Nazarenko
Alexander Sternfeld
Dr. Loic Marechal
Dr. Sébastien Rouault