Tracks (AISoLA)

AI Assisted Programming (AIAP)

Organizers:

  • Wolfgang Ahrendt (Chalmers University of Technology, SE)
  • Bernhard Aichernig (Johannes Kepler University Linz, AT)
  • Klaus Havelund (Jet Propulsion Laboratory, US)

Neural program synthesis using large language models (LLMs), which are trained on open-source code and other artifacts, is rapidly becoming a popular addition to the software developer’s toolbox. LLM-driven coding assistants and coding agents can generate code in many different programming languages from natural-language requirements. This opens up fascinating new perspectives, such as increased productivity and greater accessibility of programming. However, although these LLMs have improved considerably in a short time, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output given their training data, and there are countless examples of coherent but erroneous results. Even alert users fall victim to automation bias: the well-studied tendency of humans to over-rely on computer-generated suggestions. Software development is no exception to this automation bias.

 

This track is devoted to discussions and exchange of ideas on questions like:

  • What are the capabilities of this technology when it comes to software development?
  • What are the limitations?
  • What are the challenges and research areas that need to be addressed?
  • How can we harness the rising power of code co-piloting while achieving a high level of correctness, safety, and security?
  • What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?
  • What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?
  • Can quality assurance methods and technologies themselves profit from the new power of LLMs?

 

Topics of relevance to this track include the interplay of LLMs with the following areas:

  • Program synthesis
  • Formal specification and verification
  • Model driven development
  • Static analysis
  • Testing
  • Monitoring
  • Documentation
  • Requirements engineering
  • Code explanation
  • Library explanation

Formal Methods for Diagnosis and Fault Management

Organizers:

  • Martin Leucker (University of Lübeck, DE)
  • Martin Sachenbacher (University of Applied Sciences Regensburg, DE)
  • Ingo Pill (Technical University of Graz, AT) 

Diagnosis addresses the problem of detecting whether a system is functioning correctly and, when a deviation from expected behavior has occurred, determining as accurately as possible which part of the system is failing and which type of fault is present. Model-based Diagnosis (MBD) is a rigorous symbolic AI technique that relies on models of component behavior and system interconnections to hypothesize a minimal set of faults that best explains the discrepancies between model predictions and observed system behavior.
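As a toy illustration of this idea (not tied to any system discussed in this track), the following sketch performs consistency-based MBD on two inverters in series: a candidate set of "abnormal" components counts as a diagnosis if some behavior of those components makes the model consistent with the observation, and only subset-minimal candidates are kept. All names are illustrative.

```python
from itertools import combinations

COMPONENTS = ("inv1", "inv2")  # two inverters in series: a -> inv1 -> b -> inv2 -> c

def consistent(abnormal, a_obs, c_obs):
    """True if some behavior of the abnormal components explains the observation.

    A healthy inverter must invert its input; an abnormal one is unconstrained.
    """
    for b in (False, True):  # enumerate the unobserved internal wire
        b_ok = (b == (not a_obs)) or ("inv1" in abnormal)
        c_ok = (c_obs == (not b)) or ("inv2" in abnormal)
        if b_ok and c_ok:
            return True
    return False

def minimal_diagnoses(a_obs, c_obs):
    """Subset-minimal sets of components whose failure explains (a_obs, c_obs)."""
    diagnoses = []
    for size in range(len(COMPONENTS) + 1):
        for cand in combinations(COMPONENTS, size):
            if any(set(d) <= set(cand) for d in diagnoses):
                continue  # a smaller diagnosis already explains the observation
            if consistent(set(cand), a_obs, c_obs):
                diagnoses.append(cand)
    return diagnoses
```

Observing a=True, c=True matches the healthy model, so the empty set is the unique minimal diagnosis; observing a=True, c=False is inconsistent with both components healthy, yielding the two single-fault diagnoses {inv1} and {inv2}. Real MBD engines replace this brute-force enumeration with conflict-driven search, but the consistency test is the same in spirit.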

Symbolic AI approaches to diagnosis, such as MBD, as well as sub-symbolic and hybrid techniques, together with the related field of fault detection, identification, and reconfiguration (FDIR) in control engineering, extend and complement verification and validation by not only detecting incorrect system behavior, but also identifying its potential root causes. Identifying these causes is essential for devising effective mitigation strategies and for correcting system designs. Diagnostic capabilities also play a crucial role in enhancing system resilience, understood as the intrinsic ability of a system to maintain required functionality when affected by anticipated or unanticipated contingencies that may not have been explicitly considered during design.

The track will address challenges and open research questions at the intersection of formal methods and AI-based approaches for detecting, diagnosing (or explaining), and correcting faulty behavior. Particular emphasis will be placed on the integration of runtime verification and online diagnosis, as well as on leveraging formal theories and model-based techniques in data-driven, sub-symbolic diagnosis, which is currently often treated as a general classification problem and primarily applied to failure detection.

Anticipated contributions include formal methods, theoretical frameworks, and computational approaches for rigorous diagnosis, fault mitigation, and correction, encompassing logic-based and qualitative methods, temporal, discrete-event, continuous and hybrid systems, as well as sub-symbolic and probabilistic techniques. Real-world applications in domains such as reliable software systems, robust cyber-physical systems, and digital twins are especially welcome. The track is planned as a post-proceedings format, enabling reflection on the discussions and the presentation of preliminary results. Given the interdisciplinary scope of the topic, we would welcome consideration for inclusion in the ISoLA/AISoLA joint sessions, if possible.

Foundations of Robust Generative AI in Human Cyber-Physical Systems

Organizers:

  • Dejan Nickovic (Austrian Institute of Technology, AT)
  • Saddek Bensalem (Institute of Information Science and Technologies ISTI-CNR, IT)
  • Xiaowei Huang (University of Liverpool, UK)

With their ability to learn complex patterns from large datasets and generate new content from prompts, Generative AI (GenAI) models bring the promise of major gains in productivity. At the same time, GenAI poses serious high-stakes risks resulting from hallucinations, bias, misuse, and lack of transparency. Robust use of GenAI is therefore becoming an urgent necessity in human cyber-physical systems (HCPS), such as robotics, automotive, and industrial automation, which demand strict guarantees for real-time performance, safety, and human interaction. This AISoLA track aims to define a principled, scientific methodology for developing robust GenAI for HCPS. To achieve this, we solicit contributions that explore methods for making GenAI for HCPS more robust, including fine-tuning and retrieval-augmented generation, hybrid neuro-symbolic and agentic AI approaches, novel verification and testing frameworks, runtime monitoring and guardrails, and mechanisms for ensuring value alignment and ethical, human-centric interaction.

Responsible and Trusted AI: An Interdisciplinary Perspective

Organizers:

  • Dr. Kevin Baum (German Research Center for Artificial Intelligence / Hamburg University of Technology, DE)
  • Thorsten Helfer (CISPA Helmholtz Center for Information Security, DE)
  • Dr. Sophie Kerstan (University of Freiburg, DE)
  • Dr. Andreas Sesing-Wagenpfeil (Saarland University, DE)
  • Dr. Timo Speith (University of Bayreuth, DE)
  • Sarah Sterz (Saarland University, DE)

The responsible development and deployment of AI systems requires more than technical excellence — it demands sustained interdisciplinary engagement with ethical, legal, social, and governance questions. Building on similar tracks at AISoLA 2023, 2024, and 2025, this track brings together researchers from philosophy, law, computer science, psychology, economics, political science, and other fields to collaboratively examine the societal implications of AI.

2026 context: This year’s track comes at a time when central provisions of the EU AI Act are entering into force and organizations across Europe are beginning the practical work of compliance. We particularly welcome contributions that address the operationalization of regulatory requirements — including experiences from AI regulatory sandboxes, the implementation of human oversight mechanisms, and the translation of normative desiderata (such as transparency, fairness, privacy, security, autonomy, or accountability) into organizational and technical practice.

We also invite work engaging with emerging challenges posed by increasingly autonomous and agentic AI systems, the transformative impact of AI on education and academic integrity, and the interdisciplinary dimensions of AI security, ethics, and governance.

Topics of interest include, but are not limited to:

Conceptual foundations

  • Critical analysis of concepts like trustworthiness of or trust in AI systems
  • Normative assumptions embedded in AI system design
  • The ethics and epistemology of human oversight

Regulation and governance

  • EU AI Act implementation: challenges, sandboxes, and early lessons
  • Operationalizing transparency, explainability, and accountability requirements
  • Legal liability and responsibility attribution for AI-caused harms

Emerging challenges

  • Agentic AI systems: safety, autonomy, control, and appropriate trust/reliance
  • AI security as an interdisciplinary problem
  • AI in education: integrity, assessment, and pedagogical transformation

Broader societal implications

  • Bias, discrimination, and algorithmic fairness
  • Labor market dynamics and economic impacts
  • Intellectual property, authorship, and generative AI
  • AI “Companions”, Deadbots, and AI in (pseudo-)therapeutic settings

This track aims to foster dialogue across disciplinary boundaries and to develop a richer understanding of what responsible AI means in practice — not only in design, but in deployment, regulation, and ongoing governance.

Small Data Challenges in AI for Materials Science

Organizers:

  • Tiziana Margaria (University of Limerick, IE)
  • TBA

There is great excitement in materials science about accelerating materials development and chemical synthesis via AI and ML. Traditionally, materials science has evaluated proposed material designs using time-consuming physical experiments and compute-intensive calculations, resulting in a slow, expensive design loop. This slow loop, together with the lack of a programming standard for matter, hinders efforts to combat climate change, fight disease, and improve the human condition.

Research at the interface of AI/ML and materials science has begun to accelerate this process. Supervised learning can screen out materials that are likely to lack critical properties; Bayesian optimization and active learning can efficiently search a materials design space; computer vision can improve the efficiency and reproducibility of materials characterization. Yet this effort faces a major challenge: available data are far scarcer than in traditional AI applications. We must learn smarter, making better use of heterogeneous and high-dimensional experimental measurements and computational predictions, and assimilating multimodal structured data. In contrast to many other application areas of AI, abundant domain knowledge is available in the form of physical laws; incorporating this knowledge into the learning process is crucial to its success.
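To make the small-data setting concrete, here is a minimal, self-contained sketch of an active-learning loop of the kind mentioned above. It is illustrative only: the function names (`active_search`, `measure`) and the crude distance-based surrogate are our assumptions; a real pipeline would use, e.g., a Gaussian-process surrogate over proper material descriptors. The next experiment is chosen by an optimistic score that trades off the best value measured near a candidate against how unexplored that candidate is.

```python
def active_search(candidates, measure, budget, beta=1.0):
    """Greedy active learning over a 1-D design space.

    candidates: list of design points; measure: expensive property evaluation;
    budget: total number of measurements allowed; beta: exploration weight.
    """
    measured = {candidates[0]: measure(candidates[0])}  # seed with one experiment
    for _ in range(budget - 1):
        def score(x):
            # Optimistic surrogate: value at the nearest measured point,
            # plus a bonus for being far from all measurements (exploration).
            nearest = min(measured, key=lambda m: abs(m - x))
            return measured[nearest] + beta * abs(nearest - x)
        x = max((c for c in candidates if c not in measured), key=score)
        measured[x] = measure(x)  # run the "experiment"
    return max(measured, key=measured.get)  # best design found

# Toy property with an optimum at x = 0.7, searched on a 21-point grid.
grid = [i / 20 for i in range(21)]
best = active_search(grid, lambda x: -(x - 0.7) ** 2, budget=6)
```

On this toy problem the loop lands near the optimum after only 6 of 21 possible measurements, which is the essential promise of active learning when each evaluation is a costly synthesis or simulation.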

To advance AI-supported materials science, this workshop will bring together researchers from materials science and from AI/ML, focusing on these small-data challenges. Jointly, we will identify common problems and develop plans for tackling them.