Tracks (ISoLA)
Digital Twin Engineering
Organizers:
- Volker Stolz (vsto@hvl.no)
- Eduard Kamburjan (eduard.kamburjan@itu.dk)
- Martin Leucker (leucker@isp.uni-luebeck.de)
A Digital Twin is a virtual replica of a real-world system (usually called the physical twin) that can inform decision-making in operation and design. Digital Twins combine multiple models developed during the design process with models derived by Machine Learning (ML) from
data acquired from the physical twin, and with models expressed using Knowledge Representation (KR) techniques for data and process integration. The potential benefits include the ability to perform what-if analysis, reconfigure in response to internal faults or a changing environment, perform preventive maintenance, and facilitate visualisation. Every Digital Twin has its own lifecycle, in which it is designed, developed, used, maintained, and operated alongside the physical twin. At the same time, it interacts tightly with all life phases of the physical twin and plays a crucial role in ensuring the dependable operation of the overall cyber-physical system: it addresses the (potential) consequences of evolving system components and supports the exploration and identification of changes that do not unduly compromise overall dependability. This combination of prediction and response, alongside support for informed decision-making and redesign by humans, requires both data derived from operational settings and models developed in design stages.
While Digital Twins have started to see adoption in, e.g., manufacturing, transport, energy, and agriculture, ensuring that Digital Twins themselves are dependable still presents challenges and open questions for formal methods, model-based systems engineering, and
software engineering. These include:
● When is it worthwhile to construct a digital twin, and what value can be gained?
● What are the appropriate architectures to facilitate the sound implementation of digital twins?
● What are the common abstractions for models stemming from ML, KR and formal design, and how can they be used to analyze highly heterogeneous digital twins?
This track will discuss how one can enable the well-founded engineering of dependable digital twins for dependable CPSs. In addition, the plan is to co-locate the track with the general assembly of the INTO-CPS Association and so attract additional participants.
Formal methods for DIStributed COmputing in future RAILway systems (DisCoRail)
Organizers:
- Alessandro Fantechi (Università di Firenze, IT)
- Stefania Gnesi (Institute of Information Science and Technologies ISTI-CNR, IT)
- Anne Haxthausen (Technical University of Denmark, DK)
The railway sector is undergoing a significant transformation to meet increasing demand while addressing infrastructure limitations and costs. Advanced train control systems, such as ERTMS/ETCS, are being deployed widely and have reached a high level of maturity, consolidating their high standards of safety and dependability. Research is nevertheless progressing, striving for higher-performance systems and towards greater automation, autonomy, and interoperability, not only at the level of train control but also at the operation management and planning levels. Digitalisation underpins this overall transformation and, owing to the dispersed nature of railway networks and operations, relies on distributed computing and communication networks. The increasing adoption of Artificial Intelligence techniques to support functions and innovations at all levels challenges the assurance of the high dependability levels required by future intensive railway operation.
This track, following the spirit of the previous DisCoRail tracks at ISoLA, aims to stimulate discussion on these themes among researchers and experts from industry and academia, especially regarding how the adoption of formal methods for system specification and verification can address these challenges.
Rigorous Engineering of Collective Adaptive Systems (in the Age of Pervasive AI)
Organizers:
- Martin Wirsing (LMU München, Germany)
- Rocco De Nicola (IMT Lucca, Italy)
- Stefan Jähnichen (TU Berlin, Germany)
- Catia Trubiani (Gran Sasso Science Institute, Italy)
Modern software systems are increasingly distributed, decentralised, and autonomous, operating in dynamic and open-ended environments. These systems are commonly composed of heterogeneous components that include conventional software, formally specified controllers, adaptive mechanisms, and – more recently – artificial intelligence (AI) components. Together, these elements support decision-making, coordination, adaptation, and interaction with humans, often giving rise to complex collective behaviours.
In collective adaptive systems, the interplay between software components, AI-based techniques, and human actors can lead to emergent behaviours that are difficult to predict and analyse. While recent advances in generative AI offer powerful new capabilities, they also introduce additional sources of uncertainty, such as underspecified behaviour, limited explainability, and deviations from assumed operating conditions. When such components are deployed at scale or in safety-critical and socio-technical contexts, uncertainties may propagate through the system, amplifying the risk of undesirable global behaviour.
Addressing these challenges requires a principled engineering approach grounded in modelling, abstraction, and formal reasoning. Rigorous engineering methodologies, formal methods, and systematic verification and validation techniques play a central role in ensuring that collective adaptive systems remain safe, reliable, and trustworthy. These approaches are essential not only for analysing traditional software components, but also for constraining, monitoring, and integrating AI-based elements within well-defined architectural and behavioural boundaries. In particular, there is a growing need for methods and tools that enable system-level reasoning about safety, security, correctness, reliability, and resilience, while complementing data-driven AI approaches with explicit models, guarantees, and assurances.
This track is a continuation of the successful ISoLA tracks on “Rigorous Engineering,” held in 2014, 2016, 2018, 2020/2021, 2022, and 2024. While earlier editions focused on autonomic ensembles and collective adaptive systems, this 2026 edition explicitly acknowledges the pervasive role of generative and agentic AI and the resulting need to reconcile learning-based techniques with rigorous, formally grounded engineering approaches.
The track provides a forum for presenting research on principled methods for the design, analysis, verification, and operation of collective adaptive systems that may integrate AI components while maintaining strong guarantees about system behaviour.
Topics of interest include (but are not limited to):
- Formal methods, theoretical foundations, and rigorously grounded applications of machine learning for the design, specification, and analysis of CAS
- Engineering techniques and methodologies for programming, orchestrating, and operating CAS, including the use of generative and agentic AI approaches
- Methods and frameworks for adaptation, learning, and dynamic self-expression with formal guarantees and runtime assurances
- Verification, validation, monitoring, and certification techniques for CAS that may incorporate or rely on AI components
- Techniques and mechanisms to detect, mitigate, and control emergent misbehaviour, unintended interactions, and AI hallucinations
- Quantitative methods and metrics for the evaluation of CAS
- Approaches to ensuring and assessing the trustworthiness, security, performance, and resilience of CAS
- Methods to identify, model, specify, and reason about CAS, possibly including AI-augmented components
ScaIVeri – Scalable, Intelligent Verification and Validation of Concurrent and Distributed Systems
Organizers:
- Cristina Seceleanu – Mälardalen University, Sweden (cristina.seceleanu@mdu.se)
- Marieke Huisman – University of Twente, Netherlands (m.huisman@utwente.nl)
- Stephan Merz – INRIA Nancy & LORIA, France (Stephan.Merz@loria.fr)
To improve performance, resilience, and adaptability, many computerized systems involved in critical infrastructures, such as smart grids, supply chains, traffic management systems, smart factories, and cyber-physical environments, are increasingly concurrent, distributed, and adaptive. While these characteristics are essential to meet modern performance and scalability demands, they also introduce significant complexity and subtle error modes that are notoriously difficult to detect. Formal verification and validation (V&V) techniques are indispensable for ensuring correctness, safety, and reliability of such systems. However, the state-space explosion, heterogeneity of components, and dynamic behavior inherent to concurrent and distributed systems continue to limit the applicability of existing formal methods to industrial-scale systems.
The ScaVeri track at ISoLA 2024 demonstrated substantial progress in scalable V&V of concurrent and distributed systems. Nevertheless, scalability remains a major challenge. At the same time, recent advances in artificial intelligence and data-driven techniques offer new opportunities to complement and enhance formal verification, for example by guiding abstraction, reducing verification effort, or improving automation.
Building on the success of previous ScaVeri editions, this ISoLA 2026 track, called ScaIVeri, aims to explore how scalable verification and validation can be achieved in concurrent and distributed systems by combining rigorous formal methods with intelligent, data-driven, and hybrid approaches, while maintaining strong correctness guarantees.
Scope and Topics
The track invites original contributions addressing the scalable verification and validation of concurrent and distributed systems, including (but not limited to):
Core Topics
- Modular, compositional, and assume-guarantee verification
- Abstraction techniques and refinement-based methods
- Parametric and family-based verification
- Verification of distributed algorithms and protocols
- Formal V&V of cyber-physical and hybrid systems
- Verification of fault tolerance, resilience, and self-adaptive behavior
- Runtime verification and monitoring for concurrent and distributed systems
- Bounded, approximate, and statistical verification
- Incremental, parallel, and distributed verification methods
- Exploiting modern hardware (multi-core, GPU, cloud, quantum-inspired methods)
- Tool support and benchmarks for large-scale systems
AI-Enhanced and Data-Driven V&V
- Integration of AI/ML techniques with formal verification
- Learning-assisted abstraction, invariant inference, and refinement
- Heuristic-guided state-space exploration and model checking
- Combining symbolic methods with statistical and probabilistic learning
- Explainable AI for verification and validation results
- Trustworthy AI techniques for safety-critical distributed systems
Industrial and Applied Perspectives
- Industrial case studies and experience reports
- Verification challenges from real-world distributed systems
- Toolchains and integrated environments for scalable V&V
- Lessons learned from deploying formal methods at scale
Specify This – Bridging gaps between software and system contracts
Organizers:
- Alessandro Cimatti, Fondazione Bruno Kessler, IT
- Gidon Ernst, LMU Munich, DE
- Paula Herber, University of Münster, DE
- Marieke Huisman, University of Twente, NL
- Mattias Ulbrich, Karlsruhe Institute of Technology, DE
The fields of both software verification and system verification have seen considerable successes in recent years. At the same time, both the variety of properties that can be specified and the collection of approaches that solve software and system verification challenges have specialised and diversified considerably. Examples include contract-based deductive specification and verification of functional properties, the specification and verification of temporal properties using model checking, and static analyses for secure information flow properties. While this diversification enables the formal analysis of properties that were not accessible a few years ago, it may leave the impression of isolated solutions to isolated problems, and the two communities of software and system verification often have different perspectives on similar problems.
Since Specify This at ISoLA 2022 and ISoLA 2024, several efforts have been made towards bridging the gaps between specification and modeling paradigms across software and systems. These include coordinated efforts such as a Dagstuhl seminar on contract languages, community-wide case studies in the VerifyThis long-term challenge series, and a Lorentz seminar dedicated to discussing further directions for increasing the expressiveness, abstraction, interoperability, and applications of contracts. Very recently, we also organised a Dagstuhl seminar on “Software Contracts meet System Contracts”. These efforts have been very influential in inspiring active research programs: new theoretical approaches are being developed, and verification tools are gradually embracing more diverse feature sets as part of their specification languages, which in turn unlocks new application areas. The proposed Specify This track at the upcoming ISoLA 2026 will be a well-timed opportunity to make such fresh ideas on the integration of software and system specification paradigms concrete:
● ISoLA 2026 is intended as the venue in which contributions to the ongoing collaborative VerifyThis long-term challenge are documented, and it will offer a similar opportunity for polished solutions to the VerifyThis on-site competition (associated with ETAPS 2026).
● Most importantly, we will be able to present and discuss the progress that has been achieved with respect to the perspectives opened up during the most recent Dagstuhl seminar. There will be new results on approaches that bridge across system and software verification techniques, including approaches that combine state-based and trace-based views on system specifications, probabilistic and quantitative aspects, means to validate contracts, language-independent specifications, and underapproximations of contracts.
X-by-Construction Meets Intersymbolic AI (XbC-IAI)
Organizers:
- Maurice H. ter Beek (Institute of Information Science and Technologies, IT)
- Juliane Päßler (TU Wien, AT)
- Clemens Dubslaff (Eindhoven University of Technology, NL & Centre for Tactile Internet with Human-in-the-loop, DE)
- Ina Schaefer (Karlsruhe Institute of Technology, DE)
A key benefit of symbolic (rule-based) artificial intelligence (AI) is its formal rigor, which comes at the cost of formal modeling effort and computationally expensive reasoning. In contrast, subsymbolic (data-driven) AI approaches usually outperform rigorous ones in performance but may produce unsound results. Intersymbolic AI combines symbolic and subsymbolic AI approaches, exploiting the benefits of both worlds.
The ISoLA track “X-by-Construction Meets Intersymbolic AI (XbC-IAI)” addresses formal aspects of intersymbolic AI such as synthesis, verification, reasoning, or refinement operations with guarantees. How can we build systems that are correct by construction, taking into account non-functional properties that are naturally arising in intersymbolic AI? What is the role of formal methods in intersymbolic AI and how can formal methods ensure rigorous explanations of intersymbolic AI approaches? We intend to address these as well as other questions, attracting researchers and practitioners from formal methods and symbolic or subsymbolic AI.
Low-Code/No-Code Approaches to Application Development: Challenges and Opportunities
Organizers:
- Tiziana Margaria (University of Limerick, IE)
- TBA
The Low-Code (LC) / No-Code (NC) paradigm of software development is a promising avenue to counter the currently dire shortage of skilled software developers. Because LC/NC approaches use models, it is in principle possible to formalize the “language” of the various flavours of LC and NC. Because these languages are abstract, very often graphical, and tend to be domain-specific, they resonate well with domain experts who cannot code and do not intend to become skilled programmers in order to do their (non-programmer) jobs. The track will explore the impact of different flavours of models, languages, and tools on the progress of LC/NC towards becoming a paradigm that makes “everyone” an application developer. Specifically, this track intends to examine the relationship between LC/NC, model-driven development, and formal methods. This relationship is a key point in the connection between the art and engineering of software design, as it has been taught and practiced for decades, and the new approach, which, according to some enthusiasts, has the potential to become the “next wave” of revolution in the software and IT landscape next to or after AI.
Digital Humanities
Organizers:
- Tiziana Margaria (University of Limerick, IE)
- TBA
We are in the middle of an AI and IT revolution and at a point of digital cultural heritage data saturation, yet humanities scholarship is struggling to keep pace. In this track we discuss the challenges faced by both computing and the historical sciences, with the aim of outlining a roadmap that addresses, on one side, some of the most pressing issues of data access, preservation, conservation, harmonisation across national datasets, and governance, and, on the other, the opportunities and threats brought by AI and machine learning to the advancement of rigorous data analytics. We concentrate primarily on written and printed documents rather than on pictures and images. We stress the importance of collaboration across discipline boundaries and their cultures, ensuring that mutual respect and equal partnerships are fostered from the outset so that better practices can ensue.
In the track we welcome contributions that address these and other related topics:
- Advances brought by modern software development, AI, ML and data analytics to the transcription of documents and sources
- Tools and platforms that address the digital divide between physical, analog or digital sources and the level of curation of datasets needed for modern analytics
- Design for accessibility and interoperability of data sets, including corpora and thesauri
- Tools and techniques for machine understanding of form-based documents, recognition of digits and codes, handwriting, and other semantically structured data
- Knowledge representation for better analysis of semi-structured data from relevant domains (diaries, registers, reports, etc.)
- Specific needs arising from the study of minority languages and populations, disadvantaged groups and any other rare or less documented phenomena and groups
- Challenges related to the conservation, publication, curation, and governance of data as open-access artefacts
- Challenges related to initial and continuing education and to curricular or extracurricular professional training in the digital humanities professions
- Spatial digital humanities