Keynote Speakers
Edward Lee, UC Berkeley, USA
Is Information Digital? A Defense of Reality
Data-driven techniques, such as large language models, have proven astonishingly powerful in recent years, but progress has been much slower with cyber-physical systems such as robots. Many people assume this is because there is not enough training data. In this talk, I explore a possible explanation that is much more fundamental. Specifically, I ask whether there is a fundamental difference between acquisition of knowledge through observation and acquisition of knowledge through embodied interaction. Can you learn to ride a bicycle by watching others ride a bicycle? In previous work, I have used concepts from computer science (zero-knowledge proofs, bisimulation, etc.) to show that there are things you can learn from embodied interaction that cannot be learned by objective observation. In this talk, I use Shannon information theory to argue that objective observation falls far short of revealing everything about physical reality. There is information in the real world that cannot be represented digitally, and objective observation can never acquire more than a small subset of this information. In short, learning to ride a bicycle may require getting on a bicycle, even for a robot.
Onur Güntürkün, Ruhr-Universität Bochum, Germany
The Parallel Evolution of Complex Cognition
The famous evolutionary biologist Stephen Jay Gould once aptly remarked: “If you could rewind the film of life under the same conditions, evolution would take a completely different course, and the human mind would not re-emerge, even if you could play this film of life a thousand times.” In contrast, his contemporary colleague Simon Conway Morris wrote: “Convergence points to the existence of a deeper structure of biology … Evolutionary paths are diverse, but the goals are limited.” Decades after this controversy, we are increasingly discovering that evolution does indeed seem to have serious limits on its degrees of freedom, at least when it comes to brain and cognition. Take the following example: Great apes such as chimpanzees have complex cognitive abilities and a brain weighing approximately 400 g, which includes a large neocortex. In comparison, birds such as corvids and parrots have a brain weighing between 3 and 25 g and no neocortex. This should make it impossible for birds to develop high cognitive abilities. However, studies over the past two decades have shown that there is not a single cognitive ability in chimpanzees that has not also been demonstrated in corvids and parrots. In addition, the algorithmic architecture of the cognitive properties of mammals and birds is highly overlapping. How is this possible? This question keeps me awake at night because it challenges fundamental assumptions about the evolution of the neural basis of complex cognition. But there is an even deeper question looming in the depths: What if Simon Conway Morris’ hypothesis must be extended even to non-biological systems, which might share deeper structures that we can only discover by comparing core principles of information processing? So, I have come to realize that I must travel back to the depths of vertebrate evolution to find answers. This talk is about that journey.
David Dill, Stanford University, USA
AI-Assisted System Design and Verification
Researchers have been working since (at least) the 1950s on the problem of how to prove software and hardware correct (formal verification). In spite of brilliant work and many breakthroughs, the number of design errors in systems at all levels only increases. Formal verification is used in specific areas of system design, but it is not pervasive. The dream of systems that are guaranteed to be correct before they are deployed is mostly unrealized.
That’s because formal verification is very difficult. One huge problem is the need to define “correctness” for a system. In the best case, specification is additional work beyond conventional system design. It requires knowledge of logic, the ability to think very precisely, and somehow, the ability to interpret ambiguous natural language documents (if they exist).
The other extreme challenge is the computational difficulty of the problems that arise. For most of these problems, straightforward approaches clearly would not complete within a human lifetime. After pulling out all the stops to make software tools as efficient as we can, we have worked around the computational difficulty by using cleverness and expert knowledge.
For both of these reasons, formal verification requires clever, skilled practitioners to do a lot of detailed work for limited impact. But modern AI may be the key to breaking through these barriers. Modern coding agents can act as a team of experts “in a box,” applying the full range of verification methods much faster than any human could. They seem to be very smart and fast, but it is a challenge to get them to work systematically and display good judgment.
The problems of formal verification are similar in some ways to coding, a task at which AI agents excel, but formal verification has both advantages and disadvantages over coding as an AI-driven task.
Jeannette Wing, Columbia University, USA
Trustworthy AI
Recent years have seen an astounding growth in deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, and healthcare, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted. How can we deliver on the promise of the benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI?
Under the umbrella of trustworthy computing, employing formal methods for ensuring trust properties such as reliability and security has led to scalable success. Just as for trustworthy computing, formal methods could be an effective approach for building trust in AI-based systems. However, we would need to extend the set of properties to include fairness, robustness, interpretability, and others, and to develop new verification techniques to handle new kinds of artifacts, e.g., data distributions and machine-learned models. This talk poses a new research agenda, from a formal methods perspective, for us to increase trust in AI systems.
Moshe Y. Vardi, Rice University, USA
Are AI Minds Genuine Minds?
The question “Are AI minds genuine minds?” invites us to examine the nature of mind itself and whether artificial intelligence meets its defining criteria. A genuine mind is typically associated with consciousness, self-awareness, intentionality, and the capacity to experience mental states such as emotions. Whether AI qualifies as possessing a true mind ultimately depends on how we define the essential qualities of consciousness and intelligence. While this question was already being raised in the 19th century, recent progress in AI requires us to re-examine it deeply.
Edward Lee, UC Berkeley, USA
Edward A. Lee is Professor of the Graduate School and Distinguished Professor Emeritus in Electrical Engineering and Computer Sciences (EECS) at the University of California at Berkeley, where he has been on the faculty since 1986. He is also co-founder of Xronos Inc. and BDTI, Inc. He is the author of seven books, some with several editions and translations, including two for a general audience, and hundreds of papers and technical reports. Lee has delivered hundreds of keynotes and other invited talks at venues worldwide and has graduated 40 PhD students.
Professor Lee’s research studies cyber-physical systems, which integrate physical dynamics with software and networks. His focus is on the use of deterministic models as a central part of the engineering toolkit for such systems. He is the director of iCyPhy, the Berkeley Industrial Cyber-Physical Systems Research Center. From 2005-2008, he served as Chair of the EE Division and then Chair of the EECS Department at UC Berkeley. He has led the development of several influential open-source software packages, notably Ptolemy and Lingua Franca.
Lee received his BS degree in 1979 from Yale University, with a double major in Computer Science and Engineering and Applied Science, an SM degree in EECS from MIT in 1981, and a Ph.D. in EECS from UC Berkeley in 1986. From 1979 to 1982 he was a member of technical staff at Bell Labs in Holmdel, New Jersey, in the Advanced Data Communications Laboratory.
Lee is a Fellow of the IEEE, was an NSF Presidential Young Investigator, and won the 1997 Frederick Emmons Terman Award for Engineering Education. He received the 2016 Outstanding Technical Achievement and Leadership Award from the IEEE Technical Committee on Real-Time Systems (TCRTS), the 2018 Berkeley Citation, the 2019 IEEE Technical Committee on Cyber-Physical Systems (TCCPS) Technical Achievement Award, the 2022 European Design and Automation Association (EDAA) Achievement Award, the 2022 ACM SIGBED Technical Achievement Award, an honorary doctorate from the Technical University of Vienna in 2022, the 2023 CASES Test of Time Award for a paper published in 2008 on PRET machines, and the 2024 HiPEAC Technology Transfer Award.
Onur Güntürkün, Ruhr-Universität Bochum, Germany
I’m a Turkish-born Professor of Biopsychology at the Ruhr-Universität Bochum in Germany, and I’m kept awake by questions like: “Can different kinds of brains produce the same cognition?” or “Why are brains asymmetrically organized?”. I have spent many years at different universities and science institutions on five continents and work (in descending order) with pigeons, humans, dolphins, corvids, and crocodiles as experimental subjects. I would call myself a cognitive and comparative neuroscientist whose research approaches range from field work via single-cell recordings, behavioral experiments, and neuroanatomy up to brain imaging at ultrahigh magnetic fields. I am an elected member of several scientific academies, including the German Academy of Sciences Leopoldina, and have received numerous national and international scientific awards, including the highest German (Leibniz Prize) and Turkish (TÜBITAK Special Prize) science prizes and an ERC Advanced Grant.
David Dill, Stanford University, USA
David L. Dill, Donald E. Knuth Professor, Emeritus, in the School of Engineering at Stanford University, has worked on formal verification tools and methodology since 1983. He was a Professor of Computer Science at Stanford University for 30 years, then at Meta for over 4 years. His work in formal verification spans a wide range of methods and tools, with a focus on practical applications. Recently, he has been exploring synergies between formal verification and artificial intelligence, especially how modern coding agents can make formal verification more practical.
Dill was a co-founder of 0-In Design Automation, an early formal hardware verification startup. He has received three Computer-Aided Verification (CAV) Awards for different contributions to the field of formal verification and an Alonzo Church Award (for timed automata). He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences.
Jeannette Wing, Columbia University, USA
Jeannette M. Wing is the Executive Vice President for Research and Professor of Computer Science at Columbia University. She joined Columbia in 2017 as the inaugural Avanessians Director of the Data Science Institute. From 2013 to 2017, she was a Corporate Vice President of Microsoft Research. She is Adjunct Professor of Computer Science at Carnegie Mellon, where she twice served as Head of the Computer Science Department and where she had been on the faculty since 1985.
From 2007-2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. She received her S.B., S.M., and Ph.D. degrees in Computer Science, all from the Massachusetts Institute of Technology. She holds an honorary doctorate of technology from Linköping University, Sweden. Professor Wing’s current research focus is on trustworthy AI. Her general research interests are in the areas of trustworthy computing, security and privacy, specification and verification, concurrent and distributed systems, programming languages, and software engineering. She is known for her work on linearizability, behavioral subtyping, attack graphs, and privacy-compliance checkers. Her 2006 seminal essay, titled “Computational Thinking,” is credited with helping to establish the centrality of computer science to problem-solving in fields where previously it had not been embraced.
She is currently a member of the American Academy of Arts and Sciences Board of Directors and Council; the Computing Research Association Board; the Association of American Universities Senior Research Officers Steering Committee; the Advisory Board of the Association for Women in Mathematics; the Chan Zuckerberg New York Biohub Steering Committee; and the Empire AI, Inc. Board of Directors. She has been chair and/or a member of many other academic, government, industry, and professional society advisory boards. She is or has been on the editorial boards of twelve journals, including the Journal of the ACM, the Communications of the ACM, and the Harvard Data Science Review. She received the CRA Distinguished Service Award in 2011 and the ACM Distinguished Service Award in 2014. She is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the National Academy of Inventors. She is a member of the National Academy of Engineering and the MIT Corporation.
Moshe Y. Vardi, Rice University, USA
Moshe Y. Vardi is University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. His research focuses on the interface of mathematical logic and computation, including database theory, hardware/software design and verification, multi-agent systems, and constraint satisfaction. He is the recipient of numerous awards, including the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Knuth Prize, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author or co-author of over 800 papers, as well as two books. He is a Guggenheim Fellow as well as a fellow of several societies, and a member of several academies, including the US National Academy of Engineering, the National Academy of Sciences, the American Academy of Arts and Sciences, and the Royal Society of London. He holds ten honorary titles. He is a Senior Editor of the Communications of the ACM, the premier publication in computing.