Keynote Speakers
David Dill, Stanford University, USA
AI-Assisted System Design and Verification
Researchers have been working since (at least) the 1950s on the problem of how to prove software and hardware correct (formal verification). In spite of brilliant work and many breakthroughs, the number of design errors in systems at all levels only increases. Formal verification is used in specific areas of system design, but it is not pervasive. The dream of systems that are guaranteed to be correct before they are deployed is mostly unrealized.
That’s because formal verification is very difficult. One huge problem is the need to define “correctness” for a system. In the best case, specification is additional work beyond conventional system design. It requires knowledge of logic, the ability to think very precisely, and somehow, the ability to interpret ambiguous natural language documents (if they exist).
The other extreme challenge is the computational difficulty of the problems that arise. For most of these problems, straightforward approaches clearly would not complete within a human lifetime. After pulling out all the stops to make software tools as efficient as we can, we work around the remaining computational difficulty with cleverness and expert knowledge.
For both of these reasons, formal verification requires clever, skilled practitioners to do a lot of detailed work for limited impact. But modern AI may be the key to breaking through these barriers. Modern coding agents can be made to apply the full range of verification methods much faster than a human, essentially acting as a team of experts “in a box”. They seem to be very smart and fast, but it is a challenge to get them to work systematically and to display good judgment.
The problems of formal verification are similar in some ways to coding, a task at which AI agents excel, but as an AI-driven task, formal verification has both advantages and disadvantages relative to coding.
Edward Lee, University of California, Berkeley, USA
Is Information Digital? A Defense of Reality
Data-driven techniques, such as large-language models, have proven astonishingly powerful in recent years, but progress has been much slower with cyber-physical systems such as robots. Many people assume this is because there is not enough training data. In this talk, I explore a possible explanation that is much more fundamental. Specifically, I ask the question of whether there is a fundamental difference between acquisition of knowledge through observation and acquisition of knowledge through embodied interaction. Can you learn to ride a bicycle by watching others ride a bicycle? In previous work, I have used concepts from computer science (zero-knowledge proofs, bisimulation, etc.) to show that there are things you can learn from embodied interaction that cannot be learned by objective observation. In this talk, I use Shannon information theory to argue that objective observation falls far short of revealing everything about physical reality. There is information in the real world that cannot be represented digitally, and objective observation can never acquire more than a small subset of this information. In short, learning to ride a bicycle may require getting on a bicycle, even for a robot.
Onur Güntürkün, Ruhr-Universität Bochum, Germany
The Parallel Evolution of Complex Cognition
The famous evolutionary biologist Stephen Jay Gould once aptly remarked: “If you could rewind the film of life under the same conditions, evolution would take a completely different course, and the human mind would not re-emerge, even if you could play this film of life a thousand times.” In contrast, his contemporary colleague Simon Conway Morris wrote: “Convergence points to the existence of a deeper structure of biology … Evolutionary paths are diverse, but the goals are limited.” Decades after this controversy, we are increasingly discovering that evolution does indeed seem to have serious limitations on its degrees of freedom, at least when it comes to brain and cognition. Take the following example: Great apes such as chimpanzees have complex cognitive abilities and a brain weighing approximately 400 g, which includes a large neocortex. In comparison, birds such as corvids and parrots have a brain weighing between 3 and 25 g and no neocortex. This should make it impossible for birds to develop high cognitive abilities. However, studies over the past two decades have shown that there is not a single cognitive ability in chimpanzees that has not also been demonstrated in corvids and parrots. In addition, the algorithmic architecture of the cognitive capacities of mammals and birds is highly overlapping. How is this possible? This question keeps me awake at night because it challenges fundamental assumptions about the evolution of the neural basis of complex cognition. But an even deeper question looms in the depths: What if Simon Conway Morris’s hypothesis must be extended even to non-biological systems, which might share deeper structures that we can only discover by comparing core principles of information processing? So, I have come to realize that I must travel back to the depths of vertebrate evolution to find answers. This talk is about that journey.
Jeannette Wing, Columbia University, USA