CMMRS 2025 will include lectures from faculty at Cornell University, the University of Maryland, and the Max Planck Institutes.
Rediet Abebe, ELLIS Institute, Tübingen
TBD (AI/Mechanisms)
Bahar Asgari, University of Maryland, College Park
Lecture 1: From General‑Purpose CPUs to Domain‑Specific Architectures
For decades, the computing ecosystem has been dominated by general‑purpose CPUs (and more recently GPUs) whose flexibility comes at the expense of energy, area, and cost efficiency. With Moore’s Law slowing, contemporary systems can no longer rely on brute‑force transistor scaling to close the performance gap between peak and sustained throughput. This lecture introduces the fundamental principles of computer architecture with a focus on why “designing for the common case” limits performance. We will cover essential building blocks of domain‑specific architectures (DSAs) including dataflow architectures, systolic arrays, streaming accelerators, sparsity in workloads, and the Roofline performance model. By the end of this lecture, students will understand how DSAs achieve orders‑of‑magnitude improvements in throughput and energy efficiency over general‑purpose hardware, laying the groundwork for the more advanced, research‑oriented topics in Lecture 2.
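The Roofline model mentioned above can be stated in one line: a kernel's attainable throughput is the minimum of the machine's peak compute rate and its memory bandwidth times the kernel's arithmetic intensity. A minimal sketch, using invented hardware numbers rather than any specific chip:

```python
# Minimal sketch of the Roofline performance model. The hardware numbers
# below are illustrative placeholders, not measurements of a real machine.

def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity
    (FLOPs performed per byte moved from memory)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth.
# Sparse kernels typically have low intensity and sit under the bandwidth
# "roof"; dense, highly reused kernels hit the compute "roof".
print(roofline(1000, 100, 0.25))  # bandwidth-bound: 25.0 GFLOP/s
print(roofline(1000, 100, 64.0))  # compute-bound: 1000 GFLOP/s
```

The model explains why DSAs focus on raising effective intensity (data reuse, streaming, sparsity-aware formats) rather than only raising peak FLOPs.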
Lecture 2: Reconfigurable DSAs for Adaptive Modern Workloads
As modern workloads such as ML/AI or advanced scientific computing become increasingly heterogeneous, static DSAs struggle to deliver consistently high performance across varied data characteristics. Reconfigurable computing bridges this gap by enabling hardware to adapt its dataflow and resource allocation at runtime. In this lecture, we will explore two state‑of‑the‑art research efforts. First, we will examine a machine learning–guided approach for dynamically selecting optimal dataflow schemes in sparse matrix‑matrix multiplication, demonstrating how decision trees and reinforcement learning can outperform static heuristics. Second, we will study a partially reconfigurable accelerator for sparse scientific computing that dynamically balances latency, resource utilization, and solver convergence. Students will gain insight into the methodology of designing, evaluating, and benchmarking adaptive hardware architectures, and will leave prepared to identify open research questions at the intersection of machine learning, reconfigurable systems, and domain‑specific computing.
Justin Hsu, Cornell University
Type Systems: Between Theory and Practice
To outsiders, research on type systems—and programming languages in general—can seem highly intimidating, with forests of symbols, obscure technical jargon, formidable mathematical abstractions, or sometimes all of the above. In the first lecture, I’ll try to demystify this area by focusing on what type systems are, what they can do, and why they are interesting. In the second lecture, I’ll present a case study of type systems that apply abstract constructions from category theory to yield automated and scalable tools for a highly concrete problem: analyzing rounding error in floating-point programs.
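The "highly concrete problem" of the second lecture is easy to demonstrate: floating-point arithmetic does not obey the laws of real arithmetic, so rounding error depends on evaluation order, and sound tools must bound it. A two-line illustration:

```python
# Floating-point rounding error in miniature: neither 0.1 nor 0.2 is
# exactly representable in binary, so their sum is not exactly 0.3.

print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # the tiny rounding error

# Addition is not associative either, so error analysis must track the
# program's evaluation order, not just the mathematical expression:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```

Type systems that track rounding behavior turn this kind of per-operation reasoning into a compositional, automated analysis.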
Manuel Gomez Rodriguez, MPI for Software Systems (MPI-SWS)
Counterfactuals in Machine Learning
“Had I clicked on the attachment of that email, my computer would have been hacked.” Reasoning about how things could have turned out differently from how they did in reality is a hallmark of human intelligence. This type of reasoning, called counterfactual reasoning, has been shown to play a significant role in humans’ ability to learn from limited past experience and improve their decision-making skills over time. Is counterfactual reasoning a human capacity that machines cannot have? Surprisingly, recent advances at the interface of machine learning and causality have demonstrated that it is possible to build machines that perform and benefit from counterfactual reasoning, much as humans do. In this lecture, you will learn about counterfactuals in machine learning, including their use in AI-assisted decision making, explainability, safety, fairness, and reinforcement learning.
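The email example above can be phrased in the standard structural-causal-model recipe for counterfactuals (abduction, action, prediction). The model below is invented purely for illustration: clicking a malicious attachment (C) causes a hack (H) only when the attacker's infrastructure is active, an unobserved background variable (U).

```python
# A minimal, invented structural causal model for the email counterfactual.
# Structural equation: H := C and U, where U is exogenous background noise.

def hacked(click, u_attacker_active):
    return click and u_attacker_active

# Observed (factual) world: the user did NOT click and was NOT hacked.
click_obs, hacked_obs = False, False

# Step 1 (abduction): keep only the values of U consistent with the
# observation. Here H = C and U with C = False tells us nothing about U,
# so both worlds remain possible.
consistent_u = [u for u in (False, True)
                if hacked(click_obs, u) == hacked_obs]

# Steps 2-3 (action + prediction): intervene do(C := True) while holding
# the abducted U fixed, and read off the counterfactual outcome.
for u in consistent_u:
    print(f"if U={u}: had I clicked, hacked = {hacked(True, u)}")
```

Fixing U during the intervention is what distinguishes the counterfactual "had I clicked" from merely simulating a fresh click in a new, independent world.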
Abhinav Shrivastava, University of Maryland, College Park
TBD (Vision/Robotics)
Alexandra Silva, Cornell University
Algebraic Network Verification
I will present NetKAT, a language based on Kleene Algebra with Tests that has been used in network verification. I will show recent developments in its verification engine, including the design of efficient data structures for reasoning about equivalence. I will then show several extensions of NetKAT and how they enable more expressive verification and analysis techniques.