Predictions, when deployed in societal systems, can trigger actions and reactions by individuals and thereby change the way the broader system behaves -- a dynamic effect that traditional machine learning fails to account for. To formalize and reason about performativity in the context of machine learning, we have developed the framework of performative prediction. It extends the classical risk minimization framework by allowing the data distribution to depend on the deployed model. This inherently dynamic viewpoint leads to new solution concepts and optimization challenges; it brings forth interesting connections to concepts from causality and game theory, and it relates machine learning to century-old questions about the predictability of social events.
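To make the idea concrete, here is a minimal toy simulation of repeated retraining under a model-dependent data distribution. All specifics are illustrative assumptions: a one-dimensional mean-estimation task where deploying a model with parameter theta shifts the data mean by eps * theta, so each retraining step moves theta toward the performatively stable point mu / (1 - eps).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 1.0, 0.5  # base mean and performativity strength (toy assumptions)

def sample(theta, n=200_000):
    # D(theta): the data distribution shifts in response to the deployed model
    return rng.normal(mu + eps * theta, 1.0, size=n)

theta = 0.0
for _ in range(30):
    # retrain: the squared-loss minimizer on D(theta) is the sample mean
    theta = sample(theta).mean()

print(theta)  # ≈ mu / (1 - eps) = 2.0, the performatively stable point
```

Retraining converges here because the distribution map is a contraction (eps < 1); for strongly performative settings, repeated retraining need not converge, which is exactly the kind of phenomenon the framework studies.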
Algorithmic predictions play an increasingly important role in digital economies as they mediate services, platforms, and markets at societal scale. The fact that such services can steer consumption and behavior is a central concern in modern antitrust. I am exploring how the concept of performativity can help quantify economic power in digital economies and support digital market investigations. Intuitively, the more powerful a firm, the more performative its predictions -- a causal effect strength we can measure from observational and experimental data.
A Chrome extension that measures how algorithmic decisions of search engines can impact user click behavior by collecting experimental data while you surf.
→ Sign up at https://powermeter.is.tue.mpg.de/
When building societal-scale machine learning systems, training time and resource constraints can be a critical bottleneck for dynamic system optimization. Efficient training algorithms that are aware of distributed architectures, interconnect topologies, memory constraints, and accelerator units form an important building block toward using available resources most efficiently. As part of my PhD research, we demonstrated that system-aware algorithms can reduce training time by several orders of magnitude compared to standard system-agnostic methods. Today, the innovations of my PhD research form the backbone of the IBM Snap ML library, which has been integrated with several of IBM's core AI products.
Snap ML is a library that provides resource efficient and fast training of
popular machine learning models on modern computing systems.
>400k downloads on PyPI
US20210264320A1 - T. Parnell, A. Anghel, N. Ioannou, N. Papandreou, C. Mendler-Dünner, D. Sarigiannis, H. Pozidis.
US11562270B2 - M. Kaufmann, T. Parnell, A. Kourtis, C. Mendler-Dünner.
US11573803B2 - N. Ioannou, C. Dünner, T. Parnell.
US11295236B2 - C. Dünner, T. Parnell, H. Pozidis.
US11315035B2 - T. Parnell, C. Dünner, H. Pozidis, D. Sarigiannis.
US11461694B2 - T. Parnell, C. Dünner, D. Sarigiannis, H. Pozidis.
US11301776B2 - C. Dünner, T. Parnell, H. Pozidis.
US10147103B2 - C. Dünner, T. Parnell, H. Pozidis, V. Vasileiadis, M. Vlachos.
US10839255B2 - K. Atasu, C. Dünner, T. Mittelholzer, T. Parnell, H. Pozidis, M. Vlachos.