Celestine Mendler-Dünner

I am a research group lead at the Max Planck Institute for Intelligent Systems in Tübingen. My research focuses on the role of society in the study of computation, taking into account the actions and reactions of individuals when analyzing and designing algorithmic systems. Prior to joining MPI-IS, I spent two years as an SNSF postdoctoral fellow at UC Berkeley, hosted by Moritz Hardt. I obtained my PhD from ETH Zurich, where I was affiliated with the Data Analytics Laboratory and supervised by Thomas Hofmann. During my PhD I was employed at IBM Research Zurich, where I contributed to the design and implementation of the system-aware learning algorithms that today form the backbone of the IBM Snap ML library.


News

    We are organizing a NeurIPS'21 workshop on Learning and Decision-Making with Strategic Feedback.
    I will be joining MPI-IS in Tübingen as a group leader.
    IBM released Snap ML for public use: pip install snapml
    I am honored to be a member of the ELLIS society.
    We are hosting a session on performative prediction at the WiML Un-workshop at ICML 2020.
    I was awarded the ETH Medal for my dissertation.
    I won the Fritz Kutter Award for the high industrial impact of my research on system-aware algorithm design.
    Our latest paper was accepted at NeurIPS 2019 as a spotlight presentation.
    I was awarded the SNSF Early Postdoc.Mobility fellowship and will join UC Berkeley in summer 2019.
    I successfully defended my PhD. I am now a postdoctoral researcher at IBM Research Zurich.
    Snap ML in the press: Forbes, The Register, and EE Times cover our research.

Research Projects

  • Social Dynamics of Decision-Making

    Machine learning is increasingly used to support consequential decisions that impact people. When predictions inform decisions, they have the potential to change the behavior of the broader system and thereby alter the very data distribution the predictive model was trained on, a dynamic effect that traditional machine learning fails to account for. To address this, we introduced the framework of performative prediction for supervised learning [ICML'20]. We analyzed the dynamics of retraining strategies in this setting and addressed the challenges that arise in stochastic optimization when the deployment of a model triggers performative effects in the distribution it is trained on [NeurIPS'20]. When performative effects are strong, we wish to model and understand them so that they can be incorporated into the very design of learning systems. Towards this ambitious goal, we explore connections to microfoundations from macroeconomic theory and investigate how assumptions on individual behavior can be used to model and analyze performative effects in the context of strategic classification [ICML'21]. Challenges related to social dynamics and performative prediction are receiving increasing attention from the machine learning community, and many exciting research questions remain unexplored at the intersection of optimization, causality, control theory, economics, and sociology. A minimal simulation of retraining under performative effects is sketched below.
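    The following toy example illustrates retraining dynamics when deployment shifts the data. The distribution map, the constant eps, and the number of rounds are illustrative assumptions of this sketch, not taken from the papers: with a linear shift of strength eps < 1, repeatedly refitting the model converges to a performatively stable point.

        # Toy sketch of repeated risk minimization under performative effects.
        # Deploying a model theta shifts the data to D(theta) = N(mu + eps*theta, 1);
        # the squared-loss minimizer on a sample is its mean, so retraining
        # iterates theta_{t+1} = mean of data drawn from D(theta_t).
        import numpy as np

        rng = np.random.default_rng(0)
        mu, eps = 1.0, 0.5  # base mean and strength of the performative effect (illustrative)

        def sample(theta, n=100_000):
            return rng.normal(mu + eps * theta, 1.0, size=n)

        theta = 0.0
        for t in range(15):
            theta = sample(theta).mean()  # refit on the distribution induced by deployment
            print(f"round {t:2d}: theta = {theta:.4f}")

        # For eps < 1 the iterates approach the performatively stable point
        # mu / (1 - eps) = 2.0, where the model is optimal for the data it induces.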

  • System-Aware Algorithm Design

    When training machine learning models in production, speed and efficiency are critical. Fast training enables short development cycles, offers fast time-to-insight and, not least, saves valuable resources. Our approach to fast training is to enable the efficient use of modern hardware through novel algorithm design. In particular, we develop principled tools and methods for training machine learning models that exploit compute parallelism [NeurIPS'19][ICML'20], hierarchical memory structures [HiPC'19][NeurIPS'17], accelerator units [FGCS'17], and the interconnect bandwidth of distributed systems [ICML'18]. We demonstrated [NeurIPS'18] that this approach can reduce training time by several orders of magnitude compared to standard system-agnostic methods. The core innovations of this research have been integrated into the IBM Snap ML library, helping diverse companies improve the speed, efficiency, and scalability of their machine learning workloads; a short usage sketch follows.
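    For a flavor of the library, here is a minimal sketch of training a linear model through Snap ML's scikit-learn-style interface (pip install snapml). The synthetic data, the 0/1 labels, and the default hyperparameters are assumptions of this sketch rather than a recommended configuration.

        # Minimal sketch: Snap ML exposes scikit-learn-style estimators.
        # Synthetic data stands in for a real large-scale workload.
        import numpy as np
        from snapml import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((10_000, 50)).astype(np.float32)
        w = rng.standard_normal(50)
        y = (X @ w > 0).astype(np.float32)  # binary 0/1 labels (assumed accepted, as in sklearn)

        clf = LogisticRegression()  # defaults; GPU and multi-threading options are documented
        clf.fit(X, y)
        accuracy = (clf.predict(X) == y).mean()
        print(f"train accuracy: {accuracy:.3f}")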

Publications

*equal contribution
Test-time Collective Prediction
C.Mendler-Dünner, W.Guo, S.Bates and M.I.Jordan
to appear in Advances in Neural Information Processing Systems (NeurIPS), 2021.
Alternative Microfoundations for Strategic Classification
M.Jagadeesan, C.Mendler-Dünner and M.Hardt
International Conference on Machine Learning (ICML), 2021.
Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
AAAI Conference on Artificial Intelligence (AAAI), 2021.
Stochastic Optimization for Performative Prediction
C.Mendler-Dünner*, J.C.Perdomo*, T.Zrnic* and M.Hardt
Advances in Neural Information Processing Systems (NeurIPS), 2020.
Performative Prediction
J.C.Perdomo*, T.Zrnic*, C.Mendler-Dünner and M.Hardt
International Conference on Machine Learning (ICML), 2020.
Randomized Block-Diagonal Preconditioning for Parallel Learning
C.Mendler-Dünner and A.Lucchi
International Conference on Machine Learning (ICML), 2020.
SySCD: A System-Aware Parallel Coordinate Descent Algorithm
N.Ioannou*, C.Mendler-Dünner* and T.Parnell
Advances in Neural Information Processing Systems (NeurIPS -- Spotlight), 2019.
On Linear Learning with Manycore Processors
E.Wszola, C.Mendler-Dünner, M.Jaggi and M.Püschel
IEEE International Conference on High Performance Computing (HiPC -- best paper finalist), 2019.
System-Aware Algorithms for Machine Learning
C.Mendler-Dünner
ETH Research Collection (PhD Thesis -- ETH Medal), 2019.
Snap ML: A Hierarchical Framework for Machine Learning
C.Dünner*, T.Parnell*, D.Sarigiannis, N.Ioannou, A.Anghel, G.Ravi, M.Kandasamy and H.Pozidis
Advances in Neural Information Processing Systems (NeurIPS), 2018.
A Distributed Second-Order Algorithm You Can Trust
C.Dünner, M.Gargiani, A.Lucchi, A.Bian, T.Hofmann and M.Jaggi
International Conference on Machine Learning (ICML), 2018.
Addressing Interpretability and Cold-Start in Matrix Factorization for Recommender Systems
C.Dünner*, M.Vlachos*, R.Heckel, V.Vassiliadis, T.Parnell and K.Atasu
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
Tera-Scale Coordinate Descent on GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
Journal of Future Generation Computer Systems (FGCS), 2018.
Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems
C.Dünner, T.Parnell and M.Jaggi
Advances in Neural Information Processing Systems (NIPS), 2017.
Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark
C.Dünner, T.Parnell, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Conference on Big Data (IEEE Big Data), 2017.
High-Performance Recommender System Training Using Co-Clustering on CPU/GPU Clusters
K.Atasu, T.Parnell, C.Dünner, M.Vlachos and H.Pozidis
International Conference on Parallel Processing (ICPP), 2017.
Large-Scale Stochastic Learning using GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning), 2017.
Scalable and Interpretable Product Recommendations via Overlapping Co-Clustering
R.Heckel, M.Vlachos, T.Parnell and C.Dünner
IEEE International Conference on Data Engineering (ICDE), 2017.
Primal-Dual Rates and Certificates
C.Dünner, S.Forte, M.Takac and M.Jaggi
International Conference on Machine Learning (ICML), 2016.

Peer-reviewed Workshop Contributions

Revisiting Design Choices in Proximal Policy Optimization
C.C.-Y.Hsu, C.Mendler-Dünner and M.Hardt
Workshop on Real World Challenges in RL (RWRL@NeurIPS), 2020.
Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
Workshop on Privacy Preserving ML (PPML@NeurIPS), 2020.
Breadth-first, Depth-next Training of Random Forests
A.Anghel*, N.Ioannou*, T.Parnell, N.Papandreou, C.Mendler-Dünner and H.Pozidis
Workshop on Systems for ML (MLSys@NeurIPS), 2019.
Snap ML
C.Mendler-Dünner and A.Anghel
Women in Machine Learning Workshop (WiML@NeurIPS), 2018.
Sampling Acquisition Functions for Batch Bayesian Optimization
A.De Palma, C.Mendler-Dünner, T.Parnell, A.Anghel and H.Pozidis
Workshop on Bayesian Nonparametrics (BNP@NeurIPS), 2018.
Parallel Training of Linear Models without Compromising Convergence
N.Ioannou, C.Mendler-Dünner, K.Kourtis and T.Parnell
Workshop on Systems for ML (MLSys@NeurIPS), 2018.