Stop Guessing: Discover Optimal Trade-offs in Your ML Models

Andrei Paleyes

AutoML Seminar

January 2026

Brendan Avent Bogdan Ficiu Emile Ferreira

Discovering and navigating trade-offs in ML models...

Sound familiar?

"Multi-Objective AutoML: Towards Accurate and Robust models", Jan van Rijn, AutoML seminar, 16 Oct 2025

Today's highlights

3 objectives: fairness, privacy, energy efficiency

Multi-objective Bayesian Optimisation (MOBO)

Step-by-step guide!

The two-model dilemma

Model A
  • 96% accuracy
  • 15 samples per joule
Model B
  • 89% accuracy
  • 21 samples per joule

Option 1: grid or random search

Option 2: constrained optimisation

Table sources: Xu et al. 2019 "Achieving Differential Privacy and Fairness in Logistic Regression", Chang and Shokri 2020 "On the Privacy Risks of Algorithmic Fairness"

Can we discover an entire trade-off surface?

Pareto front

"Disclosure avoidance for block level data and protection of confidentiality in public tabulations.", John M. Abowd, Census Scientific Advisory Committee (Fall Meeting), 2018

Option 3: Multi-objective Bayesian optimisation (MOBO)

Image source: "High-Dimensional Bayesian Multi-Objective Optimization", Gaudrie, 2019

Bayesian optimisation recap

Image source: "Bayesian Optimization and Active Learning Cookbook", Sam Lishak, Physics X, 2025

MOBO: differences

Assuming 2 objectives
  • f(x) = (y1, y2)
  • Model: multi-output GPs or 2 single-output GPs
  • Hypervolume-based acquisition function

"Multiobjective optimization of expensive-to-evaluate deterministic computer simulator models", Svenson and Santner, Computational Statistics & Data Analysis, 2016

HVPoI (hypervolume probability of improvement):
\[\boxed{\alpha_{\mathrm{HVPoI}}(x) = \mathbb{P}\left(\mathrm{HVI}(\mathcal{P}, x) > 0\right)}\]

EHVI (expected hypervolume improvement):
\[\boxed{\alpha_{\mathrm{EHVI}}(x) = \mathbb{E}\left[ \mathrm{HVI}(\mathcal{P}, x) \right]}\]

where
\[\mathrm{HVI}(\mathcal{P}, x) = \mathrm{HV}(\mathcal{P} \cup \{x\}) - \mathrm{HV}(\mathcal{P})\]
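For two maximised objectives the hypervolume has a simple closed form (a sum of rectangles above a reference point), which makes the HVI term above easy to sketch in plain Python; the reference point and example values are illustrative:

```python
def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded below by `ref` (2 maximised objectives)."""
    # Sweep points by first objective, descending; accumulate rectangle areas.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:  # dominated points never raise the running y, so they add nothing
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def hvi(front, x, ref):
    """Hypervolume improvement from adding candidate x to the current front."""
    return hypervolume_2d(front + [x], ref) - hypervolume_2d(front, ref)

ref = (0.8, 10.0)  # hypothetical reference point, below all observed values
front = [(0.96, 15.0), (0.89, 21.0)]
print(hypervolume_2d(front, ref), hvi(front, (0.95, 20.0), ref))
```

EHVI averages this quantity over the surrogate's predictive distribution at x; HVPoI only asks whether it is positive.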

Key points

  • Keep scales comparable
  • Reference point matters
  • EHVI is a good practical choice

Case Studies

Case study 1: DPareto - utility vs privacy

  • Sensitive data
  • ε-Differential privacy (DP)
  • Increased privacy decreases performance
  • Hyperparameters:
    • Training: batch size, number of epochs
    • DP: clipping norm, noise variance
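For reference, the standard definition: a randomised mechanism \(M\) satisfies \(\varepsilon\)-differential privacy if for all neighbouring datasets \(D, D'\) (differing in one record) and all output sets \(S\),
\[\mathbb{P}\left[M(D) \in S\right] \le e^{\varepsilon}\, \mathbb{P}\left[M(D') \in S\right]\]
Smaller \(\varepsilon\) means stronger privacy, which is exactly why the DP hyperparameters (clipping norm, noise variance) trade off against utility.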

"Automatic discovery of privacy-utility pareto fronts", Brendan Avent, Javier Gonzalez, Tom Diethe, AP, Borja Balle, PETS 2020

HVPoI

Case study 2: PFairDP - utility vs privacy vs fairness

  • 🔥3D fronts🔥
  • Extension of DPareto
  • New objective: algorithmic fairness
  • Two fairness methods
    • Statistical parity difference (SPD)
    • Disparate impact (DI)
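Both fairness metrics have simple empirical estimators from binary predictions and a binary protected attribute. A minimal sketch (names and the group-direction convention are illustrative; libraries such as AIF360 fix a particular convention):

```python
def positive_rate(preds, group, g):
    """Fraction of positive predictions within protected group g."""
    members = [p for p, a in zip(preds, group) if a == g]
    return sum(members) / len(members)

def spd(preds, group):
    """Statistical parity difference: P(Yhat=1 | A=1) - P(Yhat=1 | A=0)."""
    return positive_rate(preds, group, 1) - positive_rate(preds, group, 0)

def di(preds, group):
    """Disparate impact: P(Yhat=1 | A=1) / P(Yhat=1 | A=0)."""
    return positive_rate(preds, group, 1) / positive_rate(preds, group, 0)

preds = [1, 1, 0, 1, 0, 0]  # toy binary predictions
group = [1, 1, 1, 0, 0, 0]  # toy binary protected attribute
print(spd(preds, group), di(preds, group))
```

SPD near 0 and DI near 1 indicate parity; either can serve as the fairness objective in the MOBO loop.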

"Automated discovery of trade-off between utility, privacy and fairness in machine learning models", Bogdan Ficiu, Neil Lawrence, AP, BIAS@ECML 2023

"Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks", Pannekoek and Spigler, arxiv, 2021

"Achieving Differential Privacy and Fairness in Logistic Regression", Xu, Yuan, and Wu, WWW, 2019

EHVI

Case study 3: ECOpt - utility vs energy efficiency

  • Performance vs energy efficiency
  • Focus on ML inference

"Optimising for Energy Efficiency and Performance in Machine Learning", Emile Ferreira, Neil Lawrence, AP, CAIN 2026

Guide

  • Define objectives
  • Define metrics
  • Keep metric values on similar scales
  • Define hyperparameter space
  • Implement MOBO pipeline (just use BoTorch)
  • Choose acquisition function (just use EHVI)
  • Prototype small first
  • Run optimisation
  • How will you use your discovered trade-off surface?
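Putting the guide together, here is a toy end-to-end loop. A random-candidate proposer scored by true-function HVI stands in for the GP surrogate plus EHVI maximisation (in practice, use BoTorch's multi-objective acquisition functions such as qExpectedHypervolumeImprovement); the two objectives, reference point, and budgets below are all illustrative:

```python
import random

def evaluate(x):
    """Toy objectives on [0, 1]: accuracy-like vs efficiency-like, both maximised."""
    return (1.0 - (x - 0.3) ** 2, 1.0 - (x - 0.7) ** 2)

def dominates(a, b):
    return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def hypervolume_2d(front, ref):
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

random.seed(0)
ref = (0.0, 0.0)                                        # reference point
observed = [evaluate(random.random()) for _ in range(3)]  # small initial design
for _ in range(20):                                     # optimisation loop
    front = pareto_front(observed)
    # Stand-in for the surrogate + EHVI step: score random candidates by the
    # hypervolume they would add (a real pipeline scores a GP's predictions).
    cands = [random.random() for _ in range(50)]
    best = max(cands, key=lambda c: hypervolume_2d(front + [evaluate(c)], ref))
    observed.append(evaluate(best))

print(pareto_front(observed))
```

The final print is the discovered trade-off surface, which is exactly the artefact the last bullet asks you to plan a use for.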