
MCGrad: Production-ready multicalibration

What is MCGrad?

MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model predictions are well-calibrated not just globally (across all data), but also across virtually any segment defined by your features (e.g., by country, content type, or any combination).

Traditional calibration methods, like Isotonic Regression or Platt Scaling, only ensure global calibrationβ€”meaning predicted probabilities match observed outcomes on average across all dataβ€”but your model can still be systematically overconfident or underconfident for specific groups. MCGrad automatically identifies and corrects these hidden calibration gaps without requiring you to manually specify protected groups.
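To make the gap concrete, here is a small self-contained sketch (toy data and numbers invented for illustration) of a model whose average prediction matches the overall positive rate, so it looks perfectly calibrated globally, yet is miscalibrated within every country segment:

```python
import pandas as pd

# Toy data (invented for illustration): every prediction is 0.5, and the
# overall positive rate is also 0.5, so the model looks calibrated globally.
df = pd.DataFrame({
    "prediction": [0.5] * 8,
    "label":      [1, 1, 1, 0,   # US: 75% positive
                   0, 0, 0, 1],  # UK: 25% positive
    "country":    ["US"] * 4 + ["UK"] * 4,
})

# Global calibration gap: |mean prediction - observed positive rate| overall.
global_gap = abs(df["prediction"].mean() - df["label"].mean())

# The same gap computed within each country segment.
seg = df.groupby("country").agg(pred=("prediction", "mean"),
                                obs=("label", "mean"))
seg["gap"] = (seg["pred"] - seg["obs"]).abs()

print(f"global gap: {global_gap:.2f}")  # 0.00: looks perfectly calibrated
print(seg["gap"])                       # US and UK are each off by 0.25
```

A global recalibrator like Platt Scaling sees only the first number and has nothing to fix; multicalibration targets the per-segment gaps as well.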

*Figure: global calibration curve. A globally well-calibrated model: predictions match observed outcomes on average.*

*Figure: segment-level calibration curves. The same model shows hidden miscalibration when broken down by segment; MCGrad fixes this.*

🌟 Why MCGrad?

  • State-of-the-art multicalibration β€” Best-in-class calibration quality across a vast number of segments.
  • Easy to use β€” Familiar interface. Pass features, not segments.
  • Highly scalable β€” Fast to train, low inference overhead, even on web-scale data.
  • Safe by design β€” Likelihood-improving updates with validation-based early stopping.

🏭 Production Proven

MCGrad has been deployed at Meta on hundreds of production models. See the research paper for detailed experimental results.

πŸ“¦ Installation

Requirements: Python 3.10+

Stable release:

Latest development version:

pip install git+https://github.com/facebookincubator/MCGrad.git

πŸš€ Quick Start

from mcgrad import methods
import numpy as np
import pandas as pd

# Prepare your data in a DataFrame
df = pd.DataFrame({
    'prediction': np.array([0.1, 0.3, 0.7, 0.9, 0.5, 0.2]),  # Your model's predictions
    'label': np.array([0, 0, 1, 1, 1, 0]),  # Ground truth labels
    'country': ['US', 'UK', 'US', 'UK', 'US', 'UK'],  # Categorical feature
    'content_type': ['photo', 'video', 'photo', 'video', 'photo', 'video'],  # Categorical feature
})

# Apply MCGrad
mcgrad = methods.MCGrad()
mcgrad.fit(
    df_train=df,
    prediction_column_name='prediction',
    label_column_name='label',
    categorical_feature_column_names=['country', 'content_type']
)

# Get calibrated predictions
calibrated_predictions = mcgrad.predict(
    df=df,
    prediction_column_name='prediction',
    categorical_feature_column_names=['country', 'content_type']
)
# Returns: numpy array of calibrated probabilities, e.g., [0.12, 0.28, 0.72, ...]
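To see what the fit bought you, compare per-segment reliability before and after calibration. The helper below is a generic diagnostic written only with pandas; it is not part of the MCGrad API, and the data and column names are illustrative:

```python
import pandas as pd

def segment_calibration_gaps(df, pred_col, label_col, segment_cols):
    """Mean |avg prediction - observed positive rate| across the values
    of each segment-defining column. Lower is better-calibrated."""
    gaps = {}
    for col in segment_cols:
        seg = df.groupby(col).agg(pred=(pred_col, "mean"),
                                  obs=(label_col, "mean"))
        gaps[col] = (seg["pred"] - seg["obs"]).abs().mean()
    return gaps

# Illustrative data: US predictions run high, UK predictions run low.
df = pd.DataFrame({
    "prediction": [0.9, 0.8, 0.2, 0.1, 0.7, 0.3],
    "label":      [1,   1,   0,   0,   0,   1],
    "country":    ["US", "US", "UK", "UK", "US", "UK"],
})
print(segment_calibration_gaps(df, "prediction", "label", ["country"]))
```

Run it once on the raw `prediction` column and once on a column holding the calibrated predictions; segment gaps that shrink indicate multicalibration is doing its job.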

πŸ“š Documentation

πŸ’¬ Community & Support

πŸ“– Citation

If you use MCGrad in your research, please cite our paper.

@inproceedings{tax2026mcgrad,
  title={{MCGrad: Multicalibration at Web Scale}},
  author={Tax, Niek and Perini, Lorenzo and Linder, Fridolin and Haimovich, Daniel and Karamshuk, Dima and Okati, Nastaran and Vojnovic, Milan and Apostolopoulos, Pavlos Athanasios},
  booktitle={Proceedings of the 32nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1 (KDD 2026)},
  year={2026},
  doi={10.1145/3770854.3783954}
}

Related Publications

Some of our team's other work on multicalibration:

  • A New Metric to Measure Multicalibration: Guy, I., Haimovich, D., Linder, F., Okati, N., Perini, L., Tax, N., & Tygert, M. (2025). Measuring multi-calibration. arXiv:2506.11251.

  • Theoretical Results on Value of Multicalibration: Baldeschi, R. C., Di Gregorio, S., Fioravanti, S., Fusco, F., Guy, I., Haimovich, D., Leonardi, S., et al. (2025). Multicalibration yields better matchings. arXiv:2511.11413.