FORC 2025 Program

Wednesday, June 4th
Location: Tresidder Oak Lounge, Tresidder Memorial Union (2nd Floor), 459 Lagunita Drive, Stanford, CA 94305

8:00-9:00 Breakfast

9:00-10:15 Session 1 Chair: Parikshit Gopalan

When Does a Predictor Know its Own Loss?
Aravind Gollakota, Parikshit Gopalan, Aayush Karan, Charlotte Peale, and Udi Wieder

Kandinsky Conformal Prediction: Beyond Class- and Covariate-Conditional Coverage
Konstantina Bairaktari, Jiayun Wu, and Zhiwei Steven Wu

Kernel Multiaccuracy
Carol Long, Wael Alghamdi, Alexander Glynn, Yixuan Wu, and Flavio Calmon

Near-Optimal Algorithms for Omniprediction
Princewill Okoroafor, Robert Kleinberg, and Michael P. Kim

10:15-10:45 Coffee break

10:45-12:00 Session 2 Chair: Michael P. Kim

Mapping the Tradeoffs and Limitations of Algorithmic Fairness
Etam Benger and Katrina Ligett

Provable Uncertainty Decomposition via Higher-Order Calibration
Gustaf Ahdritz, Aravind Gollakota, Parikshit Gopalan, Charlotte Peale, and Udi Wieder

Smoothed Calibration and Decision Making
Jason Hartline, Yifan Wu, and Yunran Yang

Model Ensembling for Constrained Optimization
Ira Globus-Harris, Varun Gupta, Michael Kearns, and Aaron Roth

12:00-2:15 Lunch (on your own)

2:15-3:15 Session 3 Chair: Mark Bun

Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Yevgeniy Dodis and Eli Goldin

Differentially Private Learning Beyond the Classical Dimensionality Regime
Cynthia Dwork, Pranay Tankala, and Linjun Zhang

Optimal Rates for Robust Stochastic Convex Optimization
Changyu Gao, Andrew Lowy, Xingyu Zhou, and Stephen Wright

3:15-3:45 Coffee break

3:45-5:00 Session 4 Chair: Vitaly Feldman

Fingerprinting Codes Meet Geometry: Improved Lower Bounds for Private Query Release and Adaptive Data Analysis
Xin Lyu and Kunal Talwar

Differential Privacy with Multiple Selections
Ashish Goel, Zhihao Jiang, Aleksandra Korolova, Kamesh Munagala, and Sahasrajit Sarmasarkar

Laplace Transform Interpretation of Differential Privacy
Rishav Chourasia, Uzair Javaid, and Biplab Sikdar

Smooth Sensitivity Revisited: Towards Optimality
Richard Hladík and Jakub Tětek

Thursday, June 5th
Location: Tresidder Oak Lounge, Tresidder Memorial Union (2nd Floor), 459 Lagunita Drive, Stanford, CA 94305

8:00-9:00 Breakfast

9:00-10:00 Keynote: Susan Athey

New Methods for Fine Tuning Transformer Models and LLMs: Representativeness, Wage Models, and Causal Analysis

Abstract: The rise of foundation models marks a paradigm shift in machine learning: instead of training specialized models from scratch, foundation models are first trained on massive datasets before being adapted or fine-tuned to make predictions on smaller datasets. Initially developed for text, foundation models can also excel at making predictions about social science data. However, while many estimation problems in the social sciences use prediction as an intermediate step, they ultimately require different criteria for success. We develop methods for fine-tuning foundation models to solve these estimation problems. We first characterize an omitted variable bias that can arise when a foundation model is fine-tuned only to maximize predictive accuracy. We then provide a novel set of conditions for fine-tuning under which estimates of causal effects derived from a foundation model are root-n-consistent. Based on this theory, we develop new fine-tuning algorithms that empirically mitigate this omitted variable bias. To demonstrate our ideas, we study gender wage decomposition.

Bio: Professor Susan Athey is The Economics of Technology Professor at Stanford Graduate School of Business. She received her bachelor’s degree from Duke University and her PhD from Stanford, and she holds an honorary doctorate from Duke University. She previously taught in the economics departments at MIT, Stanford, and Harvard. She is an elected member of the National Academy of Sciences and is the recipient of the John Bates Clark Medal, awarded by the American Economic Association to the economist under 40 who has made the greatest contributions to thought and knowledge. Her current research focuses on the economics of digitization, marketplace design, and the intersection of causal inference and machine learning.

As one of the first “tech economists,” she served as consulting chief economist for Microsoft Corporation for six years, and has served on the boards of multiple private and public technology firms. She was a founding associate director of the Stanford Institute for Human-Centered Artificial Intelligence, where she currently serves as senior fellow, and she is the founding director of the Golub Capital Social Impact Lab at Stanford GSB.

From 2022 to 2024, she took leave from Stanford to serve as Chief Economist at the U.S. Department of Justice Antitrust Division. Professor Athey was the 2023 President of the American Economic Association, where she previously served as vice president and elected member of the Executive Committee.

10:00-10:30 Coffee break

10:30-12:00 Session 5 Chair: Parikshit Gopalan

Group Fairness and Multi-criteria Optimization in School Assignment
Santhini K. A., Kamesh Munagala, Meghana Nasre, and Govind S. Sankar

Cost over Content: Information Choice in Trade
Kristof Madarasz and Marek Pycia

Pessimism Traps and Algorithmic Interventions
Avrim Blum, Emily Diana, Kavya Ravichandran, and Alexander Tolbert

The Value of Prediction in Identifying the Worst-Off
Unai Fischer-Abaigar, Christoph Kern, and Juan C. Perdomo

The Hidden Cost of Waiting for Accurate Predictions
Ali Shirali, Ariel D. Procaccia, and Rediet Abebe

12:00-2:00 Lunch (on your own)

2:00-3:20 Session 6 Chair: Thomas Steinke

Fully Dynamic Graph Algorithms with Edge Differential Privacy
Sofya Raskhodnikova and Teresa Anna Steiner

Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High ε Regime
Charlie Harrison and Pasin Manurangsi

Count on Your Elders: Laplace vs Gaussian Noise
Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, and Sahel Torkamani

Better Gaussian Mechanism using Correlated Noise
Christian Janos Lebeda
+
The Correlated Gaussian Sparse Histogram Mechanism
Christian Janos Lebeda and Lukas Retschmeier

3:20-3:50 Coffee break

3:50-5:00 Poster session

5:15-5:45 Business meeting and awards

Friday, June 6th
Location: Tresidder Oak Lounge, Tresidder Memorial Union (2nd Floor), 459 Lagunita Drive, Stanford, CA 94305

8:00-9:00 Breakfast

9:00-10:00 Keynote: Nicholas Carlini

How LLMs could enable harm at scale

Abstract: This talk considers the risks of advanced LLMs. First, as a proof-of-work, I demonstrate that I’m a real researcher and discuss some recent work considering how adversaries could use language models to cause harm by exploiting vulnerable systems and improving the monetization of exploited systems.

Then I turn to a much more ambiguous question and ask: what’s going on with this whole AI thing? If language models continue to get more advanced, what (worse) harms should we expect? And how can we begin to prepare?

Bio: Nicholas Carlini is a research scientist at Anthropic working at the intersection of security and machine learning. His current work studies what harms an adversary could do with, or do to, language models. His work has received best paper awards from Eurocrypt, USENIX Security, ICML, and IEEE S&P. He received his PhD from UC Berkeley under David Wagner.

10:00-10:30 Coffee break

10:30-12:00 Session 7 Chair: Mark Bun

Scalable Private Partition Selection via Adaptive Weighting
Justin Y. Chen, Vincent Cohen-Addad, Alessandro Epasto, and Morteza Zadimoghaddam

Optimal Bounds for Private Minimum Spanning Trees via Input Perturbation
Rasmus Pagh, Lukas Retschmeier, Hao Wu, and Hanwen Zhang

Near-Universally-Optimal Differentially Private Minimum Spanning Trees
Richard Hladík and Jakub Tětek

OWA for Bipartite Assignments
Jabari Hastings, Sigal Oren, and Omer Reingold

Hardness and Approximation Algorithms for Balanced Districting Problems
Prathamesh Dharangutte, Jie Gao, Shang-En Huang, and Fang-Yi Yu

12:00-2:00 Lunch (on your own)

2:00-3:15 Session 8 Chair: Jayshree Sarathy

Privacy-Computation Trade-Offs in Private Repetition and Metaselection
Kunal Talwar

On the Differential Privacy and Interactivity of Privacy Sandbox Reports
Badih Ghazi, Charlie Harrison, Arpana Hosabettu, Pritish Kamath, Alexander Knop, Ravi Kumar, Ethan Leeman, Pasin Manurangsi, Mariana Raykova, Vikas Sahu, and Phillipp Schoppmann

Privately Evaluating Black-Box Functions
Ephraim Linder, Sofya Raskhodnikova, Adam Smith, and Thomas Steinke

Private Estimation when Data and Privacy Demands are Correlated
Syomantak Chaudhuri and Thomas Courtade

3:15-3:45 Coffee break

3:45-5:00 Session 9 Chair: Thomas Steinke

Debiasing Functions of Private Statistics in Postprocessing
Flavio Calmon, Elbert Du, Cynthia Dwork, Brian Finley, and Grigory Franguridi

Differentially Private High-Dimensional Approximate Range Counting, Revisited
Martin Aumüller, Fabrizio Boninsegna, and Francesco Silvestri

PREM: Privately Answering Statistical Queries with Relative Error
Badih Ghazi, Cristobal Guzman, Pritish Kamath, Alexander Knop, Ravi Kumar, Pasin Manurangsi, and Sushant Sachdeva

Differentially Private Sequential Learning
Yuxin Liu and Amin Rahimian