Learning the uncertainty in stochastic optimization
This project aims to automatically learn problem-dependent uncertainty sets from available data on the uncertain parameters, thereby overcoming the limitations of traditional robust and stochastic optimization approaches, which assume exact knowledge of the support set and of the probability distribution, respectively.
Almost all optimization programs encountered in real-world applications, ranging from supply chains to energy and financial markets to robot trajectory planning, are affected by uncertain parameters. Decision-making under uncertainty is therefore a cornerstone of many communities, from operations research to control to machine learning. Traditionally, methods to tackle these problems fall into two distinct groups. On one side, robust optimization assumes exact knowledge of a deterministic uncertainty set in which the uncertain parameters lie and optimizes over the worst case. On the other side, stochastic optimization assumes exact knowledge of the underlying probability distribution and minimizes on average. In practice, however, neither the uncertainty set nor the distribution is exactly known to the decision-maker, who instead has access only to a finite number of uncertainty realizations, i.e., data.
In this project, we aim to fill this gap by proposing a data-driven technique to automatically learn the uncertainty set, reshaping it to minimize the expected cost or the safety violation of the optimization program at hand. Inspired by the hyperparameter tuning literature from the machine learning community, we will rephrase the task as one of "optimally tuning uncertainty sets in a data-driven fashion" and cast it as a bilevel optimization problem. Ultimately, the proposed procedure could benefit several uncertainty quantification methods, such as Wasserstein ambiguity sets and conformal prediction, whose tuning parameters would be automatically learned based on (i) the data and (ii) the specific optimization program at hand, in order to produce better out-of-sample performance.
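To make the bilevel idea concrete, here is a minimal, hypothetical sketch in Python (all numbers and names are illustrative, not part of the project): the inner problem is a robust newsvendor with a box uncertainty set of radius r around the sample mean, solvable in closed form, while the outer problem tunes r on held-out data. Grid search stands in for the gradient-based bilevel method the project would actually develop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical newsvendor instance with uncertain demand.
# Box uncertainty set U(r) = [mu - r, mu + r] around the sample mean.
price, cost = 5.0, 3.0
demand_train = rng.normal(100.0, 20.0, size=200)  # observed demand data
demand_val = rng.normal(100.0, 20.0, size=200)    # held-out data for tuning
mu = demand_train.mean()

def inner_robust_order(r):
    """Inner problem: worst-case-optimal order quantity for U(r).
    For a box set the worst case is the lowest demand mu - r, so the
    robust order is simply mu - r (clipped at zero)."""
    return max(mu - r, 0.0)

def expected_profit(q, demand):
    """Outer objective: average out-of-sample profit of ordering q."""
    return float(np.mean(price * np.minimum(q, demand) - cost * q))

# Outer problem: choose the set radius r that maximizes held-out profit.
radii = np.linspace(0.0, 60.0, 121)
profits = [expected_profit(inner_robust_order(r), demand_val) for r in radii]
best_r = radii[int(np.argmax(profits))]
print(f"tuned radius r = {best_r:.1f}")
```

The tuned radius is typically strictly positive but much smaller than the full data range, illustrating how a data-driven set can interpolate between the overly conservative robust solution and the purely empirical one.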
The goals of the project are as follows:
Month 1-1.5: Literature review on uncertainty quantification and optimization under uncertainty;
Month 1.5-3: Formulate the problem of automatically learning problem-specific uncertainty sets from available data as a bilevel optimization problem;
Month 3-4: Provide convergence guarantees for the bilevel problem;
Month 4-5: Implement the algorithm on several examples (e.g., the newsvendor problem, trajectory optimization, portfolio optimization) and benchmark it against alternative approaches;
Month 5-6: Write a report and prepare a presentation.
While the above timeline is intended for a master thesis, a reduced version of the project can also be offered as a semester project.
Please send your resume/CV (including lists of relevant publications/projects) and transcript of
records in PDF format via email to mfochesato@control.ee.ethz.ch, rzuliani@control.ee.ethz.ch.
Prerequisites are as follows:
The project is suitable for master students;
Familiarity with convex optimization and numerical optimization is beneficial;
Good programming skills in Python are mandatory;
Proficiency in English.