Graduate Thesis Or Dissertation

Advances in Stochastic Optimization with Decision-Dependent Distributions

https://scholar.colorado.edu/concern/graduate_thesis_or_dissertations/qr46r2736
Abstract
  • The success of stochastic optimization hinges on the assumption that the distribution of the data remains stationary, both throughout the run of an optimization algorithm and after a solution is deployed. However, in applications where data acquisition requires feedback from humans with vested interests in the optimization outcome, this assumption often fails: people tend to modify their attributes to achieve desired results, shifting the data distribution. To capture this optimization-induced distributional shift, we formulate stochastic optimization problems in which the data distribution depends explicitly on the optimization variables.
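A minimal statement of this formulation, in notation of our own choosing (the thesis may use different symbols):

\[
\min_{x \in \mathcal{X}} \; \mathbb{E}_{z \sim \mathcal{D}(x)}\big[ f(x, z) \big],
\]

where \(\mathcal{D}(x)\) is the decision-dependent data distribution: the data \(z\) are drawn from a distribution that itself shifts with the decision variable \(x\).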

    We characterize two distinct types of solutions that arise: optimal points, which are universally best but require significant investment to find, and stable points, which can be found during “standard operation” but are only optimal for the behaviors they induce. This work provides convergence guarantees for stochastic gradient algorithms that find stable points using only feedback from the system. We demonstrate online tracking for a time-varying extension, both in expectation and with high probability. We show that stochastic saddle point problems with decision-dependence can be solved using derivative-free methods, and that the resulting stable point problem can be solved with stochastic primal-dual algorithms. Furthermore, we extend this framework to continuous games, demonstrating that an approximate Nash equilibrium can be reached when players are capable of learning a parameterized model of their distribution.
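As a toy illustration of the stable-point idea (our own example, not drawn from the thesis), repeated stochastic gradient descent on a quadratic loss with a mean-shifting Gaussian distribution converges to the stable point using only samples drawn from the currently induced distribution, with no knowledge of how that distribution depends on the decision:

```python
import random

# Toy decision-dependent problem (illustrative, not from the thesis):
# minimize E_{z ~ D(x)} (x - z)^2 with D(x) = N(mu + eps*x, sigma^2),
# so the data mean shifts in response to the decision x.
# Repeated stochastic gradient treats the distribution as fixed at each
# step; its fixed point is the stable point x_s satisfying
# x_s = mu + eps*x_s, i.e. x_s = mu / (1 - eps).

def repeated_sgd(mu=1.0, eps=0.5, sigma=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for t in range(1, steps + 1):
        z = rng.gauss(mu + eps * x, sigma)  # feedback: sample from D(x_t)
        grad = 2.0 * (x - z)                # gradient with D(x_t) held fixed
        x -= (0.5 / t) * grad               # decaying step size
    return x

x_stable = 1.0 / (1.0 - 0.5)  # = 2.0 for mu=1, eps=0.5
x_hat = repeated_sgd()
print(x_hat)
```

The algorithm never differentiates through the distribution map, which is exactly why it finds the stable point rather than the (possibly different) optimal point.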

Date Issued
  • 2024-07-28
Last Modified
  • 2025-01-08