Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
TA11 - ML5: Learning algorithms
Time: Tuesday, 28/June/2022, 8:30-10:00

Session Chair: Antoine Désir
Location: Forum 15


Presentations

Online learning via offline greedy algorithms: applications in market design and optimization

Rad Niazadeh1, Negin Golrezaei2, Joshua Wang3, Fransisca Susan2, Ashwinkumar Badanidiyuru3

1Chicago Booth School of Business, Operations Management; 2MIT Sloan School of Management, Operations Management; 3Google Research Mountain View

We study the problem of transforming offline algorithms into their online counterparts, focusing on offline combinatorial problems that are amenable to a constant-factor approximation by a greedy algorithm robust to local errors. We provide a general offline-to-online framework based on Blackwell approachability, achieving O(T^(1/2)) regret under full-information feedback and O(T^(2/3)) regret under bandit feedback. We apply our framework to problems in market design and operations and obtain improved regret bounds.
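The framework described in the talk rests on Blackwell approachability; as a much cruder illustration of the underlying "one online learner per greedy stage" idea, the hypothetical Python sketch below substitutes a standard Hedge (multiplicative-weights) learner for each of the k greedy steps under full-information feedback. All names and interfaces are illustrative assumptions, not the authors' algorithm.

    import numpy as np

    def hedge_weights(cum_rewards, eta):
        # Multiplicative-weights (Hedge) distribution over a finite ground set.
        w = np.exp(eta * (cum_rewards - cum_rewards.max()))
        return w / w.sum()

    def online_greedy(marginal_fns, n_elements, k, eta=0.1, seed=0):
        # marginal_fns: one callable per round; marginal_fns[t](partial, e) is the
        # adversarially chosen marginal reward of adding element e to 'partial'.
        rng = np.random.default_rng(seed)
        cum = np.zeros((k, n_elements))   # cumulative marginals seen by each stage
        total = 0.0
        for marginal in marginal_fns:     # one online round per arriving instance
            partial = []
            for j in range(k):            # stage j mimics step j of offline greedy
                p = hedge_weights(cum[j], eta)
                e = int(rng.choice(n_elements, p=p))
                total += marginal(partial, e)
                # full information: marginals of all elements are revealed
                cum[j] += np.array([marginal(partial, x) for x in range(n_elements)])
                partial.append(e)
        return total

The intuition is that each stage's learner competes with the element the offline greedy would have picked at that step; the paper's Blackwell-approachability machinery is what makes this precise and yields the stated regret rates.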



Deep policy iteration with integer programming for inventory management

Pavithra Harsha, Ashish Jagmohan, Jayant Kalagnanam, Brian Quanz, Divya Singhvi

Leonard N. Stern School of Business, United States of America

In this work, we discuss Programmable Actor Reinforcement Learning (PARL), a policy iteration method that uses techniques from integer programming and sample average approximation. We numerically benchmark the algorithm in complex supply chain settings where the optimal solution is intractable and show that it performs comparably to, and sometimes better than, state-of-the-art RL methods and commonly used inventory management benchmarks.
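As a rough, hypothetical illustration of the actor step described above, the sketch below chooses an integer order quantity by maximizing a sample-average approximation of immediate reward plus the critic's value of the next inventory state. The inventory dynamics, cost parameters, and every function name are assumptions made for this example; the talk's method embeds a learned critic inside a genuine integer program rather than enumerating actions.

    import numpy as np

    def parl_style_actor_step(state, critic, demand_sampler, q_max=50,
                              n_scenarios=100, price=5.0, cost=2.0,
                              holding=0.1, gamma=0.99):
        # Pick an integer order quantity by maximizing the SAA objective:
        # average over demand scenarios of immediate reward + discounted critic value.
        demands = demand_sampler(n_scenarios)          # SAA demand scenarios
        best_q, best_val = 0, -np.inf
        for q in range(q_max + 1):                     # integer action space
            inv = state + q
            sales = np.minimum(inv, demands)
            next_inv = inv - sales
            reward = price * sales - cost * q - holding * next_inv
            val = np.mean(reward + gamma * critic(next_inv))
            if val > best_val:
                best_q, best_val = q, val
        return best_q

Here critic is any vectorized value-function estimate (for example, critic = lambda s: -0.05 * s) and demand_sampler(n) returns n sampled demands, e.g. lambda n: np.random.default_rng(1).poisson(20, n).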



Representing random utility choice models with neural networks

Ali Aouad1, Antoine Désir2

1London Business School, United Kingdom; 2INSEAD

Motivated by the successes of deep learning, we propose a class of neural network-based discrete choice models, called RUMnets, inspired by the random utility maximization (RUM) framework. The model formulates the agents' random utility functions using the sample average approximation (SAA) method. We show that RUMnets sharply approximate the class of RUM discrete choice models, and we provide analytical and empirical evidence of their predictive power.
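To make the SAA idea concrete, the hypothetical sketch below samples a handful of latent utility shocks, scores each product under each sample with a small (untrained) network, and averages the per-sample softmax choices. The real RUMnet architecture also consumes customer features and is trained end to end; all names, shapes, and weights here are illustrative assumptions.

    import numpy as np

    def rumnet_style_choice_probs(product_feats, n_samples=10, hidden=16, seed=0):
        # product_feats: (n_products, d) array of product features.
        rng = np.random.default_rng(seed)
        n_products, d = product_feats.shape
        eps = rng.normal(size=(n_samples, hidden))             # sampled utility shocks
        W1 = rng.normal(scale=0.1, size=(d + hidden, hidden))  # untrained weights
        W2 = rng.normal(scale=0.1, size=(hidden, 1))
        probs = np.zeros(n_products)
        for k in range(n_samples):
            z = np.hstack([product_feats, np.tile(eps[k], (n_products, 1))])
            u = np.maximum(z @ W1, 0.0) @ W2                   # per-product utilities
            e = np.exp(u - u.max())
            probs += (e / e.sum()).ravel()                     # softmax as a smooth argmax
        return probs / n_samples                               # SAA of choice probabilities

Averaging over more samples tightens the sample-average approximation of the underlying choice probabilities.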


