Graduate Thesis Or Dissertation
Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component (Public, Deposited)
https://scholar.colorado.edu/concern/graduate_thesis_or_dissertations/3b591886z
- Abstract
- Markov Decision Processes (MDPs) are discrete-time stochastic processes that provide a framework for modeling sequential decision problems in stochastic environments. However, applying MDPs to real-world decision problems is often restricted, since such problems frequently have continuous variables as part of their state space. A common approach to extending MDPs to these problems is discretization, which suffers from inefficiency and inaccuracy. Here, we solve MDPs with both continuous and discrete state variables by assuming the reward function to be piecewise linear, while allowing the continuous variable to have a continuous transition function over an infinite support. We then use our approach to solve an MDP that models human behavior in a delayed-gratification task. Simulation results are presented to analyze the model's predictions, which are fit post hoc to synthetic as well as human data, in order to validate both the solution approach and the behavioral model.
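The abstract describes solving a hybrid MDP whose state has a discrete component and a continuous, stochastic component, with a piecewise-linear reward. The thesis's exact algorithm is not reproduced here; the sketch below is only an illustrative baseline of the general setup, in which the value function is stored at knot points and kept piecewise linear by interpolation, and the expectation over the continuous transition is approximated by Monte Carlo sampling. All names, dynamics, and parameters (`reward`, `transition_samples`, the Gaussian drift, the knot grid) are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the thesis's method): value iteration for an MDP
# whose state is (d, x) with d discrete and x continuous on [0, 1].
# The reward is piecewise linear in x; the value function is represented by
# its values at knot points and linearly interpolated between them.

np.random.seed(0)
GAMMA = 0.95
KNOTS = np.linspace(0.0, 1.0, 21)   # knots of the piecewise-linear value fn
ACTIONS = [0, 1]                    # e.g. 0 = "wait", 1 = "take reward"
N_DISCRETE = 2                      # number of discrete phases

def reward(d, x, a):
    # Piecewise linear in x: the slope changes at x = 0.5.
    base = x if x < 0.5 else 0.5 + 2.0 * (x - 0.5)
    return base if a == 1 else -0.01 * (d + 1)   # small cost of waiting

def transition_samples(x, a, n=64):
    # Continuous, stochastic transition: Gaussian drift, clipped to [0, 1].
    drift = 0.05 if a == 0 else 0.0
    return np.clip(x + drift + 0.02 * np.random.randn(n), 0.0, 1.0)

V = np.zeros((N_DISCRETE, KNOTS.size))   # V[d, k] = value at knot k
for _ in range(200):
    V_new = np.zeros_like(V)
    for d in range(N_DISCRETE):
        d_next = (d + 1) % N_DISCRETE    # deterministic discrete dynamics
        for k, x in enumerate(KNOTS):
            q_values = []
            for a in ACTIONS:
                xs = transition_samples(x, a)
                # Monte Carlo expectation of the interpolated value function.
                ev = np.interp(xs, KNOTS, V[d_next]).mean()
                q_values.append(reward(d, x, a) + GAMMA * ev)
            V_new[d, k] = max(q_values)  # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

print(V.shape)
```

Because the value estimate is linear between knots, the backup stays cheap while still tracking the kinks introduced by the piecewise-linear reward; a discretization-free method like the one the abstract describes would instead manipulate the linear pieces exactly rather than sampling on a grid.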
- Creator
- Date Issued
- 2017
- Academic Affiliation
- Advisor
- Committee Member
- Degree Grantor
- Commencement Year
- Subject
- Last Modified
- 2019-11-18
- Resource Type
- Rights Statement
- Language
Relationships

Items
Thumbnail | Title | Date Uploaded | Visibility | Actions
---|---|---|---|---
| analysisAndSolutionOfMarkovDecisionProblemsWithAContinuo.pdf | 2019-11-18 | Public | Download