Graduate Thesis or Dissertation

Parameter Dimension Reduction for Scientific Computing

https://scholar.colorado.edu/concern/graduate_thesis_or_dissertations/x633f122j
Abstract
  • Advances in computational power have enabled the simulation of increasingly complex physical systems. Mathematically, we represent these simulations as a mapping from inputs to outputs. Studying this map—e.g., performing optimization, quantifying uncertainties, etc.—is a critical component of computational science research. Such studies, however, can suffer from the curse of dimensionality—i.e., an exponential increase in computational cost resulting from increases in the input dimension. Dimension reduction combats this curse by determining relatively important (or unimportant) directions in the input space. The problem is then reformulated to emphasize the important directions while the unimportant directions are ignored. Functions that exhibit this sort of low-dimensional structure through linear transformations of the input space are known as ridge functions. Ridge functions appear as the basic components in various approximation and regression techniques, such as neural networks, projection pursuit regression, and multivariate Fourier series expansions. This work focuses on how to discover, interpret, and exploit ridge functions to improve scientific computing.

    In this thesis, we examine relationships between the ridge recovery technique of active subspaces and the physically motivated Buckingham Pi Theorem in magnetohydrodynamics (MHD) models. We show that active subspaces can recover known unitless quantities from MHD, such as the Reynolds and Hartmann numbers, through a log transformation of the inputs. We then study the relationship between ridge functions and statistical dimension reduction for regression problems—i.e., sufficient dimension reduction (SDR). We show that a class of SDR methods called inverse regression methods provides a gradient-free approach to ridge recovery when applied to deterministic functions. We examine the numerical properties of these methods as well as their failure cases. We also introduce novel algorithms for computing the underlying population matrices of these inverse regression methods using classical iterative methods for generating orthogonal polynomials. Finally, we introduce a new method for cheaply constructing accurate polynomial surrogates of one-dimensional ridge functions by obtaining generalized Gauss-Christoffel quadrature rules with respect to the marginal density on the one-dimensional ridge subspace.
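
For reference, a ridge function is one of the form f(x) = g(A^T x), where A is a tall matrix, so the output varies only along the column space of A. The sketch below is a minimal illustration of the log-transformation idea described above, not code from the thesis: a quantity that depends on its inputs only through a product of powers, such as a Reynolds-like number, becomes a ridge function of the logged inputs, and a gradient-based active subspace computation then recovers the exponents of the unitless group. The toy quantity of interest, sample design, and finite-difference gradients are all assumptions made for illustration.

    import numpy as np

    # Toy stand-in for an MHD quantity of interest (hypothetical): it
    # depends on four physical parameters only through the unitless
    # group rho*u*L/mu, i.e., a Reynolds-like number.
    def quantity_of_interest(x):
        mu, rho, u, L = x
        Re = rho * u * L / mu
        return np.log(Re) ** 2

    # In log space, y = log(x), products of powers become linear maps:
    # log(Re) = -y_mu + y_rho + y_u + y_L. So the model is a ridge
    # function of y, and its active subspace exposes the exponents.
    def f_log(y):
        return quantity_of_interest(np.exp(y))

    def active_subspace(f, Y, h=1e-6):
        # Monte Carlo estimate of C = E[grad f grad f^T] with central
        # finite differences, followed by an eigendecomposition.
        n, m = Y.shape
        C = np.zeros((m, m))
        for y in Y:
            g = np.zeros(m)
            for i in range(m):
                e = np.zeros(m)
                e[i] = h
                g[i] = (f(y + e) - f(y - e)) / (2 * h)
            C += np.outer(g, g)
        C /= n
        vals, vecs = np.linalg.eigh(C)
        return vals[::-1], vecs[:, ::-1]   # descending order

    rng = np.random.default_rng(0)
    Y = rng.uniform(-0.5, 0.5, size=(200, 4))   # samples of y = log(x)
    vals, vecs = active_subspace(f_log, Y)
    print(vals)        # one dominant eigenvalue -> a one-dimensional ridge
    print(vecs[:, 0])  # proportional to the exponents (-1, 1, 1, 1)/2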
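
The inverse regression family mentioned above includes sliced inverse regression (SIR). Below is a minimal sketch of SIR used as a gradient-free ridge-recovery tool on a deterministic one-dimensional ridge function; the link function, input design, and slice count are placeholder assumptions, not the thesis's algorithms.

    import numpy as np

    def sir_directions(X, y, n_slices=10):
        # Sliced inverse regression: order the data by output, partition
        # into slices, and eigendecompose the weighted covariance of the
        # within-slice input means. No gradients of the map are needed.
        n, m = X.shape
        Z = X - X.mean(axis=0)   # assumes roughly whitened inputs; in
                                 # general, whiten by the sample covariance
        order = np.argsort(y)
        M = np.zeros((m, m))
        for idx in np.array_split(order, n_slices):
            mu = Z[idx].mean(axis=0)
            M += (len(idx) / n) * np.outer(mu, mu)
        vals, vecs = np.linalg.eigh(M)
        return vals[::-1], vecs[:, ::-1]

    rng = np.random.default_rng(1)
    a = np.array([0.6, 0.0, 0.8])    # unit-norm ridge direction
    X = rng.standard_normal((5000, 3))
    y = np.tanh(X @ a)               # deterministic 1-D ridge, no noise
    vals, vecs = sir_directions(X, y)
    print(vecs[:, 0])                # approximately +/- a

    # A known failure case: SIR returns nothing useful when the slice
    # means vanish, e.g., for even links like (a^T x)^2 over symmetric
    # inputs; second-moment methods such as SAVE handle those cases.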
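
The quadrature construction in the final contribution rests on classical machinery for orthogonal polynomials. The sketch below, written under the assumption that samples of the one-dimensional ridge variable u = a^T x are available, uses the discrete Stieltjes procedure to build recurrence coefficients for an empirical marginal measure, then the Golub-Welsch eigenvalue step to convert them into Gauss-Christoffel nodes and weights. The equal-weight empirical measure and the five-node rule are illustrative choices, not the thesis's generalized construction.

    import numpy as np

    def gauss_from_samples(u, k):
        # Discrete Stieltjes procedure: three-term recurrence coefficients
        # for polynomials orthogonal w.r.t. the empirical measure of u,
        # followed by the Golub-Welsch eigenvalue step.
        n = len(u)
        w = np.full(n, 1.0 / n)       # equal-weight empirical measure
        alpha = np.zeros(k)
        beta = np.zeros(k)
        beta[0] = w.sum()             # total mass of the measure
        pi_prev = np.zeros(n)
        pi_cur = np.ones(n)
        norm2_prev = 1.0
        for j in range(k):
            norm2 = np.sum(w * pi_cur**2)
            alpha[j] = np.sum(w * u * pi_cur**2) / norm2
            if j > 0:
                beta[j] = norm2 / norm2_prev
            pi_next = (u - alpha[j]) * pi_cur \
                      - (beta[j] if j > 0 else 0.0) * pi_prev
            pi_prev, pi_cur = pi_cur, pi_next
            norm2_prev = norm2
        # Jacobi matrix: its eigenvalues are the nodes; scaled first
        # components of the eigenvectors give the weights.
        J = np.diag(alpha) \
            + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
        nodes, vecs = np.linalg.eigh(J)
        return nodes, beta[0] * vecs[0, :]**2

    rng = np.random.default_rng(2)
    a = np.array([0.6, 0.8])          # assumed known ridge direction
    X = rng.standard_normal((20000, 2))
    u = X @ a                         # samples of the 1-D ridge variable
    nodes, weights = gauss_from_samples(u, 5)
    # The 5-node rule matches the empirical moments of u (here ~ N(0,1)):
    print(np.sum(weights * nodes**4))  # close to E[u^4] = 3

Once nodes and weights are in hand, a polynomial surrogate of the ridge profile needs only k model evaluations along the ridge direction, which is in the spirit of the cheap surrogate construction the abstract describes.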
Date Issued
  • 2018
Last Modified
  • 2019-11-14
