Graduate Thesis or Dissertation
Parallelized Deep Neural Networks for Distributed Intelligent Systems (Public Deposited)
https://scholar.colorado.edu/concern/graduate_thesis_or_dissertations/73666499d
- Abstract
- We present a rigorous analysis of distributed intelligent systems, particularly through work on large-scale deep neural networks. We show how networks represent functions, and examine how all functions and physical systems can be learned by an infinite number of neural networks. Stressing dimensionality reduction as key to network optimization, we study encoding, energy minimization, and topographic independent components analysis. We explain how networks can be parallelized along local receptive fields by asynchronous stochastic gradient descent, and how robustness can be increased with adaptive subgradients. We show how communication latency across an InfiniBand cluster grows linearly with the number of computers, a positive result for large-scale parallelization of neural networks via message passing. We also present results of a topographic hierarchical network model of the human visual cortex on the NYU Object Recognition Benchmark.
- Creator
- Date Issued
- 2013
- Academic Affiliation
- Advisor
- Committee Member
- Degree Grantor
- Commencement Year
- Subject
- Last Modified
- 2019-11-18
- Resource Type
- Rights Statement
- Language
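The abstract's "adaptive subgradients" refers to per-coordinate step sizes that shrink with accumulated squared gradients (the AdaGrad scheme), which damps noisy gradient directions in asynchronous training. Below is a minimal sketch of that update on a toy quadratic objective; the function name, learning rate, and objective are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def adagrad_update(w, grad, accum, lr=0.5, eps=1e-8):
    """One adaptive-subgradient (AdaGrad-style) step.

    Each coordinate's effective learning rate is
    lr / sqrt(sum of past squared gradients), so frequently
    large (noisy) directions get smaller steps over time.
    """
    accum = accum + grad ** 2                  # accumulate squared gradients
    w = w - lr * grad / (np.sqrt(accum) + eps) # per-coordinate scaled step
    return w, accum

# Toy example: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
accum = np.zeros_like(w)
for _ in range(100):
    w, accum = adagrad_update(w, 2.0 * w, accum)
```

In the asynchronous setting described in the abstract, each worker would apply such an update to a shared parameter vector without locking; the shrinking per-coordinate rates help keep stale-gradient updates from destabilizing training.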
Relationships
Items
Thumbnail | Title | Date Uploaded | Access | Actions
---|---|---|---|---
 | parallelizedDeepNeuralNetworksForDistributedIntelligentSys.pdf | 2019-11-18 | Public | Download