Date of Award
1990
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Computer Science
First Advisor
Sitharama Iyengar
Abstract
This dissertation addresses a fundamental problem in computational AI: developing a class of massively parallel neural algorithms for robustly learning, in real time, complex nonlinear transformations from representative exemplars. Such a capability is at the core of many real-life problems in robotics, signal processing, and control. The concepts of terminal attractors in dynamical systems theory and adjoint operators in nonlinear sensitivity theory are exploited to provide a firm mathematical foundation for learning such mappings with dynamical neural networks, while achieving a dramatic reduction in overall computational cost. Further, we derive an efficient methodology for handling a multiplicity of application-specific constraints at run time, one that precludes additional retraining or disturbing the synaptic structure of the "learned" network. The scalability of the proposed theoretical models to large-scale embodiments in neural hardware is analyzed. Neurodynamical parameters, e.g., decay constants and response gains, are systematically analyzed to understand their implications for network scalability, convergence, throughput, and fault tolerance, during both concurrent simulations and implementation in concurrent, asynchronous VLSI, optical, and opto-electronic hardware. Dynamical diagnostics, e.g., Lyapunov exponents, are used to formally characterize the widely observed dynamical instability in neural networks as "emergent computational chaos". Using contracting operators and nonconstructive theorems from fixed point theory, we rigorously derive necessary and sufficient conditions for eliminating all oscillatory and chaotic behavior in additive-type networks. Extensive benchmarking experiments are conducted with arbitrarily large neural networks (over 100 million interconnects) to verify the methodological robustness of our network "conditioning" formalisms.
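The contraction-based stability conditions for additive-type networks mentioned above can be illustrated numerically. The following is a minimal sketch, not the dissertation's own derivation: it assumes the classical additive form dx/dt = -x + W·tanh(x) + I and uses the well-known sufficient condition that the map x ↦ W·tanh(x) + I be a contraction (spectral norm ‖W‖₂ · L < 1, with L = 1 the Lipschitz constant of tanh), under which all trajectories converge to a unique fixed point with no oscillatory or chaotic behavior.

```python
import numpy as np

# Additive-type network dynamics (assumed form, for illustration):
#   dx/dt = -x + W @ tanh(x) + I
# Sufficient condition for a unique, globally attracting fixed point:
#   ||W||_2 * L < 1, where L = 1 is the Lipschitz constant of tanh.

rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n))
W *= 0.8 / np.linalg.norm(W, 2)   # rescale so the contraction condition holds
I = rng.standard_normal(n)

def spectral_contraction_holds(W, lipschitz=1.0):
    """Check the sufficient stability condition ||W||_2 * L < 1."""
    return np.linalg.norm(W, 2) * lipschitz < 1.0

def simulate(x0, W, I, dt=0.01, steps=20000):
    """Forward-Euler integration of the additive network dynamics."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + W @ np.tanh(x) + I)
    return x

# Two widely separated initial states converge to the same equilibrium,
# consistent with the contraction argument ruling out oscillation/chaos.
x_a = simulate(rng.standard_normal(n), W, I)
x_b = simulate(10 * rng.standard_normal(n), W, I)

print(spectral_contraction_holds(W))        # True
print(np.allclose(x_a, x_b, atol=1e-6))     # True: a single global attractor
```

The `0.8` rescaling and the `tanh` nonlinearity are assumptions made for this sketch; the dissertation's necessary-and-sufficient conditions are derived from fixed point theory rather than this simple spectral-norm bound.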
Finally, we provide insight into exploiting our proposed repertoire of neural learning formalisms to address a fundamental problem in robotics: manipulation controller design for robots operating in unpredictable environments. Using recent results in task analysis and dynamic modeling, we develop the "Perceptual Manipulation Architecture". The architecture, conceptualized within a perceptual framework, is shown to go well beyond the state of the art in model-directed robotics. For a stronger physical interpretation of its implications, our discussions are embedded in the context of a novel systems concept for automated space operations.
Recommended Citation
Gulati, Sandeep, "Computational Neural Learning Formalisms for Perceptual Manipulation: Singularity Interaction Dynamics Model." (1990). LSU Historical Dissertations and Theses. 5051.
https://repository.lsu.edu/gradschool_disstheses/5051
Pages
257
DOI
10.31390/gradschool_disstheses.5051