Title
A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints
Document Type
Article
Publication Date
12-1-2000
Abstract
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and converges to the set of equilibria of the neural network for any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory eventually converges to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints. © 2000 IEEE.
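The abstract does not state the network dynamics themselves. As a minimal sketch only, the snippet below simulates a projection-type continuous-time model commonly used for bound-constrained optimization, assuming dynamics of the form dx/dt = lambda * (P_Omega(x - alpha * grad f(x)) - x), where P_Omega is the projection onto the bound (box) region; this is not necessarily the paper's exact model, and the parameters lambda, alpha, dt, and steps are hypothetical values chosen for illustration. The example starts from an infeasible initial state and converges into the feasible box, mirroring the attractivity and convergence properties described above.

```python
# Minimal sketch (assumed projection-type dynamics, not the paper's exact model):
# dx/dt = lam * (P_Omega(x - alpha * grad_f(x)) - x), integrated by forward Euler.
import numpy as np

def project_box(x, lo, hi):
    """Project a point onto the box [lo, hi] componentwise."""
    return np.minimum(np.maximum(x, lo), hi)

def simulate(grad_f, x0, lo, hi, lam=1.0, alpha=0.5, dt=1e-3, steps=20000):
    """Integrate the assumed projection-type dynamics; parameters are illustrative."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * lam * (project_box(x - alpha * grad_f(x), lo, hi) - x)
    return x

# Example: strictly convex quadratic f(x) = 0.5 x^T Q x + c^T x over the box [-1, 1]^2.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
grad_f = lambda x: Q @ x + c

# Initial state outside the feasible box; the trajectory enters the box and
# settles near the constrained minimizer.
x_star = simulate(grad_f, x0=[2.0, -2.0], lo=-1.0, hi=1.0)
print(x_star)
```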
Publication Source (Journal or Book title)
IEEE Transactions on Neural Networks
First Page
1251
Last Page
1262
Recommended Citation
Liang, X., & Wang, J. (2000). A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints. IEEE Transactions on Neural Networks, 11 (6), 1251-1262. https://doi.org/10.1109/72.883412