
Recurrent Neural Networks: Design and Applications [Hardback]

Edited by Larry R. Medsker and Lakhmi C. Jain
  • Format: Hardback, 416 pages, height x width: 235x156 mm, weight: 744 g, 278 equations; 11 tables, black and white
  • Series: International Series on Computational Intelligence
  • Publication date: 20-Dec-1999
  • Publisher: CRC Press Inc
  • ISBN-10: 0849371813
  • ISBN-13: 9780849371813
Thirteen chapters summarize the design and application of recurrent neural networks (RNNs) and exemplify current research ideas and challenges in this subfield of artificial neural network research and development. The first section concentrates on ideas for alternate designs and advances in theoretical aspects of RNNs. Some authors discuss aspects of improving RNN performance and connections with Bayesian analysis and knowledge representation. The second section looks at recent applications of RNNs, such as trajectory generation, control systems, robotics, and language learning. Architectures and learning techniques are addressed in every chapter. Annotation © Book News, Inc., Portland, OR (booknews.com)

With existing applications ranging from motion detection to music synthesis to financial forecasting, recurrent neural networks have attracted widespread attention. That interest drives Recurrent Neural Networks: Design and Applications, a summary of the design, applications, current research, and challenges of this subfield of artificial neural networks.
This overview covers the major aspects of recurrent neural networks. It outlines the wide variety of learning techniques and the research projects associated with them. Each chapter addresses architectures, from fully connected to partially connected networks, including recurrent multilayer feedforward networks. The book presents problems involving trajectory generation, control systems, and robotics, as well as the use of RNNs in chaotic systems. The authors also share their expertise on alternative designs and advances in the theoretical aspects of recurrent networks.
The dynamical behavior of recurrent neural networks makes them useful for solving problems in science, engineering, and business, and the approach is likely to yield substantial advances in the coming years. Recurrent Neural Networks illuminates these opportunities and provides a broad view of current developments in this rich field.
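
For orientation only, here is a minimal sketch of the kind of fully connected recurrent architecture the book builds on: a hidden state that is fed back into itself at every discrete time step. This is not code from the book; the library (NumPy), the sizes, and all names are illustrative assumptions.

    import numpy as np

    # Minimal, illustrative fully connected recurrent layer (not from the book):
    # every hidden unit feeds back to every hidden unit on the next time step.
    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 5                                      # arbitrary toy sizes

    W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))        # input-to-hidden weights
    W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))   # hidden-to-hidden (recurrent) weights
    b = np.zeros(n_hidden)

    def step(x_t, h_prev):
        """One discrete-time state update: h_t = tanh(W_in x_t + W_rec h_{t-1} + b)."""
        return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

    # Carry the hidden state across a toy 10-step input sequence.
    h = np.zeros(n_hidden)
    for x_t in rng.normal(size=(10, n_in)):
        h = step(x_t, h)
    print("final hidden state:", h)

Partially connected and recurrent multilayer feedforward variants differ mainly in which of these weight matrices exist and where the feedback connections are placed.
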
Table of contents
  • Introduction (Samir B. Unadkat, Malina M. Ciocoiu, Larry R. Medsker). Sections: Overview; Recurrent Neural Net Architectures; Learning in Recurrent Neural Nets; Design Issues and Theory; Optimization; Discrete-Time Systems; Bayesian Belief Revision; Knowledge Representation; Long-Term Dependencies; Applications; Chaotic Recurrent Networks; Language Learning; Sequential Autoassociation; Trajectory Problems; Filtering and Control; Adaptive Robot Behavior; Future Directions
  • Recurrent Neural Networks for Optimization: The State of the Art (Youshen Xia, Jun Wang). Sections: Introduction; Continuous-Time Neural Networks for QP and LCP; Problems and Design of Neural Networks; Primal-Dual Neural Networks for LP and QP; Neural Networks for LCP; Discrete-Time Neural Networks for QP and LCP; Neural Networks for QP and LCP; Primal-Dual Neural Network for Linear Assignment; Simulation Results; Concluding Remarks
  • Efficient Second-Order Learning Algorithms for Discrete-Time Recurrent Neural Networks (Euripedes P. dos Santos, Fernando J. Von Zuben). Sections: Introduction; Spatial × Spatio-Temporal Processing; Computational Capability; Recurrent Neural Networks as Nonlinear Dynamic Systems; Recurrent Neural Networks and Second-Order Learning Algorithms; Recurrent Neural Network Architectures; State Space Representation for Recurrent Neural Networks; Second-Order Information in Optimization-Based Learning Algorithms; The Conjugate Gradient Algorithm; The Algorithm; The Case of Non-Quadratic Functions; Scaled Conjugate Gradient Algorithm; An Improved SCGM Method; Hybridization in the Choice of βj; Exact Multiplication by the Hessian; The Learning Algorithm for Recurrent Neural Networks; Computation of ∇E_T(W); Computation of H(W)V; Simulation Results; Concluding Remarks
  • Designing High Order Recurrent Networks for Bayesian Belief Revision (Ashraf Abdelbar). Sections: Introduction; Belief Revision and Reasoning Under Uncertainty; Reasoning Under Uncertainty; Bayesian Belief Networks; Belief Revision; Approaches to Finding MAP Assignments; Hopfield Networks and Mean Field Annealing; Optimization and the Hopfield Network; Boltzmann Machine; Mean Field Annealing; High Order Recurrent Networks; Efficient Data Structures for Implementing HORNs; Designing HORNs for Belief Revision; Conclusions
  • Equivalence in Knowledge Representation: Automata, Recurrent Neural Networks, and Dynamical Fuzzy Systems (C. Lee Giles, Christian W. Omlin, K. K. Thornber). Sections: Introduction; Motivation; Background; Overview; Fuzzy Finite State Automata; Representation of Fuzzy States; Preliminaries; DFA Encoding Algorithm; Recurrent State Neurons with Variable Output Range; Programming Fuzzy State Transitions; Automata Transformation; Preliminaries; Transformation Algorithm; Example; Properties of the Transformation Algorithm; Network Architecture; Network Stability Analysis; Preliminaries; Fixed Point Analysis for Sigmoidal Discriminant Function; Network Stability; Simulations; Conclusions
  • Learning Long-Term Dependencies in NARX Recurrent Neural Networks (Tsungnan Lin, Bill G. Horne, Peter Tino, C. Lee Giles). Sections: Introduction; Vanishing Gradients and Long-Term Dependencies; NARX Networks; An Intuitive Explanation of NARX Network Behavior; Experimental Results; The Latching Problem; An Automaton Problem; Conclusion; Appendix
  • Oscillation Responses in a Chaotic Recurrent Network (Judy Dayhoff, Peter J. Palmadesso, Fred Richards). Sections: Introduction; Progression to Chaos; Activity Measurements; Different Initial States; External Patterns; Progression from Chaos to a Fixed Point; Quick Response; Dynamic Adjustment of Pattern Strength; Characteristics of the Pattern-to-Oscillation Map; Discussion
  • Lessons from Language Learning (Stefan C. Kremer). Sections: Introduction; Language Learning; Classical Grammar Induction; Grammatical Induction; Grammars in Recurrent Networks; Outline; Language Learning Is Hard; When Possible, Search a Smaller Space; An Example: Where Did I Leave My Keys?; Reducing and Ordering in Grammatical Induction; Restricted Hypothesis Spaces in Connectionist Networks; Choose an Appropriate Network Topology; Choose a Limited Number of Hidden Units; Fix Some Weights; Set Initial Weights; Search the Most Likely Places First; Order Your Training Data; Classical Results; Input Ordering Used in Recurrent Networks; How Recurrent Networks Pay Attention to Order; Summary
  • Recurrent Autoassociative Networks: Developing Distributed Representations of Hierarchically Structured Sequences by Autoassociation (Ivelin Stoianov). Sections: Introduction; Sequences, Hierarchy, and Representations; Neural Networks and Sequential Processing; Architectures; Representing Natural Language; Recurrent Autoassociative Networks; Training RAN with the Backpropagation Through Time Learning Algorithm; Experimenting with RANs: Learning Syllables; A Cascade of RANs; Simulation with a Cascade of RANs: Representing Polysyllabic Words; A More Realistic Experiment: Looking for Systematicity; Going Further to a Cognitive Model; Discussion; Conclusions
  • Comparison of Recurrent Neural Networks for Trajectory Generation (David G. Hagner, Mohamad H. Hassoun, Paul B. Watta). Sections: Introduction; Architecture; Training Set; Error Function and Performance Metric; Training Algorithms; Gradient Descent and Conjugate Gradient Descent; Recursive Least Squares and the Kalman Filter; Simulations; Algorithm Speed; Circle Results; Figure-Eight Results; Algorithm Analysis; Algorithm Stability; Convergence Criteria; Trajectory Stability and Convergence Dynamics; Conclusions
  • Training Algorithms for Recurrent Neural Nets that Eliminate the Need for Computation of Error Gradients with Application to Trajectory Production Problem (Malur K. Sundareshan, Yee Chin Wong, Thomas Condarcure). Sections: Introduction; Description of the Learning Problem and Some Issues in Spatiotemporal Training; General Framework and Training Goals; Recurrent Neural Network Architectures; Some Issues of Interest in Neural Network Training; Training by Methods of Learning Automata; Some Basics on Learning Automata; Application to Training Recurrent Networks; Trajectory Generation Performance; Training by Simplex Optimization Method; Some Basics on Simplex Optimization; Application to Training Recurrent Networks; Trajectory Generation Performance; Conclusions
  • Training Recurrent Neural Networks for Filtering and Control (Martin T. Hagan, Orlando De Jesus, Roger Schultz). Sections: Introduction; Preliminaries; Layered Feedforward Network; Layered Digital Recurrent Network; Principles of Dynamic Learning; Dynamic Backprop for the LDRN; Preliminaries; Explicit Derivatives; Complete FP Algorithms for the LDRN; Neurocontrol Application; Recurrent Filter; Summary
  • Remembering How To Behave: Recurrent Neural Networks for Adaptive Robot Behavior (T. Ziemke). Sections: Introduction; Background; Recurrent Neural Networks for Adaptive Robot Behavior; Motivation; Robot and Simulator; Robot Control Architectures; Experiment 1; Experiment 2; Summary and Discussion
  • Index
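
Several chapters listed above train such networks with gradient methods, for example the recurrent autoassociative networks trained by backpropagation through time, while the NARX chapter analyzes why those gradients can vanish over long time lags. As rough orientation only, and not as code from the book, backpropagation through time for a toy tanh recurrent network with a linear readout at the final step can be sketched as follows (every name, size, and the task are assumptions):

    import numpy as np

    # Illustrative backpropagation-through-time (BPTT) sketch: unroll the
    # recurrence, then push the error back through every time step.
    rng = np.random.default_rng(1)
    n_in, n_hidden, n_out, T = 2, 4, 1, 6                      # arbitrary toy sizes

    W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
    W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
    W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))
    b = np.zeros(n_hidden)

    xs = rng.normal(size=(T, n_in))                            # toy input sequence
    target = np.array([0.5])                                   # toy target at the final step

    for epoch in range(200):
        # Forward pass: unroll the recurrence and keep every hidden state.
        hs = [np.zeros(n_hidden)]
        for x_t in xs:
            hs.append(np.tanh(W_in @ x_t + W_rec @ hs[-1] + b))
        y = W_out @ hs[-1]
        loss = 0.5 * np.sum((y - target) ** 2)

        # Backward pass: propagate the error gradient back through time.
        dW_in, dW_rec, db = np.zeros_like(W_in), np.zeros_like(W_rec), np.zeros_like(b)
        dy = y - target
        dW_out = np.outer(dy, hs[-1])
        dh = W_out.T @ dy                                      # gradient w.r.t. the last hidden state
        for t in range(T, 0, -1):
            da = dh * (1.0 - hs[t] ** 2)                       # through the tanh nonlinearity
            dW_in += np.outer(da, xs[t - 1])
            dW_rec += np.outer(da, hs[t - 1])
            db += da
            dh = W_rec.T @ da                                  # hand the gradient to the previous step

        # Plain gradient-descent update.
        for p, g in ((W_in, dW_in), (W_rec, dW_rec), (W_out, dW_out), (b, db)):
            p -= 0.1 * g

    print("final loss:", loss)

The repeated multiplication by W_rec and the tanh derivative in the backward loop is where the vanishing-gradient problem discussed in the NARX chapter arises: over long sequences dh can shrink toward zero, which is why several chapters explore architectures and training methods that cope better with long-term dependencies.
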