Systems Engineering Neural Networks
Other Authors: | |
---|---|
Format: | E-book |
Language: | English |
Published: | Hoboken, NJ : John Wiley & Sons, Inc, [2023] |
Subjects: | |
View at Biblioteca Universitat Ramon Llull: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009811329906719 |
Table of Contents:
- Cover
- Title Page
- Copyright
- Contents
- About the Authors
- Acknowledgements
- How to Read this Book
- Part I Setting the Scene
- Chapter 1 A Brief Introduction
- 1.1 The Systems Engineering Approach to Artificial Intelligence (AI)
- 1.2 Chapter Summary
- Questions
- Chapter 2 Defining a Neural Network
- 2.1 Biological Networks
- 2.2 From Biology to Mathematics
- 2.3 We Came a Full Circle
- 2.4 The Model of McCulloch‐Pitts
- 2.5 The Artificial Neuron of Rosenblatt
- 2.6 Final Remarks
- 2.7 Chapter Summary
- Questions
- Sources
- Chapter 3 Engineering Neural Networks
- 3.1 A Brief Recap on Systems Engineering
- 3.2 The Keystone: SE4AI and AI4SE
- 3.3 Engineering Complexity
- 3.4 The Sport System
- 3.5 Engineering a Sports Club
- 3.6 Optimization
- 3.7 An Example of Decision Making
- 3.8 Futurism and Foresight
- 3.9 Qualitative to Quantitative
- 3.10 Fuzzy Thinking
- 3.11 It Is all in the Tools
- 3.12 Chapter Summary
- Questions
- Sources
- Part II Neural Networks in Action
- Chapter 4 Systems Thinking for Software Development
- 4.1 Programming Languages
- 4.2 One More Thing: Software Engineering
- 4.3 Chapter Summary
- Questions
- Source
- Chapter 5 Practice Makes Perfect
- 5.1 Example 1: Cosine Function
- 5.2 Example 2: Corrosion on a Metal Structure
- 5.3 Example 3: Defining Roles of Athletes
- 5.4 Example 4: Athlete's Performance
- 5.5 Example 5: Team Performance
- 5.5.1 A Human‐Defined‐System
- 5.5.2 Human Factors
- 5.5.3 The Sports Team as System of Interest
- 5.5.4 Impact of Human Error on Sports Team Performance
- 5.5.4.1 Dataset
- 5.5.4.2 Problem Statement
- 5.5.4.3 Feature Engineering and Extraction
- 5.5.4.4 Creation of Computed Columns
- 5.5.4.5 Explorative Data Analysis (EDA)
- 5.5.4.6 Extension ‐ Sampling Method for an Imbalanced Dataset
- 5.5.4.7 Building a Neural Network Model
- 5.5.4.8 Training Outcome and Model Evaluation
- 5.5.4.9 Evaluate Using Test Data
- 5.6 Example 6: Trend Prediction
- 5.7 Example 7: Symplex and Game Theory
- 5.8 Example 8: Sorting Machine for Lego® Bricks
- 5.8.1 Challenge for Readers
- Part III Down to the Basics
- Chapter 6 Input/Output, Hidden Layer and Bias
- 6.1 Input/Output
- 6.2 Hidden Layer
- 6.2.1 How Many Hidden Nodes Should we Have?
- 6.3 Bias
- 6.4 Final Remarks
- 6.5 Chapter Summary
- Questions
- Source
- Chapter 7 Activation Function
- 7.1 Types of Activation Functions
- 7.2 Activation Function Derivatives
- 7.3 Activation Functions Response to W and b Variables
- 7.4 Final Remarks
- 7.5 Chapter Summary
- Questions
- Source
- Chapter 8 Cost Function, Back‐Propagation and Other Iterative Methods
- 8.1 What Is the Difference between Loss and Cost?
- 8.2 Training the Neural Network
- 8.3 Back‐Propagation (BP)
- 8.4 One More Thing: Gradient Method and Conjugate Gradient Method
- 8.5 One More Thing: Newton's Method
- 8.6 Chapter Summary
- Questions
- Sources
- Chapter 9 Conclusions and Future Developments
- Glossary and Insights
- Index
- EULA.