Deep learning for natural language processing

Explore the most challenging issues of natural language processing, and learn how to solve them with cutting-edge deep learning! Deep learning has advanced natural language processing to exciting new levels and powerful new applications! For the first time, computer systems can achieve "human"...


Bibliographic Details
Other Authors: Raaijmakers, Stephan (author)
Format: Electronic book
Language: English
Published: Shelter Island, New York : Manning Publications Co. LLC, [2022]
Edition: [First edition]
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009707504206719
Table of Contents:
  • Intro
  • Deep Learning for Natural Language Processing
  • Copyright
  • brief contents
  • contents
  • front matter
  • preface
  • acknowledgments
  • about this book
  • Who should read this book
  • How this book is organized: A road map
  • About the code
  • liveBook discussion forum
  • about the author
  • about the cover illustration
  • Part 1. Introduction
  • 1 Deep learning for NLP
  • 1.1 A selection of machine learning methods for NLP
  • 1.1.1 The perceptron
  • 1.1.2 Support vector machines
  • 1.1.3 Memory-based learning
  • 1.2 Deep learning
  • 1.3 Vector representations of language
  • 1.3.1 Representational vectors
  • 1.3.2 Operational vectors
  • 1.4 Vector sanitization
  • 1.4.1 The hashing trick
  • 1.4.2 Vector normalization
  • Summary
  • 2 Deep learning and language: The basics
  • 2.1 Basic architectures of deep learning
  • 2.1.1 Deep multilayer perceptrons
  • 2.1.2 Two basic operators: Spatial and temporal
  • 2.2 Deep learning and NLP: A new paradigm
  • Summary
  • 3 Text embeddings
  • 3.1 Embeddings
  • 3.1.1 Embedding by direct computation: Representational embeddings
  • 3.1.2 Learning to embed: Procedural embeddings
  • 3.2 From words to vectors: Word2Vec
  • 3.3 From documents to vectors: Doc2Vec
  • Summary
  • Part 2. Deep NLP
  • 4 Textual similarity
  • 4.1 The problem
  • 4.2 The data
  • 4.2.1 Authorship attribution and verification data
  • 4.3 Data representation
  • 4.3.1 Segmenting documents
  • 4.3.2 Word-level information
  • 4.3.3 Subword-level information
  • 4.4 Models for measuring similarity
  • 4.4.1 Authorship attribution
  • 4.4.2 Verifying authorship
  • Summary
  • 5 Sequential NLP
  • 5.1 Memory and language
  • 5.1.1 The problem: Question Answering
  • 5.2 Data and data processing
  • 5.3 Question Answering with sequential models
  • 5.3.1 RNNs for Question Answering
  • 5.3.2 LSTMs for Question Answering
  • 5.3.3 End-to-end memory networks for Question Answering
  • Summary
  • 6 Episodic memory for NLP
  • 6.1 Memory networks for sequential NLP
  • 6.2 Data and data processing
  • 6.2.1 PP-attachment data
  • 6.2.2 Dutch diminutive data
  • 6.2.3 Spanish part-of-speech data
  • 6.3 Strongly supervised memory networks: Experiments and results
  • 6.3.1 PP-attachment
  • 6.3.2 Dutch diminutives
  • 6.3.3 Spanish part-of-speech tagging
  • 6.4 Semi-supervised memory networks
  • 6.4.1 Semi-supervised memory networks: Experiments and results
  • Summary
  • Part 3. Advanced topics
  • 7 Attention
  • 7.1 Neural attention
  • 7.2 Data
  • 7.3 Static attention: MLP
  • 7.4 Temporal attention: LSTM
  • 7.5 Experiments
  • 7.5.1 MLP
  • 7.5.2 LSTM
  • Summary
  • 8 Multitask learning
  • 8.1 Introduction to multitask learning
  • 8.2 Multitask learning
  • 8.3 Multitask learning for consumer reviews: Yelp and Amazon
  • 8.3.1 Data handling
  • 8.3.2 Hard parameter sharing
  • 8.3.3 Soft parameter sharing
  • 8.3.4 Mixed parameter sharing
  • 8.4 Multitask learning for Reuters topic classification
  • 8.4.1 Data handling
  • 8.4.2 Hard parameter sharing
  • 8.4.3 Soft parameter sharing
  • 8.4.4 Mixed parameter sharing
  • 8.5 Multitask learning for part-of-speech tagging and named-entity recognition
  • 8.5.1 Data handling
  • 8.5.2 Hard parameter sharing
  • 8.5.3 Soft parameter sharing
  • 8.5.4 Mixed parameter sharing
  • Summary
  • 9 Transformers
  • 9.1 BERT up close: Transformers
  • 9.2 Transformer encoders
  • 9.2.1 Positional encoding
  • 9.3 Transformer decoders
  • 9.4 BERT: Masked language modeling
  • 9.4.1 Training BERT
  • 9.4.2 Fine-tuning BERT
  • 9.4.3 Beyond BERT
  • Summary
  • 10 Applications of Transformers: Hands-on with BERT
  • 10.1 Introduction: Working with BERT in practice
  • 10.2 A BERT layer
  • 10.3 Training BERT on your data
  • 10.4 Fine-tuning BERT
  • 10.5 Inspecting BERT
  • 10.5.1 Homonyms in BERT
  • 10.6 Applying BERT
  • Summary
  • bibliography
  • index