10+ Hours of Video Instruction
Learn how to apply state-of-the-art transformer-based LLMs, including BERT, ChatGPT, GPT-3, and T5, to solve modern NLP tasks.
Overview:
Introduction to Transformer Models for NLP LiveLessons provides a comprehensive overview of LLMs, transformers, and the mechanisms (attention, embedding, and tokenization) that set the stage for state-of-the-art NLP models like BERT and ChatGPT to flourish. These lessons focus on building a practical, comprehensive, and functional understanding of transformer architectures and how they are used to create modern NLP pipelines. Throughout the series, instructor Sinan Ozdemir brings the theory to life with illustrations, worked mathematical examples, and straightforward Python examples in Jupyter notebooks.
All lessons in the course are grounded in real-life case studies and hands-on code examples. After completing the course, you will be in a great position to understand and build cutting-edge NLP pipelines using transformers. Extensive resources and curriculum details are available in the course's GitHub repository.
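To make the attention mechanism mentioned above concrete, here is a minimal sketch of scaled dot-product attention (covered in Lessons 1 and 2) in plain NumPy. It is an illustration only, with toy shapes and no masking or multiple heads, and is not code from the course notebooks.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]                                  # dimensionality of the key vectors
    scores = Q @ K.T / np.sqrt(d_k)                    # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted average of the value vectors

# Self-attention over three toy token embeddings of dimension 4
x = np.random.rand(3, 4)
print(scaled_dot_product_attention(x, x, x).shape)     # (3, 4)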
Skill Level:

Learn How To:
Prompt engineer for optimal outputs from GPT-3 and ChatGPT

Who Should Take This Course:
Those who want the best outputs from the GPT-3 or ChatGPT model
Table of Contents:
Introduction
Lesson 1: Introduction to Attention and Language Models
1.1 A brief history of NLP
1.2 Paying attention with attention
1.3 Encoder-decoder architectures
1.4 How language models look at text
Lesson 2: How Transformers Use Attention to Process Text
2.1 Introduction to transformers
2.2 Scaled dot product attention
2.3 Multi-headed attention
Lesson 3: Transfer Learning
3.1 Introduction to transfer learning
3.2 Introduction to PyTorch
3.3 Fine-tuning transformers with PyTorch
Lesson 4: Natural Language Understanding with BERT
4.1 Introduction to BERT
4.2 WordPiece tokenization
4.3 The many embeddings of BERT
Lesson 5: Pre-training and Fine-Tuning BERT
5.1 The masked language modeling task
5.2 The next sentence prediction task
5.3 Fine-tuning BERT to solve NLP tasks
Lesson 6: Hands-on BERT
6.1 Flavors of BERT
6.2 BERT for sequence classification
6.3 BERT for token classification
6.4 BERT for question answering
Lesson 7: Natural Language Generation with GPT
7.1 Introduction to the GPT family
7.2 Masked multi-headed attention
7.3 Pre-training GPT
7.4 Few-shot learning
Lesson 8: Hands-on GPT
8.1 GPT for style completion
8.2 GPT for code dictation
Lesson 9: Further Applications of BERT + GPT
9.1 Siamese BERT-networks for semantic search
9.2 Teaching GPT multiple tasks at once with prompt engineering
Lesson 10: T5: Back to Basics
10.1 Encoders and decoders welcome: T5's architecture
10.2 Cross-attention
Lesson 11: Hands-on T5
11.1 Off-the-shelf results with T5
11.2 Using T5 for abstractive summarization
Lesson 12: The Vision Transformer
12.1 Introduction to the Vision Transformer (ViT)
12.2 Fine-tuning an image captioning system
Lesson 13: Deploying Transformer Models
13.1 Introduction to MLOps
13.2 Sharing our models on Hugging Face
13.3 Deploying a fine-tuned BERT model using FastAPI
Lesson 14: Using Massively Large Language Models
14.1 Modern Large Language Models
14.2 GPT-3 and ChatGPT
14.3 Other LLMs and Semantic Search with OpenAI Embeddings
Summary
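As a small taste of the hands-on lessons in the outline above (for example, 5.1 masked language modeling and 6.2 BERT for sequence classification), the sketch below uses Hugging Face transformers pipelines. It is a minimal illustration rather than code from the course notebooks, and the checkpoint names are ordinary public Hub models, not necessarily the ones used in class.

from transformers import pipeline

# Masked language modeling with BERT (Lesson 5): fill in the [MASK] token
unmasker = pipeline("fill-mask", model="bert-base-uncased")
print(unmasker("Transformers use [MASK] to weigh the importance of each token.")[0]["token_str"])

# Sequence classification with a fine-tuned BERT variant (Lesson 6.2)
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("This course finally made attention click for me."))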