1+ Hours of Video Instruction
Your introduction to how generative large language models work.
Generative language models, such as ChatGPT and Microsoft Bing Chat, are becoming everyday tools for many of us, but these models remain black boxes to many. How does ChatGPT know which word to output next? How does it understand the meaning of the text you prompt it with? Everyone, from people who have never interacted with a chatbot to those who do so regularly, can benefit from a basic understanding of how these language models function. This course answers some of your fundamental questions about how generative AI works.
In this course, you learn about word embeddings: not only how they are used in these models, but also how they can be leveraged to parse large amounts of textual information through techniques such as vector storage and retrieval-augmented generation. Understanding how these models work tells you both what they are capable of and where their limitations lie.
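To make the first question above concrete, here is a minimal Python sketch of next-word prediction. A generative model assigns a score (a logit) to every token in its vocabulary and converts those scores into probabilities; the vocabulary and numbers below are invented purely for illustration, since a real model computes its scores from billions of learned parameters.

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Convert raw model scores (logits) into a probability distribution.
    shifted = logits - logits.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical vocabulary and logits for the prompt "The cat sat on the";
# the words and numbers are invented purely for illustration.
vocab = ["mat", "roof", "dog", "banana"]
logits = np.array([4.1, 2.7, 1.2, -0.5])

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word:>8}: {p:.3f}")

# Greedy decoding: output the single most probable next word.
print("next word:", vocab[int(np.argmax(probs))])

Real chatbots usually sample from this distribution rather than always taking the top word, which is one reason the same prompt can produce different answers.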
Learn How To:
Explain what large language models (LLMs) and generative AI are
Describe how word embeddings capture word meaning and how tokenization works
Understand how embeddings are used in RNNs and transformers, including the attention mechanism
Apply embeddings to summarization, vector storage, and retrieval-augmented generation (RAG)
Who Should Take This Course:
Anyone who:
Uses generative AI tools such as ChatGPT and wants to understand how they work
Is curious about how language models read and produce text
Wants to know what these models are capable of and where their limitations lie
Course Requirements:
No specific requirements
Lesson Descriptions:
Lesson 1: Introduction to LLMs and Generative AI
Lesson 1 introduces large language models (LLMs) and generative artificial intelligence. Kate explains what an LLM is and what generative AI is, then provides a general introduction to machine learning.
Lesson 2: Word Embeddings
Lesson 2 introduces word embeddings. Kate describes the word embedding space and discusses how word embeddings capture word meanings, enabling LLMs to read and produce text. The lesson then turns to another AI concept, tokenization, followed by a discussion that pulls it all together. Kate finishes the lesson with an interesting side effect of word embeddings.
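To preview the idea, here is a minimal Python sketch of an embedding space. The three-dimensional vectors are toy values invented for illustration (real embeddings have hundreds of dimensions), but they show two properties the lesson discusses: words with related meanings sit close together, and, as a well-known side effect, vector arithmetic on embeddings can capture analogies such as king - man + woman ≈ queen.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the vectors point the same way; 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy three-dimensional embeddings, invented for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

# Words with related meanings sit close together in the embedding space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))

# Vector arithmetic can capture analogies:
# king - man + woman lands near queen.
analogy = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(cosine_similarity(analogy, embeddings["queen"]))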
Lesson 3: Word Embeddings in Generative Language Models
Lesson 3 begins with a discussion of how word embeddings are used in generative language models. Kate introduces model architectures that use word embeddings, specifically recurrent neural networks (RNNs) and transformers, then covers the attention mechanism in transformers, contextual word embeddings, and how transformers are used for language generation. The lesson finishes with a discussion of what works well and what can go wrong when models are trained on word embeddings.
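The attention mechanism Kate covers can be sketched in a few lines. Below is a minimal NumPy implementation of scaled dot-product attention, the core operation inside transformers. Note that in a real model the queries, keys, and values come from learned linear projections of the token embeddings, whereas this sketch feeds in random toy vectors directly.

import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each token's output is a weighted
    # average of all value vectors, weighted by query-key similarity.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores)  # each row sums to 1
    return weights @ V         # one contextual embedding per token

# Toy input: 3 tokens, each a 4-dimensional embedding (random values).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))

# A real transformer derives Q, K, and V from learned linear projections
# of X; using X directly keeps the sketch short.
contextual = attention(X, X, X)
print(contextual.shape)  # (3, 4): one context-aware vector per token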
Lesson 4: Other Use Cases for Embeddings
Lesson 4 covers how embeddings can also be used for summarization and vector storage. It finishes with an example of retrieval-augmented generation (RAG).
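For a feel of how the pieces fit together, here is a minimal sketch of the retrieval step in RAG. The embed function below is a hypothetical stand-in for a real embedding model: documents are embedded into a small vector store, the query is embedded the same way, the closest document is retrieved by cosine similarity, and its text is prepended to the prompt sent to the language model.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g., a sentence encoder):
    # this toy version just hashes characters into a fixed-size vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A miniature vector store: documents indexed by their embeddings.
documents = [
    "Word embeddings map words to vectors of numbers.",
    "Transformers use attention to build contextual embeddings.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
index = np.stack([embed(d) for d in documents])

# Retrieval: embed the query and find the closest stored document.
# Because all vectors are unit length, the dot product is cosine similarity.
query = "How does retrieval-augmented generation work?"
scores = index @ embed(query)
best = documents[int(np.argmax(scores))]

# Augment the prompt with the retrieved context before calling the LLM.
prompt = f"Context: {best}\n\nQuestion: {query}"
print(prompt)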
About Pearson Video Training:
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Prentice Hall, Sams, and Que. Topics include IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Video lessons are available for download for offline viewing within the streaming format; look for the green arrow in each lesson.