Register your product to gain access to bonus material or receive a coupon.
Video accessible from your Account page after purchase.
Get the strategies, methodologies, tools, and best practices for AI security.
Overview:
3.5 hours of video training
This course offers a comprehensive exploration into the crucial security measures necessary for the deployment and development of various AI implementations, including large language models (LLMs) and Retrieval-Augmented Generation (RAG). It addresses critical considerations and mitigations to reduce the overall risk in organizational AI system development processes. Experienced author and trainer Omar Santos emphasizes secure by design principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. You will be introduced to AI threats, LLM security, prompt injection, insecure output handling, and Red Team AI models. The course concludes by teaching you how to protect RAG implementations. You learn about orchestration libraries such as LangChain, LlamaIndex, and others, as well as securing vector databases, selecting embedding models, and more.
Related learning
Skill Level
Intermediate
Course Requirement
Linux system with Python 3.x installed.
About Pearson Video Training
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, and Que. Topics include IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Video Lessons are available for download for offline viewing within the streaming format. Look for the green arrow in each lesson.
Lesson 1: Introduction to AI Threats and LLM Security
1.1 Understanding the Significance of LLMs in the AI Landscape
1.2 Exploring the Resources for this Course - GitHub Repositories and Others
1.3 Introducing Retrieval Augmented Generation (RAG)
1.4 Understanding the OWASP Top-10 Risks for LLMs
1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework
1.6 Understanding the NIST Taxonomy and Terminology of Attacks and Mitigations
Lesson 2: Understanding Prompt Injection & Insecure Output Handling
2.1 Defining Prompt Injection Attacks
2.2 Exploring Real-life Prompt Injection Attacks
2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
2.4 Enforcing Privilege Control on LLM Access to Backend Systems
2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
2.6 Understanding Insecure Output Handling Attacks
2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling
Lesson 3: Training Data Poisoning, Model Denial of Service & Supply Chain Vulnerabilities
3.1 Understanding Training Data Poisoning Attacks
3.2 Exploring Model Denial of Service Attacks
3.3 Understanding the Risks of the AI and ML Supply Chain
3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
3.5 Securing Amazon BedRock, SageMaker, Microsoft Azure AI Services, and Other Environments
Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency
4.1 Understanding Sensitive Information Disclosure
4.2 Exploiting Insecure Plugin Design
4.3 Avoiding Excessive Agency
Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models
5.1 Understanding Overreliance
5.2 Exploring Model Theft Attacks
5.3 Understanding Red Teaming of AI Models
Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations
6.1 Understanding the RAG, LangChain, Llama Index, and AI Orchestration
6.2 Securing Embedding Models
6.3 Securing Vector Databases
6.4 Monitoring and Incident Response