12+ Hours of Video Instruction
Building Spark Applications LiveLessons provides data scientists and developers with a practical introduction to the Apache Spark framework using Python, R, and SQL. Additionally, it covers best practices for developing scalable Spark applications for predictive analytics in the context of a data scientist's standard workflow.
Description
In this video training, Jonathan starts off with a brief history of Spark itself and shows you how to get started programming in a Spark environment on a laptop. Taking an application- and code-first approach, he then covers the various APIs in Python, R, and SQL to show how Spark makes large-scale data analysis much more accessible through languages familiar to data scientists and analysts alike. With the basics covered, the videos move into a real-world case study showing you how to explore data, process text, and build models with Spark. Throughout the process, Jonathan exposes the internals of the Spark framework itself to show you how to write better application code, optimize performance, and set up a cluster to fully leverage the distributed nature of Spark. After watching these videos, data scientists and developers will feel confident building an end-to-end application with Spark to perform machine learning and data analysis at scale!
Code: https://github.com/zipfian/building-spark-applications-live-lessons
Resources: http://galvanize.com/resources/spark
Forum: https://gitter.im/zipfian/building-spark-applications-live-lessons
Data: https://s3.amazonaws.com/galvanize-example-data/spark-live-lessons-data.zip
Lesson 1: Introduction to the Spark Environment
Lesson 1, “Introduction to the Spark Environment,” introduces Spark and provides context for the history and motivation behind the framework. This lesson covers installing and setting up Spark locally, working with the Spark REPL and Jupyter notebook, and the basics of programming with Spark.
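The key-value programming model this lesson introduces can be sketched in plain Python, with no Spark installation required. Here `flat_map` and `reduce_by_key` are hypothetical stand-ins for the real `RDD.flatMap` and `RDD.reduceByKey`, used to show the shape of a MapReduce-style word count:

```python
from collections import defaultdict
from functools import reduce

def flat_map(func, records):
    """Emulates RDD.flatMap: apply func to each record and flatten the results."""
    return [item for record in records for item in func(record)]

def reduce_by_key(func, pairs):
    """Emulates RDD.reduceByKey: combine all values that share a key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: reduce(func, values) for key, values in grouped.items()}

# Classic word count, expressed in the Spark key-value style:
lines = ["spark makes big data simple", "big data needs spark"]
words = flat_map(lambda line: line.split(), lines)
counts = reduce_by_key(lambda a, b: a + b, [(word, 1) for word in words])
print(counts["spark"])  # 2
```

In PySpark proper, the same pipeline would read `rdd.flatMap(lambda line: line.split()).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)` — the difference being that Spark distributes each step across a cluster.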
Lesson 2: Spark Programming APIs
Lesson 2, “Spark Programming APIs,” covers each of the various Spark programming interfaces. This lesson highlights the differences between and the tradeoffs of the Python (PySpark), R (SparkR), and SQL (Spark SQL and DataFrames) APIs as well as typical workflows for which each is best suited.
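A core point of this lesson is that the same analysis can be expressed either programmatically (PySpark/SparkR DataFrames) or declaratively (Spark SQL). The sketch below illustrates that equivalence in plain Python, using the standard library's sqlite3 in place of Spark; the flight-delay data and column names are invented for illustration:

```python
import sqlite3
from collections import defaultdict

flights = [("AA", 10.0), ("AA", -5.0), ("UA", 30.0), ("UA", 0.0)]

# "DataFrame-style": chain programmatic transformations.
by_carrier = defaultdict(list)
for carrier, delay in flights:
    by_carrier[carrier].append(delay)
df_result = {c: sum(d) / len(d) for c, d in by_carrier.items()}

# "SQL-style": declare the same aggregation as a query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (carrier TEXT, delay REAL)")
conn.executemany("INSERT INTO flights VALUES (?, ?)", flights)
sql_result = dict(conn.execute(
    "SELECT carrier, AVG(delay) FROM flights GROUP BY carrier"))

assert df_result == sql_result  # same answer, two interfaces
```

Which interface is "best" is mostly a workflow question — the tradeoff the lesson explores across PySpark, SparkR, and Spark SQL.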
Lesson 3: Your First Spark Application
Lesson 3, “Your First Spark Application,” walks you through a case study with DonorsChoose.org data showing how Spark fits into the typical data science workflow. This lesson covers how to perform exploratory data analysis at scale, apply natural language processing techniques, and write an implementation of the k-means algorithm for unsupervised learning on text data.
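The lesson culminates in a from-scratch k-means implementation run on vectorized essay text. A minimal pure-Python sketch of the algorithm's two alternating steps (assign points to the nearest centroid, then recompute centroids) is below — on one-dimensional numbers for brevity, whereas the lesson applies the same idea to high-dimensional tf-idf vectors:

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Minimal 1-D k-means: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious clusters, around 0 and around 10:
print(kmeans([0.0, 0.5, 1.0, 9.0, 10.0, 11.0], k=2))
```

The challenges mentioned in section 3.12 show up even here: the result depends on the random initialization, and with text data the centroids are latent features that still need interpretation.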
Lesson 4: Spark Internals
Lesson 4, “Spark Internals,” peels back the layers of the framework and walks you through how Spark executes code in a distributed fashion. This lesson starts with a primer on distributed systems theory before diving into the Spark execution context, the details of RDDs, and how to run Spark in cluster mode on Amazon EC2. The lesson finishes with best practices for monitoring and tuning the performance of a Spark application.
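Central to how Spark executes code is that transformations are lazy: they only record a lineage of pending work, and nothing runs until an action forces execution. The toy class below models that behavior in plain Python (it is an illustration of the concept, not how RDDs are actually implemented):

```python
class LazyDataset:
    """Toy model of an RDD: transformations build up a lineage;
    no work happens until an action is called."""

    def __init__(self, data, lineage=()):
        self._data = data
        self.lineage = lineage          # names of pending transformations

    def map(self, func):                # transformation: lazy
        return LazyDataset((func(x) for x in self._data),
                           self.lineage + ("map",))

    def filter(self, pred):             # transformation: lazy
        return LazyDataset((x for x in self._data if pred(x)),
                           self.lineage + ("filter",))

    def collect(self):                  # action: forces execution
        return list(self._data)

ds = LazyDataset(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(ds.lineage)    # ('map', 'filter') — recorded, not yet executed
print(ds.collect())  # [0, 4, 16, 36, 64] — the work happens here
```

In real Spark, this recorded lineage is what lets the framework recompute lost partitions after a failure and group transformations into stages and tasks — the topics of sections 4.4 through 4.6.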
Lesson 5: Advanced Applications
Lesson 5, “Advanced Applications,” takes you through a KDD cup competition, showing you how to leverage Spark’s higher level machine learning libraries (MLlib and spark.ml). The lesson covers the basics of machine learning theory, shows you how to evaluate the performance of models through cross validation, and demonstrates how to build a machine learning pipeline with Spark. The lesson finishes by showing you how to serialize and deploy models for use in a production setting.
About LiveLessons Video Training
The LiveLessons Video Training series publishes hundreds of hands-on, expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. This professional and personal technology video series features world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, IBM Press, Pearson IT Certification, Prentice Hall, Sams, and Que. Topics include: IT Certification, Programming, Web Development, Mobile Development, Home and Office Technologies, Business and Management, and more. View all LiveLessons on InformIT at: http://www.informit.com/livelessons
Introduction
Lesson 1: Introduction to the Spark Environment
1.1 Getting the Materials
1.2 A Brief Historical Diversion
1.3 Origins of the Framework
1.4 Why Spark?
1.5 Getting Set Up: Spark and Java
1.6 Getting Set Up: Scientific Python
1.7 Getting Set Up: R Kernel for Jupyter
1.8 Your First PySpark Job
1.9 Introduction to RDDs: Functions, Transformations, and Actions
1.10 MapReduce with Spark: Programming with Key-Value Pairs
Lesson 2: Spark Programming APIs
2.1 Introduction to the Spark Programming APIs
2.2 PySpark: Loading and Importing Data
2.3 PySpark: Parsing and Transforming Data
2.4 PySpark: Analyzing Flight Delays
2.5 SparkR: Introduction to DataFrames
2.6 SparkR: Aggregations and Analysis
2.7 SparkR: Visualizing Data with ggplot2
2.8 Why (Spark) SQL?
2.9 Spark SQL: Adding Structure to Your Data
2.10 Spark SQL: Integration into Existing Workflows
Lesson 3: Your First Spark Application
3.1 How Spark Fits into the Data Science Process
3.2 Introduction to Exploratory Data Analysis
3.3 Case Study: DonorsChoose.org
3.4 Data Quality Checks with Accumulators
3.5 Making Sense of Data: Summary Statistics and Distributions
3.6 Working with Text: Introduction to NLP
3.7 Tokenization and Vectorization with Spark
3.8 Summarization with tf-idf
3.9 Introduction to Machine Learning
3.10 Unsupervised Learning with Spark: Implementing k-means
3.11 Testing k-means with DonorsChoose.org Essays
3.12 Challenges of k-means: Latent Features, Interpretation, and Validation
Lesson 4: Spark Internals
4.1 Introduction to Distributed Systems
4.2 Building Systems that Scale
4.3 The Spark Execution Context
4.4 RDD Deep Dive: Dependencies and Lineage
4.5 A Day in the Life of a Spark Application
4.6 How Code Runs: Stages, Tasks, and the Shuffle
4.7 Spark Deployment: Local and Cluster Modes
4.8 Setting Up Your Own Cluster
4.9 Spark Performance: Monitoring and Optimization
4.10 Tuning Your Spark Application
4.11 Making Spark Fly: Parallelism
4.12 Making Spark Fly: Caching
Lesson 5: Advanced Applications
5.1 Machine Learning on Spark: MLlib and spark.ml
5.2 The KDD Cup Competition: Preparing Data and Imputing Values
5.3 Introduction to Supervised Learning: Logistic Regression
5.4 Building a Model with MLlib
5.5 Model Evaluation and Metrics
5.6 Leveraging scikit-learn to Evaluate MLlib Models
5.7 Training Models with spark.ml
5.8 Machine Learning Pipelines with spark.ml
5.9 Tuning Models: Features, Cross Validation, and Grid Search
5.10 Serializing and Deploying Models
Summary