4+ Hours of Video Instruction
The Perfect Way to Get Started with Data Pipelines, Kafka, and NiFi
Data Engineering Foundations Part 2: Building Data Pipelines with Kafka and NiFi provides over four hours of video introducing you to creating data pipelines at scale with Kafka and NiFi. You learn to work with the Kafka message broker and discover how to build NiFi dataflows. You also learn about data movement and storage. All software used in the videos is open source and freely available for your use and experimentation on the included virtual machine.
Lesson Descriptions:
Lesson 7: Working with the Kafka Message Broker
In Lesson 7, Doug introduces the Kafka message broker concept and describes the producer-consumer model that enables input data to be reliably decoupled from output requests. Kafka producers and consumers are developed using Python, and internal broker operations are displayed using the KafkaEsque graphical user interface.
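The producer-consumer pattern described above can be sketched in Python with the kafka-python package. This is a minimal illustration, not the course's own code: the topic name "sensor-data", the broker address localhost:9092, and the JSON record layout are all assumptions.

```python
import json


def encode_event(sensor_id, value):
    """Serialize a sensor reading to bytes for publishing to a Kafka topic."""
    return json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8")


def produce_readings(readings, topic="sensor-data"):
    """Publish (sensor_id, value) pairs; requires a broker at localhost:9092 (assumption)."""
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    for sensor_id, value in readings:
        producer.send(topic, encode_event(sensor_id, value))
    producer.flush()  # block until all buffered records are sent


def consume_readings(topic="sensor-data"):
    """Yield decoded records from the topic, starting at the earliest offset."""
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        topic,
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
    )
    for message in consumer:
        yield json.loads(message.value.decode("utf-8"))
```

Because the producer and consumer share only the topic and the byte format, either side can be scaled, restarted, or replaced without the other noticing, which is the decoupling the lesson emphasizes.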
Lesson 8: Working with NiFi Dataflow
Lesson 8 begins with a description of NiFi flow-based programming and then provides several examples that include writing pipeline data to the local file system, then to the Hadoop Distributed File System, and finally to Hadoop Hive tables. The entire flow process is constructed using the NiFi web Graphical User Interface. The creation of portable flow templates for all examples is also presented.
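The flows in Lesson 8 are built entirely in the NiFi web UI, but NiFi also exposes the same information over its REST API, which is handy for checking a flow from a script. A minimal sketch, assuming an unsecured NiFi instance on its default port 8080; the endpoint `/flow/status` reports counts of running and stopped processors:

```python
import json
import urllib.request

# Default unsecured NiFi address; adjust for your installation (assumption).
NIFI_API = "http://localhost:8080/nifi-api"


def status_url(base=NIFI_API):
    """Build the URL of NiFi's flow-status endpoint."""
    return f"{base}/flow/status"


def get_flow_status(base=NIFI_API):
    """Fetch the controller status JSON; requires a running NiFi instance."""
    with urllib.request.urlopen(status_url(base)) as resp:
        return json.load(resp)
```

The returned JSON includes fields such as the number of running, stopped, and invalid processors, so a monitoring script can confirm that a deployed flow template is actually active.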
Lesson 9: Big Data Movement and Storage
Lesson 9 provides you with several methods for moving data to and from the Hadoop Distributed File System. Hands-on examples include direct web downloads and using Python Pydoop to move data. Basic data movement between Apache HBase, Hive, and Spark using Python Happybase and Hive-SQL is also presented. Finally, movement of relational data to and from the Hadoop Distributed File System is demonstrated using Apache Sqoop.
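The HBase portion of Lesson 9 uses Python Happybase, which talks to HBase through its Thrift gateway. The sketch below shows the basic write path; the table name, the column family "data", and the record layout are hypothetical, and a running HBase Thrift server on localhost is assumed.

```python
def to_hbase_cells(family, record):
    """Map a dict of column -> value into HBase 'family:qualifier' byte keys."""
    return {
        f"{family}:{col}".encode(): str(val).encode()
        for col, val in record.items()
    }


def store_record(table_name, row_key, record, family="data"):
    """Write one record as a single HBase row (requires a local Thrift server)."""
    import happybase  # pip install happybase

    connection = happybase.Connection("localhost")
    table = connection.table(table_name)
    table.put(row_key.encode(), to_hbase_cells(family, record))
    connection.close()
```

Relational data takes a different path: Sqoop runs as a command-line tool (for example, `sqoop import --connect jdbc:mysql://host/db --table orders`) and launches MapReduce jobs to copy tables to and from HDFS, rather than going through a Python client.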
About Pearson Video Training:
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Sams, and Que. Topics include: IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Introduction
Lesson 7: Working with the Kafka Message Broker
Learning Objectives
7.1 Understand Kafka topics, brokers, and partitions
7.2 Implement basic Kafka usage modes
7.3 Use Kafka producers and consumers with Python
7.4 Use the KafkaEsque graphical user interface
Lesson 8: Working with NiFi Dataflow
Learning Objectives
8.1 Understand the core concepts of NiFi
8.2 Understand NiFi flow and web UI components
8.3 Understand a NiFi web UI example
Lesson 9: Big Data Movement and Storage
Learning Objectives
9.1 Understand direct data movement with HDFS
9.2 Use HBase with Python
9.3 Use Sqoop for database movement
Summary