Python and PySpark Training Overview
This Python and PySpark training course teaches attendees the Python and PySpark fundamentals they need to work with the Apache Spark analytics engine. Participants strengthen their Python coding skills and learn to perform data analytics at scale using Spark.
Location and Pricing
Accelebrate offers instructor-led enterprise training for groups of 3 or more online or at your site. Most Accelebrate classes can be flexibly scheduled for your group, including delivery in half-day segments across a week or set of weeks. To receive a customized proposal and price quote for private corporate training on-site or online, please contact us.
In addition, we offer some courses as live, instructor-led online classes for individuals.
Objectives
- Create scripts, variables, collections, control statements, loops, and functions in Python
- Work in the PySpark Shell
- Perform data transformation with PySpark
- Incorporate RDD performance improvement techniques with PySpark
- Integrate Spark SQL with PySpark
- Repair and normalize data
- Perform data grouping and aggregation
Prerequisites
All attendees must have programming and/or scripting experience in a modern programming language.
Outline
Introduction to Python
- What is Python?
- Uses of Python
- Installing Python
- Python Package Manager (PIP)
- Using the Python Shell
- Python Code Conventions
- Importing Modules
- The help(object) Command
- The Help Prompt
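For example, importing a standard-library module and asking the interpreter for help might look like the sketch below (the module chosen is purely illustrative):

```python
# In the Python shell: import a standard-library module and inspect it
import math

help(math.sqrt)        # display the docstring for math.sqrt
print(math.sqrt(16))   # 4.0
```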
Python Scripts
- Executing Python Code
- Python Scripts
- Writing Scripts
- Running Python Scripts
- Self-Executing Scripts
- Accepting Command-Line Parameters
- Accepting Interactive Input
- Retrieving Environment Settings
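A minimal sketch of a self-executing script that accepts a command-line parameter, falls back to interactive input, and reads an environment setting might look like this (the script name greet.py is hypothetical):

```python
#!/usr/bin/env python3
"""greet.py -- a small self-executing script (name and behavior are illustrative)."""
import os
import sys

def main():
    # Command-line parameters arrive in sys.argv; sys.argv[0] is the script name
    name = sys.argv[1] if len(sys.argv) > 1 else input("Your name: ")
    # Environment settings are available through os.environ
    home = os.environ.get("HOME", "unknown")
    print(f"Hello, {name}! Your home directory is {home}.")

if __name__ == "__main__":
    main()
```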
Data Types and Variables
- Creating Variables
- Displaying Variables
- Basic Concatenation
- Data Types
- Strings
- Strings as Arrays
- String Methods
- Combining Strings and Numbers
- Numeric Types
- Integer Types
- Floating Point Types
- Boolean Types
- Checking Data Type
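A short sketch of these topics, with variables, string operations, string/number combination, and type checking (all values are illustrative):

```python
# Variables, strings, and numeric types (sample values are illustrative)
city = "Atlanta"
population = 498_715
density = 1_347.5
is_capital = True

# Strings behave like sequences and have rich methods
print(city[0], city.upper(), len(city))

# Combining strings and numbers requires an explicit conversion or an f-string
print(city + " has " + str(population) + " residents")
print(f"{city} has {population} residents")

# Checking data types
print(type(density), isinstance(is_capital, bool))
```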
Python Collections
- Python Collections
- List Type
- Modifying Lists
- Sorting a List
- Tuple Type
- Python Sets
- Modifying Sets
- Dictionary (Map) Type
- Dictionary Methods
- Sequences
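The four core collection types covered here might be sketched as follows (sample values are illustrative):

```python
colors = ["red", "green", "blue"]                  # list: ordered, mutable
point = (3, 4)                                     # tuple: ordered, immutable
tags = {"python", "spark"}                         # set: unordered, unique members
capitals = {"France": "Paris", "Japan": "Tokyo"}   # dict: key/value pairs

colors.append("yellow")        # modify a list
colors.sort()                  # sort in place
tags.add("pyspark")            # modify a set
capitals["Italy"] = "Rome"     # add a dictionary entry

print(colors, point, sorted(tags), list(capitals.keys()))
```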
Control Statements and Looping
- If Statement
- elif Keyword
- Boolean Conditions
- Single Line If Statements
- For-in Loops
- Looping over an Index
- Range Function
- Nested Loops
- While Loops
- Exception Handling
- Built-in Exceptions
- Exceptions Thrown by Built-in Functions
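A compact sketch of conditionals, loops, and exception handling along these lines (the sample data and thresholds are illustrative):

```python
values = [3, 7, 12, 0]

# if / elif / else inside a for-in loop
for v in values:
    if v > 10:
        print(v, "is large")
    elif v > 5:
        print(v, "is medium")
    else:
        print(v, "is small")

# A while loop
i = 0
while i < 3:
    print("pass", i)
    i += 1

# Built-in functions raise exceptions that can be caught
try:
    number = int("not a number")
except ValueError as err:
    print("Conversion failed:", err)
```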
Functions in Python
- Defining Functions
- Naming Functions
- Using Functions
- Function Parameters
- Named Parameters
- Variable Length Parameter List
- How Parameters are Passed
- Variable Scope
- Returning Values
- Docstrings
- Best Practices
- Single Responsibility
- Returning a Value
- Function Length
- Pure and Idempotent Functions
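A small function illustrating several of these ideas at once, including a docstring, a named (default) parameter, and a variable-length parameter list (the function name and behavior are illustrative):

```python
def scale_values(values, factor=1.0, *extras):
    """Return a new list with each value multiplied by factor.

    'factor' is a named parameter with a default; '*extras' shows the
    variable-length parameter syntax and is not used here.
    """
    return [v * factor for v in values]

print(scale_values([1, 2, 3]))              # [1.0, 2.0, 3.0]
print(scale_values([1, 2, 3], factor=10))   # [10, 20, 30]
```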
Working With Data in Python
- Data Type Conversions
- Conversions from other Types to Integer
- Conversions from other Types to Float
- Conversions from other Types to String
- Conversions from other Types to Boolean
- Converting Between Set, List and Tuple Data Structures
- Modifying Tuples
- Combining Set, List and Tuple Data Structures
- Creating Dictionaries from other Data Structures
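The conversions in this module might be sketched like this (all values are illustrative):

```python
# Converting between basic types
print(int("42"), float("3.14"), str(99), bool(""))   # 42 3.14 99 False

# Converting between set, list, and tuple data structures
scores = [85, 92, 85, 70]
unique_scores = set(scores)        # list -> set (duplicates removed)
frozen = tuple(unique_scores)      # set -> tuple
back_to_list = list(frozen)        # tuple -> list

# Building a dictionary from two parallel sequences
names = ["alice", "bob"]
ages = [34, 29]
people = dict(zip(names, ages))    # {'alice': 34, 'bob': 29}
print(unique_scores, back_to_list, people)
```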
Reading and Writing Text Files
- Opening a File
- Writing a File
- Reading a File
- Appending to a File
- File Operations Using the with Statement
- File and Directory Operations
- Reading JSON
- Writing JSON
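A sketch of writing, appending to, and reading a text file with the with statement, plus basic JSON handling (the file names are hypothetical):

```python
import json

with open("notes.txt", "w") as f:     # write mode creates/overwrites the file
    f.write("first line\n")

with open("notes.txt", "a") as f:     # append mode
    f.write("second line\n")

with open("notes.txt") as f:          # read mode is the default
    for line in f:
        print(line.rstrip())

# Writing and reading JSON
record = {"name": "alice", "score": 92}
with open("record.json", "w") as f:
    json.dump(record, f)

with open("record.json") as f:
    print(json.load(f)["name"])
```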
Functional Programming Primer
- What is Functional Programming?
- Benefits of Functional Programming
- Functions as Data
- Using Map Function
- Using Filter Function
- Lambda expressions
- list.sort() Using a Lambda Expression
- Difference Between Simple Loops and map/filter Type Functions
- Additional Functions
- General Rules for Creating Functions
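The map/filter/lambda topics above might be previewed with a sketch like this (the sample data is illustrative):

```python
prices = [19.99, 5.00, 149.50, 0.99]

with_tax = list(map(lambda p: round(p * 1.07, 2), prices))   # transform each element
expensive = list(filter(lambda p: p > 10, prices))           # keep matching elements

# list.sort() with a lambda expression as the sort key
words = ["spark", "py", "python"]
words.sort(key=lambda w: len(w))

print(with_tax, expensive, words)
```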
Introduction to Apache Spark
- What is Apache Spark?
- A Short History of Spark
- Where to Get Spark?
- The Spark Platform
- Spark Logo
- Common Spark Use Cases
- Languages Supported by Spark
- Running Spark on a Cluster
- The Driver Process
- Spark Applications
- Spark Shell
- The spark-submit Tool
- The spark-submit Tool Configuration
- The Executor and Worker Processes
- The Spark Application Architecture
- Interfaces with Data Storage Systems
- Limitations of Hadoop's MapReduce
- Spark vs. MapReduce
- Spark as an Alternative to Apache Tez
- The Resilient Distributed Dataset (RDD)
- Datasets and DataFrames
- Spark Streaming (Micro-batching)
- Spark SQL
- Example of Spark SQL
- Spark Machine Learning Library
- GraphX
- Spark vs. R
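As a preview of the driver-program and spark-submit concepts, a minimal PySpark application might look like the sketch below (the file name word_count.py and the input path are hypothetical):

```python
# word_count.py -- a minimal driver program, run with:  spark-submit word_count.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("data/sample.txt")                 # hypothetical input path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```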
The Spark Shell
- The Spark Shell
- The Spark v2+ Command-Line Shells
- The Spark Shell UI
- Spark Shell Options
- Getting Help
- Jupyter Notebook Shell Environment
- Example of a Jupyter Notebook Web UI (Databricks Cloud)
- The Spark Context (sc) and Spark Session (spark)
- Creating a Spark Session Object in Spark Applications
- The Shell Spark Context Object (sc)
- The Shell Spark Session Object (spark)
- Loading Files
- Saving Files
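A short sketch of the session object and of loading and saving files (the file paths are hypothetical; in the PySpark shell, sc and spark are created for you):

```python
# In a standalone application you build the session yourself
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ShellDemo").getOrCreate()

# Loading and saving files (paths are hypothetical)
df = spark.read.csv("data/people.csv", header=True, inferSchema=True)
df.show(5)
df.write.mode("overwrite").parquet("output/people.parquet")
```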
Spark RDDs
- The Resilient Distributed Dataset (RDD)
- Ways to Create an RDD
- Supported Data Types
- RDD Operations
- RDDs are Immutable
- Spark Actions
- RDD Transformations
- Other RDD Operations
- Chaining RDD Operations
- RDD Lineage
- The Big Picture
- What May Go Wrong
- Checkpointing RDDs
- Local Checkpointing
- Parallelized Collections
- More on parallelize() Method
- The Pair RDD
- Where do I use Pair RDDs?
- Example of Creating a Pair RDD with Map
- Example of Creating a Pair RDD with keyBy
- Miscellaneous Pair RDD Operations
- RDD Caching
- RDD Persistence
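A sketch of creating RDDs, chaining transformations and actions, building a pair RDD, and caching (it assumes an existing SparkContext named sc, as in the PySpark shell; all data is illustrative):

```python
numbers = sc.parallelize(range(1, 11), numSlices=4)   # parallelized collection

squares = numbers.map(lambda n: n * n)                # transformation (lazy)
evens = squares.filter(lambda n: n % 2 == 0)          # chained transformation
evens.cache()                                         # keep results in memory
print(evens.collect())                                # action triggers execution

# A pair RDD created with keyBy, then grouped by key
words = sc.parallelize(["spark", "python", "pyspark", "sql"])
by_first_letter = words.keyBy(lambda w: w[0])         # ('s', 'spark'), ...
print(by_first_letter.groupByKey().mapValues(list).collect())
```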
Parallel Data Processing with Spark
- Running Spark on a Cluster
- Data Partitioning
- Data Partitioning Diagram
- Single Local File System RDD Partitioning
- Multiple File RDD Partitioning
- Special Cases for Small-sized Files
- Parallel Data Processing of Partitions
- Spark Application, Jobs, and Tasks
- Stages and Shuffles
- The "Big Picture"
Shared Variables in Spark
- Shared Variables in Spark
- Broadcast Variables
- Creating and Using Broadcast Variables
- Example of Using Broadcast Variables
- Problems with Global Variables
- Example of the Closure Problem
- Accumulators
- Creating and Using Accumulators
- Example of Using Accumulators (Scala Example)
- Example of Using Accumulators (Python Example)
- Custom Accumulators
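A sketch of a broadcast variable used as a lookup table and an accumulator used as a counter (it assumes an existing SparkContext named sc; the lookup data is illustrative):

```python
country_codes = sc.broadcast({"US": "United States", "FR": "France"})
bad_records = sc.accumulator(0)

def expand(code):
    # Read the broadcast lookup table; count unknown codes in the accumulator
    name = country_codes.value.get(code)
    if name is None:
        bad_records.add(1)
    return name

codes = sc.parallelize(["US", "FR", "XX", "US"])
print(codes.map(expand).collect())          # the action triggers execution
print("Unrecognized codes:", bad_records.value)
```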
Introduction to Spark SQL
- What is Spark SQL?
- Uniform Data Access with Spark SQL
- Hive Integration
- Hive Interface
- Integration with BI Tools
- What is a DataFrame?
- Creating a DataFrame in PySpark
- Commonly Used DataFrame Methods and Properties in PySpark
- Grouping and Aggregation in PySpark
- The "DataFrame to RDD" Bridge in PySpark
- The SQLContext Object
- Examples of Spark SQL / DataFrame Operations (PySpark)
- Converting an RDD to a DataFrame Example
- Example of Reading / Writing a JSON File
- Using JDBC Sources
- JDBC Connection Example
- Performance, Scalability, and Fault-tolerance of Spark SQL
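A sketch of DataFrame creation, grouping and aggregation, SQL queries, and the DataFrame-to-RDD bridge (it assumes an existing SparkSession named spark; the sample rows are illustrative):

```python
from pyspark.sql import functions as F

rows = [("alice", "sales", 3000), ("bob", "sales", 4000), ("carol", "hr", 3500)]
df = spark.createDataFrame(rows, ["name", "dept", "salary"])

# Grouping and aggregation
df.groupBy("dept").agg(F.avg("salary").alias("avg_salary")).show()

# Run SQL against a temporary view
df.createOrReplaceTempView("employees")
spark.sql("SELECT dept, COUNT(*) AS n FROM employees GROUP BY dept").show()

# The DataFrame-to-RDD bridge
print(df.rdd.map(lambda row: row.name).collect())
```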
Repairing and Normalizing Data
- Repairing and Normalizing Data
- Dealing with Missing Data
- Sample Data Set
- Getting Info on Null Data
- Dropping a Column
- Interpolating Missing Data in pandas
- Replacing Missing Values with the Mean Value
- Scaling (Normalizing) the Data
- Data Preprocessing with scikit-learn
- Scaling with the scale() Function
- The MinMaxScaler Object
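A sketch of the repair-and-normalize workflow with pandas and scikit-learn (the sample data set is illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, scale

# A small data set with missing values
df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "income": [52_000, 61_000, np.nan, 58_000]})

print(df.isnull().sum())                          # info on null data
df["age"] = df["age"].fillna(df["age"].mean())    # replace missing values with the mean
df["income"] = df["income"].interpolate()         # interpolate remaining gaps

# Scaling (normalizing) the data with scikit-learn
print(scale(df))                                  # zero mean, unit variance
print(MinMaxScaler().fit_transform(df))           # rescale to the [0, 1] range
```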
Data Grouping and Aggregation in Python
- Data Aggregation and Grouping
- Sample Data Set
- The pandas.core.groupby.SeriesGroupBy Object
- Grouping by Two or More Columns
- Emulating SQL's WHERE Clause
- Pivot Tables
- Cross-Tabulation
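A sketch of grouping, a WHERE-style filter, a pivot table, and cross-tabulation in pandas (the sample data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "dept":   ["sales", "sales", "hr", "hr"],
    "city":   ["Atlanta", "Boston", "Atlanta", "Boston"],
    "salary": [3000, 4000, 3500, 3800],
})

print(df.groupby(["dept", "city"])["salary"].mean())   # grouping by two columns
print(df[df["salary"] > 3200])                         # emulating SQL's WHERE clause
print(pd.pivot_table(df, values="salary", index="dept", columns="city", aggfunc="mean"))
print(pd.crosstab(df["dept"], df["city"]))             # cross-tabulation
```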
Conclusion
Training Materials
All Data Analytics training students receive comprehensive courseware.
Software Requirements
- Windows, Mac, or Linux with at least 8 GB RAM
- Most class activities involve writing Spark code and creating visualizations in a browser-based notebook environment. The class also covers how to export these notebooks and how to run the code outside of this environment.
- A current version of Anaconda for Python 3.x
- Related lab files that Accelebrate will provide
- Internet access