Course Number: PYTH-216

Duration: 5 days (32.5 hours)

Format: Live, hands-on

Data Engineering Training Overview

Data engineering sits at the start of every data science project and is concerned with compiling, cleaning, harmonizing, and exploring data for downstream analysis. Analysts often do this crucial preparatory work themselves, but that approach becomes a challenge as projects scale.

This Data Engineering training course teaches aspiring data engineers, data scientists, data science managers, and other quantitative professionals how to prepare and harmonize data in a repeatable and scalable manner. Students learn the pain points that arise as data scales and how to construct a scalable data engineering pipeline. Attendees use Python, PySpark, and Databricks Community Edition for processing on a cloud-based, scalable cluster.

Location and Pricing

Accelebrate offers instructor-led enterprise training for groups of 3 or more online or at your site. Most Accelebrate classes can be flexibly scheduled for your group, including delivery in half-day segments across a week or set of weeks. To receive a customized proposal and price quote for private corporate training on-site or online, please contact us.

Objectives

  • Manually inspect data for quality and reliability
  • Understand the key data ingestion methods
  • Understand the key database types
  • Articulate the use cases for SQL, NoSQL, and graph databases
  • Inspect data with univariate and bivariate inspection methods
  • Flag and quantify severity of outliers
  • Inspect data and flag data for deviation from normality
  • Inspect and flag missing data
  • Generate standard reports for data quality issues
  • Describe the four levels of cloud services (PaaS, SaaS, IaaS, DaaS)
  • Identify the major cloud providers and their offerings
  • Understand the trade-offs between cloud services and on-premises solutions
  • Articulate use cases for pure Python vs PySpark
  • Articulate use cases for local vs cloud-based analytics pipelines
  • Implement a prototype data pipeline with Python in a Jupyter notebook
  • Implement a scalable data pipeline with PySpark
  • Migrate a data pipeline to the cloud using Databricks Community Edition
  • Build an end-to-end solution culminating in a data visualization
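To preview the outlier-flagging objective above, here is a minimal sketch of an interquartile-range (IQR) rule in pandas; the column name and data are invented for illustration.

```python
import pandas as pd

def flag_outliers_iqr(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Mark values outside [Q1 - k*IQR, Q3 + k*IQR] as outliers."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

# Hypothetical sensor readings; 42.0 is an obvious anomaly.
df = pd.DataFrame({"measurement": [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]})
mask = flag_outliers_iqr(df["measurement"])
print(df[mask])  # only the 42.0 row is flagged
```

The mask can also be summed to quantify how many values fall outside the fences, which feeds directly into the standard data-quality reports mentioned above.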

Prerequisites

Students must have a solid understanding of Python and shell command line coding, including file management through the command line and basic UNIX/Linux commands.

Outline


What is Data Engineering?
The Data Lifecycle
  • Scoping and selecting data
  • Staging and harmonization
  • Staging and saving data
  • Analysis and summarization
  • Insight
  • Revise and repeat
  • Data Engineering in the organization
    • Prepares data for downstream consumers
  • Core data engineering responsibilities
    • Stage
    • Cleanse
    • Conform
    • Deliver
    • Plan for scaling and automation
  • Data engineering toolkit
    • PySpark for big data
    • Cloud cluster distributed computing
    • Storage systems
    • Automation and orchestration
  • Next steps
    • Processes scale
    • Track failures and successes
    • Organize growing collections of logs
    • Automate system processes and checks
  • Process orchestration
    • Splunk
    • ELK stack
    • Airflow
Challenges of Modern Data Engineering
  • Data size
    • The four Vs
    • Volume
    • Velocity
    • Variety
    • Veracity
    • How big?
  • Strategies for dealing with big data
    • Streaming
    • Chunking
    • Batching
    • Sampling
  • Types of data
    • Structured
    • Semi-structured
    • Unstructured
    • Form or survey data
    • Tweet stream / text blobs
    • Image or sound data
    • Numeric measurement data
    • Sensor/IoT
    • Dirty or clean
    • Set format
    • Text data
    • Interpretation
    • Contradictory data (summary rules)
  • Resiliency
    • Sampling
    • Eventual consistency
    • Real-time decisions
    • Single or multiple pass processing
  • Presentation and analysis
    • Summarizing your data
    • Data granularity and drill down
    • Defining the question
    • Operationalization
  • Delivery format
    • Data visualization
    • Data summarization
    • Data life cycle
    • Data persistence
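Of the big-data strategies listed above, chunking is the simplest to try locally: pandas can stream a large CSV in fixed-size pieces and aggregate incrementally. A minimal sketch with synthetic data:

```python
import io
import pandas as pd

# Stand-in for a CSV too large to load in one read (synthetic data).
big_csv = io.StringIO("value\n" + "\n".join(str(i) for i in range(1000)))

total, rows = 0.0, 0
for chunk in pd.read_csv(big_csv, chunksize=250):  # four chunks of 250 rows
    total += chunk["value"].sum()
    rows += len(chunk)

mean = total / rows  # computed without ever holding all rows in memory
print(mean)
```

The same pattern scales to files of any size, since only one chunk is resident at a time; streaming and batching generalize this idea to unbounded and scheduled inputs, respectively.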
The Data Science Pipeline
  • The seven steps to data science
    • 1) Collect and clean: Extraction, Transformation, and Loading (ETL)
    • 2) Understand the data: Exploratory Data Analysis (EDA)
    • 3) Modeling and evaluation
    • 4) Interpretation and presentation
    • 5) Revision
    • 6) Productionalization
    • 7) Maintenance
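One way to internalize the seven steps is as a function-per-stage skeleton; the function names and toy data below are illustrative, not part of the course materials.

```python
def collect_and_clean(raw):          # 1) ETL: drop bad records
    return [r for r in raw if r is not None]

def explore(data):                   # 2) EDA: basic summaries
    return {"n": len(data), "min": min(data), "max": max(data)}

def model(data):                     # 3) modeling: trivial "model" = the mean
    return sum(data) / len(data)

def present(summary, fit):           # 4) interpretation and presentation
    return f"{summary['n']} rows, mean={fit:.1f}"

# Steps 5-7 (revision, productionalization, maintenance) iterate on this loop.
raw = [3, None, 5, 4]
data = collect_and_clean(raw)
report = present(explore(data), model(data))
print(report)
```

Keeping each stage as a separate function makes the later steps tractable: a stage can be revised, scheduled, or monitored independently once the pipeline moves to production.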
Data Engineering on the Cloud
  • Local assets vs. the cloud
  • Compute asset management
  • Service architecture
  • Cloud providers
    • Amazon AWS
    • Google Cloud Platform (GCP)
    • Azure
    • DigitalOcean
  • Four levels of cloud service
    • Software as a service (SaaS)
    • Platform as a service (PaaS)
    • Desktop as a service (DaaS)
    • Infrastructure as a service (IaaS)
  • Types of clouds
Python for Data Analytics
  • Alternative analytics coding languages
    • Excel VBA
    • C
    • Java
    • R
    • Golang
    • SPSS
    • SAS
  • Why Python?
    • Python as the glue!
  • PyData ecosystem
    • Scikit-learn
    • Jupyter (notebook and lab)
    • Python platforms
    • Shell
    • Notebooks
    • IDEs
    • Visual Studio
    • SQL connections
    • PySpark
  • The Python community
    • The popularity of the language
    • PEP 8 standards
  • The "Pythonic" code ethic
    • Readability
    • Clear function
    • Least effort
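The "Pythonic" ethic is concrete rather than aspirational: prefer readable idioms over manual index bookkeeping. A small before-and-after sketch:

```python
names = ["spark", "pandas", "dask"]

# Non-Pythonic: manual index management obscures the intent.
upper = []
for i in range(len(names)):
    upper.append(names[i].upper())

# Pythonic: a comprehension states the same transformation directly.
upper_pythonic = [n.upper() for n in names]

assert upper == upper_pythonic
print(upper_pythonic)
```

Both versions compute the same list, but the comprehension reads as a single statement of intent, which is the readability and least-effort ideal named above.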
The Data Science Flow Using Python
  • Import dependencies
  • Import data
  • Check data quality
  • Data code book
  • Data dictionary
  • Missing data
  • Bias
  • Variance
  • Data distribution
  • Sanity checking
  • Check experimental design
  • Experimental protocol
  • Non-random sampling issues
  • Data cleaning
  • Imputation
  • Unbalanced samples
  • Data exploration
  • Univariate
  • Bivariate
  • Corrplots
  • Data visualization
  • Matplotlib
  • Seaborn
  • Holoviews for really big data
  • Dashboard with panel
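Several of the checks above (missing data, distributions, bivariate relationships) reduce to a few lines of pandas; the DataFrame here is invented for illustration.

```python
import pandas as pd

# Hypothetical survey data with deliberate gaps.
df = pd.DataFrame({
    "age": [34, 29, None, 41, 38],
    "income": [52000, 48000, 61000, None, 58000],
})

missing = df.isna().sum()   # missing-data count per column
desc = df.describe()        # univariate distribution summary
corr = df.corr()            # bivariate (linear) association

print(missing)
```

These quick summaries are the starting point for the imputation and sanity-checking topics above: the missing-data counts tell you whether imputation is needed at all, and the correlation matrix feeds directly into corrplots.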
PySpark for Big Data
  • Why use PySpark?
    • Distributed clusters
    • Single context
  • Alternatives
    • Pandas (Python)
    • R (data.table and Tidyverse)
    • Dask
    • Hadoop
  • Core Spark components
    • Spark architecture
    • Spark session
    • Spark schema
    • Transformations
    • Actions
    • Leveraging Spark
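Spark's transformation/action split can be previewed without a cluster: Python generators are lazy in the same spirit, doing no work until a terminal operation consumes them. This is an analogy in plain Python, not PySpark code.

```python
data = range(10)

# "Transformations": build a lazy pipeline; nothing runs yet.
doubled = (x * 2 for x in data)
multiples_of_four = (x for x in doubled if x % 4 == 0)

# "Action": materializing the pipeline triggers the whole chain,
# much as .collect() or .count() triggers Spark's execution plan.
result = list(multiples_of_four)
print(result)
```

As in Spark, the chain of transformations describes the computation, and only the action forces evaluation; this is what lets Spark optimize and distribute the whole plan before running it.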
Using the PySpark API
  • What is PySpark?
    • A bridge to the Java Virtual Machine (JVM) via Py4J
    • A Python wrapper for Spark's Scala core
    • When to use Spark Scala instead
  • Spark APIs
    • DataFrames
    • Dataset
    • RDD
    • Speed considerations: Python DataFrame vs. Python RDD
    • Return types: RDD vs. the other APIs
  • PySpark coding
    • Data exploration
    • Functions
    • Spark DF to pandas DataFrame
Data Pipelines with PySpark locally
  • Build a project using Spark in a Jupyter notebook
  • Install Spark locally
    • Spark shell
    • PySpark drivers
    • PySpark env vars
  • Introduction to notebooking
    • Colaboratory introduction
    • Data scientist go-to toolkit
    • Iterative coding
    • Testing and prototyping
    • Communication
    • Markdown and code
  • Open a Colaboratory notebook for shared analysis
    • Code cells and markdown
    • Use Markdown for notes and LaTeX for equations
    • Kernel definition and intro
  • Get CSV from the web
    • !wget
  • Explore the file system structure
    • !ls, !pwd
  • Walkthrough the data science workflow on Spark
    • Data ingestion
    • EDA exploratory data analysis
    • ETL (extract transform and load)
    • Iterative data exploration
    • Data visualization
Databricks as an End-to-End Cloud Solution
  • Databricks Community Edition
    • A free offering to build a cloud cluster
    • Try and troubleshoot the first iteration of a workflow
  • Why Databricks?
    • Easy, repeatable setup
    • Cross-organization standardized platform
    • End-to-end solution
    • Automatic Spark dependencies and cluster generation
  • Databricks history
    • Hadoop MapReduce
    • Berkeley AMPLab
  • Why use Spark on the cloud?
    • Scale with clusters
    • On-demand resources
    • Speed acceleration
  • Setting up Databricks
  • Databricks can use different backends
    • AWS, GCP, or Azure, or the Community Edition
    • Plan selection
  • Start with Databricks Community Edition
    • community.cloud.databricks.com
  • Databricks tour
    • Databricks concepts
    • Workspaces
    • Notebooks
    • Clusters
    • Libraries
    • Tables
    • Jobs (scheduling)
    • The DBC file format
    • Databricks demo gallery
  • Make a notebook in the workspace
    • Markdown and Python in a Databricks notebook
    • Create a Spark context
    • Automatic Spark setup and cluster generation
    • Managing Databricks clusters
Machine Learning Workflow with PySpark on the Cloud
  • Set up and execute Machine Learning (ML) flows
  • Use Databricks to run ML flows on students’ data
  • Increased performance with Spark on a cluster
  • Set up a PySpark notebook on Databricks
    • Demonstrate NLP and clustering workflows on Twitter data
    • Demonstrate cluster use and optimization on the same analysis
    • Student workshop to bring all concepts together and present result
Conclusion

Training Materials

All Data Engineering training students receive comprehensive courseware.

Software Requirements

  • Anaconda Python 3.6 or later
  • Spyder IDE and Jupyter Notebook (included with Anaconda)


Learn faster

Our live, instructor-led lectures are far more effective than pre-recorded classes

Satisfaction guarantee

If your team is not 100% satisfied with your training, we do what's necessary to make it right

Learn online from anywhere

Whether you are at home or in the office, we make learning interactive and engaging

Multiple Payment Options

We accept check, ACH/EFT, major credit cards, and most purchase orders



Recent Training Locations

Alabama

Birmingham

Huntsville

Montgomery

Alaska

Anchorage

Arizona

Phoenix

Tucson

Arkansas

Fayetteville

Little Rock

California

Los Angeles

Oakland

Orange County

Sacramento

San Diego

San Francisco

San Jose

Colorado

Boulder

Colorado Springs

Denver

Connecticut

Hartford

DC

Washington

Florida

Fort Lauderdale

Jacksonville

Miami

Orlando

Tampa

Georgia

Atlanta

Augusta

Savannah

Hawaii

Honolulu

Idaho

Boise

Illinois

Chicago

Indiana

Indianapolis

Iowa

Cedar Rapids

Des Moines

Kansas

Wichita

Kentucky

Lexington

Louisville

Louisiana

New Orleans

Maine

Portland

Maryland

Annapolis

Baltimore

Frederick

Hagerstown

Massachusetts

Boston

Cambridge

Springfield

Michigan

Ann Arbor

Detroit

Grand Rapids

Minnesota

Minneapolis

Saint Paul

Mississippi

Jackson

Missouri

Kansas City

St. Louis

Nebraska

Lincoln

Omaha

Nevada

Las Vegas

Reno

New Jersey

Princeton

New Mexico

Albuquerque

New York

Albany

Buffalo

New York City

White Plains

North Carolina

Charlotte

Durham

Raleigh

Ohio

Akron

Canton

Cincinnati

Cleveland

Columbus

Dayton

Oklahoma

Oklahoma City

Tulsa

Oregon

Portland

Pennsylvania

Philadelphia

Pittsburgh

Rhode Island

Providence

South Carolina

Charleston

Columbia

Greenville

Tennessee

Knoxville

Memphis

Nashville

Texas

Austin

Dallas

El Paso

Houston

San Antonio

Utah

Salt Lake City

Virginia

Alexandria

Arlington

Norfolk

Richmond

Washington

Seattle

Tacoma

West Virginia

Charleston

Wisconsin

Madison

Milwaukee

Alberta

Calgary

Edmonton

British Columbia

Vancouver

Manitoba

Winnipeg

Nova Scotia

Halifax

Ontario

Ottawa

Toronto

Quebec

Montreal

Puerto Rico

San Juan