Machine Learning Security

SEC-140 (4 Days)

Secure Machine Learning Training Overview

Attacks on machine learning (ML) applications are gaining momentum, and protecting against an ML breach is essential. This Machine Learning Security training course teaches developers the skills they need to protect their ML applications. Students learn specialized secure coding skills and how to avoid the security pitfalls of the Python programming language.

Location and Pricing

Accelebrate courses are taught as private, customized training for groups of 3 or more at your site. In addition, we offer live, private online training for teams who may be in multiple locations or wish to save on travel costs. To receive a customized proposal and price quote for private on-site or online training, please contact us.

Secure Machine Learning Training Objectives

All students will:

  • Understand essential cyber security concepts
  • Learn about various aspects of machine learning security
  • Discover the possible attacks and defense techniques in adversarial machine learning
  • Identify vulnerabilities and their consequences
  • Learn the security best practices in Python
  • Understand input validation approaches and principles
  • Manage vulnerabilities in third-party components
  • Understand how cryptography can support application security
  • Learn how to use cryptographic APIs correctly in Python
  • Understand security testing methodology and approaches
  • Be familiar with common security testing techniques and tools

Secure Machine Learning Training Outline


Cyber Security Basics
  • What is security?
  • Threat and risk
  • Cyber security threat types
  • Consequences of insecure software
    • Constraints and the market
    • The dark side
  • Categorization of bugs
    • The Seven Pernicious Kingdoms
    • Common Weakness Enumeration (CWE)
    • CWE Top 25 Most Dangerous Software Errors
    • Vulnerabilities in the environment and dependencies
Cyber Security in Machine Learning
  • ML-specific cyber security considerations
  • What makes machine learning a valuable target?
  • Possible consequences
  • Inadvertent AI failures
  • Some real-world abuse examples
  • ML threat model
    • Creating a threat model for machine learning
    • Machine learning assets
    • Security requirements
    • Attack surface
    • Attacker model – resources, capabilities, goals
    • Confidentiality threats
    • Integrity threats (model)
    • Integrity threats (data, software)
    • Availability threats
    • Dealing with AI/ML threats in software security
Using ML in Cyber Security
  • Static code analysis and ML
  • ML in fuzz testing
  • ML in anomaly detection and network security
  • Limitations of ML in security
Malicious Use of AI and ML
  • Social engineering attacks and media manipulation
  • Vulnerability exploitation
  • Malware automation
  • Endpoint security evasion
Adversarial Machine Learning
  • Threats against machine learning
  • Attacks against machine learning integrity
    • Poisoning attacks
    • Poisoning attacks against supervised learning
    • Poisoning attacks against unsupervised and reinforcement learning
    • Evasion attacks
    • Common white-box evasion attack algorithms (an FGSM sketch follows this module)
    • Common black-box evasion attack algorithms
    • Transferability of poisoning and evasion attacks
  • Some defense techniques against adversarial samples
    • Adversarial training
    • Defensive distillation
    • Gradient masking
    • Feature squeezing
    • Using reformers on adversarial data
    • Caveats about the efficacy of current adversarial defenses
    • Simple practical defenses
  • Attacks against machine learning confidentiality
    • Model extraction attacks
    • Defending against model extraction attacks
    • Model inversion attacks
    • Defending against model inversion attacks
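
To make the white-box evasion discussion concrete, below is a minimal Fast Gradient Sign Method (FGSM) sketch against a toy logistic-regression model, written with NumPy only. The weights, bias, and input are made-up placeholders; this is an illustrative sketch under those assumptions, not the course's lab material.

    # FGSM sketch: perturb an input in the direction that increases the loss.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, epsilon=0.1):
        """Craft an adversarial example for a logistic-regression model.
        x: input features, y: true label (0/1), w, b: model parameters,
        epsilon: maximum per-feature perturbation."""
        # Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w
        grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
        # Step in the sign of the gradient to maximize the loss increase.
        return x + epsilon * np.sign(grad_x)

    # Toy demonstration with hypothetical parameters and input.
    w, b = np.array([2.0, -1.0]), 0.0
    x, y = np.array([0.6, 0.2]), 1
    x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
    print("original score:   ", sigmoid(np.dot(w, x) + b))      # ~0.73 (class 1)
    print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))  # ~0.38 (flips to class 0)

The same idea carries over to deep models, where the input gradient is obtained by backpropagation instead of a closed-form expression.
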
Denial of Service
  • Denial of Service
  • Resource exhaustion
  • Cash overflow
  • Flooding
  • Algorithm complexity issues
  • Denial of service in ML
    • Accuracy reduction attacks
    • Denial-of-information attacks
    • Catastrophic forgetting in neural networks
    • Resource exhaustion attacks against ML
    • Best practices for protecting availability in ML systems (one example follows this module)
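
As one practical availability defense, the sketch below caps CPU time and memory before handling untrusted input, using the standard resource module (Unix-only). The limits and the handler function are illustrative placeholders, not prescribed values.

    # Cap CPU time and address space before processing untrusted input (Unix-only).
    import resource

    def set_processing_limits(cpu_seconds=5, memory_bytes=1024 * 1024 * 1024):
        # Exceeding the CPU limit terminates the process instead of hanging the service.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        # Capping the address space prevents a crafted input from exhausting memory.
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    def handle_untrusted_input(data: bytes) -> int:
        set_processing_limits()
        # ... hypothetical expensive parsing / feature extraction would go here ...
        return len(data)

    if __name__ == "__main__":
        print(handle_untrusted_input(b"example payload"))

In practice such limits are applied in a dedicated worker process (for example via multiprocessing), so a runaway job does not take the main service down with it.
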
Input Validation Principles
  • Blacklists and whitelists
  • Data validation techniques
  • What to validate – the attack surface
  • Where to validate – defense in depth
  • How to validate – validation vs transformations
  • Output sanitization
  • Encoding challenges
  • Validation with regex
  • Regular expression denial of service (ReDoS)
  • Dealing with ReDoS (see the sketch below)
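
The ReDoS topic is easiest to see in code. The sketch below places a vulnerable pattern with nested quantifiers next to a safe rewrite plus a length check; the pattern, limit, and payload are illustrative.

    # ReDoS sketch: nested quantifiers backtrack catastrophically on crafted input.
    import re

    # Vulnerable: (a+)+ explores exponentially many ways to split a run of "a"s
    # once the final "$" fails to match -- do not run this on untrusted input.
    VULNERABLE = re.compile(r"^(a+)+$")

    # Safer: the same language without the nested quantifier.
    SAFE = re.compile(r"^a+$")

    MAX_LEN = 256  # bound the input size before any regex runs on it

    def is_valid(value: str) -> bool:
        return len(value) <= MAX_LEN and SAFE.fullmatch(value) is not None

    payload = "a" * 30 + "!"   # classic ReDoS trigger for the vulnerable pattern
    print(is_valid(payload))   # False, and it returns immediately

Python's built-in re module has no matching timeout, so input size limits and careful pattern design are the primary defenses.
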
Injection
  • Injection principles
  • Injection attacks
  • SQL injection
  • SQL injection basics
  • Attack techniques
  • Content-based blind SQL injection
  • Time-based blind SQL injection
  • SQL injection best practices
  • Input validation
  • Parameterized queries (see the sketch below)
  • Additional considerations
  • SQL injection and ORM
  • Code injection
    • Code injection via input()
    • OS command injection
  • General protection best practices
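
The parameterized-queries item above is illustrated by the sketch below, which contrasts string-built SQL with a parameterized query using the standard sqlite3 module; the table and data are made up for the example.

    # SQL injection sketch: string concatenation vs. parameterized queries.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

    def find_user_unsafe(name: str):
        # VULNERABLE: attacker-controlled input becomes part of the SQL text.
        query = "SELECT name, role FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(name: str):
        # Parameterized query: the driver treats `name` strictly as data.
        return conn.execute(
            "SELECT name, role FROM users WHERE name = ?", (name,)
        ).fetchall()

    payload = "x' OR '1'='1"
    print(find_user_unsafe(payload))  # leaks every row
    print(find_user_safe(payload))    # returns []

The same principle applies to ORMs: prefer their query-building APIs over raw SQL strings assembled from user input.
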
Integer Handling Problems
  • Representing signed numbers
  • Integer visualization
  • Integers in Python
  • Integer overflow
  • Integer overflow with ctypes and NumPy (illustrated below)
  • Other numeric problems
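
The ctypes/NumPy overflow item is shown concretely below: Python's built-in int never overflows, but fixed-width integers used through ctypes or NumPy silently wrap around.

    # Fixed-width integer overflow sketch.
    import ctypes
    import numpy as np

    big = 2**31 - 1                        # INT32_MAX

    print(big + 1)                         # 2147483648 -- Python ints grow as needed
    print(ctypes.c_int32(big + 1).value)   # -2147483648 -- the 32-bit value wraps around

    arr = np.array([big], dtype=np.int32)
    print(arr + 1)                         # [-2147483648] -- NumPy wraps as well

Overflow like this matters wherever Python hands numbers to native code or fixed-width buffers, such as preprocessing pipelines built on NumPy.
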
Files and Streams
  • Path traversal
  • Path traversal-related examples
  • Additional challenges in Windows
  • Virtual resources
  • Path traversal best practices (see the sketch below)
  • Format string issues
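
As a counterpart to the path traversal best practices above, here is a minimal containment check: resolve the requested path and verify it is still inside an allowed base directory. The base directory and file names are placeholders, and Path.is_relative_to requires Python 3.9+.

    # Path traversal defense sketch: resolve, then verify containment.
    from pathlib import Path

    BASE_DIR = Path("/srv/app/uploads").resolve()

    def open_upload(filename: str):
        # Resolve symlinks and ".." components before making any decision.
        candidate = (BASE_DIR / filename).resolve()
        # Reject anything that escaped the base directory.
        if not candidate.is_relative_to(BASE_DIR):
            raise ValueError("path traversal attempt rejected")
        return candidate.open("rb")

    # open_upload("../../etc/passwd")  # would raise ValueError
    # open_upload("report.csv")        # served from BASE_DIR only
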
Unsafe Native Code
  • Native code dependence
  • Best practices for dealing with native code
Input Validation in Machine Learning
  • Misleading the machine learning mechanism
  • Sanitizing data against poisoning – RONI (sketched below)
  • Code vulnerabilities causing evasion, misprediction, or misclustering
  • Typical ML input formats and their security
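
The poisoning-sanitization item above, Reject On Negative Impact (RONI), can be sketched in a few lines, assuming scikit-learn is available: a candidate sample is admitted to the training set only if adding it does not reduce accuracy on a trusted validation set. The classifier, tolerance, and data below are illustrative placeholders.

    # RONI sketch: vet a candidate training sample by its impact on validation accuracy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def roni_accepts(X_train, y_train, X_val, y_val, x_new, y_new, tolerance=0.0):
        """Return True if the candidate (x_new, y_new) may join the training set."""
        base_acc = LogisticRegression().fit(X_train, y_train).score(X_val, y_val)
        X_aug = np.vstack([X_train, np.asarray(x_new).reshape(1, -1)])
        y_aug = np.append(y_train, y_new)
        aug_acc = LogisticRegression().fit(X_aug, y_aug).score(X_val, y_val)
        # Reject the candidate if the retrained model performs measurably worse.
        return aug_acc >= base_acc - tolerance

    # Illustrative use with synthetic, well-separated clusters.
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
    y_train = np.array([0] * 20 + [1] * 20)
    X_val = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
    y_val = np.array([0] * 20 + [1] * 20)

    candidate = np.array([-2.0, -2.0])   # lies in the class-0 region
    print(roni_accepts(X_train, y_train, X_val, y_val, candidate, y_new=1))

In practice RONI is applied to batches of incoming data, and the tolerance is calibrated against the normal variance of the validation score; a single mislabeled point often has too little influence to be detected on its own.
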
Security Features
  • Authentication
    • Authentication basics
    • Multi-factor authentication
    • Authentication weaknesses – spoofing
    • Password management (see the sketch below)
  • Information exposure
    • Exposure through extracted data and aggregation
    • Privacy violation
    • System information leakage
    • Information exposure best practices
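
For the password management item above, the sketch below shows salted password hashing with a slow key-derivation function, using only the standard library; the iteration count and the salt$hash storage format are illustrative choices.

    # Password storage sketch: per-user salt + PBKDF2-HMAC + constant-time compare.
    import hashlib
    import hmac
    import secrets

    ITERATIONS = 600_000  # tune to your hardware; higher is costlier for attackers

    def hash_password(password: str) -> str:
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt.hex() + "$" + digest.hex()

    def verify_password(password: str, stored: str) -> bool:
        salt_hex, digest_hex = stored.split("$")
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), bytes.fromhex(salt_hex), ITERATIONS
        )
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))

    record = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", record))  # True
    print(verify_password("wrong guess", record))                   # False
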
Time and State
  • Race conditions
    • File race condition
    • Avoiding race conditions in Python (see the sketch below)
  • Mutual exclusion and locking
    • Deadlocks
  • Synchronization and thread safety
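
The race condition items above are illustrated below with two classic cases: a TOCTOU-free file creation and a lock-protected shared counter. The file name and counter are placeholders.

    # Race condition sketch: atomic file creation and lock-protected shared state.
    import os
    import threading

    def create_private_file(path: str) -> int:
        # Atomically create-if-absent with restrictive permissions; no separate
        # "does it exist?" check that an attacker could race against (TOCTOU).
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)

    counter = 0
    counter_lock = threading.Lock()

    def increment() -> None:
        global counter
        # Without the lock, concurrent read-modify-write updates could be lost.
        with counter_lock:
            counter += 1

    threads = [threading.Thread(target=increment) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 100

    # fd = create_private_file("session.key")  # raises FileExistsError if the file exists
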
Errors
  • Error handling
    • Returning a misleading status code
    • Information exposure through error reporting
  • Exception handling
    • In the except block. And now what?
    • Empty except block
    • The danger of assert statements (illustrated below)
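
The assert pitfall deserves a concrete look: running Python with the -O flag strips assert statements, so they must never enforce security decisions. The role check below is a made-up placeholder.

    # assert sketch: security checks must not rely on assert.
    def delete_account_bad(user: str, is_admin: bool) -> None:
        # BAD: this line disappears under `python -O`, so anyone can delete accounts.
        assert is_admin, "admin privileges required"
        print(f"deleting {user}")

    def delete_account_good(user: str, is_admin: bool) -> None:
        # GOOD: an explicit check and exception survive optimized mode.
        if not is_admin:
            raise PermissionError("admin privileges required")
        print(f"deleting {user}")

    try:
        delete_account_good("alice", is_admin=False)
    except PermissionError as exc:
        print("rejected:", exc)
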
Using Vulnerable Components
  • Assessing the environment
  • Hardening
  • Malicious packages in Python
  • Vulnerability management
    • Patch management
    • Vulnerability management
    • Bug bounty programs
    • Vulnerability databases
    • Vulnerability rating – CVSS
    • DevOps, the build process, and CI/CD
    • Dependency checking in Python (see the sketch below)
  • ML Supply Chain Risks
    • Common ML system architectures
    • ML system architecture and the attack surface
    • Protecting data in transit – transport layer security
    • Protecting data in use – homomorphic encryption
    • Protecting data in use – differential privacy
    • Protecting data in use – multi-party computation
  • ML frameworks and security
    • General security concerns about ML platforms
    • TensorFlow security issues and vulnerabilities
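
As a starting point for the dependency checking item above, the sketch below builds an inventory of installed packages and versions with the standard library; matching that inventory against a vulnerability database is left to dedicated tooling such as pip-audit or an OSV lookup.

    # Dependency inventory sketch: list installed distributions and versions.
    from importlib.metadata import distributions

    def installed_packages() -> dict:
        # Map each installed distribution name to its version string.
        return {
            dist.metadata["Name"]: dist.version
            for dist in distributions()
            if dist.metadata["Name"]
        }

    if __name__ == "__main__":
        for name, version in sorted(installed_packages().items()):
            print(f"{name}=={version}")
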
Cryptography for Developers
  • Cryptography basics
  • Cryptography in Python (examples follow this module)
  • Elementary algorithms
    • Random number generation
    • Hashing
  • Confidentiality protection
    • Symmetric encryption
  • Homomorphic encryption
    • Basics of homomorphic encryption
    • Types of homomorphic encryption
    • FHE in machine learning
  • Integrity protection
    • Message Authentication Code (MAC)
    • Digital signature
  • Public Key Infrastructure (PKI)
    • Some further key management challenges
    • Certificates
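
To tie the cryptography topics above to Python APIs, the sketch below uses secrets for randomness, hashlib/hmac for hashing and message authentication, and Fernet for authenticated symmetric encryption. Fernet comes from the third-party cryptography package, which is an assumed dependency here.

    # Correct-use sketch for a few cryptographic building blocks.
    import hashlib
    import hmac
    import secrets
    from cryptography.fernet import Fernet

    # Security-sensitive randomness: use secrets, never the random module.
    session_token = secrets.token_urlsafe(32)

    # Hashing: good for integrity fingerprints, NOT for password storage.
    fingerprint = hashlib.sha256(b"model-weights-v1").hexdigest()

    # Message authentication code: detects tampering using a shared key.
    mac_key = secrets.token_bytes(32)
    tag = hmac.new(mac_key, b"important message", hashlib.sha256).hexdigest()

    # Authenticated symmetric encryption: confidentiality plus integrity.
    key = Fernet.generate_key()              # the key itself must be stored securely
    cipher = Fernet(key)
    token = cipher.encrypt(b"training data batch")
    assert cipher.decrypt(token) == b"training data batch"

    print(session_token, fingerprint, tag, sep="\n")
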
Security Testing
  • Security testing methodology
    • Security testing – goals and methodologies
    • Overview of security testing processes
    • Threat modeling
  • Security testing techniques and tools
    • Code analysis
    • Dynamic analysis
Wrap Up
  • Secure coding principles
    • Principles of robust programming by Matt Bishop
    • Secure design principles of Saltzer and Schroeder
  • And now what?
    • Software security sources and further reading
    • Python resources
    • Machine learning security resources

Machine Learning Security Webinar

In this ML Security webinar, one of our senior secure coding trainers discusses the ways your systems may be vulnerable to attack and what you can do to defend them.

Lecture/Lab Split:

50% Lecture/Demo, 50% Lab

Course Number:

SEC-140

Duration:

4 Days

Prerequisites:

Students should be Python developers working on machine learning systems.

Training Materials:

All attendees receive comprehensive courseware.

Software Requirements:

Accelebrate can provide either a VMware virtual machine that participants run locally during the training or access to a preconfigured cloud environment for each participant. Please contact us for details.

Contact Us:

Accelebrate’s training classes are available for private groups of 3 or more people at your site or online anywhere worldwide.

Don't settle for a "one size fits all" public class! Have Accelebrate deliver exactly the training you want, privately at your site or online, for less than the cost of a public class.

For pricing and to learn more, please contact us.
