High Performance Deep Learning on Intel Architecture (presentation)

Contents

Slide 2

Bigger Data

Better Hardware

Smarter Algorithms

Fast Evolution of Technology

Image: 1,000 KB / picture
Audio: 5,000 KB / song
Video: 5,000,000 KB / movie

Transistor density doubles every 18 months
Cost / GB in 1995: $1000.00
Cost / GB in 2015: $0.03

Advances in neural networks leading to better accuracy in training models

Slide 3

Classical Machine Learning

CLASSIFIER
SVM
Random Forest
Naïve Bayes
Decision Trees
Logistic Regression
Ensemble methods

Slide 4

Deep learning

A method of extracting features at multiple levels of abstraction

Features are discovered from data

Performance improves with more data

High degree of representational power

Slide 5

End-to-End Deep Learning

~60 million parameters

But old practices apply: data cleaning, exploration, data annotation, hyperparameters, etc.

Slide 6

Automating previously “human” tasks

[Charts: ImageNet error rate, 2010–present, and speech error rate, 2000–present, both dropping below human performance once deep learning is used]

Slide 7

Deep Learning Challenges

(1) Large compute requirements for training

Slide 8

Deep Learning Challenges

(2) Performance scales with data

Slide 9

Scaling is I/O Bound

[Chart: learning speed vs. number of processors. Industry standard: communication overhead imposes a performance ceiling. Nervana technology: a better communication fabric yields near-linear scaling.]

Slide 10

Intel Provides the Compute Foundation for DL

Deep Learning Frameworks

Slide 11

INTEL® MKL-DNN

Slide 12

Deep learning with Intel® MKL-DNN

Intel® MKL: SW building block to extract maximum performance on Intel® CPUs
Provides a common interface to all Intel® CPUs
Supports a variety of popular Deep Learning frameworks

*Other names and brands may be claimed as property of others.

[Diagram: Deep Learning frameworks running on Intel® MKL-DNN and the Intel® Math Kernel Library (Intel® MKL), host & offload]

Intel® MKL-DNN and Intel® MKL: the path to bring performance to DL frameworks on Intel® CPUs

Slide 13

Deep learning with Intel® MKL-DNN

Intel® MKL-DNN
Tech preview: https://01.org/mkl-dnn
Demonstrates interfaces and the library structure for accepting external contributions
Single precision
Plans to enable more primitives to support additional topologies
Plans for more optimizations for the latest generation of Intel® CPUs

Slide 14

Deep learning with Intel® MKL-DNN

Intel® MKL-DNN Programming Model
Primitive – any operation (convolution, data format re-order, memory)
Operation/memory descriptor – convolution parameters or memory dimensions
Primitive descriptor – a complete description of a primitive, combining an operation descriptor with the engine
Primitive – a specific instance of an operation, created from a primitive descriptor
Engine – execution device (e.g., CPU)
Stream – execution context

/* Initialize CPU engine */
auto cpu_engine = mkldnn::engine(mkldnn::engine::cpu, 0);
/* Create a vector of primitives */
std::vector<mkldnn::primitive> net;
/* Allocate input data and create a tensor structure that describes it */
std::vector<float> src(2 * 3 * 227 * 227);
mkldnn::tensor::dims conv_src_dims = {2, 3, 227, 227};
/* Create memory descriptors, one for the user data and another for the
   convolution input; format::any lets the library choose the best layout */
auto user_src_md = mkldnn::memory::desc({conv_src_dims},
        mkldnn::memory::precision::f32, mkldnn::memory::format::nchw);
auto conv_src_md = mkldnn::memory::desc({conv_src_dims},
        mkldnn::memory::precision::f32, mkldnn::memory::format::any);
/* Create the convolution descriptor; conv_weights_md, conv_bias_md and
   conv_dst_md are built the same way as conv_src_md (elided on the slide) */
auto conv_desc = mkldnn::convolution::desc(
        mkldnn::prop_kind::forward, mkldnn::convolution::direct,
        conv_src_md, conv_weights_md, conv_bias_md, conv_dst_md,
        {1, 1}, {0, 0}, mkldnn::padding_kind::zero);
/* Create a convolution primitive descriptor */
auto conv_pd = mkldnn::convolution::primitive_desc(conv_desc, cpu_engine);
/* Create a memory primitive descriptor and a memory primitive,
   using the cpu_engine created above */
auto user_src_memory_descriptor
        = mkldnn::memory::primitive_desc(user_src_md, cpu_engine);
auto user_src_memory = mkldnn::memory(user_src_memory_descriptor, src);
/* Create a convolution primitive and add it to the net; conv_input and the
   weights/bias/dst memory primitives are built like user_src_memory above
   (elided on the slide) */
auto conv = mkldnn::convolution(conv_pd, conv_input, conv_weights_memory,
        conv_user_bias_memory, conv_dst_memory);
net.push_back(conv);
/* Create a stream, submit all primitives and wait for completion */
mkldnn::stream().submit(net).wait();

Slide 15

Intel® Xeon Phi™ processor 7250: up to 400x performance increase with Intel Optimized Frameworks compared to baseline out-of-the-box performance

[Chart: normalized images/second relative to the Intel® Xeon Phi™ processor 7250 baseline; higher is better; up to 400x]

Configuration details available in backup
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance Source: Intel measured as of November 2016

Slide 16

Intel® Xeon Phi™ processor Knights Mill: up to 4x estimated performance improvement over Intel® Xeon Phi™ processor 7290

[Chart: estimated normalized performance of Intel® Xeon Phi™ Knights Mill relative to Intel® Xeon Phi™ processor 7290; up to 4x]

Configuration details available in backup
Knights Mill performance: Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance Source: Intel measured baseline Intel® Xeon Phi™ processor 7290 as of November 2016

Slide 17

INTEL® Machine Learning Scaling Library

Slide 18

Intel® Machine Learning Scaling Library (MLSL)

Deep learning abstraction of message-passing implementations
Built on top of MPI; allows other communication libraries to be used
Optimized to drive scalability of communication patterns
Works across various interconnects: Intel® Omni-Path Architecture, InfiniBand, and Ethernet
Common API to support Deep Learning frameworks (Caffe, Theano, Torch, etc.)

Scaling Deep Learning to 32 nodes and beyond.
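
To make the communication pattern concrete, the sketch below shows the synchronous data-parallel step that MLSL optimizes: every node computes gradients on its local data shard, and an allreduce averages them before each weight update. This is a minimal illustration written against raw MPI (the layer MLSL builds on), not the MLSL API itself; buffer sizes and names are illustrative.

/* Minimal sketch: data-parallel gradient averaging with MPI_Allreduce.
   MLSL layers a DL-aware, optimized API over this same pattern. */
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int nparams = 1 << 20;              /* size of one gradient buffer */
    std::vector<float> grad(nparams, 0.1f);   /* stand-in for backprop output */

    /* Sum the gradient buffers of all nodes in place, then average. */
    MPI_Allreduce(MPI_IN_PLACE, grad.data(), nparams,
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    for (float& g : grad) g /= nranks;

    /* ...apply the averaged gradient to the local copy of the weights... */

    MPI_Finalize();
    return 0;
}

This allreduce is exactly where the communication overhead shown on the earlier scaling slide caps performance, which is why MLSL optimizes the pattern across Omni-Path, InfiniBand, and Ethernet.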

Slide 19

Intel® Xeon Phi™ Processor 7250 GoogleNet V1 time-to-train scaling efficiency: up to 97% on 32 nodes

Configuration details available in backup
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance Source: Intel measured as of November 2016

Data pre-partitioned across all nodes in the cluster before training. There is no data transferred over the fabric while training.

[Chart: scaling efficiency vs. number of Intel® Xeon Phi™ Processor 7250 nodes]

Slide 20

Neon framework

Slide 21

Neon: DL Framework with Blazing Performance

Slide 22

Intel® Nervana™ Graph Compiler

Intel® Nervana™ Graph Compiler: a high-level execution graph for neural networks, enabling optimizations that apply across multiple HW targets.

[Diagram: hardware-agnostic layers (Neon solutions, models, and deep learning functions; customer solutions, models, and algorithms) feed the Intel® Nervana™ Graph Compiler, which maps onto hardware-specific transformers]

Efficient buffer allocation
Training vs inference optimizations
Efficient scaling across multiple nodes
Efficient partitioning of subgraphs
Compounding of ops (see the sketch below)
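
As a rough illustration of op compounding, here is a hypothetical sketch of a fusion pass over a toy graph IR; it is not the Intel® Nervana™ Graph API, and all types and names are invented for illustration. The pass rewrites a ReLU whose only input is a convolution into a single fused node, so one kernel does the work of two.

/* Hypothetical sketch of "compounding of ops" (fusion) in a toy graph IR. */
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string op;                            /* e.g. "conv", "relu", "conv+relu" */
    std::vector<std::shared_ptr<Node>> inputs; /* producer nodes */
};

/* Rewrite relu(conv(x)) into a single fused "conv+relu" node.
   Assumes the conv has no other consumers. */
void compound_conv_relu(std::vector<std::shared_ptr<Node>>& graph) {
    for (auto& n : graph) {
        if (n->op == "relu" && n->inputs.size() == 1 &&
            n->inputs[0]->op == "conv") {
            auto conv = n->inputs[0];
            n->op = "conv+relu";       /* one fused kernel instead of two */
            n->inputs = conv->inputs;  /* bypass the absorbed conv node */
        }
    }
}

int main() {
    auto input = std::make_shared<Node>(Node{"input", {}});
    auto conv  = std::make_shared<Node>(Node{"conv", {input}});
    auto relu  = std::make_shared<Node>(Node{"relu", {conv}});
    std::vector<std::shared_ptr<Node>> graph = {input, conv, relu};
    compound_conv_relu(graph);  /* relu becomes "conv+relu" fed by input */
    return 0;
}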

Slide 23

Intel® Nervana™ Graph Compiler as the performance building block to accelerate all the latest DL innovations across the industry.

Slide 24

INTEL® Deep Learning SDK

Slide 25

Intel® Deep Learning SDK: Accelerate Your Deep Learning Solution

A free set of tools for data scientists and software developers to develop, train, and deploy deep learning solutions

Increased Productivity
Faster time-to-market for training and inference, improved model accuracy, reduced total cost of ownership

Maximum Performance
Optimized performance for training and inference on Intel® Architecture

“Plug & Train/Deploy”
Simplify installation & preparation of deep learning models using popular deep learning frameworks on Intel hardware

How-to guide: setting up Intel Caffe

Slide 26

Deep Learning Training Tool (Intel® Deep Learning SDK)

Simplifies installation of Intel-optimized Deep Learning Frameworks
An easy, visual way to set up, tune, and run deep learning algorithms:
Create a training dataset
Design a model with automatically optimized hyper-parameters
Launch and monitor training of multiple candidate models
Visualize training performance and accuracy

[Diagram: the data scientist uses the DL Training Tool to install, configure, and run an MKL-DNN-accelerated DL framework in the datacenter; labels the dataset; monitors accuracy and utilization; and exports the trained model (.prototxt, .caffemodel)]

Slide 27

Deep Learning Deployment Tool (Intel® Deep Learning SDK)

Unleashes fast scoring performance on Intel products while abstracting the HW from developers
Imports trained models from all popular DL frameworks regardless of training HW
Compresses the model for improved execution, storage & transmission (pruning, quantization); a sketch of the quantization idea follows the diagram below
Generates inference HW-specific code (C/C++, OpenVX, OpenCL, etc.)
Enables seamless integration with the full system / application software stack

[Diagram: a trained model (.prototxt, .caffemodel) is imported into the Model Optimizer (FP quantize, model compress, model analysis against validation data) to produce a deploy-ready model, which the MKL-DNN / OpenVX inference run-time executes within the application logic on real-time data]
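
As promised above, here is a minimal sketch of the idea behind the "FP Quantize" stage: symmetric linear quantization of an fp32 weight tensor to int8. This is a generic illustration of the technique, not the Deployment Tool's actual API; all names are invented.

/* Minimal sketch: symmetric linear quantization of fp32 weights to int8. */
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedTensor {
    std::vector<int8_t> data;
    float scale;  /* real_value ≈ scale * int_value */
};

QuantizedTensor quantize_int8(const std::vector<float>& w) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    QuantizedTensor q;
    q.scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    q.data.reserve(w.size());
    for (float v : w) {
        long iv = std::lround(v / q.scale);
        q.data.push_back((int8_t)std::clamp(iv, -127L, 127L));
    }
    return q;  /* 4x smaller than fp32; dequantize as scale * data[i] */
}

Pruning works similarly in spirit: weights below a magnitude threshold are dropped, shrinking the model for storage and transmission.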

Slide 28

Deep Learning Tools for End-to-End Workflow (Intel® Deep Learning SDK)

MKL-DNN-Optimized Machine Learning Frameworks

Intel DL Training Tool
INSTALL / SELECT IA-Optimized Frameworks
PREPARE / CREATE Dataset with Ground-truth
DESIGN / TRAIN Model(s) with IA-Opt. Hyper-Parameters
MONITOR Training Progress across Candidate Models
EVALUATE Results and ITERATE

Intel DL Deployment Tool
IMPORT Trained Model (trained on Intel or 3rd Party HW)
COMPRESS Model for Inference on Target Intel HW
GENERATE Inference HW-Specific Code (OpenVX, C/C++)
INTEGRATE with System SW / Application Stack & TUNE
EVALUATE Results and ITERATE

configure_nn(fpga, …);
allocate_buffer(…);
fpga_conv(input, output);
fpga_conv(…);
mkl_SoftMax(…);
mkl_SoftMax(…);

Xeon (local or cloud)

Optimized libraries & run-times (MKL-DNN, OpenVX, OpenCL)
Data acquisition (sensors) and acceleration HW (FPGA, etc)

Target Inference Hardware Platform (physical or simulated)

Slide 29

Leading AI research

Slide 30

Summary

Intel provides highly optimized libraries to accelerate all DL frameworks
The Intel® Machine Learning Scaling Library (MLSL) allows DL to scale to 32 nodes and beyond
The Nervana™ Graph Compiler is the next innovation for DL performance
The Intel® Deep Learning SDK makes it easy to start exploring Deep Learning
Intel is committed to providing algorithmic, SW, and HW innovations to get the best DL performance on IA
Get more details at: https://software.intel.com/en-us/ai/deep-learning

Slide 31

Legal Disclaimer & Optimization Notice

INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Copyright © 2016, Intel Corporation. All rights reserved. Intel, Pentium, Xeon, Xeon Phi, Core, VTune, Cilk, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.

Slide 32

Slide 33

Configuration details

BASELINE: Caffe out of the box, Intel® Xeon Phi™ processor 7250 (68 cores, 1.4 GHz, 16 GB MCDRAM: cache mode), 96 GB memory, CentOS 7.2 based on Red Hat* Enterprise Linux 7.2, BVLC Caffe: https://github.com/BVLC/caffe, with OpenBLAS, relative performance 1.0
NEW: Caffe: Intel® Xeon Phi™ processor 7250 (68 cores, 1.4 GHz, 16 GB MCDRAM: cache mode), 96 GB memory, CentOS 7.2 based on Red Hat* Enterprise Linux 7.2, Intel® Caffe: https://github.com/intel/caffe based on BVLC Caffe as of Jul 16, 2016, MKL GOLD UPDATE1, relative performance up to 400x
AlexNet used for both configurations, as per https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf, batch size: 256

Slide 34

Configuration details

BASELINE: Intel® Xeon Phi™ processor 7290 (16 GB, 1.50 GHz, 72 cores) with 192 GB total memory on Red Hat Enterprise Linux* 6.7, kernel 2.6.32-573, using MKL 11.3 Update 4, relative performance 1.0
NEW: Intel® Xeon Phi™ processor family – Knights Mill, relative performance up to 4x