
Kokkos, Manycore Device Performance Portability for C++ HPC Applications

H. Carter Edwards and Christian Trott
Sandia National Laboratories
GPU Technology Conference 2015, March 16-20, 2015, San Jose, California
SAND2015-1885C (Unlimited Release)

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

What is "Kokkos"?
- κόκκος (Greek); translation: "granule", "grain", or "speck" -- like grains of salt or sand on a beach
- Programming model abstractions:
  - Identify / encapsulate grains of data and parallelizable operations
  - Aggregate these grains with data structures and parallel patterns
  - Map aggregated grains onto memory and cores / threads
- An implementation of the Kokkos programming model: Sandia National Laboratories' open-source C++ library

Outline
- Core abstractions and capabilities
  - Performance portability challenge: memory access patterns
  - Layered C++ libraries
  - Spaces, policies, and patterns
  - Polymorphic multidimensional array
  - Easy parallel patterns with C++11 lambda
  - Managing memory access patterns
  - Atomic operations
  - Wrap up
- Portable hierarchical parallelism
- Initial scalable graph algorithms
- Conclusion

Performance Portability Challenge
Best (or even decent) performance requires computations to implement architecture-specific memory access patterns:
- CPUs (and Xeon Phi)
  - Core-data affinity: consistent NUMA access (first touch)
  - Array alignment for cache lines and vector units
  - Hyperthreads' cooperative use of L1 cache
- GPUs
  - Thread-data affinity: coalesced access with cache-line alignment
  - Temporal locality and special hardware (texture cache)
- The Array of Structures (AoS) vs. Structure of Arrays (SoA) dilemma; i.e., architecture-specific data structure layout and access
- This has been the wrong concern; the right concern is: abstractions for performance portability

Kokkos' Performance Portability Answer
An integrated mapping of thread-parallel computations and multidimensional array data onto manycore architectures:
1. Map the user's parallel computations to threads
   - Parallel patterns: foreach, reduce, scan, task-DAG, ...
   - Parallel loop/task body: C++11 lambda or C++98 functor
2. Map the user's data to memory
   - Multidimensional array of data, with a twist
   - Layout: multi-index (i,j,k,...) <-> memory location
   - Kokkos chooses the layout for an architecture-specific memory access pattern: a polymorphic multidimensional array
3. Access user data through special hardware (bonus)
   - GPU texture cache to speed up read-only random access patterns
   - Atomic operations for thread safety

Layered Collection of C++ Libraries
- Standard C++, not a language extension
  - Unlike the language extensions OpenMP, OpenACC, OpenCL, and CUDA
  - In the spirit of Intel's TBB, NVIDIA's Thrust & CUSP, MS C++ AMP, ...
- Uses C++ template meta-programming
  - Previously relied upon the C++1998 standard
  - Now requires C++2011 for lambda functionality; supported by Cuda 7.0, with full functionality in Cuda 7.5
- Participating in the ISO/C++ standard committee for core capabilities

Layering, from top to bottom:
- Application & library domain layer(s)
- Trilinos sparse linear algebra
- Kokkos containers & algorithms
- Kokkos core
- Back-ends: Cuda, OpenMP, pthreads, specialized libraries, ...

Abstractions: Spaces, Policies, and Patterns
- Memory Space: where data resides
  - Differentiated by performance; e.g., size, latency, bandwidth
- Execution Space: where functions execute
  - Encapsulates hardware resources; e.g., cores, GPU, vector units, ...
  - Denotes accessible memory spaces
- Execution Policy: how (and where) a user function is executed
  - E.g., a data-parallel range: concurrently call function(i) for i = 0..N-1
  - The user's function is a C++ functor or C++11 lambda
- Patterns: parallel_for, parallel_reduce, parallel_scan, task-DAG, ...
- Compose: pattern + execution policy + user function; e.g.,
    parallel_pattern( Policy<Space>, Function );
  executes Function in Space according to the pattern and Policy
- Extensible spaces, policies, and patterns

Examples of Execution and Memory Spaces
[Diagrams: a compute node (multicore socket with DDR memory) attached to an accelerator (GPU with GDDR memory); data moves between the memory spaces via deep_copy, or is shared via pinned memory and UVM; GPU::capacity and GPU::perform denote which memory an execution space can hold or access performantly.]

Polymorphic Multidimensional Array View
- View<double**[3][8], Space> a("a", N, M);
  - Allocates array data in memory Space with dimensions N x M x 3 x 8
  - (A proposed C++17 improvement would allow View<double[N][M][3][8], Space>)
- a(i,j,k,l): the user's access to an array datum
  - "Space" accessibility is enforced; e.g., GPU code cannot access CPU memory
  - Optional array-bounds checking of indices for debugging
- View semantics: View<double**[3][8], Space> b = a;
  - A shallow copy: 'a' and 'b' are pointers to the same allocated array data
  - Analogous to C++11 std::shared_ptr
- deep_copy( destination_view , source_view );
  - Copies data from 'source_view' to 'destination_view'
  - Kokkos policy: never hide an expensive deep-copy operation

Polymorphic Multidimensional Array Layout
- Layout mapping: a(i,j,k,l) -> memory location
  - The layout is polymorphic, defined at compile time
  - Kokkos chooses a default array layout appropriate for "Space"; e.g., row-major, column-major, Morton ordering, dimension padding, ...
- The user can specify a layout: View<ArrayType, Layout, Space>
  - Overrides Kokkos' default choice of layout
  - Why? For compatibility with legacy code, algorithmic performance tuning, ...
- Example tiling layout: View<double**, Tile<8,8>, Space> m("matrix", N, N);
  - The tiling layout is transparent to user code: m(i,j) is unchanged
  - A layout-aware algorithm extracts tile subviews

Multidimensional Array Subview & Attributes
- Array subview of an array view (new)
  - Y = subview( X , ...ranges_and_indices_argument_list... );
  - A view of the same data, with the appropriate layout and index map
  - Each index argument eliminates a dimension
  - Each range [begin,end) argument contracts a dimension
- Access intent attributes: View<ArrayType, Layout, Space, Attributes>
  - How the user intends to access the data
- Example: a View with const and random-access intent
    View<double**, Cuda> a("mymatrix", N, N);
    View<const double**, Cuda, RandomAccess> b = a;
  - Kokkos implements b(i,j) with the GPU texture cache

Multidimensional Array Functionality Being Considered by the ISO/C++ Standard Committee
- TBD: add layout polymorphism -- a critical capability
  - To be discussed at the May 2015 ISO/C++ meeting
- TBD: add explicit (compile-time) dimensions
  - Minor change to the core language to allow: T*[3][8]
  - Concern: performance loss when restricted to implicit (runtime) dimensions
- TBD: use a simple / intuitive array access API: x(i,j,k,l)
  - Currently considering: x[{i,j,k,l}]
  - Concern: performance loss due to the intermediate initializer list
- TBD: add shared-pointer (std::shared_ptr) semantics
  - Currently merely a wrapper on user-managed memory
  - Concern: coordinating management of view and memory lifetime

Easy Parallel Patterns with C++11 and Defaults
parallel_pattern( Policy<Space> , UserFunction )
- Easy example: BLAS-1 AXPY with views
    parallel_for( N , KOKKOS_LAMBDA( int i ) { y(i) = a * x(i) + y(i); } );
  - The default execution space is chosen for the Kokkos installation
  - Execution policy "N" = RangePolicy<DefaultSpace>(0,N)
  - #define KOKKOS_LAMBDA [=]            /* non-Cuda */
  - #define KOKKOS_LAMBDA [=] __device__ /* Cuda 7.5 candidate feature */
  - Tell the NVIDIA Cuda development team you like and want this in Cuda 7.5
- More verbose without lambdas and defaults:
    struct axpy_functor {
      View<double*,Space> x , y ;
      double a ;
      KOKKOS_INLINE_FUNCTION
      void operator()( int i ) const { y(i) = a * x(i) + y(i); }
      // ... constructor ...
    };
    parallel_for( RangePolicy<Space>(0,N) , axpy_functor(a,x,y) );

Kokkos Manages the Challenging Part of Patterns' Implementation
- Example: DOT-product reduction
    parallel_reduce( N ,
      KOKKOS_LAMBDA( int i , double & value ) { value += x(i) * y(i); } ,
      result );
- Challenges: temporary memory and inter-thread reduction operations
  - Cuda shared memory for inter-warp reductions
  - Cuda global memory for inter-block reductions
  - Intra-warp, inter-warp, and inter-block reduction operations
- The user may define the reduction type and operations
    struct my_reduction_functor {
      typedef ... value_type ;
      KOKKOS_INLINE_FUNCTION void join( value_type & , const value_type & ) const ;
      KOKKOS_INLINE_FUNCTION void init( value_type & ) const ;
    };
  - 'value_type' can be a runtime-sized one-dimensional array
  - 'join' and 'init' are plugged into the inter-thread reduction algorithm

Managing the Memory Access Pattern: Compose Parallel Execution with Array Layout
- Map parallel execution
  - Maps calls to function(iw) onto threads
  - GPU: iw = threadIdx + blockDim * blockId
  - CPU: iw in [begin,end); contiguous partitions among threads
- Choose the multidimensional array layout
  - The leading dimension is the parallel work dimension
  - The leading multi-index is 'iw': a( iw , j, k, l )
  - Choose an array layout appropriate for the space's architecture; e.g., AoS for CPU and SoA for GPU
- Fine-tune the array layout
  - E.g., padding dimensions for cache-line alignment

Performance Impact of the Access Pattern
- Molecular dynamics computational kernel in miniMD
- Simple Lennard-Jones force model:
    F_i = sum over j with r_ij < r_cut of 6*eps*( (ς/r_ij)^7 - 2*(ς/r_ij)^13 )
- An atom neighbor list avoids the N^2 computations:
    pos_i = pos(i);
    for ( jj = 0; jj < num_neighbors(i); jj++ ) {
      j = neighbors(i,jj);
      r_ij = pos_i - pos(j);   // random read: 3 floats
      if ( r_ij < r_cut )
        f_i += 6*e*( (s/r_ij)^7 - 2*(s/r_ij)^13 );
    }
    f(i) = f_i;
- Test problem: 864k atoms, ~77 neighbors each
  - 2D neighbor array
  - Different layouts on CPU vs. GPU
  - Random read of 'pos' through the GPU texture cache
  - Large performance loss with the wrong array layout
- [Chart: GFlop/s on Xeon, Xeon Phi, and K20x for the correct layout (with texture), the correct layout (without texture), and the wrong layout (with texture)]

Atomic Operations
atomic_exchange, atomic_compare_exchange_strong, atomic_fetch_add, atomic_fetch_or, atomic_fetch_and
- Thread-scalability of non-trivial algorithms and data structures
  - Essential for lock-free implementations
- Concurrent summations to shared variables
  - E.g., finite element computations summing to shared nodes
- Updating shared dynamic data structures
  - E.g., appending to a shared array or inserting into a shared map
- Portably map to compiler/hardware-specific capabilities
  - GNU and CUDA extensions when available
  - Current: any 32-bit or 64-bit type; may use a CAS-loop implementation
- ISO/C++ 2011 and 2014 atomics are not adequate for HPC
  - Proposed the necessary improvements for C++17

Thread-Scalable Fill of a Sparse Linear System
- MiniFENL: Newton iteration of FEM: x_{n+1} = x_n - J^{-1}(x_n) r(x_n)
- Fill the sparse matrix via Scatter-Atomic-Add or Gather-Sum?
  - Scatter-Atomic-Add: + simpler, + less memory, - slower HW atomics
  - Gather-Sum: + bit-wise reproducibility
- Performance win? Scatter-atomic-add is
  - equal on Xeon Phi
  - 40% faster on the Kepler GPU
  - and is the pattern chosen
- Feedback to HW vendors: performant atomics
- [Chart: matrix fill time (microsec/node) vs. number of finite element nodes (1e3 to 1e7) for Phi-60, Phi-240, and K40X, each with GatherSum and ScatterAtomic]

Core Abstractions and Capabilities (wrap up)
- Abstractions
  - Identify / encapsulate grains of data and parallelizable operations
  - Aggregate these grains with data structures and parallel patterns
  - Map aggregated grains onto memory and cores / threads
- Grains and patterns
  - Parallelizable operation: C++11 lambda or C++98 functor
  - Parallel patterns: foreach, reduce, scan, task-DAG, ...
  - Multidimensional array of data
  - Atomic operations
- Extensible mappings
  - Polymorphic multidimensional array: space, layout, access intentions
  - Execution policy: where and how to execute
- Next step: finer-grain parallelism with hierarchical patterns
  - κόκκος: "like grains of sand on a beach" -- how fine can we go?

Outline
- Core abstractions and capabilities
- Portable hierarchical parallelism
  - Two-level thread-team execution policy and nested parallel patterns
  - Thread-team shared memory
  - Three-level execution policy
  - Application to molecular dynamics kernels
  - Application to tensor mathematics kernels
- Initial scalable graph algorithms (very new)
- Conclusion