Developer Reference
2021.1, 12/04/2020
Sparse BLAS Functionality

Notation used in the tables below: sm = sparse matrix, dm = dense matrix, sv = sparse vector, dv = dense vector, sc = scalar. In the operation formulas, x and y denote dense vectors; w and v denote sparse vectors; X and Y denote dense matrices; A, B, and C denote sparse matrices; and alpha, beta, and d denote scalars.
Level 1

| Functionality | Operations | CPU | OpenMP Offload (Intel GPU) |
|---|---|---|---|
| Sparse Vector - Dense Vector addition (AXPY) | y <- alpha*w + y | Yes | No |
| Sparse Vector - Sparse Vector Dot product (SPDOT) (sv.sv -> sc) | d <- dot(w,v) | N/A | N/A |
| | dot(w,v) = sum(w_i * v_i) | No | No |
| | dot(w,v) = sum(conj(w_i) * v_i) | No | No |
| Sparse Vector - Dense Vector Dot product (SPDOT) (sv.dv -> sc) | d <- dot(w,x) | N/A | N/A |
| | dot(w,x) = sum(w_i * x_i) | Yes | No |
| | dot(w,x) = sum(conj(w_i) * x_i) | Yes | No |
| Dense Vector - Sparse Vector Conversion (sv <-> dv) | | N/A | N/A |
| | x = scatter(w) | Yes | No |
| | w = gather(x, windx) | Yes | No |
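For orientation only, the sketch below exercises the Level 1 operations above through the CBLAS-style Sparse BLAS Level 1 routines on the CPU (cblas_daxpyi, cblas_ddoti, cblas_dsctr, cblas_dgthr, declared via mkl.h). The vector sizes, values, and index set are invented for the example, and error handling is omitted.

```c
#include <stdio.h>
#include <mkl.h>

int main(void) {
    /* Sparse vector w with 3 nonzeros in compressed form: values wval[]
       at positions windx[] of a length-6 dense vector. */
    MKL_INT nz = 3;
    MKL_INT windx[] = {0, 2, 5};
    double  wval[]  = {1.0, -2.0, 4.0};

    double x[6] = {1, 1, 1, 1, 1, 1};   /* dense vector x */
    double y[6] = {0, 0, 0, 0, 0, 0};   /* dense vector y */

    /* AXPY: y <- alpha*w + y (only the stored positions of w are touched) */
    cblas_daxpyi(nz, 2.0, wval, windx, y);

    /* SPDOT (sv.dv -> sc): d <- dot(w,x) = sum_i wval[i] * x[windx[i]] */
    double d = cblas_ddoti(nz, wval, windx, x);

    /* Conversions: scatter w into a full vector, then gather it back */
    double xfull[6] = {0};
    cblas_dsctr(nz, wval, windx, xfull);   /* x = scatter(w) */
    cblas_dgthr(nz, xfull, wval, windx);   /* w = gather(x, windx) */

    printf("dot(w,x) = %g, y[5] = %g\n", d, y[5]);
    return 0;
}
```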
Level 2

| Functionality | Operations | CPU | OpenMP Offload (Intel GPU) |
|---|---|---|---|
| General Matrix-Vector multiplication (GEMV) (sm*dv->dv) | y <- beta*y + alpha*op(A)*x | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Symmetric Matrix-Vector multiplication (SYMV) (sm*dv->dv) | y <- beta*y + alpha*op(A)*x | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Triangular Matrix-Vector multiplication (TRMV) (sm*dv->dv) | y <- beta*y + alpha*op(A)*x | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| General Matrix-Vector multiplication with dot product (GEMVDOT) (sm*dv -> dv, dv.dv -> sc) | y <- beta*y + alpha*op(A)*x, d = dot(x,y) | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Triangular Solve (TRSV) (inv(sm)*dv -> dv) | solve for y, op(A)*y = alpha*x | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
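As a CPU-side illustration of the Level 2 rows above, the sketch below builds a small CSR matrix with the Inspector-Executor API and calls mkl_sparse_d_mv (GEMV) and mkl_sparse_d_trsv (TRSV). The 3x3 matrix and scalars are invented for the example; the op(A) = A^T and A^H variants would use SPARSE_OPERATION_TRANSPOSE and SPARSE_OPERATION_CONJUGATE_TRANSPOSE instead, and status checks are omitted.

```c
#include <stdio.h>
#include <mkl.h>
#include <mkl_spblas.h>

int main(void) {
    /* Lower-triangular 3x3 matrix [2 0 0; 1 3 0; 0 4 5] in 3-array CSR, 0-based. */
    MKL_INT rows_start[] = {0, 1, 3};
    MKL_INT rows_end[]   = {1, 3, 5};
    MKL_INT col_indx[]   = {0, 0, 1, 1, 2};
    double  values[]     = {2, 1, 3, 4, 5};

    sparse_matrix_t A;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 3, 3,
                            rows_start, rows_end, col_indx, values);

    double x[3] = {1, 1, 1}, y[3] = {0, 0, 0}, z[3];

    /* GEMV: y <- beta*y + alpha*op(A)*x with op(A) = A, A treated as general. */
    struct matrix_descr ge = { .type = SPARSE_MATRIX_TYPE_GENERAL };
    mkl_sparse_d_mv(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, A, ge, x, 0.0, y);

    /* TRSV: solve op(A)*z = alpha*x, treating A as lower triangular. */
    struct matrix_descr tr = { .type = SPARSE_MATRIX_TYPE_TRIANGULAR,
                               .mode = SPARSE_FILL_MODE_LOWER,
                               .diag = SPARSE_DIAG_NON_UNIT };
    mkl_sparse_d_trsv(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, A, tr, x, z);

    printf("y = [%g %g %g]  z = [%g %g %g]\n",
           y[0], y[1], y[2], z[0], z[1], z[2]);
    mkl_sparse_destroy(A);
    return 0;
}
```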
Level 3

| Functionality | Operations | CPU | OpenMP Offload (Intel GPU) |
|---|---|---|---|
| General Sparse Matrix - Dense Matrix Multiplication (GEMM) (sm*dm->dm) | Y <- alpha*op(A)*op(X) + beta*Y | N/A | N/A |
| | op(A) = A, op(X) = X | Yes | No |
| | op(A) = A^T, op(X) = X | Yes | No |
| | op(A) = A^H, op(X) = X | Yes | No |
| | op(A) = A, op(X) = X^T | No | No |
| | op(A) = A^T, op(X) = X^T | No | No |
| | op(A) = A^H, op(X) = X^T | No | No |
| | op(A) = A, op(X) = X^H | No | No |
| | op(A) = A^T, op(X) = X^H | No | No |
| | op(A) = A^H, op(X) = X^H | No | No |
| General Dense Matrix - Sparse Matrix Multiplication (GEMM) (dm*sm->dm) | Y <- alpha*op(X)*op(A) + beta*Y | N/A | N/A |
| | op(X) = X, op(A) = A | No | No |
| | op(X) = X^T, op(A) = A | No | No |
| | op(X) = X^H, op(A) = A | No | No |
| | op(X) = X, op(A) = A^T | No | No |
| | op(X) = X^T, op(A) = A^T | No | No |
| | op(X) = X^H, op(A) = A^T | No | No |
| | op(X) = X, op(A) = A^H | No | No |
| | op(X) = X^T, op(A) = A^H | No | No |
| | op(X) = X^H, op(A) = A^H | No | No |
| General Sparse Matrix - Sparse Matrix Multiplication (GEMM) (sm*sm->sm) | C <- alpha*op(A)*op(B) + beta*C | N/A | N/A |
| | op(A) = A, op(B) = B | Yes | No |
| | op(A) = A^T, op(B) = B | Yes | No |
| | op(A) = A^H, op(B) = B | Yes | No |
| | op(A) = A, op(B) = B^T | Yes | No |
| | op(A) = A^T, op(B) = B^T | Yes | No |
| | op(A) = A^H, op(B) = B^T | Yes | No |
| | op(A) = A, op(B) = B^H | Yes | No |
| | op(A) = A^T, op(B) = B^H | Yes | No |
| | op(A) = A^H, op(B) = B^H | Yes | No |
| General Sparse Matrix - Sparse Matrix Multiplication (GEMM) (sm*sm->dm) | Y <- alpha*op(A)*op(B) + beta*Y | N/A | N/A |
| | op(A) = A, op(B) = B | Yes | No |
| | op(A) = A^T, op(B) = B | Yes | No |
| | op(A) = A^H, op(B) = B | Yes | No |
| | op(A) = A, op(B) = B^T | No | No |
| | op(A) = A^T, op(B) = B^T | No | No |
| | op(A) = A^H, op(B) = B^T | No | No |
| | op(A) = A, op(B) = B^H | No | No |
| | op(A) = A^T, op(B) = B^H | No | No |
| | op(A) = A^H, op(B) = B^H | No | No |
| Symmetric Rank-K update (SYRK) (sm*sm->sm) | C <- op(A)*op(A)^H | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Symmetric Rank-K update (SYRK) (sm*sm->dm) | Y <- op(A)*op(A)^H | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Symmetric Triple Product (SYPR) (op(sm)*sm*sm -> sm) | C <- op(A)*B*op(A)^H | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
| Triangular Solve (TRSM) (inv(sm)*dm -> dm) | solve for Y, op(A)*Y = alpha*X | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
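A minimal CPU sketch of two Level 3 rows, assuming the Inspector-Executor routines mkl_sparse_d_mm (sparse-times-dense GEMM) and mkl_sparse_spmm (sparse-times-sparse GEMM with a sparse result). Note that mkl_sparse_d_mm takes an operation argument only for the sparse factor, consistent with the op(X) = X rows being the supported ones above. The 2x2 matrices and scalars are invented, and status checks are omitted.

```c
#include <stdio.h>
#include <mkl.h>
#include <mkl_spblas.h>

int main(void) {
    /* A = [1 2; 0 3] in 3-array CSR, 0-based indexing. */
    MKL_INT rows_start[] = {0, 2};
    MKL_INT rows_end[]   = {2, 3};
    MKL_INT col_indx[]   = {0, 1, 1};
    double  values[]     = {1, 2, 3};

    sparse_matrix_t A, C;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 2, 2,
                            rows_start, rows_end, col_indx, values);

    /* GEMM (sm*dm->dm): Y <- alpha*op(A)*X + beta*Y with op(A) = A,
       X and Y stored as row-major 2x2 dense matrices. */
    struct matrix_descr ge = { .type = SPARSE_MATRIX_TYPE_GENERAL };
    double X[4] = {1, 0, 0, 1};          /* identity, so Y becomes A densified */
    double Y[4] = {0, 0, 0, 0};
    mkl_sparse_d_mm(SPARSE_OPERATION_NON_TRANSPOSE, 1.0, A, ge,
                    SPARSE_LAYOUT_ROW_MAJOR, X, 2, 2, 0.0, Y, 2);

    /* GEMM (sm*sm->sm): C <- op(A)*B with B = A; the sparse handle C can be
       read back with mkl_sparse_d_export_csr if the CSR arrays are needed. */
    mkl_sparse_spmm(SPARSE_OPERATION_NON_TRANSPOSE, A, A, &C);

    printf("Y = [%g %g; %g %g]\n", Y[0], Y[1], Y[2], Y[3]);
    mkl_sparse_destroy(C);
    mkl_sparse_destroy(A);
    return 0;
}
```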
Other

| Functionality | Operations | CPU | OpenMP Offload (Intel GPU) |
|---|---|---|---|
| Symmetric Gauss-Seidel Preconditioner (SYMGS) (update A*x=b, A=L+D+U) | x0 <- x*alpha; (L+D)*x1 = b - U*x0; (U+D)*x = b - L*x1 | Yes | No |
| Symmetric Gauss-Seidel Preconditioner with Matrix-Vector product (SYMGS_MV) (update A*x=b, A=L+D+U) | x0 <- x*alpha; (L+D)*x1 = b - U*x0; (U+D)*x = b - L*x1; y = A*x | Yes | No |
| LU Smoother (LU_SMOOTHER) (update A*x=b, A=L+D+U, E ~ inv(D)) | r = b - A*x; (L+D)*E*(U+D)*dx = r; y = x + dx | Yes | No |
| Sparse Matrix Add (ADD) | C <- alpha*op(A) + B | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | Yes | No |
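For the Sparse Matrix Add (ADD) row, a minimal CPU sketch using mkl_sparse_d_add follows; the SYMGS and LU smoother rows have dedicated routines of their own and are not illustrated here. The 2x2 matrices and alpha = 2.0 are invented, and status checks are omitted.

```c
#include <mkl.h>
#include <mkl_spblas.h>

int main(void) {
    /* A = [1 0; 0 2] and B = [0 3; 0 1] in 3-array CSR, 0-based indexing. */
    MKL_INT a_rs[] = {0, 1}, a_re[] = {1, 2}, a_ci[] = {0, 1};
    double  a_v[]  = {1, 2};
    MKL_INT b_rs[] = {0, 1}, b_re[] = {1, 2}, b_ci[] = {1, 1};
    double  b_v[]  = {3, 1};

    sparse_matrix_t A, B, C;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 2, 2, a_rs, a_re, a_ci, a_v);
    mkl_sparse_d_create_csr(&B, SPARSE_INDEX_BASE_ZERO, 2, 2, b_rs, b_re, b_ci, b_v);

    /* ADD: C <- alpha*op(A) + B with op(A) = A and alpha = 2.0; pass
       SPARSE_OPERATION_TRANSPOSE or SPARSE_OPERATION_CONJUGATE_TRANSPOSE
       for the A^T / A^H rows of the table. */
    mkl_sparse_d_add(SPARSE_OPERATION_NON_TRANSPOSE, A, 2.0, B, &C);

    mkl_sparse_destroy(A);
    mkl_sparse_destroy(B);
    mkl_sparse_destroy(C);
    return 0;
}
```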
Helper Functions

| Functionality | Operations | CPU | OpenMP Offload (Intel GPU) |
|---|---|---|---|
| Sort Indices of Matrix (ORDER) | N/A | Yes | No |
| Transpose of Sparse Matrix (TRANSPOSE) | A <- op(A) with op = trans or conjtrans | N/A | N/A |
| | transpose CSR/CSC matrix | Yes | |
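A short CPU sketch of the helper operations, assuming the Inspector-Executor routines mkl_sparse_order (sort column indices within each row of a CSR handle) and mkl_sparse_convert_csr (which can produce the transpose or conjugate transpose as a new CSR handle). The matrix is invented, and status checks are omitted.

```c
#include <mkl.h>
#include <mkl_spblas.h>

int main(void) {
    /* A = [1 2; 0 3] in CSR, with row 0 deliberately stored with unsorted
       column indices: (col 1, 2.0) before (col 0, 1.0). */
    MKL_INT rows_start[] = {0, 2};
    MKL_INT rows_end[]   = {2, 3};
    MKL_INT col_indx[]   = {1, 0, 1};
    double  values[]     = {2, 1, 3};

    sparse_matrix_t A, At;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 2, 2,
                            rows_start, rows_end, col_indx, values);

    /* ORDER: sort the column indices within each row of A in place. */
    mkl_sparse_order(A);

    /* TRANSPOSE: build At = A^T as a new CSR handle
       (SPARSE_OPERATION_CONJUGATE_TRANSPOSE would give A^H). */
    mkl_sparse_convert_csr(A, SPARSE_OPERATION_TRANSPOSE, &At);

    mkl_sparse_destroy(At);
    mkl_sparse_destroy(A);
    return 0;
}
```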