Sparse Generic Functions#

This module holds all sparse generic routines.

The sparse generic routines describe some of the most common operations that manipulate sparse matrices and vectors. The generic API is more flexible than the other rocSPARSE APIs: index types, data types, and compute types can be chosen independently. For some generic routines, for example SpMV, the generic API also lets users select between algorithms with different performance characteristics depending on the sparse matrix being operated on.

rocsparse_axpby()#

rocsparse_status rocsparse_axpby(rocsparse_handle handle, const void *alpha, rocsparse_const_spvec_descr x, const void *beta, rocsparse_dnvec_descr y)#

Scale a sparse vector and add it to a scaled dense vector.

rocsparse_axpby multiplies the sparse vector \(x\) by the scalar \(\alpha\) and adds the result to the dense vector \(y\), which is multiplied by the scalar \(\beta\), such that

\[ y := \alpha \cdot x + \beta \cdot y \]

for(i = 0; i < size; ++i)
{
    y[i] = beta * y[i];
}
for(i = 0; i < nnz; ++i)
{
    y[x_ind[i]] += alpha * x_val[i];
}
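The two loops above can be reproduced as a small host-side reference, useful for validating device results against the example below. This is a minimal sketch in plain C++ (no GPU required); `axpby_ref` is an illustrative helper, not a rocSPARSE function, and assumes zero-based indexing (rocsparse_index_base_zero).

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Host-side reference for y := alpha * x + beta * y, with x sparse.
// x_ind/x_val hold the nnz entries of x; indices are zero-based.
std::vector<float> axpby_ref(float alpha,
                             const std::vector<int>&   x_ind,
                             const std::vector<float>& x_val,
                             float beta,
                             std::vector<float> y)
{
    for(float& v : y) // y = beta * y
    {
        v *= beta;
    }
    for(std::size_t i = 0; i < x_ind.size(); ++i) // y += alpha * x
    {
        y[x_ind[i]] += alpha * x_val[i];
    }
    return y;
}
```

With the data from the example below (alpha = 3.7, beta = 1.2), this yields y[0] = 1.2 * 1 + 3.7 * 1 = 4.9 and leaves entries without a matching sparse index simply scaled by beta.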

rocsparse_axpby supports the following uniform precision data types for the sparse and dense vectors x and y and compute types for the scalars \(\alpha\) and \(\beta\).

Uniform Precisions:

X / Y / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example
// Number of non-zeros of the sparse vector
int nnz = 3;

// Size of sparse and dense vector
int size = 9;

// Sparse index vector
std::vector<int> hx_ind = {0, 3, 5};

// Sparse value vector
std::vector<float> hx_val = {1.0f, 2.0f, 3.0f};

// Dense vector
std::vector<float> hy = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f};

// Scalar alpha
float alpha = 3.7f;

// Scalar beta
float beta = 1.2f;

// Offload data to device
int* dx_ind;
float* dx_val;
float* dy;
hipMalloc((void**)&dx_ind, sizeof(int) * nnz);
hipMalloc((void**)&dx_val, sizeof(float) * nnz);
hipMalloc((void**)&dy, sizeof(float) * size);

hipMemcpy(dx_ind, hx_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx_val, hx_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * size, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse vector X
rocsparse_create_spvec_descr(&vecX,
                             size,
                             nnz,
                             dx_ind,
                             dx_val,
                             idx_type,
                             idx_base,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             size,
                             dy,
                             data_type);

// Call axpby to perform y = beta * y + alpha * x
rocsparse_axpby(handle,
                &alpha,
                vecX,
                &beta,
                vecY);

rocsparse_dnvec_get_values(vecY, (void**)&dy);

// Copy result back to host
hipMemcpy(hy.data(), dy, sizeof(float) * size, hipMemcpyDeviceToHost);

std::cout << "y" << std::endl;
for(size_t i = 0; i < hy.size(); ++i)
{
    std::cout << hy[i] << " ";
}
std::cout << std::endl;

// Clear rocSPARSE
rocsparse_destroy_spvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dx_ind);
hipFree(dx_val);
hipFree(dy);

Note

This function is non-blocking and executed asynchronously with respect to the host. It may return before the actual computation has finished.

Note

This routine supports execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • alpha[in] scalar \(\alpha\).

  • x[in] sparse vector descriptor.

  • beta[in] scalar \(\beta\).

  • y[inout] dense vector descriptor.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha, x, beta or y pointer is invalid.

rocsparse_gather()#

rocsparse_status rocsparse_gather(rocsparse_handle handle, rocsparse_const_dnvec_descr y, rocsparse_spvec_descr x)#

Gather elements from a dense vector and store them into a sparse vector.

rocsparse_gather gathers the elements from the dense vector \(y\) and stores them in the sparse vector \(x\).

for(i = 0; i < nnz; ++i)
{
    x_val[i] = y[x_ind[i]];
}
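The gather loop above maps directly onto a host-side reference. This is a plain C++ sketch for checking device results; `gather_ref` is an illustrative helper, not a rocSPARSE function.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Host-side reference of the gather loop: x_val[i] = y[x_ind[i]].
// Indices in x_ind are zero-based (rocsparse_index_base_zero).
std::vector<float> gather_ref(const std::vector<int>&   x_ind,
                              const std::vector<float>& y)
{
    std::vector<float> x_val(x_ind.size());
    for(std::size_t i = 0; i < x_ind.size(); ++i)
    {
        x_val[i] = y[x_ind[i]];
    }
    return x_val;
}
```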

rocsparse_gather supports the following uniform precision data types for the sparse and dense vectors x and y.

Uniform Precisions:

X / Y

rocsparse_datatype_i8_r

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example
// Number of non-zeros of the sparse vector
int nnz = 3;

// Size of sparse and dense vector
int size = 9;

// Sparse index vector
std::vector<int> hx_ind = {0, 3, 5};

// Dense vector
std::vector<float> hy = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f};

// Offload data to device
int* dx_ind;
float* dx_val;
float* dy;
hipMalloc((void**)&dx_ind, sizeof(int) * nnz);
hipMalloc((void**)&dx_val, sizeof(float) * nnz);
hipMalloc((void**)&dy, sizeof(float) * size);

hipMemcpy(dx_ind, hx_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * size, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse vector X
rocsparse_create_spvec_descr(&vecX,
                             size,
                             nnz,
                             dx_ind,
                             dx_val,
                             idx_type,
                             idx_base,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             size,
                             dy,
                             data_type);

// Call gather
rocsparse_gather(handle, vecY, vecX);

rocsparse_spvec_get_values(vecX, (void**)&dx_val);

// Copy result back to host
std::vector<float> hx_val(nnz, 0.0f);
hipMemcpy(hx_val.data(), dx_val, sizeof(float) * nnz, hipMemcpyDeviceToHost);

std::cout << "x" << std::endl;
for(size_t i = 0; i < hx_val.size(); ++i)
{
    std::cout << hx_val[i] << " ";
}

std::cout << std::endl;

// Clear rocSPARSE
rocsparse_destroy_spvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dx_ind);
hipFree(dx_val);
hipFree(dy);

Note

This function is non-blocking and executed asynchronously with respect to the host. It may return before the actual computation has finished.

Note

This routine supports execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • y[in] dense vector \(y\).

  • x[out] sparse vector \(x\).

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – x or y pointer is invalid.

rocsparse_scatter()#

rocsparse_status rocsparse_scatter(rocsparse_handle handle, rocsparse_const_spvec_descr x, rocsparse_dnvec_descr y)#

Scatter elements from a sparse vector into a dense vector.

rocsparse_scatter scatters the elements from the sparse vector \(x\) in the dense vector \(y\).

for(i = 0; i < nnz; ++i)
{
    y[x_ind[i]] = x_val[i];
}
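As with gather, the scatter loop has a straightforward host-side counterpart, handy for validating device results. `scatter_ref` is an illustrative helper, not a rocSPARSE function; dense entries without a matching sparse index are left untouched.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Host-side reference of the scatter loop: y[x_ind[i]] = x_val[i].
// Indices in x_ind are zero-based (rocsparse_index_base_zero).
std::vector<float> scatter_ref(const std::vector<int>&   x_ind,
                               const std::vector<float>& x_val,
                               std::vector<float> y)
{
    for(std::size_t i = 0; i < x_ind.size(); ++i)
    {
        y[x_ind[i]] = x_val[i];
    }
    return y;
}
```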

rocsparse_scatter supports the following uniform precision data types for the sparse and dense vectors x and y.

Uniform Precisions:

X / Y

rocsparse_datatype_i8_r

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example
// Number of non-zeros of the sparse vector
int nnz = 3;

// Size of sparse and dense vector
int size = 9;

// Sparse index vector
std::vector<int> hx_ind = {0, 3, 5};

// Sparse value vector
std::vector<float> hx_val = {1.0f, 2.0f, 3.0f};

// Dense vector
std::vector<float> hy = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f};

// Offload data to device
int* dx_ind;
float* dx_val;
float* dy;
hipMalloc((void**)&dx_ind, sizeof(int) * nnz);
hipMalloc((void**)&dx_val, sizeof(float) * nnz);
hipMalloc((void**)&dy, sizeof(float) * size);

hipMemcpy(dx_ind, hx_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx_val, hx_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * size, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse vector X
rocsparse_create_spvec_descr(&vecX,
                             size,
                             nnz,
                             dx_ind,
                             dx_val,
                             idx_type,
                             idx_base,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             size,
                             dy,
                             data_type);

// Call scatter
rocsparse_scatter(handle, vecX, vecY);

rocsparse_dnvec_get_values(vecY, (void**)&dy);

// Copy result back to host
hipMemcpy(hy.data(), dy, sizeof(float) * size, hipMemcpyDeviceToHost);

std::cout << "y" << std::endl;
for(size_t i = 0; i < hy.size(); ++i)
{
    std::cout << hy[i] << " ";
}

std::cout << std::endl;

// Clear rocSPARSE
rocsparse_destroy_spvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dx_ind);
hipFree(dx_val);
hipFree(dy);

Note

This function is non-blocking and executed asynchronously with respect to the host. It may return before the actual computation has finished.

Note

This routine supports execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • x[in] sparse vector \(x\).

  • y[out] dense vector \(y\).

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – x or y pointer is invalid.

rocsparse_rot()#

rocsparse_status rocsparse_rot(rocsparse_handle handle, const void *c, const void *s, rocsparse_spvec_descr x, rocsparse_dnvec_descr y)#

Apply Givens rotation to a dense and a sparse vector.

rocsparse_rot applies the Givens rotation matrix \(G\) to the sparse vector \(x\) and the dense vector \(y\), where

\[\begin{split} G = \begin{pmatrix} c & s \\ -s & c \end{pmatrix} \end{split}\]

for(i = 0; i < nnz; ++i)
{
    x_tmp = x_val[i];
    y_tmp = y[x_ind[i]];

    x_val[i]    = c * x_tmp + s * y_tmp;
    y[x_ind[i]] = c * y_tmp - s * x_tmp;
}
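The rotation loop above can be sketched as a host-side reference that updates both vectors in place, which is useful for checking the device results of the example below. `rot_ref` is an illustrative helper, not a rocSPARSE function.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Host-side reference of the Givens rotation loop: for each sparse entry,
//   x_val[i]    = c * x + s * y
//   y[x_ind[i]] = c * y - s * x
void rot_ref(float c, float s,
             const std::vector<int>& x_ind,
             std::vector<float>&     x_val,
             std::vector<float>&     y)
{
    for(std::size_t i = 0; i < x_ind.size(); ++i)
    {
        float x_tmp = x_val[i];
        float y_tmp = y[x_ind[i]];

        x_val[i]    = c * x_tmp + s * y_tmp;
        y[x_ind[i]] = c * y_tmp - s * x_tmp;
    }
}
```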

rocsparse_rot supports the following uniform precision data types for the sparse and dense vectors x and y and compute types for the scalars \(c\) and \(s\).

Uniform Precisions:

X / Y / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example
// Number of non-zeros of the sparse vector
int nnz = 3;

// Size of sparse and dense vector
int size = 9;

// Sparse index vector
std::vector<int> hx_ind = {0, 3, 5};

// Sparse value vector
std::vector<float> hx_val = {1.0f, 2.0f, 3.0f};

// Dense vector
std::vector<float> hy = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f};

// Scalar c
float c = 3.7f;

// Scalar s
float s = 1.2f;

// Offload data to device
int* dx_ind;
float* dx_val;
float* dy;
hipMalloc((void**)&dx_ind, sizeof(int) * nnz);
hipMalloc((void**)&dx_val, sizeof(float) * nnz);
hipMalloc((void**)&dy, sizeof(float) * size);

hipMemcpy(dx_ind, hx_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx_val, hx_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * size, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse vector X
rocsparse_create_spvec_descr(&vecX,
                             size,
                             nnz,
                             dx_ind,
                             dx_val,
                             idx_type,
                             idx_base,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             size,
                             dy,
                             data_type);

// Call rot
rocsparse_rot(handle, (void*)&c, (void*)&s, vecX, vecY);

rocsparse_spvec_get_values(vecX, (void**)&dx_val);
rocsparse_dnvec_get_values(vecY, (void**)&dy);

// Copy result back to host
hipMemcpy(hx_val.data(), dx_val, sizeof(float) * nnz, hipMemcpyDeviceToHost);
hipMemcpy(hy.data(), dy, sizeof(float) * size, hipMemcpyDeviceToHost);

std::cout << "x" << std::endl;
for(size_t i = 0; i < hx_val.size(); ++i)
{
    std::cout << hx_val[i] << " ";
}

std::cout << std::endl;

std::cout << "y" << std::endl;
for(size_t i = 0; i < hy.size(); ++i)
{
    std::cout << hy[i] << " ";
}

std::cout << std::endl;

// Clear rocSPARSE
rocsparse_destroy_spvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dx_ind);
hipFree(dx_val);
hipFree(dy);

Note

This function is non-blocking and executed asynchronously with respect to the host. It may return before the actual computation has finished.

Note

This routine supports execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • c[in] pointer to the cosine element of \(G\), can be on host or device.

  • s[in] pointer to the sine element of \(G\), can be on host or device.

  • x[inout] sparse vector \(x\).

  • y[inout] dense vector \(y\).

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – c, s, x or y pointer is invalid.

rocsparse_spvv()#

rocsparse_status rocsparse_spvv(rocsparse_handle handle, rocsparse_operation trans, rocsparse_const_spvec_descr x, rocsparse_const_dnvec_descr y, void *result, rocsparse_datatype compute_type, size_t *buffer_size, void *temp_buffer)#

Sparse vector inner product.

rocsparse_spvv computes the inner product of the sparse vector \(x\) with the dense vector \(y\), such that

\[ \text{result} := op(x) \cdot y, \]
with
\[\begin{split} op(x) = \left\{ \begin{array}{ll} x, & \text{if trans == rocsparse_operation_none} \\ \bar{x}, & \text{if trans == rocsparse_operation_conjugate_transpose} \\ \end{array} \right. \end{split}\]

result = 0;
for(i = 0; i < nnz; ++i)
{
    result += x_val[i] * y[x_ind[i]];
}
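Note that the pseudocode above shows only the trans == rocsparse_operation_none case; for rocsparse_operation_conjugate_transpose each x_val[i] is conjugated first. Both cases can be sketched as a host-side reference for complex data. `spvv_ref` is an illustrative helper, not a rocSPARSE function.

```cpp
#include <cassert>
#include <complex>
#include <cstddef>
#include <vector>

// Host-side reference of the SpVV loop: result = op(x) . y, where
// op conjugates x_val when conjugate == true
// (corresponding to rocsparse_operation_conjugate_transpose).
std::complex<float> spvv_ref(const std::vector<int>&                 x_ind,
                             const std::vector<std::complex<float>>& x_val,
                             const std::vector<std::complex<float>>& y,
                             bool conjugate)
{
    std::complex<float> result(0.0f, 0.0f);
    for(std::size_t i = 0; i < x_ind.size(); ++i)
    {
        std::complex<float> xv = conjugate ? std::conj(x_val[i]) : x_val[i];
        result += xv * y[x_ind[i]];
    }
    return result;
}
```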

rocsparse_spvv supports the following uniform and mixed precision data types for the sparse and dense vectors x and y, and compute types for the result.

Uniform Precisions:

X / Y / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Mixed precisions:

X / Y                     | compute_type / result
rocsparse_datatype_i8_r   | rocsparse_datatype_i32_r
rocsparse_datatype_i8_r   | rocsparse_datatype_f32_r

Example
// Number of non-zeros of the sparse vector
int nnz = 3;

// Size of sparse and dense vector
int size = 9;

// Sparse index vector
std::vector<int> hx_ind = {0, 3, 5};

// Sparse value vector
std::vector<float> hx_val = {1.0f, 2.0f, 3.0f};

// Dense vector
std::vector<float> hy = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f};

// Offload data to device
int* dx_ind;
float* dx_val;
float* dy;
hipMalloc((void**)&dx_ind, sizeof(int) * nnz);
hipMalloc((void**)&dx_val, sizeof(float) * nnz);
hipMalloc((void**)&dy, sizeof(float) * size);

hipMemcpy(dx_ind, hx_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx_val, hx_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * size, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_datatype  compute_type = rocsparse_datatype_f32_r;
rocsparse_operation trans = rocsparse_operation_none;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse vector X
rocsparse_create_spvec_descr(&vecX,
                             size,
                             nnz,
                             dx_ind,
                             dx_val,
                             idx_type,
                             idx_base,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             size,
                             dy,
                             data_type);

// Obtain buffer size
float hresult = 0.0f;
size_t buffer_size;
rocsparse_spvv(handle,
               trans,
               vecX,
               vecY,
               &hresult,
               compute_type,
               &buffer_size,
               nullptr);

void* temp_buffer;
hipMalloc(&temp_buffer, buffer_size);

// SpVV
rocsparse_spvv(handle,
               trans,
               vecX,
               vecY,
               &hresult,
               compute_type,
               &buffer_size,
               temp_buffer);

hipDeviceSynchronize();

// Clear rocSPARSE
rocsparse_destroy_spvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dx_ind);
hipFree(dx_val);
hipFree(dy);
hipFree(temp_buffer);

Note

When a nullptr is passed for temp_buffer, this function writes the required allocation size (in bytes) to buffer_size and returns without performing the SpVV operation.

Note

This function is blocking with respect to the host.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans[in] sparse vector operation type.

  • x[in] sparse vector descriptor.

  • y[in] dense vector descriptor.

  • result[out] pointer to the result, can be host or device memory.

  • compute_type[in] floating point precision for the SpVV computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and the function returns without performing the SpVV operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – x, y, result or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – compute_type is currently not supported.

rocsparse_spmv()#

rocsparse_status rocsparse_spmv(rocsparse_handle handle, rocsparse_operation trans, const void *alpha, rocsparse_const_spmat_descr mat, rocsparse_const_dnvec_descr x, const void *beta, const rocsparse_dnvec_descr y, rocsparse_datatype compute_type, rocsparse_spmv_alg alg, rocsparse_spmv_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse matrix vector multiplication.

rocsparse_spmv multiplies the scalar \(\alpha\) with a sparse \(m \times n\) matrix \(op(A)\), defined in CSR, CSC, COO, COO (AoS), BSR, or ELL format, with the dense vector \(x\) and adds the result to the dense vector \(y\) that is multiplied by the scalar \(\beta\), such that

\[ y := \alpha \cdot op(A) \cdot x + \beta \cdot y, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans == rocsparse_operation_none} \\ A^T, & \text{if trans == rocsparse_operation_transpose} \\ A^H, & \text{if trans == rocsparse_operation_conjugate_transpose} \end{array} \right. \end{split}\]
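For the CSR format, the operation above can be sketched as a host-side reference, useful for validating results of the compute stage. This is a plain C++ sketch; `csr_spmv_ref` is an illustrative helper, not a rocSPARSE function, and assumes trans == rocsparse_operation_none with zero-based indexing.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Host-side reference of y := alpha * A * x + beta * y for a CSR matrix.
// row_ptr has m + 1 entries; col_ind/val hold the nnz column indices
// and values, zero-based (rocsparse_index_base_zero).
std::vector<float> csr_spmv_ref(int m, float alpha,
                                const std::vector<int>&   row_ptr,
                                const std::vector<int>&   col_ind,
                                const std::vector<float>& val,
                                const std::vector<float>& x,
                                float beta,
                                std::vector<float> y)
{
    for(int row = 0; row < m; ++row)
    {
        float sum = 0.0f;
        for(int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        {
            sum += val[j] * x[col_ind[j]]; // dot product of row with x
        }
        y[row] = alpha * sum + beta * y[row];
    }
    return y;
}
```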

Performing the above operation involves multiple steps. First the user calls rocsparse_spmv with the stage parameter set to rocsparse_spmv_stage_buffer_size to determine the size of the required temporary storage buffer. The user then allocates this buffer and calls rocsparse_spmv with the stage parameter set to rocsparse_spmv_stage_preprocess. Depending on the algorithm and sparse matrix format, this will perform analysis on the sparsity pattern of \(op(A)\). Finally the user completes the operation by calling rocsparse_spmv with the stage parameter set to rocsparse_spmv_stage_compute. The buffer size, buffer allocation, and preprocess stages need to be performed only once for a given sparse matrix \(op(A)\), while the compute stage can be repeated with different \(x\) and \(y\) vectors. Once all calls to rocsparse_spmv are complete, the temporary buffer can be deallocated.

rocsparse_spmv supports several algorithms. These algorithms have different trade-offs depending on the sparsity pattern of the matrix, whether or not the results need to be deterministic, and how many times the sparse matrix-vector product will be performed.

CSR algorithms:

  • rocsparse_spmv_alg_csr_stream (deterministic; no preprocessing): best suited for matrices in which all rows have a similar number of non-zeros. Can outperform the adaptive and LRB algorithms for certain sparsity patterns, but performs very poorly if some rows have few non-zeros and others have many.

  • rocsparse_spmv_alg_csr_adaptive (non-deterministic; requires preprocessing): generally the fastest algorithm across all matrix sparsity patterns, including matrices that mix rows with many non-zeros and rows with few. Requires a lengthy preprocessing step that must be amortized over many subsequent sparse matrix-vector products.

  • rocsparse_spmv_alg_csr_lrb (non-deterministic; requires preprocessing): like the adaptive algorithm, generally performs well across all matrix sparsity patterns. Usually not as fast as adaptive, but its preprocessing step is much faster. A good choice when only a few sparse matrix-vector products will be performed.

COO algorithms:

  • rocsparse_spmv_alg_coo (deterministic; requires preprocessing): generally not as fast as the atomic algorithm, but deterministic.

  • rocsparse_spmv_alg_coo_atomic (non-deterministic; no preprocessing): generally the fastest COO algorithm.

ELL algorithm:

  • rocsparse_spmv_alg_ell (deterministic; no preprocessing).

BSR algorithm:

  • rocsparse_spmv_alg_bsr (deterministic; no preprocessing).

rocsparse_spmv supports multiple combinations of data types and compute types. The tables below list the currently supported data types for the sparse matrix \(op(A)\) and the dense vectors \(x\) and \(y\), together with the compute type for \(\alpha\) and \(\beta\). Using lower-precision data types for storage saves memory bandwidth and space where the application allows it, while the actual computation is performed in a higher precision.

rocsparse_spmv supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrices.

Uniform Precisions:

A / X / Y / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Mixed precisions:

A / X                     | Y                          | compute_type
rocsparse_datatype_i8_r   | rocsparse_datatype_i32_r   | rocsparse_datatype_i32_r
rocsparse_datatype_i8_r   | rocsparse_datatype_f32_r   | rocsparse_datatype_f32_r

Mixed-regular real precisions:

A                          | X / Y / compute_type
rocsparse_datatype_f32_r   | rocsparse_datatype_f64_r
rocsparse_datatype_f32_c   | rocsparse_datatype_f64_c

Mixed-regular complex precisions:

A                          | X / Y / compute_type
rocsparse_datatype_f32_r   | rocsparse_datatype_f32_c
rocsparse_datatype_f64_r   | rocsparse_datatype_f64_c

Example
//     1 4 0 0 0 0
// A = 0 2 3 0 0 0
//     5 0 0 7 8 0
//     0 0 9 0 6 0
rocsparse_int m   = 4;
rocsparse_int n   = 6;

std::vector<int> hcsr_row_ptr = {0, 2, 4, 7, 9};
std::vector<int> hcsr_col_ind = {0, 1, 1, 2, 0, 3, 4, 2, 4};
std::vector<float> hcsr_val   = {1, 4, 2, 3, 5, 7, 8, 9, 6};
std::vector<float> hx(n, 1.0f);
std::vector<float> hy(m, 0.0f);

// Scalar alpha
float alpha = 3.7f;

// Scalar beta
float beta = 0.0f;

rocsparse_int nnz = hcsr_row_ptr[m] - hcsr_row_ptr[0];

// Offload data to device
int* dcsr_row_ptr;
int* dcsr_col_ind;
float* dcsr_val;
float* dx;
float* dy;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);
hipMalloc((void**)&dx, sizeof(float) * n);
hipMalloc((void**)&dy, sizeof(float) * m);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx, hx.data(), sizeof(float) * n, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * m, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA;
rocsparse_dnvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype row_idx_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_datatype  compute_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;
rocsparse_operation trans = rocsparse_operation_none;

rocsparse_create_handle(&handle);

// Create sparse matrix A
rocsparse_create_csr_descr(&matA,
                           m,
                           n,
                           nnz,
                           dcsr_row_ptr,
                           dcsr_col_ind,
                           dcsr_val,
                           row_idx_type,
                           col_idx_type,
                           idx_base,
                           data_type);

// Create dense vector X
rocsparse_create_dnvec_descr(&vecX,
                             n,
                             dx,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             m,
                             dy,
                             data_type);

// Call spmv to get buffer size
size_t buffer_size;
rocsparse_spmv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               &beta,
               vecY,
               compute_type,
               rocsparse_spmv_alg_csr_adaptive,
               rocsparse_spmv_stage_buffer_size,
               &buffer_size,
               nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

// Call spmv to perform analysis
rocsparse_spmv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               &beta,
               vecY,
               compute_type,
               rocsparse_spmv_alg_csr_adaptive,
               rocsparse_spmv_stage_preprocess,
               &buffer_size,
               temp_buffer);

// Call spmv to perform computation
rocsparse_spmv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               &beta,
               vecY,
               compute_type,
               rocsparse_spmv_alg_csr_adaptive,
               rocsparse_spmv_stage_compute,
               &buffer_size,
               temp_buffer);

// Copy result back to host
hipMemcpy(hy.data(), dy, sizeof(float) * m, hipMemcpyDeviceToHost);

// Clear rocSPARSE
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_dnvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(dx);
hipFree(dy);
hipFree(temp_buffer);

Note

None of the algorithms above are deterministic when \(A\) is transposed.

Note

Only the rocsparse_spmv_stage_buffer_size stage and the rocsparse_spmv_stage_compute stage are non-blocking and executed asynchronously with respect to the host. They may return before the actual computation has finished. The rocsparse_spmv_stage_preprocess stage is blocking with respect to the host.

Note

Only the rocsparse_spmv_stage_buffer_size stage and the rocsparse_spmv_stage_compute stage support execution in a hipGraph context. The rocsparse_spmv_stage_preprocess stage does not support hipGraph.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans[in] matrix operation type.

  • alpha[in] scalar \(\alpha\).

  • mat[in] matrix descriptor.

  • x[in] vector descriptor.

  • beta[in] scalar \(\beta\).

  • y[inout] vector descriptor.

  • compute_type[in] floating point precision for the SpMV computation.

  • alg[in] SpMV algorithm for the SpMV computation.

  • stage[in] SpMV stage for the SpMV computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When the rocsparse_spmv_stage_buffer_size stage is passed, the required allocation size (in bytes) is written to buffer_size and the function returns without performing the SpMV operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context handle was not initialized.

  • rocsparse_status_invalid_pointer – alpha, mat, x, beta, y or buffer_size pointer is invalid.

  • rocsparse_status_invalid_value – the value of trans, compute_type, alg, or stage is incorrect.

  • rocsparse_status_not_implemented – compute_type or alg is currently not supported.

rocsparse_spmv_ex()#

rocsparse_status rocsparse_spmv_ex(rocsparse_handle handle, rocsparse_operation trans, const void *alpha, const rocsparse_spmat_descr mat, const rocsparse_dnvec_descr x, const void *beta, const rocsparse_dnvec_descr y, rocsparse_datatype compute_type, rocsparse_spmv_alg alg, rocsparse_spmv_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse matrix vector multiplication.

rocsparse_spmv_ex multiplies the scalar \(\alpha\) with a sparse \(m \times n\) matrix \(op(A)\), defined in CSR, CSC, COO, COO (AoS), BSR, or ELL format, with the dense vector \(x\) and adds the result to the dense vector \(y\) that is multiplied by the scalar \(\beta\), such that

\[ y := \alpha \cdot op(A) \cdot x + \beta \cdot y, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans == rocsparse_operation_none} \\ A^T, & \text{if trans == rocsparse_operation_transpose} \\ A^H, & \text{if trans == rocsparse_operation_conjugate_transpose} \end{array} \right. \end{split}\]

Performing the above operation involves multiple steps. First the user calls rocsparse_spmv_ex with the stage parameter set to rocsparse_spmv_stage_buffer_size to determine the size of the required temporary storage buffer. The user then allocates this buffer and calls rocsparse_spmv_ex with the stage parameter set to rocsparse_spmv_stage_preprocess. Depending on the algorithm and sparse matrix format, this will perform analysis on the sparsity pattern of \(op(A)\). Finally the user completes the operation by calling rocsparse_spmv_ex with the stage parameter set to rocsparse_spmv_stage_compute. The buffer size, buffer allocation, and preprocess stages need to be performed only once for a given sparse matrix \(op(A)\), while the compute stage can be repeated with different \(x\) and \(y\) vectors. Once all calls to rocsparse_spmv_ex are complete, the temporary buffer can be deallocated.

rocsparse_spmv_ex supports multiple different algorithms. These algorithms have different trade-offs depending on the sparsity pattern of the matrix, whether or not the results need to be deterministic, and how many times the sparse matrix-vector product will be performed.

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmv_alg_csr_stream | Yes | No | Best suited for matrices whose rows all have a similar number of non-zeros. Can outperform the adaptive and LRB algorithms on certain sparsity patterns. Performs very poorly if some rows have few non-zeros and others have many. |
| rocsparse_spmv_alg_csr_adaptive | No | Yes | Generally the fastest algorithm across all matrix sparsity patterns, including matrices that mix rows with many non-zeros and rows with few. Requires a lengthy preprocessing step that must be amortized over many subsequent sparse matrix-vector products. |
| rocsparse_spmv_alg_csr_lrb | No | Yes | Like the adaptive algorithm, generally performs well across all matrix sparsity patterns. Usually not as fast as adaptive, but uses a much faster preprocessing step. A good choice when only a few sparse matrix-vector products will be performed. |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmv_alg_coo | Yes | Yes | Generally not as fast as the atomic algorithm, but deterministic. |
| rocsparse_spmv_alg_coo_atomic | No | No | Generally the fastest COO algorithm. |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmv_alg_ell | Yes | No | |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmv_alg_bsr | Yes | No | |

rocsparse_spmv_ex supports multiple combinations of data types and compute types. The tables below list the currently supported data types for the sparse matrix \(A\) and the dense vectors \(x\) and \(y\), together with the compute type for \(\alpha\) and \(\beta\). Using lower-precision data types saves memory bandwidth and storage when the application allows it, while the actual computation is performed in a higher precision.

rocsparse_spmv_ex supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrices.

Uniform Precisions:

| A / X / Y / compute_type |
|---|
| rocsparse_datatype_f32_r |
| rocsparse_datatype_f64_r |
| rocsparse_datatype_f32_c |
| rocsparse_datatype_f64_c |

Mixed precisions:

| A / X | Y | compute_type |
|---|---|---|
| rocsparse_datatype_i8_r | rocsparse_datatype_i32_r | rocsparse_datatype_i32_r |
| rocsparse_datatype_i8_r | rocsparse_datatype_f32_r | rocsparse_datatype_f32_r |

Mixed-regular real precisions:

| A | X / Y / compute_type |
|---|---|
| rocsparse_datatype_f32_r | rocsparse_datatype_f64_r |
| rocsparse_datatype_f32_c | rocsparse_datatype_f64_c |

Mixed-regular complex precisions:

| A | X / Y / compute_type |
|---|---|
| rocsparse_datatype_f32_r | rocsparse_datatype_f32_c |
| rocsparse_datatype_f64_r | rocsparse_datatype_f64_c |

Note

None of the algorithms above are deterministic when \(A\) is transposed.

Note

Only the rocsparse_spmv_stage_buffer_size stage and the rocsparse_spmv_stage_compute stage are non-blocking and executed asynchronously with respect to the host. They may return before the actual computation has finished. The rocsparse_spmv_stage_preprocess stage is blocking with respect to the host.

Note

Only the rocsparse_spmv_stage_buffer_size stage and the rocsparse_spmv_stage_compute stage support execution in a hipGraph context. The rocsparse_spmv_stage_preprocess stage does not support hipGraph.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans[in] matrix operation type.

  • alpha[in] scalar \(\alpha\).

  • mat[in] matrix descriptor.

  • x[in] vector descriptor.

  • beta[in] scalar \(\beta\).

  • y[inout] vector descriptor.

  • compute_type[in] floating point precision for the SpMV computation.

  • alg[in] SpMV algorithm for the SpMV computation.

  • stage[in] SpMV stage for the SpMV computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpMV operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context handle was not initialized.

  • rocsparse_status_invalid_pointer – alpha, mat, x, beta, y or buffer_size pointer is invalid.

  • rocsparse_status_invalid_value – the value of trans, compute_type, alg or stage is incorrect.

  • rocsparse_status_not_implemented – compute_type or alg is currently not supported.

rocsparse_spsv()#

rocsparse_status rocsparse_spsv(rocsparse_handle handle, rocsparse_operation trans, const void *alpha, rocsparse_const_spmat_descr mat, rocsparse_const_dnvec_descr x, const rocsparse_dnvec_descr y, rocsparse_datatype compute_type, rocsparse_spsv_alg alg, rocsparse_spsv_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse triangular system solve.

rocsparse_spsv solves a triangular linear system of equations defined by a sparse \(m \times m\) square matrix \(op(A)\), given in CSR or COO storage format, such that

\[ op(A) \cdot y = \alpha \cdot x, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans == rocsparse_operation_none} \\ A^T, & \text{if trans == rocsparse_operation_transpose} \end{array} \right. \end{split}\]
and where \(y\) is the dense solution vector and \(x\) is the dense right-hand side vector.
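Semantically, for a lower triangular matrix with trans == rocsparse_operation_none, the solve amounts to forward substitution. The sketch below shows a minimal host-side equivalent (illustrative only; `csr_lower_solve` is a hypothetical name, not a rocSPARSE routine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Reference forward substitution solving A * y = alpha * x for a
// lower triangular CSR matrix (illustrative, not rocSPARSE API).
void csr_lower_solve(int                       m,
                     const std::vector<int>&   row_ptr,
                     const std::vector<int>&   col_ind,
                     const std::vector<float>& val,
                     float                     alpha,
                     const std::vector<float>& x,
                     std::vector<float>&       y)
{
    for(int i = 0; i < m; ++i)
    {
        float sum  = alpha * x[i];
        float diag = 1.0f;
        for(int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
        {
            if(col_ind[j] < i)
            {
                sum -= val[j] * y[col_ind[j]]; // already-solved entries
            }
            else if(col_ind[j] == i)
            {
                diag = val[j]; // diagonal entry of row i
            }
        }
        y[i] = sum / diag;
    }
}
```

The device routine performs the same solve but needs the preprocess stage to discover which rows can be solved in parallel.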

Performing the above operation requires three stages. First, rocsparse_spsv must be called with the stage rocsparse_spsv_stage_buffer_size, which will determine the size of the required temporary storage buffer. The user then allocates this buffer and calls rocsparse_spsv with the stage rocsparse_spsv_stage_preprocess, which will perform analysis on the sparse matrix \(op(A)\). Finally, the user completes the computation by calling rocsparse_spsv with the stage rocsparse_spsv_stage_compute. The buffer size, buffer allocation, and preprocess stages only need to be performed once for a given sparse matrix \(op(A)\), while the computation stage can be repeated with different \(x\) and \(y\) vectors. Once all calls to rocsparse_spsv are complete, the temporary buffer can be deallocated.

rocsparse_spsv supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index types for storing the row pointer and column indices arrays of the sparse matrices. rocsparse_spsv supports the following data types for \(op(A)\), \(x\), \(y\) and compute types for \(\alpha\):

Uniform Precisions:

| A / X / Y / compute_type |
|---|
| rocsparse_datatype_f32_r |
| rocsparse_datatype_f64_r |
| rocsparse_datatype_f32_c |
| rocsparse_datatype_f64_c |

Example
//     1 0 0 0
// A = 4 2 0 0
//     0 3 7 0
//     0 0 0 1
rocsparse_int m   = 4;

std::vector<int> hcsr_row_ptr = {0, 1, 3, 5, 6};
std::vector<int> hcsr_col_ind = {0, 0, 1, 1, 2, 3};
std::vector<float> hcsr_val   = {1, 4, 2, 3, 7, 1};
std::vector<float> hx(m, 1.0f);
std::vector<float> hy(m, 0.0f);

// Scalar alpha
float alpha = 1.0f;

rocsparse_int nnz = hcsr_row_ptr[m] - hcsr_row_ptr[0];

// Offload data to device
int* dcsr_row_ptr;
int* dcsr_col_ind;
float* dcsr_val;
float* dx;
float* dy;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);
hipMalloc((void**)&dx, sizeof(float) * m);
hipMalloc((void**)&dy, sizeof(float) * m);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx, hx.data(), sizeof(float) * m, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA;
rocsparse_dnvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype row_idx_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_datatype  compute_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;
rocsparse_operation trans = rocsparse_operation_none;

rocsparse_create_handle(&handle);

// Create sparse matrix A
rocsparse_create_csr_descr(&matA,
                           m,
                           m,
                           nnz,
                           dcsr_row_ptr,
                           dcsr_col_ind,
                           dcsr_val,
                           row_idx_type,
                           col_idx_type,
                           idx_base,
                           data_type);

// Create dense vector X
rocsparse_create_dnvec_descr(&vecX,
                             m,
                             dx,
                             data_type);

// Create dense vector Y
rocsparse_create_dnvec_descr(&vecY,
                             m,
                             dy,
                             data_type);

// Call spsv to get buffer size
size_t buffer_size;
rocsparse_spsv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               vecY,
               compute_type,
               rocsparse_spsv_alg_default,
               rocsparse_spsv_stage_buffer_size,
               &buffer_size,
               nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

// Call spsv to perform analysis
rocsparse_spsv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               vecY,
               compute_type,
               rocsparse_spsv_alg_default,
               rocsparse_spsv_stage_preprocess,
               &buffer_size,
               temp_buffer);

// Call spsv to perform computation
rocsparse_spsv(handle,
               trans,
               &alpha,
               matA,
               vecX,
               vecY,
               compute_type,
               rocsparse_spsv_alg_default,
               rocsparse_spsv_stage_compute,
               &buffer_size,
               temp_buffer);

// Copy result back to host
hipMemcpy(hy.data(), dy, sizeof(float) * m, hipMemcpyDeviceToHost);

// Clear rocSPARSE
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_dnvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(dx);
hipFree(dy);
hipFree(temp_buffer);

Note

The sparse matrix formats currently supported are: rocsparse_format_coo and rocsparse_format_csr.

Note

Only the rocsparse_spsv_stage_buffer_size stage and the rocsparse_spsv_stage_compute stage are non-blocking and executed asynchronously with respect to the host. They may return before the actual computation has finished. The rocsparse_spsv_stage_preprocess stage is blocking with respect to the host.

Note

Currently, only trans == rocsparse_operation_none and trans == rocsparse_operation_transpose are supported.

Note

Only the rocsparse_spsv_stage_buffer_size stage and the rocsparse_spsv_stage_compute stage support execution in a hipGraph context. The rocsparse_spsv_stage_preprocess stage does not support hipGraph.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans[in] matrix operation type.

  • alpha[in] scalar \(\alpha\).

  • mat[in] matrix descriptor.

  • x[in] vector descriptor.

  • y[inout] vector descriptor.

  • compute_type[in] floating point precision for the SpSV computation.

  • alg[in] SpSV algorithm for the SpSV computation.

  • stage[in] SpSV stage for the SpSV computation.

  • buffer_size[out] number of bytes of the temporary storage buffer.

  • temp_buffer[in] temporary storage buffer allocated by the user. When the rocsparse_spsv_stage_buffer_size stage is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpSV operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha, mat, x, y or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – trans, compute_type, stage or alg is currently not supported.

rocsparse_spsm()#

rocsparse_status rocsparse_spsm(rocsparse_handle handle, rocsparse_operation trans_A, rocsparse_operation trans_B, const void *alpha, rocsparse_const_spmat_descr matA, rocsparse_const_dnmat_descr matB, const rocsparse_dnmat_descr matC, rocsparse_datatype compute_type, rocsparse_spsm_alg alg, rocsparse_spsm_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse triangular system solve with multiple right-hand sides.

rocsparse_spsm solves a triangular linear system of equations defined by a sparse \(m \times m\) square matrix \(op(A)\), given in CSR or COO storage format, such that

\[ op(A) \cdot C = \alpha \cdot op(B), \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans_A == rocsparse_operation_none} \\ A^T, & \text{if trans_A == rocsparse_operation_transpose} \end{array} \right. \end{split}\]
and
\[\begin{split} op(B) = \left\{ \begin{array}{ll} B, & \text{if trans_B == rocsparse_operation_none} \\ B^T, & \text{if trans_B == rocsparse_operation_transpose} \end{array} \right. \end{split}\]
and where \(C\) is the dense solution matrix and \(B\) is the dense right-hand side matrix. Both \(B\) and \(C\) can be in row or column order.

Performing the above operation requires three stages. First, rocsparse_spsm must be called with the stage rocsparse_spsm_stage_buffer_size, which will determine the size of the required temporary storage buffer. The user then allocates this buffer and calls rocsparse_spsm with the stage rocsparse_spsm_stage_preprocess, which will perform analysis on the sparse matrix \(op(A)\). Finally, the user completes the computation by calling rocsparse_spsm with the stage rocsparse_spsm_stage_compute. The buffer size, buffer allocation, and preprocess stages only need to be performed once for a given sparse triangular matrix \(op(A)\), while the computation stage can be repeated with different \(B\) and \(C\) matrices. Once all calls to rocsparse_spsm are complete, the temporary buffer can be deallocated.

As noted above, both \(B\) and \(C\) can be in row or column order (including mixed order, where \(B\) is row order and \(C\) is column order or vice versa). Internally, however, rocSPARSE kernels solve the system assuming that \(B\) and \(C\) are in row order, as this provides the best memory access. If \(C\) is not in row order and/or \(B\) is not in row order (or \(B^{T}\) is not in column order, which is equivalent to row order), then memory copies and/or transposes may be performed internally to bring the data into the correct order (possibly requiring extra buffer space). Once the computation completes, additional memory copies and/or transposes may be performed to return the data to the user arrays. For best performance and the smallest required temporary storage buffers, use row order for the matrix \(C\) and row order for the matrix \(B\) (or column order if \(B\) is being transposed).

rocsparse_spsm supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrices. rocsparse_spsm supports the following data types for \(op(A)\), \(op(B)\), \(C\) and compute types for \(\alpha\):

Uniform Precisions:

| A / B / C / compute_type |
|---|
| rocsparse_datatype_f32_r |
| rocsparse_datatype_f64_r |
| rocsparse_datatype_f32_c |
| rocsparse_datatype_f64_c |

Example
//     1 0 0 0
// A = 4 2 0 0
//     0 3 7 0
//     0 0 0 1
rocsparse_int m   = 4;
rocsparse_int n   = 2;

std::vector<int> hcsr_row_ptr = {0, 1, 3, 5, 6};
std::vector<int> hcsr_col_ind = {0, 0, 1, 1, 2, 3};
std::vector<float> hcsr_val   = {1, 4, 2, 3, 7, 1};
std::vector<float> hB(m * n);
std::vector<float> hC(m * n);

for(int i = 0; i < n; i++)
{
    for(int j = 0; j < m; j++)
    {
        hB[m * i + j] = static_cast<float>(i + 1);
    }
}

// Scalar alpha
float alpha = 1.0f;

rocsparse_int nnz = hcsr_row_ptr[m] - hcsr_row_ptr[0];

// Offload data to device
int* dcsr_row_ptr;
int* dcsr_col_ind;
float* dcsr_val;
float* dB;
float* dC;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);
hipMalloc((void**)&dB, sizeof(float) * m * n);
hipMalloc((void**)&dC, sizeof(float) * m * n);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dB, hB.data(), sizeof(float) * m * n, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA;
rocsparse_dnmat_descr matB;
rocsparse_dnmat_descr matC;

rocsparse_indextype row_idx_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_datatype  compute_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;
rocsparse_operation trans_A = rocsparse_operation_none;
rocsparse_operation trans_B = rocsparse_operation_none;

rocsparse_create_handle(&handle);

// Create sparse matrix A
rocsparse_create_csr_descr(&matA,
                           m,
                           m,
                           nnz,
                           dcsr_row_ptr,
                           dcsr_col_ind,
                           dcsr_val,
                           row_idx_type,
                           col_idx_type,
                           idx_base,
                           data_type);

// Create dense matrix B
rocsparse_create_dnmat_descr(&matB,
                             m,
                             n,
                             m,
                             dB,
                             data_type,
                             rocsparse_order_column);

// Create dense matrix C
rocsparse_create_dnmat_descr(&matC,
                             m,
                             n,
                             m,
                             dC,
                             data_type,
                             rocsparse_order_column);

// Call spsm to get buffer size
size_t buffer_size;
rocsparse_spsm(handle,
               trans_A,
               trans_B,
               &alpha,
               matA,
               matB,
               matC,
               compute_type,
               rocsparse_spsm_alg_default,
               rocsparse_spsm_stage_buffer_size,
               &buffer_size,
               nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

// Call spsm to perform analysis
rocsparse_spsm(handle,
               trans_A,
               trans_B,
               &alpha,
               matA,
               matB,
               matC,
               compute_type,
               rocsparse_spsm_alg_default,
               rocsparse_spsm_stage_preprocess,
               &buffer_size,
               temp_buffer);

// Call spsm to perform computation
rocsparse_spsm(handle,
               trans_A,
               trans_B,
               &alpha,
               matA,
               matB,
               matC,
               compute_type,
               rocsparse_spsm_alg_default,
               rocsparse_spsm_stage_compute,
               &buffer_size,
               temp_buffer);

// Copy result back to host
hipMemcpy(hC.data(), dC, sizeof(float) * m * n, hipMemcpyDeviceToHost);

std::cout << "hC" << std::endl;
for(size_t i = 0; i < hC.size(); ++i)
{
    std::cout << hC[i] << " ";
}
std::cout << std::endl;

// Clear rocSPARSE
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_dnmat_descr(matB);
rocsparse_destroy_dnmat_descr(matC);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(dB);
hipFree(dC);
hipFree(temp_buffer);

Note

The sparse matrix formats currently supported are: rocsparse_format_coo and rocsparse_format_csr.

Note

Only the rocsparse_spsm_stage_buffer_size stage and the rocsparse_spsm_stage_compute stage are non-blocking and executed asynchronously with respect to the host. They may return before the actual computation has finished. The rocsparse_spsm_stage_preprocess stage is blocking with respect to the host.

Note

Currently, only trans_A == rocsparse_operation_none and trans_A == rocsparse_operation_transpose are supported. Likewise, only trans_B == rocsparse_operation_none and trans_B == rocsparse_operation_transpose are supported.

Note

Only the rocsparse_spsm_stage_buffer_size stage and the rocsparse_spsm_stage_compute stage support execution in a hipGraph context. The rocsparse_spsm_stage_preprocess stage does not support hipGraph.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans_A[in] matrix operation type for the sparse matrix \(op(A)\).

  • trans_B[in] matrix operation type for the dense matrix \(op(B)\).

  • alpha[in] scalar \(\alpha\).

  • matA[in] sparse matrix descriptor.

  • matB[in] dense matrix descriptor.

  • matC[inout] dense matrix descriptor.

  • compute_type[in] floating point precision for the SpSM computation.

  • alg[in] SpSM algorithm for the SpSM computation.

  • stage[in] SpSM stage for the SpSM computation.

  • buffer_size[out] number of bytes of the temporary storage buffer.

  • temp_buffer[in] temporary storage buffer allocated by the user. When the rocsparse_spsm_stage_buffer_size stage is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpSM operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha, matA, matB, matC or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – trans_A, trans_B, compute_type, stage or alg is currently not supported.

rocsparse_spmm()#

rocsparse_status rocsparse_spmm(rocsparse_handle handle, rocsparse_operation trans_A, rocsparse_operation trans_B, const void *alpha, rocsparse_const_spmat_descr mat_A, rocsparse_const_dnmat_descr mat_B, const void *beta, const rocsparse_dnmat_descr mat_C, rocsparse_datatype compute_type, rocsparse_spmm_alg alg, rocsparse_spmm_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse matrix-dense matrix multiplication.

rocsparse_spmm multiplies the scalar \(\alpha\) with a sparse \(m \times k\) matrix \(op(A)\), defined in CSR, COO, BSR or Blocked ELL storage format, and the dense \(k \times n\) matrix \(op(B)\) and adds the result to the dense \(m \times n\) matrix \(C\) that is multiplied by the scalar \(\beta\), such that

\[ C := \alpha \cdot op(A) \cdot op(B) + \beta \cdot C, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans_A == rocsparse_operation_none} \\ A^T, & \text{if trans_A == rocsparse_operation_transpose} \\ A^H, & \text{if trans_A == rocsparse_operation_conjugate_transpose} \end{array} \right. \end{split}\]
and
\[\begin{split} op(B) = \left\{ \begin{array}{ll} B, & \text{if trans_B == rocsparse_operation_none} \\ B^T, & \text{if trans_B == rocsparse_operation_transpose} \\ B^H, & \text{if trans_B == rocsparse_operation_conjugate_transpose} \end{array} \right. \end{split}\]
Both \(B\) and \(C\) can be in row or column order.
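For the non-transposed CSR case with \(B\) and \(C\) in row order, the operation is equivalent to the following host-side reference loop (a minimal sketch for illustration; `csr_spmm` and all names here are hypothetical, not part of the rocSPARSE API):

```cpp
#include <cassert>
#include <vector>

// Reference C = alpha * A * B + beta * C for a CSR matrix A (m x k)
// and row-order dense matrices B (k x n) and C (m x n).
// Illustrative only; rocsparse_spmm performs this on the device.
void csr_spmm(int m, int n,
              const std::vector<int>&   row_ptr,
              const std::vector<int>&   col_ind,
              const std::vector<float>& val,
              float                     alpha,
              const std::vector<float>& B,
              float                     beta,
              std::vector<float>&       C)
{
    for(int i = 0; i < m; ++i)
    {
        for(int col = 0; col < n; ++col)
        {
            float sum = 0.0f;
            // Row i of A against column col of B
            for(int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
            {
                sum += val[j] * B[n * col_ind[j] + col];
            }
            C[n * i + col] = alpha * sum + beta * C[n * i + col];
        }
    }
}
```

This also shows why row order is the natural layout internally: each inner loop streams contiguous rows of \(C\) and gathers rows of \(B\).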

rocsparse_spmm requires three stages to complete. First, the user passes the rocsparse_spmm_stage_buffer_size stage to determine the size of the required temporary storage buffer. Next, the user allocates this buffer and calls rocsparse_spmm again with the rocsparse_spmm_stage_preprocess stage, which will perform analysis on the sparse matrix \(op(A)\). Finally, the user calls rocsparse_spmm with the rocsparse_spmm_stage_compute stage to perform the actual computation. The buffer size, buffer allocation, and preprocess stages only need to be performed once for a given sparse matrix \(op(A)\), while the computation stage can be repeated with different \(B\) and \(C\) matrices. Once all calls to rocsparse_spmm are complete, the temporary buffer can be deallocated.

As noted above, both \(B\) and \(C\) can be in row or column order (this includes mixing the order so that \(B\) is row order and \(C\) is column order and vice versa). For best performance, use row order for both \(B\) and \(C\) as this provides the best memory access.

rocsparse_spmm supports multiple different algorithms. These algorithms have different trade-offs depending on the sparsity pattern of the matrix, whether or not the results need to be deterministic, and how many times the sparse matrix-dense matrix product will be performed.

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmm_alg_csr | Yes | No | Default algorithm. |
| rocsparse_spmm_alg_csr_row_split | Yes | No | Assigns a fixed number of threads per row regardless of the number of non-zeros in each row. Can perform well when each row in the matrix has roughly the same number of non-zeros. |
| rocsparse_spmm_alg_csr_nnz_split | No | Yes | Distributes work by having each thread block work on a fixed number of non-zeros regardless of the number of rows involved. Can perform well when the matrix mixes rows with few non-zeros and rows with many. |
| rocsparse_spmm_alg_csr_merge_path | No | Yes | Combines the row-split and non-zero-split approaches by having each block work on a fixed amount of work, which can be either non-zeros or rows. |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmm_alg_coo_segmented | Yes | No | Generally not as fast as the atomic algorithm, but deterministic. |
| rocsparse_spmm_alg_coo_atomic | No | No | Generally the fastest COO algorithm; the default for COO. |
| rocsparse_spmm_alg_coo_segmented_atomic | No | No | |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmm_alg_bell | Yes | No | |

| Algorithm | Deterministic | Preprocessing | Notes |
|---|---|---|---|
| rocsparse_spmm_alg_bsr | Yes | No | |

One can also pass rocsparse_spmm_alg_default, which automatically selects an algorithm based on the sparse matrix format: rocsparse_spmm_alg_csr for CSR matrices, rocsparse_spmm_alg_bell for Blocked ELL matrices, rocsparse_spmm_alg_bsr for BSR matrices, and rocsparse_spmm_alg_coo_atomic for COO matrices.

When A is transposed, rocsparse_spmm will revert to using rocsparse_spmm_alg_csr for CSR format and rocsparse_spmm_alg_coo_atomic for COO format regardless of algorithm selected.

rocsparse_spmm supports multiple combinations of data types and compute types. The tables below list the currently supported data types for the sparse matrix \(op(A)\), the dense matrices \(op(B)\) and \(C\), and the compute type for \(\alpha\) and \(\beta\). Using lower-precision data types saves memory bandwidth and storage when the application allows it, while the actual computation is performed in a higher precision.

rocsparse_spmm supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrices.

Uniform Precisions:

| A / B / C / compute_type |
|---|
| rocsparse_datatype_f32_r |
| rocsparse_datatype_f64_r |
| rocsparse_datatype_f32_c |
| rocsparse_datatype_f64_c |

Mixed precisions:

| A / B | C | compute_type |
|---|---|---|
| rocsparse_datatype_i8_r | rocsparse_datatype_i32_r | rocsparse_datatype_i32_r |
| rocsparse_datatype_i8_r | rocsparse_datatype_f32_r | rocsparse_datatype_f32_r |

rocsparse_spmm also supports batched computation for CSR and COO matrices. There are three supported batch modes:

\[\begin{split} C_i = A \times B_i \\ C_i = A_i \times B \\ C_i = A_i \times B_i \end{split}\]

The batch mode is determined by the batch count and stride passed for each matrix. For example, to use the first batch mode ( \(C_i = A \times B_i\)) with 100 batches for non-transposed \(A\), \(B\), and \(C\), one passes:

\[\begin{split} batch\_count\_A=1 \\ batch\_count\_B=100 \\ batch\_count\_C=100 \\ offsets\_batch\_stride\_A=0 \\ columns\_values\_batch\_stride\_A=0 \\ batch\_stride\_B=k*n \\ batch\_stride\_C=m*n \end{split}\]
To use the second batch mode ( \(C_i = A_i \times B\)) one could use:
\[\begin{split} batch\_count\_A=100 \\ batch\_count\_B=1 \\ batch\_count\_C=100 \\ offsets\_batch\_stride\_A=m+1 \\ columns\_values\_batch\_stride\_A=nnz \\ batch\_stride\_B=0 \\ batch\_stride\_C=m*n \end{split}\]
And to use the third batch mode ( \(C_i = A_i \times B_i\)) one could use:
\[\begin{split} batch\_count\_A=100 \\ batch\_count\_B=100 \\ batch\_count\_C=100 \\ offsets\_batch\_stride\_A=m+1 \\ columns\_values\_batch\_stride_A=nnz \\ batch\_stride_B=k*n \\ batch\_stride_C=m*n \end{split}\]
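The strides above translate directly into per-batch offsets into the user arrays, as the following sketch shows (`batch_ptr` is a hypothetical helper for illustration, not a rocSPARSE function):

```cpp
#include <cassert>

// Start of batch i within a batched array. A stride of 0 means every
// batch reuses the same data, matching batch_count == 1 for that
// matrix (hypothetical helper, not part of the rocSPARSE API).
const float* batch_ptr(const float* base, long stride, int i)
{
    return base + stride * i;
}
```

So in the first batch mode, the single \(A\) is shared via zero strides while each \(B_i\) and \(C_i\) lives `k*n` and `m*n` elements past the previous batch.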
See examples below.

Example

This example performs sparse matrix-dense matrix multiplication, \(C := \alpha \cdot A \cdot B + \beta \cdot C\)

//     1 4 0 0 0 0
// A = 0 2 3 0 0 0
//     5 0 0 7 8 0
//     0 0 9 0 6 0

//     1 4 2
//     1 2 3
// B = 5 4 0
//     3 1 9
//     1 2 2
//     0 3 0

//     1 1 5
// C = 1 2 1
//     1 3 1
//     6 2 4

rocsparse_int m   = 4;
rocsparse_int k   = 6;
rocsparse_int n   = 3;

csr_row_ptr[m + 1] = {0, 1, 3};                                              // device memory
csr_col_ind[nnz]   = {0, 0, 1};                                              // device memory
csr_val[nnz]       = {1, 0, 4, 2, 0, 3, 5, 0, 0, 0, 0, 9, 7, 0, 8, 6, 0, 0}; // device memory

B[k * n]       = {1, 1, 5, 3, 1, 0, 4, 2, 4, 1, 2, 3, 2, 3, 0, 9, 2, 0};     // device memory
C[m * n]       = {1, 1, 1, 6, 1, 2, 3, 2, 5, 1, 1, 4};                       // device memory

rocsparse_int nnz = csr_row_ptr[m] - csr_row_ptr[0];

float alpha = 1.0f;
float beta  = 0.0f;

// Create CSR arrays on device
rocsparse_int* csr_row_ptr;
rocsparse_int* csr_col_ind;
float* csr_val;
float* B;
float* C;
hipMalloc((void**)&csr_row_ptr, sizeof(rocsparse_int) * (m + 1));
hipMalloc((void**)&csr_col_ind, sizeof(rocsparse_int) * nnz);
hipMalloc((void**)&csr_val, sizeof(float) * nnz);
hipMalloc((void**)&B, sizeof(float) * k * n);
hipMalloc((void**)&C, sizeof(float) * m * n);

// Create rocsparse handle
rocsparse_local_handle handle;

// Types
rocsparse_indextype itype = rocsparse_indextype_i32;
rocsparse_indextype jtype = rocsparse_indextype_i32;
rocsparse_datatype  ttype = rocsparse_datatype_f32_r;

// Create descriptors
rocsparse_spmat_descr mat_A;
rocsparse_dnmat_descr mat_B;
rocsparse_dnmat_descr mat_C;

rocsparse_create_csr_descr(&mat_A, m, k, nnz, csr_row_ptr, csr_col_ind, csr_val, itype, jtype, rocsparse_index_base_zero, ttype);
rocsparse_create_dnmat_descr(&mat_B, k, n, k, B, ttype, rocsparse_order_column);
rocsparse_create_dnmat_descr(&mat_C, m, n, m, C, ttype, rocsparse_order_column);

// Query SpMM buffer
size_t buffer_size;
rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_buffer_size,
               &buffer_size,
               nullptr));

// Allocate buffer
void* buffer;
hipMalloc(&buffer, buffer_size);

rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_preprocess,
               &buffer_size,
               buffer);

// Perform SpMM computation (alpha and beta are host pointers)
rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_compute,
               &buffer_size,
               buffer);

// Clear up on device
hipFree(csr_row_ptr);
hipFree(csr_col_ind);
hipFree(csr_val);
hipFree(B);
hipFree(C);
hipFree(buffer);

rocsparse_destroy_spmat_descr(mat_A);
rocsparse_destroy_dnmat_descr(mat_B);
rocsparse_destroy_dnmat_descr(mat_C);

Example

An example of the first batch mode ( \(C_i = A \times B_i\)) is provided below.

//     1 4 0 0 0 0
// A = 0 2 3 0 0 0
//     5 0 0 7 8 0
//     0 0 9 0 6 0

rocsparse_int m   = 4;
rocsparse_int k   = 6;
rocsparse_int n   = 3;

csr_row_ptr[m + 1] = {0, 2, 4, 7, 9};             // device memory
csr_col_ind[nnz]   = {0, 1, 1, 2, 0, 3, 4, 2, 4}; // device memory
csr_val[nnz]       = {1, 4, 2, 3, 5, 7, 8, 9, 6}; // device memory

B[batch_count_B * k * n]       = {...};     // device memory
C[batch_count_C * m * n]       = {...};     // device memory

rocsparse_int nnz = 9;

rocsparse_int batch_count_A = 1;
rocsparse_int batch_count_B = 100;
rocsparse_int batch_count_C = 100;

rocsparse_int offsets_batch_stride_A        = 0;
rocsparse_int columns_values_batch_stride_A = 0;
rocsparse_int batch_stride_B                = k * n;
rocsparse_int batch_stride_C                = m * n;

float alpha = 1.0f;
float beta  = 0.0f;

// Create CSR arrays on device
rocsparse_int* csr_row_ptr;
rocsparse_int* csr_col_ind;
float* csr_val;
float* B;
float* C;
hipMalloc((void**)&csr_row_ptr, sizeof(rocsparse_int) * (m + 1));
hipMalloc((void**)&csr_col_ind, sizeof(rocsparse_int) * nnz);
hipMalloc((void**)&csr_val, sizeof(float) * nnz);
hipMalloc((void**)&B, sizeof(float) * batch_count_B * k * n);
hipMalloc((void**)&C, sizeof(float) * batch_count_C * m * n);

// Create rocsparse handle
rocsparse_local_handle handle;

// Types
rocsparse_indextype itype = rocsparse_indextype_i32;
rocsparse_indextype jtype = rocsparse_indextype_i32;
rocsparse_datatype  ttype = rocsparse_datatype_f32_r;

// Create descriptors
rocsparse_spmat_descr mat_A;
rocsparse_dnmat_descr mat_B;
rocsparse_dnmat_descr mat_C;

rocsparse_create_csr_descr(&mat_A, m, k, nnz, csr_row_ptr, csr_col_ind, csr_val, itype, jtype, rocsparse_index_base_zero, ttype);
rocsparse_create_dnmat_descr(&mat_B, k, n, k, B, ttype, rocsparse_order_column);
rocsparse_create_dnmat_descr(&mat_C, m, n, m, C, ttype, rocsparse_order_column);

rocsparse_csr_set_strided_batch(mat_A, batch_count_A, offsets_batch_stride_A, columns_values_batch_stride_A);
rocsparse_dnmat_set_strided_batch(mat_B, batch_count_B, batch_stride_B);
rocsparse_dnmat_set_strided_batch(mat_C, batch_count_C, batch_stride_C);

// Query SpMM buffer
size_t buffer_size;
rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_buffer_size,
               &buffer_size,
               nullptr);

// Allocate buffer
void* buffer;
hipMalloc(&buffer, buffer_size);

rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_preprocess,
               &buffer_size,
               buffer);

// Perform SpMM computation (alpha and beta are host pointers)
rocsparse_spmm(handle,
               rocsparse_operation_none,
               rocsparse_operation_none,
               &alpha,
               mat_A,
               mat_B,
               &beta,
               mat_C,
               ttype,
               rocsparse_spmm_alg_default,
               rocsparse_spmm_stage_compute,
               &buffer_size,
               buffer);

// Clear up on device
hipFree(csr_row_ptr);
hipFree(csr_col_ind);
hipFree(csr_val);
hipFree(B);
hipFree(C);
hipFree(buffer);

rocsparse_destroy_spmat_descr(mat_A);
rocsparse_destroy_dnmat_descr(mat_B);
rocsparse_destroy_dnmat_descr(mat_C);
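
The strided-batch addressing used above ( \(C_i = A \times B_i\) with a shared \(A\)) can be illustrated by a host-side reference loop. This is illustrative C++ only, not rocSPARSE API code, and the function name is hypothetical; it assumes column-major \(B\) and \(C\) with leading dimensions \(k\) and \(m\), as in the example:

```cpp
#include <cassert>
#include <vector>

// Host reference for the batched SpMM C_i = alpha * A * B_i + beta * C_i,
// with A in CSR format (shared across batches) and B, C stored column-major
// using per-batch strides stride_B = k * n and stride_C = m * n.
void spmm_batched_ref(int m, int k, int n, int batch_count,
                      const std::vector<int>& row_ptr,
                      const std::vector<int>& col_ind,
                      const std::vector<float>& val,
                      const std::vector<float>& B, int stride_B,
                      std::vector<float>& C, int stride_C,
                      float alpha, float beta)
{
    for(int b = 0; b < batch_count; ++b)
    {
        for(int col = 0; col < n; ++col)
        {
            for(int row = 0; row < m; ++row)
            {
                float dot = 0.0f;
                for(int idx = row_ptr[row]; idx < row_ptr[row + 1]; ++idx)
                {
                    // B is column-major with leading dimension k
                    dot += val[idx] * B[b * stride_B + col * k + col_ind[idx]];
                }
                // C is column-major with leading dimension m
                float& c = C[b * stride_C + col * m + row];
                c = alpha * dot + beta * c;
            }
        }
    }
}
```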

Note

None of the algorithms above are deterministic when \(A\) is transposed or conjugate transposed.

Note

All algorithms perform best when using row ordering for the dense \(B\) and \(C\) matrices.

Note

The sparse matrix formats currently supported are: rocsparse_format_coo, rocsparse_format_csr, rocsparse_format_csc, rocsparse_format_bsr, and rocsparse_format_bell.

Note

Mixed precisions are only supported for BSR, CSR, CSC, and COO matrix formats.

Note

Only the rocsparse_spmm_stage_buffer_size stage and the rocsparse_spmm_stage_compute stage are non-blocking and executed asynchronously with respect to the host. They may return before the actual computation has finished. The rocsparse_spmm_stage_preprocess stage is blocking with respect to the host.

Note

Currently, only trans_A == rocsparse_operation_none is supported for COO and Blocked ELL formats.

Note

Only the rocsparse_spmm_stage_buffer_size stage and the rocsparse_spmm_stage_compute stage support execution in a hipGraph context. The rocsparse_spmm_stage_preprocess stage does not support hipGraph.

Note

Currently, only CSR, COO, BSR and Blocked ELL sparse formats are supported.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans_A[in] matrix operation type.

  • trans_B[in] matrix operation type.

  • alpha[in] scalar \(\alpha\).

  • mat_A[in] matrix descriptor.

  • mat_B[in] matrix descriptor.

  • beta[in] scalar \(\beta\).

  • mat_C[in] matrix descriptor.

  • compute_type[in] floating point precision for the SpMM computation.

  • alg[in] SpMM algorithm for the SpMM computation.

  • stage[in] SpMM stage for the SpMM computation.

  • buffer_size[out] number of bytes of the temporary storage buffer.

  • temp_buffer[in] temporary storage buffer allocated by the user. When the rocsparse_spmm_stage_buffer_size stage is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpMM operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha, mat_A, mat_B, mat_C, beta, or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – trans_A, trans_B, compute_type or alg is currently not supported.

rocsparse_spgemm()#

rocsparse_status rocsparse_spgemm(rocsparse_handle handle, rocsparse_operation trans_A, rocsparse_operation trans_B, const void *alpha, rocsparse_const_spmat_descr A, rocsparse_const_spmat_descr B, const void *beta, rocsparse_const_spmat_descr D, rocsparse_spmat_descr C, rocsparse_datatype compute_type, rocsparse_spgemm_alg alg, rocsparse_spgemm_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse matrix sparse matrix multiplication.

rocsparse_spgemm multiplies the scalar \(\alpha\) with the sparse \(m \times k\) matrix \(op(A)\) and the sparse \(k \times n\) matrix \(op(B)\) and adds the result to the sparse \(m \times n\) matrix \(D\) that is multiplied by \(\beta\). The final result is stored in the sparse \(m \times n\) matrix \(C\), such that

\[ C := \alpha \cdot op(A) \cdot op(B) + \beta \cdot D, \]
with
\[ op(A) = \left\{ \begin{array}{ll} A, & \text{if trans_A == rocsparse_operation_none} \end{array} \right. \]
and
\[ op(B) = \left\{ \begin{array}{ll} B, & \text{if trans_B == rocsparse_operation_none} \end{array} \right. \]

rocsparse_spgemm requires three stages to complete. First, the user passes the rocsparse_spgemm_stage_buffer_size stage to determine the size of the required temporary storage buffer. Next, the user allocates this buffer and calls rocsparse_spgemm again with the rocsparse_spgemm_stage_nnz stage which will determine the number of non-zeros in \(C\). This stage will also fill in the row pointer array of \(C\). Now that the number of non-zeros in \(C\) is known, the user allocates space for the column indices and values arrays of \(C\). Finally, the user calls rocsparse_spgemm with the rocsparse_spgemm_stage_compute stage to perform the actual computation which fills in the column indices and values arrays of \(C\). Once all calls to rocsparse_spgemm are complete, the temporary buffer can be deallocated.

Alternatively, the user may want to perform sparse matrix products multiple times with matrices having the same sparsity pattern but differing values. In this scenario, the process begins as before. First, the user calls rocsparse_spgemm with the rocsparse_spgemm_stage_buffer_size stage to determine the required buffer size. The user again allocates this buffer and calls rocsparse_spgemm with the rocsparse_spgemm_stage_nnz stage to determine the number of non-zeros in \(C\). The user allocates the \(C\) column indices and values arrays. Now, however, the user calls rocsparse_spgemm with the rocsparse_spgemm_stage_symbolic stage, which fills in the column indices array of \(C\) but not the values array. The user is then free to repeatedly change the values of \(A\), \(B\), and \(D\) and call rocsparse_spgemm with the rocsparse_spgemm_stage_numeric stage, which fills in the values array of \(C\). The extra rocsparse_spgemm_stage_symbolic and rocsparse_spgemm_stage_numeric stages allow the user to compute the sparsity pattern of \(C\) once but compute the values multiple times.
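
The split between the two stages can be sketched with a host-side reference. This is illustrative C++ only, not rocSPARSE API code, and the names are hypothetical: the symbolic pass depends only on the structure of \(A\) and \(B\), while the numeric pass fills values for a fixed pattern and can be repeated as the values change:

```cpp
#include <set>
#include <vector>

// Illustrative host-side sketch of the symbolic/numeric split for C = A * B
// (CSR, zero-based, sorted column indices). Not rocSPARSE API code.
struct CsrPattern
{
    std::vector<int> row_ptr;
    std::vector<int> col_ind;
};

// Symbolic pass: derive the sparsity pattern of C from the structure of A and B only.
CsrPattern spgemm_symbolic(int m,
                           const std::vector<int>& row_ptr_A, const std::vector<int>& col_ind_A,
                           const std::vector<int>& row_ptr_B, const std::vector<int>& col_ind_B)
{
    CsrPattern C;
    C.row_ptr.assign(m + 1, 0);
    for(int i = 0; i < m; ++i)
    {
        std::set<int> cols; // columns touched in row i of C
        for(int a = row_ptr_A[i]; a < row_ptr_A[i + 1]; ++a)
        {
            int j = col_ind_A[a];
            for(int b = row_ptr_B[j]; b < row_ptr_B[j + 1]; ++b)
                cols.insert(col_ind_B[b]);
        }
        C.col_ind.insert(C.col_ind.end(), cols.begin(), cols.end());
        C.row_ptr[i + 1] = static_cast<int>(C.col_ind.size());
    }
    return C;
}

// Numeric pass: fill the values for the fixed pattern; can be repeated after
// the values of A or B change without redoing the symbolic pass.
std::vector<float> spgemm_numeric(int m, const CsrPattern& C,
                                  const std::vector<int>& row_ptr_A, const std::vector<int>& col_ind_A,
                                  const std::vector<float>& val_A,
                                  const std::vector<int>& row_ptr_B, const std::vector<int>& col_ind_B,
                                  const std::vector<float>& val_B)
{
    std::vector<float> val_C(C.col_ind.size(), 0.0f);
    for(int i = 0; i < m; ++i)
        for(int a = row_ptr_A[i]; a < row_ptr_A[i + 1]; ++a)
        {
            int j = col_ind_A[a];
            for(int b = row_ptr_B[j]; b < row_ptr_B[j + 1]; ++b)
                for(int c = C.row_ptr[i]; c < C.row_ptr[i + 1]; ++c) // linear scan of the fixed pattern
                    if(C.col_ind[c] == col_ind_B[b])
                    {
                        val_C[c] += val_A[a] * val_B[b];
                        break;
                    }
        }
    return val_C;
}
```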

rocsparse_spgemm supports multiple combinations of data types and compute types. The tables below indicate the currently supported data types that can be used for the sparse matrices \(op(A)\), \(op(B)\), \(C\), and \(D\) and the compute type for \(\alpha\) and \(\beta\). Using different data types saves memory bandwidth and storage where the application allows it, while the actual computation is performed in a higher precision.

rocsparse_spgemm supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrices.

Uniform Precisions:

A / B / C / D / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

In general, when multiplying two sparse matrices together, it is entirely possible that the resulting matrix requires a larger index representation to be stored correctly. For example, when multiplying \(A \times B\) using rocsparse_indextype_i32 index types for the row pointer and column indices arrays, it may be the case that the row pointer of the resulting \(C\) matrix requires the index precision rocsparse_indextype_i64. This is currently not supported. In this scenario, the user must store the \(A\) and \(B\) matrices using the higher index precision.
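
One way to anticipate this on the host is to bound the number of non-zeros of the product in 64-bit arithmetic before choosing index types. This is a sketch under the stated assumptions, and the helper name is hypothetical; the bound counts duplicate products, so it may overestimate:

```cpp
#include <cstdint>
#include <vector>

// Illustrative host-side check (not rocSPARSE API code): bound nnz(A * B)
// in 64-bit arithmetic. Each non-zero A(i, j) can produce at most
// nnz(row j of B) entries in C, so the bound never undercounts.
int64_t spgemm_nnz_upper_bound(int m,
                               const std::vector<int>& row_ptr_A,
                               const std::vector<int>& col_ind_A,
                               const std::vector<int>& row_ptr_B)
{
    int64_t bound = 0;
    for(int i = 0; i < m; ++i)
        for(int a = row_ptr_A[i]; a < row_ptr_A[i + 1]; ++a)
        {
            int j = col_ind_A[a];
            bound += int64_t(row_ptr_B[j + 1]) - row_ptr_B[j];
        }
    // If bound exceeds INT32_MAX, store A and B with rocsparse_indextype_i64.
    return bound;
}
```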

Example
// A - m x k
// B - k x n
// C - m x n
int m = 400;
int n = 400;
int k = 300;

std::vector<int> hcsr_row_ptr_A = {...}; // host A m x k matrix
std::vector<int> hcsr_col_ind_A = {...}; // host A m x k matrix
std::vector<float> hcsr_val_A = {...};   // host A m x k matrix

std::vector<int> hcsr_row_ptr_B = {...}; // host B k x n matrix
std::vector<int> hcsr_col_ind_B = {...}; // host B k x n matrix
std::vector<float> hcsr_val_B = {...};   // host B k x n matrix

int nnz_A = hcsr_val_A.size();
int nnz_B = hcsr_val_B.size();

float alpha            = 1.0f;
float beta             = 0.0f;

int* dcsr_row_ptr_A = nullptr;
int* dcsr_col_ind_A = nullptr;
float* dcsr_val_A = nullptr;

int* dcsr_row_ptr_B = nullptr;
int* dcsr_col_ind_B = nullptr;
float* dcsr_val_B = nullptr;

int* dcsr_row_ptr_C = nullptr;

hipMalloc((void**)&dcsr_row_ptr_A, (m + 1) * sizeof(int));
hipMalloc((void**)&dcsr_col_ind_A, nnz_A * sizeof(int));
hipMalloc((void**)&dcsr_val_A, nnz_A * sizeof(float));

hipMalloc((void**)&dcsr_row_ptr_B, (k + 1) * sizeof(int));
hipMalloc((void**)&dcsr_col_ind_B, nnz_B * sizeof(int));
hipMalloc((void**)&dcsr_val_B, nnz_B * sizeof(float));

hipMalloc((void**)&dcsr_row_ptr_C, (m + 1) * sizeof(int));

hipMemcpy(dcsr_row_ptr_A, hcsr_row_ptr_A.data(), (m + 1) * sizeof(int), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind_A, hcsr_col_ind_A.data(), nnz_A * sizeof(int), hipMemcpyHostToDevice);
hipMemcpy(dcsr_val_A, hcsr_val_A.data(), nnz_A * sizeof(float), hipMemcpyHostToDevice);

hipMemcpy(dcsr_row_ptr_B, hcsr_row_ptr_B.data(), (k + 1) * sizeof(int), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind_B, hcsr_col_ind_B.data(), nnz_B * sizeof(int), hipMemcpyHostToDevice);
hipMemcpy(dcsr_val_B, hcsr_val_B.data(), nnz_B * sizeof(float), hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA, matB, matC, matD;
void*                temp_buffer    = NULL;
size_t               buffer_size = 0;

rocsparse_operation trans_A = rocsparse_operation_none;
rocsparse_operation trans_B = rocsparse_operation_none;
rocsparse_index_base index_base = rocsparse_index_base_zero;
rocsparse_indextype itype = rocsparse_indextype_i32;
rocsparse_indextype jtype = rocsparse_indextype_i32;
rocsparse_datatype  ttype = rocsparse_datatype_f32_r;

rocsparse_create_handle(&handle);

// Create sparse matrix A in CSR format
rocsparse_create_csr_descr(&matA, m, k, nnz_A,
                    dcsr_row_ptr_A, dcsr_col_ind_A, dcsr_val_A,
                    itype, jtype,
                    index_base, ttype);

// Create sparse matrix B in CSR format
rocsparse_create_csr_descr(&matB, k, n, nnz_B,
                    dcsr_row_ptr_B, dcsr_col_ind_B, dcsr_val_B,
                    itype, jtype,
                    index_base, ttype);

// Create sparse matrix C in CSR format
rocsparse_create_csr_descr(&matC, m, n, 0,
                    dcsr_row_ptr_C, nullptr, nullptr,
                    itype, jtype,
                    index_base, ttype);

// Create sparse matrix D in CSR format
rocsparse_create_csr_descr(&matD, 0, 0, 0,
                    nullptr, nullptr, nullptr,
                    itype, jtype,
                    index_base, ttype);

// Determine buffer size
rocsparse_spgemm(handle,
                 trans_A,
                 trans_B,
                 &alpha,
                 matA,
                 matB,
                 &beta,
                 matD,
                 matC,
                 ttype,
                 rocsparse_spgemm_alg_default,
                 rocsparse_spgemm_stage_buffer_size,
                 &buffer_size,
                 nullptr);

hipMalloc(&temp_buffer, buffer_size);

// Determine number of non-zeros in C matrix
rocsparse_spgemm(handle,
                 trans_A,
                 trans_B,
                 &alpha,
                 matA,
                 matB,
                 &beta,
                 matD,
                 matC,
                 ttype,
                 rocsparse_spgemm_alg_default,
                 rocsparse_spgemm_stage_nnz,
                 &buffer_size,
                 temp_buffer);

int64_t rows_C;
int64_t cols_C;
int64_t nnz_C;

// Extract number of non-zeros in C matrix so we can allocate the column indices and values arrays
rocsparse_spmat_get_size(matC, &rows_C, &cols_C, &nnz_C);

int* dcsr_col_ind_C;
float* dcsr_val_C;
hipMalloc((void**)&dcsr_col_ind_C, sizeof(int) * nnz_C);
hipMalloc((void**)&dcsr_val_C, sizeof(float) * nnz_C);

// Set C matrix pointers
rocsparse_csr_set_pointers(matC, dcsr_row_ptr_C, dcsr_col_ind_C, dcsr_val_C);

// SpGEMM computation
rocsparse_spgemm(handle,
                 trans_A,
                 trans_B,
                 &alpha,
                 matA,
                 matB,
                 &beta,
                 matD,
                 matC,
                 ttype,
                 rocsparse_spgemm_alg_default,
                 rocsparse_spgemm_stage_compute,
                 &buffer_size,
                 temp_buffer);

// Copy C matrix result back to host
std::vector<int> hcsr_row_ptr_C(m + 1);
std::vector<int> hcsr_col_ind_C(nnz_C);
std::vector<float>  hcsr_val_C(nnz_C);

hipMemcpy(hcsr_row_ptr_C.data(), dcsr_row_ptr_C, sizeof(int) * (m + 1), hipMemcpyDeviceToHost);
hipMemcpy(hcsr_col_ind_C.data(), dcsr_col_ind_C, sizeof(int) * nnz_C, hipMemcpyDeviceToHost);
hipMemcpy(hcsr_val_C.data(), dcsr_val_C, sizeof(float) * nnz_C, hipMemcpyDeviceToHost);

// Destroy matrix descriptors
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_spmat_descr(matB);
rocsparse_destroy_spmat_descr(matC);
rocsparse_destroy_spmat_descr(matD);
rocsparse_destroy_handle(handle);

// Free device arrays
hipFree(temp_buffer);
hipFree(dcsr_row_ptr_A);
hipFree(dcsr_col_ind_A);
hipFree(dcsr_val_A);

hipFree(dcsr_row_ptr_B);
hipFree(dcsr_col_ind_B);
hipFree(dcsr_val_B);

hipFree(dcsr_row_ptr_C);
hipFree(dcsr_col_ind_C);
hipFree(dcsr_val_C);

Note

This function does not produce deterministic results.

Note

SpGEMM requires three stages to complete. The first stage rocsparse_spgemm_stage_buffer_size will return the size of the temporary storage buffer that is required for subsequent calls to rocsparse_spgemm. The second stage rocsparse_spgemm_stage_nnz will determine the number of non-zero elements of the resulting \(C\) matrix. If the sparsity pattern of \(C\) is already known, this stage can be skipped. In the final stage rocsparse_spgemm_stage_compute, the actual computation is performed.

Note

If \(\alpha == 0\), then \(C = \beta \cdot D\) will be computed.

Note

If \(\beta == 0\), then \(C = \alpha \cdot op(A) \cdot op(B)\) will be computed.

Note

Currently only CSR and BSR formats are supported.

Note

If rocsparse_spgemm_stage_symbolic is selected, then only the symbolic computation is performed.

Note

If rocsparse_spgemm_stage_numeric is selected, then only the numeric computation is performed.

Note

For the rocsparse_spgemm_stage_symbolic and rocsparse_spgemm_stage_numeric stages, only CSR matrix format is currently supported.

Note

\(\alpha == \beta == 0\) is invalid.

Note

It is allowed to pass the same sparse matrix for \(C\) and \(D\), if both matrices have the same sparsity pattern.

Note

Currently, only trans_A == rocsparse_operation_none is supported.

Note

Currently, only trans_B == rocsparse_operation_none is supported.

Note

This function is non-blocking and executed asynchronously with respect to the host. It may return before the actual computation has finished.

Note

Note that for rare matrix products with more than 4096 non-zero entries per row, an additional temporary storage buffer is allocated by the algorithm.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • trans_A[in] sparse matrix \(A\) operation type.

  • trans_B[in] sparse matrix \(B\) operation type.

  • alpha[in] scalar \(\alpha\).

  • A[in] sparse matrix \(A\) descriptor.

  • B[in] sparse matrix \(B\) descriptor.

  • beta[in] scalar \(\beta\).

  • D[in] sparse matrix \(D\) descriptor.

  • C[out] sparse matrix \(C\) descriptor.

  • compute_type[in] floating point precision for the SpGEMM computation.

  • alg[in] SpGEMM algorithm for the SpGEMM computation.

  • stage[in] SpGEMM stage for the SpGEMM computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpGEMM operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha and beta are invalid, or the A, B, D, C or buffer_size pointer is invalid.

  • rocsparse_status_memory_error – additional buffer for long rows could not be allocated.

  • rocsparse_status_not_implemented – trans_A != rocsparse_operation_none or trans_B != rocsparse_operation_none.

rocsparse_sddmm_buffer_size()#

rocsparse_status rocsparse_sddmm_buffer_size(rocsparse_handle handle, rocsparse_operation opA, rocsparse_operation opB, const void *alpha, rocsparse_const_dnmat_descr A, rocsparse_const_dnmat_descr B, const void *beta, rocsparse_spmat_descr C, rocsparse_datatype compute_type, rocsparse_sddmm_alg alg, size_t *buffer_size)#

Sampled Dense-Dense Matrix Multiplication.

rocsparse_sddmm_buffer_size returns the size of the required buffer to execute the SDDMM operation from a given configuration. This routine is used in conjunction with rocsparse_sddmm_preprocess() and rocsparse_sddmm().

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • opA[in] dense matrix \(A\) operation type.

  • opB[in] dense matrix \(B\) operation type.

  • alpha[in] scalar \(\alpha\).

  • A[in] dense matrix \(A\) descriptor.

  • B[in] dense matrix \(B\) descriptor.

  • beta[in] scalar \(\beta\).

  • C[inout] sparse matrix \(C\) descriptor.

  • compute_type[in] floating point precision for the SDDMM computation.

  • alg[in] specification of the algorithm to use.

  • buffer_size[out] number of bytes of the temporary storage buffer.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_value – the value of opA or opB is incorrect.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha and beta are invalid, or the A, B, C or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – opA == rocsparse_operation_conjugate_transpose or opB == rocsparse_operation_conjugate_transpose.

rocsparse_sddmm_preprocess()#

rocsparse_status rocsparse_sddmm_preprocess(rocsparse_handle handle, rocsparse_operation opA, rocsparse_operation opB, const void *alpha, rocsparse_const_dnmat_descr A, rocsparse_const_dnmat_descr B, const void *beta, rocsparse_spmat_descr C, rocsparse_datatype compute_type, rocsparse_sddmm_alg alg, void *temp_buffer)#

Sampled Dense-Dense Matrix Multiplication.

rocsparse_sddmm_preprocess executes the part of the algorithm that can be computed once and reused across multiple calls to rocsparse_sddmm with the same sparsity pattern.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • opA[in] dense matrix \(A\) operation type.

  • opB[in] dense matrix \(B\) operation type.

  • alpha[in] scalar \(\alpha\).

  • A[in] dense matrix \(A\) descriptor.

  • B[in] dense matrix \(B\) descriptor.

  • beta[in] scalar \(\beta\).

  • C[inout] sparse matrix \(C\) descriptor.

  • compute_type[in] floating point precision for the SDDMM computation.

  • alg[in] specification of the algorithm to use.

  • temp_buffer[in] temporary storage buffer allocated by the user. The size must be greater or equal to the size obtained with rocsparse_sddmm_buffer_size.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_value – the value of opA or opB is incorrect.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha and beta are invalid, or the A, B, C or temp_buffer pointer is invalid.

  • rocsparse_status_not_implemented – opA == rocsparse_operation_conjugate_transpose or opB == rocsparse_operation_conjugate_transpose.

rocsparse_sddmm()#

rocsparse_status rocsparse_sddmm(rocsparse_handle handle, rocsparse_operation opA, rocsparse_operation opB, const void *alpha, rocsparse_const_dnmat_descr A, rocsparse_const_dnmat_descr B, const void *beta, rocsparse_spmat_descr C, rocsparse_datatype compute_type, rocsparse_sddmm_alg alg, void *temp_buffer)#

Sampled Dense-Dense Matrix Multiplication.

rocsparse_sddmm multiplies the scalar \(\alpha\) with the dense \(m \times k\) matrix \(op(A)\), the dense \(k \times n\) matrix \(op(B)\), filtered by the sparsity pattern of the \(m \times n\) sparse matrix \(C\) and adds the result to \(C\) scaled by \(\beta\). The final result is stored in the sparse \(m \times n\) matrix \(C\), such that

\[ C := \alpha ( op(A) \cdot op(B) ) \circ spy(C) + \beta C, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if op(A) == rocsparse_operation_none} \\ A^T, & \text{if op(A) == rocsparse_operation_transpose} \\ \end{array} \right. \end{split}\]
,
\[\begin{split} op(B) = \left\{ \begin{array}{ll} B, & \text{if op(B) == rocsparse_operation_none} \\ B^T, & \text{if op(B) == rocsparse_operation_transpose} \\ \end{array} \right. \end{split}\]
and
\[\begin{split} spy(C)_{ij} = \left\{ \begin{array}{ll} 1, & \text{if } C_{ij} \neq 0 \\ 0, & \text{otherwise} \\ \end{array} \right. \end{split}\]

Computing the above sampled dense-dense multiplication requires three steps to complete. First, the user calls rocsparse_sddmm_buffer_size to determine the size of the required temporary storage buffer. Next, the user allocates this buffer and calls rocsparse_sddmm_preprocess which performs any analysis of the input matrices that may be required. Finally, the user calls rocsparse_sddmm to complete the computation. Once all calls to rocsparse_sddmm are complete, the temporary buffer can be deallocated.
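
The sampled product can be checked against a host-side reference. This is illustrative C++ only, not rocSPARSE API code, and the function name is hypothetical; for each non-zero stored in \(C\), only the corresponding dot product of a row of \(A\) with a column of \(B\) is computed:

```cpp
#include <vector>

// Illustrative host-side reference (not rocSPARSE API code) for
// C := alpha * (A * B) o spy(C) + beta * C, with A (m x k) and B (k x n)
// stored row-major and C sampled through its CSR pattern.
void sddmm_ref(int m, int k, int n, float alpha, float beta,
               const std::vector<float>& A,       // m x k, row-major
               const std::vector<float>& B,       // k x n, row-major
               const std::vector<int>& row_ptr_C,
               const std::vector<int>& col_ind_C,
               std::vector<float>& val_C)
{
    for(int i = 0; i < m; ++i)
        for(int idx = row_ptr_C[i]; idx < row_ptr_C[i + 1]; ++idx)
        {
            int j = col_ind_C[idx];

            // Dense dot product of row i of A with column j of B, computed
            // only for the (i, j) positions stored in C.
            float dot = 0.0f;
            for(int p = 0; p < k; ++p)
                dot += A[i * k + p] * B[p * n + j];

            val_C[idx] = alpha * dot + beta * val_C[idx];
        }
}
```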

rocsparse_sddmm supports different algorithms which can provide better performance for different matrices.

Algorithm                   | Deterministic | Preprocessing | Notes
rocsparse_sddmm_alg_default | Yes           | No            | Uses the sparsity pattern of matrix C to perform a limited set of dot products
rocsparse_sddmm_alg_dense   | Yes           | No            | Explicitly converts the matrix C into a dense matrix to perform a dense matrix multiply and add

Currently, rocsparse_sddmm only supports the uniform precisions indicated in the table below. For the sparse matrix \(C\), rocsparse_sddmm supports the index types rocsparse_indextype_i32 and rocsparse_indextype_i64.

Uniform Precisions:

A / B / C / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example

This example performs a sampled dense-dense matrix product, \(C := \alpha ( A \cdot B ) \circ spy(C) + \beta C\), where \(\circ\) is the Hadamard product.

// rocSPARSE handle
rocsparse_handle handle;
rocsparse_create_handle(&handle);

float halpha = 1.0f;
float hbeta = -1.0f;

// A, B, and C are mxk, kxn, and mxn
int m = 4;
int k = 3;
int n = 2;
int nnzC = 5;

//     2  3  -1
// A = 0  2   1
//     0  0   5
//     0 -2 0.5

//      0  4
// B =  1  0
//     -2  0.5

//      1 0            1 0
// C =  2 3   spy(C) = 1 1
//      0 0            0 0
//      4 5            1 1

std::vector<float> hA = {2.0f, 3.0f, -1.0f, 0.0f, 2.0f, 1.0f, 0.0f, 0.0f, 5.0f, 0.0f, -2.0f, 0.5f};
std::vector<float> hB = {0.0f, 4.0f, 1.0f, 0.0f, -2.0f, 0.5f};

std::vector<int> hcsr_row_ptrC = {0, 1, 3, 3, 5};
std::vector<int> hcsr_col_indC = {0, 0, 1, 0, 1};
std::vector<float> hcsr_valC = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};

float* dA = nullptr;
float* dB = nullptr;
hipMalloc((void**)&dA, sizeof(float) * m * k);
hipMalloc((void**)&dB, sizeof(float) * k * n);

int* dcsr_row_ptrC = nullptr;
int* dcsr_col_indC = nullptr;
float* dcsr_valC = nullptr;
hipMalloc((void**)&dcsr_row_ptrC, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_indC, sizeof(int) * nnzC);
hipMalloc((void**)&dcsr_valC, sizeof(float) * nnzC);

hipMemcpy(dA, hA.data(), sizeof(float) * m * k, hipMemcpyHostToDevice);
hipMemcpy(dB, hB.data(), sizeof(float) * k * n, hipMemcpyHostToDevice);

hipMemcpy(dcsr_row_ptrC, hcsr_row_ptrC.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_indC, hcsr_col_indC.data(), sizeof(int) * nnzC, hipMemcpyHostToDevice);
hipMemcpy(dcsr_valC, hcsr_valC.data(), sizeof(float) * nnzC, hipMemcpyHostToDevice);

rocsparse_dnmat_descr matA;
rocsparse_create_dnmat_descr(&matA, m, k, k, dA, rocsparse_datatype_f32_r, rocsparse_order_row);

rocsparse_dnmat_descr matB;
rocsparse_create_dnmat_descr(&matB, k, n, n, dB, rocsparse_datatype_f32_r, rocsparse_order_row);

rocsparse_spmat_descr matC;
rocsparse_create_csr_descr(&matC,
                           m,
                           n,
                           nnzC,
                           dcsr_row_ptrC,
                           dcsr_col_indC,
                           dcsr_valC,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);

size_t buffer_size = 0;
rocsparse_sddmm_buffer_size(handle,
                            rocsparse_operation_none,
                            rocsparse_operation_none,
                            &halpha,
                            matA,
                            matB,
                            &hbeta,
                            matC,
                            rocsparse_datatype_f32_r,
                            rocsparse_sddmm_alg_default,
                            &buffer_size);

void* dbuffer = nullptr;
hipMalloc((void**) &dbuffer, buffer_size);

rocsparse_sddmm_preprocess(handle,
                           rocsparse_operation_none,
                           rocsparse_operation_none,
                           &halpha,
                           matA,
                           matB,
                           &hbeta,
                           matC,
                           rocsparse_datatype_f32_r,
                           rocsparse_sddmm_alg_default,
                           dbuffer);

rocsparse_sddmm(handle,
                rocsparse_operation_none,
                rocsparse_operation_none,
                &halpha,
                matA,
                matB,
                &hbeta,
                matC,
                rocsparse_datatype_f32_r,
                rocsparse_sddmm_alg_default,
                dbuffer);

hipMemcpy(hcsr_row_ptrC.data(), dcsr_row_ptrC, sizeof(int) * (m + 1), hipMemcpyDeviceToHost);
hipMemcpy(hcsr_col_indC.data(), dcsr_col_indC, sizeof(int) * nnzC, hipMemcpyDeviceToHost);
hipMemcpy(hcsr_valC.data(), dcsr_valC, sizeof(float) * nnzC, hipMemcpyDeviceToHost);

rocsparse_destroy_dnmat_descr(matA);
rocsparse_destroy_dnmat_descr(matB);
rocsparse_destroy_spmat_descr(matC);
rocsparse_destroy_handle(handle);

hipFree(dA);
hipFree(dB);
hipFree(dcsr_row_ptrC);
hipFree(dcsr_col_indC);
hipFree(dcsr_valC);
hipFree(dbuffer);

Note

The sparse matrix format currently supported is rocsparse_format_csr.

Note

opA == rocsparse_operation_conjugate_transpose is not supported.

Note

opB == rocsparse_operation_conjugate_transpose is not supported.

Note

This routine supports execution in a hipGraph context only when alg == rocsparse_sddmm_alg_default.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • opA[in] dense matrix \(A\) operation type.

  • opB[in] dense matrix \(B\) operation type.

  • alpha[in] scalar \(\alpha\).

  • A[in] dense matrix \(A\) descriptor.

  • B[in] dense matrix \(B\) descriptor.

  • beta[in] scalar \(\beta\).

  • C[inout] sparse matrix \(C\) descriptor.

  • compute_type[in] floating point precision for the SDDMM computation.

  • alg[in] specification of the algorithm to use.

  • temp_buffer[in] temporary storage buffer allocated by the user. The size must be greater than or equal to the size obtained with rocsparse_sddmm_buffer_size.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_value – the value of opA, opB, compute_type or alg is incorrect.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – the alpha or beta pointer is invalid, or the A, B, C or temp_buffer pointer is invalid.

  • rocsparse_status_not_implemented – opA == rocsparse_operation_conjugate_transpose or opB == rocsparse_operation_conjugate_transpose.

rocsparse_dense_to_sparse()#

rocsparse_status rocsparse_dense_to_sparse(rocsparse_handle handle, rocsparse_const_dnmat_descr mat_A, rocsparse_spmat_descr mat_B, rocsparse_dense_to_sparse_alg alg, size_t *buffer_size, void *temp_buffer)#

Dense matrix to sparse matrix conversion.

rocsparse_dense_to_sparse performs the conversion of a dense matrix to a sparse matrix in CSR, CSC, or COO format.

rocsparse_dense_to_sparse requires multiple steps to complete. First, the user calls rocsparse_dense_to_sparse with nullptr passed into temp_buffer:

// Call dense_to_sparse to get required buffer size
size_t buffer_size = 0;
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          &buffer_size,
                          nullptr);
After this call, buffer_size holds the size of the required buffer, which the user must then allocate. Next, the user calls rocsparse_dense_to_sparse with the newly allocated temp_buffer and nullptr passed into buffer_size:
// Call dense_to_sparse to perform analysis
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          nullptr,
                          temp_buffer);
This determines the number of non-zeros that will exist in the sparse matrix, which can be queried using the rocsparse_spmat_get_size routine. With this, the user can allocate the sparse matrix device arrays and set them on the sparse matrix descriptor using rocsparse_csr_set_pointers (for CSR format), rocsparse_csc_set_pointers (for CSC format), or rocsparse_coo_set_pointers (for COO format). Finally, the conversion is completed by calling rocsparse_dense_to_sparse with both the buffer_size and temp_buffer:
// Call dense_to_sparse to complete conversion
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          &buffer_size,
                          temp_buffer);
Currently, rocsparse_dense_to_sparse only supports the algorithm rocsparse_dense_to_sparse_alg_default. See full example below.

rocsparse_dense_to_sparse supports rocsparse_datatype_f32_r, rocsparse_datatype_f64_r, rocsparse_datatype_f32_c, and rocsparse_datatype_f64_c for values arrays in the sparse matrix (stored in CSR, CSC, or COO format) and the dense matrix. For the row/column offset and row/column index arrays of the sparse matrix, rocsparse_dense_to_sparse supports the precisions rocsparse_indextype_i32 and rocsparse_indextype_i64.
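The conversion itself can be sketched as a host-side loop, mirroring what rocsparse_dense_to_sparse produces for a column-major dense input and a CSR target. This is illustrative only; the library performs the conversion on the device, and the function name here is an assumption:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Host sketch of a column-major dense matrix -> CSR conversion.
// Every non-zero entry of row i is appended in column order, and
// csr_row_ptr[i + 1] records the running non-zero count.
void dense_to_csr_host(int m, int n, const std::vector<float>& dense, int ld,
                       std::vector<int>& csr_row_ptr,
                       std::vector<int>& csr_col_ind,
                       std::vector<float>& csr_val)
{
    csr_row_ptr.assign(m + 1, 0);
    csr_col_ind.clear();
    csr_val.clear();
    for(int i = 0; i < m; ++i)
    {
        for(int j = 0; j < n; ++j)
        {
            float v = dense[i + (size_t)j * ld];
            if(v != 0.0f)
            {
                csr_col_ind.push_back(j);
                csr_val.push_back(v);
            }
        }
        csr_row_ptr[i + 1] = (int)csr_col_ind.size();
    }
}
```

Applied to the 4x6 matrix of the example below, this yields row pointer {0, 2, 4, 7, 9} and nine stored non-zeros.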

Example
//     1 4 0 0 0 0
// A = 0 2 3 0 0 0
//     5 0 0 7 8 0
//     0 0 9 0 6 0
int m   = 4;
int n   = 6;

std::vector<float> hdense = {1, 0, 5, 0, 4, 2, 0, 0, 0, 3, 0, 9, 0, 0, 7, 0, 0, 0, 8, 6, 0, 0, 0, 0};

// Offload data to device
int* dcsr_row_ptr;
float* ddense;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&ddense, sizeof(float) * m * n);

hipMemcpy(ddense, hdense.data(), sizeof(float) * m * n, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_dnmat_descr matA;
rocsparse_spmat_descr matB;

rocsparse_indextype row_idx_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create dense matrix A
rocsparse_create_dnmat_descr(&matA, m, n, m, ddense, data_type, rocsparse_order_column);

// Create sparse matrix B
rocsparse_create_csr_descr(&matB,
                           m,
                           n,
                           0,
                           dcsr_row_ptr,
                           nullptr,
                           nullptr,
                           row_idx_type,
                           col_idx_type,
                           idx_base,
                           data_type);

// Call dense_to_sparse to get required buffer size
size_t buffer_size = 0;
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          &buffer_size,
                          nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

// Call dense_to_sparse to perform analysis
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          nullptr,
                          temp_buffer);

int64_t num_rows_tmp, num_cols_tmp, nnz;
rocsparse_spmat_get_size(matB, &num_rows_tmp, &num_cols_tmp, &nnz);

int* dcsr_col_ind;
float* dcsr_val;
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);

rocsparse_csr_set_pointers(matB, dcsr_row_ptr, dcsr_col_ind, dcsr_val);

// Call dense_to_sparse to complete conversion
rocsparse_dense_to_sparse(handle,
                          matA,
                          matB,
                          rocsparse_dense_to_sparse_alg_default,
                          &buffer_size,
                          temp_buffer);

std::vector<int> hcsr_row_ptr(m + 1, 0);
std::vector<int> hcsr_col_ind(nnz, 0);
std::vector<float> hcsr_val(nnz, 0);

// Copy result back to host
hipMemcpy(hcsr_row_ptr.data(), dcsr_row_ptr, sizeof(int) * (m + 1), hipMemcpyDeviceToHost);
hipMemcpy(hcsr_col_ind.data(), dcsr_col_ind, sizeof(int) * nnz, hipMemcpyDeviceToHost);
hipMemcpy(hcsr_val.data(), dcsr_val, sizeof(float) * nnz, hipMemcpyDeviceToHost);

// Clear rocSPARSE
rocsparse_destroy_dnmat_descr(matA);
rocsparse_destroy_spmat_descr(matB);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(ddense);

Note

When a nullptr is passed for temp_buffer, this function writes the required allocation size (in bytes) to buffer_size and returns without performing the dense to sparse operation.

Note

This function is blocking with respect to the host.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • mat_A[in] dense matrix descriptor.

  • mat_B[in] sparse matrix descriptor.

  • alg[in] algorithm for the dense to sparse computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the dense to sparse operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – mat_A, mat_B, or buffer_size pointer is invalid.

rocsparse_sparse_to_dense()#

rocsparse_status rocsparse_sparse_to_dense(rocsparse_handle handle, rocsparse_const_spmat_descr mat_A, rocsparse_dnmat_descr mat_B, rocsparse_sparse_to_dense_alg alg, size_t *buffer_size, void *temp_buffer)#

Sparse matrix to dense matrix conversion.

rocsparse_sparse_to_dense performs the conversion of a sparse matrix in CSR, CSC, or COO format to a dense matrix.

rocsparse_sparse_to_dense requires multiple steps to complete. First, the user calls rocsparse_sparse_to_dense with nullptr passed into temp_buffer:

// Call sparse_to_dense to get required buffer size
size_t buffer_size = 0;
rocsparse_sparse_to_dense(handle,
                          matA,
                          matB,
                          rocsparse_sparse_to_dense_alg_default,
                          &buffer_size,
                          nullptr);
Finally, the conversion is completed by calling rocsparse_sparse_to_dense with both the buffer_size and temp_buffer:
// Call sparse_to_dense to complete conversion
rocsparse_sparse_to_dense(handle,
                          matA,
                          matB,
                          rocsparse_sparse_to_dense_alg_default,
                          &buffer_size,
                          temp_buffer);
Currently, rocsparse_sparse_to_dense only supports the algorithm rocsparse_sparse_to_dense_alg_default. See full example below.

rocsparse_sparse_to_dense supports rocsparse_datatype_f32_r, rocsparse_datatype_f64_r, rocsparse_datatype_f32_c, and rocsparse_datatype_f64_c for values arrays in the sparse matrix (stored in CSR, CSC, or COO format) and the dense matrix. For the row/column offset and row/column index arrays of the sparse matrix, rocsparse_sparse_to_dense supports the precisions rocsparse_indextype_i32 and rocsparse_indextype_i64.
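The conversion performed here is the inverse of dense-to-sparse: scatter each stored CSR entry into a zero-initialized column-major dense buffer. The following host-side sketch is illustrative only (the library performs this on the device, and the function name is an assumption):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Host sketch of a CSR -> column-major dense conversion. Positions not
// present in the sparsity pattern are left at zero.
void csr_to_dense_host(int m, int n,
                       const std::vector<int>& csr_row_ptr,
                       const std::vector<int>& csr_col_ind,
                       const std::vector<float>& csr_val,
                       std::vector<float>& dense, int ld)
{
    dense.assign((size_t)ld * n, 0.0f);
    for(int i = 0; i < m; ++i)
    {
        for(int idx = csr_row_ptr[i]; idx < csr_row_ptr[i + 1]; ++idx)
        {
            dense[i + (size_t)csr_col_ind[idx] * ld] = csr_val[idx];
        }
    }
}
```

Applied to the CSR data of the example below, this reproduces the dense 4x6 matrix in column-major order.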

Example
//     1 4 0 0 0 0
// A = 0 2 3 0 0 0
//     5 0 0 7 8 0
//     0 0 9 0 6 0
int m   = 4;
int n   = 6;

std::vector<int> hcsr_row_ptr = {0, 2, 4, 7, 9};
std::vector<int> hcsr_col_ind = {0, 1, 1, 2, 0, 3, 4, 2, 4};
std::vector<float> hcsr_val   = {1, 4, 2, 3, 5, 7, 8, 9, 6};
std::vector<float> hdense(m * n, 0.0f);

int nnz = hcsr_row_ptr[m] - hcsr_row_ptr[0];

// Offload data to device
int* dcsr_row_ptr;
int* dcsr_col_ind;
float* dcsr_val;
float* ddense;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);
hipMalloc((void**)&ddense, sizeof(float) * m * n);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(ddense, hdense.data(), sizeof(float) * m * n, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA;
rocsparse_dnmat_descr matB;

rocsparse_indextype row_idx_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse matrix A
rocsparse_create_csr_descr(&matA,
                           m,
                           n,
                           nnz,
                           dcsr_row_ptr,
                           dcsr_col_ind,
                           dcsr_val,
                           row_idx_type,
                           col_idx_type,
                           idx_base,
                           data_type);

// Create dense matrix B
rocsparse_create_dnmat_descr(&matB, m, n, m, ddense, data_type, rocsparse_order_column);

// Call sparse_to_dense
size_t buffer_size = 0;
rocsparse_sparse_to_dense(handle,
                          matA,
                          matB,
                          rocsparse_sparse_to_dense_alg_default,
                          &buffer_size,
                          nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

rocsparse_sparse_to_dense(handle,
                          matA,
                          matB,
                          rocsparse_sparse_to_dense_alg_default,
                          &buffer_size,
                          temp_buffer);

// Copy result back to host
hipMemcpy(hdense.data(), ddense, sizeof(float) * m * n, hipMemcpyDeviceToHost);

// Clear rocSPARSE
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_dnmat_descr(matB);
rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(ddense);

Note

When a nullptr is passed for temp_buffer, this function writes the required allocation size (in bytes) to buffer_size and returns without performing the sparse to dense operation.

Note

This function is blocking with respect to the host.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • mat_A[in] sparse matrix descriptor.

  • mat_B[in] dense matrix descriptor.

  • alg[in] algorithm for the sparse to dense computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the sparse to dense operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – mat_A, mat_B, or buffer_size pointer is invalid.

rocsparse_sparse_to_sparse_buffer_size()#

rocsparse_status rocsparse_sparse_to_sparse_buffer_size(rocsparse_handle handle, rocsparse_sparse_to_sparse_descr descr, rocsparse_const_spmat_descr source, rocsparse_spmat_descr target, rocsparse_sparse_to_sparse_stage stage, size_t *buffer_size_in_bytes)#

Sparse matrix to sparse matrix conversion.

rocsparse_sparse_to_sparse_buffer_size calculates the required buffer size in bytes for a given stage stage.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • descr[in] descriptor of the sparse_to_sparse algorithm.

  • source[in] source sparse matrix descriptor.

  • target[in] target sparse matrix descriptor.

  • stage[in] stage of the sparse_to_sparse computation.

  • buffer_size_in_bytes[out] size in bytes of the buffer

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_value – if any required enumeration is invalid.

  • rocsparse_status_invalid_pointer – descr, source, target, or buffer_size_in_bytes pointer is invalid.

rocsparse_sparse_to_sparse()#

rocsparse_status rocsparse_sparse_to_sparse(rocsparse_handle handle, rocsparse_sparse_to_sparse_descr descr, rocsparse_const_spmat_descr source, rocsparse_spmat_descr target, rocsparse_sparse_to_sparse_stage stage, size_t buffer_size_in_bytes, void *buffer)#

Sparse matrix to sparse matrix conversion.

rocsparse_sparse_to_sparse performs the conversion of a sparse matrix to a sparse matrix.
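For a CSR-to-ELL conversion like the one in the example below, the underlying transformation can be sketched on the host. The ELL width is the maximum number of non-zeros in any row, and shorter rows are padded; the column-major slot layout and -1 padding index used here are assumptions made for illustration, not a statement of the library's exact storage:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Host sketch of a CSR -> ELL conversion. ell_width is the maximum row
// non-zero count; entry p of row i is stored at slot p * m + i, with
// unused slots padded (-1 column index, 0 value).
void csr_to_ell_host(int m, const std::vector<int>& csr_row_ptr,
                     const std::vector<int>& csr_col_ind,
                     const std::vector<float>& csr_val,
                     int& ell_width, std::vector<int>& ell_col_ind,
                     std::vector<float>& ell_val)
{
    ell_width = 0;
    for(int i = 0; i < m; ++i)
    {
        ell_width = std::max(ell_width, csr_row_ptr[i + 1] - csr_row_ptr[i]);
    }
    ell_col_ind.assign((size_t)m * ell_width, -1);
    ell_val.assign((size_t)m * ell_width, 0.0f);
    for(int i = 0; i < m; ++i)
    {
        int p = 0;
        for(int idx = csr_row_ptr[i]; idx < csr_row_ptr[i + 1]; ++idx, ++p)
        {
            ell_col_ind[(size_t)p * m + i] = csr_col_ind[idx];
            ell_val[(size_t)p * m + i]     = csr_val[idx];
        }
    }
}
```

This also shows why the target arrays can only be allocated after an analysis phase: ell_width, and hence the array sizes, depend on the source matrix.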

Example

This example converts a CSR matrix into an ELL matrix.

// It assumes the CSR arrays (ptr, ind, val) have already been allocated and filled.
// Build Source
rocsparse_spmat_descr source;
rocsparse_create_csr_descr(&source, M, N, nnz, ptr, ind, val, rocsparse_indextype_i32, rocsparse_indextype_i32, rocsparse_index_base_zero, rocsparse_datatype_f32_r);

// Build target
void * ell_ind, * ell_val;
int64_t ell_width = 0;
rocsparse_spmat_descr target;
rocsparse_create_ell_descr(&target, M, N, ell_ind, ell_val, ell_width, rocsparse_indextype_i32, rocsparse_index_base_zero, rocsparse_datatype_f32_r);

// Create descriptor
rocsparse_sparse_to_sparse_descr descr;
rocsparse_sparse_to_sparse_create_descr(&descr, source, target,  rocsparse_sparse_to_sparse_alg_default);

// Analysis phase
rocsparse_sparse_to_sparse_buffer_size(handle, descr, source, target, rocsparse_sparse_to_sparse_stage_analysis, &buffer_size);
hipMalloc(&buffer,buffer_size);
rocsparse_sparse_to_sparse(handle, descr, source, target, rocsparse_sparse_to_sparse_stage_analysis, buffer_size, buffer);
hipFree(buffer);

// The user is responsible for allocating the target arrays after the analysis phase.
{
    int64_t rows, cols, width;
    void* ind;
    void* val;
    rocsparse_indextype  idx_type;
    rocsparse_index_base idx_base;
    rocsparse_datatype   data_type;

    rocsparse_ell_get(target,
                      &rows,
                      &cols,
                      &ind,
                      &val,
                      &width,
                      &idx_type,
                      &idx_base,
                      &data_type);

    hipMalloc(&ell_ind, width * M * sizeof(int32_t));
    hipMalloc(&ell_val, width * M * sizeof(float));
    rocsparse_ell_set_pointers(target, ell_ind, ell_val);
}

// Calculation phase
rocsparse_sparse_to_sparse_buffer_size(handle, descr, source, target, rocsparse_sparse_to_sparse_stage_compute, &buffer_size);
hipMalloc(&buffer,buffer_size);
rocsparse_sparse_to_sparse(handle, descr, source, target, rocsparse_sparse_to_sparse_stage_compute, buffer_size, buffer);
hipFree(buffer);

Note

The buffer size buffer_size_in_bytes must be obtained from rocsparse_sparse_to_sparse_buffer_size separately for each stage, since the required buffer size can differ between stages.

Note

The format rocsparse_format_bell is not supported.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • descr[in] descriptor of the sparse_to_sparse algorithm.

  • source[in] sparse matrix descriptor.

  • target[in] sparse matrix descriptor.

  • stage[in] stage of the sparse_to_sparse computation.

  • buffer_size_in_bytes[in] size in bytes of the buffer

  • buffer[in] temporary storage buffer allocated by the user.

Return values:

rocsparse_status_success – the operation completed successfully.

rocsparse_extract_buffer_size()#

rocsparse_status rocsparse_extract_buffer_size(rocsparse_handle handle, rocsparse_extract_descr descr, rocsparse_const_spmat_descr source, rocsparse_spmat_descr target, rocsparse_extract_stage stage, size_t *buffer_size_in_bytes)#

Sparse matrix extraction.

rocsparse_extract_buffer_size calculates the required buffer size in bytes for a given stage stage. This routine is used in conjunction with rocsparse_extract_nnz and rocsparse_extract to extract a lower or upper triangular sparse matrix from an input sparse matrix. See rocsparse_extract for more details.

Note

This routine is asynchronous with respect to the host. This routine does support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • descr[in] descriptor of the extract algorithm.

  • source[in] source sparse matrix descriptor.

  • target[in] target sparse matrix descriptor.

  • stage[in] stage of the extract computation.

  • buffer_size_in_bytes[out] size in bytes of the buffer.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_value – if stage is invalid.

  • rocsparse_status_invalid_pointer – descr, source, target, or buffer_size_in_bytes pointer is invalid.

rocsparse_extract_nnz#

rocsparse_status rocsparse_extract_nnz(rocsparse_handle handle, rocsparse_extract_descr descr, int64_t *nnz)#

Sparse matrix extraction.

rocsparse_extract_nnz returns the number of non-zeros of the extracted matrix. The value is available after the analysis stage rocsparse_extract_stage_analysis has been executed. This routine is used in conjunction with rocsparse_extract_buffer_size and rocsparse_extract to extract a lower or upper triangular sparse matrix from an input sparse matrix. See rocsparse_extract for more details.

Note

This routine is asynchronous with respect to the host. This routine does support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • descr[in] descriptor of the extract algorithm.

  • nnz[out] the number of non-zeros.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – descr or nnz pointer is invalid.

rocsparse_extract()#

rocsparse_status rocsparse_extract(rocsparse_handle handle, rocsparse_extract_descr descr, rocsparse_const_spmat_descr source, rocsparse_spmat_descr target, rocsparse_extract_stage stage, size_t buffer_size_in_bytes, void *buffer)#

Sparse matrix extraction.

rocsparse_extract performs the extraction of the lower or upper part of a sparse matrix into a new matrix.

rocsparse_extract requires multiple steps to complete. First, the user creates the source and target sparse matrix descriptors. For example, in the case of CSR matrix format this might look like:

// Build Source
rocsparse_spmat_descr source;
rocsparse_create_csr_descr(&source,
                           M,
                           N,
                           nnz,
                           dsource_row_ptr,
                           dsource_col_ind,
                           dsource_val,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);

// Build target
void * dtarget_row_ptr;
hipMalloc(&dtarget_row_ptr, sizeof(int32_t) * (M + 1));
rocsparse_spmat_descr target;
rocsparse_create_csr_descr(&target,
                           M,
                           N,
                           0,
                           dtarget_row_ptr,
                           nullptr,
                           nullptr,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);
Next, the user creates the extraction descriptor and calls rocsparse_extract_buffer_size with the stage rocsparse_extract_stage_analysis in order to determine the amount of temporary storage required. The user allocates this temporary storage buffer and passes it to rocsparse_extract with the stage rocsparse_extract_stage_analysis:
// Create descriptor
rocsparse_extract_descr descr;
rocsparse_create_extract_descr(&descr,
                               source,
                               target,
                               rocsparse_extract_alg_default);

// Analysis phase
size_t buffer_size;
rocsparse_extract_buffer_size(handle,
                              descr,
                              source,
                              target,
                              rocsparse_extract_stage_analysis,
                              &buffer_size);
void* dbuffer = nullptr;
hipMalloc(&dbuffer, buffer_size);
rocsparse_extract(handle,
                  descr,
                  source,
                  target,
                  rocsparse_extract_stage_analysis,
                  buffer_size,
                  dbuffer);
hipFree(dbuffer);
The user then calls rocsparse_extract_nnz in order to determine the number of non-zeros that will exist in the target matrix. Once determined, the user can allocate the column indices and values arrays of the target sparse matrix:
int64_t target_nnz;
rocsparse_extract_nnz(handle, descr, &target_nnz);

void* dtarget_col_ind;
void* dtarget_val;
hipMalloc(&dtarget_col_ind, sizeof(int32_t) * target_nnz);
hipMalloc(&dtarget_val, sizeof(float) * target_nnz);
rocsparse_csr_set_pointers(target, dtarget_row_ptr, dtarget_col_ind, dtarget_val);
Finally, the user calls rocsparse_extract_buffer_size with the stage rocsparse_extract_stage_compute in order to determine the size of the temporary user allocated storage needed for the computation of the column indices and values in the sparse target. The user allocates this buffer and completes the conversion by calling rocsparse_extract using the rocsparse_extract_stage_compute stage:
// Calculation phase
rocsparse_extract_buffer_size(handle,
                              descr,
                              source,
                              target,
                              rocsparse_extract_stage_compute,
                              &buffer_size);
hipMalloc(&dbuffer, buffer_size);
rocsparse_extract(handle,
                  descr,
                  source,
                  target,
                  rocsparse_extract_stage_compute,
                  buffer_size,
                  dbuffer);
hipFree(dbuffer);
The target row pointer, column indices, and values arrays will now be filled with the upper or lower part of the source matrix.

The source and the target matrices must have the same format (see rocsparse_format) and the same storage mode (see rocsparse_storage_mode). The attributes of the target matrix, the fill mode rocsparse_fill_mode and the diagonal type rocsparse_diag_type, are used to parametrise the algorithm. These can be set on the target matrix using rocsparse_spmat_set_attribute. See full example below.
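The triangular extraction itself is a filter over the source sparsity pattern. The host-side sketch below mirrors what rocsparse_extract computes for a CSR source with rocsparse_fill_mode_lower and rocsparse_diag_type_non_unit (the diagonal is kept); it is illustrative only, and the function name is an assumption:

```cpp
#include <cassert>
#include <vector>

// Host sketch of extracting the lower triangular part (diagonal included)
// of a CSR matrix into a new CSR matrix.
void extract_lower_csr_host(int m, const std::vector<int>& row_ptr,
                            const std::vector<int>& col_ind,
                            const std::vector<float>& val,
                            std::vector<int>& t_row_ptr,
                            std::vector<int>& t_col_ind,
                            std::vector<float>& t_val)
{
    t_row_ptr.assign(m + 1, 0);
    t_col_ind.clear();
    t_val.clear();
    for(int i = 0; i < m; ++i)
    {
        for(int idx = row_ptr[i]; idx < row_ptr[i + 1]; ++idx)
        {
            if(col_ind[idx] <= i) // keep entries on or below the diagonal
            {
                t_col_ind.push_back(col_ind[idx]);
                t_val.push_back(val[idx]);
            }
        }
        t_row_ptr[i + 1] = (int)t_col_ind.size();
    }
}
```

As with the device routine, the number of target non-zeros is only known after scanning the source, which is why the target arrays are allocated after the analysis phase.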

Example

This example extracts the lower triangular part of a CSR matrix into a new CSR matrix.

std::vector<int> hsource_row_ptr;  // Filled with source host matrix data
std::vector<int> hsource_col_ind;  // Filled with source host matrix data
std::vector<float> hsource_val;  // Filled with source host matrix data

int* dsource_row_ptr = nullptr;
int* dsource_col_ind = nullptr;
float* dsource_val = nullptr;
hipMalloc((void**)&dsource_row_ptr, sizeof(int) * (M + 1));
hipMalloc((void**)&dsource_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dsource_val, sizeof(float) * nnz);

hipMemcpy(dsource_row_ptr, hsource_row_ptr.data(), sizeof(int) * (M + 1), hipMemcpyHostToDevice);
hipMemcpy(dsource_col_ind, hsource_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dsource_val, hsource_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);

rocsparse_handle handle;
rocsparse_create_handle(&handle);

// Build Source
rocsparse_spmat_descr source;
rocsparse_create_csr_descr(&source,
                           M,
                           N,
                           nnz,
                           dsource_row_ptr,
                           dsource_col_ind,
                           dsource_val,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);

// Build target
void * dtarget_row_ptr;
hipMalloc(&dtarget_row_ptr, sizeof(int32_t) * (M + 1));
rocsparse_spmat_descr target;
rocsparse_create_csr_descr(&target,
                           M,
                           N,
                           0,
                           dtarget_row_ptr,
                           nullptr,
                           nullptr,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);

const rocsparse_fill_mode fill_mode   = rocsparse_fill_mode_lower;
const rocsparse_diag_type diag_type   = rocsparse_diag_type_non_unit;

rocsparse_spmat_set_attribute(target, rocsparse_spmat_fill_mode, &fill_mode, sizeof(fill_mode));
rocsparse_spmat_set_attribute(target, rocsparse_spmat_diag_type, &diag_type, sizeof(diag_type));

// Create descriptor
rocsparse_extract_descr descr;
rocsparse_create_extract_descr(&descr, source, target, rocsparse_extract_alg_default);

// Analysis phase
size_t buffer_size;
rocsparse_extract_buffer_size(handle,
                              descr,
                              source,
                              target,
                              rocsparse_extract_stage_analysis,
                              &buffer_size);
void* dbuffer = nullptr;
hipMalloc(&dbuffer, buffer_size);
rocsparse_extract(handle,
                  descr,
                  source,
                  target,
                  rocsparse_extract_stage_analysis,
                  buffer_size,
                  dbuffer);
hipFree(dbuffer);

// The user is responsible to allocate target arrays after the analysis phase.
int64_t target_nnz;
rocsparse_extract_nnz(handle, descr, &target_nnz);

void* dtarget_col_ind;
void* dtarget_val;
hipMalloc(&dtarget_col_ind, sizeof(int32_t) * target_nnz);
hipMalloc(&dtarget_val, sizeof(float) * target_nnz);
rocsparse_csr_set_pointers(target, dtarget_row_ptr, dtarget_col_ind, dtarget_val);

// Calculation phase
rocsparse_extract_buffer_size(handle,
                              descr,
                              source,
                              target,
                              rocsparse_extract_stage_compute,
                              &buffer_size);
hipMalloc(&dbuffer, buffer_size);
rocsparse_extract(handle,
                  descr,
                  source,
                  target,
                  rocsparse_extract_stage_compute,
                  buffer_size,
                  dbuffer);
hipFree(dbuffer);

Note

This routine is asynchronous with respect to the host. This routine does support execution in a hipGraph context.

Note

Supported formats are rocsparse_format_csr and rocsparse_format_csc.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • descr[in] descriptor of the extract algorithm.

  • source[in] sparse matrix descriptor.

  • target[in] sparse matrix descriptor.

  • stage[in] stage of the extract computation.

  • buffer_size_in_bytes[in] size in bytes of the buffer

  • buffer[in] temporary storage buffer allocated by the user.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_value – if stage is invalid.

  • rocsparse_status_invalid_pointer – descr, source, target, or buffer pointer is invalid.

rocsparse_check_spmat#

rocsparse_status rocsparse_check_spmat(rocsparse_handle handle, rocsparse_const_spmat_descr mat, rocsparse_data_status *data_status, rocsparse_check_spmat_stage stage, size_t *buffer_size, void *temp_buffer)#

Check matrix to see if it is valid.

rocsparse_check_spmat checks if the input matrix is valid.

rocsparse_check_spmat requires two steps to complete. First, the user calls rocsparse_check_spmat with the stage parameter set to rocsparse_check_spmat_stage_buffer_size, which determines the size of the temporary buffer needed in the second step. The user allocates this buffer and calls rocsparse_check_spmat with the stage parameter set to rocsparse_check_spmat_stage_compute, which checks the input matrix for errors. Any detected errors in the input matrix are reported in the data_status (passed to the function as a host pointer).

Uniform Precisions:

A

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c
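The kind of structural check performed can be sketched on the host. For a CSR matrix declared triangular with rocsparse_fill_mode_upper, every stored column index must lie inside [0, n) and on or above the diagonal. The boolean pass/fail return below is a simplification of the library's data_status reporting, and the function name is an assumption:

```cpp
#include <cassert>
#include <vector>

// Host sketch of an upper-triangular validity check on a CSR matrix:
// reject out-of-bounds column indices and entries below the diagonal.
bool check_csr_upper_host(int m, int n, const std::vector<int>& row_ptr,
                          const std::vector<int>& col_ind)
{
    for(int i = 0; i < m; ++i)
    {
        for(int idx = row_ptr[i]; idx < row_ptr[i + 1]; ++idx)
        {
            int j = col_ind[idx];
            if(j < 0 || j >= n) return false; // index out of bounds
            if(j < i) return false;           // entry below the diagonal
        }
    }
    return true;
}
```

Run on the matrix of the example below, this check fails because row 1 stores an entry in column 0, below the diagonal.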

Example

In this example we want to check whether a matrix is upper triangular. The matrix passed to rocsparse_check_spmat is invalid because it contains an entry in the lower triangular part of the matrix.

// 1 2 0 0
// 3 0 4 0  // <-------contains a "3" in the lower part of matrix
// 0 0 1 1
// 0 0 0 2
std::vector<int> hcsr_row_ptr = {0, 2, 4, 6, 7};
std::vector<int> hcsr_col_ind = {0, 1, 0, 2, 2, 3, 3};
std::vector<float> hcsr_val = {1, 2, 3, 4, 1, 1, 2};

int M = 4;
int N = 4;
int nnz = 7;

int* dcsr_row_ptr = nullptr;
int* dcsr_col_ind = nullptr;
float* dcsr_val = nullptr;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (M + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (M + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);

rocsparse_handle handle;
rocsparse_create_handle(&handle);

rocsparse_spmat_descr matA;
rocsparse_create_csr_descr(&matA,
                           M,
                           N,
                           nnz,
                           dcsr_row_ptr,
                           dcsr_col_ind,
                           dcsr_val,
                           rocsparse_indextype_i32,
                           rocsparse_indextype_i32,
                           rocsparse_index_base_zero,
                           rocsparse_datatype_f32_r);

const rocsparse_fill_mode fill_mode = rocsparse_fill_mode_upper;
const rocsparse_matrix_type matrix_type = rocsparse_matrix_type_triangular;

rocsparse_spmat_set_attribute(matA,
                              rocsparse_spmat_fill_mode,
                              &fill_mode,
                              sizeof(fill_mode));
rocsparse_spmat_set_attribute(matA,
                              rocsparse_spmat_matrix_type,
                              &matrix_type,
                              sizeof(matrix_type));

rocsparse_data_status data_status;

size_t buffer_size;
rocsparse_check_spmat(handle, 
                      matA, 
                      &data_status, 
                      rocsparse_check_spmat_stage_buffer_size, 
                      &buffer_size, 
                      nullptr);
 
void* dbuffer = nullptr;
hipMalloc((void**)&dbuffer, buffer_size);

rocsparse_check_spmat(handle, 
                      matA, 
                      &data_status, 
                      rocsparse_check_spmat_stage_compute, 
                      &buffer_size, 
                      dbuffer);

std::cout << "data_status: " << data_status << std::endl;

rocsparse_destroy_handle(handle);
rocsparse_destroy_spmat_descr(matA);

hipFree(dbuffer);
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);

Note

When stage is equal to rocsparse_check_spmat_stage_buffer_size, this function writes the required allocation size (in bytes) to buffer_size and returns without performing the checking operation.

Note

The sparse matrix formats currently supported are: rocsparse_format_coo, rocsparse_format_csr, rocsparse_format_csc and rocsparse_format_ell.

Note

check_spmat requires two stages to complete. The first stage rocsparse_check_spmat_stage_buffer_size will return the size of the temporary storage buffer that is required for subsequent calls to rocsparse_check_spmat. In the final stage rocsparse_check_spmat_stage_compute, the actual computation is performed.

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • mat[in] matrix descriptor.

  • data_status[out] modified to indicate the status of the data

  • stage[in] check_matrix stage for the matrix computation.

  • buffer_size[out] number of bytes of the temporary storage buffer. buffer_size is set when temp_buffer is nullptr.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the checking operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – mat, buffer_size, temp_buffer or data_status pointer is invalid.

  • rocsparse_status_invalid_value – the value of stage is incorrect.

rocsparse_spitsv#

rocsparse_status rocsparse_spitsv(rocsparse_handle handle, rocsparse_int *host_nmaxiter, const void *host_tol, void *host_history, rocsparse_operation trans, const void *alpha, const rocsparse_spmat_descr mat, const rocsparse_dnvec_descr x, const rocsparse_dnvec_descr y, rocsparse_datatype compute_type, rocsparse_spitsv_alg alg, rocsparse_spitsv_stage stage, size_t *buffer_size, void *temp_buffer)#

Sparse iterative triangular solve.

rocsparse_spitsv solves a sparse triangular linear system using the Jacobi iterative method, given a sparse \(m \times m\) matrix defined in CSR format, a dense solution vector \(y\) and a right-hand side \(x\) that is multiplied by \(\alpha\), such that

\[ op(A) y = \alpha x, \]
with
\[\begin{split} op(A) = \left\{ \begin{array}{ll} A, & \text{if trans == rocsparse_operation_none} \\ A^T, & \text{if trans == rocsparse_operation_transpose} \\ A^H, & \text{if trans == rocsparse_operation_conjugate_transpose} \end{array} \right. \end{split}\]

The Jacobi method applied to the sparse triangular linear system above gives

\[ y_{k+1} = y_{k} + D^{-1} ( \alpha x - (D + T) y_{k} ) \]
with \(A = D + T\), where \(D\) is the diagonal of \(A\) and \(T\) is the strictly triangular part of \(A\).

The above equation can be also written as

\[ y_{k+1} = y_{k} + D^{-1} r_k \]
where
\[ r_k = \alpha x - (D + T) y_k. \]
Starting with \(y_0 =\) y, the method iterates while \(k <\) host_nmaxiter and until
\[ \Vert r_k \Vert_{\infty} \le \epsilon, \]
with \(\epsilon\) = host_tol.

rocsparse_spitsv requires three stages to complete. First, the user passes the rocsparse_spitsv_stage_buffer_size stage to determine the size of the required temporary storage buffer. Next, the user allocates this buffer and calls rocsparse_spitsv again with the rocsparse_spitsv_stage_preprocess stage which will preprocess data and store it in the temporary buffer. Finally, the user calls rocsparse_spitsv with the rocsparse_spitsv_stage_compute stage to perform the actual computation. Once all calls to rocsparse_spitsv are complete, the temporary buffer can be deallocated.

rocsparse_spitsv supports rocsparse_indextype_i32 and rocsparse_indextype_i64 index precisions for storing the row pointer and column indices arrays of the sparse matrix. rocsparse_spitsv supports the following data types for \(op(A)\), \(x\), \(y\) and compute types for \(\alpha\):

Uniform Precisions:

A / X / Y / compute_type

rocsparse_datatype_f32_r

rocsparse_datatype_f64_r

rocsparse_datatype_f32_c

rocsparse_datatype_f64_c

Example
//     1 0 0 0
// A = 0 2 0 0
//     5 0 3 0
//     0 0 9 4
int m   = 4;
int nnz = 6;
float halpha = 1.0f;

std::vector<int> hcsr_row_ptr = {0, 1, 2, 4, 6};
std::vector<int> hcsr_col_ind = {0, 1, 0, 2, 2, 3};
std::vector<float> hcsr_val = {1.0f, 2.0f, 5.0f, 3.0f, 9.0f, 4.0f};
std::vector<float> hx(m, 1.0f);
std::vector<float> hy(m, 1.0f);

// Offload data to device
int* dcsr_row_ptr;
int* dcsr_col_ind;
float* dcsr_val;
float* dx;
float* dy;
hipMalloc((void**)&dcsr_row_ptr, sizeof(int) * (m + 1));
hipMalloc((void**)&dcsr_col_ind, sizeof(int) * nnz);
hipMalloc((void**)&dcsr_val, sizeof(float) * nnz);
hipMalloc((void**)&dx, sizeof(float) * m);
hipMalloc((void**)&dy, sizeof(float) * m);

hipMemcpy(dcsr_row_ptr, hcsr_row_ptr.data(), sizeof(int) * (m + 1), hipMemcpyHostToDevice);
hipMemcpy(dcsr_col_ind, hcsr_col_ind.data(), sizeof(int) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dcsr_val, hcsr_val.data(), sizeof(float) * nnz, hipMemcpyHostToDevice);
hipMemcpy(dx, hx.data(), sizeof(float) * m, hipMemcpyHostToDevice);
hipMemcpy(dy, hy.data(), sizeof(float) * m, hipMemcpyHostToDevice);

rocsparse_handle     handle;
rocsparse_spmat_descr matA;
rocsparse_dnvec_descr vecX;
rocsparse_dnvec_descr vecY;

rocsparse_indextype row_ptr_type = rocsparse_indextype_i32;
rocsparse_indextype col_idx_type = rocsparse_indextype_i32;
rocsparse_datatype  data_type = rocsparse_datatype_f32_r;
rocsparse_datatype  compute_type = rocsparse_datatype_f32_r;
rocsparse_index_base idx_base = rocsparse_index_base_zero;

rocsparse_create_handle(&handle);

// Create sparse matrix A
rocsparse_create_csr_descr(
    &matA, m, m, nnz, dcsr_row_ptr, dcsr_col_ind, dcsr_val, row_ptr_type, col_idx_type, idx_base, data_type);
rocsparse_create_dnvec_descr(&vecX, m, dx, data_type);
rocsparse_create_dnvec_descr(&vecY, m, dy, data_type);

rocsparse_int host_nmaxiter[1] = {200};
float host_tol[1] = {1.0e-6f};
float host_history[200];

size_t buffer_size = 0;
rocsparse_spitsv(handle,
                &host_nmaxiter[0],
                &host_tol[0],
                &host_history[0],
                rocsparse_operation_none,
                &halpha,
                matA,
                vecX,
                vecY,
                compute_type,
                rocsparse_spitsv_alg_default,
                rocsparse_spitsv_stage_buffer_size,
                &buffer_size,
                nullptr);

void* temp_buffer;
hipMalloc((void**)&temp_buffer, buffer_size);

rocsparse_spitsv(handle,
                &host_nmaxiter[0],
                &host_tol[0],
                &host_history[0],
                rocsparse_operation_none,
                &halpha,
                matA,
                vecX,
                vecY,
                compute_type,
                rocsparse_spitsv_alg_default,
                rocsparse_spitsv_stage_preprocess,
                nullptr,
                temp_buffer);

rocsparse_spitsv(handle,
                &host_nmaxiter[0],
                &host_tol[0],
                &host_history[0],
                rocsparse_operation_none,
                &halpha,
                matA,
                vecX,
                vecY,
                compute_type,
                rocsparse_spitsv_alg_default,
                rocsparse_spitsv_stage_compute,
                &buffer_size,
                temp_buffer);

hipMemcpy(hy.data(), dy, sizeof(float) * m, hipMemcpyDeviceToHost);

// Clear rocSPARSE
rocsparse_destroy_spmat_descr(matA);
rocsparse_destroy_dnvec_descr(vecX);
rocsparse_destroy_dnvec_descr(vecY);

rocsparse_destroy_handle(handle);

// Clear device memory
hipFree(dcsr_row_ptr);
hipFree(dcsr_col_ind);
hipFree(dcsr_val);
hipFree(dx);
hipFree(dy);

Note

This routine does not support execution in a hipGraph context.

Parameters:
  • handle[in] handle to the rocsparse library context queue.

  • host_nmaxiter[inout] maximum number of iterations on input; number of performed iterations on output. If the number of performed iterations is strictly less than the input maximum, then the algorithm converged.

  • host_tol[in] if the pointer is null, the loop executes host_nmaxiter[0] iterations. The precision is float for f32-based calculations (including the complex case) and double for f64-based calculations (including the complex case).

  • host_history[out] optional array recording the norm of the residual before each iteration. The precision is float for f32-based calculations (including the complex case) and double for f64-based calculations (including the complex case).

  • trans[in] matrix operation type.

  • alpha[in] scalar \(\alpha\).

  • mat[in] matrix descriptor.

  • x[in] vector descriptor.

  • y[inout] vector descriptor.

  • compute_type[in] floating point precision for the SpITSV computation.

  • alg[in] SpITSV algorithm for the SpITSV computation.

  • stage[in] SpITSV stage for the SpITSV computation.

  • buffer_size[out] number of bytes of the temporary storage buffer.

  • temp_buffer[in] temporary storage buffer allocated by the user. When a nullptr is passed, the required allocation size (in bytes) is written to buffer_size and function returns without performing the SpITSV operation.

Return values:
  • rocsparse_status_success – the operation completed successfully.

  • rocsparse_status_invalid_handle – the library context was not initialized.

  • rocsparse_status_invalid_pointer – alpha, mat, x, y, descr or buffer_size pointer is invalid.

  • rocsparse_status_not_implemented – trans, compute_type, stage or alg is currently not supported.