Working with Triangular Matrices


In my code, I multiply a square matrix of doubles by a vector whose length equals the matrix dimension. I only need the lower triangle of the matrix, including the main diagonal (which is sometimes all ones, sometimes not).

So far, I have used the general-matrix MKL routine dgemv to perform the full matrix-vector multiplication, zeroing out the upper triangle of the matrix so those elements have no effect on the result vector.

Behind the scenes, what's the difference between using dgemv as described above and using dtrmv for this operation? Is dtrmv faster? Without using parallelism, which MKL routine is the fastest for my operation?

Thanks.


Hello,

dtrmv reads only the upper or lower triangle of the matrix, so it does not require the other triangle to be zero. In your case you can keep zeroing the upper triangle and using dgemv, but note that dtrmv has different calling semantics (for example, it overwrites the input vector in place with the product), so switching to it may change how you get the result.

Thanks,
Chao

I ran an experiment using dgemv on a square matrix with the upper triangle all zeroed out, and then the same operation with dtrmv, which appeared to be slightly faster. I was wondering whether dtrmv multiplies fewer matrix elements than dgemv (which multiplies every element). It appears to do so.

Hello,

yes, dtrmv multiplies fewer elements, because it assumes the matrix is triangular and touches only the triangle you specify, while dgemv uses the full matrix for the multiplication. But have you checked whether the result is correct for your case?

Thanks,
Chao

Thanks for the background. In my experiment, the results were the same for both methods. I am not using the parallel libraries for matrix multiplication, thus there shouldn't be any discrepancy due to that.

Thanks for the responses.

Blake
