Matrix Multiplication

Is there a function/subroutine in MKL that lets you specify that the product of two different matrices will be symmetric, so that the multiplication can be done faster? I am aware of DSYR2K, but that routine only helps when multiplying A*A', which always yields a symmetric matrix.

E.g., I want to compute H*Q*H', where Q is positive definite and H is an arbitrary rectangular matrix. These kinds of products arise frequently in statistics and inverse problems. I know beforehand that the output of H*Q*H' will be a symmetric matrix. Hence, when I compute HQ = H*Q and B = HQ*H', I want to tell MKL that B is symmetric so it can reduce the cost of the second multiplication.
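MKL has no flag for this, but since Q is positive definite there is a standard workaround: factor Q with Cholesky and turn the triple product into a rank-k update, which DSYRK-style routines compute while touching only one triangle of the output. A minimal NumPy sketch of the idea (the variable names are illustrative; `M @ M.T` stands in for a call such as `cblas_dsyrk`):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6
H = rng.standard_normal((m, n))           # arbitrary rectangular matrix
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)               # symmetric positive definite

# Direct computation: two general matrix products (GEMM + GEMM).
B_direct = H @ Q @ H.T

# Since Q is positive definite, factor Q = L L' (Cholesky); then
# H Q H' = (H L)(H L)', a symmetric rank-k update that a DSYRK-style
# routine computes while filling only one triangle of the result.
L = np.linalg.cholesky(Q)
M = H @ L
B_syrk = M @ M.T                          # stands in for cblas_dsyrk

assert np.allclose(B_direct, B_direct.T)  # the product is symmetric
assert np.allclose(B_direct, B_syrk)
```

In MKL terms this would be DPOTRF on Q, DTRMM for H*L, then DSYRK for the final product, saving roughly half the flops of the last GEMM.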

 

 


Hi, I have a couple of questions (sorry, they are more generic and not specifically related to MKL) regarding your statement:

>>...the product (matrix multiplication) of two different matrices would be symmetric so that matrix multiplication
>>happens faster...

I specialize in high-performance matrix multiplication algorithms for embedded platforms (Strassen heap-based complete and incomplete algorithms) and I'd like to understand:

- How would the matrix multiplication itself need to change in that case?

- How could a software subsystem evaluate in advance, especially in a real-time dynamic environment, that the product of two matrices will be symmetric?

I understand that it is difficult for a software system to anticipate that the product of two matrices will be symmetric, but doesn't the same issue apply to other routines? For example, in MKL_SCOOMM the user has to tell the routine that the left- or right-hand-side matrix is symmetric (of course it has to be square, but it need not be symmetric!) in order to get some computational benefit. In the same way, here it would be up to the user to make sure that the output will be a symmetric matrix.

In the terminology above, I can compute H*Q efficiently with MKL, but I cannot further reduce the cost of the multiplication by exploiting the fact that the output of HQ*H' will be symmetric. For small cases this is not even important; however, it becomes an important issue when dealing with matrices on the order of a million by a million in dimension, in which case such an operation would be done through ScaLAPACK.

As far as applications are concerned, see this paper (freely available online): http://www.geosci-model-dev-discuss.net/5/3325/2012/gmdd-5-3325-2012-pri... (the matrix multiplication algorithm is described for Kronecker product forms). For the reasons for dealing with these large dimensions, see: http://onlinelibrary.wiley.com/doi/10.1029/2011WR011778/abstract

Lastly: the multiplication of two sparse matrices has been raised earlier in this forum. In some cases (~70%) both HQ and H' are sparse matrices.

 

I understand that it is difficult for a software subsystem to know in advance that multiplying two matrices will produce a symmetric matrix. However, don't you think a similar situation arises with SSYMM? That routine does not know that the left- or right-hand-side input matrix is symmetric (of course it must be square), and it is up to the user to supply a symmetric matrix; otherwise the output will be incorrect.

In the case I described in my earlier post, it is likewise up to the user to make sure that the result of the multiplication will be a symmetric matrix.

A slight modification of DGEMM could fix this by computing only the upper or lower triangular part of the output. A similar modification to sparse routines such as MKL_DCSMM could address the problem for sparse matrices. I have also raised this issue about BLAS in my recently submitted paper, “Technical Note: Improving computational efficiency in large linear inverse problems: an example from carbon dioxide flux estimation” (search on Google; freely available; see Section 2.3).
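The proposed DGEMM modification can be sketched in a few lines of NumPy: a hypothetical routine (not an actual MKL function) that fills only the lower triangle of C = A*B when the caller guarantees C is symmetric, doing roughly half the multiply-adds of a full GEMM.

```python
import numpy as np

def gemm_lower(A, B):
    """Hypothetical DGEMM variant: compute C = A @ B, filling only the
    lower triangle, for the case where the caller guarantees that C is
    symmetric. Roughly half the multiply-adds of a full GEMM."""
    m = A.shape[0]
    C = np.zeros((m, m))
    for i in range(m):
        # Row i of A against only columns 0..i of B.
        C[i, : i + 1] = A[i, :] @ B[:, : i + 1]
    return C

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 7))
Q = rng.standard_normal((7, 7))
Q = Q + Q.T                             # symmetric central matrix
HQ = H @ Q
C = gemm_lower(HQ, H.T)                 # lower triangle of H Q H'
full = HQ @ H.T
assert np.allclose(C, np.tril(full))
```

The caller can then mirror the triangle if a full matrix is needed, or keep only the packed triangle to halve storage as well.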

>>...I have also raised this issue about BLAS in my recently submitted paper titled “Technical Note: Improving computational
>>efficiency in large linear inverse problems: an example from carbon dioxide flux estimation” (Search on Google; freely
>>available; see section 2.3).

Thanks, Vineet for the feedback! I'll take a look at the paper.

Hi Sergey,

I think last time I did not answer your question in the context of a real-time dynamic computing environment. I agree that there is no method to determine, in a real-time dynamic computing environment, whether the product of two arbitrary matrices will be symmetric. However, in the case of three matrices where the central matrix is symmetric and the left- and right-hand matrices are transposes of each other, this is always true. For example, if 'a' is arbitrary and 'q' is symmetric, then a*q*a' is symmetric, since (a*q*a')' = a*q'*a' = a*q*a'.

Application 

Let’s consider a simple application: computing regression coefficients, where q is a full covariance matrix and a is a matrix of independent variables. The fundamental operation for obtaining the regression coefficients involves the product a*q*a', where q is symmetric. Multiplications of this kind are quite common in statistics, econometrics, weather prediction, inverse modeling, and more!
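The regression application above can be sketched as a toy generalized-least-squares computation (the shapes and names here are illustrative, not from a specific model): the normal-equations matrix a*q*a' is guaranteed symmetric, so only one triangle of it would need to be formed.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 50, 3
a = rng.standard_normal((n_par, n_obs))   # independent variables
E = rng.standard_normal((n_obs, n_obs))
q = E @ E.T + n_obs * np.eye(n_obs)       # full covariance, symmetric PD
y = rng.standard_normal(n_obs)

# Generalized least squares: beta = (a q a')^{-1} a q y.
# G = a q a' is symmetric because q is, so a triangular-only
# multiplication would suffice for it.
aq = a @ q
G = aq @ a.T
assert np.allclose(G, G.T)                # symmetry holds by construction
beta = np.linalg.solve(G, aq @ y)
assert beta.shape == (n_par,)
```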

Many thanks for responding and initiating a healthy discussion.

 Vineet

Hi Vineet,

I did a very simple test to verify whether applying a symmetry concept (though in a different way...) improves performance.

The test is very simple:

If A is some matrix and B is a symmetric matrix, then instead of the classic matrix multiplication algorithm, a classic transpose-based algorithm can be used to calculate the product C (since B' = B, the transpose comes for free).
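The transpose-based scheme can be illustrated as follows. The inner product reads both operands row-by-row, which is the cache-friendly access pattern in a C or Fortran implementation; Python only demonstrates the algorithm itself, not the cache effect. When B is symmetric the explicit transpose can be skipped entirely.

```python
import numpy as np

def matmul_transpose_based(A, B):
    """Classic transpose-based multiplication: traverse B by rows
    instead of columns in the inner product. For a symmetric B the
    explicit transpose can be skipped, since B.T == B. (The cache
    benefit appears in C/Fortran; Python just shows the pattern.)"""
    n = A.shape[0]
    m = B.shape[1]
    Bt = B if np.allclose(B, B.T) else B.T.copy()  # one-time transpose
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = A[i, :] @ Bt[j, :]  # both operands read row-wise
    return C

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
S = rng.standard_normal((6, 6))
S = S + S.T                               # symmetric B
assert np.allclose(matmul_transpose_based(A, S), A @ S)
```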

Here are results:

For a 32x32 matrix the performance improvement was ~2.5%. Is that a good number? I think yes, because sometimes even a 0.5% speed-up makes a difference.

The situation is different when more complex algorithms, like Strassen's (recursive by design, with partitioning of the source matrices), are used, but the same concept could also be applied to all partitioned matrices at a threshold level!

Hi Sergey,

I completely agree with you that a ~2.5% gain in performance is important, especially since in my applications the matrices are sometimes huge, reaching ~million-by-million dimensions, in which case out-of-core operations become a necessity. I think the concept of symmetry can be useful for improving the performance of Strassen's algorithm, and theoretically it is easy to show this performance gain. However, given the recursive nature of the algorithm, designing an efficient implementation might be tricky.

For performance reasons in our applications, for Toeplitz matrices we have now started using the discrete Fourier transform, where only O(n^2 log n) operations are needed to obtain the result of a matrix multiplication.
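The FFT trick for Toeplitz matrices can be sketched like this: embed the Toeplitz matrix in a circulant matrix of twice the size, whose action on a vector is diagonalized by the FFT. Each matrix-vector product then costs O(n log n), so multiplying by a full n-column matrix costs O(n^2 log n) instead of O(n^3). (This sketch is a generic illustration of the technique, not the paper's code.)

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by a
    vector in O(n log n) via circulant embedding: the circulant's
    action is a circular convolution, computed with the FFT."""
    n = len(c)
    # First column of the 2n-sized circulant embedding.
    emb = np.concatenate([c, [0.0], r[:0:-1]])
    pad = np.concatenate([x, np.zeros(len(emb) - n)])
    y = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(pad)).real
    return y[:n]

# Check against an explicitly built Toeplitz matrix.
n = 6
c = np.arange(1.0, n + 1)                                    # first column
r = np.concatenate([[c[0]], np.arange(10.0, 10.0 + n - 1)])  # first row
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = c[i - j] if i >= j else r[j - i]
x = np.linspace(0.0, 1.0, n)
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```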

Where absolute accuracy is not important, we have also started using hierarchical matrices for matrix multiplication, which again requires O(n^2 log n) operations.

Both of these methods can only be used in specific circumstances, but they are still extremely useful.

(I think these algorithms are not available in MKL, as they can only be used in very specific circumstances.)

Vineet

>>Technical Note: Improving computational efficiency in large linear inverse problems: an example from carbon
>>dioxide flux estimation.
>>
>>Abstract. Addressing a variety of questions within Earth science disciplines entails the inference
>>of the spatio-temporal distribution of parameters of interest based on observations of related
>>quantities. Such estimation problems often represent inverse problems that are formulated as
>>linear optimization problems. Computational limitations arise when the number of observations and/or
>>the size of the discretized state space become large, especially if the inverse problem is
>>formulated in a probabilistic framework and therefore aims to assess the uncertainty associated
>>with the estimates. This work proposes two approaches to lower the computational costs and memory
>>requirements for large linear space-time inverse problems, taking the Bayesian approach for
>>estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example.
>>The first algorithm can be used to efficiently multiply two matrices, as long as one can be
>>expressed as a Kronecker product of two smaller matrices, a condition that is typical when
>>multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems.
>>The second algorithm can be used to compute a posteriori uncertainties directly at aggregated
>>spatio-temporal scales, which are the scales of most interest in many inverse problems. Both
>>algorithms have significantly lower memory requirements and computational complexity relative to
>>direct computation of the same quantities (O(n^2.5) vs. O(n^3)). For an examined benchmark problem,
>>the two algorithms yielded a three and six order of magnitude increase in computational efficiency,
>>respectively, relative to direct computation of the same quantities. Sample computer code is
>>provided for assessing the computational and memory efficiency of the proposed algorithms for
>>matrices of different dimensions.

Hi Vineet,

I've read the abstract and downloaded 3 docs related to your R&D (I will spend some time reading them next week...), and I wonder if I could ask you a set of questions. Would it make sense to use the Private Messaging feature of the IDZ website, since they won't be related to MKL?

Best regards,
Sergey

Hi Vineet,

I have a couple of questions regarding the application of your algorithm. I've sent you a private message with more details. Please respond as soon as you have time.

Best regards,
Sergey
