This may be purely case-specific, but what I am dealing with is code that was originally written in Fortran 77. It uses double precision arrays to store a variety of information about a specific variable, and uses IDINT to extract integers from them. This design unfortunately requires a lot of unnecessary loops to find location indices.
    DOUBLE PRECISION,DIMENSION(100):: A    !CONTAINS ID INFORMATION; IT IS A LITTLE MORE COMPLEX, BUT SIMPLIFIED FOR THIS EXAMPLE
    DOUBLE PRECISION,DIMENSION(8,5000):: B !ROW INDEX HOLDS INFORMATION (8 ROWS ARE USED BELOW)

    !MEANING OF THE ROW INDEX OF B FOR THE Jth ITEM:
    ROW=IDINT(B(1,J))
    COL=IDINT(B(2,J))
    LAY=IDINT(B(3,J))
    ID1=IDINT(B(4,J)) !LINKING ID TO OTHER PARTS OF CODE
    ID2=IDINT(B(5,J)) !LINKING ID TO OTHER PARTS OF CODE
    VAL1=B(6,J)
    VAL2=B(7,J)
    VAL3=B(8,J)

    !THERE ARE A LOT OF SEARCHES THAT MATCH THE ID LOCATION OF A WITH B
    !TO PULL VAL1, VAL2, AND VAL3 INTO OTHER PARTS OF THE CODE
    DO I=1,100
      IF (IDINT(A(I))==IDINT(B(4,I))) THEN
        !...USE VAL1, VAL2, AND VAL3 DEPENDING ON LOCATION IN CODE
      END IF
    END DO
What I am curious about is: is there a performance penalty for going with derived data types compared to static arrays?
Something I like to do is the following:
    TYPE BB
      INTEGER:: R,C,L,WID,FID          !(EQUIVALENT TO B(1:5,:))
      DOUBLE PRECISION:: Q,QMAX,QOLD   !(EQUIVALENT TO B(6:8,:))
    END TYPE
    TYPE(BB),DIMENSION(:),ALLOCATABLE:: B
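With that layout, the ID search from the first snippet could then be written without any IDINT conversions. A minimal sketch, assuming the IDs from A are moved into an INTEGER array (here called IA, a name I made up), with the fill-in code and the body of the IF block left as placeholders:

```fortran
PROGRAM SEARCH_DEMO
  IMPLICIT NONE
  TYPE BB
    INTEGER :: R, C, L, WID, FID
    DOUBLE PRECISION :: Q, QMAX, QOLD
  END TYPE
  INTEGER, DIMENSION(100) :: IA            ! IDS FORMERLY HELD IN A AS DOUBLE PRECISION
  TYPE(BB), DIMENSION(:), ALLOCATABLE :: B
  INTEGER :: I, J

  ALLOCATE(B(5000))
  ! ... FILL IA AND B HERE ...

  DO I = 1, 100
    DO J = 1, SIZE(B)
      IF (IA(I) == B(J)%WID) THEN
        ! USE B(J)%Q, B(J)%QMAX, B(J)%QOLD DIRECTLY:
        ! NO IDINT CALLS AND NO VAL1/VAL2/VAL3 COPIES NEEDED
      END IF
    END DO
  END DO
END PROGRAM SEARCH_DEMO
```

The comparison is now integer-to-integer, so the double-to-integer conversion cost disappears along with the temporary VAL copies.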
If I kept everything the same, would there be any performance hit by switching to derived data types?
I even like to go as far as building a derived data type composed of other derived data types, such as:
    TYPE AA
      TYPE(BB),POINTER,DIMENSION(:):: B
      TYPE(CC),POINTER,DIMENSION(:):: C
      TYPE(DD),POINTER,DIMENSION(:):: D
    END TYPE
    TYPE(AA),DIMENSION(100):: A

    DO I=1,100
      N= XXX !N would be only the size of the part of the original B array
             !that was associated with A(I); that is, I in A(I) would
             !replace ID1 in the original B.
      ALLOCATE(A(I)%B(N))
    END DO
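The payoff of the nested layout is that the ID search disappears entirely: everything belonging to A(I) is reached by direct indexing. A sketch, where USE_VALUES is a hypothetical stand-in for whatever the surrounding code does with the three values:

```fortran
DO I = 1, 100
  DO J = 1, SIZE(A(I)%B)
    ! EVERY A(I)%B(J) BELONGS TO A(I) BY CONSTRUCTION, SO THE
    ! ID-MATCHING SEARCH FROM THE ORIGINAL CODE IS NO LONGER NEEDED
    CALL USE_VALUES(A(I)%B(J)%Q, A(I)%B(J)%QMAX, A(I)%B(J)%QOLD)
  END DO
END DO
```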
I know that there could be a lot more going on speed-wise, but it would be interesting to hear general opinions on derived data types in terms of overall code speed. Speed is a major issue in this code, since it is part of a large simulation program, and pieces of B and A are pulled to help assemble the system matrices that are numerically solved. I think the hit I would take for using derived data types would be overcome by the removal of unnecessary looping.
One last question: would it be better to just allocate B once at a maximum size, or, if the size N for a specific A(I) changes, DEALLOCATE/ALLOCATE B so that it is always exactly the correct size?
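To make the question concrete, the two strategies I am weighing look roughly like this (NMAX and NUSED are hypothetical names; note that with POINTER components the status check is ASSOCIATED, while with ALLOCATABLE components it would be ALLOCATED):

```fortran
! STRATEGY 1: ALLOCATE ONCE AT A WORST-CASE SIZE AND TRACK THE USED LENGTH
ALLOCATE(A(I)%B(NMAX))      ! NMAX = MAXIMUM N OVER ALL A(I)
NUSED = N                   ! ONLY A(I)%B(1:NUSED) IS MEANINGFUL

! STRATEGY 2: RESIZE WHENEVER N CHANGES, SO SIZE(A(I)%B) IS ALWAYS EXACT
IF (ASSOCIATED(A(I)%B)) DEALLOCATE(A(I)%B)
ALLOCATE(A(I)%B(N))
```

Strategy 1 trades memory for fewer allocations but needs a separate count of the used length; strategy 2 keeps SIZE() usable directly as the loop bound.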
Thanks for all your inputs.