vectorization of nested loops.

Hey everyone, I have the following set of nested loops which I would like to vectorize. If you're familiar with molecular simulations, this is a linked-cell energy calculation. I checked the requirements for Intel vectorization, and I believe I satisfy them: there are no multiple branch points, and all array accesses, except for envar in the innermost loop, are unit stride. I believe the if statements in these loops can be interpreted as mask constructs. I keep getting the error "loop was not vectorized: not inner loop", hence the addition of the !dir$ omp simd collapse(4). When I compile with -vec-report2 -guide -openmp, the compiler still says "loop was not vectorized: not inner loop", despite the collapse clause. I would like to port this to the Xeon Phi coprocessor, so any help with the vectorization here would be greatly appreciated. I understand this is a fair amount of code, but I have tried to add many comments for explanation.

 !$omp parallel do schedule(dynamic) reduction(+:energy) default(private) firstprivate(listvar), &
 !$omp& shared(r,tr,cv,param,envar,alpha,GaussChar,pf)

    !dir$ omp simd collapse(4)
    !dir$ vector aligned
    !--- loop over all cells present
    do i = 0,listvar%ncellT-1

       !--- these are pointers (indices) into array r, which contains the particle positions along with the particle type.
       !--- c1s is the location in r where the first atom in cell i is stored; c1e is where the last atom is stored
       c1s   = tr(i)%start
       c1e   = tr(i)%end

       !--- loop over cell neighbors
       do k=1,26

          !--- determine this cell
          cell_neigh = cv(k,i)%cnum

          !--- only calculate the energy between particle 1 and particle 2 once.
          !--- This is done by only calculating energies between cell pairs (i,cell_neigh) where cell_neigh is greater than i
          if(cell_neigh.gt.i)then

             !--- minimum image criterion for the distance calculation
             minx = cv(k,i)%min_x
             miny = cv(k,i)%min_y
             minz = cv(k,i)%min_z

             !--- pointers into array r telling where the atoms for cell cell_neigh are contained
             c2s = tr(cell_neigh)%start
             c2e = tr(cell_neigh)%end

             !--- loop over particles in each cell
             do j = c1s,c1e

                x1 = r(j)%x
                y1 = r(j)%y
                z1 = r(j)%z
                type1 = r(j)%type

                do l = c2s,c2e

                   x2 = r(l)%x
                   y2 = r(l)%y
                   z2 = r(l)%z
                   type2 = r(l)%type

                   !--- obtain displacements
                   !--- apply minimum image here
                   dx = x2-x1-minx
                   dy = y2-y1-miny
                   dz = z2-z1-minz

                   dr2 = dx*dx+dy*dy+dz*dz

                   !--- if the distance is less than the cutoff, calculate the energy
                   if(dr2.lt.param%rcut2)then

                      !--- obtain interaction parameters for atoms of type1 and type2
                      val(:) = envar(type1,type2)%val(:)

                      dr    = sqrt(dr2)
                      dri   = 1.0d0/dr
                      dr2i  = val(1)/dr2
                      dr6i  = dr2i*dr2i*dr2i
                      dr12i = dr6i*dr6i

                      !--- lennard jones energy
                      energy = energy + 4.0d0*val(2)*(dr12i - dr6i)

                      !--- electrostatic energy
                      !energy = energy + pf*charge*(erfc(alpha*dr)-GaussChar)

                   endif
                enddo
             enddo
          endif
       enddo
    enddo
    !$omp end parallel do



If the inner loop isn't vectorizable, it seems unlikely that forcing collapse could improve the situation (if the situation is remotely eligible for collapse, which doesn't appear to be your case).

Where you have an outer loop which is vectorizable, while inner ones aren't, you can encourage the compiler to vectorize it by placing !$omp simd on that loop.  If the loop is to be both parallel and simd, that's !$omp parallel do simd.  You can't use alignment assertions in such a loop unless you can assure that the loop count is an integral multiple of vector width times number of threads.

Can you tell me how to specifically do that? All the examples I find are in C++. When I type

!$omp parallel do simd schedule(dynamic) reduction(+:energy) default(private) firstprivate(listvar), &
!$omp& shared(r,tr,cv,param,envar,alpha,GaussChar,pf)

it tells me there is a syntax error at the simd. Also, I don't know if I completely followed you when you said the loop is to be both parallel and simd. I would like the code to vectorize, since I would like to put it on a MIC (hence SIMD). To put it on the MIC, it needs to be parallelized. Is that what you mean by both parallel and simd?

 

Never mind. For some reason the SIMD directive doesn't work on my desktop, but it works fine on the supercomputer I am running the code on. However, the compiler is now throwing the following error: "The OpenMP SIMD directives require perfectly nested loops: no code may appear before or after the loops being collapsed." I have commented out everything in the subroutine except the code I posted here. What code is it referring to that prevents the vectorization?

If you are still trying collapse in the manner you suggested above, the complaint is expected, as you have code between the levels of DO loop. 

I haven't seen rules documented for parallel do simd outer loops, assuming you remove the collapse.  The compiler may require similar rules to collapse even though you don't specify collapse.

Ok, can you tell me when I would want to collapse a loop? I feel like I do not want to collapse my loops here: if I move all the code into the innermost loop, it will require accessing the same array elements more than once. With the collapse clause present and my code arranged as shown below, I still get the following. I might be in over my head here.

loop was not vectorized: unsupported loop structure. (This refers to the line with the OMP directive.)

loop was not vectorized: existence of vector dependence. (This refers to the inner loop with index l.)

loop was not vectorized: not inner loop. (This refers to the loop with index j.)

loop was not vectorized: not inner loop. (This refers to the loop with index k.)

loop was not vectorized: top test could not be found. (This refers to the line with the outer loop with index i.)

 

 !$omp parallel do simd collapse(1) schedule(dynamic) reduction(+:energy) default(private) firstprivate(listvar), &
 !$omp& shared(r,tr,cv,param,envar,alpha,GaussChar,pf)

    !--- loop over all cells
    do i = 0,ncellT

       !--- loop over cell neighbors
       do k=1,26
          c1s   = tr(i)%start
          c1e   = tr(i)%end

          !--- determine this cell
          cell_neigh = cv(k,i)%cnum

          if(cell_neigh.gt.i)then

             !--- minimum image
             minx = cv(k,i)%min_x
             miny = cv(k,i)%min_y
             minz = cv(k,i)%min_z

             c2s = tr(cell_neigh)%start
             c2e = tr(cell_neigh)%end

             !--- loop over particles in each cell
             do j = c1s,c1e
                x1 = r(j)%x
                y1 = r(j)%y
                z1 = r(j)%z
                type1 = r(j)%type

                do l = c2s,c2e

                   x2 = r(l)%x
                   y2 = r(l)%y
                   z2 = r(l)%z
                   type2 = r(l)%type

                   !--- obtain displacements
                   !--- apply minimum image here
                   dx = x2-x1-minx
                   dy = y2-y1-miny
                   dz = z2-z1-minz

                   dr2 = dx*dx+dy*dy+dz*dz

                   if(dr2.lt.param%rcut2)then

                      val(:) = envar(type1,type2)%val(:)

                      dr    = sqrt(dr2)
                      dri   = 1.0d0/dr
                      dr2i  = val(1)/dr2
                      dr6i  = dr2i*dr2i*dr2i
                      dr12i = dr6i*dr6i

                      !--- lennard jones energy
                      energy = energy + 4.0d0*val(2)*(dr12i - dr6i)

                      !--- electrostatic energy
                      !energy = energy + pf*charge*(erfc(alpha*dr)-GaussChar)

                   endif
                enddo
             enddo
          endif
       enddo
    enddo
    !$omp end parallel do simd

If collapse(2) worked, it could help by increasing parallel chunk size, in case ncellT were not large (e.g. 2000 or so for MIC).  If outer loop vectorization were successful, proportionately larger chunks would be useful.

With the large conditional block inside, it may be difficult for the compiler to accomplish vectorization.  If the conditionals are what is preventing vectorization (rather than sequential dependencies in inner loops), collapse or application of simd on outer loop aren't so likely to be useful.

If you are able to accomplish vectorized sum reduction in the inner loop, it could be helpful to have separately named inner loop sums which in turn are summed in the parallel reduction variable in the outer loop.
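
As a rough sketch of that idea (esum is a hypothetical private partial sum; everything else is unchanged from the loop nest above, with the unused sqrt dropped):

        !--- inside the (i,k) cell-pair block, accumulate into a private partial sum
        esum = 0.0d0
        do j = c1s,c1e
           x1 = r(j)%x; y1 = r(j)%y; z1 = r(j)%z; type1 = r(j)%type
           do l = c2s,c2e
              dx = r(l)%x - x1 - minx
              dy = r(l)%y - y1 - miny
              dz = r(l)%z - z1 - minz
              dr2 = dx*dx + dy*dy + dz*dz
              if(dr2.lt.param%rcut2)then
                 val(:) = envar(type1,r(l)%type)%val(:)
                 dr2i  = val(1)/dr2
                 dr6i  = dr2i*dr2i*dr2i
                 dr12i = dr6i*dr6i
                 esum  = esum + 4.0d0*val(2)*(dr12i - dr6i)
              endif
           enddo
        enddo
        !--- fold the inner-loop sum into the OpenMP reduction variable once per cell pair
        energy = energy + esum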

So to check whether it was the if statements that were giving me an issue, I commented them out and reran vec-report. The same errors were obtained, so I don't believe it is the if statements.

Do you think I may benefit from moving the OMP parallel region to the inner two loops? Ideally I would like to run this whole code on the MICs: I could run the outer two loops serially on the MIC and the inner two loops in parallel on the MIC. I am concerned about the computational load here, though; if I were to parallelize the inner two loops, I don't think either loop count is larger than the number of threads.

"parallel outer-vector inner"

At issue here is that your data layout is AoS (array of structures), which does not lend itself to vectorization.

What you want to do is change to an SoA (structure of arrays) layout.

You currently have an array of particles that constitutes your collection of particles.

What you should do instead is organize your data so that the collection holds one array per attribute/dimension.

This way you can manipulate the collection en masse.

Change data layout from:

do j = c1s,c1e
  x1 = r(j)%x
  y1 = r(j)%y
  z1 = r(j)%z
  type1 = r(j)%type
  do l= c2s,c2e
    x2 = r(l)%x
    y2 = r(l)%y
    z2 = r(l)%z
    type2 = r(l)%type

to:

do j = c1s,c1e
  x1 = r%x(j)
  y1 = r%y(j)
  z1 = r%z(j)
  type1 = r%type(j)
  do i= c2s,c2e
    x2 = r%x(i)
    y2 = r%y(i)
    z2 = r%z(i)
    type2 = r%type(i)
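
For reference, a minimal sketch of the SoA container this access pattern assumes (the type name and array sizes are placeholders; the component names mirror the fields used above):

type particle_soa
   !--- one array per attribute instead of one structure per particle
   double precision :: x(60000), y(60000), z(60000)
   integer          :: type(60000)
end type particle_soa

type(particle_soa) :: r    ! then r%x(j), r%y(j), r%z(j), r%type(j) are unit stride in j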

If thine eye offends thee, then add import/export functions to/from the collection that uses a particle object.

Jim Dempsey

www.quickthreadprogramming.com

Thank you very much for your response! I have written a short snippet version of the code I posted to investigate the differences in speed between AoS, SoA, and plain arrays. However, I can't get that to vectorize. Should I post a new thread since the code is different, or is it ok for me to post here?

Post it here since it involves your problem, and hopefully your solution.

Jim Dempsey

www.quickthreadprogramming.com

Ok, so I took your advice and changed the data layout. I switched to both conventional arrays and a structure-of-arrays layout, and wrote the following code to analyze what's going on. The inner two loops, when written on their own, vectorize as "PERMUTED LOOP WAS VECTORIZED." However, both fail to vectorize when nested as the inner two loops of the full nest. The output of -vec-report2 -guide is shown below the code. Since this doesn't seem to be a data-layout issue now, do I possibly have some kind of flow dependence issue with c1s, c1e, c2s, c2e?

program vec

  use ifport
  implicit none

  type r
     double precision :: x(60000),y(60000),z(60000)
  end type r

  type(r) :: part

  !--- conventional array
  double precision :: rnew(3,60000)
  integer :: start(1000), end(1000)



  !--- various variables
  double precision :: energy
  double precision :: x1,y1,z1,x2,y2,z2,dr2
  integer :: i,j,k,l,count
  integer :: c1s,c1e,c2s,c2e





  call srand(10)
  !--- don't care about this; it just initializes random positions
  do i = 1,60000
     rnew(1,i) = 10*rand(); rnew(2,i) = 10*rand(); rnew(3,i) = 10*rand()
     part%x(i) = rnew(1,i); part%y(i) = rnew(2,i); part%z(i) = rnew(3,i)
  end do

  !--- don't care about this either; it just assigns the start/end index ranges into rnew and part
  count = 0
  do i = 1,1000
     start(i) = count*60+1
     end(i)   = (count+1)*60

     count = count + 1
  enddo
  
  
  energy =0.0d0


 
 !---main code I would like to port to MIC
  do i= 1,1000          

     do j = 1,1000

        c1s = start(i)
        c1e = end(i)
        if(j.gt.i) then

           c2s = start(j)
           c2e = end(j) 

           do l=c2s,c2e 
                 
              do k = c1s,c1e

                 x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k)               
                 x2 = rnew(1,l); y1 = rnew(2,l); z2 = rnew(3,l)

                 dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2

                 if(dr2.lt.2)then
                    energy = energy + 1
                 endif
              enddo
           enddo

	endif
     enddo
  enddo

 !---main code I would like to port to MIC, in structure-of-arrays format
  do i= 1,1000          

     do j = 1,1000

        c1s = start(i)
        c1e = end(i)
        if(j.gt.i) then


           c2s = start(j)
           c2e = end(j) 
           do l=c2s,c2e 
                 
              do k = c1s,c1e

                 x1 = part%x(k); y1 = part%y(k); z1 = part%z(k)                
                 x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)

                 dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2

                 if(dr2.lt.2)then
                    energy = energy + 1
                 endif
              enddo
           enddo

	endif
     enddo
  enddo


  !--- analyze inner two loops
  c1s = start(1); c1e = end(1); c2s = start(2); c2e = end(2)   !--- fixed cell indices; i and j are out of range after the loops above

  do k=c1s,c1e 
     do l = c2s,c2e

        x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k)                         
        x2 = rnew(1,l); y1 = rnew(2,l); z2 = rnew(3,l)
        
        dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2
        
        if(dr2.lt.2)then
           energy = energy + 1
        endif
     enddo
  enddo

 !--- analyze inner two loops
  c1s = start(1); c1e = end(1); c2s = start(2); c2e = end(2)   !--- fixed cell indices; i and j are out of range after the loops above

  do k=c1s,c1e 
     do l = c2s,c2e
        
        x1 = part%x(k); y1 = part%y(k); z1 = part%z(k)                        
        x2 = part%x(l); y1 = part%y(l); z2 = part%z(l)
        
        dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2
        
        if(dr2.lt.2)then
           energy = energy + 1
        endif
     enddo
  enddo
     
  stop
end program vec

compile: ifort -vec-report2 -guide vec.f90 -O3

vec.f90(50) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(52) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(61) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(83) (col.3) remark: LOOP WAS VECTORIZED

vec.f90(83) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(92) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(94) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(116) (col.3) remark: PERMUTED LOOP WAS VECTORIZED

vec.f90(133) (col.3) remark: loop was not vectorized: not inner loop.

vec.f90(132) (col.3) remark: PERMUTED LOOP WAS VECTORIZED

It may make no difference, but in a couple of places you have something like this 

  do i= 1,1000         
     do j = 1,1000
        c1s = start(i)
        c1e = end(i)
        
        if(j.gt.i) then
          // Compute i, j interaction
        endif
    end do
  end do

Surely it'd be simpler to write it as      

     do i= 1,1000  
       do j = i+1,1000
        c1s = start(i)
        c1e = end(i)
        
         // Compute i,j interaction
   
    end do
  end do

Or, does that break the vectorization?

(I'm assuming that you're relying on the compiler to hoist the loop invariant loads of c1s,c1e out of the j loop, so I haven't done that)

Sure. This is just my test code. My actual code is a little more complicated and requires the if statement. Hence, I am trying to see if I can even get the compiler to vectorize this simpler scenario.

I think the following is interesting. In the above code, when I wrote the inner two loops by themselves I got the message PERMUTED LOOP WAS VECTORIZED. To see if I could get them to vectorize inside a subroutine, and then call that subroutine, I added the subroutine below. However, now it's telling me the inner loop cannot be vectorized due to a vector dependence, and that the outer loop cannot be vectorized: not inner loop. I can't figure out where I have a vector dependence, or why simd isn't overriding it.

subroutine cell_pair_kernel(i,j,energy,rnew,start,end)
  implicit none

  integer :: i, j
  double precision :: energy
  double precision :: rnew(3,60000)
  integer :: start(1000), end(1000)

  double precision :: x1,y1,z1,x2,y2,z2,dr2
  integer :: k,l,c1s,c1e,c2s,c2e

  c1s = start(i); c1e = end(i)
  c2s = start(j); c2e = end(j)

  !dir$ simd collapse(2)
  do k=c1s,c1e
     do l = c2s,c2e

        x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k)
        x2 = rnew(1,l); y2 = rnew(2,l); z2 = rnew(3,l)

        dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2

        if(dr2.lt.2)then
           energy = energy + 1
        endif
     enddo
  enddo

end subroutine cell_pair_kernel

 

Using VTune (principally to see the vectorization) on the host, we find:

0x1400016b0     Block 44:   0.0% 
0x1400016b0 97  lea r12, ptr [r14*8]                                  10.075ms  10.075ms 0.0% 0ms
0x1400016b8 95  add r14, 0x4                                           9.985ms  9.985ms  0.0% 0ms
0x1400016bc 97 lea r15, ptr [r12+rdx*8]                               50.080ms  50.080ms 0.0% 0ms
0x1400016c0 100 vsubpd ymm12, ymm8, ymmword ptr [rdi+r15*1+0x165a98]                     0.0% 
0x1400016ca 100 vsubpd ymm14, ymm9, ymmword ptr [rdi+r15*1+0x1dad98]  10.001ms  10.001ms 0.0% 0ms
0x1400016d4 100 vsubpd ymm15, ymm13, ymmword ptr [rdi+r15*1+0x250098] 59.965ms  59.965ms 0.0% 0ms
0x1400016de 100 vmulpd ymm12, ymm12, ymm12                            30.014ms  30.014ms 0.0% 0ms
0x1400016e3 100 vmulpd ymm14, ymm14, ymm14                           120.036ms 120.036ms 0.0% 0ms
0x1400016e8 100 vmulpd ymm15, ymm15, ymm15                             0.0% 
0x1400016ed 100 vaddpd ymm12, ymm12, ymm14                            10.018ms  10.018ms 0.0% 0ms
0x1400016f2 100 vaddpd ymm12, ymm12, ymm15                             9.999ms   9.999ms 0.0% 0ms
0x1400016f7 102 vcmppd ymm14, ymm12, ymm1, 0x1                       100.023ms 100.023ms 0.0% 0ms
0x1400016fc 103 vandpd ymm15, ymm0, ymm14                            119.941ms 119.941ms 0.0% 0ms
0x140001701 103 vaddpd ymm3, ymm15, ymm3                              60.015ms  60.015ms 0.0% 0ms
0x140001705 95  cmp r14, rax                                         289.851ms 289.851ms 0.0% 0ms
0x140001708 95  jb 0x1400016b0 <Block 44>                              0.0% 

This illustrates complete vectorization of the inner loop beginning at line 94 of your SOA format.

I would expect the Xeon Phi code generation to be similar (though using wider vectors).

Jim Dempsey

www.quickthreadprogramming.com

Vectorization may happen more readily with

energy = energy + merge(1,0,dr2<2)

Your simd directive is incorrect. How about

!$omp simd reduction(+: energy) collapse(2)

or the !dir$ equivalent?
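
For example, applied to the kernel above (just a sketch of the placement; the clauses assert that no dependence exists other than the reduction, so that must actually hold):

!$omp simd reduction(+: energy) collapse(2)
do k = c1s,c1e
   do l = c2s,c2e
      x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k)
      x2 = rnew(1,l); y2 = rnew(2,l); z2 = rnew(3,l)
      dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2
      if(dr2.lt.2)then
         energy = energy + 1
      endif
   enddo
enddo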

These directives are likely to generate wrong code if the specification clauses you give are incorrect.

If the compiler sees an alias between the energy argument and the inputs (and you didn't set -assume dummy_aliases or the like) that appears to be an optimization bug, which you might work around by using a local scalar variable inside the loop.

The code you chose for the test is not quite the same as the actual code you posted. Principally, your particle structure was larger, making the strides for X, Y and Z further apart.

53      do j = 1,1000                                                                  0.0% 
54                                                                                     0.0% 
55         c1s = start(i)                                                              0.0% 
56         c1e = end(i)                                                                0.0% 
57         if(j.gt.i) then                                                             0.0% 
58                                                                                     0.0% 
59            c2s = start(j)                                                           0.0% 
60            c2e = end(j)                                                             0.0% 
61                                                                                     0.0% 
62            do l=c2s,c2e                                          10.029ms  10.029ms 0.0% 0ms
63                                                                                     0.0% 
64               do k = c1s,c1e                                    400.218ms 400.218ms 0.0% 0ms
65                                                                                     0.0% 
66                  x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k) 376.782ms 376.782ms 0.0% 0ms
67                  x2 = rnew(1,l); y1 = rnew(2,l); z2 = rnew(3,l)                     0.0% 
68                                                                                     0.0% 
69                  dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2         195.448ms 195.448ms 0.0% 0ms
70                                                                                     0.0% 
71                  if(dr2.lt.2)then                               121.946ms 121.946ms 0.0% 0ms
72                     energy = energy + 1                           9.985ms   9.985ms 0.0% 0ms
73                  endif                                                              0.0% 
74               enddo                                                                 0.0% 
75            enddo                                                                    0.0% 
76                                                                                     0.0% 
77     endif                                                                           0.0% 
78      enddo                                                                          0.0% 
79   enddo                                                                             0.0% 
80                                                                                     0.0% 
81  !---main code I would like to port to MIC in array of structure format             0.0% 
82   do i= 1,1000                                                                      0.0% 
83                                                                                     0.0% 
84      do j = 1,1000                                                                  0.0% 
85                                                                                     0.0% 
86         c1s = start(i)                                                              0.0% 
87         c1e = end(i)                                                                0.0% 
88         if(j.gt.i) then                                                             0.0% 
89                                                                                     0.0% 
90                                                                                     0.0% 
91            c2s = start(j)                                                           0.0% 
92            c2e = end(j)                                                             0.0% 
93            do l=c2s,c2e                                          10.047ms  10.047ms 0.0% 0ms
94                                                                                     0.0% 
95               do k = c1s,c1e                                    369.831ms 369.831ms 0.0% 0ms
96                                                                                     0.0% 
97                  x1 = part%x(k); y1 = part%y(k); z1 = part%z(k)  60.155ms  60.155ms 0.0% 0ms
98                  x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)  39.988ms  39.988ms 0.0% 0ms
99                                                                                     0.0% 
100                  dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2        240.033ms 240.033ms 0.0% 0ms
101                                                                                    0.0% 
102                  if(dr2.lt.2)then                              100.023ms 100.023ms 0.0% 0ms
103                     energy = energy + 1                        179.956ms 179.956ms 0.0% 0ms
104                  endif                                                             0.0% 
105               enddo                                                                0.0% 
106            enddo                                                                   0.0% 
107                                                                                    0.0% 
108     endif                                                                          0.0% 
109      enddo                                                                         0.0% 
110   enddo                                                                            0.0% 

The above is as you posted with effectively stride 3

The first loop took 1114.4ms, the second 1000ms (14% faster)

Changing the first dimension of rnew to 16 to account for additional properties (likely larger than necessary, but enough for the stride to span a cache line) brought the first loop to 1289.6 ms, making the SoA version ~29% faster. Your speed difference on AVX would be somewhere in between. On MIC it would be hard to guess without running a test (and using the appropriate size for rnew).

Jim Dempsey

 

www.quickthreadprogramming.com

Ok, thank you very much! The reason I was trying to get all the loops to vectorize is that I was under the assumption that I would get poor performance on the MIC unless all the loops were vectorized, not just the inner loop. That's why I was trying to force all four loops to vectorize. Thanks for actually running test cases on my code.

Can you tell me what you ran the code on to obtain those timings? I am running it on a MIC and was only getting about 1080 ms for the first loop. Not much of a speed up.

The first loop

x1 = rnew(1,k); y1 = rnew(2,k); z1 = rnew(3,k)               
x2 = rnew(1,l); y1 = rnew(2,l); z2 = rnew(3,l)

is likely exceeding an internal capacity of the compiler optimizer to keep track of what might be fully vectorizable.

Below is an excerpt of the first and second loops. The first loop performs scalar operations to build up vectors using multiple fetch-and-merge instructions (more work), whereas the second loop uses fully vectorized fetches.

  Block 19: 
66 vmovsd xmm14, qword ptr [rbp+r14*1+0x6120]         // vmovsd is a scalar (1 double)
64 add r15, 0x4 
66 vmovsd xmm15, qword ptr [rbp+r14*1+0x6220]         // vmovsd is a scalar (1 double)
66 vmovhpd xmm13, xmm14, qword ptr [rbp+r14*1+0x61a0] // vmovhpd is a scalar (1 double) move to the upper half of 2-double vector
66 vmovhpd xmm14, xmm15, qword ptr [rbp+r14*1+0x62a0] // vmovhpd is a scalar (1 double) move to the upper half of 2-double vector
66 vmovsd xmm15, qword ptr [rbp+r14*1+0x6130]         // vmovsd is a scalar (1 double)
66 vmovhpd xmm15, xmm15, qword ptr [rbp+r14*1+0x61b0]
66 vinsertf128 ymm13, ymm13, xmm14, 0x1               // combine two 2-double vector into one 4-double vector
66 vmovsd xmm14, qword ptr [rbp+r14*1+0x6230]         // vmovsd is a scalar (1 double)
66 vmovhpd xmm14, xmm14, qword ptr [rbp+r14*1+0x62b0] // vmovhpd is a scalar (1 double) move to the upper half of 2-double vector
64 add r14, 0x200
66 vinsertf128 ymm14, ymm15, xmm14, 0x1               // combine two 2-double vector into one 4-double vector
69 vsubpd ymm15, ymm9, ymm13                          // now perform 4-wide vector operations
69 vsubpd ymm13, ymm10, ymm14 
69 vmulpd ymm14, ymm15, ymm15
69 vmulpd ymm13, ymm13, ymm13 
69 vaddpd ymm15, ymm14, ymm8
69 vaddpd ymm13, ymm15, ymm13 
71 vcmppd ymm14, ymm13, ymm1, 0x1
72 vandpd ymm15, ymm0, ymm14
72 vaddpd ymm3, ymm15, ymm3
64 cmp r15, r11
64 jb 0x140001360 <Block 19>
-----------------------------
 Block 43: 
97 lea r12, ptr [r14*8] 
95 add r14, 0x4 
97 lea r15, ptr [r12+rax*8]
100 vsubpd ymm12, ymm8, ymmword ptr [rbp+r15*1+0x759398]
100 vsubpd ymm14, ymm9, ymmword ptr [rbp+r15*1+0x7ce698]
100 vsubpd ymm15, ymm13, ymmword ptr [rbp+r15*1+0x843998]
100 vmulpd ymm12, ymm12, ymm12
100 vmulpd ymm14, ymm14, ymm14
100 vmulpd ymm15, ymm15, ymm15
100 vaddpd ymm12, ymm12, ymm14
100 vaddpd ymm12, ymm12, ymm15
102 vcmppd ymm14, ymm12, ymm1, 0x1    // TimP note non-branching merge
103 vandpd ymm15, ymm0, ymm14
103 vaddpd ymm3, ymm15, ymm3
95 cmp r14, r11
95 jb 0x1400016b0 <Block 43>

The above were gathered from VTune.

If you don't have VTune, then between the two loops insert a call to something innocuous, say SLEEPQQ(0).

You are not interested in the effect of the innocuous call. Your interest is in being able to place a breakpoint into fully optimized code at a line number that has a single representation in the code .AND. is in close proximity to the code of interest. When you reach the break in your debugger of choice, open the Disassembly window to look at the code. You will have to hunt a little, because the excerpt above does not show the loop setup and loop termination sections of the code.
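
A sketch of where such a call would go in the test program (assuming SLEEPQQ is available through the IFPORT module the program already uses):

  enddo               ! end of the first (rnew) loop nest
  call sleepqq(0)     ! innocuous call: set a breakpoint here in the optimized build
  do i = 1,1000       ! start of the second (part) loop nest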

Jim Dempsey

www.quickthreadprogramming.com

Ok, awesome. I went back to my actual code and got some weird results: the vectorization varies depending on whether the variable being summed in the innermost loop has also been passed as an argument to the subroutine. I wrote the following example:

program vec

  use ifport
  use modcell_en
  implicit none

  !--- conventional array (start and end are now taken from modcell_en)
  double precision :: rnew(3,60000)

  !--- various variables
  double precision :: energy
  double precision :: x1,y1,z1,x2,y2,z2,dr2
  integer :: i,j,k,l,count
  integer :: c1s,c1e,c2s,c2e

  call srand(10)
  !--- don't care about this; it just initializes random positions
  do i = 1,60000
     rnew(1,i) = 10*rand(); rnew(2,i) = 10*rand(); rnew(3,i) = 10*rand()
     part%x(i) = rnew(1,i); part%y(i) = rnew(2,i); part%z(i) = rnew(3,i)
  end do

  !--- don't care about this either; it just assigns the start/end index ranges into rnew and part
  count = 0
  do i = 1,1000
     start(i) = count*60+1
     end(i)   = (count+1)*60

     count = count + 1
  enddo
  
  
  call get_energy(energy)

 
  stop
end program vec
module modcell_en

  type r
     double precision :: x(60000),y(60000),z(60000)
  end type r

  !--- global structure of array variable
  type(r) :: part
  integer :: start(60000),end(60000)

contains
  subroutine get_energy(energy)
    implicit none
    double precision :: energy, pot
    double precision :: x1,y1,z1
    double precision :: x2,y2,z2
    double precision :: dr,dr2,dri,dr2i,dr6i,dr12i
    integer :: i,j,k,l
    integer :: c1s,c1e,c2s,c2e

    energy =0.0d0
    !---main code I would like to port to MIC, in structure-of-arrays format
    do i= 1,1000          
       
       do j = 1,1000
          
          c1s = start(i)
          c1e = end(i)             
             
          c2s = start(j)
          c2e = end(j) 
          do l=c2s,c2e 
             
             do k = c1s,c1e
                
                x1 = part%x(k); y1 = part%y(k); z1 = part%z(k)                
                x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)
                
                dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2
                
                if(dr2.lt.2)then
                   dr = sqrt(dr2)
                   dri = 1.0d0/dr
                   dr2i = dri*dri
                   dr6i = dr2i**3
                   dr12i = dr6i**2
                   energy = energy + 4.0d0*(dr12i-dr6i)
                endif
             enddo
          enddo
       enddo
    enddo
    
    
  end subroutine get_energy
  
end module modcell_en

 

In this scenario, when I compile with IFORT -align -c -xSSES -vec-report2 modcell_en.f90 vec.f90, I get:

innermost loop: not vectorized, existence of vector dependence

all other loops: not vectorized, not inner loop.

However, if I instead sum into a local variable, say a dummy variable pot, all three inner loops vectorize!

if(dr2.lt.2)then
   dr = sqrt(dr2)
   dri = 1.0d0/dr
   dr2i = dri*dri
   dr6i = dr2i**3
   dr12i = dr6i**2
   pot = pot + 4.0d0*(dr12i-dr6i)
endif

Even weirder to me, if I then try to assign that dummy variable to the variable passed to the subroutine, only the inner loop vectorizes.

if(dr2.lt.2)then
   dr = sqrt(dr2)
   dri = 1.0d0/dr
   dr2i = dri*dri
   dr6i = dr2i**3
   dr12i = dr6i**2
   pot = pot + 4.0d0*(dr12i-dr6i)
endif

energy = pot   ! after the loop nest

Any suggestions here? If I could get all three loops to vectorize, I would go crazy.

 

Why not use -align array32byte -msse4.1 if your target CPUs are SSE4? If your compiler is so old that it doesn't accept -align array32byte, you should tell us the version. I have no idea what your -xSSES (SSE5?) would do. -align array64byte is important for MIC, and shouldn't hurt on host.
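
For example, something along these lines (a sketch only, using the flags mentioned above; adjust to your compiler version and target):

compile for an SSE4.1 host: ifort -O3 -msse4.1 -align array32byte -vec-report2 -guide modcell_en.f90 vec.f90

compile with MIC-friendly alignment: ifort -O3 -align array64byte -vec-report2 -guide modcell_en.f90 vec.f90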

If your cases vary as to whether the sum reductions are vectorized, you would expect the vectorized version to be slightly more accurate.

You can't expect the compiler to simplify away the sqrt for you (eliminating it yourself ought to help vectorization and will certainly help performance). It may try odd things with an approximate inverse sqrt when the option -no-prec-sqrt is in effect (and that is the only approach which isn't dead slow on MIC).
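
Concretely, since only even powers of the separation appear in the Lennard-Jones term, the sqrt in get_energy can simply be removed by hand (a sketch):

if(dr2.lt.2)then
   dr2i  = 1.0d0/dr2      ! equals (1/sqrt(dr2))**2, so no sqrt is needed
   dr6i  = dr2i**3
   dr12i = dr6i**2
   energy = energy + 4.0d0*(dr12i-dr6i)
endif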
 

The only important loop to vectorize is the inner most loop.

As a programming practice (and an aid to compiler optimization), move your loop-invariant code outside the invariant loop. IOW, move the x2 = ... out of the do k loop. Although the compiler should do this, and does in most cases, it won't hurt you to do it yourself. If you were doing this calculation by hand, wouldn't you look up the values of x2, y2, z2 once per advance in l?
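
A sketch of that hoist on the inner loops of get_energy (combined here with the sqrt removal mentioned above; behavior is otherwise unchanged):

do l = c2s,c2e
   !--- loop invariant for the k loop: load once per l
   x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)
   do k = c1s,c1e
      x1 = part%x(k); y1 = part%y(k); z1 = part%z(k)
      dr2 = (x2-x1)**2+(y2-y1)**2+(z2-z1)**2
      if(dr2.lt.2.0d0)then
         dr2i  = 1.0d0/dr2
         dr6i  = dr2i**3
         dr12i = dr6i**2
         energy = energy + 4.0d0*(dr12i-dr6i)
      endif
   enddo
enddo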

I too find it peculiar that changing the dummy variable energy for local variable pot changes the optimization.

On the newer compiler you have the

!DIR$ SIMD [clause[[,] clause] ...]

Your sketch code is using "if(dr2.lt.2) then". Use 2.0d0. The compiler, in order to maintain backwards consistency, may be required to use the integer 2 and then convert it to double. You will have the same issue if you compare dr2 with 2.0 where dr2 is a double. (It is OK to use **2, etc.)

Tim's suggestion of trading precision for performance is good, provided that you can deal with the loss in precision.

In looking at the inner loop, you find that it computes the squared distance and then performs a filter operation to select which pairs of particles contribute to the final energy.

The question you need to ask (and answer) yourself: do the preponderance of separations fall outside the cutoff and therefore contribute nothing to the energy?

If the answer to this is No, meaning most of the particle pairings contribute (execute the if(dr2.lt.2.0d0) section), then consider

if(dr2.gt.2.0d0) dr2 = 2.0d0 ! avoid overflow
dr = sqrt(dr2)
dri = 1.0d0/dr
dr2i = dri*dri
dr6i = dr2i**3
dr12i = dr6i**2
if(dr2.lt.2.0d0) energy = energy + 4.0d0*(dr12i-dr6i)

While the above performs unnecessary calculations which are discarded, these calculations occur within different lanes of an 8-wide vector. They are performed at the same time and are essentially free (time-wise). The above technique should show a speed-up when, on average, at least 2 out of 8 particle pairings contribute to the energy.

If the pairing probability is less than 1 in 8, then consider converting the inner loop into a picking loop, where the picked items (those .lt. 2.0d0) are written to an output array. Then, following the picking loop, a second loop is executed over that subset (using vectors) to find the energy contribution.
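
A minimal sketch of that two-pass idea (pick_dr2 and npick are hypothetical scratch storage, with pick_dr2 sized for the worst case; the second loop is the one the compiler can vectorize cleanly):

!--- pass 1: pick the squared distances inside the cutoff
npick = 0
do l = c2s,c2e
   x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)
   do k = c1s,c1e
      dr2 = (part%x(k)-x2)**2 + (part%y(k)-y2)**2 + (part%z(k)-z2)**2
      if(dr2.lt.2.0d0)then
         npick = npick + 1
         pick_dr2(npick) = dr2
      endif
   enddo
enddo

!--- pass 2: vectorizable sum over the picked subset
do k = 1,npick
   dr2i   = 1.0d0/pick_dr2(k)
   dr6i   = dr2i**3
   dr12i  = dr6i**2
   energy = energy + 4.0d0*(dr12i-dr6i)
enddo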

Jim Dempsey

 

 

www.quickthreadprogramming.com

I should also mention that, if the picking-loop strategy is used, the entire iteration space over i and j may contain clusters of near particle pairings, and conversely clusters of disparate particle pairings.

If (when) this is the situation, then consider having each thread generate a picking list in the i and j loops but not process the selected energies. At the end of the i/j loops, but still inside the parallel region, have each thread publish its count of picks, have a barrier, and then have each thread copy its picks into a consolidation array, using the accumulated pick counts of the lower-numbered threads as its starting offset. Finally, perform a new omp do calculation of the energy contributions over the consolidated list. In this manner, the high-computation section (otherwise performed by the few threads that hit the clusters) is distributed equally amongst all the threads.
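
A rough sketch of that scheme, written against get_energy (the new names, the fixed thread-count bound, and the per-thread scratch size are all placeholders; the pick payload here is just the squared distance, and omp_get_thread_num/omp_get_num_threads require use omp_lib):

    !--- additional declarations; the rest come from get_energy
    double precision, allocatable :: my_dr2(:), all_dr2(:)
    integer :: pick_count(0:255)          ! placeholder upper bound on thread count
    integer :: tid, nth, n, off, p, ntotal

    energy = 0.0d0
    !$omp parallel default(private) shared(part,start,end,all_dr2,pick_count,ntotal) reduction(+:energy)
    tid = omp_get_thread_num()
    nth = omp_get_num_threads()
    allocate(my_dr2(500000))              ! placeholder per-thread scratch size
    n = 0

    !--- phase 1: each thread builds its own pick list over its share of (i,j)
    !$omp do schedule(dynamic)
    do i = 1,1000
       do j = i+1,1000
          do l = start(j),end(j)
             x2 = part%x(l); y2 = part%y(l); z2 = part%z(l)
             do k = start(i),end(i)
                dr2 = (part%x(k)-x2)**2 + (part%y(k)-y2)**2 + (part%z(k)-z2)**2
                if(dr2.lt.2.0d0)then
                   n = n + 1
                   my_dr2(n) = dr2
                endif
             enddo
          enddo
       enddo
    enddo
    !$omp end do

    pick_count(tid) = n
    !$omp barrier

    !--- phase 2: consolidate the picks, each thread copying at its own offset
    !$omp single
    ntotal = sum(pick_count(0:nth-1))
    allocate(all_dr2(ntotal))
    !$omp end single                      ! implied barrier: all_dr2 and ntotal are ready

    off = sum(pick_count(0:tid-1))        ! zero for thread 0
    all_dr2(off+1:off+n) = my_dr2(1:n)
    !$omp barrier

    !--- phase 3: the expensive part, now evenly divided amongst all threads
    !$omp do
    do p = 1,ntotal
       dr2i   = 1.0d0/all_dr2(p)
       dr6i   = dr2i**3
       dr12i  = dr6i**2
       energy = energy + 4.0d0*(dr12i-dr6i)
    enddo
    !$omp end do

    deallocate(my_dr2)
    !$omp end parallel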

Jim Dempsey

www.quickthreadprogramming.com
