Parallel Hashlist was updated to version 1.43

Hello all,

Parallel Hashlist was updated to version 1.43.

In the previous version I had set the number of lightweight MREWs to 128, but I have decided to change that in version 1.43 so that you can pass a variable number of MREWs; this will scale better. The constructor now looks like this:

hash1:=TParallelHashList.create(Hashsize, number_of_MREWS);

and the number_of_MREWS must be less than or equal to the
Hashsize.
I have also given you two examples inside the zipfile, but please note that the IntToStr() that I am using inside the test files doesn't scale well; in reality, Parallel Hashlist itself does scale very well if you don't use IntToStr().

Parallel Hashlist is a parallel HashList with O(1) best-case and O(log(n)) worst-case access that uses lock striping and lightweight MREWs (multiple-readers-exclusive-writer locks) to allow multiple threads to read and write concurrently. It also maintains an independent counter of the number of entries for each segment of the hashtable, with a lock for each counter; this too is for better scalability.
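The scheme described above can be sketched in miniature. This is an illustrative Python sketch, not the library's Delphi code: plain mutexes stand in for the lightweight MREWs (a real MREW additionally lets multiple readers in concurrently), and all names here are made up for the example.

```python
import threading

class StripedHashMap:
    """Sketch of lock striping: one lock and one independent entry
    counter per segment, so threads touching different segments of
    the table never contend with each other."""

    def __init__(self, hash_size=16, num_locks=4):
        # both sizes are powers of two, and num_locks divides hash_size
        assert hash_size % num_locks == 0
        self.buckets = [[] for _ in range(hash_size)]
        self.locks = [threading.Lock() for _ in range(num_locks)]  # stand-ins for the MREWs
        self.counts = [0] * num_locks                              # per-segment entry counters
        self.buckets_per_lock = hash_size // num_locks

    def _lock_index(self, bucket):
        return bucket // self.buckets_per_lock

    def put(self, key, value):
        b = hash(key) & (len(self.buckets) - 1)   # power-of-two masking
        i = self._lock_index(b)
        with self.locks[i]:                       # only this segment is locked
            for pair in self.buckets[b]:
                if pair[0] == key:
                    pair[1] = value
                    return
            self.buckets[b].append([key, value])
            self.counts[i] += 1                   # counter guarded by the same striped lock

    def get(self, key, default=None):
        b = hash(key) & (len(self.buckets) - 1)
        with self.locks[self._lock_index(b)]:
            for k, v in self.buckets[b]:
                if k == key:
                    return v
            return default

    def __len__(self):
        return sum(self.counts)   # total entries = sum of the segment counters
```

The point of the per-segment counters is the same as the striping itself: a single shared counter would become a contention point that every writer has to touch.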

Note: when I ran those benchmarks, there were not enough items organized as a self-balancing tree in the individual chains of the hashtable, so almost all the items were found and inserted in O(1), and the parallel part in the Amdahl equation was not much bigger than the serial part. But you will notice in practice that as soon as you have more items on the chains of the hashtable, organized as a self-balancing tree with a worst case of O(log(n)), the parallel part will become bigger in the Amdahl equation, and you will get better performance and scalability than the numbers in the graph of the benchmarks.
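The Amdahl equation mentioned above is easy to check numerically. A minimal sketch (the parallel fractions chosen here are hypothetical, not the benchmark's measured values):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores when a fraction p of the
    total work is parallelizable and (1 - p) remains serial."""
    return 1.0 / ((1.0 - p) + p / n)

# On 8 cores: if only 50% of the time is spent inside the (parallel)
# hashtable operations, the speedup stays small; when chain lookups
# grow to O(log(n)) the parallel fraction rises and so does the speedup.
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```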
Please pass the hashsize and the number of MREWs as powers of 2 to the constructor by using the shl operation, for example:

hash1:=TParallelHashList.create(1 shl 25, 1 shl 25);

Why do you have to use a power of 2? Read this:

"Power-of-Two Hash Table Size

Any data structures 101 book will say to choose a prime for the number of buckets, so that the bucket's index can easily be computed by h(k) = k mod m, where k is the key value and m is the bucket size. While this approach works, there are a number of issues with it, including slow modulo performance. ConcurrentHashMap instead uses a power-of-two table size."

I am using modulo functions inside Parallel Hashlist, and I am using a number of locks that is a power of 2, so you have to use a hashsize that is a power of 2; this will make the modulo operation generated by the Delphi and FreePascal compilers 10X faster.

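The power-of-two trick is easy to verify: when m is a power of two, `h mod m` is exactly the bitwise AND of h with m - 1, which is why a power-of-two hashsize lets the compiler replace the division with a single mask instruction. A minimal check (illustrative Python, not the library's code):

```python
def bucket_index_mod(h, m):
    # general form: works for any table size m, but needs an integer division
    return h % m

def bucket_index_mask(h, m):
    # power-of-two form: h mod 2**k == h AND (2**k - 1), one bitwise op
    assert m & (m - 1) == 0, "m must be a power of two"
    return h & (m - 1)

m = 1 << 25   # the same power-of-two size as the `1 shl 25` example above
for h in (0, 1, 123456789, 2**31 - 1):
    assert bucket_index_mod(h, m) == bucket_index_mask(h, m)
```

(How much faster the mask is in practice depends on the compiler; the 10X figure above is the author's measurement for Delphi and FreePascal.)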
You can download parallel hashlist from:

Amine Moulay Ramdane.



I have updated Parallel Hashlist to version 1.44.

What has changed in version 1.44?

I have corrected a bug in the constructor.

Before, it was:

if (AHashSize mod size2) <> 0 then size2:=size2+1;

In version 1.44 I have corrected the bug by changing size2 to size1:

if (AHashSize mod size1) <> 0 then size2:=size2+1;

And after that I am correctly initializing the array fcount1 (the independent counters that count the number of entries for each segment of the hashtable) and the array mrew (the multiple-readers-exclusive-writer locks) like this:

for i:=0 to size2-1 do fcount1[i]:=0;
for i:=0 to size2-1 do mrew[i]:=TOmniMREW.create;
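The corrected rounding logic can be sketched in Python. This is illustrative only: I am assuming size2 starts out as AHashSize div size1 (a ceiling division overall), which the Delphi fragments above don't show; the names mirror the Delphi code.

```python
def segment_count(a_hash_size, size1):
    """size2 = number of segments: AHashSize / size1 rounded up.
    The 1.43 bug tested `AHashSize mod size2` instead of `mod size1`,
    so the round-up could be skipped and the arrays sized too small."""
    size2 = a_hash_size // size1          # assumed: size2 := AHashSize div size1
    if a_hash_size % size1 != 0:          # the corrected test from 1.44
        size2 += 1
    return size2

# sizing and initializing the per-segment arrays, as in the loops above
size2 = segment_count(1 << 25, 1 << 20)
fcount1 = [0] * size2                     # the independent entry counters
mrew = [object() for _ in range(size2)]   # placeholders for the MREW locks
```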

You can download parallel hashlist from:

Amine Moulay Ramdane
