Hi all,

The benchmarks are done, the social points are added, and the readmes were read. So I can tell you that we have the winners. :) I will try to send it as soon as possible to end the stress for all of you guys! Thanks again for your participation.

Regards,
Anthony.


By the end of the day? :) Our stress just got increased :)

Wow, that was really fast. I assume it was pretty intense to read all the readmes.

Let's wait for the results now :)

Any clue about the best library used to parallelize the program?

Has anyone got the results?

Nope... Also, "send" is a vague word, so I don't know if we are to get anything by mail at all...
Like the rest, though, I've been on auto-refresh since the topic opened up.

Same here :) Wondering if I'm in a different timezone or what. That being said, I'm of course not blaming you guys; you have tested and got results in no time compared to what was announced. I can wait a few days ^^

I'm guessing he meant sending each participant their ranking, but they'll probably make a global post or something with all the ranks and maybe more.

Edit: Just checked the French forum. The same news was announced there. Anthony Charbonnier says "The announcement will be an email to which you'll have to answer if you are one of the winners."


On Twitter, Anthony said the results would be published tomorrow on the website.


Hi all,

Yes, we will have the results at the end of today!

Regards,
Anthony, Intel Software Network.

Teasing again... :) Here is the procedure used for the benchmarks, as explained by our software engineer himself!

1. I run a very short file with 500*300 random numbers.
The solutions running too slowly on a large file are set aside.
The solutions that do not build (example: files in a subfolder instead of the root?) are set aside.
The solutions that do not run (example: "run" not executable?) are set aside.

2. I run a simple file with 500*18000 random numbers.

3. First on 10 threads on a lightly loaded machine.
Three runs, only the best one kept as reference.

4. Then on 40 cores, but with 2 cores loaded
(to simulate unexpected kernel or I/O activity).
Three runs, only the best one kept as reference.

The 40-core run is the most important criterion
for the final grade.

Documentation and the build system are also considered for the technical grade.
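The "three runs, only the best one kept" step above can be sketched as a small shell loop. This is a hypothetical reconstruction, not Intel's actual harness: `CMD` is a placeholder for a submission's real `./run <threads> <input>` invocation, and GNU `date` is assumed for nanosecond timestamps.

```shell
#!/bin/sh
# Hypothetical sketch of the timing step: run a submission three times
# and keep only the best wall-clock time as the reference.
# CMD stands in for a real "./run <threads> <input>" command line.
CMD=${CMD:-"sleep 0"}
best=
for i in 1 2 3; do
    start=$(date +%s%N)                      # nanoseconds (GNU date)
    $CMD
    end=$(date +%s%N)
    elapsed=$(( (end - start) / 1000000 ))   # milliseconds
    if [ -z "$best" ] || [ "$elapsed" -lt "$best" ]; then
        best=$elapsed                        # keep the fastest run
    fi
done
echo "best of 3: ${best} ms"
```

Keeping the best of three rather than the average filters out one-off interference from other users on a shared machine like the MTL.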


A pity there were such solutions (hopefully not mine :) ). I think, for the next editions, it would be very helpful if, together with the rules specification, you provided a downloadable Makefile. If the code passes "make", then a month's work is not trashed at this stage of evaluation. I know you gave an example script, but apparently such an explicit file would save some of the participants.
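In the spirit of that suggestion, here is a hedged sketch of a pre-flight self-check a contestant could run before uploading. The expected layout (a `Makefile` and an executable `run` at the archive root, checked via `make clean`, `make`, then execution) is an assumption based on the example commands mentioned in this thread; the toy Makefile is created inline so the sketch is self-contained.

```shell
#!/bin/sh
# Hypothetical pre-flight check mirroring the automated stage
# (unzip, make clean, make, run). A toy submission is created here
# just so the script runs standalone; replace it with your real one.
set -e
mkdir -p submission && cd submission
# Toy Makefile: "all" produces an executable named "run" at the root.
printf 'all:\n\tcp /bin/echo run\nclean:\n\trm -f run\n' > Makefile
make clean && make            # must build with a plain "make"
test -x run                   # "run" must exist and be executable
./run "submission passes the automated checks"
```

Running this on the MTL itself, in a fresh login shell, would also catch the environment-variable problems several teams describe later in this thread.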

So 500 x 18000 was the largest test used? Also were there no tests that involved more than one input file?

We are actually thinking about it. The benchmarker actually went into pretty much all the submissions which were not working (around 30%) in order to make them work. Believe me, it took a lot of time. The problem here is the MTL access: if everyone had followed the procedure and tested their program correctly on the MTL, there would be no problem.

For the next contest, we will find another way to make sure that the code is well formatted and works fine. With more than 120 submissions, part of the benchmarking has to be automated.

Regards,
Anthony, Intel Software Network.

These are really small input files; I didn't expect that! Could you tell us how fast the best programs were on the input files you used?

Will special-case optimisations be taken into consideration (for example, efficient multiple-file processing, transposing, pruning and others)? For instance, with only one test file, a lot of our branches are never used.

In my opinion, running several input files in one run is also one of the criteria for checking which code is faster... Approximately when are the results coming out today? :)

I think, Anthony, you have to make more tests, like:
1. For the same given data, it's important to also test 18000x500, not only 500x18000.
2. Error-condition tests: all data positive, all data negative, 0x0, Nrows x 1 and 1 x Ncolumns.
3. Also, considering the data, what about testing small values and large values?
4. What about if most of the data is positive, and also if most of the data is negative?
5. I'm sure all of the above must affect the results.
Best regards

Hi, our program was well optimized for handling an enormous number of input files (you could have tested this by running ./run 40 ./input/*.txt >output), and we agree with Grigore Lupescu's opinion that you should have had more diverse testing.

EDIT: why would you guys give 1 star ratings to the 3 posts above????

Although I agree that the tests could have covered more cases, I don't think it's correct to change the testing procedure based on the opinions of the contestants. Since the test procedure has been made public by Intel staff and the benchmarks were already run, I don't think changing the test procedure would be fair at this point. Intel staff should take into consideration the (negative) feedback regarding the test procedure and maybe address these issues in the next contest.

So you think it's unfair for further tests to be done that would better differentiate between programs that conform to the contest rules and those that don't? They specified on the forum that we need to address all the possible things that could arise, so they should run these tests. Heck, we even tuned how our program allocates memory for low-RAM systems!

The test procedure has been made public and the tests have been done. Although your point might be valid, changing the test procedure at this point is wrong. Do you think it would be fair if you won one of the prizes with one test procedure and then lost with another?

Until the leaderboard is published, no one has won. And as you stated, it would be unfair to win with a test procedure and lose with another. That's exactly my point when asking for a complete testing framework.

Hi all

I'm really surprised by this small test benchmark.

Anybody among us who took all the cases into consideration in his program will score the same as someone who did not.

As we know, we are developing a program here, and all of us know that it must handle all the cases.

By the way, I don't think it's a good idea to test using such a simple benchmark; again, there may be a program designed to work well with large data while others are not, and so on.


Bring the axes!

"They specified on the forum that we need to address all the possible things that could arise, so they should do these tests."

They advised us to optimize for every input, but they didn't specify whether everything would be tested by Intel or not. They didn't want to see any specific optimization targeting the test procedure, and that's why they didn't give it out. Even if every case had been tested by Intel, it wouldn't really be fairer: is a program that performs better on large inputs better than a program that performs better on small inputs, or not? Should you maybe take the average? OK, but what if, in real life, the matrices you have are not big?

I agree with you; moreover, it is difficult to make a general test platform that would "touch" every possible case.
The only aspect I am concerned about is whether those that handled any extra cases will receive extra points, at least in the readme/technical section (25 points).

Ph0b, we were talking about a larger number of input files, not the size of one file. If we talk about the size of one file, then trust me, some algorithms will perform way better on 10x10000 size than on 200x200. Some others will take ages with 10000x10, when it should take the same time as its transpose.

Hi,

Indeed, as it is written in the T&Cs, the readme, the quality of the code, and the social aspects gave you some points. A little reminder: 125 points for speed/code/scalability, 25 points for the readme, 25 points for the social aspect. Some people who had better speed actually finished behind people with better code/readme/forum activity. We knew that having these tests could confuse some of you, but as previously said, we didn't want to be specific in the description of the test, to avoid special developments.

For sure, we will try to make the test procedure better next time. (It was discussed and defined by one of our parallel programming specialists in EMEA.)

A little surprise to make you wait: we do not have 4 laptops to win... but 12. And... hint: some of the people who talk here won some of these laptops. ;)

Regards,
Anthony, Intel Software Network.

It would be nice if you finally published the results without making us wait this much :)

Anthony, you should get some "Hitchcock Brown Belt", or so

I would consider making a so-called "Hitchcock Belt" ;). The article will be online in around 20 min (or less) on the homepage of http://software.intel.com/fr-fr/.

Anthony, Intel Software Network.

Special development is one thing; good algorithm performance in the general case is another. As I see it, there are two major aspects to take into consideration when evaluating an algorithm.

1. Multiple input files: either many small files or many mixed files (large and small). This is an important source of parallelism and scalability, since processing small files in parallel, one per thread, is much more efficient than using all available threads on each file. Furthermore, files that are considered too small can be serialized from the start, just to save more threads for the actual bigger files, where more threads make a difference.
2. Large files: I'm talking 500-900 MB files. As anyone who has tested large 10000 x 10000 files might have noticed, there is an important scalability bottleneck caused by memory access. A program that scales well on small 36 MB files (500 x 18000) might not scale well (or might not scale at all) with very large files. There are important optimizations and parallelization techniques that address the cache bottleneck; these are simply ignored when the test files are 36 MB in size, just a little bigger than the shared 24 MB cache.

It's quite superficial to base the results on just one 7 second test.
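The file-level strategy in point 1 can be illustrated with a short shell sketch: hand each input file to its own worker process instead of parallelizing inside one file. Here `xargs -P` and `wc -l` are only stand-ins for a real dispatcher and per-file kernel, and the tiny generated inputs are hypothetical.

```shell
#!/bin/sh
# Hypothetical illustration of file-level parallelism: one worker per
# input file, at most 4 workers at a time (-P4). wc -l stands in for
# the real per-file computation.
mkdir -p input
printf '1 2\n3 4\n' > input/a.txt
printf '5 6\n'      > input/b.txt
ls input/*.txt | xargs -n1 -P4 wc -l
```

Since the workers are independent processes, output order is nondeterministic; a real solution would collect and order the per-file results itself.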

@Anthony: Now we're gonna refresh that page instead of this one :)

Everything is online: http://software.intel.com/fr-fr/articles/winnersAcceler8/

Please, if you see any errors (we are human), tell me and I will correct them.

Anthony, Intel Software Network.

Our team name is c0re, and I'm really glad we are 3rd!

What a pleasant surprise. :)
(Quite unexpected, honestly, because our algorithm does not do any load
balancing. I thought it would behave far worse with the "2 fully loaded
cores"; what a strange idea to do that ;) )

I'm quite amazed by the score of the INSA4INFO team. Can't wait to see
what they have done. (I hope that more details / the code / the README
will be added to the articles :) )

Congrats to the winners ;)

Congrats to the winners!

Congrats indeed! On another note, apparently our submission didn't run. And the last time I tried on the MTL (hard enough at the end), using the exact commands shown in the example, it did. So could we have details about what failed, so we can submit our solution to the late contest, working this time? ;)

Wooow, we were really surprised when we saw the results. Congratulations to everyone!!! It's been a long, tiring, but very interesting month since the beginning of the contest. Thanks for the comments.

We will be glad to share and explain every part of our code, and I look forward to seeing the other algorithms too. I'm now waiting for the specific directives to publish our code (but I actually don't know if we will be allowed to post it before the end of the second contest).

His name doesn't appear on the main page, but I want to especially thank a shadow jedi (^^), Pascal Garcia, teacher at the INSA of Rennes, who really worked on the project too. This contest was a very good opportunity for students to share a project with experienced teachers.

Thanks very much to the Intel team :)

That's what happened to us. It seems it didn't unzip or compile, although we checked it many times on our computer and on the MTL. I hope we'll get some details about it :)

Did you check that it worked without sourcing the script that populates the environment variables?
A lot of people had trouble with that during the last days, and that may prevent your code from compiling.

We used the icpc compiler, and it needed some environment variables set up. We couldn't get a script to work, as the script kept things local and the variables disappeared when the script ended. In fact, we checked the variables inside the script and outside it to be sure this was happening. We decided to put the sourcing inside the Makefile, so that whenever you run make, it sets up the environment properly and then builds the executable. This also kept things local, but since building the executable happened inside it, it produced a working output every time, on our machine and on the MTL. We made sure that make without the sourcing didn't work, and once we put the sourcing in the Makefile, it started producing an executable which ran perfectly. So, automated building covered. Our submission was zipped (which is a standard as far as we know), with the proper password, and that worked too. So, archive format covered.

This way we (thought we) made sure that the automated system could take our submission as it is (unzip, make clean, make, run), without a script, since it was really not explained at all how a script would be handled (another reason we wanted to avoid one).
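The "variables stay local to the script" behaviour described above is easy to reproduce. In this sketch, `setup.sh` is a stand-in for Intel's compilervars.sh and `DEMO_CC` is a made-up variable; the point is that `sh script` runs in a child shell whose exports die with it, while sourcing with `.` keeps them in the current shell.

```shell
#!/bin/sh
# Why "sh setup.sh" cannot hand environment variables to the caller,
# while sourcing (".") can. setup.sh stands in for compilervars.sh.
printf 'DEMO_CC=icpc\nexport DEMO_CC\n' > setup.sh
sh setup.sh                          # child shell: DEMO_CC dies with it
echo "after sh:     '${DEMO_CC:-unset}'"   # prints 'unset'
. ./setup.sh                         # sourced: DEMO_CC persists here
echo "after source: '${DEMO_CC:-unset}'"   # prints 'icpc'
```

This is also why putting the `source` line inside a Makefile recipe works: each recipe line runs in its own shell, so sourcing and compiling must happen on the same line for the variables to be visible to the compiler.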

We tried to contact Intel several times; the only proper explanation we got was: "Near to your participation, the person who ran the benchmark wrote that your application was not building automatically or didn't respect the format of the archive." Nothing more, nothing since. This helped us in no way to identify the problem, so that we could, say, resubmit for the second part, which we now find ridiculous, with all the best-result-producing ideas/codes out there.

We really think throwing away 30% of the submissions is wrong, as they all made the deadline and deserve a chance, especially when they are supposed to work.

+1

For us, it was just the launch script, with a simple line: "source /opt/intel/bin/compilervars.sh intel64", which was given... We specified that the script had to be launched before the Makefile, as was said in the rules:

- your code
- the launch script (environment variables...)

But they considered that the code did not compile... whereas we put in quite some effort and were out just because of an environment variable. Next time we will use the gcc compiler, not Intel's ;) More seriously, we have sent an email about this, asking at least what our rank would have been in the normal ranking (the "interesting code" ranking is really frustrating), but we have not received a reply yet. Is it possible to have a proper answer for the "frustrated" minority? That would be nice. Apart from this point, we enjoyed the contest.

Exactly our feeling. Such an enjoyable competition, yet ruined by some minor details. We tried to avoid scripts for exactly this reason, complications, as their handling wasn't properly covered in the rules. And using gcc wasn't an option, as we were getting consistently better times with the Intel compiler.

I wish there would be some review of this; 25-30% is a substantial number. In fact, it's also unfair to all participants, as those who passed did not really compete with everyone (it lessens the victory).

One more thing I am especially surprised about: the organizers had planned a month of testing, yet everything was done over a weekend or so; and still we received a reply that trying the code of 25% of (no more than) 200 teams could not be handled by hand, for lack of manpower (the plan was, I repeat, a month). Most of the "failed" submissions, I am sure, require very simple modifications or none at all, just the proper environment variables set up. I just know our Makefile worked like a charm when we tested it on the MTL.

Cheers to all
