Parallelizing reading data???

What about this: after figuring out the data size, we could parallelize the reading process itself and gain a few more milliseconds? We could even apply different algorithms to different portions of the data, depending on the nature of the numbers read.
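The thread never settles on a language, so here is a minimal Python sketch of the idea: determine the file size first, then hand each worker its own byte range to read. The function names (`read_chunk`, `parallel_read`) and the worker count are my own, not anything from the thread.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path, offset, length):
    # Each worker opens its own handle and seeks to its byte range,
    # so the workers never share a file position.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, workers=4):
    size = os.path.getsize(path)              # figure out the data size first
    chunk = (size + workers - 1) // workers   # ceil-divide into byte ranges
    offsets = range(0, size, chunk)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda off: read_chunk(path, off, chunk), offsets)
    return b"".join(parts)

# Demo: round-trip a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789" * 1000)
data = parallel_read(tmp.name)
os.unlink(tmp.name)
```

Whether this beats one sequential read depends entirely on the storage: on a spinning disk the seeks between ranges can easily cost more than they save.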

Yes, I think you shouldn't parallelize small files, because the overhead will be greater than the speed gained.

I just had a question: is this a question, or an informational post?

It's an idea. If you think it's worth implementing, please tell me :D

Parallelizing I/O is not so simple. Remember that the file is most likely stored on a hard drive, which reads much faster when the I/O command is one long sequential read.

As asiradiax said, if you parallelize a small input text you will, more or less, only generate overhead :) You will figure that out with some trials. Maybe it gives better performance but more overhead; then we can weigh that trade-off :) Good luck! Regards,

It's worth implementing, but how?

I think it doesn't help, as it would take more time.

I think it would take longer.

Parallelizing file processing can be faster if you load the whole file into memory and then work on it in parallel, but you will run out of memory much sooner with a large number of files.

Practically impossible.
