This is a nice problem! I can see it really putting multithreaded techniques to good use. Can we assume the input integers are strictly positive and less than 2^31-1?

And is there an upper bound on the size of a potential tiling set?


Quoting mdma

And is there an upper bound on the size of a potential tiling set?

...and also the maximum number of input sets and/or a maximum size of the input file?

Yes, the input integers will be positive values, and the problem size will be bounded above by an unsigned 32-bit integer.

Thanks

-Rama

Hi,

No problem. Go ahead and use signed 64-bit integers. But note that the input integers will be positive.

Thanks

-Rama

Also, will the width/height and/or area of a rectangle fit within a signed 32-bit value?

Hi,

For this problem statement, we don't anticipate that the dimensions will be larger than a 32-bit unsigned value, so you can stick to that assumption. I don't see the necessity of using a signed 32-bit value, but feel free to code as you deem necessary and best. The largest rectangle dimension won't exceed a signed 32-bit integer. I hope this info helps and clarifies this post.

Thanks

-Rama

If you say the largest size is 2^32-1 (the largest value for an unsigned 32-bit value), then coders using languages with only signed types have to use a larger type. If you say the largest size is 2^31-1, then languages where only signed types are available can also compete.

Thanks,
mat.

Quoting mdma

This is a nice problem! I can see it really putting multithreaded techniques to good use.

Humm... Isn't it going to be IO bound?

All about lock-free algorithms, multicore, scalability, parallel computing and related topics:

http://www.1024cores.net

Quoting Dmitriy Vyukov

Quoting mdma

This is a nice problem! I can see it really putting multithreaded techniques to good use.

Humm... Isn't it going to be IO bound?

I seriously doubt it. Unless there is some short-cut, finding all valid encodings is an O(2^n)-time algorithm.
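As a rough sketch of where an exponential count can come from, assuming the naive approach enumerates every way of cutting the sequence of squares into rows (the function name is my own, not from the problem statement):

```python
def row_splittings(seq):
    """Enumerate every way to cut a sequence into contiguous rows.

    Each of the n-1 gaps between elements is either a cut or not,
    giving 2**(n-1) splittings -- the exponential search space.
    """
    n = len(seq)
    for cuts in range(1 << (n - 1)):
        rows, row = [], [seq[0]]
        for i in range(1, n):
            if cuts >> (i - 1) & 1:  # gap i is a cut: start a new row
                rows.append(row)
                row = []
            row.append(seq[i])
        rows.append(row)
        yield rows
```

For n squares there are 2^(n-1) such splittings, e.g. 8 for a sequence of four squares, so checking them all one by one blows up quickly.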

Quoting mdma

Quoting Dmitriy Vyukov

Quoting mdma

This is a nice problem! I can see it really putting multithreaded techniques to good use.

Humm... Isn't it going to be IO bound?

I seriously doubt it. Unless there is some short-cut, finding all valid encodings is an O(2^n)-time algorithm.

Can you please elaborate on where you find so many options that lead to exponential time? There are at most n possible rectangle widths, and once the dimensions are fixed, the validity test for an encoding seems to take O(n) operations: the location of the upper-left corner of each consecutive square, starting with the first one, is determined uniquely by all the squares placed before it. So the squares are placed into the rectangle one by one, as long as each fits the remaining free space. The very first square that does not fit means the set is not suitable for this rectangle; there is no way to undo a placement and try another one. That gives at most O(n^2) time in total. Or am I missing something?
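If I read that argument right, the forced-placement validity test can be sketched as a single "skyline" pass. This is my own sketch of the idea under that assumption, with made-up names, not code from the problem statement:

```python
def tiles_rectangle(squares, width, height):
    """Check whether a sequence of square sizes tiles a width x height
    rectangle when each square is placed greedily at the leftmost of
    the lowest free positions. Placement is forced, so the whole test
    is one linear pass over the squares with no backtracking."""
    # skyline[x] = current filled height of column x
    skyline = [0] * width
    for s in squares:
        # lowest column, leftmost on ties: the forced corner
        x = min(range(width), key=lambda i: (skyline[i], i))
        y = skyline[x]
        # the square must fit horizontally and vertically...
        if x + s > width or y + s > height:
            return False
        # ...and sit flush on columns that are all at the same height
        if any(skyline[x + j] != y for j in range(s)):
            return False
        for j in range(s):
            skyline[x + j] = y + s
    # tiled exactly iff every column is completely filled
    return all(h == height for h in skyline)
```

Each square either fits at its forced corner or the whole candidate is rejected on the spot, which is what makes the per-rectangle check linear rather than exponential.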

Quoting gk4v07

Can you please elaborate on where you find so many options that lead to exponential time?

All possible splittings of a sequence?


Quoting Dmitriy Vyukov

Quoting gk4v07

Can you please elaborate on where you find so many options that lead to exponential time?

All possible splittings of a sequence?

Yes, but we don't need to check them all: if the first k squares don't fit within the width of the rectangle, we can drop the rest, and the same is true for the next "row" to fill.

Quoting Dmitriy Vyukov

Quoting gk4v07

Can you please elaborate on where you find so many options that lead to exponential time?

All possible splittings of a sequence?

That's what I was thinking: the number of partitions of a set/sequence.

It's just a casual guess; I've not thought much about the algorithm, so a shortcut (such as discarding many improbable solutions) might reduce this to polynomial time. But I'd be surprised if it becomes I/O bound... then again it might... we'll just have to code and see!

Computational complexity tends to be O(record_count * record_number_count), which is O(total_number_count); that is, computational complexity is linear in the input size. So it's indeed IO bound. The good news is that the target machines are able to cache dozens of GB in memory.


I believe there are some corner cases that yield quadratic complexity.