I didn't know where to ask this question, so I decided to ask here.
Yesterday I read a page about the USL
(Universal Law of Computational Scalability) of Dr. Gunther
(see http://en.wikipedia.org/wiki/Neil_J._Gunther), where he wrote this:
The relative capacity C(N) of a computational platform is given by:

C(N) = N / (1 + σ(N - 1) + κN(N - 1))
where N represents either the number of physical processors
in the hardware configuration or the number of users driving the
software application. The parameters σ and κ represent respectively
the levels of contention (e.g., queueing for shared resources) and
coherency delay (i.e., latency for data to become consistent) in the
system. The parameter κ also quantifies the retrograde throughput seen
in many stress tests but not accounted for in either Amdahl's law or
event-based simulations.
His website: http://www.perfdynamics.com/
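To make the formula above concrete, here is a minimal sketch in Python; the parameter values sigma = 0.05 and kappa = 0.002 are my own illustrative assumptions, not values from Dr. Gunther:

```python
# Minimal sketch of the USL capacity formula. The parameter values
# below (sigma = 0.05, kappa = 0.002) are illustrative assumptions.
def usl_capacity(n, sigma, kappa):
    """Relative capacity C(N) = N / (1 + sigma*(N-1) + kappa*N*(N-1))."""
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

sigma, kappa = 0.05, 0.002
# Capacity peaks near N* = sqrt((1 - sigma) / kappa), about 21.8 here;
# beyond that point the coherency term makes throughput retrograde.
for n in (1, 10, 22, 50):
    print(n, usl_capacity(n, sigma, kappa))
```

Running it shows the retrograde behavior the quote mentions: C(N) rises up to the peak and then falls again as the κN(N - 1) coherency term dominates.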
If you read carefully, you will see that Dr. Gunther uses this
model to predict scalability: he first simulates a relatively small
number of vusers in LoadRunner (because of licensing costs, it's
cost-prohibitive to simulate large loads), then fits a 2nd-degree
polynomial (quadratic equation) to the measurements and transforms
those coefficients back to the USL parameters using σ = b - a
and κ = a.
He then extrapolates with the USL model to higher loads
to predict scalability.
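A minimal sketch of that fitting procedure in pure Python, assuming the transformation x = N - 1, y = N/C - 1 that Dr. Gunther describes; the data points below are synthetic, generated from parameters I chose for illustration, not real measurements:

```python
# Sketch of the quadratic-fit step: transform measurements to
# x = N - 1, y = N/C - 1, fit y = a*x^2 + b*x by least squares,
# then recover the USL parameters via sigma = b - a, kappa = a.
def fit_usl(points):
    """points: list of (N, C(N)) measurements with N > 1."""
    xs = [n - 1.0 for n, c in points]
    ys = [n / c - 1.0 for n, c in points]
    # Normal equations for y = a*x^2 + b*x (no intercept term):
    s4 = sum(x ** 4 for x in xs)
    s3 = sum(x ** 3 for x in xs)
    s2 = sum(x ** 2 for x in xs)
    t2 = sum((x ** 2) * y for x, y in zip(xs, ys))
    t1 = sum(x * y for x, y in zip(xs, ys))
    det = s4 * s2 - s3 * s3
    a = (t2 * s2 - s3 * t1) / det   # kappa
    b = (s4 * t1 - s3 * t2) / det
    return b - a, a                  # (sigma, kappa)

def c_model(n, s, k):
    """USL capacity, used here only to generate synthetic data."""
    return n / (1.0 + s * (n - 1) + k * n * (n - 1))

# Synthetic "measurements" from assumed sigma = 0.1, kappa = 0.01:
data = [(n, c_model(n, 0.1, 0.01)) for n in (2, 4, 8, 16, 32)]
sigma, kappa = fit_usl(data)
print(sigma, kappa)  # should recover approximately 0.1 and 0.01
```

Once σ and κ are recovered, extrapolating is just evaluating C(N) at loads larger than any that were measured, which is exactly the step the question below is about.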
He is also applying the model to webservers with heterogeneous
workloads, as in the following page:
Now here is my question:
Suppose we have obtained a small number of measured load points
with LoadRunner or other tools, and we have fitted the USL equation
to predict the scalability of a webserver. How can the USL model tell
whether the scalability/performance is limited by the network
bandwidth and not by the server?
Can it give this information?
Amine Moulay Ramdane.