I've heard that Linux servers running Apache peak at a utilization
of between 7 and 10%. No wonder no IT manager wants their servers
analyzed; to a reasonable person this sounds like a waste of
money. Typical VM systems run as high as 100% utilization and still provide
excellent response time to the favored applications. The wasted
processor capacity on a dedicated server cannot be used by another server,
and that fact alone gives VM a significant advantage.
An idle Linux guest running a standard distribution will
use about 0.3 percent of a G5 processor. This time is consumed by the
timer interrupt, set at the default of 100 Hz (100 times per
second). On a P390, this consumes between 3 and 4% of the
processor. The overhead can be cut by lowering the timer frequency. David Boyes
has said he reduced this value by trial and error to 16.
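Since the idle cost comes from servicing a fixed amount of work on every tick, the overhead should scale roughly linearly with the tick rate. A back-of-envelope sketch (the 3.5% figure is simply the midpoint of the 3-4% range quoted above; the linear-scaling model is an assumption, not a measurement):

```python
def timer_overhead_pct(base_pct, base_hz, new_hz):
    """Scale a measured idle timer overhead linearly with tick rate.

    base_pct: measured idle overhead (percent) at base_hz ticks/second
    new_hz:   proposed timer frequency
    """
    return base_pct * new_hz / base_hz

# On a P390, 100 Hz reportedly costs roughly 3-4% of the processor
# when the guest is idle; 3.5 is the midpoint, used here as a guess.
at_16hz = timer_overhead_pct(3.5, 100, 16)
print(f"Estimated idle overhead at 16 Hz: {at_16hz:.2f}%")  # ~0.56%
```

If the model holds, dropping from 100 Hz to 16 Hz cuts the idle timer cost by about a factor of six, which is consistent with why trial and error settled on a low value.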