UNIX Consolidation – Top tips for right sizing

There remain some good ideas in this article, which described Sun UK’s consolidation planning methodology, but it is now mainly of interest as contemporary comment from that era. See my comment below. DFL 9 Jan 2015

 

  • The methodology involves defining the problem area or scope, then creating a detailed catalogue describing the systems, the system costs, the cost scalability rules, and the system capabilities and utilisation. We then map the current system consumption onto a future-state architecture, cost the transition, and perform a traditional investment analysis. This involves understanding the expected costs of the future-state solution, and designing to obtain the benefits of new systems which are smaller, cheaper, faster and more reliable. We have some key rules of thumb when designing a future state; these include:
  • Always deploy full systems; the card cage and backplane work harder, justifying their cost, i.e. the cost/CPU is higher on half-empty systems. (This is a true rule of thumb, and is more appropriate for smaller systems (1–12 way).)
  • Big systems (and hence domains) will deliver higher utilisation, and hence the acquisition cost to obtain the necessary capability will be lower; you pay for unused capability as well as used.
  • Using large systems as if they were multiple small systems is silly. Only buy huge systems if you need big domains (or are at the margin of requiring big domains and value the scalability and/or flexibility).
  • The premium paid in buying scalable systems can be seen as an option purchase for a bigger system than deployed, e.g. buy 48 CPUs and an option for 24 more.

ooOOOoo

—— Big systems (& hence domains) will deliver higher utilisation, and hence the acquisition costs to obtain the necessary capability will be lower; you pay for unused capability as well as used. Using large systems as if they were multiple small systems is silly. Only buy huge systems if you need big domains (or are at the margin of requiring big domains and value the scalability &/or flexibility). — These two sentences appear to contradict each other; this also seems to ignore the benefits of segmenting at a finer grain than domains using Zones or VMware etc.

Posted by Anonymous on September 23, 2004 at 11:10 PM PDT #

You caught me. There’s some careless language here; I was trying to make the point that there is a tension between high utilisation (big domains) and cost/cycle. Running a big domain empty is silly (I’m sure you’ll agree). Running a big system as if it were multiple smaller systems is not as cost-effective, unless you value the flexibility etc.

I don’t think I am ignoring the option of dividing systems (although I don’t say much about it in this article). You correctly point out it can be done. My comments (IMO) remain relevant in determining the starting point at which you begin to divide the OS-managed resource. This is based on one’s wealth and the extent to which one values high utilisation and flexibility.

Posted by Dave Levy on September 24, 2004 at 12:32 AM PDT #

The consulting offerings available to customers in the UK were described here…

1 Comment.

  1. This was written in 2004; it is basically an advert for our consultancy offerings, where we used a five-year TCO analysis to justify new system purchases. The increase in the capability of systems and the growing maturity of Windows, Linux, and the hypervisors made this a futile long-term approach, and the company died because of it. I copied the article to this blog in Jan 2015 because I think it has some use as contemporary comment from that era, and some of the computer science and economics remains true today. DFL 9 Jan 2015