Moore's Law states that the number of transistors on a microprocessor doubles roughly every two years. The law has held up pretty much since Gordon Moore, one of the founders of Intel, first published his paper on the subject in 1965. With the increase in chip capacity came a rise in speed and therefore processing power. We have powerful computers today because of the technology and innovation that drive chip design and production. It is sometimes said that when the first men landed on the moon, there was less computing power on the spacecraft than there is in a modern mobile phone. So we've come a long way in 40 years or so.
For a stand-alone home or office computer or laptop, the amount of energy consumed is large in itself. Now consider taking an individual computer with all its associated processor technology, minus peripherals such as the screen and mouse, and multiplying it by hundreds, maybe thousands, of similar devices in the same room. These rooms are termed 'server farms', where line upon line of individual servers are stacked together to process information. This scenario presents processor designers at the front end, and building services engineers at the back end, with the same problem: how to dissipate heat and minimize power requirements. Server farms exist in the first place because our world is becoming more data-driven, and in a world of 24/7 data requirements the server farm is indeed a practical solution.
Grouped servers, or server farms, generate huge amounts of heat, and because the servers must be kept within their operating limits, a huge amount of energy must be expended on ventilation and air conditioning. As the energy demand goes up, so too does the cost. And because more and more companies are using these server farms to process and warehouse data, the demand for both faster technology and energy is rising in parallel. As the world becomes ever more speed- and data-driven, rising data requirements are driving demand for more server capacity, and therefore for larger and more complex storage locations.
An interesting read, but a few things come to mind around the green IT space. Firstly, we need to move applications to what I call the BIG THREE: the web; Citrix/application streaming; and grid computing (DataSynapse, Platform, etc.). If an application doesn't run on one of these three media, the possibilities around it are going to limit what we can achieve (excluding the database, of course). By that I mean: if I have a proprietary application which cannot, for whatever reason, be upgraded to a web platform, be streamed via Citrix or an online Java-type application, or have its workload converted into a grid-type application, then we will need to maintain the server, the switch, and the storage for its individual application server nodes.
What we need to do is:
Tier the application – how available does the application need to be, and to what performance levels? If it's a train service status in a developing economy that runs three times a day and is accessed by mobile phone, does that need the same level of service as the same application in New York, accessed by thousands of iPhone users in rich media?
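To make the tiering idea concrete, here is a minimal sketch of how an application could be classified into a service tier from its availability and performance needs. The tier thresholds, field names, and `AppProfile` type are all illustrative assumptions, not a standard.

```python
# Hypothetical sketch: classify an application into a service tier.
# Thresholds and fields are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AppProfile:
    peak_users: int          # concurrent users at peak
    max_outage_hours: float  # tolerable downtime per incident
    rich_media: bool         # heavy bandwidth/CPU per request?

def classify_tier(app: AppProfile) -> int:
    """Return 1 (most critical) down to 3 (least critical)."""
    if app.max_outage_hours < 1 or (app.peak_users > 10_000 and app.rich_media):
        return 1  # brand-affecting: full redundancy, premium cooling
    if app.max_outage_hours < 8 or app.peak_users > 1_000:
        return 2  # important, but can tolerate short outages
    return 3      # e.g. thrice-daily train status over mobile: best effort

# The New York rich-media service and the thrice-daily timetable
# land in different tiers, so they can be hosted very differently.
print(classify_tier(AppProfile(peak_users=50_000, max_outage_hours=0.5, rich_media=True)))
print(classify_tier(AppProfile(peak_users=200, max_outage_hours=24, rich_media=False)))
```

The point of encoding the decision is that tier assignment becomes repeatable and auditable rather than a per-project argument.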
Tier the data center – do we need that many data centers all acting like Tier 1: super cool, super available, load-balanced? Can we not run each data center at the temperature appropriate to its applications and availability? If it's Tier 2 or Tier 3 (by which I mean an outage would be painful, but not brand-affecting or catastrophic), can we not run those data centers slightly hotter, on the basis that it might save millions of pounds on availability that isn't needed? By running Tier 3 at 30 degrees, I might save a few million pounds a year in power and cooling with only a marginal effect on availability, and any extra support cost would be offset by the power saving. In this case, for example, I could have data center 7 (which is 9-to-5 only) powered down to low availability on minimal servers at the weekend, and then bring all nodes back online on Monday at 7 am in a controlled fashion.
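The weekend power-down idea above can be sketched as a simple scheduling function: given the current time, how many server nodes should be online? The node counts, the 07:00–19:00 window, and the skeleton weekend capacity are illustrative assumptions.

```python
# Hypothetical sketch of a 9-to-5 data center's power schedule.
# Node counts and hours are assumptions for illustration only.
from datetime import datetime

TOTAL_NODES = 200
MINIMUM_NODES = 10  # skeleton capacity kept up over the weekend

def target_nodes(now: datetime) -> int:
    """How many nodes should be powered on at `now`?"""
    weekday = now.weekday()  # Monday == 0 ... Sunday == 6
    if weekday >= 5:         # Saturday/Sunday: minimal footprint
        return MINIMUM_NODES
    if 7 <= now.hour < 19:   # working day, with warm-up/cool-down margin
        # On Monday at 07:00 this is where the controlled ramp-up begins;
        # a real controller would stage nodes rather than boot all at once.
        return TOTAL_NODES
    return MINIMUM_NODES     # weeknight

print(target_nodes(datetime(2024, 1, 6, 12)))  # a Saturday noon
print(target_nodes(datetime(2024, 1, 8, 9)))   # a Monday morning
```

A controller polling this function (or the equivalent rules in a facility management system) could drive the actual power-up and power-down, with the staging logic added where the comment indicates.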
Virtualize the application – abstract it into its component parts (workloads, data feeds, and user inputs), so that we can move it around the relevant load-balanced platforms on a shared infrastructure basis.
Virtualize the infrastructure – move towards shared infrastructure models where I buy the workload, capacity, performance, or reliability I need, and pay only for what I use, on the basis of application availability and reliability.
Virtualize the storage – and set standards to move more data offline as it becomes less needed online. By that I mean: we need to keep, say, 30 days of trade data on the server disks; trade data going back three years is wonderful to have online, but it is an inefficient use of power and storage. At the same time, this means we need a backup and recovery process that:
Runs on time
Is scalable and enables recovery in hours, not the "Michael's at lunch, we've ordered the tapes, sometime next Thursday" kind of timescale
With a working backup, I could move more data offline to cheaper and more energy-efficient storage. It might simply mean tape; it might mean cheaper disk backups for your last six months of data, with everything else on tape, etc. – more efficient storage on a per-application basis.
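The 30-day online retention rule above could be sketched as a script that walks the online storage and lists which files have aged out of the window. The directory layout and the modification-time criterion are assumptions for illustration; a real system would use the trade date from the data itself.

```python
# Hypothetical sketch: find files that have aged out of the online
# retention window and are candidates for tape or cheaper disk.
import os
import time

ONLINE_RETENTION_DAYS = 30

def offline_candidates(root, now=None):
    """Return file paths under `root` not modified within the retention window."""
    now = time.time() if now is None else now
    cutoff = now - ONLINE_RETENTION_DAYS * 86_400
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)  # candidate for migration off expensive disks
    return stale
```

Run nightly, such a sweep keeps the expensive online tier holding only the recent data that actually needs to be there.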
Work with deduplication of data – how much space is taken up on the shared storage by user profiles, by static application data, or by copies of Office or other applications for user access? It might be more operationally efficient, but is this again because it takes too long to rebuild a PC? Without limits on user profiles, we could be copying gigabytes of history, temporary files, and user data around the network, which might get backed up several times along the way.
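The core of deduplication can be sketched in a few lines: hash each file's content and store only one copy per unique hash. This is a toy illustration, not a production design; real systems deduplicate at block level and handle collision and metadata concerns.

```python
# Hypothetical sketch of content-based deduplication: identical files
# (e.g. fifty copies of the same default user profile) are stored once.
import hashlib

def dedup(files):
    """Take {path: bytes}; return (store of unique blobs by hash, path -> hash)."""
    store = {}
    index = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # first copy wins; duplicates cost nothing
        index[path] = digest
    return store, index

# Three user profiles, two of them byte-identical: only two blobs stored.
store, index = dedup({
    "alice/ntuser.dat": b"default profile",
    "bob/ntuser.dat":   b"default profile",
    "carol/ntuser.dat": b"customised",
})
print(len(store))
```

Applied to the shared storage described above, the same principle means the umpteenth identical copy of a static application file adds an index entry, not gigabytes of duplicate backup traffic.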