Wednesday, 22 April 2009

How green is Silicon Valley?

Well, what do we think?

Being closely involved in the world that encompasses these almighty cathedrals to technology that we call "data centres", I seriously wonder how green we can actually get them. It may be unfair to single out this particular part of the world, as there's nowhere that's any less guilty of wasting energy.

The prime problem is how inefficient the equipment itself is at using the power that we put into it. Very few "conventional" server/network chassis can boast better than a 20% efficiency rating, and, as you've most likely guessed, the rest simply comes out of the box as heat.
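To put a number on that, here's a back-of-the-envelope sketch. The 20% figure is the one quoted above; the 10 kW rack draw is purely a hypothetical value for illustration.

```python
# Rough illustration: how much of a rack's power draw ends up as heat.
# The 10 kW rack draw is a hypothetical figure; the 20% efficiency is
# the rating quoted in the text above.

def waste_heat_kw(input_power_kw: float, efficiency: float) -> float:
    """Power that leaves the chassis as heat rather than useful work."""
    return input_power_kw * (1.0 - efficiency)

rack_draw_kw = 10.0   # hypothetical rack drawing 10 kW
efficiency = 0.20     # the ~20% figure quoted above

print(waste_heat_kw(rack_draw_kw, efficiency))  # 8.0 -> 8 kW of heat to remove
```

In other words, for every 10 kW you feed a rack like this, the cooling plant has to carry away 8 kW of heat before you've done anything useful at all.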

Thus we have a situation where the problem has to be solved in the long term by the manufacturers. Moves are being made in this direction: more efficient ways of distributing power to the servers, more use of blade servers, far more intelligent power and performance management software in the servers themselves, and so on.

Meanwhile, in the short term, we just have to make the environment the equipment lives in far more efficient at dealing with the problem it is causing us. Here are some ideas from my addled brain.

How much heat in the data centre is coming from outside the building? In certain climates, I suspect quite a lot at certain times of the year, yet I have visited places where the insulation around the DC's suites has been non-existent. Conversely, how much cooling can we achieve from the outside of the building by just allowing the outside atmosphere to get in? Some method of controlled heat transfer when the outside air is cool enough cannot be beyond the wit of man.
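The "let the outside air in when it's cool enough" idea above is essentially what an air-side economiser does. A minimal sketch of the decision logic might look like this; the setpoint and the hysteresis band are hypothetical values chosen for illustration, not anyone's recommended figures.

```python
# A minimal sketch of air-side economiser logic: open a damper to the
# outside air when it is cool enough to do the job, instead of running
# mechanical cooling. The temperatures and hysteresis band here are
# hypothetical illustration values.

def damper_open(outside_c: float, supply_setpoint_c: float,
                currently_open: bool, hysteresis_c: float = 2.0) -> bool:
    """Decide whether to use outside air instead of mechanical cooling.

    The damper opens only when outside air is comfortably below the
    supply setpoint; once open, it stays open until the outside air
    reaches the setpoint. The hysteresis band stops it chattering
    open and shut around the threshold.
    """
    if currently_open:
        return outside_c < supply_setpoint_c
    return outside_c < supply_setpoint_c - hysteresis_c

# e.g. with a hypothetical 18 degree C supply setpoint:
print(damper_open(12.0, 18.0, currently_open=False))  # True
print(damper_open(17.0, 18.0, currently_open=False))  # False (inside the band)
print(damper_open(17.0, 18.0, currently_open=True))   # True (stays open)
```

A real installation would also have to watch humidity and air quality before throwing the damper open, but the basic control idea really is this simple.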

Can the excess heat be converted back to usable electrical power, or indeed be concentrated enough to help power the air conditioning systems: a combined heat and power system, maybe?

How do we cool the equipment itself? The current method in an awful lot of DCs, even brand new ones, is to cool the suite rather than the equipment. This is a hangover from mainframe days, when that was really the only viable way of doing it. I come from a broadcast engineering background, where contained cooling within an area has always been the norm, so this approach has never made sense to me.

This has got to change.

CFOs have got to realise that saving money at the capital stage of a project by just throwing air handling units around a large open area will only increase the company's costs once it reaches the stage of going into operation, especially as a large number of governments are introducing carbon taxes in some form or another.
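The CFO argument can be made with some very rough numbers. The sketch below compares a cheap open-plenum build against a dearer contained-cooling build over a few years of running costs, scaling the IT load up to total facility draw with a PUE-style overhead factor. Every figure here is a made-up assumption for illustration, not a quoted price.

```python
# Back-of-the-envelope illustration of the CFO argument: the cheaper
# build with poor cooling containment can cost more over its life once
# electricity is factored in. Every number below is a hypothetical
# assumption, not a real quote. "pue" is the facility overhead factor
# (total facility power divided by IT power).

def lifetime_cost(capex: float, it_load_kw: float, pue: float,
                  years: int = 5, price_per_kwh: float = 0.10) -> float:
    """Capex plus energy cost over the period."""
    hours = years * 365 * 24
    return capex + it_load_kw * pue * hours * price_per_kwh

# Hypothetical 500 kW IT load, 5-year horizon:
open_plenum = lifetime_cost(capex=1_000_000, it_load_kw=500, pue=2.0)
contained   = lifetime_cost(capex=1_300_000, it_load_kw=500, pue=1.4)

print(open_plenum > contained)  # True: the "cheap" build costs more overall
```

A carbon levy on the electricity bill only widens that gap, which is exactly the point.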

Bite the bullet, guys, and consider doing this now: there are options out there already, some of which can be easily (and relatively cheaply) retrofitted to your existing installation.

Finally, an idea for the manufacturers out there, the Dells, HPs and Ciscos of this world. Broadcasting, especially national broadcasting, uses a fair number of high-power transmitters to reach its audience. To keep these enormous beasts going, they are cooled directly: the coolant pipes are plumbed straight into the output stage and the liquid pumped through them, rather than having an intermediate "air stage" in the process.

It's already being done on larger-scale computers: think Cray and the like. Is it really so mad an idea to do this to even smaller machines like, say, a C7000?

Be lucky and stay safe out there people!
