Mainframe Strategies for Energy Efficiency
Submitted by admin on Thu, 2015-12-31 01:09
IT consumes significant energy resources. Organizations that substantially reduce energy consumption and costs will free up funds for other areas – research and development, acquiring new technologies, building additional capacity, for instance – where they can deliver greater value to the business.
This guide will help IT managers create a holistic strategy for energy efficiency.
1. Sell the Idea to Management.
Preferably in language your corporate execs will understand.
Don't talk about saving kilowatts, or the fact that the data center is responsible for a huge chunk of the company's carbon footprint. Instead, frame it in business terms: "We are spending X dollars to provide this service today, and by saving energy, we can reduce that amount while providing the same level of service."
Present your case from a "big-picture" perspective with evidence to back it up.
- Determine who pays the electric bill. What does it cost per month to keep the servers running? What was the trend in data center power costs over the past two years?
- Give a projection of your IT infrastructure growth. In the coming period, how much additional capacity is required? Will any large projects affect data center demand in the future?
- Establish a metric you can use consistently to determine how efficient your data center is.
2. Measure Data Center Energy Consumption.
Pick a simple measurement, make a ratio, then improve on it.
For instance, measure how much power actually goes to IT equipment (servers, storage and network gear), and how much is sucked up by air conditioners, or lost in AC/DC (alternating current/direct current) conversions for power equipment. This measurement when expressed as a ratio is commonly known as power usage effectiveness, or PUE.
PUE is found by measuring the power "taken in" to a data center (measured at the utility electric meter), divided by the power actually used to run the IT equipment. So PUE = total facility power divided by IT equipment power. It has been adopted as a measure of data center efficiency by industry groups such as The Green Grid, the Environmental Protection Agency (EPA) and ASHRAE.
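The PUE calculation above can be sketched in a few lines. The meter readings here are hypothetical, purely to illustrate the ratio:

```python
# Hypothetical meter readings in kilowatts (illustrative values only).
total_facility_power_kw = 500.0   # measured at the utility electric meter
it_equipment_power_kw = 312.5     # servers, storage and network gear

# PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal,
# so the closer to 1.0, the less power is lost to cooling and conversion.
pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE = {pue:.2f}")  # -> PUE = 1.60
```

A PUE of 1.6 would mean that for every watt reaching the IT equipment, another 0.6 watts is consumed by cooling, power conversion and other overhead.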
3. Reduce Energy Consumption by Tackling Inefficiencies.
Start with the root components (such as processors), then work upwards progressively.
Application consolidation saves costs in licensing, management resources and hardware support. It also lessens the load on servers, allowing you to scale back on computing resources.
Determine which applications you can actually make a business case for. Some may be multiple versions of the same software. Some may be different products essentially doing the same thing. Others may be forgotten or legacy programs, simply taking up space.
To optimize functions, consolidate applications with others. Where duplication of functions occurs, agree on a standard application to perform that task.
Next, audit your hardware, with an eye to eliminating unnecessary servers. Be ruthless. Round up all the legacy servers in your data center and determine what they're supposed to do. For servers with unknown purposes, send out a general query. Offer a 90-day amnesty for unclaimed hardware, then pull the plug.
In this effort, virtualization is your friend. Unlike distributed servers, mainframes are inherently able to drive high utilization by running lots of disparate work on the same operating system image. They also virtualize those operating systems, so they can share access to the same physical processor resources. This has the dual benefit of reducing the actual physical number of servers required, and getting more computing done within a total fixed energy budget.
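The consolidation argument above can be made concrete with a rough back-of-the-envelope estimate. All the figures here are assumptions for illustration, not data from the article:

```python
import math

# Assumed figures for a consolidation scenario (hypothetical).
distributed_servers = 40    # standalone servers, each lightly loaded
avg_utilization = 0.10      # typical utilization on distributed boxes
target_utilization = 0.80   # achievable on a virtualized host
watts_per_server = 400      # average draw per physical server

# Total useful work, in "fully utilized server" equivalents.
work = distributed_servers * avg_utilization          # 4.0 server-equivalents
hosts_needed = math.ceil(work / target_utilization)   # 5 consolidated hosts

power_before = distributed_servers * watts_per_server
power_after = hosts_needed * watts_per_server
print(f"{distributed_servers} servers -> {hosts_needed} hosts: "
      f"{power_before} W -> {power_after} W")
```

Even with conservative numbers, driving utilization up lets the same workload run on a fraction of the physical machines, which is exactly the effect high-utilization mainframes and virtualization deliver.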
4. Use Active Power Management.
Having removed the unused servers, you still have to deal with the necessary servers that sit idle much of the time.
Active power management lets you safely throttle down servers, or put idle servers to sleep like a laptop, saving a lot on power bills. These tools and settings only work among each manufacturer’s equipment, but they're often included in the price of the hardware (e.g. IBM’s PowerExecutive feature).
Don't worry about the on/off cycles; modern computers are designed to handle 40,000 of these before failure. You’re not likely to approach that number during the average computer’s five-to-seven-year life span.
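A quick sanity check on that claim, using the 40,000-cycle rating and seven-year lifespan from the text plus an assumed (fairly aggressive) sleep/wake schedule:

```python
# Figures from the text: 40,000 rated on/off cycles, up to a 7-year lifespan.
rated_cycles = 40_000
years = 7
cycles_per_day = 4  # assumption: an aggressive power-management schedule

lifetime_cycles = cycles_per_day * 365 * years
print(lifetime_cycles)                  # 10220
print(lifetime_cycles < rated_cycles)   # True -- well under the rating
```

Even cycling four times a day for seven years uses barely a quarter of the rated cycles, which supports not worrying about power-management wear.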
5. Optimize your Infrastructure.
Implement proper floor plan and air conditioning design. The fundamental rule is to keep hot air and cold air separate.
Hot aisle/cold aisle is a data center floor plan where rows of cabinets are configured with air intakes facing the middle of the cold aisle. These cold aisles have perforated tiles, blowing cold air from the computer room air-conditioning (CRAC) units up through the floor.
The servers’ hot-air returns blow exhaust heat into hot aisles from the back of cabinets. Hot air is then sucked into a CRAC unit to be cooled and redistributed through cold aisles.
Use a raised-floor system with rubber gaskets under each tile to minimize air leakage. Put sensitive equipment near the middle of the cabinet row around knee-height, rather than right up against the CRAC or perforated floor.
Modern mainframes have advanced sensors and cooling-control firmware that monitor conditions and make adjustments based on environmental factors such as temperature, humidity and air density.
Instead of dedicated chillers, consider using economizers. These use outside air temperatures to cool servers directly or to cool chilled water.
Consider raising the voltage on power distribution units (PDUs) from 120 volts to 208 volts. It sounds counterintuitive, but the higher the voltage, the more efficiently IT equipment operates.
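One reason higher distribution voltage helps: for a fixed load, current falls as voltage rises, and resistive losses in wiring scale with the square of the current (I²R). The load and wiring resistance below are assumed values for illustration:

```python
# Illustrative only: compare distribution losses at 120 V vs 208 V
# for the same load. Load and resistance are assumptions, not measured data.
load_watts = 5000.0   # hypothetical rack load
r_ohms = 0.05         # assumed fixed wiring resistance

for volts in (120.0, 208.0):
    amps = load_watts / volts
    loss_watts = amps ** 2 * r_ohms   # I^2 * R resistive loss
    print(f"{volts:.0f} V: {amps:.1f} A, ~{loss_watts:.0f} W lost in wiring")
```

At 208 V the same load draws roughly 42% less current, cutting the wiring losses in this sketch by about two-thirds; power supplies also tend to convert more efficiently at the higher input voltage.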