Data Center – Definition and Solutions

Posted 25/02/2013 by seo3.VS

What is a Data Center?

Data Center – A data center is understood as the area that houses servers, or the computer room; it is where servers and storage equipment are installed, operated and managed.

A data center has four main components:

White space: in data centers that use raised floors, white space is measured in square feet and can refer to anywhere from a few hundred to many thousands of square feet within the data center. For data centers without raised floors, the term may still be used to describe the usable floor area.

Support Infrastructure: this term refers to the additional space and equipment that support the data center's operations, including transformers, UPS units, generators, computer room air conditioners (CRAC), rooftop units (RTU), chillers, air distribution systems and more. In a high-density Tier 3 data center, this support infrastructure can occupy 4 to 6 times as much space as the white space and must be accounted for when planning the data center's construction.

IT equipment: this includes the racks, cabling, servers, storage, management systems and the network gear needed to deliver services in the data center.

Operations: operations staff keep all systems, both the IT equipment and the supporting infrastructure, running reliably, maintaining, upgrading and repairing them as needed. Most companies divide responsibility clearly between the IT technical operations teams and the staff responsible for the supporting facility systems.

How is a Data Center managed?

Running a data center efficiently and reliably requires the combined effort of facilities management and IT.

IT systems: Servers, storage and network equipment must be maintained and upgraded, including operating systems, security patches, applications and system resources (memory, storage, CPU).

Facilities infrastructure: All of the supporting systems in a data center run continuously at or near capacity and must be maintained to keep operating reliably. These systems include cooling, humidification, airflow handling, power distribution, backup generators and other equipment.

Monitoring: When a device, connection or application fails, other critical operations can fail with it. Sometimes one failed system triggers a chain of failures across other applications or systems that depend on its data or services. For example, systems such as inventory control, credit card processing and accounting may all take part in a single complex process such as e-commerce checkout; if one application fails, the whole process fails with it. How applications are monitored to ensure maximum 24/7 uptime depends on each business's workflows.
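
The cascade just described is easy to reason about as a dependency graph. The following minimal Python sketch (the service names are hypothetical) walks such a graph to find every application impacted by a single failure:

    from collections import deque

    # Hypothetical service dependency map: each key depends on the services it lists.
    DEPENDS_ON = {
        "ecommerce_checkout": ["credit_card_processing", "inventory_control"],
        "credit_card_processing": ["payment_gateway"],
        "inventory_control": ["warehouse_db"],
        "accounting": ["warehouse_db", "credit_card_processing"],
    }

    def impacted_services(failed, depends_on):
        """Return every service that directly or transitively depends on `failed`."""
        breaks = {}  # invert the map: which services break when a given one breaks?
        for svc, deps in depends_on.items():
            for dep in deps:
                breaks.setdefault(dep, []).append(svc)
        impacted, queue = set(), deque([failed])
        while queue:
            current = queue.popleft()
            for victim in breaks.get(current, []):
                if victim not in impacted:
                    impacted.add(victim)
                    queue.append(victim)
        return impacted

    # A warehouse database failure cascades to inventory, accounting and checkout.
    print(impacted_services("warehouse_db", DEPENDS_ON))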

Building Management System (BMS): For large data centers, a BMS enables centralized, continuous management of infrastructure such as temperature, humidity, power and cooling.
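
At its core, this kind of monitoring compares sensor readings against operating thresholds. A minimal sketch, with illustrative sensor names and limits (the ranges are assumptions, loosely modeled on common recommended envelopes):

    # Minimal BMS-style threshold check; sensor names and limits are illustrative.
    THRESHOLDS = {
        "temperature_c": (18.0, 27.0),
        "humidity_pct": (40.0, 60.0),
        "ups_load_pct": (0.0, 80.0),
    }

    def check_readings(readings):
        """Return (sensor, value) pairs that are outside their allowed range."""
        alarms = []
        for sensor, value in readings.items():
            low, high = THRESHOLDS[sensor]
            if not low <= value <= high:
                alarms.append((sensor, value))
        return alarms

    print(check_readings({"temperature_c": 31.2, "humidity_pct": 45.0, "ups_load_pct": 85.0}))
    # -> [('temperature_c', 31.2), ('ups_load_pct', 85.0)]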

Management of the IT infrastructure is often outsourced to third-party companies that specialize in monitoring, maintenance and repair.

What Is A Green Data Center?

A green data center is one that can operate with maximum energy efficiency and minimum environmental impact. This includes the mechanical, lighting, electrical and IT equipment (servers, storage, network, etc.). Within corporations, the focus on green data centers is driven primarily by a desire to reduce the tremendous electricity costs associated with operating a data center. That is, going green is recognized as a way to reduce operating expense significantly for the IT infrastructure.

The interest in green data centers is also being driven by the federal government. In 2006, Congress passed public law 109-431 asking the EPA to: “analyze the rapid growth and energy consumption of computer data centers by the Federal Government and private enterprise.”

In response, the EPA developed a comprehensive report analyzing current trends in the use of energy and the energy costs of data centers and servers in the U.S. and outlined existing and emerging opportunities for improving energy efficiency. It also made recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

According to the EPA report, the two largest consumers of electricity in the data center are:

  • Support infrastructure — 50% of total
  • General servers — 34% of total

Since then, significant strides have been made to improve the efficiency of servers. High-density blade servers and storage now offer much more compute capacity per watt of energy, server virtualization is allowing organizations to reduce the total number of servers they support, and the introduction of ENERGY STAR servers rounds out the picture. Together these provide many options for both the public and private sectors to reduce that 34% of electricity being spent on general servers.

Of course, the greatest opportunity for further savings is in the support infrastructure of the data center facility itself. According to the EPA, most data centers consume 100% to 300% more power for their support systems than for their core IT operations. Through a combination of best practices and migration to fast-payback facility improvements (like ultrasonic humidification and tuning of airflow), this overhead can be reduced to about 30% of the IT load.
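
In terms of the PUE metric defined later in this document, support overhead maps directly onto the efficiency ratio: PUE equals 1 plus the overhead expressed as a fraction of the IT load. A quick check in Python:

    # Support overhead as a fraction of IT load implies PUE = 1 + overhead.
    for overhead in (3.0, 1.0, 0.3):  # 300%, 100% and 30% of the IT load
        print(f"overhead {overhead:.0%} of IT load -> PUE {1.0 + overhead:.1f}")
    # 300% -> PUE 4.0, 100% -> PUE 2.0, 30% -> PUE 1.3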

What Are Some Top Stakeholder Concerns About the Data Center?

While the data center must provide the resources necessary for the end users and the enterprise’s applications, the provisioning and operation of a data center is divided (sometimes uncomfortably) between IT, facilities and finance, each with its own unique perspective and responsibilities.

IT: It is the responsibility of the business's IT group to make decisions regarding what systems and applications are required to support the business's operations. IT will directly manage those aspects of the data center that relate to the IT systems, while relying on facilities to provide the data center's power, cooling, access and physical space.

Facilities: The facilities group is generally responsible for the physical space — for provisioning, operations and maintenance, along with other building assets owned by the company. The facilities group will generally have a good idea of overall data center efficiency and will have an understanding of and access to IT load information and total power consumption.

Finance: The finance group will be responsible for aligning near-term vs. long-term capital expenditures (CAPEX) to acquire or upgrade physical assets, and the operating expenses (OPEX) to run them, with overall corporate financial operations (balance sheet and cash flow).

Perhaps the biggest challenge confronting these three groups is that by its very nature a data center rarely will be operating at or even close to its optimally defined range. With a typical life cycle of 10 years (or perhaps longer), it is essential that the data center’s design remains sufficiently flexible to support increasing power densities and various degrees of occupancy over a not insignificant period of time. This in-built flexibility should apply to power, cooling, space and network connectivity. When a facility is approaching its limits of power, cooling and space, the organization will be confronted by the need to optimize its existing facilities, expand them or establish new ones.

What Options are Available When I’m Running Out of Power, Space or Cooling?

Optimize: The quickest way to address this problem and increase available power, space and cooling is to optimize an existing facility. The biggest gains in optimization can be achieved by reducing overall server power load (through virtualization) and by improving the efficiency of the facility. For example, up to 70% of the power required to cool and humidify the data center environment can be conserved with currently available technologies such as outside air economizers, ultrasonic humidification, high efficiency transformers and variable frequency drive units (VFDs). Using these techniques in combination with new, higher-density IT systems will allow many facilities to increase IT capacity while simultaneously decreasing facility overhead.

Move: If your existing data center can no longer be upgraded to support today’s more efficient (but hotter running and more energy-thirsty) higher-density deployments, there may be nothing you can do except to move to a new space. This move will likely begin with a needs assessment/site selection process and will conclude with an eventual build-out of your existing facility or a move to a new building and site.

Outsource: Besides moving forward with your own new facility, there are two other options worth considering:

Colocation: This means moving your data center into space in a shared facility managed by an appropriate service provider. As there is a broad range of business models for how these services can be provided (including business liability), it is important to make sure the specific agreement terms match your short- and long-term needs and (always) take into account the flexibility you require so that your data center can evolve over its lifespan.
Cloud computing: The practice of leveraging shared computing and storage resources — and not just the physical infrastructure of a colocation provider — has been growing rapidly for certain niche-based applications. While cloud computing has significant quality-of-service, security and compliance concerns that to date have delayed full enterprise-wide deployment, it can offer compelling advantages in reducing startup costs, expenses and complexity.

What are some data center measurements and benchmarks and where can I find them?

PUE (Power Usage Effectiveness): Created by members of the Green Grid, PUE is a metric used to determine a data center’s energy efficiency. A data center’s PUE is arrived at by dividing the amount of power entering it by the power used to run the computer infrastructure within it. Expressed as a ratio, with efficiency improving as the ratio approaches 1, data center PUE values typically range from about 1.3 (good) to 3.0 (bad), with an average of 2.5 (not so good).

DCiE (Data Center Infrastructure Efficiency): Created by members of the Green Grid, DCiE is another metric used to determine the energy efficiency of a data center, and it is the reciprocal of PUE. It is expressed as a percentage and is calculated by dividing IT equipment power by total facility power. Efficiency improves as the DCiE approaches 100%. A data center’s DCiE typically ranges from about 33% (bad) to 77% (good), with an average DCiE of 40% (not so good).
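
Both metrics follow from two metered values, total facility power and IT equipment power. A minimal sketch (the kW figures are illustrative):

    # PUE = total facility power / IT power; DCiE is its reciprocal, as a percent.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    def dcie(total_facility_kw, it_equipment_kw):
        return 100.0 * it_equipment_kw / total_facility_kw

    # Example: a facility draws 1,000 kW in total; 400 kW reaches the IT equipment.
    print(f"PUE  = {pue(1000, 400):.2f}")    # 2.50 (about average)
    print(f"DCiE = {dcie(1000, 400):.0f}%")  # 40% (the same fact, inverted)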

LEED Certified: Developed by the U.S. Green Building Council (USGBC), LEED is an internationally recognized green building certification system. It provides third-party verification that a building or community was designed and built using strategies aimed at improving performance across all the metrics that matter most: energy savings, water efficiency, CO2 emission reduction, the quality of the indoor environment, the stewardship of resources and the sensitivity to their impact on the general environment. For more information on LEED, go to www.usgbc.org.

The Green Grid: A not-for-profit global consortium of companies, government agencies and educational institutions dedicated to advancing energy efficiency in data centers and business computing ecosystems. The Green Grid does not endorse vendor-specific products or solutions, and instead seeks to provide industry-wide recommendations on best practices, metrics and technologies that will improve overall data center energy efficiencies. For more on the Green Grid, go to www.thegreengrid.org.

Telecommunications Industry Association (TIA): TIA is the leading trade association representing the global information and communications technology (ICT) industries. It helps develop standards, gives ICT a voice in government, provides market intelligence and certification, and promotes business opportunities and worldwide environmental regulatory compliance. With support from its 600 members, TIA enhances the business environment for companies involved in telecommunications, broadband, mobile wireless, information technology, networks, cable, satellite, unified communications, emergency communications and the greening of technology. TIA is accredited by ANSI.

TIA-942: Published in 2005, the Telecommunications Infrastructure Standards for Data Centers was the first standard to specifically address data center infrastructure and was intended to be used by data center designers early in the building development process. TIA-942 covers:

  • Site space and layout
  • Cabling infrastructure
  • Tiered reliability
  • Environmental considerations

Tiered Reliability — The TIA-942 standard for tiered reliability has been adopted by ANSI based on its usefulness in evaluating the general redundancy and availability of a data center design.

Tier 1 – Basic, No Redundant Components (N): 99.671% availability

  • Susceptible to disruptions from planned and unplanned activity
  • Single path for power and cooling
  • Must be shut down completely to perform preventive maintenance
  • Annual downtime of 28.8 hours

Tier 2 – Redundant Components (limited N+1): 99.741% availability

  • Less susceptible to disruptions from planned and unplanned activity
  • Single path for power and cooling includes redundant components (N+1)
  • Includes raised floor, UPS and generator
  • Annual downtime of 22.0 hours

Tier 3 – Concurrently Maintainable (N+1): 99.982% availability

  • Enables planned activity (such as scheduled preventative maintenance) without disrupting computer hardware operation (unplanned events can still cause disruption)
  • Multiple power and cooling paths (one active path), redundant components (N+1)
  • Annual downtime of 1.6 hours

Tier 4 – Fault Tolerant (2N+1): 99.995% availability

  • Planned activity will not disrupt critical operations and can sustain at least one worst-case unplanned event with no critical load impact
  • Multiple active power and cooling paths
  • Annual downtime of 0.4 hours
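
The annual downtime figures above follow directly from the availability percentages and the 8,760 hours in a year, as this quick check shows:

    # Annual downtime in hours = (1 - availability) * 8760.
    tiers = {"Tier 1": 0.99671, "Tier 2": 0.99741, "Tier 3": 0.99982, "Tier 4": 0.99995}
    for tier, availability in tiers.items():
        print(f"{tier}: {(1.0 - availability) * 8760:.1f} hours/year")
    # Tier 1: 28.8, Tier 2: 22.7, Tier 3: 1.6, Tier 4: 0.4
    # (the commonly quoted Tier 2 figure is 22.0 hours; the rounded availability
    # percentage works out slightly higher)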

Due to the doubling of infrastructure (and space) over Tier 3 facilities, a Tier 4 facility will cost significantly more to build and operate. Consequently, many organizations prefer to operate at the more economical Tier 3 level as it strikes a reasonable balance between CAPEX, OPEX and availability.

Uptime Institute: This is a for-profit organization formed to achieve consistency in the data center industry. The Uptime Institute provides education, publications, consulting and research, and stages conferences for the enterprise data center industry. It is one example of an organization that has adopted the TIA-942 tier rating standard as a framework for formal data center certification. However, it is important to remember that a data center does not need to be certified by the Uptime Institute in order to be compliant with TIA-942.

Is the federal government involved in data centers?

Since data centers consume a large and rapidly growing share of the power grid, they have attracted the attention of the federal government and global regulatory agencies.

Cap and Trade: Sometimes called emissions trading, this is an administrative approach to controlling pollution by providing economic incentives for achieving reductions in polluting emissions. In concept, the government sets a limit (“a cap”) on the amount of pollutants an enterprise can release into the environment. Companies that need to increase their emissions must buy (or trade) credits from those who pollute less. The entire system is designed to impose higher costs (essentially, taxes) on companies that don’t use clean energy sources. The Obama administration has proposed Cap and Trade legislation that is expected to affect U.S. energy prices and data center economics in the near future.

DOE (Department of Energy): The U.S. Department of Energy’s overarching mission is to advance the national, economic, and energy security of the United States. The EPA and the DOE have initiated a joint national data center energy efficiency information program. The program is engaging numerous industry stakeholders who are developing and deploying a variety of tools and informational resources to assist data center operators in their efforts to reduce energy consumption in their facilities.

EPA (Environmental Protection Agency): The EPA is responsible for establishing and enforcing environmental standards in order to safeguard the environment and thereby improve the general state of America’s health. In May 2009 the EPA released Version 1 of the ENERGY STAR® Computer Server specification detailing the energy efficiency standards required by the agency; servers that meet the specification may carry the ENERGY STAR label.

PL 109-431: Passed in December 2006, this law instructs the EPA to report to Congress on the status of IT data center energy consumption, along with recommendations to promote the use of energy-efficient computer servers in the U.S. It resulted in the EPA ENERGY STAR Program’s “Report to Congress on Server and Data Center Energy Efficiency,” delivered in August 2007 and summarized earlier in this document, which pays particular attention to the costs of data centers and servers to the federal government and the opportunities for reducing those costs through improved efficiency.

What should I consider when moving my data center?

When a facility can no longer be optimized to provide sufficient power and cooling — or it can’t be modified to meet evolving space and reliability requirements — then you’re going to have to move. Successful data center relocation requires careful end-to-end planning.

Site selection: A site suitability analysis should be conducted prior to leasing or building a new data center. There are many factors to consider when choosing a site. For example, the data center should be located far from anyplace where a natural disaster — floods, earthquakes and hurricanes — could occur. As part of risk mitigation, locations near major highways and aircraft flight corridors should be avoided. The site should be on high ground, and it should be protected. It should have multiple, fully diverse fiber connections to network service providers. There should be redundant, ample power for long term needs. The list can go on and on.
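
Site-selection teams often reduce such a checklist to a weighted score so that candidate sites can be compared consistently. A toy sketch (the factors, weights and ratings below are all assumptions for illustration):

    # Toy weighted site-scoring model; every factor, weight and rating is assumed.
    WEIGHTS = {
        "natural_disaster_risk": -5,      # risk counts against a site
        "fiber_diversity": 4,
        "power_redundancy": 4,
        "distance_from_flight_paths": 2,
    }

    def score(site):
        return sum(WEIGHTS[factor] * site[factor] for factor in WEIGHTS)

    # Each factor rated 0-10 by the assessment team (illustrative numbers):
    site_a = {"natural_disaster_risk": 2, "fiber_diversity": 9,
              "power_redundancy": 8, "distance_from_flight_paths": 7}
    site_b = {"natural_disaster_risk": 6, "fiber_diversity": 6,
              "power_redundancy": 9, "distance_from_flight_paths": 4}
    print("Site A:", score(site_a), " Site B:", score(site_b))  # higher is better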

Move execution: Substantial planning is required at both the old and the new facility before the actual data center relocation can begin. Rack planning, application dependency mapping, service provisioning, asset verification, transition plans, test plans and vendor coordination are just some of the factors that go into data center transition planning.

If you are moving several hundred servers, the relocation may be spread over many days. If this is the case, you will need to define logical move bundles so that interdependent applications and services can be moved together, allowing you to stay in operation until the day the move is completed.

On move day, everything must go like clockwork to avoid down time. Real time visibility into move execution through a war room or a web-based dashboard will allow you to monitor the progress of the move and be alerted to potential delays that require immediate action or remediation.

What data center technologies should I be aware of?

Alternative Energy: Solar, wind and hydro power show great potential for generating electricity in an eco-friendly manner, and nuclear and hydro are strong candidates for grid-based green power. However, the biggest challenge when it comes to using alternative energy for your data center applications is the need for a constant supply at high service levels. If you use alternative energy but still need to buy from the local power company when hit with peak loads, many of the economic benefits you’re reaping from the alternative energy source will disappear quickly. As new storage mechanisms are developed that capture and store excess capacity so it can be accessed when needed, alternative energy sources will play a much greater role in the data center than they do today. Water- and air-based storage systems show great potential as eco-friendly energy storage options.

Ambient Return: This is a system whereby air returns to the air conditioner unit naturally and unguided. This method is inefficient in some applications because it is prone to mixing hot and cold air, and to stagnation caused by static pressure, among other problems.

Chiller-based cooling: A type of cooling where chilled water is used to dissipate heat in the CRAC unit (rather than glycol or refrigerant). The heat exchanger in a chiller-based system can be air or water cooled. Chiller-based systems provide CRAC units with greater cooling capacity than DX-based systems. Besides removing the DX limitation of a 23°F spread between output and input, the chiller system can adjust dynamically based on load.

Chimney effect: Just as your home chimney leverages air pressure differences to drive exhaust, the same principle can be used in the data center. This has led to a common design with cool air being fed below a raised floor and pulled into the data center as hot air escapes above through the chimney. This design creates a very efficient circulation of cool air while minimizing air mixing.

Cloud computing: This is a style of computing that is dynamically scalable through virtualized resources provided as a service over the Internet. In this model the customer need not be concerned with the technical details of the remote resources. (That’s why it is often depicted as a cloud in system diagrams.) There are many different types of cloud computing options with variations in security, backup, control, compliance and quality of service that must be thoroughly vetted to assure their use does not put the organization at risk.

Cogeneration: This is the use of an engine (typically diesel or natural gas based) to generate electricity and useful heat simultaneously. The heat emitted by the engine in a data center application can be used by an “absorption chiller” (a type of chiller that converts heat energy into cooling) providing cooling benefits in addition to electric power. In addition, excess electricity generated by the system can be sold back to the power grid to defray costs. In practice, the effective ROI of cogeneration is heavily dependent on the spread between the cost of electricity and fuel. The cogeneration alternative will also contribute to a substantial increase in the facility’s CO2 emissions. This runs counter to the trend toward eco-friendly solutions and will create a liability in Cap and Trade carbon trading.
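
The economics hinge on that spread, sometimes called the spark spread: what the generated electricity is worth minus what the fuel to make it costs. A toy calculation (every price and efficiency below is an assumption):

    # Toy spark-spread estimate; all figures are assumptions for illustration.
    fuel_cost_per_kwh_thermal = 0.03  # $ per kWh of heat in the fuel
    engine_efficiency = 0.40          # fraction of fuel energy becoming electricity
    grid_price_per_kwh = 0.12         # $ per kWh avoided by self-generating

    fuel_cost_per_kwh_electric = fuel_cost_per_kwh_thermal / engine_efficiency
    spark_spread = grid_price_per_kwh - fuel_cost_per_kwh_electric
    print(f"fuel cost per electric kWh: ${fuel_cost_per_kwh_electric:.3f}")
    print(f"spark spread: ${spark_spread:.3f}/kWh")  # positive favors cogeneration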

Colocation: Colocation is one of several business models where your data center facilities are provided by another company. In the colocation option, data centers for multiple organizations can be housed in the same facility, sharing common power and cooling infrastructure and facilities management. Colocation differs from a dedicated hosting provider in that the client owns its own IT systems and has greater flexibility in what systems and applications reside in its data center. The lines between the various outsourcing models are blurred, with variations in rights, responsibilities and risks. For this reason, when evaluating new facilities it is important to make sure the business terms align properly with your long-term needs for the space.

Containers: The idea of a data center in a container is that all the power, cooling, space and connectivity can be provisioned incrementally through self contained building blocks, or standard sized shipping containers. These containers can be placed outside your place of business to expand data center capacity or may be deployed in a warehouse type environment. The primary benefit data center containers provide is that they support rapid deployment and are integrated and tuned to support very high power densities. Containers have been embraced for use in cloud type services by Google and Microsoft. The potential downsides of containers are several: they are expensive (more per usable SF than custom built facilities), tend to be homogeneous (designed for specific brands/models of systems) and are intended for autonomous operation (the container must remain sealed to operate within specifications).

CRAC (Computer Room Air Conditioner): A CRAC is a specialized air conditioner for data center applications that can add moisture back into the air to maintain the proper humidity level required by the electronic systems.

DX cooling (direct expansion): A compressor and glycol/refrigerant based system that uses airflow to dissipate heat. The evaporator is in direct contact with the air stream, so the cooling coil of the airside loop is also the evaporator of the refrigeration loop. The term “direct” refers to the position of the evaporator with respect to the airside loop. Because DX-based systems can reduce the air temperature by a maximum of 23°F, they are much more limited in application when compared to more flexible chiller-based systems.

Economizer: As part of a data center cooling system, air economizers expel the hot air generated by the servers/devices outdoors and draw in the relatively cooler outside air (instead of cooling and recirculating the hot air from the servers). Depending on the outdoor temperature, the air conditioning chiller can either be partially or completely bypassed, thereby providing what is referred to as free cooling. Naturally, this method of cooling is most effective in cooler climates.
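
The control decision itself is straightforward. A minimal sketch (the setpoints are assumptions):

    # Simplified economizer mode selection; temperature setpoints are assumed.
    def economizer_mode(outdoor_temp_c, supply_setpoint_c=18.0, margin_c=2.0):
        """Pick a cooling mode based on the outdoor air temperature."""
        if outdoor_temp_c <= supply_setpoint_c - margin_c:
            return "full free cooling (chiller bypassed)"
        if outdoor_temp_c < supply_setpoint_c + margin_c:
            return "partial free cooling (chiller assists)"
        return "mechanical cooling only"

    for temp in (5.0, 17.0, 30.0):
        print(f"{temp:5.1f} C outdoors -> {economizer_mode(temp)}")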

Fan tile: A raised floor data center tile with powered fans that improve airflow in a specific area. Fan tiles are often used to help remediate hot spots. Hot spots are often the result of a haphazard rack and server layout, or an overburdened or inadequate cooling system. The use of fan tiles may alleviate a hot spot for a period of time, but improved airflow and cooling systems that reduce electricity demands generally are a better option for most facilities.

Floor to Ceiling Height: In modern, high-density data centers, the floor to ceiling height has taken on greater importance in site selection. In order to build a modern, efficient facility, best practices now call for a 36-inch (or more) raised floor plenum to distribute cool air efficiently throughout the facility (with overhead power and cabling). In addition, by leveraging the chimney effect and hot air return, the system can efficiently reject the hot air while introducing a constant flow of cool air to the IT systems. To build a facility upgradeable to 400 watts/SF, you should plan on a floor to ceiling height of at least 18 feet. Some data center designs forego a raised floor and utilize custom airflow ducting and vertical isolation. Since this is a fairly labor intensive process and is tuned to a specific rack layout, it may not be suitable for installations where the floor plan is likely to evolve over the life of the data center.

Flywheel UPS system: A low-friction spinning cylinder that generates power from kinetic energy, and continues to spin when grid power is interrupted. The flywheel provides ride-through electricity to keep servers online until the generators can start up and begin providing power. Flywheels are gaining attention as an eco-friendly and space saving alternative to traditional battery based UPS systems. The downside to flywheel power backup is that the reserve power lasts only 15-45 seconds as compared to a 20 minute window often built into battery backups.
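
The design question is simply whether the ride-through outlasts the generator start-up, with some margin. A tiny sketch (the timings are assumptions):

    # Does the ride-through cover generator start-up? All timings are assumed.
    def covers_generator_start(ride_through_s, generator_start_s, safety_factor=2.0):
        return ride_through_s >= generator_start_s * safety_factor

    print(covers_generator_start(30.0, 10.0))      # flywheel: True, with 2x margin
    print(covers_generator_start(20 * 60.0, 10.0)) # 20-minute battery: True, easily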

Hot Aisle/Cold Aisle: Mixing hot air (from servers) and cold air (from air conditioning) is one of the biggest contributors to inefficiencies in the data center. It creates hot spots, inconsistent cooling and unnecessary wear and tear on the cooling equipment. A best practice to minimize air mixing is to align the racks so that all equipment exhausts in the same direction. This is achieved simply by designating the aisles between racks as either exclusively hot-air outlets or exclusively cool-air intakes. With this type of deployment, cold air is fed to the front of the racks by the raised floor and then exhausted from the hot aisles overhead.

NOC (Network Operations Center): A service responsible for monitoring a computer network for conditions that may require special attention to avoid a negative impact on performance. Services may include emergency support to remediate Denial-of-Service attacks, loss of connectivity, security issues, etc.

Rack Unit: A rack unit or U (less commonly, RU) is a unit of measure describing the height of equipment intended for mounting in a computer equipment mounting rack. One rack unit is 1.75 inches (44.45 mm) high.
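
Because the unit is fixed at 1.75 inches, rack heights are a simple multiplication; for example, for a common 42U rack:

    # 1U = 1.75 in = 44.45 mm; a typical full-height rack is 42U.
    def rack_height(units):
        inches = units * 1.75
        return inches, inches * 25.4  # (inches, millimetres)

    inches, mm = rack_height(42)
    print(f"42U = {inches:.2f} in = {mm:.1f} mm")  # 73.50 in = 1866.9 mm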

RTU (Rooftop Unit): RTUs allow facilities operators to place data center air conditioning components on the building’s roof, thereby conserving raised white space while improving efficiency. In addition, as higher performance systems become available, RTUs can be easily upgraded without affecting IT operations.

Power-density: As servers and storage systems evolve to become ever more powerful and compact, they place a greater strain on the facility to deliver more power, reject more heat and maintain adequate backup power reserves (both battery backup and onsite power generation). When analyzing power-density, it is best to think in terms of kW/rack and total power, not just watts per square foot (which is a measure of facility capacity). Note: See watts per square foot.

Power Density Paradox: Organizations with limited data center space often turn to denser equipment to make better use of the space available to them. However, due to the need for additional power, cooling and backup to drive and maintain this denser equipment, an inversion point is reached where the total need for data center space increases rather than falls. This is the power density paradox. The challenge is to balance the density of servers and other equipment with the availability of power, cooling and space in order to gain operating efficiencies and lower net costs.
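
A toy model makes the inversion point visible. Suppose the support space needed per kW grows with rack density; every parameter below is an assumption chosen only to illustrate the shape of the curve:

    # Toy power-density model; all parameters are illustrative assumptions.
    TOTAL_IT_LOAD_KW = 1000.0
    RACK_FOOTPRINT_SQFT = 30.0  # rack plus clearance (assumed)

    def support_sqft_per_kw(kw_per_rack):
        # Assume denser racks need disproportionately more cooling/backup space.
        return 2.0 + 0.8 * kw_per_rack

    def total_space(kw_per_rack):
        racks = TOTAL_IT_LOAD_KW / kw_per_rack
        it_space = racks * RACK_FOOTPRINT_SQFT
        support_space = TOTAL_IT_LOAD_KW * support_sqft_per_kw(kw_per_rack)
        return it_space + support_space

    for density in (2, 4, 6, 8, 12):
        print(f"{density:2d} kW/rack -> {total_space(density):,.0f} sq ft total")
    # Total space falls as racks get denser, bottoms out near 6 kW/rack in this
    # toy model, then rises again: the power density paradox.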

Raised-floor plenum: This is the area between the data center sub floor and the raised floor tiles. It is typically used to channel pressurized cold air up through floor panels to cool equipment. It has also been used to route network and power cables, but this is not generally recommended for new data center design.

Remote hands: In a hosted or colocation data center environment, remote hands refers to the vendor-supplied, on-site support services for engineering assistance, including the power cycling of IT equipment, visual inspection, cabling and sometimes even the swap-out of systems.

Steam Humidification: Through the natural cooling process of air conditioning, the humidity levels of a data center are reduced, just as you would find in a home or office air conditioning environment. However, due to the constant load of these AC systems, too much moisture is removed from most IT environments and must be reintroduced to maintain proper operating humidity levels for IT equipment. Most CRAC units use a relatively expensive heat/steam generation process to increase humidity. These steam-based systems also increase the outflow temperature from the CRAC unit and decrease its overall cooling effectiveness. See: Ultrasonic humidification

Ultrasonic Humidification: Ultrasonic humidification uses a metal diaphragm vibrating at ultrasonic frequencies and a water source to introduce humidity into the air. Because they do not use heat and steam to create humidity, ultrasonic systems are 95% more energy efficient than the traditional steam-based systems found in most CRAC units. Most environments can easily be converted from steam-based to ultrasonic humidification.

UPS (Uninterruptible Power Supply): This is a system that provides backup electricity to IT systems in the event of a power failure until the backup power supply can kick in. UPS systems are traditionally battery and inverter based systems, with some installations taking advantage of flywheel-based technology.

VFD (Variable Frequency Drive): A system for controlling the rotational speed of an alternating current (AC) electric motor by controlling the frequency of the electrical power supplied to the motor. VFDs save energy by allowing the volume of fluid or air moved to adjust to the system’s demands, rather than having the motor always operate at full capacity.
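
The savings are large because, under the standard fan affinity laws, the power a fan draws varies with roughly the cube of its speed:

    # Fan affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
    def relative_fan_power(speed_fraction):
        return speed_fraction ** 3

    for speed in (1.0, 0.8, 0.5):
        print(f"{speed:.0%} speed -> {relative_fan_power(speed):.1%} of full power")
    # 100% -> 100.0%, 80% -> 51.2%, 50% -> 12.5%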

Virtualization: As servers have become more and more powerful, they have also (in general) become underutilized. The challenge to IT organizations has been to compartmentalize applications so they can be self contained and autonomous while at the same time sharing compute capacity with other applications on the same device. This is the challenge addressed by virtualization. Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources. Through virtualization, multiple resources can reside on a single device (thereby addressing the problem of underutilization) and many systems can be managed on an enterprise-wide basis.
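
The capacity argument for consolidation is simple arithmetic. A hedged sketch (the utilization figures are assumptions):

    # Toy consolidation estimate; utilization and headroom figures are assumed.
    import math

    physical_servers = 200
    avg_utilization = 0.10          # a typically underutilized server
    host_target_utilization = 0.70  # leave headroom on each virtualization host

    # Total useful work, in "fully busy server" equivalents:
    work = physical_servers * avg_utilization
    hosts_needed = math.ceil(work / host_target_utilization)
    print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts")
    # 200 servers at 10% load fit on about 29 hosts running at 70% load.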

Watts per Square Foot: When describing a data center’s capacity, watts per square foot is one way to describe the facility’s aggregate capacity. For example, a 10,000 square foot facility with 1 MW of power and cooling capacity will support an average deployment of 100 watts per square foot across its raised floor. Since some of this space may have CRAC units and hallways, the effective power density supported by the facility may be much greater (up to the 1 MW total capacity). Facilities designed for 60 W/SF deployments just a few years ago cannot be upgraded to support the 400 W/SF loads demanded by modern, high-density servers.
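
A quick check of the arithmetic:

    # Average facility capacity in watts per square foot.
    def watts_per_sqft(total_power_w, raised_floor_sqft):
        return total_power_w / raised_floor_sqft

    print(watts_per_sqft(1_000_000, 10_000))  # 100.0 W/SF, as in the example above
    # The same 1 MW concentrated on 2,500 sq ft of actual racks is 400 W/SF,
    # which is why effective density can far exceed the facility-wide average.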
