Facebook's new energy efficient data center

Facebook's New Green Data Center Initiatives

Introduction to Facebook's Project

  • Facebook has launched a new project focused on enhancing the efficiency of data centers and servers, indicating a shift towards greener technology.
  • Cloud infrastructure is now more energy-efficient than the traditional servers used over the last decade, which should reassure consumers concerned about environmental impact.

Cloud Technology and Energy Efficiency

  • Cloud companies are actively working to improve energy efficiency, which aligns with consumer demand for greener processes in services like Gmail and Facebook.
  • Innovations in computing are being driven by open-source principles and crowdsourcing, fostering a new era of collaborative thinking.

Insights from Tom Furlong

  • Tom Furlong, Director of Site Operations at Facebook, discusses his role in overseeing data center operations and innovations aimed at sustainability.

Energy Efficiency Features

  • The facility employs an LED lighting system that adjusts based on occupancy, significantly improving energy efficiency compared to incandescent lights.
  • The building aims for LEED Gold certification through various sustainable practices such as rainwater reuse for toilets and heat recycling from data centers.

Cooling Systems Explained

  • An evaporative cooling system requires pure water; thus, it undergoes reverse osmosis to eliminate minerals before use.
  • A UV system is utilized to kill biological materials in the water supply for cooling systems.

Airflow Management Techniques

  • Outside air is drawn into the building through dampers; this air undergoes filtering and mixing before cooling the data center.
  • Warm air from servers can be reused during colder days to maintain optimal temperatures within the facility.
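The air-mixing step above amounts to blending cold outside air with warm return air to hit a target supply temperature. A minimal sketch, assuming constant air density and specific heat so the blend is a simple mass-weighted average (the temperatures and mixing fraction below are illustrative, not figures from the video):

```python
def mixed_air_temp(t_outside_f, t_return_f, return_fraction):
    """Mass-weighted average temperature of an outside/return air blend.

    return_fraction is the share of recirculated warm server air (0..1);
    constant air density and specific heat are assumed for simplicity.
    """
    return (1 - return_fraction) * t_outside_f + return_fraction * t_return_f

# On a 40 °F day, blending in 40% warm return air at 95 °F:
print(mixed_air_temp(40.0, 95.0, 0.4))  # ≈ 62 °F supply air
```

Opening the recirculation dampers wider (a larger `return_fraction`) raises the supply temperature on cold days without any heating equipment.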

Temperature Control Strategies

  • Servers are designed to operate efficiently at higher temperatures (up to 85°F), challenging previous assumptions about server resilience under heat stress.
  • The facility uses dampers for recirculating warm air while ensuring proper mixing with cooler incoming air.

Data Center Cooling and Energy Efficiency

Overview of Air Filtration Systems

  • The data center employs a dual filtration system: a coarse filter on the outside and a finer, more expensive filter underneath. Understanding the replacement frequency of these filters is crucial due to their impact on airflow.

Balancing Airflow and Energy Consumption

  • Filters restrict airflow, causing fans to work harder and consume more energy. A balance must be struck between effective filtration and maintaining optimal airflow.
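The filtration/airflow trade-off can be made concrete with the ideal fan power relation P = Q·Δp / η: a dirtier or finer filter raises the pressure drop Δp, and fan power rises in proportion. A rough sketch; the airflow, pressure-drop, and efficiency numbers are illustrative assumptions, not values from the video:

```python
def fan_power_w(airflow_m3_s, pressure_drop_pa, fan_efficiency=0.65):
    """Ideal fan shaft power needed to push air through a resistance.

    P = Q * delta_p / eta. The 65% fan efficiency is an illustrative
    assumption.
    """
    return airflow_m3_s * pressure_drop_pa / fan_efficiency

clean = fan_power_w(50.0, 150.0)   # hypothetical clean-filter pressure drop
dirty = fan_power_w(50.0, 250.0)   # loaded filters adding ~100 Pa
print(f"extra fan power from loaded filters: {dirty - clean:.0f} W")
```

This is why filter replacement frequency matters: the energy penalty of a loaded filter accrues continuously until it is swapped.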

Temperature Control Mechanisms

  • Temperature sensors in the room monitor conditions to determine necessary cooling or humidification levels. Higher temperatures trigger cooling, while lower temperatures require added humidity.
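The sensor-driven decision above is essentially threshold logic. A minimal sketch: the 80.6 °F upper threshold is quoted later in this document, but the 20% humidity floor and the action names are illustrative assumptions:

```python
def control_action(temp_f, humidity_pct,
                   cool_above_f=80.6, humidify_below_pct=20.0):
    """Pick an air-handling action from room sensor readings.

    The 80.6 °F cooling threshold matches the figure quoted in the video;
    the 20% relative-humidity floor is an illustrative assumption.
    """
    if temp_f > cool_above_f:
        return "evaporative cooling"
    if humidity_pct < humidify_below_pct:
        return "add humidity"
    return "free cooling (outside air only)"

print(control_action(84.0, 35.0))   # hot room -> evaporative cooling
print(control_action(70.0, 10.0))   # dry room -> add humidity
```

A real building-management system would add hysteresis and rate limits so the dampers and pumps do not oscillate around the thresholds.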

Evaporative Cooling System

  • The misting system utilizes high-pressure pumps with adjustable nozzles for moisture control based on environmental conditions, ensuring efficient humidity management without excess water droplets entering the data center.

Mist Eliminator Functionality

  • The mist eliminator captures unevaporated water droplets, preventing moisture from accumulating inside the data center, which is critical for equipment safety.

Efficient Air Distribution

  • Water not evaporated during cooling is collected and recycled. Fans then draw the conditioned air through the system and into the designated spaces below the data center floor.

Fan Efficiency in Data Centers

  • High-efficiency fan walls utilize small motors that adjust speed based on air demand, optimizing energy use compared to smaller server fans.
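The energy advantage of variable-speed fan walls follows from the fan affinity laws: shaft power scales roughly with the cube of speed, so modest slowdowns yield large savings. A sketch under that standard assumption; the 5 kW rating and 60% speed figures are illustrative, not from the video:

```python
def fan_power_at_speed(rated_power_w, speed_fraction):
    """Fan affinity law: shaft power scales with the cube of speed.

    Running a large fan wall at partial speed during low air demand is
    why it can beat many small server fans running flat out.
    """
    return rated_power_w * speed_fraction ** 3

# A hypothetical fan rated at 5 kW, slowed to 60% speed on a mild day:
print(fan_power_at_speed(5000, 0.6))  # ≈ 1080 W, only 21.6% of rated power
```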

Open Compute Hardware Design

  • Open Compute hardware design minimizes server fan workload by pressurizing the space below the servers, so larger, more efficient facility fans do most of the work and energy is saved.

Direct Power Distribution Benefits

  • Unlike traditional setups with centralized UPS systems that incur energy losses (11% - 17%), this design allows direct power distribution from utility sources to racks, enhancing efficiency.
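The 11%–17% figure makes the savings easy to quantify. A back-of-the-envelope sketch: the 14% midpoint and the ~2% residual loss for the direct path are illustrative assumptions, not quoted values:

```python
def delivered_power_kw(utility_kw, loss_fraction):
    """Power actually reaching the servers after distribution losses."""
    return utility_kw * (1 - loss_fraction)

utility_kw = 1000.0
# Traditional path: centralized UPS/PDU losses of 11-17% (14% midpoint assumed)
traditional = delivered_power_kw(utility_kw, 0.14)
# Direct-to-rack path: a ~2% residual loss is an illustrative assumption
direct = delivered_power_kw(utility_kw, 0.02)
print(f"extra useful power per MW of utility feed: {direct - traditional:.0f} kW")
```

Across a facility drawing many megawatts, eliminating the UPS conversion stages compounds into substantial energy and cost savings.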

Heat Management Strategies

  • Servers generate heat that is managed through an evaporative cooling system rather than mechanical means. Cold air floods aisles before returning via shafts for reuse or atmospheric release.

Server Design Innovations

  • Open Compute servers are engineered for energy efficiency by optimizing component placement and using larger heatsinks for better heat dissipation.

Simplifying Server Maintenance

Accessible Design for Technicians

  • The design of the server trays prioritizes ease of service, ensuring all components are front-accessible.
  • Technicians only need to disconnect power cords and network connections to remove a server tray.
  • This streamlined process allows for quick replacement or maintenance of server trays without complex procedures.

User Experience in Data Rendering

  • Machines dynamically render web pages by analyzing user data, such as friends and posts.
  • The system utilizes caching servers for temporary data storage, enhancing page load speed.
  • Core data is maintained in database servers, providing a permanent repository for essential information.
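The caching-plus-database read path described above is the classic cache-aside pattern. A minimal sketch; the dict-based `cache` and `database` and the `render_page` helper stand in for Facebook's caching and database tiers and are purely illustrative:

```python
def render_page(user_id, cache, database):
    """Cache-aside read path: try the caching tier first, fall back to
    the database of record, then warm the cache for later requests."""
    data = cache.get(user_id)
    if data is None:
        data = database[user_id]   # permanent repository of core data
        cache[user_id] = data      # temporary storage for faster loads
    return f"<html>friends of {data['name']}: {data['friends']}</html>"

# Hypothetical stores standing in for the cache and database tiers:
database = {42: {"name": "alice", "friends": ["bob", "carol"]}}
cache = {}
page = render_page(42, cache, database)   # miss: hits database, warms cache
page = render_page(42, cache, database)   # hit: served from cache
```

Serving repeat reads from the cache is what keeps page loads fast while the database remains the single source of truth.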

Video description

Cloud technologies power some of the Internet's most well-known sites—Picasa, Gmail, Facebook and Zynga, just to name a few—and cloud companies are striving to make the computer processing behind these sites as energy efficient as possible. With that in mind, Facebook, Dell, HP, Rackspace, Skype, Zynga and others have teamed up to form the Open Compute Project to share best practices for building more energy-efficient and economical data centers. To kick-start the project, Facebook unveiled its innovative new data center and contributed the specifications and designs to Open Compute. "Cloud companies are working hard to become more and more energy efficient...[and] this is a big step forward today in having computing be more and more green," explains Graham Weston, Chairman of Rackspace.

A small team of Facebook engineers has been working on the project for two years, custom designing the software, servers and data center from the ground up. One of the most significant features of the facility is that Facebook eliminated the centralized UPS system found in most data centers. "In a typical data center, you're taking utility voltage, you're transforming it, you're bringing it into the data center and you're distributing it to your servers," explains Tom Furlong, Director of Site Operations at Facebook. "There are some intermediary steps there with a UPS system and with energy transformations that occur that cost you money and energy—between about 11% and 17%. In our case, you do the same thing from the utility, but you distribute it straight to the rack, and you do not have that energy transformation at a UPS or at a PDU level. You get very efficient energy to the actual server. The server itself is then taking that energy and making useful work out of it."

To regulate temperature in the facility, Facebook utilizes an evaporative cooling system. Outside air comes into the facility through a set of dampers and proceeds through a succession of stages where the air is mixed, filtered and cooled before being sent down into the data center itself. "The system is always looking at [the conditions] coming in," says Furlong, "and then it's trying to decide, 'What is it that I want to present to the servers? Do I need to add moisture to [the air]? How much of the warm air do I add back into it?'" The upper temperature threshold for the center is set at 80.6 degrees Fahrenheit, but it will likely be raised to 85 degrees, as the servers have proven capable of tolerating higher temperatures than originally thought.

The servers used in the data center are unique as well. They are "vanity free"—no extra plastic and significantly fewer parts than traditional servers. And through thoughtful placement of the memory, CPU and other parts, they are engineered to be easier to cool.

Now that these plans and specifications have been released as part of the Open Compute Project, the goal is for other companies to benefit from and contribute to them. "Open source, crowdsourcing, Wikipedia—these are all capitalizing on, or enabled by, the same force," explains Weston, "which is that when things are open, there's more innovation around them."

More info:
Facebook announcement: http://tinyurl.com/4x67au9
Open Compute Project web site: http://opencompute.org/