Aquasar

Aquasar is a supercomputer (a high-performance computer) prototype created by IBM Labs in collaboration with ETH Zurich in Zürich, Switzerland, and ETH Lausanne in Lausanne, Switzerland. While most supercomputers use air as their coolant, Aquasar uses hot water, which is the key to its high energy efficiency. Alongside the hot-water-cooled section, an air-cooled section is included so that the cooling efficiency of the two approaches can be compared, and the comparison can later be used to improve the hot-water cooling system's performance. The research program was originally titled "Direct use of waste heat from liquid-cooled supercomputers: the path to energy saving, emission-free high performance computers and data centers." The waste heat produced by the cooling system can be recycled into the building's heating system, potentially saving money. The three-year collaborative project began in 2009 and was developed with the goals of saving energy and reducing environmental impact while delivering top-tier performance.[1][2]

History


Development


The Aquasar supercomputer first came into use at the Department of Mechanical and Process Engineering at the Swiss Federal Institute of Technology Zurich (ETH Zurich) in 2010. ETH Zurich is one of the two schools that make up the Swiss Federal Institute of Technology, the other being ETH Lausanne. High energy efficiency, environmentally friendly computing, and high computing performance were among the main goals in the development of Aquasar. A key part of being environmentally friendly was lowering carbon dioxide emissions: 50% of an air-cooled data center's energy consumption and carbon footprint comes from the cooling system rather than from the computing itself. Work on Aquasar started in 2009 as part of IBM's First-Of-A-Kind (FOAK) program, which encourages IBM researchers and clients to develop new technologies that address real-world business problems.[1] The SuperMUC supercomputer would later adopt the same hot-water cooling concept, and the development of more powerful supercomputers has since explored on-chip cooling as the main cooling method to achieve greater efficiency.

Further Exploration of Hot-Water Cooling


A 2018 academic paper explored approaches to exascale computing (supercomputers capable of at least one exaflops, or a billion billion floating-point operations per second). Exascale supercomputers will be needed for future computing workloads, and they must deliver both high energy efficiency and high cooling efficiency to reach peak performance. The authors considered the possibility of "on-chip" cooling, inspired in part by the Aquasar supercomputer.[3]

Cooling


The Aquasar supercomputer employs "on-chip" cooling.[3] Micro-channel coolers are attached directly to the computer's processing units (the main circuits that perform most of the computer's processing), which produce much of the heat in the system.[1] Micro-channels are channels less than 1 mm in diameter through which the warm coolant runs. Water's high thermal conductivity (its ability to conduct heat) and specific heat capacity (the amount of heat required to raise the temperature of 1 gram by 1 °C) allow the warm-water coolant to be run at approximately 60 °C (roughly 140 °F). Because of water's high thermal conductivity, more heat can be carried away from the processing units, and its heat capacity, roughly 4,000 times that of air, lets the water absorb a large amount of heat and transport it far more efficiently.[2] At this coolant temperature, the processing units operate below their maximum temperature of 85 °C (roughly 185 °F).[1]
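
The role of heat capacity in this design can be summarized with the standard heat-transport relation, given here as a general physics identity for illustration rather than a formula taken from the cited sources:

```latex
\dot{Q} = \dot{m}\, c_p\, \Delta T
```

Here \(\dot{Q}\) is the heat carried away per unit time, \(\dot{m}\) is the coolant mass flow rate, \(c_p\) is the specific heat capacity, and \(\Delta T\) is the temperature rise of the coolant as it passes over the chip. Because water's specific heat capacity (about 4.2 kJ/(kg·K)) and density are far higher than those of air, a modest water flow can remove heat that would otherwise require moving a very large volume of air.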

Mechanical Description


Hardware


The Aquasar contains water-cooled IBM BladeCenter servers (IBM's version of a bare-bones server computer) alongside air-cooled IBM BladeCenter servers, so that the performance of hot-water cooling and air cooling can be contrasted. Both the air-cooled and water-cooled systems are built from IBM BladeCenter H chassis populated with a combination of IBM BladeCenter QS22 and IBM BladeCenter HS22 servers.[1] The system delivers about 6 teraflops (flops are a unit used to measure computing speed) and attains an energy efficiency of about 450 megaflops per watt.[1][4] Pipelines, which can be disconnected and reconnected, link the individual BladeCenter servers to the main network, which in turn connects to the water-transport pipeline network. About 10 liters of water are used for cooling, circulated by a pump at a flow of approximately 30 liters per minute.[4] A sensor system has also been installed to monitor performance, and the scientists hope to use the information from these sensors to optimize the system.[2]
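
As an illustration of how the two cited performance figures relate, the short sketch below divides the peak performance by the energy efficiency. The resulting power draw of roughly 13 kW is an inference from those two numbers, not a value reported in the sources.

```python
# Back-of-the-envelope check relating Aquasar's cited performance figures.
# The implied power draw (~13 kW) is inferred from the two cited numbers,
# not a value reported in the sources.

peak_performance_flops = 6e12         # ~6 teraflops peak performance
efficiency_flops_per_watt = 450e6     # ~450 megaflops per watt

implied_power_watts = peak_performance_flops / efficiency_flops_per_watt
print(f"Implied electrical power: {implied_power_watts / 1e3:.1f} kW")  # ~13.3 kW
```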

Heat Recycling


The warm-water cooling system is a closed loop. The coolant is continuously heated by the processing units and then cooled back down via a heat exchanger (a device that transfers heat between fluids). The transferred heat feeds directly into a building's heating system, such as that of the ETH Zurich building, allowing the heat to be reused effectively.[4] Up to around 80% of the heat produced is recaptured and reused to heat buildings.[5] At the SuperMUC supercomputer, the heat generated by the hot-water coolant is used to heat the rest of the Leibniz-Rechenzentrum campus, saving around US$1.25 million per year. In Aquasar's case, approximately nine kilowatts of thermal energy are delivered to the heating system of the ETH Zurich building.[4]
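
A rough consistency check can be made from the figures cited above (about 9 kW of recovered heat and a 30-liter-per-minute flow), assuming standard properties of water. The resulting coolant temperature rise of roughly 4 °C is an estimate for illustration, not a value reported in the sources.

```python
# Rough consistency check on the cited heat-recovery figures, assuming
# standard properties of water; the ~4 °C coolant temperature rise is an
# estimate, not a value reported in the sources.

thermal_power_watts = 9_000           # ~9 kW delivered to the heating system
flow_litres_per_minute = 30           # pump flow rate of ~30 litres per minute
water_density_kg_per_litre = 1.0      # approximate density of water
specific_heat_j_per_kg_kelvin = 4186  # specific heat capacity of water

mass_flow_kg_per_second = flow_litres_per_minute * water_density_kg_per_litre / 60
temperature_rise = thermal_power_watts / (
    mass_flow_kg_per_second * specific_heat_j_per_kg_kelvin
)
print(f"Estimated coolant temperature rise: {temperature_rise:.1f} °C")  # ~4.3 °C
```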

Benefits


Supercomputer data centers expend 50% of their electrical demand on conventional air cooling, and the use of computers worldwide consumes an estimated 330 terawatt-hours of energy.[2] The air cooling system is the main contributor to the high energy consumption of supercomputers.[6] Aquasar consumes approximately 40% less energy than comparable air-cooled supercomputers. In addition, recycling heat back into the heating system reduces Aquasar's carbon emissions by approximately 85%, since less fossil fuel has to be burned to supply the heating system.[1] Low-energy, liquid-cooled supercomputers can operate at roughly one third the energy cost of air-cooled data-center supercomputers.[6]
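
For scale, the worldwide consumption figure cited above can be expressed as an average continuous power draw. The sketch below assumes the 330 TWh value is an annual total, which the source does not state explicitly.

```python
# Illustrative unit conversion of the cited worldwide consumption figure,
# assuming 330 TWh is an annual total (the source does not state the period).

annual_energy_wh = 330e12   # 330 terawatt-hours
hours_per_year = 365 * 24   # 8,760 hours

average_power_gw = annual_energy_wh / hours_per_year / 1e9
print(f"Average continuous power draw: {average_power_gw:.1f} GW")  # ~37.7 GW
```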

References

  1. ^ a b c d e f g "Made in IBM Labs: IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich". www-03.ibm.com. 2010-07-02. Retrieved 2020-10-26.
  2. ^ a b c d "ETH Zurich: new Aquasar water-cooled supercomputer goes into operation". Science|Business. Retrieved 2020-10-26.
  3. ^ a b Fornaciari, William; Hernandez, Carles; Kulchewski, Michal; Libutti, Simone; Martínez, José Maria; Massari, Giuseppe; Oleksiak, Ariel; Pupykina, Anna; Reghenzani, Federico; Tornero, Rafael; Zanella, Michele (2018). "Reliable power and time-constraints-aware predictive management of heterogeneous exascale systems". Proceedings of the 18th International Conference on Embedded Computer Systems Architectures, Modeling, and Simulation - SAMOS '18. Pythagorion, Greece: ACM Press: 187–194. doi:10.1145/3229631.3239368. ISBN 978-1-4503-6494-2.
  4. ^ a b c d "IBM's Hot-Water Supercomputer Goes Live". Data Center Knowledge. 2010-07-05. Retrieved 2020-10-26.
  5. ^ Zimmermann, Severin; Meijer, Ingmar; Tiwari, Manish K.; Paredes, Stephan; Michel, Bruno; Poulikakos, Dimos (2012-07-01). "Aquasar: A hot water cooled data center with direct energy reuse". Energy. 2nd International Meeting on Cleaner Combustion (CM0901-Detailed Chemical Models for Cleaner Combustion). 43 (1): 237–245. doi:10.1016/j.energy.2012.04.037. ISSN 0360-5442.
  6. ^ a b Ruch, Patrick; Brunschwiler, Thomas; Paredes, Stephan; Meijer, Ingmar; Michel, Bruno (2013). "Roadmap towards Ultimately-Efficient Zeta-Scale Datacenters". Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013. Grenoble, France: IEEE Conference Publications: 1339–1344. doi:10.7873/DATE.2013.276. ISBN 978-1-4673-5071-6.