At last year’s Supercomputing Conference, we saw a number of immersion cooling options, and this year’s show brought further demonstrations. A so-called “two-phase immersion liquid cooling (2PILC)” system consists mainly of a server with little or no active cooling, completely immersed in a liquid with a low boiling point: usually an engineered organic fluid that comes into direct contact with the silicon, not water or oil.
(Image via AnandTech)
The chip heats up under load and causes the surrounding liquid to boil. Common choices are variants of 3M’s Novec or Fluorinert fluids (boiling point around 59°C).
The vapor produced rises and carries the heat away; it then condenses (on a cooling plate or water-carrying coil) and drips back down, so the liquid circulates continuously between the two phases by convection.
These non-ionic liquids do not conduct electricity and have relatively moderate viscosity, which effectively promotes natural convection. If a chip’s thermal design power is too high, an active circulation solution can be added to assist convection.
Manufacturers clearly want this solution to be widely used in servers and high-performance PCs, keeping all components at reasonable temperatures while enabling ultra-dense designs.
For example, TMGcore’s OTTO system, reported on by AnandTech, uses 2PILC technology: it packs up to 60 kW of data center compute into 16 square feet of floor space.
Customers only need to supply power, water, and network connectivity, and the solution also supports automated pick-and-place handling when maintenance is required. The company says the controlled 2PILC environment can effectively extend the life of the hardware.
Last year, one of the main directions for this technology was cryptocurrency systems and ultra-dense co-processors. At SC19, however, we saw far less of that, let alone any 2PILC servers aimed at 5G edge computing.
Interestingly, some component companies, such as VRM manufacturers, are validating their hardware for 2PILC environments. Data center energy efficiency is typically discussed in terms of power usage effectiveness (PUE): a PUE of 1.50, for example, means the facility draws 1.5 MW in total for every 1 MW actually consumed by the IT equipment.
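The PUE arithmetic above is simple enough to sketch directly; this is an illustrative snippet (the function name is my own, not an industry API), using the 1.5 MW / 1 MW example from the text:

```python
def pue(total_facility_power_mw: float, it_equipment_power_mw: float) -> float:
    """Power usage effectiveness: total facility draw divided by IT equipment draw."""
    return total_facility_power_mw / it_equipment_power_mw

# The example from the text: 1.5 MW total facility draw for 1 MW of IT load.
print(pue(1.5, 1.0))  # → 1.5
```

A PUE of exactly 1.0 would mean every watt entering the facility reaches the computing hardware, with nothing spent on cooling or power delivery.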
Standard air-cooled data centers run at a PUE between 1.3 and 1.5, though some proprietary air-cooled designs can be as low as 1.07. Liquid-cooled data centers come in at around 1.05 to 1.10 PUE, depending on the construction.
The lowest PUE we saw quoted for a standalone 2PILC unit at this year’s supercomputing show was 1.028, and if the cost is only a tenth of a standard air-cooled rack, I believe there are still many companies willing to adopt it.
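To make the gap between those PUE figures concrete, here is a small illustrative calculation (the function and the 1 MW IT load are my own assumptions; the PUE values are the ones quoted in the article):

```python
def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Non-IT overhead (cooling, power delivery, etc.) implied by a given PUE."""
    return (pue - 1.0) * it_load_kw

# Assumed 1 MW (1000 kW) IT load, using PUE figures from the article:
air_cooled = overhead_kw(1.40, 1000)    # typical air-cooled: 400 kW of overhead
two_phase  = overhead_kw(1.028, 1000)   # 2PILC unit: roughly 28 kW of overhead
```

For the same compute load, the 2PILC figure implies over an order of magnitude less overhead power than a typical air-cooled facility, which is where the operating-cost argument comes from.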