One of the halls at Africa Data Centres’ JHB1 building in Midrand.
Datacentres all over the world are searching for ways to become greener and more efficient. This is also true of Africa Data Centres (ADC), which has facilities in Johannesburg and Cape Town, as well as Nairobi and Lagos. There are plans to build another 10 datacentres on the continent, including in Nigeria, Kenya, Morocco and Egypt. In South Africa, it has around 25MW of live IT load, with another 20MW in production.
Angus Hay, regional executive at ADC, says the big challenge for any datacentre is going to be energy efficiency, because, ultimately, that’s what the customer is paying for. It’s a complicated business. Energy efficiency will hinge on the design of the chillers, and how efficiently free cooling, temperature adjustments and hot and cold aisle containment are managed.
“Energy efficiency, in its own right, is a goal you need to have regardless of whether you’re green or not,” says Hay. “It’s all about design. There’s a little bit you can do on the operational side, but it’s essentially about the design of the cooling system.”
He says all the datacentres that ADC has built, and most of the larger datacentres in Africa, are using external, air-cooled chillers, with a closed-loop water system to distribute the heat. A consequence of closed-loop cooling is that almost no water is wasted. There’s a lot of noise around the world at present about how much water datacentres consume, he adds. “We don’t consume any. It’s a huge differentiator, and a reason to move continents [to Africa], because quite a number of European and US datacentres basically evaporate water, some as the primary cooling method, some as a way to improve the cooling of the chillers.”
It will also help to keep the chillers under cover. ADC’s JHB1 datacentre in Midrand has a soft-shell roof keeping its chillers in the shade.
Hay says most datacentres leave their chillers at the mercy of the sun, “which makes them about as inefficient as you can make them.
“You’re taking something that’s supposed to be cooling the environment, and you’re standing it in the sun. In European and North American environments, they would just spray water all over them.”
ADC also gets better efficiency from free cooling. When the outside temperature is below 17°, the refrigeration units are turned off. Johannesburg gets around 180 days a year where free cooling can be used, which means ADC uses 5% to 10% less energy than it would if it were running the chillers year-round.
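As a rough illustration of where a figure in that 5% to 10% range comes from, the sketch below weights an assumed chiller-on PUE against an assumed free-cooling PUE by the number of free-cooling days. The PUE values are illustrative assumptions for the arithmetic, not numbers published by ADC.

```python
# Back-of-envelope estimate of annual energy saved by free cooling.
# The PUE values below are illustrative assumptions, not ADC figures.
PUE_CHILLERS = 1.30      # assumed PUE when mechanical chillers are running
PUE_FREE = 1.10          # assumed PUE when only fans and pumps run (free cooling)
FREE_COOLING_DAYS = 180  # days per year Johannesburg is cool enough, per the article

free_fraction = FREE_COOLING_DAYS / 365
blended_pue = free_fraction * PUE_FREE + (1 - free_fraction) * PUE_CHILLERS

# For a fixed IT load, total facility energy scales with PUE, so the saving
# is the relative drop in PUE versus running the chillers all year.
saving = 1 - blended_pue / PUE_CHILLERS
print(f"Blended PUE: {blended_pue:.2f}, annual energy saving: {saving:.1%}")
# -> roughly 7-8%, consistent with the 5% to 10% range quoted above
```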
At the JHB1 datacentre, the set-point temperature is around 23° or 24°. Older datacentres, by contrast, “are like fridges”, Hay says. The colder the set-point temperature, the more energy is used. ADC thus aims for a temperature at the midpoint of the Ashrae (American Society of Heating, Refrigerating and Air-Conditioning Engineers) range, which is between 18° and 30°.
With hot and cold aisle containment, the datacentre operator contains the air so that it flows through the racks and nowhere else. “You want all of the impact of that cool air to go through the racks. If you have a few racks standing in an open area, that’s fine, but if you have 300 racks in a large environment, you’ll want containment.” He says ADC has 3MW rooms where the inlet temperature is 23° or 24°, and then it’s contained. The air that’s expelled can be 30°. Older datacentres may struggle with equipment that’s old and doesn’t cool itself efficiently; he likens it to an old PC with blocked fan ports.
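To get a sense of the airflow a contained 3MW hall implies, the sketch below applies the standard sensible-heat relation Q = ṁ·cp·ΔT to the inlet and exhaust temperatures Hay describes. The air density and the exact temperature split are assumptions chosen for illustration.

```python
# Rough airflow needed to carry heat out of a contained 3MW data hall,
# using the sensible-heat relation Q = m_dot * cp * delta_T.
# Inlet/exhaust temperatures follow the article; the density is an assumption
# (air at Johannesburg's ~1,700m altitude is thinner than at sea level).
Q_WATTS = 3_000_000        # IT heat load for the hall (3MW)
CP_AIR = 1005.0            # specific heat of air, J/(kg*K)
T_INLET_C = 23.5           # supply air into the cold aisle (23-24 degrees)
T_EXHAUST_C = 30.0         # hot-aisle return air
RHO_AIR = 1.0              # assumed air density in kg/m^3 at altitude

delta_t = T_EXHAUST_C - T_INLET_C
mass_flow = Q_WATTS / (CP_AIR * delta_t)   # kilograms of air per second
volume_flow = mass_flow / RHO_AIR          # cubic metres per second

print(f"Mass flow: {mass_flow:.0f} kg/s, volume flow: {volume_flow:.0f} m^3/s")
# -> several hundred cubic metres of air every second, which is why containment
#    and blanking panels matter: any air that bypasses the racks is wasted fan energy.
```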
It’s also important that containment is done efficiently. “You can’t have a rack with gaps in it,” he says; any gaps need to be covered with blanking panels so that air is forced through the equipment rather than leaking around it.
“It’s a balancing act. You can’t turn up the temperature too high, so that’s why we aim for a mid-point in the range. If you try to cool with air that’s too warm, it’s not going to cool.”
GPUs and CPUs typically run at 60° or 70°, which is why it’s most efficient to channel air onto them at between 18° and 24°. “We’re basically sucking energy out of the system,” Hay says.
He poses a question: “How much of the energy that goes into electronics is converted to heat? It’s a fascinating number. The answer is 100% of what goes into electronics is coming out as heat. There’s a very tiny amount, at the quantum level, that is moving the electrons around, which constitutes the actual calculations that are happening, but it’s so small that you can’t measure it. You have to extract the same amount of energy in heat that you’re putting in, in power.”
ADC measures the amount of energy it takes to extract that heat. “If you do a net heat equation for a datacentre, whatever electricity is going in, that’s what heat is coming out,” he explains. “It’s as simple as that. How you actually extract it efficiently is exactly the same as working out the efficiency of an aircon.” The Power Usage Effectiveness (PUE) number reflects the amount of energy the facility has to put into the aircon system to extract the heat. He says it’s not the heat itself, it’s the energy that ADC is putting in to extract that heat. “If we’re at 1.3 PUE, we’re putting in about 30% more energy to extract the 100%.”
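Putting Hay’s heat-balance argument and the PUE figure together, the sketch below computes PUE as total facility energy divided by IT energy. The kilowatt figures are invented inputs purely to show the arithmetic behind “about 30% more energy”, not measurements from ADC.

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# Since essentially all power drawn by the IT gear comes out as heat, the
# overhead above 1.0 is mostly the energy spent extracting that heat.
# The kW figures below are invented inputs to illustrate the arithmetic.
it_load_kw = 1000.0          # power drawn (and dissipated as heat) by the racks
cooling_kw = 250.0           # chillers, fans and pumps
other_overhead_kw = 50.0     # UPS losses, lighting, etc.

total_facility_kw = it_load_kw + cooling_kw + other_overhead_kw
pue = total_facility_kw / it_load_kw
overhead_pct = (pue - 1) * 100

print(f"PUE = {pue:.2f} (about {overhead_pct:.0f}% extra energy per unit of IT load)")
# -> PUE = 1.30: for every 100 units of energy going into the racks,
#    roughly 30 more are spent extracting the heat and running the facility.
```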
As to what ADC is achieving, he says it depends on the age of the halls. The company doesn’t publish a specific average number, but it’s getting 1.3 on its new builds.
Hay says he always finds it amazing that some think datacentre operators are mere “landlords”. “The most complicated thing we do, by a long way, is the mechanical engineering. When you do the designs of halls like this, it’s super complex stuff to establish whether the design is functional. You use computational fluid dynamics (CFD), which is a complex way of doing flow calculations within the data hall space, to ensure that you’re actually sucking the heat out.” The CFD models would typically be run on a high-performance computer. “Electrical engineering is more or less easy to understand,” he says. “Mechanical engineering is a black art.”
Beyond an efficient system to cool the equipment, the other consideration is the source of the energy.
Hay says that every business nowadays will have some sort of ESG targets.
“As a company, we are reporting ESG, but we’re also sharing this with customers if they ask. If you do the logic of ESG, there’s scope 1, 2, and 3. If we turn our generators on, that’s scope 1, because we’re creating carbon on the campus and pumping it into the air. So we try not to turn the generators on.”
Scope 2, for ADC, is buying electricity, the supplier of which would be pumping carbon into the atmosphere.
“Scope 3 gets complicated because every piece of equipment we buy has gone through a manufacturing cycle that involves something,” he says.
ADC is now in RFP processes with a number of renewable energy providers. He says two years ago, if you wanted solar power, you needed to get involved and create companies and enter into joint ventures. “Now, we’ve moved to a situation where you don’t have to do anything. If you tell the world you’re running a datacentre, you will have people knocking on your door wanting to sell you solar. There are lots of people who are desperate for customers.”
He says the process of getting wheeling in place is taking a lot longer than he expected because of the relationships between the various parties, such as local councils and Eskom. “On paper, it’s all fine, it’s just taken a lot longer to actually happen. There’s not a huge number of organisations that are actually, physically, getting energy from those agreements right now. We’ve got the contracts, but we’re not actually wheeling any electrons right now.”
