Decarbonising data centres: optimising operation with artificial intelligence
https://www.cibsejournal.com/technical/decarbonising-data-centres-optimising-operation-with-artificial-intelligence/ | 28 March 2024

Two studies have looked at the impact of AI on improving the function of data centres. Molly Tooher-Rudd highlights the potential for optimisation and maintenance

Over the past year, artificial intelligence (AI) has surged to the forefront of almost every industry and is now moving into the realm of data centres.

With the exponential growth of cloud computing, data centres (DCs) have come under scrutiny for their energy use.

AI and machine-learning algorithms are extremely good at spotting patterns in datasets. This can be harnessed to improve and streamline day-to-day operations, enabling real-time improvements using predictive analysis.

Specifically, AI applications for energy optimisation, early fault detection and predictive maintenance are gaining traction. In this context, two studies exemplify how AI technologies are transforming practices.

Data-centre cooling

The first study, by Zhichu Wang at the University of Hull, focuses on enhancing the performance of DC cooling systems through advanced time series machine learning.

In 2022, data centres accounted for 1% of global electricity consumption, with up to 40% of this driven by their cooling systems.

Wang developed a novel time series machine-learning model for hourly performance forecasting in an operational DC’s advanced dew-point cooling system.

The dataset used to train the model was collected over four months from a live DC. This model effectively forecasts hourly cooling system performance, enabling precise short-term and long-term energy consumption predictions.
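Wang’s model itself is not reproduced in the article, but the general shape of such a pipeline is straightforward to sketch. The snippet below is a minimal illustration only; the file name, column names and model choice are assumptions for this example, not details from the paper:

```python
# Minimal sketch of hourly time-series forecasting for a DC cooling system.
# File name, column names and model choice are illustrative assumptions,
# not details from Wang's paper.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def make_features(df: pd.DataFrame, lags: int = 24) -> pd.DataFrame:
    """Add lagged values and time-of-day so the model can learn daily cycles."""
    out = df.copy()
    for lag in range(1, lags + 1):
        out[f"cooling_kw_lag{lag}"] = out["cooling_kw"].shift(lag)
    out["hour"] = out.index.hour
    return out.dropna()

# Hourly readings: outdoor_temp, humidity, it_load_kw, cooling_kw
df = pd.read_csv("dc_cooling.csv", index_col=0, parse_dates=True)
feats = make_features(df)
X, y = feats.drop(columns=["cooling_kw"]), feats["cooling_kw"]

# Train on the earlier months, test on the most recent quarter;
# no shuffling, because time order matters for a fair forecast test.
split = int(len(feats) * 0.75)
model = GradientBoostingRegressor().fit(X.iloc[:split], y.iloc[:split])
hourly_forecast = model.predict(X.iloc[split:])
```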

The findings encompass a wide array of parameters relevant to managing DC cooling operations. Wang anticipates that the research could herald a new era of energy-efficient DC operations.

The paper will be presented at CIBSE’s Technical Symposium in Cardiff, 11-12 April: www.cibse.org/symposium.

Chiller plant optimisation

In Hong Kong, a second research study looked at chiller energy optimisation using artificial intelligence. Air conditioning in Hong Kong contributes around 30% of total electricity consumption. In line with the government’s climate action plan, which aims for carbon neutrality by 2050, researchers Lai Kam-Fai, Yow Kin-Fai, Wong Tat-Tong and Li Kin-Pong implemented AI-driven chiller optimisation strategies in government and public buildings. Leveraging artificial neural networks and particle swarm optimisation, the study saw energy savings of 5-10% in revitalised chiller plants.

To enhance the energy efficiency of air-cooled chiller plants, a hybrid predictive operational control strategy was employed, using variable speed drive components for optimal efficiency. This approach optimised the number, sequencing, water supply and pump speeds of the chillers.
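The study’s trained networks are not public, but particle swarm optimisation itself is easy to sketch. In the snippet below, the plant_power() surrogate is invented for illustration; in the study’s approach, an artificial neural network trained on plant data would play that role:

```python
# Generic particle swarm optimisation (PSO) sketch. plant_power() is an
# invented surrogate; in the study, a trained artificial neural network
# would predict plant power from the candidate settings instead.
import numpy as np

def plant_power(x):
    """Hypothetical surrogate: x = [chilled-water setpoint (degC), pump speed (%)]."""
    setpoint, speed = x
    return 100 + (setpoint - 8.5) ** 2 + 0.01 * (speed - 70) ** 2

rng = np.random.default_rng(0)
n, iters = 30, 100
lo, hi = np.array([5.0, 40.0]), np.array([12.0, 100.0])  # search bounds
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.apply_along_axis(plant_power, 1, pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.apply_along_axis(plant_power, 1, pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print(f"Best settings: {gbest}, predicted power: {plant_power(gbest):.1f} kW")
```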

Following these promising results, a rolling programme is under way to extend AI optimisation control to ‘AI-ready’ chiller plants across a wide spectrum of in-service government buildings and healthcare facilities. The paper was presented at the ASHRAE Winter 2024 conference.

These studies indicate how AI can harness the power of data and advanced algorithms for a more sustainable future. DCs and chiller plants embracing AI-driven optimisation may be another step towards achieving carbon neutrality.  

Making a splash: recovering heat from mini data centres for leisure centres
https://www.cibsejournal.com/case-studies/making-a-splash-recovering-heat-from-mini-data-centres-for-leisure-centres/ | 27 April 2023

A project that uses waste heat from a mini data centre to warm a public swimming pool in Devon hit the headlines last month. Molly Tooher-Rudd talks to the company behind the technology and hears about its ambitious plans to deploy ‘digital boilers’ in every sector

Data centres have become ubiquitous in our society, supporting most of our daily activities. They are essential to the functioning of the internet and the digital economy. However, data centres consume vast amounts of energy, with the industry accounting for 3% of global carbon emissions – which is more than aviation – and generating an estimated 4.51GW of waste heat in the UK each year.

One company, at least, has found an effective way of capturing some of this waste heat. Deep Green Energy has developed a technology that uses the waste heat generated by small-scale, local data centres and repurposes it to heat swimming pools and other buildings.

A recently completed project involved the installation of a small-scale data centre at the public swimming pool in Exmouth Leisure Centre, Devon. It is expected to save the operator £22,000 in heating bills over the next year by using waste heat to warm the pool to 30°C for 60% of the time.

This is the first commercial pool project that Deep Green has commissioned, but it will be the first of many, according to Deep Green Energy CEO Mark Bjornsgaard. ‘We’ve got seven more signed, and what seems like every swimming pool in the northern hemisphere expressing an interest. I’d love to think that we can work with up to 40 sites this year,’ he says.

Deep Green has been planning and building prototypes for the past six years, and the company is responsible for the industrial plumbing going into the non-standard swimming pool environment. 

Deep Green does ‘the difficult last-mile bit’ when it comes to installing the necessary equipment

Immersion cooling technology is used to capture the heat from the data centres. This involves immersing entire computers in oil, instead of blowing cold air over them, ensuring all components – including the chip – are cooled efficiently, with no need for further ambient cooling. The hot oil is then pumped through a heat exchanger, where it heats water circulating from the swimming pool. Unlike in normal data centres, all of the pipes are lagged to make sure as much heat as possible is reused.


Hopefully, we can get every swimming pool in the UK heated like this. It makes sense, especially with the cost savings. Why wouldn’t you?

Capturing heat from data centres is not a new idea. Microsoft wrote about it in a white paper in 2011, coining the term ‘data furnace’. Although it has been discussed before, says Bjornsgaard, it has not been widely implemented. ‘In terms of the technology, it’s all totally interchangeable. It’s not anything crazy; everything we’re using is known; it’s scalable. It’s just a computer in oil and a heat exchanger,’ he says.

Deep Green’s units are edge sites – smaller data centres built very close to where people live and work. The installation in Exmouth Leisure Centre, for example, is the size of a washing machine. Really fast local connections can only be achieved with an edge network, which ensures low latency – the delay before a transfer of data begins after an instruction.

By reusing heat from mini data centres, Deep Green’s technology means less energy is required to cool larger data centres. ‘Getting computers out of big data centres and into local communities is the way forward,’ explains Bjornsgaard. ‘In 10 years’ time, we’ll need Deep Green units in swimming pools to provide the data needed to render graphics for nearby virtual reality applications, for example.’

Using edge servers means no complicated pipe network process is involved in the installation. ‘We’re breaking up the data centre and taking the heat to where it’s needed,’ says Bjornsgaard. ‘For example, the pipes in the Exmouth installation are only 3 metres away from the computers.

‘There are lots of variables to consider when you land in a swimming pool pump room. Is grid capacity enough to power the installation? Can we get sufficient internet connectivity? Can the pool use 100% of the heat produced?’

Reusing heat from data centres makes particular sense for swimming pools, he adds: a pool loses approximately 1°C per hour, regardless of the external temperature, while a data centre produces heat 24 hours a day, so there is a consistent heat source for the pool.
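Taken at face value, that loss rate implies a sizeable continuous heating load. A rough sketch, with an assumed pool size rather than figures from the Exmouth project:

```python
# Back-of-envelope heating load implied by the quoted ~1 degC/hour loss.
# The pool dimensions are an assumption for illustration, not Exmouth figures.
volume_m3 = 25 * 10 * 1.5          # 25m x 10m pool, 1.5m average depth
rho, cp = 1000, 4186               # water density (kg/m3), specific heat (J/kg.K)
loss_k_per_s = 1 / 3600            # 1K per hour

heat_demand_w = volume_m3 * rho * cp * loss_k_per_s
print(f"Continuous heat demand: {heat_demand_w / 1000:.0f} kW")  # ~436 kW
```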

Immersion cooling technology is used to capture heat from the data centres

Using Deep Green’s technology to heat a pool has another benefit. ‘The oil is brilliant protection,’ says Bjornsgaard. ‘Data centres create harsh environments; where you have chlorine contaminants and copper interacting, for example, this will eventually cause corrosion of materials. So, the great thing about immersion is that it will protect the equipment.’

Bjornsgaard believes adoption of the technology will be exponential, but sees a major barrier being people’s reluctance to break from the norm. 

‘We have always built data centres with evaporative cooling and cold floors, but now we are saying you have to pipe all this hot oil around your computers. Lots of people will have a natural resistance to this at first, but once they see how well it works, I think interest will peak,’ he says.

Bjornsgaard has ambitious targets for Deep Green’s units and wants to roll out the technology to all pools in the UK. ‘There are one and a half thousand public swimming pools in the UK. Hopefully, in the next five years, we can get every swimming pool heated like this. It makes sense, especially with the cost savings. Why wouldn’t you?’ he says.

The company has a mission to replace all data centres without heat recapture by 2035, with these ‘digital boilers’ making a significant dent in carbon emissions caused by the data centre industry.

The task is daunting, Bjornsgaard admits, but with the scalability and adaptability of the technology for various applications, it is achievable.

Expanding the horizons

Deep Green has begun drafting detailed plans to implement this technology in other sectors that can effectively use large amounts of low- or medium-grade heat, including commercial and industrial sites, and domestic heating systems. ‘In homes and offices, 17% of carbon emissions comes from heating; that’s the bigger market for heat,’ says Bjornsgaard. 

‘We already have district heating systems in place, and every district heating system – every office block with a centralised boiler – could harness the power of a digital boiler.’ 

An even bigger sector will be implementing data centres next to heat pumps. ‘Slaving a Deep Green unit to a heat pump with a thermal store makes the heat pump really efficient,’ says Bjornsgaard. ‘That’s where we want to go with the technology; the retrofit opportunity is enormous.’

Waste heat would pre-heat the heat pump’s source. That can be as simple as warming the inlet water on a water source heat pump, or charging a buffer tank or heat store that the heat pump draws from when needed. This pre-heat narrows the temperature lift across the heat pump, making it more efficient and increasing the available heat.
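As a hedged illustration of why narrowing that lift helps, the ideal (Carnot) coefficient of performance can be compared before and after pre-heating; the temperatures below are assumptions, not project figures:

```python
# Ideal (Carnot) heating COP = T_sink / (T_sink - T_source), in kelvin.
# Real heat pumps reach only a fraction of this, but the trend holds.
# The temperatures are illustrative assumptions.
def carnot_cop(t_source_c: float, t_sink_c: float) -> float:
    return (t_sink_c + 273.15) / (t_sink_c - t_source_c)

print(f"{carnot_cop(10, 55):.1f}")  # 10 degC source water -> COP ~7.3
print(f"{carnot_cop(30, 55):.1f}")  # pre-heated to 30 degC -> COP ~13.1
```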

‘As the cloud market grows, the number of data centres – and the heat they produce – will grow. The heat pump market is also projected to grow. So there’s a huge opportunity here if we match up the size of the cloud market with the size of the heat-demand market, and the potential for heating with our units,’ says Bjornsgaard. 

UPS for data centres: a modular approach
https://www.cibsejournal.com/opinion/ups-for-data-centres-a-modular-approach/ | 31 March 2023

As demand for computing power grows, designers and specifiers must focus on reducing energy demand in data centres. Kohler’s Alex Emms says modular uninterruptible power supply can improve energy efficiency


Data centres are responsible for almost 1% of global electricity demand and 0.3% of all global CO2 emissions. Energy efficiency has been brought into focus by the volatility of the energy market – but for most data centres, it has been on the agenda for a long time. 

The efficiency of the uninterruptible power supply (UPS) has been gradually improving, but mechanical cooling systems have attracted most of the attention when it comes to reducing energy overheads.

Many data centres have had to react to fast-growing demand and, in doing so, have been replacing or installing newer equipment. Integrating more efficient, modular systems and having a better understanding of power usage effectiveness (PUE) and other emerging units of measurement – such as total power usage effectiveness (TUE) – have become the driving forces behind the improvement in energy efficiency of data centre infrastructures. 

PUE can be defined as a measure of how efficiently power is used within a data centre: the ratio of the total power drawn by the facility to the power delivered to the computing equipment. Although the metric was originally an innovation by The Green Grid – a not-for-profit consortium of end users, policy-makers, technology providers, architects and utility companies – thinking through what capacity is required for a given ICT load has always been one of the designer’s first tasks.

PUE is currently the de facto measurement for power, although, as the industry aims for wider energy efficiency, other metrics are coming into play, including water usage effectiveness (WUE) and carbon usage effectiveness (CUE). Others, although valid, are still being talked about rather than implemented, including TUE, which can be a more effective metric for calculating a data centre’s overall energy performance, but requires a greater understanding of the IT hardware in place. For power considerations, PUE remains the measurement that matters.
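The PUE calculation itself is simple; the figures below are invented purely for illustration:

```python
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0).
# The annual figures below are invented for illustration.
it_energy_kwh = 8_000_000
total_energy_kwh = 12_000_000

pue = total_energy_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")  # 1.50: every IT kWh carries 0.5kWh of overhead
```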

A 2020 global survey found the average data centre achieves a PUE of 1.59. In the UK, a PUE of less than 1.5 can be expected, but the lower the better. Newer, larger data centres tend to be more efficient, but there is still a need for older data centres to be aware of the changes they can make to influence their critical power support and their energy efficiency, particularly as they replace legacy systems. 


Instead of monolithic standalone systems, UPS suppliers can offer resilient, flexible, modular ones

Newer UPS models have very advanced multi-level, interleaved inverters with no output transformers, plus power management systems and smart modes, to help reduce energy loss.

Nowadays, instead of monolithic standalone systems, UPS suppliers can offer resilient, flexible, modular systems that are contained within a single infrastructure cabinet and can be run smarter. For example, spare modules go into standby mode when not required to support the load, while still maintaining the required level of active redundancy. 

Expansion of capacity is a matter of adding a further module, and contraction is simply a matter of turning modules off, reducing the need for systems to be always on and powered to the maximum 100% of the time. The aim is to keep the UPS loaded to 30-60% of capacity at any given time – the band in which it provides its highest efficiency.
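A sketch of that sizing logic, with an assumed module rating and target load band rather than figures for any particular product:

```python
# How many modules to keep active so each runs in its efficient 30-60% band,
# while retaining one redundant module (N+1). Ratings are assumptions.
import math

MODULE_KW = 50
TARGET_LOADING = 0.5   # aim for ~50% load per active module

def modules_needed(load_kw: float) -> int:
    sharing = math.ceil(load_kw / (MODULE_KW * TARGET_LOADING))
    return sharing + 1  # the +1 module sits in standby for redundancy

for load_kw in (60, 120, 200):
    n = modules_needed(load_kw)
    per_module = load_kw / (n - 1) / MODULE_KW
    print(f"{load_kw}kW load -> {n} modules, {per_module:.0%} loading each")
```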

Data centre managers need to be informed to ensure the choices they make are right for mission-critical data centre applications – for today and for the future. This means collaborating with suppliers and consultants who understand current and future challenges, and who can make this decision-making process much simpler. In-depth knowledge and extensive experience, as well as a comprehensive choice of solutions that will fit the data centre’s specific needs, should be considered.

This level of expertise must be continued throughout the life of the protected power installation, to meet the challenges of providing timely UPS maintenance and adapting to evolving site requirements.

As energy costs rise and the reliability of smart-mode UPS operation is proven, data centre managers are aiming for the typical PUE to be less than 1.10 across all of Europe. This is good for business and for the planet. But research and the right partnership are key to obtaining energy efficiency and critical power backup with no compromise on reliability.

Alex Emms is operations director at Kohler Uninterruptible Power UK

Cost model: data centre cooling
https://www.cibsejournal.com/general/cost-model-data-centre-cooling/ | 31 October 2019

In this month’s cost model, Aecom’s engineering services cost-management team explores the options for cooling large data centres – from traditional air cooling with chiller and Crac units, to free cooling and water-based systems

Data centre cooling has evolved from serving small clusters of servers to cooling giant server farms. While these modern, large data centres are vital components of the information services economy, they consume a formidable amount of energy worldwide.

It’s been widely documented that the cooling systems – the chiller, humidifier and computer room air conditioning (Crac) units – account for 45% of the total energy consumption of a data centre, while the IT equipment accounts for 30%. On those figures, every 1kWh consumed by the IT equipment requires at least another 1kWh of energy to drive the cooling and auxiliary systems.

From environmental and cost-efficiency perspectives, selecting a cooling method that can reduce this energy demand is clearly beneficial.

Traditionally, data centres have been air cooled with chillers, humidifiers and Crac units, in a variety of ‘cold aisle/hot aisle’ approaches that aren’t terribly efficient and that can result in hot spots within the data hall.

Over the years, server equipment has become more resilient, and can now tolerate a greater range of temperature and humidity than older technology allowed. These days, legitimate alternative cooling approaches are routinely considered in most data centre projects looking for greener and more efficient strategies.

Currently, in the northern hemisphere, air-cooling solutions are looking more towards free cooling, which works by using air from outside combined with reclaimed heat (winter) and evaporative cooling (summer) to provide the total cooling solution throughout the year.

Water-based cooling options are a more modern approach, cooling the inside of the servers by pumping cold water through pipes or plates. Water-cooled rack systems work well, but have an inherent risk of leaks. Understanding the cost drivers and benefits of each approach is crucial to advising clients effectively.

There are many factors that drive the selection of any particular option, not least capital and life-cycle costs, but also the location of the data centre and the feasibility of incorporating innovations such as free cooling or aquifer thermal energy storage (ATES).

Parameters that drive these decisions include the requirement for power usage effectiveness (PUE) levels to hit planning stipulations, and for acoustics and total cost of ownership (TCO) to be optimised. PUE measures how efficiently the data centre uses input power – the larger the number, the less efficient the solution. In addition, the selected method of supplying power, and the level of power resilience, play a role in the PUE calculation – so, in reality, it is the combined cooling and power solution that is considered against the criteria to finalise the preferred option for the client.

In this article, we are looking solely at the merits of three cooling solutions that are currently being used on projects, to ascertain the cost drivers of each and understand the cooling-only related costs. These are: air cooling by chiller and Crac units; air cooling by indirect air cooling (IAC) air handling plant; and chilled-water cooling derived from free-cooling, hybrid cooling towers with chiller assist.

Any power-supply solution, associated building works and main contractor prelims are excluded.

 

Air cooling by chiller and Crac units

This chilled-water solution serves Crac downflow units, which typically deliver cold air to the data hall white space through a floor void. Crac units normally include humidification elements to control static electricity, and all hot air is redirected back into the Crac unit, where the heat is removed before the air is redistributed into the white space.

The source of the cooling water is via a traditional refrigeration chiller located externally, usually on the roof. There is no free cooling and chillers are sized for full peak load.

 

Air cooling by indirect air-cooling AHU

This ‘all air’-based cooling solution incorporates air handling plant mounted externally to the white space. Treated air is distributed to the white space via ductwork or through a plenum. Air is supplied at a relatively low velocity to the cold aisle, giving more control than traditional floor-void distribution.

The hot air is returned to the IAC via ductwork and is cooled by the outdoor ambient air at a plate heat exchanger. To assist the cooling process during warm months, the ambient air is adiabatically cooled (by water evaporation), and then cools the warm air at the plate heat exchanger in the IAC unit. The water used for adiabatic cooling is bulk-stored to cover a mains supply outage. The process water is distributed from a central pump plantroom to the IAC units.
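A common first approximation of what the adiabatic stage achieves is the wet-bulb approach equation; the effectiveness and air conditions below are typical assumptions, not quoted specifications:

```python
# First approximation of adiabatic (evaporative) cooling: the air leaving
# the wetted stage approaches the wet-bulb temperature. The effectiveness
# and the air conditions are typical assumptions, not quoted values.
def adiabatic_outlet_c(t_dry_c: float, t_wet_c: float, eff: float = 0.9) -> float:
    return t_dry_c - eff * (t_dry_c - t_wet_c)

# A warm UK day: 28 degC dry bulb, 19 degC wet bulb.
print(f"{adiabatic_outlet_c(28, 19):.1f} degC")  # ~19.9 degC at the heat exchanger
```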

 

Chilled-water cooling derived from free-cooling hybrid cooling towers with chiller assist

This chilled-water solution serves Crac downflow units typically supplying cold air to the white space through a floor void. The source of the cooling water is via ‘free cooling’ cooling towers located externally, usually on the roof. Ambient air is used to cool the warm return water from the Crac units, with adiabatic cooling added during the warmer months.

At peak times, when approaching the towers’ cooling-load limits, refrigeration chillers are used to run in parallel with the cooling towers.

Table 1 is a summary of the pros and cons of each system. Bear in mind that numerous factors work in tandem within any given solution; for example, net-to-gross area, efficiency, power load, capital expenditure (capex) cost, and total cost of ownership (TCO) combine to determine the best solution for the client. Always make sure defined parameters are set to allow measurement of any solution against these critical factors. This will ensure the best-fit solution can be determined.

In reality, most data centres use air- or water-based cooling solutions, and this is where our cost comparison has focused. The future is already taking shape, however, with some clients opting for immersion cooling, by which servers are immersed in a liquid coolant for direct cooling of the electronic components. Immersing servers has been shown to improve rack density, cooling capacity and other design-critical factors.

Test projects where data centres are located in the sea could result in some significant changes in this industry in the future. Aecom is carrying out advisory work with Atlantis, which proposes to build a data centre on the site of its tidal-energy centre, off the coast of Scotland. It demonstrates how high-power-demand data centres could help fund the emerging tidal-power sector, thereby contributing to the future decarbonisation of the data centre sector. 

Table 1: Benefits and drawbacks of three data centre cooling systems. All based on a 1,500W/m² IT load density requirement and Tier III certification, and on experience of designing data centres

Table 2: Cost comparison of cooling methods in a 3,000m² data centre

Notes on the above costs:

● Hot/cold aisle containment is excluded
● Main contractor prelims and OHP are excluded
● Building/structural/architectural works, dedicated fresh air systems, and electrical infrastructure are excluded

About the authors
This article has been written by Associates Nichola Gradwell and James Garcia, of Aecom’s cost-management team in London, with assistance from Mike Starbuck and Anirban Basak, of Aecom’s engineering team.

Battery technology in UPS systems – VRLA v Li-ion
https://www.cibsejournal.com/technical/battery-technology-vrla-or-li-ion-batteries/ | 29 August 2019

Although lead-acid batteries are long-established, with a majority market share, lithium-ion is starting to pick up pace. Alex Emms, of Kohler Uninterruptible Power, compares the two technologies

For uninterruptible power supply (UPS) system builders and users today, two battery chemistries predominate: lead-acid – typically valve-regulated lead-acid (VRLA) – and lithium-ion (Li-ion).

While Li-ion has limited presence in the UPS market, it has been growing in popularity in other areas as a result of advances in technology and power output, plus a reduction in cost. Li-ion is finding large-scale use within motive power and electricity grid storage applications and, with its rapid response, is often found in wind and solar renewable energy systems.

Li-ion batteries have a better power-to-weight ratio than similarly rated VRLA types (see Table 1). They also discharge more efficiently than VRLA at high discharge rates, although this advantage becomes less important at lower rates (see Figure 1).

Charging rates from a fully discharged state are also higher, as long as the charger can deliver the required power. Full recharging can be completed in three hours, compared with a typical 80% charge in six hours for VRLA.

Table 1: Comparison of Li-ion and VRLA battery dimensions and weights
*‘N’ is the number of UPS units required to meet the design load demand; ‘1’ indicates that a single UPS unit failure will not adversely affect meeting the load

Another advantage is a very wide usable temperature range, although discharge rates and longevity can normally be optimised by operating at 23°C ±5K. Li-ion batteries have improved resilience to temperatures outside this range, with much better low-temperature discharge capabilities than VRLA. This makes Li-ion much better suited to uncontrolled-temperature environments where free cooling can be employed using the lower-temperature outside air.

Table 2: Percentage cost elevation of Li-ion over VRLA for various autonomies. NB: VRLA systems include a battery management system

However, as with VRLA, operating at excessively high temperatures significantly reduces Li-ion batteries’ useful life. Figure 2 gives more detail on the two chemistries’ relative temperature/lifetime profiles.

Cost is another critical factor. Prices have fallen significantly – up to 85% – over the past decade, and these reductions naturally increase Li-ion’s appeal. Nonetheless, as Table 2 shows, Li-ion pricing is still a barrier.

However, we are definitely in the early stages of adoption. While prices aren’t decreasing as fast as previously, they are still tracking down, creating a significant upturn in adoption.

In Europe and the Middle East, there are lags in Li-ion adoption, but there is increasing deployment in North America and Asia. Figure 3 shows historical and projected future trends for battery pack manufacturing costs.

Design life is another factor, for which manufacturers are quoting up to 15 years. Operational life is probably nearer 10-12 years, but is not yet proven. This compares with a real-life norm of 7-8 years for VRLA.

Figure 1: VRLA versus Li-ion discharge efficiency. The discharge rate (known as C) relates to the current drawn from the battery over a period of time. 1C is the current to discharge the battery in one hour. A two-hour discharge is described as 0.5C and faster discharges, for example 30 minutes, are described as 2C.

Why not Li-ion?

Li-ion enthusiasts point to the batteries’ longevity as an advantage offsetting its higher capital cost. However, Kohler Uninterruptible Power’s experience shows that UPSs – correctly installed in a suitable environment and properly maintained and supported – are typically reliable for 15 years. This neatly matches two consecutive 7-8-year VRLA lifetimes, but raises replacement coordination issues with 12-year Li-ion batteries.

Li-ion is also disadvantaged by the true costs of achieving suitable autonomy, which is traditionally 10-15 minutes for UPSs. In reality, however, most blackouts either last three minutes or less, or run closer to three hours.

While VRLA costs can be decreased by designing for this lower autonomy, the same isn’t true for Li-ion. Such short autonomies can only be achieved from more expensive higher discharge-rate cells. Accurately and cost-effectively sizing for different loads is also difficult with Li-ion’s – currently very limited – choice of capacities.
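Using the C-rate definition from Figure 1, the relationship between autonomy and required discharge rate is easy to see; a small helper, for illustration only:

```python
# Convert a required autonomy into the discharge rate it demands.
# 1C empties the battery in one hour; shorter autonomies need higher C-rates.
def c_rate(autonomy_minutes: float) -> float:
    return 60 / autonomy_minutes

for minutes in (120, 60, 30, 10):
    print(f"{minutes} min autonomy -> {c_rate(minutes):.1f}C")
# 120 min -> 0.5C; 60 min -> 1.0C; 30 min -> 2.0C; 10 min -> 6.0C
```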

Figure 2: Expected battery life versus temperature for VRLA and Li-ion. This shows that, between 20°C and 30°C, Li-ion degrades much less than VRLA; however, at higher temperatures, degradation is similar

There is also an element of mistrust. Manufacturers have progressed considerably in addressing safety fears, through highly segregated cell designs, and mandatory advanced monitoring and management systems; however, Li-ion is still sometimes seen as unproven, and a safety risk.

End-of-life creates further problems; an exhausted Li-ion battery primarily comprises hazardous waste that’s difficult to recycle, and which is subject to high costs and restrictions during transportation.



Li-ion enthusiasts point to the batteries’ longevity as an advantage, offsetting its higher capital cost

By contrast, VRLA is up to 98% recyclable. However, as the volume of exhausted Li-ion batteries starts to grow, so will pressure to find sustainable recycling solutions. This is reflected, for example, in the US Energy Department’s launch in January of a Li-ion battery recycling research centre. The department is investing US$15m in the project, and hopes to boost the collection and recycling rate to 90% of all lithium-based technologies, up from the current rate of just 5%.

Recycling of Li-ion batteries from electric vehicles (EVs) is limited in the UK; direct recovery of precious metals from these batteries – such as cobalt, nickel and lithium – is undertaken by specialist facilities abroad, mainly in Asia, although Europe is now starting to build processing capacity.

Currently, the barriers that Li-ion faces mean its uptake is mostly limited to fast-discharge or limited-space applications that particularly need its benefits. However, UPS Li-ion battery solutions are still in their infancy, with potential for further advances.

Figure 3: EV battery manufacturing cost trends. Manufacturing costs are falling and this is expected to continue.

Prices are expected to continue falling, albeit more slowly, driven primarily by growth in the EV and motive-power industries. As this happens, and Li-ion becomes more accepted by UPS owners and operators, the technology’s penetration of the data centre battery market can be expected to increase.

This growth will be accelerated when viable recycling strategies become available. In any case, the data-centre industry is motivated to replace VRLA because of perceived reliability problems and environmental restrictions.

Bloomberg New Energy Finance (BNEF) forecasts a market share increase from 15% in 2016 to 35% in 2025. According to BNEF, this is against an expected data-centre battery backup market growth from 3.5GWh to 14GWh over the same period.

However, VRLA will also continue developing. While not mandatory for VRLA, increasing use is being made of battery monitoring and management systems. These can increase VRLA battery lifetimes, potentially by up to 30% – for example, by monitoring the battery and warning when attention is required, and by managing the equalisation process, which corrects the charging voltage operating range.

About the author
Alex Emms is operations director at Kohler Uninterruptible Power

Tiers for fears: How to ensure data centres keep companies flying
https://www.cibsejournal.com/technical/tiers-for-fears-how-to-ensure-your-data-centre-keeps-your-business-flying/ | 29 June 2017

The meltdown of British Airways’ IT systems, after a power-supply failure, demonstrates how critical data centres are to business operations. Andy Pearson looks at levels of resilience and explains why cloud computing is making firms more susceptible to power cuts

In May, BA suffered a catastrophic IT failure when the power supply to a key data centre was lost and the backup system was rendered ineffective. The failure shut down the airline’s IT systems, causing passenger chaos worldwide. BA has yet to explain the precise cause and sequence of events that resulted in the failure of two of its data centres.

The incident caused consternation in the data-centre sector, with many experts surprised that BA’s systems were not more resilient, and that the procedures which should have been in place to prevent this type of meltdown failed. It was not only the scale and duration of the IT outage that surprised, but the fact that the failure brought down both a key data centre and its backup. ‘What was the most surprising aspect of this, for me, was that BA couldn’t restart their data processors somewhere else,’ says Alan Beresford, managing director at EcoCooling.

So how are data centres designed to be resilient – and what is it about the way they are engineered that should prevent downtime and failures from occurring?

To understand resilience, you first need to appreciate how a typical data centre is arranged. The most critical area contains the data halls – rooms in which the data processing units, or servers, are housed in rows of cabinets or racks. These servers need a continuous supply of power and cooling, which is why data centres are designed with a robust set of systems to deal with power failures and to ensure cooling is always available. The measurement of how vulnerable your system is to failure determines its resilience.

The Uptime Institute, an organisation focused on business-critical systems, defines four tiers of data centre resilience – Tier I to Tier IV – corresponding to redundancy levels N, N+1, 2N and 2N+1, where N is the base and 2N+1 is the most resilient. This terminology is best explained using the example of standby generators serving a 1MW data centre (see panel, ‘Tiers of data resilience’).

CIBSE launches data centre performance award

A new category has been added to the CIBSE Building Performance Awards 2018: Project of the Year – Data Centre. Entries, for projects completed between 1 June 2014 and 31 August 2016, should demonstrate how a new-build or refurbished data centre meets high levels of user satisfaction and comfort. The entry also needs to demonstrate how outstanding measured building performance, energy efficiency and reduced carbon emissions have been achieved. Visit the Building Performance Awards 2018 website for more information.

It is important to note that this tiering makes no reference to the type of systems employed; it does not state which type of uninterruptible power supply should be used, or how a data centre is to be cooled. Tiering is about how the systems are arranged.

The other thing to note is that the tiering designation is about the maintainability of systems. ‘Most people would argue that a Tier III data centre is concurrently maintainable, because you can take out a piece of kit to maintain it and you don’t lose anything,’ says Robert Thorogood, executive director at consultant Hurley Palmer Flatt. ‘Some banks specify Tier IV, which means the systems are not only concurrently maintainable, but you can have a fault anywhere on the system and you still won’t lose anything, because there is more redundancy.’

Not all businesses will require the same level of resilience as a bank. Thorogood says businesses have to ask: ‘What will happen to my business if the data centre goes down?’

Some organisations can deal with an organised period of downtime once a year. However, increased reliance on the internet means access to it is becoming critical for more and more businesses. Many retailers, for example, now have a 24/7 web presence and can no longer accept downtime overnight.



The measurement of how vulnerable your system is to failure determines its resilience

‘It used to be that research organisations did not require a high level of data-centre resilience; if the data centre went down, it went down. These days, because everybody relies on email and the internet, even universities want access to a Tier III data centre,’ explains Thorogood.

However, it is important to remember that not all areas in a Tier III data centre will be serviced to the same level of resilience.

‘A typical data centre will have the hall housing the computer racks, accompanied by support areas – such as storage, loading bays, security and plant, and the uninterruptible power supply (UPS); the infrastructure serving these areas will not have to be nearly as reliable as that serving the data hall,’ says Don Beaty, CEO at DLB Associates Consulting Engineers in the US, and the person responsible for starting the ASHRAE technical committee on mission critical facilities, TC9.9.

Beaty warns that – just because you have multiple systems inside a data centre – the building can still be vulnerable to single points of failure externally, particularly with data networks. ‘Data centres are nothing without connectivity to the outside world; you want diverse fibre routes from different carriers coming into the building from diagonally opposite corners,’ he says. ‘However, if those fibres converge upstream, then that will become a single point of failure.’

The same issue is true for power, where it can be difficult to avoid a common supply. Very few data centres have two discrete power supplies, but it is common to have two incoming power supplies from different substations – although these can come from the same power source further upstream – with supplies entering the building on different sides.

In a Tier III data centre, for example, each supply – once inside the building – will be kept separate, passing through a dedicated set of transformers and a UPS, and then down dedicated cables until it reaches the server. So each computer server is fed from two independent power supplies. ‘The UPS will be supported by standby generation, so – if the mains go down – the UPS batteries will take over until the standby generators fire up, synchronise and supply power,’ says Beresford.
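To see why duplicated, independent paths matter, consider a rough availability calculation; the path availability figure is an assumption for illustration, not a measured value:

```python
# If each independent supply path is unavailable a fraction p of the time,
# and failures are independent, both paths fail together only p * p of the
# time. The value of p is an assumption for illustration.
p = 0.001  # each path unavailable 0.1% of the time

print(f"Single path availability: {1 - p:.4%}")      # 99.9000%
print(f"Dual path availability:   {1 - p ** 2:.6%}") # 99.999900%
```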

Tiers of data resilience

Definitions are based on an example of standby generators serving a 1MW data centre

  • Tier I (N) Normal, the data centre has 2 x 500kW generators
  • Tier II (N+1) The data centre has a spare generator – so, 3 x 500kW
  • Tier III (2N) The data centre has two power supply systems, A and B, and each stream has two 500kW generators – 4 x 500kW in total
  • Tier IV (2N+1) Each A and B stream has 2 x 500kW generators, plus a spare 500kW generator – so, 6 x 500kW generators in total
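The panel’s arithmetic generalises directly. A small sketch, using the panel’s own 1MW load and 500kW generator size as defaults:

```python
# Generator counts per tier, generalising the panel's 1MW / 500kW example.
import math

def generator_counts(load_kw: float, unit_kw: float = 500) -> dict:
    n = math.ceil(load_kw / unit_kw)  # N: units needed to carry the load
    return {
        "Tier I (N)": n,
        "Tier II (N+1)": n + 1,
        "Tier III (2N)": 2 * n,
        "Tier IV (2N+1)": 2 * (n + 1),  # a spare on each of the A and B streams
    }

print(generator_counts(1000))
# {'Tier I (N)': 2, 'Tier II (N+1)': 3, 'Tier III (2N)': 4, 'Tier IV (2N+1)': 6}
```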

Beresford adds that not all systems require the same level of resilience. ‘Power and fibre optics systems might be 2N, but cooling might be N+1, because it’s a lot simpler,’ he says. ‘You play tunes on the level of redundancy according to the type of technology.’

When considering resilience, it is important to ensure that, should a system fail, the operator understands how to deal with the situation. ‘When you get big data centres with multiple levels of redundancy, their operation can become very complicated,’ warns Beresford. ‘There is an alternative view that very simple systems can actually prove to be more resilient and more reliable than complicated ones.’

The ‘keep the engineering simple’ mantra has been embraced by data-centre developer and operator DigiPlex, which engages with the operational team when it puts together a design. ‘If you put a design in front of the operations guys and they don’t get it, then scrap it, because it must be easily understandable for them to operate in an emergency,’ says Geoff Fox, DigiPlex’s chief innovation and engineering officer. ‘If technicians don’t understand the system, your resilience is super weak.’

DigiPlex’s philosophy means it designs to minimise the opportunity for human error by following a 2N – rather than an N+1 – solution for data centre electrical infrastructure. ‘We found that trying to save on the cost of a generator builds complexity into the design and results in additional costs for the switchboards and cross-connects, which makes it harder to maintain,’ says Fox. Resilience is further enhanced by using factory-manufactured, prefabricated switchrooms and plantrooms, enabling quality to be controlled and the units to be fully tested before they arrive on site.

Sophia Flucker, director at consultant Operational Intelligence, believes commissioning the data centre before it is operational is fundamental to its resilience. She lists what she terms the ‘five levels of commissioning’ necessary to achieve resilience: factory acceptance; dead testing on site; start up on site; systems testing; and integrated systems testing.

Flucker says a comprehensive approach to commissioning is to ‘test all the components, then test the systems and their failure modes’. Sound advice, which perhaps BA will follow in the future – particularly testing in failure mode.

Read more in CIBSE’s Data Centres: an introduction to concepts and design at www.cibse.org/knowledge 

