DeepMind's Latest AI Lessens the Energy Used to Cool Buildings

DeepMind's latest AI research helps lessen the energy used to cool buildings. A significant share of worldwide CO2 emissions is attributable to HVAC (heating, ventilation, and air conditioning); space cooling alone accounts for about 10% of the world's total electricity demand. Increasing the efficiency of HVAC systems is therefore essential for mitigating climate change. As HVAC data collection and management systems become more common, data-driven, autonomous, real-time decisions made at scale are becoming increasingly attractive.

New research by DeepMind applied reinforcement learning (RL), building on previous work regulating the cooling systems of Google's data centers, to increase the energy efficiency of HVAC control in two commercial buildings. The researchers argue that RL is a good fit for HVAC control for a number of reasons:


- HVAC control requires deciding when to turn each piece of equipment on and off and how hard to run it. Alongside a natural incentive function (energy cost), various constraints must be met to maintain occupant satisfaction and ensure safe system operation.
- The data required to train an RL agent is available in widely used building management systems (BMS), and increasingly popular cloud-connected BMS can be leveraged to provide automated supervisory control.
- There is an essential sequential decision-making component, because choices can have long-term effects.
- Unlike Model Predictive Control, RL does not require developing, validating, and maintaining a comprehensive, in-depth physics-based model for each and every building.
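To make the framing above concrete, here is a minimal sketch of HVAC supervisory control as a sequential decision problem: a small action set (how hard to run a chiller), a reward that trades energy use against a comfort constraint, and toy plant dynamics. The state, actions, reward shape, and all numbers are illustrative assumptions, not DeepMind's actual formulation.

```python
# Hypothetical sketch of HVAC control as an RL-style decision problem.
# Action set: run the chiller at one of three load levels (assumed).
ACTIONS = [("chiller_off", 0.0), ("chiller_low", 0.4), ("chiller_high", 1.0)]

def reward(energy_kw: float, zone_temp: float,
           comfort_band=(20.0, 24.0), penalty=100.0) -> float:
    """Negative energy use, with a large penalty when the occupant
    comfort constraint (a temperature band) is violated."""
    r = -energy_kw
    if not (comfort_band[0] <= zone_temp <= comfort_band[1]):
        r -= penalty
    return r

def step(zone_temp: float, load: float) -> tuple:
    """Toy plant dynamics (assumed): ambient heat gain raises the zone
    temperature, cooling load lowers it; energy scales with load."""
    next_temp = zone_temp + 0.8 - 2.0 * load
    energy_kw = 50.0 * load
    return next_temp, energy_kw

def one_step_value(zone_temp: float, load: float) -> float:
    """Value of an action if we only look one control tick ahead."""
    next_temp, energy_kw = step(zone_temp, load)
    return reward(energy_kw, next_temp)

# One simulated control step: greedily pick the best one-step action.
temp = 23.5
best = max(ACTIONS, key=lambda a: one_step_value(temp, a[1]))
```

With the toy numbers above, turning the chiller off violates the comfort band and incurs the penalty, while running it at low load keeps the zone in band at the lowest energy cost, so the greedy choice is `chiller_low`. A real agent would of course optimize long-horizon value rather than a single step, which is exactly the sequential component the text emphasizes.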

The two main data sources were the historical data collected by the SOO (the incumbent heuristics-based controller) and the data BCOOLER collected while it was in control of the plant. The historical data consists of less than a year of facility data from the SOO in charge of the system; the AI control data, on the other hand, is rich in exploration data covering a wide range of actions and states. The team had to deal with a range of challenges, from common ones like expensive and noisy data to more uncommon ones like multiple operational modes and multi-timescale dynamics. To solve these issues, they combined general RL methods with industry-specific heuristics.
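One common way to combine an RL policy with industry-specific heuristics is a safety layer: the agent proposes a setpoint, and simple plant-safety rules clamp it before it reaches the equipment. The sketch below illustrates that pattern under stated assumptions; the function name, the safe range, and the rate limit are hypothetical, not taken from DeepMind's system.

```python
# Hypothetical sketch: domain heuristics layered on top of an RL policy.
from typing import Optional

def safe_setpoint(proposed_c: float,
                  min_c: float = 6.0, max_c: float = 12.0,
                  last_c: Optional[float] = None,
                  max_step_c: float = 1.0) -> float:
    """Clamp a proposed chilled-water setpoint (deg C) to the equipment's
    assumed safe range, and limit how far it may move per control tick."""
    sp = min(max(proposed_c, min_c), max_c)
    if last_c is not None:
        # Rate limit: avoid abrupt swings that could stress the plant.
        sp = min(max(sp, last_c - max_step_c), last_c + max_step_c)
    return sp

# An aggressive proposal of 4.0 C is raised to the 6.0 C floor, and a
# jump from 7.0 C toward 9.5 C is slowed to one 1.0 C step.
print(safe_setpoint(4.0), safe_setpoint(9.5, last_c=7.0))
```

A layer like this keeps exploration safe while the agent is still learning, which matters when the "environment" is a live commercial building rather than a simulator.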

Compared to Trane's heuristics-based controllers, the resulting system showed a 9–13% reduction in energy consumption while still meeting system requirements. To increase confidence in the agent's performance prior to deployment, the team used domain expertise to construct unit tests that the action-value function should pass. They also masked various actions based on the state of the environment, allowing a single agent to control a number of weather-dependent modes with different action spaces and limits.
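The two techniques just mentioned, domain-knowledge "unit tests" on the learned action-value function and state-dependent action masking, can be sketched as follows. The mode names, the blocked-action table, and the monotonicity check are illustrative assumptions about what such tests might look like, not DeepMind's actual tests.

```python
# Hypothetical sketch: action masking plus a domain sanity check on Q.

def valid_actions(all_actions, mode: str):
    """Mask out actions that do not apply in the current
    weather-dependent operating mode (table is assumed)."""
    blocked_by_mode = {
        "free_cooling": {"chiller_on"},        # e.g. chillers stay off
        "mechanical":   {"economizer_open"},   # e.g. economizer stays shut
    }
    blocked = blocked_by_mode.get(mode, set())
    return [a for a in all_actions if a not in blocked]

def check_monotonic_in_load(q_value, state, loads):
    """Domain unit test: with everything else fixed, running the plant
    harder costs more energy, so Q should not increase with load."""
    qs = [q_value(state, load) for load in loads]
    return all(a >= b for a, b in zip(qs, qs[1:]))

# A toy value function that simply charges for load passes the check.
toy_q = lambda state, load: -50.0 * load
assert check_monotonic_in_load(toy_q, state={}, loads=[0.0, 0.5, 1.0])
assert valid_actions(["chiller_on", "economizer_open"],
                     "free_cooling") == ["economizer_open"]
```

Checks like these turn domain expertise into automated gates: a learned value function that claims running equipment harder is free would fail the test before the agent ever touched the real plant.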

