Data centres are evolving with each passing day, and so is the technology that goes with them.
In his keynote address, "Future-proof Hyperscale Data Centre Operation through Data Centre Control Systems: 3S (Scale, Speed, Security)", at W.Media's Digital Week Northeast Asia edition, Chang Cho, Founder and CEO of Onion Technology, spoke about the architecture of data centres and their functions.
"A typical hyperscale data centre is capable of handling a 20 to 30 MW IT load, which consists of about three to six thousand racks, along with more than ten thousand square metres of white space to accommodate about 100,000 to 300,000 monitoring and control points," said Chang.
The control system of a data centre consists of NOC operators, redundant control system servers and M&E facilities. High-end servers are used to handle larger data transactions.
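As a rough, illustrative calculation (not from the talk itself), the quoted ranges imply on the order of tens of monitoring and control points per rack:

```python
# Back-of-envelope check of the scale described above (illustrative only,
# using the ranges quoted in the talk).
racks = (3_000, 6_000)        # racks in a typical hyperscale site
points = (100_000, 300_000)   # monitoring and control points

# Dividing the smallest point count by the largest rack count, and vice
# versa, gives roughly 17 to 100 points per rack.
low = points[0] / racks[1]
high = points[1] / racks[0]
print(f"~{low:.0f} to {high:.0f} monitoring/control points per rack")
```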
Giving the example of Facebook's planned data centre in Singapore, Chang underlined that this particular facility is capable of a 120 MW IT load along with a 50 MW local solar PPA, and is about 10 times bigger than a normal hyperscale data centre.
The control system of such an ultra-hyperscale facility would range between one million and three million points.
Supercomputers can be used to handle data for up to one million points, but that would cost a lot, Chang pointed out.
The cutting-edge technology of parallel processing, already adopted for e-commerce, cloud computing and other workloads, is used in ultra-hyperscale facilities. Data transactions can be carried out at a rate of about three thousand points.
Chang further added that, as time goes by, data centre architectures will keep growing, and for bigger data centres that go beyond three million points, a highly parallel server architecture will be required to cover as many points as needed at the required transaction speed.
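A minimal sketch of the kind of parallel, sharded polling such a control system might use is shown below. The point count, worker count and the dummy poll_shard function are illustrative assumptions, not Onion Technology's actual implementation.

```python
from concurrent.futures import ProcessPoolExecutor

NUM_POINTS = 3_000_000   # illustrative: control points in an ultra-hyperscale site
NUM_WORKERS = 64         # illustrative: parallel server processes

def poll_shard(shard):
    """Poll one shard of control points and return (point_id, value) pairs.

    Stand-in for reading sensors over BMS/SCADA protocols; here it just
    returns a dummy value per point.
    """
    start, end = shard
    return [(point_id, 0.0) for point_id in range(start, end)]

def make_shards(num_points, num_workers):
    """Split the point ID space into roughly equal contiguous shards."""
    step = -(-num_points // num_workers)  # ceiling division
    return [(i, min(i + step, num_points)) for i in range(0, num_points, step)]

if __name__ == "__main__":
    shards = make_shards(NUM_POINTS, NUM_WORKERS)
    # Each worker handles its own shard, so total throughput scales with
    # the number of parallel servers rather than a single machine's speed.
    with ProcessPoolExecutor(max_workers=NUM_WORKERS) as pool:
        results = pool.map(poll_shard, shards)
    total = sum(len(r) for r in results)
    print(f"Polled {total} points across {NUM_WORKERS} workers")
```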
Importance of scale and speed
Giving an example to explain the importance of scale and speed, Chang pointed out how video quality began with SD and can now go up to 8K Ultra HD.
"The more scale we have, the more clear and precise data can be added," said Chang.
In terms of speed, Chang took the example of a video game: someone playing at 20 fps will experience it as slow compared to someone playing at 200+ fps. A similar thing happens with data centres, where people look for ever-increasing speed.
Chang further added that a data centre can never be allowed to go down: if some servers fail, other servers take over at half speed or at normal speed, so there is always zero-downtime reliability.
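A toy model of that redundancy idea, assuming a simple primary/standby pair with hypothetical throughput figures, might look like this:

```python
class ControlServer:
    """Toy model of one control server in a redundant pair."""
    def __init__(self, name, capacity_points_per_sec):
        self.name = name
        self.capacity = capacity_points_per_sec
        self.healthy = True

def effective_throughput(servers):
    """Total points/sec the control system can still process.

    With both servers healthy the pair shares the load; if one fails,
    the survivor keeps running (possibly at reduced overall speed),
    so the system never drops to zero.
    """
    return sum(s.capacity for s in servers if s.healthy)

if __name__ == "__main__":
    pair = [ControlServer("primary", 1500), ControlServer("standby", 1500)]
    print("normal operation:", effective_throughput(pair), "points/sec")

    pair[0].healthy = False          # simulate the primary failing
    print("after failover:  ", effective_throughput(pair), "points/sec")
```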
For faster and bigger data processing capability, AI, deep learning, prediction and operational optimisation can be used.
In the coming times, there are bound to be innovations in data centre control systems that will be secure enough to be connected to the outside world. This will not be limited to software or the IT department, but will also extend to network devices and other equipment, making it safe to connect with the outside world, he said.