Over two years have passed since Beijing enacted the national security law that sparked waves of protests and a constriction of civil liberties in Hong Kong. Among the law's fine print are provisions granting the police force of the Hong Kong Special Administrative Region the right to search premises, vehicles, ships, aircraft and other relevant places, as well as electronic equipment where criminal evidence might be stored. Tech giants such as Google and Facebook responded swiftly by blocking authorities' access to user data, while data center operators quietly crept out of the city and relocated elsewhere.
Abandoning Hong Kong's shores in the face of Chinese screening was unsurprising, and likely the best course of action, but it also raised red flags over data privacy and security. This raises the question: how are uninvited or malicious actors able to forcibly mine data from data centers, and what can organisations do to prevent this?
In January this year, researchers at Cyble, a global cyber-intelligence startup, uncovered over 20,000 instances of publicly exposed data center infrastructure management (DCIM) software, including thermal and cooling management dashboards, humidity controllers, UPS controllers and rack monitors: just about every monitoring tool was affected. More worryingly, the same researchers were able to extract passwords used to access the underlying databases, opening the door to status reports and data assets. Default or outdated passwords were the main culprit, allowing security measures to be bypassed almost unobstructed.
When it comes to data center security, locking down the physical premises through controlled access, constant CCTV monitoring and an on-site security team is only the first layer of protection. Strong authentication measures, such as two-factor authentication for all staff, and a comprehensive, regularly reviewed and updated security policy are also vital in protecting systems against cyber attacks. From a network perspective, a robust firewall filters out most threats, and the remainder should be dealt with by an active monitoring system and team.
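To illustrate the authentication layer, the one-time codes produced by authenticator apps can be generated with nothing beyond the standard library. The sketch below implements the standard HOTP and TOTP algorithms (RFC 4226 and RFC 6238); the function names and six-digit default are choices of this sketch, not a prescription for any particular product:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # Counter is encoded as an 8-byte big-endian integer.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks the offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)
```

A server verifying such codes would typically also accept the adjacent time window to tolerate clock drift.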
One Man’s Trash Is Another Man’s Treasure
One could say, however, that the most effective way of dealing with privacy threats is to understand the depth and detail of the data held and, when the time comes, to erase it. After all, what is there to steal if it is unavailable in the first place? Many organisations have a culture of amassing a fortune of obsolete data, out of fear that deleting it would be irreversible. While this is understandable, and a rather human approach to preparedness, the reality is that holding on to outdated data poses a very tangible risk as the threats of ransomware and cyberattacks continue to grow. Old data does not mean useless data.
Studies have revealed that companies retain an estimated 30 per cent of antiquated, redundant data, and that more than 60 per cent of organisations report over half their data as "dark": shorthand for data of unknown value. Although dark data is often seen as serving little purpose, the information embedded within it stems from daily consumer and corporate interactions across a plethora of systems, everything from machine data to server log files to data derived from social media. Organisations ought to have a holistic understanding of where data is stored and what it contains, along with a responsibility to maintain a robust data management policy.
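Building that holistic view can begin with something as simple as an automated inventory pass over storage. The sketch below, a minimal illustration only (the `stale_files` name and 365-day threshold are assumptions of this sketch, not part of any standard), flags files that have not been modified within a retention window, which is one crude proxy for candidate "dark" data:

```python
import time
from pathlib import Path

def stale_files(root: str, max_age_days: int = 365) -> list:
    """Return paths under `root` not modified within `max_age_days`.

    A minimal inventory sketch: real data-governance tooling would
    also record ownership, classification and last-access times.
    """
    cutoff = time.time() - max_age_days * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```

The output of such a pass feeds review, not automatic deletion: flagged files still need an owner's decision on whether retention rules apply.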
A data lifecycle strategy is a viable approach to management; securing buy-in from the business, guaranteeing backup and storage solutions, and having a storage administration team in place are key to such a strategy. Although it sounds straightforward, there is often considerable confusion about what deleting old data actually means and the different methods of achieving it. For example, many companies mistakenly rely on methods such as factory resets or reformatting, which do not reliably erase the underlying data and can leave it recoverable. As a result, the vast majority of organisations today have not taken the necessary steps to implement a data management process, leaving themselves vulnerable to a potential data breach.
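The gap between removing a reference to data and destroying the data itself can be sketched in a few lines. The hypothetical `overwrite_and_delete` helper below overwrites a file's contents before unlinking it; even this is best-effort only, since on SSDs and copy-on-write filesystems the physical blocks may survive, which is why certified erasure tools or full-disk encryption with key destruction are preferred in practice:

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place, then unlink it.

    Sketch only: plain deletion merely drops the directory entry and
    leaves the data blocks recoverable; this at least scribbles random
    bytes over the logical contents first. Not a guarantee on SSDs or
    copy-on-write filesystems.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to stable storage
    os.remove(path)
```

Contrast this with `os.remove` alone, which is the rough equivalent of the factory resets and reformats mentioned above: fast, but not erasure.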
It should be noted, however, that different regions have different data laws and retention obligations, which can limit how data is protected and managed. In September this year, the Australian telco Optus made international headlines as the victim of one of the nation's largest data breaches, with the personal information of up to 10 million people compromised. Upsettingly, the incident was hardly the first: at least 11 data breaches affecting at least 10,000 people each had been reported in the first half of the year.
Although individual organisations undoubtedly have a duty to manage the data they possess, government policy has a hand in dictating how smoothly that can be done. In Australia specifically, the Telecommunications (Interception and Access) Act 1979 requires telecom companies to retain certain data for at least two years: ample time for a mammoth amount of data to pile up. Observers of the Optus breach have called the incident a lesson long in the making, given the cavalier attitude taken by both Australian companies and the government. Conversations on policy reform and increased fines for companies have been ongoing, but nothing has been set in stone.
With data volumes forecast to nearly double in size from 2022 to 2025, organisations need to revise their data deletion and management policies, or create them where none exist, before security risks escalate. Simple policy enactment will go a long way towards giving cyber criminals an extra hurdle to clear, and as data breaches and hacks become more common, companies need all the help they can get.