Protecting your sensitive information with data masking


In this article I highlight key data attributes organisations should understand to take better advantage of the big data revolution. I also introduce the concept of ‘data masking’ to help manage those key attributes.


Data quality

The first key data attribute organisations should understand is data quality. Creating better, faster and more robust means of accessing and analysing data sets is essential to the business.

However, preserving value and maintaining integrity while protecting data privacy is a real challenge. Users have often had to compromise data quality by using stale, artificial or limited data sets.

This also raises the question of which copy of the data is the master and can be trusted as a current, production-like copy.

As data migrates across systems, controls need to be in place to ensure no data is lost, corrupted or duplicated in the process. The key to the effectiveness of such controls is making the process seamless to the user. Through data virtualisation, integrity, effective data management and copy data controls can all be achieved whilst minimising the storage overhead.

In this way new data sets are complete and protected while remaining in sync across systems.

Data security and privacy

The second data attribute is data security and privacy. These days, embarrassing data leaks are a regular feature of the headlines.

Organisations of any size collect, store and process personal, financial and other information that must be protected. Information security and privacy considerations can be daunting challenges and, unless addressed, will become obstacles in development projects.

Better management of security, privacy and data quality does not have to come at the cost of speed and agility. New data technology has emerged to help raise the level of security and privacy to meet growing compliance requirements while also improving data quality assurance levels.

Compliance and the cloud

The cloud is a great enabler for rapid, agile development and testing in production-like environments. However, getting a consistent copy of production data into the cloud in a way that aligns with the compliance and regulatory constraints placed on an organisation is a huge headache.

An increasing complication for dev/test environments is the requirement to retain certain categories of data in the country of origin. However, to take advantage of all the benefits of the cloud, such as sharing data across geographically separate teams, this ‘data sovereignty’ issue must be addressed.

The answer lies in not using live, sensitive data and so not laying it open to attack. Data masking is the process of taking production data and modifying it for use in non-production environments. It works best when combined with copy data virtualisation: taking a production-like virtual copy in which all sensitive data has been removed and replaced with consistent, testable but fictitious data.

By doing so, organisations can use the cloud to full effect and draw on the best talent available globally, whilst remaining compliant and secure and ensuring the risk profile is effectively managed.

Data masking defined

Data masking acts like a filter to ensure sensitive data does not enter non-production systems. Instead of the original data, a structurally similar but obscured and substituted version, or “dummy data”, is used in dev/test environments, for example.

The process automatically creates secure non-production environments by replacing sensitive and confidential data (such as customer details, patient names, addresses, social security and credit card numbers) with fictitious, yet realistic, data.
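As a rough illustration of the idea (not a description of any particular masking product), the sketch below shows a deterministic substitution in Python: the same real value always masks to the same fictitious value, so masked copies stay consistent and testable. The field names, sample values and secret key are assumptions made purely for the example.

import hashlib
import hmac

# Assumed per-environment secret and fictitious value pools for the example.
SECRET_KEY = b"masking-demo-key"
FIRST_NAMES = ["Alex", "Sam", "Jordan", "Casey", "Morgan"]
LAST_NAMES = ["Smith", "Jones", "Taylor", "Brown", "Wilson"]

def pick(value, choices):
    # Map a real value to a fictitious one deterministically, so the same
    # input always masks to the same output across every copy of the data.
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return choices[digest[0] % len(choices)]

def mask_card(number):
    # Preserve only the last four digits so the data remains testable.
    return "**** **** **** " + number[-4:]

def mask_record(record):
    masked = dict(record)
    masked["first_name"] = pick(record["first_name"], FIRST_NAMES)
    masked["last_name"] = pick(record["last_name"], LAST_NAMES)
    masked["card_number"] = mask_card(record["card_number"])
    return masked

print(mask_record({"first_name": "Jane", "last_name": "Doe",
                   "card_number": "4111111111111111"}))

The keyed hash is what keeps the substitution repeatable without ever storing a lookup table of real-to-fake values; a production-grade tool would of course cover far more data types and preserve referential integrity across whole schemas.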

By replacing this sensitive data, an organisation eliminates the risk of data breaches across all the copies of data created for use by application developers, testers, outsourcers and third parties.

But, as I will explore in this series on data masking, it is a simple requirement with a number of complications. 

A typical traditional approach to data masking involves weekly or monthly offline processes to “scrub” sensitive data sets of anything that would be valuable to a hacker. For an organisation conducting millions of dollars of business across several critical databases, data masking with static scripts is unworkable.
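To make the contrast concrete, here is a minimal sketch of the kind of static scrub pass described above, run here against an in-memory SQLite table so it is self-contained; the table and column names are assumptions for the example. Every copy of the data needs its own scheduled run of a script like this, which is exactly what stops scaling.

import sqlite3

def scrub(conn):
    # Overwrite sensitive columns in place: the offline batch approach that
    # must be re-run and re-validated for every copy of the data.
    conn.execute("UPDATE customers SET email = 'user' || id || '@example.com'")
    conn.execute("UPDATE customers SET ssn = 'XXX-XX-' || substr(ssn, -4)")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, ssn TEXT)")
conn.execute("INSERT INTO customers (email, ssn) VALUES ('jane.doe@corp.com', '123-45-6789')")
conn.commit()
scrub(conn)
print(conn.execute("SELECT id, email, ssn FROM customers").fetchall())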

Introducing Spectrum Dynamic Data Masking

At Spectrum we believe data masking is so important for our customers that we have partnered with a world-leading data masking software company. We now offer a complete data masking solution that caters for both on-premise and cloud-based data masking. 

The new solution forms part of our growing data management portfolio, along with our Infrastructure as a Service cloud and our copy data management platform. If you have data security, privacy or quality concerns, contact me to find out more about dynamic data masking.

Contact me about protecting your sensitive information

In my next article I will expand on dynamic data masking, outlining more of the benefits to be gained from the technology.