Virtualization Technology News and Information
Tech Experts Reflect on Data Privacy Day 2024


Data Privacy Day, an international "holiday" that occurs each year on January 28, was created to raise awareness and promote privacy and data protection best practices. Data Privacy Day began in the United States and Canada in January of 2008. It is an extension of Data Protection Day in Europe, which commemorates the January 28, 1981 signing of Convention 108, the first legally binding international treaty dealing with privacy and data protection.

To honor this day, VMblog has compiled some detailed perspectives, as well as some tips for better protection of sensitive corporate data, from a number of industry experts ahead of Data Privacy Day 2024.


Nick Edwards, VP of Product Management, Menlo Security

"The explosion of Generative AI use following the launch of ChatGPT in November 2022 has opened a world of new risks and data privacy concerns. Companies must be aware of how these tools can potentially compromise or expose sensitive data. By nature, they pose a significant security risk, especially when employees inadvertently input corporate data into the platforms. When data is entered into these models, it can be used to further train the model to be more accurate. In May 2023, a group of Samsung engineers input proprietary source code into ChatGPT to see if the code for a new capability could be made more efficient. Because of the model’s self-training ability, the Samsung source code could then be used to formulate responses to requests from other users outside of Samsung. In response, Samsung banned ChatGPT. Our own team of researchers at Menlo Security found more than 10,000 incidents of file uploads into generative AI platforms including ChatGPT, Microsoft Bing, and Google Bard, and 3,400 instances of blocked “copy and paste” attempts by employees due to company policies around the circulation of sensitive information.

To prevent data leakage like the Samsung incident described above, employees should be trained to use these platforms securely. Organizations also need to prioritize data security tools that prevent sensitive information from being shared with Generative AI platforms in the first place. While data loss prevention (DLP) tools are useful, organizations need a layered approach that could include, for example, limiting what can be pasted into input fields, restricting character counts, or blocking known code.
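The layered controls Edwards describes, paste restrictions, character limits, and blocking of known code, can be sketched as a simple outbound filter. This is a minimal illustration under assumed policy values (the limits and patterns below are hypothetical), not a production DLP tool:

```python
import re

# Hypothetical policy values, for illustration only.
MAX_PASTE_CHARS = 1000
CODE_PATTERNS = [
    re.compile(r"\bdef\s+\w+\s*\("),        # Python function definitions
    re.compile(r"\bclass\s+\w+"),            # class declarations
    re.compile(r"-----BEGIN [A-Z ]+-----"),  # PEM-encoded keys/certificates
]

def check_paste(text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for text pasted into a GenAI prompt field."""
    if len(text) > MAX_PASTE_CHARS:
        return False, f"paste exceeds {MAX_PASTE_CHARS} characters"
    for pattern in CODE_PATTERNS:
        if pattern.search(text):
            return False, f"matches blocked pattern {pattern.pattern!r}"
    return True, "allowed"

allowed, reason = check_paste("def secret_algorithm(x):\n    return x * 42")
print(allowed, reason)  # blocked: the paste looks like source code
```

A real deployment would enforce such checks in the browser or proxy layer rather than trusting the client, which is the approach browser-security products take.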

Another data privacy concern was uncovered last week, when OpenAI launched the GPT Store, which allows OpenAI subscribers to create their own custom versions of ChatGPT. As exciting as this is for developers and the general public, it introduces new third-party risk, since these distinct “GPTs” don’t have the same levels of security and data privacy that ChatGPT does. As generative AI capabilities expand into third-party territory, users face murky waters around where their data is going. Securing access to generative AI tools is just one of the topics covered in Menlo's State of Browser Security Report, launched this week, which speaks to the wider landscape of evasive threats targeting users in the browser."


Attributed to Manu Singh, VP of Risk Engineering, Cowbell

"In today’s threat landscape, we are seeing the continued evolution and sophistication of cyberattack techniques and tactics, including bad actors circumventing multi-factor authentication (MFA) and accessing offline backup systems. What the industry previously considered ironclad defenses simply aren’t anymore. This Data Privacy Day, organizations should prioritize staying ahead of threats through:

  • Conducting a risk assessment to identify the vulnerabilities within the organization, and acting on the findings. A risk assessment shows organizations what their architecture looks like, where their vulnerabilities are, and more. Addressing issues identified in a risk assessment puts an organization in a better position to deal with cyber incidents. If you work with a cyber insurance provider, ask them for your organization’s risk assessment report and how they can help you improve your cyber hygiene.
  • Upholding good cyber hygiene. While cybersecurity measures should be tailored to an organization based on its risk assessment, it’s important to follow basic best practices: adopt MFA, deploy an Endpoint Detection and Response (EDR) solution, keep up with patching, maintain good password hygiene by adopting a password manager, and have offline and tested backups/copies of all data."


John A. Smith, Conversant Founder and CSO

"Cyberattacks are the top global business risk of 2024. Data Privacy Week provides organizations an opportunity to raise awareness about data privacy issues and associated security risks, educate individuals about protecting their personal information, and promote more secure organizational data practices.

In today’s digital age, most enterprises obtain personal and confidential data from their employees, customers, and stakeholders, making them vulnerable to a cybersecurity attack or data breach. All organizations have a responsibility to protect their data; many (such as law firms and healthcare institutions) have a fiduciary duty to protect sensitive information regarding clients. These businesses are built on trust, and in many cases lives and financial well-being depend on it; both can be easily and irreparably harmed if data is compromised. Organizations should consider the following to increase data privacy and security within their company:

  • Adhere to regulations and compliance requirements: Enterprises should constantly review and be aware of data privacy regulations, such as GDPR, CCPA, or other regional laws.
  • Understand that compliance isn’t enough: While security frameworks and mandatory compliance standards must be met, they in no way guarantee security: These frameworks and compliance standards should be viewed as a minimum floor. Threat actors are not limited to the guardrails within these frameworks, and threat actor behavior simply changes faster than the frameworks and standards can keep pace with. It’s essential to have a layered security program across people, process, product, and policy that protects the entire security estate with redundant controls.
  • Measure your security controls against current threat actor behaviors: By implementing robust security protocols and conducting regular security assessments against current threat tactics, organizations will know where their vulnerabilities lie and how to protect them. Threat actors are exploiting things that make the user experience easier, such as Help Desks that provide easy access with few verification steps, self-service password tools, and weak forms of MFA. To keep up, companies must trade some level of user convenience for more stringent controls.
  • Know your limitations: Most organizations have gaps in security controls and orchestration because they lack access to breach intelligence—how threat actors are causing damage technically. It’s those very gaps that threat actors seek and prey upon. It’s important to seek expert assistance to gain breach context and act without delay. While addressing these gaps may require additional capital investment, it will cost far less than a breach, its mitigation, and the long-term fallout.
  • Change your paradigms: Systems are generally open by default and closed by exception. Consider hardening systems by default and only opening access by exception (“closed by default and open by exception”). This paradigm shift is particularly important in the context of data stores, such as practice management, electronic medical records, e-discovery, HRMS, and document management systems. How data is protected, how access controls are managed, and how identity is orchestrated are critically important to the security of these systems. Cloud and SaaS are not inherently safe, because these systems are largely exposed to the public internet by default, and these applications are commonly not vetted with stringent security rigor.
  • Most breaches follow the same high-level pattern: While security control selection and orchestration are important, ensuring a path to recovery from a mass destruction event (without paying a ransom) should be the prime directive. Organizations should assume a mass destruction event will occur, so that if it occurs, they can have confidence in their path to recovery.
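The "closed by default and open by exception" paradigm Smith recommends can be sketched as a default-deny access check: every request is refused unless an explicit exception grants it. The resources, roles, and actions below are hypothetical examples:

```python
# Default-deny access control: nothing is reachable unless an explicit
# exception grants it. Resource and role names are hypothetical.
ALLOW_LIST = {
    ("medical_records", "clinician"): {"read"},
    ("medical_records", "records_admin"): {"read", "write"},
    ("hr_system", "hr_staff"): {"read", "write"},
}

def is_allowed(resource: str, role: str, action: str) -> bool:
    """Closed by default: deny unless an explicit exception exists."""
    return action in ALLOW_LIST.get((resource, role), set())

print(is_allowed("medical_records", "clinician", "read"))   # True
print(is_allowed("medical_records", "clinician", "write"))  # False
print(is_allowed("billing", "clinician", "read"))           # False: no rule, so denied
```

The design choice is that absence of a rule means denial; an open-by-default system would instead require enumerating everything to block, which is exactly the posture the quote argues against.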

Data privacy is not just a technical concern, but a crucial tenet of ethical business practices, regulatory compliance, and maintaining the trust of individuals who interact with your business. It has become an integral part of building a secure and resilient digital economy."


Ratan Tipirneni, President & CEO of Tigera

"This Data Privacy Week, enterprises and small businesses alike should prioritize holistic cybersecurity. While Kubernetes adoption has taken off, most Kubernetes teams haven't implemented adequate posture management controls. They continue to implement the minimal level of security mandated by compliance requirements. This bubble is about to burst, and it will manifest as stolen data (data exfiltration) or ransomware. However, this can be prevented through effective posture management that ensures the right egress controls and micro-segmentation are in place."


Attributed to Rick Hanson, President at Delinea

"The end of privacy as we know it might be closer than you think. The world increasingly relies on AI and machine learning technologies, and this reliance could result in privacy becoming less and less of an option for individuals as AI’s capabilities in surveillance and data processing become more sophisticated.

2023 marked a significant leap in the authenticity of deepfakes, blurring the lines between reality and digital fabrication, and that is not slowing down any time soon. Our digital identities, extending even to digital representations of our DNA, can be replicated to create virtual versions of ourselves, raising questions about who actually owns the rights to our online personas.

Unfortunately, advancements in AI technologies are evolving more swiftly than current regulations can keep pace with. In 2024, we can expect stricter data protection requirements across more countries and regions. But until these regulations evolve and can keep pace, it is important to reduce our risk and protect our privacy however possible.

One of the best ways to do this is to continuously check each application, including what data is being collected and processed and how it is being secured. Use a password manager or password vault to securely store credentials, and leverage multi-factor authentication (MFA) to ensure credentials don’t get exploited, by forcing the user to prove their identity beyond just a username and password. In the event that a data privacy breach does occur, it is also important to have a cyber insurance policy in place to ensure you’ll have the means to continue to operate and recover."


Michael Brown, Vice President of Technology at Auvik

"The evident tension between employee monitoring and personal privacy makes it imperative for companies to find and maintain an appropriate balance that upholds critical visibility while respecting boundaries and adhering to data privacy laws.

With the continued expansion of remote and hybrid work, there is a heightened necessity for employers to keep a close eye on the way that employees are utilizing devices and applications in their daily routines. In addition to providing valuable information about the types of technology in use and the ways it is being used, employee monitoring ensures that installed applications are up-to-date, protects against known security vulnerabilities, and identifies potential productivity improvements. However, maintaining data privacy during this process is critical; when boundaries are overstepped and certain kinds of information are collected, monitoring can feel invasive to employees and result in reduced morale as well as potential violations of data privacy laws.

On one end of the spectrum, monitoring an employee’s every action provides deep visibility and potentially useful insights, but may violate an employee’s privacy. On the other hand, while a lack of monitoring protects the privacy of employee data, this choice could pose significant security and productivity risks for an organization. In most cases, neither extreme is the appropriate solution, and companies must identify an effective compromise that takes both visibility and privacy into account, allowing organizations to monitor their environments while ensuring that the privacy of certain personal employee data is respected."


Mark Sangster, Adlumin VP, Chief of Strategy

“Data privacy has never been more critical than it is now. Our personal and professional identities blur together. Our digital identities carry more currency than any other form and are easily manipulated, stolen, or disfigured. We live in a feedback loop of preferences that influence purchases that in turn affect our preferences. In essence, the consumer is the product, and the product is the consumer. What's worse is that artificial intelligence brings both great potential and existential risk: criminal hijacking, poisoned data lakes, and an expanded exposure profile as private data is fed into large language models.

Businesses and public organizations must prioritize privacy and security as the top risk to their operations. Fundamental security practices become the outer shield, with a focus on data and the obligations that come with it. In terms of artificial intelligence, companies must protect data lakes and build policies and procedures to ensure private and protected data does not erroneously leak into training sets for large language models, which can easily expose confidential and damaging information.”


Gene Fay, CEO and Host of eXecutive Security Podcast at ThreatX

"On Data Privacy Day, I encourage everyone to take a moment to think about protecting not only your own data privacy, but also that of those especially vulnerable to scams and data leaks: our senior citizens. You most likely understand the ways that corporations can and will use or abuse your personal information, and the ways that scammers and cyberattackers will try to get access to your sensitive data. But the seniors in your life probably don’t fully understand either of these things. Take some time to explain the ways that their data could be compromised and equip them to spot scammers and cyberattackers.

For instance, help them set up privacy settings on their systems and applications. Encourage them to think twice before signing up for applications or services, explain the potential dangers of clicking on hyperlinks, and reinforce that banks and government agencies won’t ask for their personal information or demand payment.

This is just a start, but helping seniors understand the threats and think twice before taking any action online is the most important step, and a great Data Privacy Day activity."


Sergej Dechand, CEO & Co-Founder, Code Intelligence

"As we approach Data Privacy Day this year, we must add AI to the list of growing privacy concerns. One of the downsides of AI is that it is available to everyone. We have already seen the first reports of malicious actors leveraging self-learning AI for more effective cyberattacks. Looking forward, we have to expect these attacks to become more sophisticated. High-level industry regulation without technical specification contributes to countering this trend, but it will never be enough to enable true security because:  

A) malicious actors will find workarounds to leverage AI for their benefit
B) testing regulation will not be able to keep up with the technical complexity of AI-enhanced software development

To build secure systems in the AI era, it will be more important than ever to remediate issues where they happen: at the technical level."


Baan Alsinawi, Managing Director Security & Risk, CISO Global

"When a customer goes to buy a product, invest in a company, or any other number of types of transactions, they are trading more than just their money for goods and services. They are trading their trust in an organization to keep their personal information confidential and private. As we've seen in recent years, sensitive data is often the target of cyber attacks. Being able to secure customer information is critical. Failing to do so can cause irreparable harm to an organization, both financially and reputation-wise, as well as loss of intellectual property. With this in mind, here are a few best practices to protect customer privacy as we approach Data Privacy Day 2024.

Conduct audits of personnel access to data, along with third-party security audits. Collect only the minimum necessary customer data and destroy the rest. And create a plan to store and secure collected data, while making sure cybersecurity is a part of your organization's culture and priorities in 2024."


Ajay Bhatia, Global VP & GM of Digital Compliance at Veritas Technologies

"Data privacy compliance continues to become more complex. New laws putting guardrails on the use of personal data in the large language models (LLMs) behind GenAI tools are gaining steam. For example, the California Privacy Protection Agency is already working to update the California Consumer Privacy Act (CCPA) to address GenAI and privacy, including opt-out implications. More will follow. This type of legislation, like most other privacy regulations, will differ across continental, country, and state borders, making the already complex regulatory environment even harder to navigate without help.

Whether to implement GenAI isn't really a question. The value it provides employees in streamlining their jobs makes it almost a foregone conclusion. But that must be balanced with the risks GenAI could pose when proprietary or other potentially sensitive information is fed into these systems. To ensure they remain compliant with data privacy standards, whether or not regulatory bodies enact AI-specific rules, IT leaders need to provide guardrails that limit the likelihood that employees accidentally expose something they shouldn't.

AI is making the cyber threat landscape more dangerous. Cybercriminals are already using AI to improve their ransomware capabilities and launch more sophisticated attacks that threaten data privacy. It’s only going to get harder to defend against these threats without AI-powered resilience that counteracts the evolving landscape of AI-powered attacks."


Dave Russell, VP of Enterprise Strategy, Veeam

"Cyber threats like ransomware pose a critical challenge to organizations’ ability to keep their data safe. Given how public attacks have become, and considering consumer demands for better transparency into business security measures, there is generally more awareness around ransomware in 2024. New research supports the idea that ransomware continues to be a ‘when’ not ‘if’ scenario: 76 percent of organizations were attacked at least once in the past year, and 26 percent were attacked at least four times during that time. Data recovery should be a key focus around Data Privacy Day 2024, as it remains a major concern: only 13 percent of organizations say they can successfully recover during a disaster recovery situation. In 2024, overall mindfulness of cyber preparedness will take precedence."

Raja Mukerji, Co-Founder & Chief Scientist at ExtraHop

"A key focus this Data Privacy Day should be on generative AI. As this new approach gains attention across enterprises, concerns about data security and privacy have run rampant. Most enterprises are eager to take advantage of generative AI; however, circumstances like employees uploading sensitive corporate data and IP, the opacity of the criteria used to train models, and a lack of governance and regulation introduce new challenges.

During this time of development, enterprises should focus on ways to make generative AI work for their specific needs and protocols. Visibility into AI tools is critical, and enterprises should have solutions in place that monitor how these tools are being trained and used, while educating employees on best practices for safe and ethical use. Investing in systems and processes that grant this visibility and training will help position generative AI as an aid for productivity in the workplace and help mitigate data privacy concerns. Eventually, enterprises will be able to build their own unique AI tools to better serve their employees, customers, and processes, in a provably secure and repeatable manner."

Danny de Vreeze, VP IAM at Thales

"GDPR continues to set the standard for how data is stored and processed on a regional level, but 2024 will bring an increasing demand for this control in the U.S. and Canada. Enterprise organizations will meet these needs at the company level by implementing strong data encryption methods, including bring-your-own-key and hold-your-own-key features. At the individual level, users will benefit from more options to consent to the use of their data, zero-knowledge proof, and more. As we see more movement, from the U.S. side in particular, on data privacy and protection, data sovereignty will take a front seat in legislative conversations."


Michael Salinger, VP of Engineering & CISO, TrustCloud

"The advancements in Generative AI over the past year have amplified concerns around data privacy and governance, especially around the use of data in the training of models.  As the use of generative AI continues to accelerate, there will be a need to revisit standard data privacy practices in light of these advancements.  It is critical that companies that make use of Generative AI take a privacy-first perspective, and develop governance mechanisms to ensure that privacy is not sacrificed in the rush to adopt this promising technology."


Justin Daniels, Faculty at IANS Research

"Despite an increasing number of privacy laws around the world, many people still have little understanding of how much information is collected about them every hour of every day. In the United States, Congress has yet to pass meaningful privacy legislation at the federal level, resulting in a patchwork of privacy laws that vary from state to state. This lack of clear federal data privacy guidelines makes it painfully difficult for individuals to make informed decisions about how and when to share their personal data and what level of data protection to expect from the companies collecting it.

Adding to the confusion, people are increasingly likely to encounter misinformation and opinions presented as fact. Spreading misinformation is easy and nearly instantaneous in today’s digital environment, reinforcing personal bias despite the availability of trustworthy evidence. Rapid advancements in AI have aggravated the problem by making it easy to create deep fake voices and videos quickly and cheaply. Determining what is real and who to trust with your personal information has never been more difficult — or more important.

As we mark another Data Privacy Day, one goal should be for individuals to become more cautious about sharing their data for a discounted price or minor perk. As they become more data-privacy conscious, brands that protect and manage customer data responsibly will build trust with customers online, offline, and around the world."


Larry Whiteside, Jr., CISO at RegScale

"Privacy is an evolving aspect of our digital landscape, and its significance has been shaped by a pivotal driver: consumers actively expressing the importance of their data, particularly in the aftermath of numerous breaches compromising consumer information. Additionally, companies have been avidly engaging in data collection to gain valuable insights into the consumers they serve. Consequently, organizations are now under greater pressure than ever to handle data responsibly, which is particularly daunting for those managing large volumes of data. However, by adhering to a few fundamental principles, organizations can effectively navigate the demands of privacy regulations.
Principle #1 – Understand Your Data: To comprehend the privacy implications for your organization, it is imperative to be aware of the data at your disposal. This requires a thorough investigation to identify the type of data, its location, users, and access. Although seemingly simple, this task can be complex, emphasizing the critical importance of Principle #2.
Principle #2 – Establish Ownership: Ownership is key for the execution of any program or process. To ensure accountability, assemble a team of stakeholders with board-level visibility to establish policies and standards governing the organization's use, collection, and maintenance of data.
Principle #3 – Implement Sensible Controls: At a high level, three control categories—physical, technical, and administrative—need consideration. These controls serve as the linchpin for determining how to handle privacy data effectively and align with privacy regulatory mandates.
Principle #4 – Minimize Unnecessary Data: Organizations often collect data for specific purposes without establishing processes for its proper disposal once it becomes obsolete. Failure to address this exposes companies to unwarranted risks. Following Principle #1 allows organizations to identify data that should be disposed of to mitigate potential risks.
Principle #5 – Continuous Improvement: Many organizations halt their efforts after completing these fundamental exercises, which can be detrimental. A "rinse and repeat" approach can ensure that privacy measures remain effective, adapting to evolving circumstances. Ceasing at this point risks rendering previous efforts obsolete, as the context of data evolves over time."
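Principle #4 above, minimizing unnecessary data, is often enforced with per-category retention windows: once a record outlives the window tied to its collection purpose, it is flagged for disposal. A minimal sketch (the categories and retention periods are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy, in days, per data category: keep only
# what a stated purpose still requires, and dispose of the rest.
RETENTION_DAYS = {"marketing_leads": 365, "support_tickets": 730, "server_logs": 90}

def records_to_dispose(records, now=None):
    """Return ids of records whose category retention window has lapsed.

    Each record is a dict with 'id', 'category', and 'collected_at' (datetime).
    """
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in records:
        limit = RETENTION_DAYS.get(rec["category"])
        if limit is not None and now - rec["collected_at"] > timedelta(days=limit):
            expired.append(rec["id"])
    return expired
```

Per Principle #5, a sweep like this would run on a schedule ("rinse and repeat") rather than once, so the inventory built under Principle #1 stays current.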


Jeff Reich, Executive Director at the Identity Defined Security Alliance (IDSA)

"More and more of us wake up every day realizing that the amount of control that we have over our digital identities is less than we believed yesterday. Not only do each of us need to take more effective control over our identities, but we also find that the custodians of our data, whom we trust, need to do more as well.

While legislators and leaders take steps to address this issue, most are far enough removed from the actual goings-on that they don’t know how to create the appropriate laws. The time it takes to enact legislation means we are months, if not years, behind where we need to be.

The European Union’s General Data Protection Regulation (GDPR) was an excellent first step towards achieving this goal. Some US states have adopted their own customized versions of it. Federal laws are a patchwork, focused on specific verticals such as banking or healthcare. Add to this the picture across the rest of the globe, and you can see the magnitude of the problem.

We have an underlying problem of poor security across many platforms and applications, leading to untrustworthy privacy provisions. This issue is compounded by the patchwork of privacy laws that drives many organizations to focus on compliance with whatever they feel applies to them. They may believe that compliance leads to security when, in fact, good security leads to compliance.

Adding AI into the equation means that we often don’t know what needs to be done, by whom, or whether the identity we are working with is even the one we think it is. Multi-Factor Authentication (MFA) adds trust and friction at the same time. As a global society, we need to evolve toward more seamless solutions that add trust to identity management and confidence in what we do.

This may sound grim, but better days are coming! Our research tells us that although attacks are up, better controls are being used. In our annual Trends in Securing Digital Identities report, we see that 68% of those surveyed feel that they have a well-managed or optimized capability for managing identities.

As you take steps to improve privacy protections on Data Privacy Day, keep that going because soon after, on the 9th of April, 2024, we will host our fourth annual Identity Management Day. This year, in addition to a great lineup, as in previous years, we will have a true day-long event, starting in Australia, with Oceania and Asia, moving through Europe, the Middle East, and Africa, with the concluding events in the Americas. Together, we will raise awareness of the need for data privacy and identity protection."


Steve Stone, head of Rubrik Zero Labs

The most compromised data

Breaches often compromise the holy trinity of sensitive data: personally identifiable information, financial records, and login credentials. As long as these lucrative data types remain decentralized across various clouds, endpoints, and improperly monitored systems, they will continue to entice and reward increasingly sophisticated attackers.

Why it's vulnerable

Over 60% of sensitive data stored across disparate on-prem, cloud, and SaaS environments lacks unified security protocols. Cybercriminals can easily access the keys to deeply infiltrate these systems and exfiltrate the most valuable data, undetected, over long periods.

How to better protect it

Reliance on prevention alone is simply ineffective. Organizations need cyber resilience, a combination of cyber posture and cyber recovery, to keep their business running without interruption, even in the midst of the inevitable cyberattack.


Sean Costigan, PhD, Director of Cyber Policy, Red Sift

"The lifeblood of our deeply connected global system is data. Global business is deeply challenged by the complexities of cross-border information flows, cybercrime, data privacy, new frameworks, and changing cybersecurity regulations.

Meanwhile, governments are becoming more proactive in issuing guidance and legislating cyber policies. The global cybersecurity landscape is witnessing considerable transformation through regulatory changes that push organizations to prioritize data protection, privacy, and risk management. At present, this is having varied results. In the US, there are now 13 states that have comprehensive data privacy laws, and additional laws will come into focus in 2024. The various efforts could give rise to greater complexity for all involved. While the US does not yet have a national, federal, comprehensive data privacy law, pressure is mounting for one. Globally, cyber regulations may be misaligned between countries, creating challenges for multinational corporations, people, and organizations, even within regulatory blocs like the European Union. Commendably, regulatory changes appear to be driving better prioritization of measures that will improve resilience. For example, new legislation around data protection and privacy is pushing cyber risk into reporting for the corporate boardroom, making cyber resilience an enterprise issue. The goal for all should be to improve confidence in cyberspace and markets by building trust and transparency."


Veronica Torres, Worldwide Privacy & Regulatory Counsel, Jumio

"In the current digital landscape, user data is more accessible than ever. With easier access to personal data, coupled with the emergence of sophisticated and user-friendly tools, cybercriminals are amplifying the scale and intricacy of their attack methods. For example, fraudsters obtain users’ data to construct sophisticated scams like synthetic identity fraud, social engineering attacks, and deepfakes. Today, cybercriminals are even deploying voice-cloning technologies to execute elaborate corporate heists. Fraudsters continuously refine their tactics with each passing day using generative AI tools such as FraudGPT.

These fast-evolving threats require dynamic defenses to keep up. Businesses must have advanced identity verification in place, including biometric-based authentication and AI-powered analytics, to protect against today’s AI-powered cyberattacks. These tools help security teams mitigate threats, spot fraud patterns, and dismantle threat networks before they strike. Given today’s relentless threat landscape, proactive defense is not optional; it's mission-critical to responsibly safeguard business and customer data."


Nick King, CEO and Founder, Data Kinetic

"This year’s Data Privacy Week serves as a reminder to organizations that the responsible development and deployment of AI requires proper governance, privacy standards, and self-regulation. AI engineers, business leaders, and influencers play a crucial role in shaping the future of AI, and the AI Impact Assessment Scale (AIIAS) is a key tool in this process.

The AIIAS provides a standardized method for assessing and classifying AI applications, promoting responsible AI practices, and fostering trust. Adopting and promoting the AIIAS enables the AI industry to adapt quickly to emerging challenges.

As stakeholders, it's essential to prioritize transparency, accountability, privacy, security, fairness, and human-centric design. By focusing on these principles, we can mitigate potential risks and unintended consequences.

Now is the time for AI stakeholders to unite and lead the charge for a responsible AI ecosystem. By embracing data privacy, governance, leveraging self-regulation, and implementing tools like the AIIAS, we can create a better future with AI that benefits everyone."


Jim Liddle, Chief Innovation Officer, Nasuni

"This year’s Data Privacy Week is all about ensuring that organizations maximize their data intelligence with privacy best practices. A shocking number of companies store massive volumes of data simply because they don’t know what’s in it or whether they need it.

These questions must be asked to retain maximum privacy and quality: Is the data accurate and up-to-date? Is it properly classified and ‘searchable’? Is it compliant? Does it contain personally identifiable information (PII), protected health information (PHI), or other sensitive information? Is it available on-demand or archived?

Data Privacy Week serves as a reminder to organizations to answer these questions to ensure they meet data quality, privacy, security, access, and storage requirements. This must be assessed and completed before pursuing AI initiatives that may compound risk exposure without these foundational governance guardrails in place."
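The classification questions above lend themselves to partial automation. The following is a minimal, hypothetical sketch of a PII scan in Python; the patterns, category names, and sample record are illustrative assumptions, and a production classifier would rely on far more robust detection (context, checksums, trained entity recognition).

```python
import re

# Illustrative PII patterns -- assumptions for this sketch, not a
# production-grade detector.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(text: str) -> dict:
    """Return the PII categories found in a block of text."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items() if pat.search(text)}

record = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(classify(record))
```

A scan like this answers only the "does it contain PII?" question; freshness, compliance, and retention status still require metadata and policy checks.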


Jeff Barker, Vice President of Product at security testing company Synack

"It’s all too easy to put something into a large language model without thinking about potential vulnerabilities in the AI application itself that could lead to a cybersecurity breach. As people look for shortcuts to do everything from writing emails to diagnosing patients, AI apps can now double as repositories of highly personal data. Even if they don’t hold personal data from the outset, LLMs can still be poisoned through poor app security, resulting in a user sharing personal information with the adversary.

For more on this issue, I would point to OWASP’s Top 10 LLM Vulnerabilities list, which includes Sensitive Information Disclosure as one of the biggest flaws already affecting this technology."


Dan Benjamin, CEO and Co-Founder of Dig Security (Acquired by Palo Alto Networks)

"As organizations have moved to the cloud, their infrastructure has become increasingly fragmented. With multi-cloud and containerization becoming de facto standards, this trend has intensified. Data storage and processing are dispersed, constantly changing, and handled by multiple vendors and dozens of tools.

To secure data, businesses found themselves investing in a broad range of tooling - including DLP for legacy systems; CSP-native solutions; compliance tools; and more. In many cases two separate tools with similar functionality are required due to incompatibility with a specific CSP or data store.

This trend is now reversing. Economic pressures and a growing consensus that licensing and management overhead have become untenable are leading organizations toward renewed consolidation. Businesses are now looking for a single pane of glass to provide unified policy and risk management across multi-cloud, hybrid, and on-premises environments. Security solutions are evolving accordingly - moving from point solutions that protect a specific data store toward more comprehensive platforms that protect the data itself, wherever it’s stored and in transit."


Ryan Ries, Chief Data Science Strategist at Mission Cloud

"Data privacy is a very difficult topic to understand because there are so many rules and regulations that are constantly changing and that differ from state to state and country to country. People have to look at what kind of data they have and understand all the rules associated with it, which is time-consuming and a serious endeavor. We often see customers that had this under control when they were a smaller company, but as they grow they have to really focus on ensuring they are doing the right things with the data and understanding which rules it falls under. There are so many different layers to data privacy and how you handle it: does it fall under PII, PHI or HIPAA? Do I need to worry about GDPR or data residency? There is a lot to consider, and you need to be diligent that you are handling your data properly."


Rob Price, Director, Field Security Office at Snow Software

"Cybersecurity is an increasingly difficult field, especially as AI is already carving out new paths for cybercrime. While proactive vigilance against threat actors is an undeniable priority, the security industry is plagued with a significant skills shortage. These issues, along with many other contributing factors, are increasing the challenge of data protection.
Due to these factors, individuals must take a much larger role in protecting themselves and their organizations. Humans are the weakest link when it comes to cybersecurity; social media in particular has created a false sense of security around sharing personal information online, which can seriously undermine the security of personal data. It is imperative for individuals to think before they act when sharing any information.
Social engineering is also a very real threat, which has become more sophisticated with the assistance of AI. The effects of AI are already impacting the cybersecurity world, and maintaining privacy online is key for protecting your personal data."


Ammar Bandukwala, CTO and Co-Founder of Coder

"In their day-to-day work, developers are not measured by how their workflows and processes reduce corporate risk. By and large, they’re measured on software delivery, and minimizing corporate risk is usually at odds with this main goal. The front line of this battle between protecting data privacy and maintaining developer productivity is the development environment.
The status quo is for developers to stream data into their local environments for analysis and testing, causing IT and security teams to heavily lock down these environments. However, the organization remains exposed to great risk as customer data, business applications, and unvetted software dependencies co-exist on decentralized devices.
Cloud development environments (CDEs) flip this problem on its head by centralizing all development activity in the organization’s cloud infrastructure. By moving the development environment into the cloud, security teams can shift their focus to creating a secure barrier around the environment instead of within it, unleashing developer productivity while improving the overall security posture of the organization. It’s a rare win-win for both security teams and developers."


Sreedharan K S, Director of Compliance at ManageEngine

"Safeguarding data is always an important priority for businesses, as well as for individuals. With every new technology, the risk profile changes, and with it, fresh challenges arise. The widespread use of AI/ML technology has pushed the boundaries of the amount of personal data that can be collected and analyzed. This has led to the possibility of algorithms learning a person's behavior and making decisions that impact that individual's rights. Large-scale data collection increases the risk of surveillance. Therefore, regulators in various geographies have developed guidelines on the responsible use of AI/ML technologies. It is in the best interest of businesses to follow these guidelines.
Another risk is that of decisions made by AI/ML models trained on specific types of data (images, videos, text, and numbers). Such data, though carefully curated for training purposes, will still likely carry inaccurate information. There should always be a process requiring a human to review the automated decisions made by technology so corrective actions can be taken. There have been instances where employees using AI technologies to simplify their jobs have inadvertently leaked sensitive company information. Organizations should educate employees about these risks and build controls to address them. Organizations that intend to use customer data to train AI models to improve their quality of service should communicate a clear and transparent policy. Furthermore, there should be controls on the use of this collected data so that the purpose for processing it does not transgress the published mandate. The processing of this data should be beneficial in terms of improving productivity but should absolutely not infringe on an individual's rights."


Drew Bagley, CrowdStrike Vice President & Counsel, Privacy & Cyber Policy

"While governments around the globe push to enact data protection laws, Data Privacy Day cautions that today’s cybersecurity landscape poses one of the most significant threats to privacy. This is true for organizations of all sizes that are responsible for safeguarding important data in the face of innovative threat actors and an increasingly regulated environment. Protecting against data breaches is especially challenging today, when identity-based attacks are some of the most commonly employed and hardest to detect. In fact, 80% of cyber incidents involve the misuse of valid credentials to access an organization's network. Identity is a critical threat vector that companies must address as they build their data privacy plans. This means that privacy compliance now requires defenders to pay attention to how adversaries infiltrate organizations and to assess whether they are prepared to defend against those types of attacks. This includes asking whether there is adequate visibility into security events, credentials, and data flows.
Data Privacy Day is also a reminder that aligning privacy and cybersecurity strategies is especially critical as generative AI tools continue to proliferate across enterprises. With every groundbreaking technology, there are new opportunities and risks organizations must be aware of and should anticipate. Notably, responsible AI can be a game-changer in protecting data against breaches. However, AI that lacks privacy-by-design can introduce risk. In parallel with emerging regulations, it is imperative that organizations have visibility into the types of generative AI being introduced into their environments and an understanding of the use cases. Effective data protection today combines content with context to get a real-time understanding of what data — if any — is being shared with third-party entities and what protections are in place to prevent unauthorized data exposure."


Matt Ninesling, Senior Director Tape Portfolio, Spectra Logic

"With the acceleration of ransomware attacks perpetrated by bad actors intelligently using AI to wreak havoc, enterprises see that data can be misplaced, tampered with or even lost in the cloud. This year, more than ever, data privacy emerges as a paramount security concern. However, when an organization opts to write its data to onsite, air-gapped tape storage, the risk of data loss diminishes significantly. Using this method, data is protected from ransomware, software failure, and even natural disasters, and its privacy is maintained, potentially forever. Tape stands out as the preferred storage medium for achieving the utmost cyber resilience and for keeping information long term. Tape is Fort Knox for your data, preserving its privacy and security."


Aron Brand, CTO of CTERA

"The extensive data-processing capabilities of AI, crucial for its sophisticated functions, often involve handling sensitive personal data, heightening risks of breaches and privacy issues. The inherent complexity and lack of transparency in AI models can inadvertently lead to biases and ethical dilemmas, making it harder to comply with regulations. In response, there is a clear trend towards private AI solutions in enterprises, spurred by the need for data control and regulatory compliance. Private AI frameworks allow businesses to tailor AI models to their specific needs while retaining control of sensitive data. This strategy not only aligns more closely with privacy laws but also ensures strict internal management of data processing and storage, improving data security and adherence to regulatory standards.
A key development in private AI infrastructures is the integration of Retrieval Augmented Generation (RAG) pipelines. RAG merges the generative power of AI with data retrieval from databases or unstructured data repositories, making it ideal for applications that demand accuracy and factual correctness. Implementing RAG pipelines within an enterprise’s own infrastructure is vital for maintaining privacy and compliance, particularly in sectors like healthcare and finance. It guarantees that AI outputs derive from secure and relevant data sources, upholding necessary transparency and control for regulatory compliance and public trust."
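As a rough sketch of the RAG pattern described above: retrieve the most relevant internal document, then hand it to a generator as grounding context. This toy Python example substitutes naive keyword-overlap scoring for real embedding search and a stub function for the generative model; the document snippets and function names are illustrative assumptions, not any vendor's API.

```python
# Hypothetical internal documents kept inside the enterprise boundary.
DOCUMENTS = [
    "Patient records must be retained for six years under policy HR-12.",
    "Quarterly finance reports are stored in the EU-West data center.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared keywords; a real pipeline would use vector search."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call constrained to the retrieved context."""
    return f"Answer to {query!r} grounded in: {context[0]}"

query = "patient records retention period"
print(generate(query, retrieve(query, DOCUMENTS)))
```

Because both retrieval and generation run against data the enterprise controls, sensitive records never leave its infrastructure, which is the compliance benefit the quote describes.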


Sohail Iqbal, VP and CISO, Veracode

"Data Privacy Day is an important opportunity to talk about best practices for data protection. One of the main gateways to data is through applications, which have become the primary target for cybercriminals intent on stealing data. Attackers constantly look for vulnerabilities in applications that provide access to sensitive information. At the root of these applications are code flaws. Many development teams and organizations currently lack the tools to ensure their software is free of flaws and vulnerabilities that could lead to costly breaches and data security errors. They also struggle to enforce security policies that safeguard critical data.

To reduce risk and avoid a damaging breach, testing is critical. I can’t stress this enough. It allows organizations to better understand the data security quality of their applications and mitigate risks. Ultimately, the effectiveness of data privacy, security and protection hinges on the accuracy of data loss prevention, the scalability of data security solutions, and the sophistication of data security policy definition and process management capabilities."


Chris Lehman, CEO, SafeGuard Cyber

"Data Privacy Day serves as a great reminder for organizations to re-evaluate the way they safeguard their data, particularly the sensitive data exchanged on business communication channels. There is a shift from email-based fraud to new channels such as SMS, WhatsApp, Signal, social media and other workplace messaging apps like Slack or Microsoft Teams. Organizations should take this as a sign – if they haven’t already – that security defenses must be fortified across every channel. Cyber adversaries will not stop at email to obtain your critical data. As business communication channels continue to evolve and the use of mobile messaging increases, the risk to data security and compliance grows. These channels are critical for business productivity, so rather than forbidding their use or accepting shadow use and risk exposure, we recommend establishing clear policies that define how, when, and where employees should communicate, and how those communications will be monitored. And you don’t have to sacrifice privacy in the name of security. There is innovation taking place to address the concern of employee privacy and enterprise data collection, to ensure only critical company-related data is captured. In the interest of transparency and trust, employers should ensure all employees understand the process and protocol for business communication across all platforms."


Niels van Ingen, Chief Customer Officer for Keepit

"No one likes surprises, particularly IT executives who believe their SaaS cloud providers have taken all the necessary steps to back up customers’ critical enterprise data. This is never more true than when a disaster strikes, whether from an internal mistake or an outside attack, leaving business operations at a complete standstill.
The unfortunate truth is that most SaaS providers don’t offer the necessary level of data backup and recovery that enterprises require to get back up and running.
And guess what? If you read the cloud agreement, you’ll discover SaaS vendors aren’t responsible for data backup. The onus is on you.
It’s easy for individuals and businesses using popular cloud-based services to believe their data is “backed up in the cloud” and easily retrievable in the event of an attack or accidental deletion. However, they quickly learn – often too late – that backup services from SaaS vendors are usually very limited, disorganized, or prohibitively expensive to access. Organizations are also surprised to learn that many SaaS providers offer only a limited data retention period, after which the data is permanently deleted.
That’s why the only true backup – and the last line of defense in SaaS data protection – is having granular, reliable, and fast backup and recovery capabilities, with the data stored separately from the SaaS vendor’s environment."


Alan Bavosa, VP of Security Products at Appdome

"In the spirit of Data Privacy Week, we should champion initiatives that prioritize security and resiliency.
Protecting consumer data and privacy isn’t just about how a company uses that data internally or with partners; it is also about how the data is guarded from wider threats, such as cyber attackers. In fact, data privacy and cybersecurity are intrinsically interlinked: you can’t ensure consumer data is kept private if you don’t prioritize cybersecurity. And this includes the protections on a brand’s mobile app offering, especially as mobile stands as the dominant channel for people’s interactions, making apps an eager target for criminals.
If brands don’t pay attention to how they protect their consumers via mobile apps, they put themselves at huge commercial and reputational risk, as customers may leave. For instance, nearly three-quarters of global mobile consumers stated that they’d be likely or very likely to stop using an app, and tell their friends to stop using it too, following a data breach or upon discovering that it didn’t protect their data.
Clearly, brands that build privacy and security into their mobile applications stand to benefit. Not only will it address cybersecurity fears and build consumer trust, but it will put them on course to comply with regulations such as DORA (the Digital Operational Resilience Act) and the NIS 2 Directive, both of which require cybersecurity resilience."


Published Thursday, January 25, 2024 7:34 AM by David Marshall