December 18, 2020
Security

The Difference Between a Breach and an Incident and Why It Matters

The barrage of cybersecurity headlines is constant. From the explosion of ransomware attacks to a nearly 20% increase in phishing websites in 2020, the surge in cyberattacks has coincided with the daunting challenge of securing enterprise networks alongside a much larger remote workforce. And despite all of the security controls you put in place and the regulations you have to comply with, it can seem like the bad guys keep winning.

With the growing number and intensity of cyberattacks comes a high volume of security alerts, causing many security professionals to succumb to alert fatigue and potentially miss critical vulnerabilities, which can result in lost revenue and damage to brand reputation. Whether you’re dealing with malicious attempts on your organization’s data or in the throes of a full-blown data breach, you need to distinguish between the two to make the best security decisions and take the proper regulatory compliance measures.

The Difference Between an Incident and a Breach

Security “events” can come in many forms and arrive through many attack vectors. Some may not even be harmful to your network, so it’s worth drawing the distinctions clearly so that you can ensure you have an incident response strategy in place and adhere to compliance regulations.

The National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide defines an event as “any observable occurrence in a system or network.” Events can include benign occurrences like a user connecting to a file share, a server receiving a web page request, a user sending an email or a firewall blocking a connection attempt, or adverse occurrences like packet floods, system crashes and unauthorized use.

NIST describes a security incident as “an occurrence that actually or potentially jeopardizes the confidentiality, integrity or availability of an information system or the information the system processes, stores or transmits or that constitutes a violation or imminent threat of violation of security policies, security procedures or acceptable use policies.” The attempted hack of the U.S. Democratic National Committee’s (DNC) voter database in 2018 is an example of a security incident. A phishing campaign that targeted the party was discovered, but hackers could not gain entry to the party’s system or modify voter information.

The Verizon 2020 Data Breach Investigations Report defines a breach as “an incident that results in the confirmed disclosure—not just potential exposure—of data to an unauthorized party.” One example is Spotify’s data breach in November 2020, when the company discovered a vulnerability in its system that may have inadvertently exposed critical account registration information.

A more extensive example of a breach is the SolarWinds supply chain attack disclosed in December 2020, in which a nation-state hacker group breached SolarWinds’ network and inserted malware into updates for Orion, an IT inventory management and monitoring application, affecting over 18,000 customers, including many U.S. federal agencies. In at least one follow-on intrusion, the hackers bypassed MFA protections: they gained admin privileges and used those rights to steal a secret key (the “akey”) from a server running Outlook Web App. With that key they could generate a valid authentication cookie whenever they needed one, then pair it with a stolen username and password to take over an account.
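To make the mechanics concrete, here is a minimal, hypothetical Python sketch. The cookie format, names and check below are assumptions for illustration, not Duo’s or Microsoft’s actual protocol; the point is simply that if a server treats an HMAC-signed cookie as proof that MFA was completed, anyone who has stolen the signing secret can mint a cookie that passes the server’s own check.

    import base64
    import hashlib
    import hmac
    import time

    # Hypothetical sketch only: a server that trusts an HMAC-signed cookie as
    # proof of completed MFA is fully exposed once the signing secret (the
    # stolen "akey" in this story) leaves the server.
    STOLEN_SECRET = b"signing-secret-exfiltrated-from-the-owa-server"

    def mint_mfa_cookie(username: str, secret: bytes) -> str:
        # Anyone holding the secret can produce a cookie the server accepts,
        # without ever completing the second factor.
        payload = f"{username}|{int(time.time())}".encode()
        sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(payload).decode() + "." + sig

    def server_accepts(cookie: str, secret: bytes) -> bool:
        # The server's check: recompute the HMAC over the payload and compare.
        encoded, sig = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded)
        expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    forged = mint_mfa_cookie("victim@example.com", STOLEN_SECRET)
    print(server_accepts(forged, STOLEN_SECRET))  # True, yet MFA never happened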

Do You Have to Report an Incident or a Breach?

It depends. Various regulatory organizations have specific requirements on what must be reported. SolarWinds, for example, is a U.S. publicly traded company listed on the New York Stock Exchange, so it is subject to Securities and Exchange Commission (SEC) regulations. In 2018, the SEC issued its Commission Statement and Guidance on Public Company Cybersecurity Disclosures to provide specific guidance on cybersecurity incidents. The SEC expects public companies to take all required actions to inform investors about material cybersecurity risks and incidents in a timely fashion. In determining their disclosure obligations, companies need to weigh the potential materiality of any identified risk and, in the case of incidents, the importance of any compromised information, the impact on the company’s operations and the range of harm such incidents could cause.

When it comes to the General Data Protection Regulation (GDPR), an organization must report a breach if there is an incident “leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data” that poses a risk to people’s rights and freedoms. The European Data Protection Supervisor notes that while not every information security incident is a personal data breach, every personal data breach is an information security incident. Organizations that have suffered an incident that can result in “loss of control over their personal data or limitation of rights, discrimination, identity theft or fraud, financial loss, unauthorized reversal of pseudonymization, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned” are required to notify a Data Protection Authority (DPA) within 72 hours of becoming aware of the breach.

The California Consumer Privacy Act (CCPA) requires a business or state agency to notify any California resident whose unencrypted personal information, as defined, was acquired, or is reasonably believed to have been acquired, by an unauthorized person. Regardless of location, companies that serve California residents and have at least $25 million in annual revenue must comply with the law. Companies of any size that hold personal data on at least 50,000 people or collect more than half of their revenue from the sale of personal data also fall under the law.

Specific to healthcare, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) has a Breach Notification Rule that requires HIPAA-covered entities and their business associates to provide notification following a breach of unsecured protected health information. An impermissible use or disclosure of protected health information is presumed to be a breach unless the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised, based on a risk assessment of several factors.

Protect Your Critical Data at All Times with Atakama

Whether you’re dealing with a security incident or data breach, Atakama keeps your data safe from cyberattacks. If you’re required to maintain the safety and integrity of critical customer data and sensitive files, Atakama can encrypt each file automatically using AES with a 256-bit key. Each file’s unique key is automatically fragmented into “key shards” and distributed to users’ physical devices.

Atakama’s decentralized architecture eliminates the need for passwords and central key stores. Even if data is exfiltrated from your network, the files are useless to the hacker unless the necessary key shards have been brought back together to rebuild the key and decrypt them. And you’ll be able to show industry regulators how your data is protected and encrypted at rest.
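For readers who want to see the idea in code, here is a rough Python sketch. It uses the open-source cryptography package and plain XOR key splitting as stand-ins, and it is not Atakama’s implementation, which distributes shards to users’ physical devices under a threshold scheme; it simply illustrates the flow described above: each file is encrypted with its own 256-bit key, the key is split into shards, and decryption only works after the shards are recombined.

    import os

    # pip install cryptography  (assumed third-party library for AES-256-GCM)
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Conceptual sketch only; not Atakama's actual code or protocol. Every
    # shard is needed here to rebuild the key (simple XOR splitting), whereas
    # a real deployment would use a threshold scheme across devices.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, n: int) -> list[bytes]:
        # n-1 random shards plus a final shard chosen so they all XOR to key.
        shards = [os.urandom(len(key)) for _ in range(n - 1)]
        last = key
        for s in shards:
            last = xor_bytes(last, s)
        return shards + [last]

    def rebuild_key(shards: list[bytes]) -> bytes:
        key = shards[0]
        for s in shards[1:]:
            key = xor_bytes(key, s)
        return key

    plaintext = b"sensitive customer record"     # stand-in for a file's contents
    key = AESGCM.generate_key(bit_length=256)    # a unique key for this file
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    shards = split_key(key, n=3)                 # e.g. one shard per device
    del key                                      # the whole key is never stored

    # Decryption is only possible once the shards are brought back together.
    recovered = rebuild_key(shards)
    assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == plaintext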

Request a demo to see how Atakama can protect your critical data during any security incident or breach.
