There are two types of organization now: those that have been breached, and those that just don’t know it yet.
A big part of the problem is that the traditional approach to network security, relying on perimeter-centric strategies, is failing. According to the 2014 Cyberthreat Defense Report, more than 60% of organizations fell victim to one or more successful cyberattacks last year. But it is the following statistic that shows the ineffectiveness of perimeter defenses: Studies have shown that between 66% and 90% of data breaches are identified, not by the organizations that are breached, but by third parties.
One strong candidate to improve the security situation is the zero-trust model (ZTM). This aggressive approach to network security monitors every piece of data possible, under the assumption that every file is a potential threat. It requires that all resources be accessed in a secure manner; that access control be on a need-to-know basis and strictly enforced; that systems verify and never trust; that all traffic be inspected, logged, and reviewed; and that systems be designed from the inside out instead of the outside in. It simplifies how information security is conceptualized by assuming there are no longer “trusted” interfaces, applications, traffic, networks or users. It inverts the old model of “trust but verify,” because recent breaches have proved that when an organization trusts, it doesn’t verify. The model was initially developed by John Kindervag of Forrester Research and popularized as a necessary evolution of traditional overlay security models.
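To make the “never trust, always verify” posture concrete, here is a minimal sketch of a default-deny access decision in Python. Every name in it (Request, POLICY, verify_identity) is a hypothetical placeholder, not a real product API; the point is simply that nothing is allowed unless identity, device and need-to-know policy all check out, and every decision is logged.

```python
# Minimal sketch of a zero-trust access decision: every request is
# authenticated, authorized against a need-to-know policy, and logged.
# All names here (Request, POLICY, verify_identity) are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    resource: str
    action: str
    device_trusted: bool

# Need-to-know policy: an explicit allow list; everything else is denied.
POLICY = {
    ("alice", "payroll-db", "read"),
    ("bob", "build-server", "deploy"),
}

def verify_identity(user: str) -> bool:
    # Placeholder for MFA / certificate / token verification.
    return user in {"alice", "bob"}

def authorize(req: Request) -> bool:
    """Default-deny: a request passes only if every check succeeds."""
    allowed = (
        verify_identity(req.user)
        and req.device_trusted
        and (req.user, req.resource, req.action) in POLICY
    )
    # Under ZTM every decision is logged and reviewed, allow or deny.
    print(f"AUDIT user={req.user} resource={req.resource} "
          f"action={req.action} allowed={allowed}")
    return allowed

if __name__ == "__main__":
    authorize(Request("alice", "payroll-db", "read", device_trusted=True))   # allowed
    authorize(Request("alice", "payroll-db", "write", device_trusted=True))  # denied
```

Note the design choice: the policy enumerates what is permitted rather than what is forbidden, which is the inversion of trust the model calls for.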
Under ZTM, companies also analyze employee access and internal network traffic, and grant employees only minimal access privileges. The model further emphasizes log analysis and increased use of tools that inspect the actual content of data packets.
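Inspecting the actual content of packets, rather than just headers, can be as simple in principle as scanning payloads for data that should never cross the wire in the clear. The toy sketch below illustrates the idea; the patterns and the alerting step are illustrative assumptions, not a production data-loss-prevention rule set.

```python
# Toy illustration of content-level packet inspection: scan payloads
# for patterns that should never leave the network unencrypted.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(rb"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of sensitive patterns found in a packet payload."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(payload)]

hits = inspect_payload(b"POST /export name=J.Doe ssn=123-45-6789")
if hits:
    print(f"ALERT: payload matched {hits}; log and block per policy")
```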
According to a study conducted by Forrester on behalf of IBM, many organizations are already on the path to supporting ZTM: their responses indicate that they have adopted key ZTM concepts, even though they may not be aware of ZTM itself. This is encouraging, since it suggests that full implementation of ZTM could be a mere extension of activities already in place. Specifically, depending on the activity (e.g., logging and inspecting all network traffic), between 58% and 83% of respondents are already behaving in ways that support ZTM concepts.
Big data meets ZTM
Using ZTM will generate enormous volumes of real-time data, and analyzing it could leave IT managers drowning in log files, vulnerability scan results, alerts and other reports. Adding big data analytics to the mix gives IT managers a comprehensive view of their security landscape, exposing what is at risk, how severe each risk is, how important the asset at risk is and how to fix the security weakness.
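One way analytics turns that flood of findings into a prioritized view is to weight each vulnerability's severity by the importance of the asset it sits on. The sketch below shows the idea; the scores, asset values and the multiplicative formula are assumptions chosen for illustration.

```python
# Sketch of analytics-driven prioritization: combine vulnerability
# severity (CVSS) with asset importance into one priority score.
# The data and the scoring formula are illustrative assumptions.
findings = [
    {"asset": "hr-database",  "cvss": 9.8, "asset_value": 10},
    {"asset": "test-vm",      "cvss": 9.8, "asset_value": 2},
    {"asset": "web-frontend", "cvss": 6.5, "asset_value": 8},
]

for f in findings:
    f["priority"] = f["cvss"] * f["asset_value"]  # severity weighted by importance

for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
    print(f"{f['asset']:<14} cvss={f['cvss']:<4} priority={f['priority']:.0f}")
```

The same critical-severity flaw ranks far lower on a disposable test VM than on the HR database, which is exactly the "how important is the asset at risk" question the prose raises.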
But there’s more to be gained by combining ZTM with big data. A promising approach is to apply behavioral analytics to data already resident in networks, and so detect and disrupt a broad range of suspicious activity.
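A minimal sketch of what behavioral analytics can mean in practice: build a per-user baseline from historical activity already in the logs, then flag behavior that deviates sharply from it. The counts and the three-sigma threshold below are illustrative assumptions.

```python
# Sketch of behavioral analytics over data already in the network:
# derive a baseline from historical logs, flag sharp deviations.
from statistics import mean, stdev

# Daily counts of files downloaded by one user over recent weeks.
baseline = [12, 9, 15, 11, 10, 13, 12, 14, 10, 11]
today = 85

mu, sigma = mean(baseline), stdev(baseline)
z = (today - mu) / sigma
if z > 3:  # more than three standard deviations above normal
    print(f"ANOMALY: {today} downloads today vs baseline "
          f"{mu:.1f}±{sigma:.1f} (z={z:.1f})")
```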
According to Gartner, big data analytics will play a crucial role in detecting cyberattacks. By 2016, more than 25% of global organizations will adopt big data analytics for at least one security and fraud-detection use case, up from the current 8%. Big data will change most product categories in computer network security, including network monitoring, user authentication and authorization, identity management, fraud detection, and governance, risk and compliance (GRC) systems. It will also change the nature of security controls such as conventional firewalls, anti-malware and data loss prevention. In the coming years, data analysis tools will evolve further to enable advanced predictive capabilities and automated, real-time controls.
Finally, using big data analytics in network security requires efficient data capture and analysis that can look broadly and historically across an infrastructure, sometimes reaching back several months, to determine when and how a breach occurred and what the consequences were. This process involves great volume, variety and velocity of data.
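Looking backward across months of captured data can be as simple in outline as filtering retained records against a time window around the suspected breach. The record layout, dates and byte threshold below are assumptions for illustration; real deployments would run such queries over a dedicated log store rather than in-memory lists.

```python
# Sketch of historical breach analysis: filter months of retained
# connection records to reconstruct activity before a detection date.
# Record layout, dates and thresholds are illustrative assumptions.
from datetime import datetime, timedelta

records = [
    {"ts": datetime(2014, 3, 2, 4, 11), "src": "10.0.0.5",
     "dst": "198.51.100.7", "bytes": 9_400_000},
    {"ts": datetime(2014, 5, 18, 3, 40), "src": "10.0.0.5",
     "dst": "198.51.100.7", "bytes": 12_000_000},
]

breach_detected = datetime(2014, 6, 1)
window_start = breach_detected - timedelta(days=120)  # trail back ~4 months

suspect = [r for r in records
           if window_start <= r["ts"] <= breach_detected
           and r["bytes"] > 5_000_000]
for r in suspect:
    print(f"{r['ts']:%Y-%m-%d %H:%M} {r['src']} -> {r['dst']} "
          f"({r['bytes']:,} bytes)")
```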
It’s an open field for companies to introduce new products and services and reap the profits.
Ahmed Banafa is a professor with Kaplan University’s School of Information Technology. He has extensive experience in IT operations and management, as well as a research background in a variety of analysis techniques. He is a certified Microsoft Office Specialist, and he has served as a reviewer and technical contributor for several business and technical books. The views expressed in this article are solely those of the author and do not represent the views of Kaplan University.