Earlier this year, the US president released an executive order on cybersecurity mandating that all government agencies adopt a zero trust architecture by 2024, following NIST SP 800-207, the National Institute of Standards and Technology's official zero trust standard. Because the standard specifically mentions remote browser isolation (RBI) as a component of zero trust, government organizations must implement some form of browser isolation by 2024 in order to be compliant with the NIST standard.
The US Cybersecurity and Infrastructure Security Agency (CISA) has been tasked with helping government organizations implement a zero trust architecture. But what is a zero trust architecture, and how exactly does remote browser isolation fit into it?
The simplest way to explain zero trust is that it's all about establishing valid trust relationships between people and the devices, resources, data, or services they use. Within a zero trust system, no one is trusted with access to a resource until their access has been validated as legitimate and authorized. Even after you have been allowed to access a resource, the system still will not trust that you are who you say you are, and will regularly ask you to prove that you are in fact the user you claim to be.
There are a number of good security reasons for this. For example, if your login were stolen and used to infiltrate your organization's network, the attacker could only access what you are allowed to access, and even then only if they could authenticate as you, which they probably can't. In this example, zero trust has prevented lateral movement by an attacker across your IT infrastructure and denied them access.
The Zero Trust mantra is 'never trust, always verify', and this is the opposite of what we typically do right now, where trust persists after initial authentication and authorization until the action, device, or data is detected as malicious (until something bad happens). This means that 'trusted' networks subject to little scrutiny will no longer exist within the organization. Instead, each and every request by a user to access a resource is analyzed on its merits using a risk-based approach before being verified as legitimate.
Zero Trust has its roots in the idea of trust as a computational concept, first formally defined by Stephen Paul Marsh in his 1994 thesis Formalising Trust as a Computational Concept. The term 'Zero Trust' was later coined and popularized by Forrester Research analyst John Kindervag, and the model has since been distilled into seven key tenets:
1. All data and services are resources - this helps identify where data and services lie and where access requests will originate for the appropriate security policy to be applied.
2. All communication is secured regardless of the location - corporate network resource requests are given the same level of scrutiny as non-corporate resource requests. Local area network file server access requests are just as untrusted as someone trying to log into a cloud service that is publicly accessible.
3. Access to corporate resources is granted per session - access requests should be granted only to the resources requested, not a group of resources. The concept behind this tenet is least privilege, where a requestor should be given access only to the resources needed to complete the task.
4. Use a dynamic policy to grant access to resources - a dynamic policy can take into account a myriad of characteristics to grant access, including the time of day, geolocation, operating system version, the software used to access the resource, the account used for the access request, and whether anti-virus is present and reporting malicious activity. For example, a policy might grant a requestor access to a mail server only if they are using Outlook on Windows 10 from within the United States (IP reputation can be part of the risk-based component), or from a specific IP address, during business hours, and their anti-virus isn't reporting malware detections (risk-based component).
5. Monitor the integrity and security of all corporate-owned assets - this ties into the dynamic policy above in that the requestor device should have no fixable vulnerabilities present, but it goes a step further. There should be a mechanism to automate operating system patching, and to monitor and report both installed updates and updates pending installation for other software on the operating system. Associated devices, such as employee-owned devices, should have access to fewer resources than corporate-owned devices, if they are permitted access at all.
6. Authentication and authorization are dynamic and enforced before access - if a requestor wants to read a file, they must go through authentication, authorization, and dynamic policy assessment. If a requestor wants to change a file after they read it, they have to go through the entire process again. One of the goals of Zero Trust is to keep as much of this process invisible to the requestor as possible, surfacing it only when access is denied or when an additional factor, such as a one-time password, biometric, or access card, is required on top of the username and password.
7. Inventory and logging - the more information policy enforcement points have about requestors, network traffic, and types of requests, the more accurate risk-based access decisions will be with dynamic policies. You want to keep the risk to your system low by allowing access only to low-risk requestors, and to do that effectively, you need data from every system to be correlated.
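Tenets 3, 4, and 6 can be illustrated with a short sketch. The following is a toy, hypothetical policy evaluator (the class, field names, scores, and threshold are all invented for illustration and are not drawn from NIST SP 800-207): each request is scored against the kinds of characteristics tenet 4 describes (client software, OS version, location, time of day, anti-virus state), and access is granted per request, per evaluation, only while the total risk stays under a threshold.

```python
from dataclasses import dataclass

# Hypothetical request context; field names are illustrative only.
@dataclass
class AccessRequest:
    user: str
    resource: str
    client_app: str
    os_version: str
    country: str
    hour: int       # local hour of day, 0-23
    av_clean: bool  # anti-virus reports no detections

def risk_score(req: AccessRequest) -> int:
    """Toy risk-based scoring: each failed check raises the score."""
    score = 0
    if req.client_app != "Outlook":
        score += 2
    if req.os_version != "Windows 10":
        score += 1
    if req.country != "US":
        score += 3
    if not (9 <= req.hour < 17):  # outside business hours
        score += 1
    if not req.av_clean:          # anti-virus is reporting detections
        score += 5
    return score

def evaluate(req: AccessRequest, threshold: int = 2) -> bool:
    """Decide access per request (tenet 3) using a dynamic policy (tenet 4).
    Nothing is cached: every new request is re-evaluated (tenet 6)."""
    return risk_score(req) <= threshold

req = AccessRequest("alice", "mail-server", "Outlook", "Windows 10", "US", 10, True)
print(evaluate(req))  # → True: a low-risk request during business hours
```

A real policy engine would of course pull these signals from identity providers, endpoint agents, and threat feeds rather than from a hand-built dataclass, but the shape of the decision is the same.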
The tenets of Zero Trust boil down to using multi-factor authentication, designing networks using micro-segmentation, and implementing continuous authentication. All government agencies will have to adopt the tenets of Zero Trust by 2024. As with many government-mandated standards, Zero Trust adoption will trickle through corporate, enterprise, and other business policies, eventually landing with most individual users as well; it will become a standard for all, not just federal workers, meaning that Zero Trust will become part of the foundation of our cybersecurity.
Browser isolation also forms a part of this foundation and plays a key role in Zero Trust by isolating malware found in web pages away from the end-user. In zero trust terms, this means prohibiting the execution of untrusted code from the internet on the endpoint; in layman's terms, it means that we don't trust a website unless we have verified that it's safe. Browser isolation is also used as a policy enforcement point for web traffic, phishing detection, web category blocking, and data loss prevention.
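Browser isolation acting as a policy enforcement point can be sketched as a per-URL decision. The snippet below is a minimal, hypothetical sketch (the category sets and the categorize stand-in are invented; real deployments consult live URL-categorization and threat-intelligence services): known-bad pages are blocked outright, while anything untrusted is rendered remotely instead of on the endpoint.

```python
# Hypothetical policy-enforcement sketch for web traffic in an RBI deployment.
BLOCKED_CATEGORIES = {"phishing", "malware"}           # block: phishing detection, category blocking
ISOLATE_CATEGORIES = {"uncategorized", "news", "webmail"}  # isolate: untrusted but not known-bad

def categorize(url: str) -> str:
    """Stand-in for a real URL-categorization service."""
    if "login-secure" in url:
        return "phishing"
    if url.endswith(".example"):
        return "uncategorized"
    return "news"

def enforce(url: str) -> str:
    """Never trust, always verify: no site renders locally until classified."""
    category = categorize(url)
    if category in BLOCKED_CATEGORIES:
        return "block"    # never reaches the user at all
    if category in ISOLATE_CATEGORIES:
        return "isolate"  # render remotely, stream only safe output to the endpoint
    return "allow"

print(enforce("https://login-secure.bad/portal"))  # → block
```

The design choice worth noting is the default: in a zero trust posture, the untrusted bucket is isolated rather than allowed, so an uncategorized site never executes code on the endpoint.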
Because of the President's executive order and the adoption of the standard by government agencies, there is no getting around the fact that zero trust, and by extension browser isolation, is the future of cybersecurity. With the NIST zero trust standard citing browser isolation as a component of the architecture, browser isolation looks set to become the future of endpoint security, and in my view it should be considered an eighth key tenet of zero trust.
Like what we write? Follow WEBGAP on Twitter for more!