At R360, we align to the NIST 800-207 standard for Zero Trust. It is the most vendor-neutral, comprehensive standard, suited not just to government entities but to any organization, and it incorporates elements of related frameworks such as Forrester's ZTX and Gartner's CARTA. Finally, the NIST standard ensures compatibility with, and protection against, modern attacks in the cloud-first, work-from-anywhere model most enterprises need to achieve.
In response to the increasing number of high-profile security breaches, in May 2021 the Biden administration issued an executive order mandating that U.S. federal agencies adhere to NIST 800-207 as a required step for Zero Trust implementation within six months. As a result, the standard has gone through heavy validation with input from a range of commercial customers, vendors, and government agency stakeholders, which is why many private organizations view it as the de facto standard for private enterprises as well.
Unfortunately for the private sector, government agencies are ahead of corporations in adopting and implementing Zero Trust security architecture: 72% of government organizations already use a Zero Trust framework, compared with 56% of companies, according to a report released by IT company Okta.
Zero Trust seeks to address several key principles based on the NIST guidelines.
Executing this framework combines advanced technologies such as risk-based multi-factor authentication, identity protection, next-generation endpoint security, and robust cloud workload technology to verify a user's or system's identity, consider access at that moment in time, and maintain system security. Zero Trust also requires encrypting data, securing email, and verifying the hygiene of assets and endpoints before they connect to applications.
Zero Trust is a significant departure from traditional network security, which followed the "trust but verify" method. The traditional approach automatically trusted users and endpoints within the organization's perimeter, giving unauthorized and compromised accounts wide-reaching access once inside and exposing the organization both to malicious internal actors and to legitimate credentials taken over by attackers. This model became obsolete with the cloud migrations of business transformation initiatives and the acceleration of distributed work brought on by the pandemic that began in 2020.
Zero Trust Architecture therefore requires organizations to continuously monitor and validate that a user and their device have the right privileges and attributes. It also requires enforcing policy that weighs the risk of the user and device, along with compliance and other requirements, before permitting each transaction. The organization must know all of its service and privileged accounts and be able to establish controls over what they connect to and from where. One-time validation simply won't suffice, because threats and user attributes are all subject to change.
As a result, organizations must ensure that all access requests are continuously vetted before allowing access to any of your enterprise or cloud assets. That's why enforcement of Zero Trust policies relies on real-time visibility into hundreds of user and application identity attributes.
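As a simplified sketch (not our actual policy engine), per-request evaluation of a few such attributes might look like the following; the attribute names and the 0.7 risk threshold are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_approved: bool    # push notification approved for this session
    device_healthy: bool  # endpoint hygiene check passed
    geo_blocked: bool     # source matches a geolocation block rule
    risk_score: float     # 0.0 (low) to 1.0 (high), from analytics

def evaluate(req: AccessRequest) -> str:
    """Re-evaluated on every request: no standing trust from earlier approvals."""
    if not req.mfa_approved or req.geo_blocked:
        return "deny"
    if not req.device_healthy or req.risk_score > 0.7:
        return "challenge"  # require step-up verification
    return "allow"
```

The key point is that `evaluate` runs on every request, so a change in any attribute (a failed hygiene check, a spike in risk score) immediately changes the decision.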
These analytics must be tied to trillions of events, broad enterprise telemetry, and threat intelligence to ensure better AI/ML model training for hyper-accurate policy responses. Organizations should thoroughly assess their IT infrastructure and potential attack paths to contain attacks and minimize the impact should a breach occur. This can include segmentation by device type, identity, or group function. For example, connections over commonly abused protocols such as RDP or RPC to the domain controller should always be challenged or restricted to specific credentials.
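To illustrate that last point, restricting sensitive protocol/destination pairs to specific credentials can be modeled as a small lookup table; the protocol names, destinations, and account names below are hypothetical examples, not our configuration:

```python
# Restricted (protocol, destination) pairs and the identities allowed to use them
RESTRICTED = {
    ("RDP", "domain-controller"): {"admin-jump-account"},
    ("RPC", "domain-controller"): {"admin-jump-account"},
}

def is_permitted(protocol: str, destination: str, identity: str) -> bool:
    """Deny restricted protocol/destination pairs unless the identity is allowed."""
    allowed = RESTRICTED.get((protocol, destination))
    if allowed is None:
        return True  # not a restricted segment; general policy still applies
    return identity in allowed
```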
More than 80% of all attacks involve the use or misuse of credentials in the network. With constant new attacks against credentials and identity stores, additional protection for credentials and data extends to email security, secure web gateway, and cloud access security broker (CASB) providers. This helps ensure greater password security, the integrity of accounts, adherence to organizational rules, and avoidance of high-risk shadow IT services.
To start with, our network layer uses WatchGuard Fireboxes as our network routers. They have geolocation blocking, Gateway and Intelligent AntiVirus, botnet detection, intrusion detection, data loss prevention, and web blocking enabled to keep outside threats from reaching our network. These services actively block known countries and networks from which hackers originate malicious activity, block access to malicious websites from inside the network, and stop malicious files from being downloaded.
On the routers we have WatchGuard AuthPoint multi-factor authentication enabled, requiring all users to log in on our website and approve a push notification before they can connect via Remote Desktop. This protects our Remote Desktop Gateways and servers from being accessed without approved MFA, as well as from port scans and denial-of-service attacks.
As a second layer of protection, our Remote Desktop servers also have WatchGuard AuthPoint MFA enabled: all users must approve a push notification a second time in order to log in to their desktop. This ensures that the network they are remoting in from isn't an open network where other users could try to remote in as well. Without our dual layer of MFA you cannot even get to your data, much like a safe deposit box in a bank vault behind multiple security checkpoints.
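The dual-MFA flow above can be sketched as two sequential approvals, both of which must succeed before a session exists. Here `approve_push` is a hypothetical stand-in for the AuthPoint push prompt, not a real AuthPoint API:

```python
def connect(user, approve_push):
    """approve_push(user, context) models an AuthPoint push prompt;
    both approvals must succeed before a desktop session is established."""
    # Layer 1: MFA at the website/gateway, before Remote Desktop is reachable
    if not approve_push(user, "gateway"):
        raise PermissionError("gateway MFA denied")
    # Layer 2: MFA again at the Remote Desktop server itself
    if not approve_push(user, "desktop"):
        raise PermissionError("desktop MFA denied")
    return "session established"
```

Because the second prompt is independent of the first, an attacker who slips past the gateway (for example, on an already-connected open network) still cannot reach a desktop.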
Our Zero Trust approach also applies to applications that we cannot secure from the customer side. These include Outlook, Dropbox, OneDrive, Google Drive, Zoom, Microsoft Teams, WebEx, and any remote access or conferencing software. This is because 99% of customers don't have MFA enabled on these applications, and don't have antivirus, antispam, or antimalware applications or DMARC policies to actively protect the data within them and block malicious files from syncing onto our servers.
WatchGuard Endpoint Protection, Detection and Response (EPDR) is installed on all of our devices to stay ahead of cyber threats, including fileless malware and ransomware. EPDR blocks unknown applications from being installed or executed, stops trusted applications from running malicious code, and prevents risky websites and files from being accessed, based on machine learning and artificial intelligence telemetry.
Tier I: Our Microsoft Hyper-V servers take a Microsoft Volume Shadow Copy (VSS) snapshot of the RAID array where the virtual Remote Desktop servers' hard drives (VHDX files) are stored, at 5 AM, 1 PM, and 9 PM CST daily. VSS is a "local" snapshot of the hard drives that we retain for two days, so if a server, drive, or file is corrupted, we can immediately pull the last known good copy out of VSS and replace the corrupted VHDX files.
Tier II: All of our Remote Desktop servers and data redirection servers (Desktop, My Documents, QuickBooks) have Cove Data Protection installed. Cove runs an hourly backup of each server's System State, C and D drives, and all files and folders. Backups are streamed offsite each hour to Cove's cloud in AWS, and hourly backups can be retrieved going back 28 days. This allows us to restore an entire server or individual files from the cloud backup.
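To see what that retention window means in practice, a small sketch (illustrative only, not part of Cove itself) can enumerate the restore points available at any moment; with hourly backups kept for 28 days, that is 673 points counting both endpoints of the window:

```python
from datetime import datetime, timedelta

def available_restore_points(now, retention_days=28, interval_hours=1):
    """List the hourly restore points within the retention window
    (hourly backups kept 28 days, per the Cove schedule above)."""
    oldest = now - timedelta(days=retention_days)
    points = []
    t = now.replace(minute=0, second=0, microsecond=0)  # most recent hourly run
    while t >= oldest:
        points.append(t)
        t -= timedelta(hours=interval_hours)
    return points
```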
Tier III: For mission-critical servers such as Domain Controllers, Remote Desktop Gateways, and redirection servers, we keep a local "Speed Vault" that synchronizes the hourly backups with Cove's offsite cloud backup, so those servers can be restored locally without needing to pull from the cloud. These servers also have a local standby image that is ready to spin up from each hourly backup.
Tier IV: Customers are responsible for creating their own QuickBooks backup files (*.qbb) and copying them to their local machine(s) to keep a personal backup/archive of the data. QBB files should be created weekly or monthly, before making major changes to your QBW company data file, or prior to any major data import that may affect your data. Copying them to a local PC ensures you have a replica in case of a server outage or inaccessibility.
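For customers who want to automate that copy step, a minimal sketch is below. The function name and folder paths are examples only; point it at the folder where you save your *.qbb files on the hosted server and at a folder on your local PC:

```python
import shutil
from pathlib import Path

def archive_qbb_files(server_folder: str, local_folder: str) -> list:
    """Copy QuickBooks backup (*.qbb) files from the hosted server folder
    to a local archive folder, skipping files already archived."""
    src, dst = Path(server_folder), Path(local_folder)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for qbb in sorted(src.glob("*.qbb")):
        target = dst / qbb.name
        if not target.exists():        # never overwrite an existing archive copy
            shutil.copy2(qbb, target)  # copy2 preserves file timestamps
            copied.append(qbb.name)
    return copied
```

Note the script copies rather than moves the files, so the original QBB stays on the server and the local folder accumulates an archive.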
Ready to get started? Let's find the right hosting for you.