
Canonical OpenStack vs Red Hat OpenStack

OpenStack is a prominent open-source platform for building and managing cloud infrastructure. Today, several OpenStack distributions are available, with Red Hat OpenStack and Canonical OpenStack being the two most popular. Although both offer robust cloud solutions, their approaches, features, and support models differ significantly.

This article explores these differences in detail to help companies choose the right cloud infrastructure.

Overview of Canonical OpenStack

Canonical OpenStack, also called Charmed OpenStack, is built on Ubuntu. Its goal is to make OpenStack deployment and administration more efficient.

It uses Canonical’s products, such as Juju for orchestration and MAAS (Metal as a Service) for hardware provisioning, to let users automate the whole lifecycle of their cloud infrastructure.

Key Features of Canonical OpenStack

  • Model-Driven Operations

Using a model-driven approach, Canonical OpenStack simplifies the management of cloud resources and makes them easier to scale.

  • Automation

The heavily automated deployment procedure reduces the time and complexity involved in building an OpenStack cloud.

  • Flexible Deployment Options

Organizations can choose between self-managed and Canonical-managed deployments, depending on their requirements.

  • Integration with Kubernetes

Canonical lets users run virtual machines and containers on the same platform, enabling a consistent approach to workload management.

Overview of Red Hat OpenStack

Red Hat OpenStack Platform (RHOSP) is deployed on top of Red Hat Enterprise Linux, which enables tight integration with other Red Hat products. Red Hat stresses stability, security, and enterprise-grade support, making RHOSP a popular choice for companies seeking a robust cloud solution.

Key Features of Red Hat OpenStack

  • Enterprise Support

Red Hat offers extensive support options, including managed services that cover deployment, upgrades, and ongoing maintenance.

  • Integration with Red Hat Ecosystem

It integrates seamlessly with other Red Hat solutions like Ansible for automation and Satellite for systems management.

  • Comprehensive Monitoring Tools

RHOSP includes centralized logging, performance monitoring, and availability monitoring tools to ensure optimal cloud operation.

Simple Comparison Table

Feature | Canonical OpenStack (Charmed OpenStack) | Red Hat OpenStack Platform
Distribution | Ubuntu | Red Hat Enterprise Linux
Deployment Methodology | Charm-based, declarative | Ansible-based, procedural
Management Tools | Juju | Red Hat CloudForms
Support Model | Canonical’s commercial support | Red Hat’s commercial support
Integration with Other Products | Tightly integrated with other Canonical products (e.g., Kubernetes, Ceph) | Tightly integrated with other Red Hat products (e.g., Red Hat Enterprise Virtualization, Red Hat CloudForms)
Pricing | Subscription-based, per-node pricing | Subscription-based, per-node pricing
Focus | Simplicity, automation, scalability | Enterprise-grade, stability, security
Target Audience | Developers, DevOps teams, cloud service providers | Large enterprises, IT departments
Community Involvement | Strong contributor to the OpenStack community | Active contributor to the OpenStack community

 

Comparing Canonical OpenStack vs Red Hat OpenStack

  • Release Cadence

Canonical OpenStack follows a six-month release cycle, with Long-Term Support (LTS) releases every 18 months, so customers receive new features and improvements frequently. Red Hat also follows a six-month release cycle, but its long-term releases arrive every two years. This favors stability but may delay access to new features compared with Canonical’s approach.

  • Bare-Metal Provisioning Tool

For bare-metal provisioning, Canonical OpenStack uses MAAS, enabling customers to control the physical servers in their cloud environment effectively. Red Hat OpenStack uses Ironic as its bare-metal provisioning tool, which is also efficient but may require different operational skills than MAAS.

  • Maximum Support Timeline

Canonical OpenStack offers a maximum support timeline of five years for its releases. This shorter support period may require organizations to plan upgrades more frequently. However, Red Hat OpenStack has a longer maximum support timeline of ten years, which can appeal to enterprises looking for long-term stability and support without frequent upgrades.

  • Managed Services

Canonical offers managed services for OpenStack through its solution called BootStack. This fully managed service allows Canonical to use their expertise to build, monitor, and maintain your private cloud. They handle everything from initial deployment to operations management, including software updates, backups, and monitoring. However, there is also an option to self-manage your infrastructure with the help of Canonical.

Similarly, Red Hat OpenStack offers managed services, giving organizations the option to outsource the management of their cloud infrastructure to Red Hat. This is especially useful for firms that lack in-house expertise. Red Hat also works with managed service providers (MSPs) to offer OpenStack as a managed private cloud solution, so companies can minimize disruptions while maintaining operational control.

  • Support Options

Selecting an OpenStack distribution requires careful consideration of several factors, including support. Canonical provides flexible support options, allowing users to choose between fully managed services and self-managed configurations. This adaptability serves companies with varying degrees of expertise in cloud infrastructure management. Red Hat, on the other hand, offers robust enterprise support, including thorough maintenance programs tailored for large-scale deployments.

  • Upgrade Process

Canonical’s method supports automated upgrades that can be scheduled, minimizing significant downtime. The Red Hat upgrade process, by contrast, is manual and can be complex, which may slow or disrupt workflows during the maintenance window.

  • Ecosystem Integration

Canonical OpenStack is designed to work well with a variety of third-party components. It leverages MAAS (Metal as a Service) for hardware provisioning and Juju for service orchestration. Through its OpenStack Interoperability Lab (OIL), Canonical tests hundreds of configurations to guarantee interoperability with a range of hardware and software solutions.

Red Hat, on the other hand, is closely tied to its own ecosystem. For companies already using Red Hat products, this offers a cohesive experience. Such integration could, however, restrict flexibility and potentially lock customers into the Red Hat environment.

  • Cost Structure

For companies running many instances across different hardware configurations, Canonical offers a per-host pricing model, which can be more predictable and economical. Red Hat’s per-socket-pair pricing, on the other hand, can result in higher costs in environments with many physical servers that each have few sockets.

  • Monitoring Tools

Though both systems have monitoring features, their scope and complexity vary. Through its Landscape tool, Canonical offers basic monitoring; more sophisticated monitoring requirements may call for additional tooling. Red Hat, on the other hand, offers a full suite of monitoring tools, giving companies deeper insight into their cloud operations without resorting to third-party solutions.

  • Subscription Model

Canonical OpenStack does not require a subscription for its basic services, so users can deploy and manage their cloud infrastructure without ongoing licensing costs; commercial support is optional. Red Hat OpenStack, however, depends on a per-socket-pair subscription model, which can be rather expensive (around USD 6,300 per socket-pair). This approach may result in greater costs for businesses with many physical servers.
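As a rough illustration of how the two pricing models diverge, the sketch below compares hypothetical totals. Only the ~USD 6,300 per-socket-pair figure comes from the article; the per-host price and server counts are invented for the example.

```python
# Illustrative comparison of per-host vs. per-socket-pair subscription
# pricing. Only the ~USD 6,300 per-socket-pair figure comes from the
# article; every other number here is an assumption for the sketch.

def per_host_cost(hosts: int, price_per_host: float) -> float:
    """Total cost under a flat per-host model."""
    return hosts * price_per_host

def per_socket_pair_cost(hosts: int, sockets_per_host: int,
                         price_per_pair: float = 6300.0) -> float:
    """Total cost when each pair of CPU sockets is billed separately."""
    pairs_per_host = (sockets_per_host + 1) // 2  # odd counts round up
    return hosts * pairs_per_host * price_per_pair

# Example: 50 dual-socket servers under each model (per-host price hypothetical)
print(per_host_cost(50, 4500.0))    # 225000.0
print(per_socket_pair_cost(50, 2))  # 50 hosts * 1 pair * 6300 = 315000.0
```

The takeaway is structural rather than numeric: per-socket-pair billing scales with both server count and socket density, so fleets of many small servers feel it most.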

Data Protection for OpenStack

Storware backup and recovery provides comprehensive data protection for OpenStack environments, including both Red Hat and Canonical distributions. Its agentless architecture ensures seamless integration without impacting performance. Storware can protect a wide range of OpenStack components, including instances, volumes, and metadata. Additionally, it offers granular restore options, allowing you to recover specific files or entire instances as needed. With Storware, you can safeguard your critical OpenStack data and ensure business continuity in case of unexpected events.

 

Conclusion

Choosing between Canonical OpenStack and Red Hat OpenStack ultimately comes down to an organization’s particular needs. With customizable support options suited to many contexts, Canonical’s Charmed OpenStack excels in automation and ease of use. Red Hat’s product, on the other hand, distinguishes itself through enterprise-grade dependability and a comprehensive support system designed for large companies seeking robust cloud solutions.

Understanding these differences will help you choose the distribution that fits your operational needs and strategic objectives as you build a sustainable cloud infrastructure.

About Storware
Storware is a backup software producer with over 10 years of experience in the backup world. Storware Backup and Recovery is an enterprise-grade, agent-less solution that caters to various data environments. It supports virtual machines, containers, storage providers, Microsoft 365, and applications running on-premises or in the cloud. Thanks to its small footprint, seamless integration into your existing IT infrastructure, storage, or enterprise backup providers is effortless.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

How Unified Endpoint Management Supports Zero Trust Architecture

“Never trust, always verify.” It’s more than just a catchy phrase; it’s the core principle behind the Zero Trust security model.

But in a landscape where threats constantly evolve, how can businesses ensure they’re truly following this mandate? Are traditional security methods enough to keep up with the complexities of digital frameworks? How do you ensure that every device, user, and access point in your network is secure?

From mobile phones and laptops to IoT devices, each endpoint represents a potential vulnerability. As businesses adopt the Zero Trust model, Unified Endpoint Management (UEM) emerges as a key solution. It’s not just about managing devices; it’s about continuously verifying them, controlling access, and monitoring activity to prevent unauthorized access at every turn.

Let’s find out how UEM enforces the principle of “never trust, always verify” across all endpoints, ensuring that your network remains secure in a world where trust is a luxury no one can afford.

Understanding Zero Trust Architecture
Digital threats are rapidly evolving in sophistication, so relying on outdated security models that trust users and devices by default simply won’t cut it. Zero Trust flips this concept on its head, demanding rigorous checks at every level of access and interaction.

So, what exactly does Zero Trust entail? It’s about ensuring that every access request—whether from inside or outside the network—undergoes continuous verification. There’s no inherent trust based on a user’s location or device; instead, every request is treated with suspicion until proven otherwise. This means that even if a device or user is inside the corporate network, they’re not automatically granted access to all resources. Instead, access is tightly controlled and continuously validated.

Key principles of Zero Trust include:

  • Least Privilege Access: Users and devices are given the minimum level of access necessary to perform their functions. This reduces the risk of unauthorized access and limits the potential damage in case of a breach.
  • Micro-Segmentation: The network is divided into smaller, isolated segments, so even if one segment is compromised, the threat is contained and doesn’t spread across the entire network.
  • Continuous Monitoring: Regularly checking the health and behavior of users and devices helps detect anomalies and potential threats in real time, ensuring that any suspicious activity is addressed immediately.
With people accessing corporate resources from different locations and devices, the traditional security perimeter has all but disappeared. This shift requires a new approach to security.
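The least-privilege principle above boils down to an explicit allow-list: access is denied unless a role has been granted a resource. The sketch below makes that concrete; the role and resource names are hypothetical, not any product's policy format.

```python
# Minimal sketch of a least-privilege access decision.
# Role and resource names are hypothetical illustrations.

ROLE_PERMISSIONS = {
    "developer": {"git", "ci"},
    "finance":   {"erp"},
    "admin":     {"git", "ci", "erp", "hr"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Grant access only if the role explicitly includes the resource.

    Unknown roles get an empty permission set, so the default is deny:
    there is no implicit trust anywhere in the decision.
    """
    return resource in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("developer", "erp"))  # False: never granted, so denied
print(is_allowed("admin", "erp"))      # True: explicitly granted
```

A real policy engine also weighs context (device health, location, time), but the default-deny shape stays the same.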

Unified Endpoint Management: Beyond Device Management
UEM is an all-in-one toolkit that goes beyond the basic device management we’re used to. It’s about harmonizing and securing every device, application, and data within your business. While traditional device management might have focused solely on deploying and maintaining hardware, UEM takes a broader approach, integrating multiple aspects of IT management into a single platform.

UEM is a comprehensive solution that manages and regulates a diverse range of devices and applications. It unifies the management of mobile devices, desktops, applications, VR devices, wearables, rugged devices, digital screens, and IoT devices within a single platform, streamlining the oversight of every endpoint from smartphones and tablets to laptops.

But UEM doesn’t stop there; it also encompasses application management, identity management, and data security, providing a holistic view of your entire IT ecosystem.

The evolution of UEM is a testament to its growing importance. It began as Mobile Device Management (MDM), focusing on mobile devices and their security. As the IT sector evolved, so did MDM, transforming into UEM to address the complexities of managing a wider range of endpoints, ensuring that every endpoint remains secure, compliant, and optimized, regardless of where or how it’s used.

UEM and Zero Trust: The Perfect Synergy
When it comes to integrating security and management, UEM and Zero Trust are the duo that complement each other seamlessly. Many CXOs now recognize that pairing UEM with Zero Trust is essential for safeguarding their companies. Here’s how UEM supports and enhances the Zero Trust model, making it a powerful ally:

1. Enforcing the Principle of Least Privilege
Zero Trust is all about ensuring that users and devices have only the minimum level of access based on their level of responsibility. UEM plays an important role by meticulously managing and controlling access across all devices. It ensures that permissions are granted based on role, necessity, and context, so users get just enough access to do their job. This granular control minimizes potential risks and enforces the Zero Trust principle of least privilege effectively.

2. Continuous Monitoring and Threat Detection
In the Zero Trust model, constant vigilance is key. UEM’s robust monitoring capabilities align perfectly with this need. It continuously tracks user behavior and device health, looking out for any anomalies that could indicate a threat. This real-time oversight ensures that any suspicious activity is detected and addressed promptly, keeping your network secure from emerging threats.

Scalefusion UEM further enhances this approach by enabling overall centralized visibility, allowing organizations to monitor all endpoints from a single platform. This centralization streamlines threat detection and response, making it easier to identify potential vulnerabilities and take swift action to mitigate risks.

3. Identity and Access Management (IAM) Integration
Effective security requires tight integration between identity management and access controls. UEM enhances Zero Trust by working seamlessly with IAM solutions. This integration ensures that access permissions are managed consistently and securely across all endpoints, enforcing Zero Trust principles by validating every access request against strict identity and access controls. Solutions like Scalefusion OneIdP play a key role here by focusing on conditional access management, ensuring that users are granted access based on real-time conditions and policies, further strengthening the security framework.

4. Multi-Factor Authentication (MFA)
MFA is a critical component of Zero Trust, adding an extra layer of security to user authentication. UEM solutions enable the implementation of MFA, requiring users to provide multiple forms of verification, such as something they know (a password), something they have (a smartphone or security token), and something they are (biometric data like a fingerprint or facial recognition). By integrating MFA with UEM, these solutions streamline the enforcement of MFA policies, ensuring compliance and security across all endpoints.
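To make the "something they have" factor concrete, here is a minimal time-based one-time password (TOTP) generator of the kind authenticator apps implement. This is a sketch of RFC 6238 for illustration, not a production implementation.

```python
# Minimal TOTP (RFC 6238) sketch: derives a short-lived code from a
# shared secret and the current time, as an authenticator app does.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Return the time-based one-time code for the given moment."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (HOTP)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

Because both sides derive the code independently from the shared secret and the clock, the server can verify possession of the device without the code ever crossing the network ahead of time.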

5. Single Sign-On (SSO)
SSO simplifies user access by allowing individuals to log in to multiple applications with a single set of credentials, streamlining the login process and maximizing productivity.

Enterprises increasingly recognize the value of SSO for enhancing multi-application usage and centralizing user activity monitoring, which facilitates tracking resource utilization and identifying behavioral patterns. UEM solutions support SSO capabilities, improving user experience while reducing the risk of password-related vulnerabilities. This approach aligns seamlessly with Zero Trust principles by ensuring that access is managed efficiently and securely.

6. Automated Response and Remediation
One of the key challenges in maintaining a Zero Trust environment is quickly responding to potential threats and vulnerabilities. UEM enhances Zero Trust by automating response and remediation processes. For instance, when a device exhibits suspicious behavior or fails to meet compliance standards, UEM can automatically take remediation actions, such as isolating the device, blocking access, or initiating a security scan. This automation supports Zero Trust’s requirement for continuous monitoring and quick responses, ensuring potential threats are managed effectively.
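The remediation flow just described reduces to a mapping from compliance findings to response actions. The sketch below is purely illustrative; the finding and action names are hypothetical, not any specific UEM product's API.

```python
# Hypothetical sketch of automated remediation: map the compliance
# findings reported for a device to ordered response actions.
# All names here are illustrative placeholders.

REMEDIATIONS = {
    "os_outdated":      "force_update",
    "jailbroken":       "isolate_device",
    "no_encryption":    "block_access",
    "suspicious_login": "require_reauth",
}

def remediate(findings: list) -> list:
    """Return the actions to take, in the order findings were reported."""
    return [REMEDIATIONS[f] for f in findings if f in REMEDIATIONS]

print(remediate(["jailbroken", "no_encryption"]))
# ['isolate_device', 'block_access']
```

The point of encoding the policy as data is that responses fire without a human in the loop, which is what makes continuous enforcement feasible at fleet scale.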

Future Outlook: The Role of Automation in UEM and Zero Trust
1. Automation in Zero Trust Enforcement
Enforcing Zero Trust policies manually is increasingly impractical, especially for large-scale enterprises with numerous endpoints. Automation helps streamline and enhance the enforcement of Zero Trust principles. It allows for real-time compliance checks, automatic threat detection, and seamless policy application. By taking over routine tasks and monitoring, automation ensures that security measures are consistently applied, reducing human error and accelerating response times to potential threats.

2. Emerging Trends
Several key trends are shaping the future of UEM and Zero Trust. The integration of Internet of Things (IoT) devices is one such trend, as these devices become more prevalent in business environments. UEM solutions are evolving to include advanced capabilities for managing and securing a wide range of connected devices, such as smart thermostats in offices and connected medical devices in healthcare, ensuring only authorized access and continuous monitoring for compliance and anomalies.

Another notable trend is the shift toward edge computing. As data processing moves closer to the source of data generation, securing these edge environments becomes critical. UEM will increasingly focus on extending Zero Trust principles to decentralized endpoints, ensuring comprehensive protection even in the most distributed IT environments.

UEM as the Backbone of the Zero Trust Architecture
UEM supports Zero Trust by ensuring continuous verification of every device, enforcing strict access controls, and maintaining constant monitoring. From automating threat responses to integrating identity management, UEM enhances the effectiveness of Zero Trust principles. By managing and securing all endpoints, Scalefusion UEM helps create a strong fortress around your digital environment, making it harder for threats to breach and move within your network.

Now is the time to assess your current security posture. It’s worth exploring how UEM in Zero Trust security can be a critical component of your strategy. With the right approach, you can navigate modern threats and complexities and secure your digital environment.

Scalefusion UEM, integrated with IAM, can help achieve this synergy seamlessly. By combining robust endpoint management with identity and access controls, Scalefusion enables you to implement Zero Trust principles efficiently, ensuring that every endpoint is continuously secured and compliant.

About Scalefusion
Scalefusion’s company DNA is built on the foundation of providing world-class customer service and making endpoint management simple and effortless for businesses globally. We prioritize the needs and feedback of our customers, making sure that they are at the forefront of all decision-making processes. We are dedicated to providing comprehensive customer support services, and place emphasis on customer-centric thinking throughout the organization.


How to Securely Access Internal Web Applications Without a VPN

Introduction  

As a solution engineer, I’ve seen firsthand how critical it is to secure access to internal resources to protect sensitive data from unauthorized access. Implementing Two-Factor Authentication (2FA) for intranet applications and internal web resources is one of the most effective methods to strengthen security and reduce risk. With Thinfinity’s Role-Based Access Control (RBAC) capabilities and seamless integration with major identity providers, organizations can efficiently enforce 2FA and maintain secure remote access to internal systems using Thinfinity’s Web Application Gateway (WAG) as an SSL VPN.

 

Why Enforce 2FA for Intranet Applications with SSL VPN?

Implementing SSL for VPN connections, specifically using Thinfinity’s Web Application Gateway (WAG), is crucial for ensuring a secure way to access internal web resources. Here’s why enforcing 2FA with SSL VPN can make a difference for intranet security:

Intranet applications and internal web resources often hold critical information about an organization. Limiting access to only authorized personnel is key to preventing data breaches. Relying on passwords alone is no longer enough, as they are vulnerable to attacks like phishing, brute force, and credential stuffing. Implementing 2FA adds an extra layer of security by requiring users to provide a secondary factor, like a code from an authenticator app or biometric verification, along with their password.

Two-Factor Authentication reduces the risk of compromised credentials, as an attacker would need both the password and the second factor to gain access. Enforcing 2FA for intranet applications and internal resources is a smart strategy for strengthening the security of an organization’s network and reducing the risk of unauthorized access.

Thinfinity’s RBAC and SSL VPN with 2FA: A Comprehensive Security Solution

Thinfinity Workspace strengthens security by integrating Role-Based Access Control (RBAC) with major identity providers like Okta, Microsoft Azure AD, and Google Identity, making it an ideal SSL VPN solution with 2FA support. This integration makes it easy to enforce 2FA for internal applications, ensuring that only authorized users can access specific resources.

Using RBAC, IT administrators can assign specific roles to users, defining which intranet applications or internal resources they can access. This follows the principle of least privilege, meaning users only get access to what they need for their roles, which helps reduce the attack surface. Combining RBAC with 2FA reduces both external and insider threats, providing an extra level of security.

 

How Thinfinity’s SSL VPN Makes 2FA Enforcement Simple

Thinfinity offers a straightforward process for enforcing 2FA across intranet applications. The key components include:

  1. Integration with Identity Providers: Thinfinity integrates with popular identity providers, including Okta, Azure Active Directory, and Google Identity, providing SSL VPN functionality with advanced 2FA enforcement. Administrators can enforce the 2FA policies already configured in these platforms, making management easier and more consistent.
  2. RBAC Configuration: Administrators can use Thinfinity’s RBAC features to configure user roles and permissions, ensuring access is tightly controlled based on user roles. This minimizes the chances of unauthorized access to sensitive resources.
  3. Clientless Access Using SSL VPN: Thinfinity provides a clientless, browser-based way to access internal web resources using its Web Application Gateway (WAG) as an SSL VPN. This SSL for VPN approach ensures secure connectivity without needing any additional software, simplifying the user experience. Users can securely access these resources from any device, and with 2FA enabled, they must verify their identity twice—using a password and a second factor—before they can access the resources.
  4. Enhanced Logging and Monitoring: Thinfinity’s integration with identity providers also supports enhanced logging and monitoring of access attempts. Administrators can see who is accessing internal applications, when, and from where. This information is important for compliance and helps quickly identify suspicious activity.

 

Benefits of Enforcing 2FA with Thinfinity’s SSL VPN

Secure Remote Access with SSL VPN: Enforcing 2FA through Thinfinity’s SSL VPN ensures that remote access to internal resources is secure, making it difficult for unauthorized users to access intranet applications without proper verification.

Alignment with Zero Trust Security: Thinfinity’s RBAC and 2FA capabilities are consistent with Zero Trust security principles. Users are verified every time they try to access a resource, and only those meeting strict authentication requirements are allowed in.

Simplified User Experience: With Thinfinity’s clientless access through its SSL VPN, users do not need to install extra software to access internal resources. The 2FA process is smooth, which increases security without adding unnecessary complexity.

Compliance and Risk Mitigation: Enforcing 2FA helps organizations meet industry regulations that require multi-factor authentication for accessing sensitive data. Additionally, it reduces the risk of data breaches by ensuring that a password alone is not enough for access.

 

Thinfinity as an Alternative to Fortinet and Sophos SSL VPN Solutions

When comparing SSL VPN solutions, Thinfinity’s Web Application Gateway (WAG) stands out as a strong alternative to Fortinet and Sophos. One of the primary advantages of Thinfinity WAG is its seamless integration of 2FA through popular identity providers, such as Microsoft Azure AD and Google Authenticator, which allows organizations to enforce 2FA in a straightforward and consistent way.

Fortinet SSL VPN 2FA with Microsoft Authenticator: Fortinet’s SSL VPN solution allows administrators to enforce 2FA, including using Microsoft Authenticator. However, managing users and configuring authentication policies can be cumbersome. Thinfinity, on the other hand, provides an intuitive interface for configuring 2FA, with the flexibility to integrate with any of the major identity providers.

Sophos SSL VPN 2FA: Sophos also supports 2FA in its SSL VPN solution, typically using Google Authenticator. Thinfinity WAG, however, provides a more comprehensive and adaptable solution for secure remote access by leveraging RBAC, clientless access, and full integration with a variety of identity providers.

Fortinet SSL VPN 2FA with Google Authenticator: Similar to Sophos, Fortinet supports 2FA using Google Authenticator. Thinfinity WAG not only offers compatibility with Google Authenticator but also integrates with other identity providers, enhancing flexibility for organizations looking to enforce robust authentication policies across multiple platforms.

How to Implement 2FA with Thinfinity’s SSL VPN

Setting up 2FA for internal web resources with Thinfinity is a straightforward process:

Deploy Thinfinity Gateway and Broker: Install the Thinfinity Gateway and Secondary Broker within your network. This setup allows secure access to intranet applications without opening inbound ports.

Integrate with Identity Providers: Connect Thinfinity to your identity provider (such as Okta or Azure AD). This integration uses existing user directories and 2FA settings to make the authentication process seamless.

Set Up RBAC Policies: Define user roles and permissions to ensure that each user has access only to the resources they need. This supports the principle of least privilege.

Enable 2FA: Enforce 2FA policies from your identity provider to ensure that every user must authenticate using a second factor when accessing internal applications.

Conclusion: Strengthen Security with 2FA for Internal Resources using SSL VPN

Enforcing 2FA for intranet applications and internal web resources is essential for organizations aiming to improve security and reduce the risk of unauthorized access. Thinfinity’s RBAC capabilities and integration with leading identity providers make it easy to enforce 2FA effectively.

By combining 2FA, RBAC, and clientless access through SSL VPN, Thinfinity offers a powerful solution for secure remote access to internal applications. Organizations can be confident that only authorized users are accessing sensitive data and that these users are properly authenticated each time.

About Cybele Software Inc.
We help organizations extend the life and value of their software. Whether they are looking to improve and empower remote work or turn their business-critical legacy apps into modern SaaS, our software enables customers to focus on what’s most important: expanding and evolving their business.


Cross-Site Request Forgery Cheat Sheet

“Aren’t you a little short for a Stormtrooper?” In this iconic Star Wars moment, Princess Leia wryly responds to Luke Skywalker, who has disguised himself as one of her Stormtrooper captors and is using borrowed authentication credentials to open her cell.

 

In other words, Star Wars acts as an analogy for a cross-site request forgery (CSRF) attack. In a CSRF attack, malicious actors use social engineering so that end-users will give them a way to “hide” in their authenticated session. Disguised as the victim, the attackers can make changes and engage in transactions based on the account’s permissions.

 

With a cross-site request forgery cheat sheet, you can learn the basic principles underlying these attacks and some best mitigation practices.

What is Cross-Site Request Forgery (CSRF)?

A cross-site request forgery (CSRF) attack involves inheriting the victim’s identity and privileges so that the attacker can perform actions within the site. Typically, browser requests include credential information, like a user’s:

  • Session cookie
  • IP address
  • Windows domain credentials

 

After a user authenticates into the site, the attackers target functions that allow them to make changes, like:

  • Changing an email address
  • Creating a new password
  • Making a purchase
  • Transferring funds
  • Elevating privileges

 

The site treats these forged, authenticated requests as legitimate and authorized. The attacks focus on making changes within the site because the response to a forged request, including any data it returns, goes to the victim’s browser rather than to the attacker.

 

CSRF attacks can also be called:

  • XSRF
  • Sea Surf attacks
  • Session Riding
  • Cross-Site Reference Forgery
  • Hostile Linking

 

Three Types of CSRF Attacks

Malicious actors can deploy three types of CSRF attacks.

Login CSRF Attack

In a login CSRF attack, malicious actors:

  • Trick the victim into logging into an account the attacker controls
  • Wait while the victim adds personal data to the account
  • Log back into the account to collect the data and the victim’s activity history

 

Stored CSRF Flaws

Attackers can store an attack on a vulnerable site by abusing fields that accept HTML, embedding the forged request in an:

  • IMG tag
  • IFRAME tag

This increases the damage of the attack for two reasons:

  • Victims may “trust” the compromised site.
  • Victims may already be authenticated into the site.

 

Client-side CSRF

The client-side CSRF attack manipulates the requests or parameters generated by a client-side JavaScript program, sending a forged request that tricks the target site. Because these attacks exploit input validation flaws in the client-side code, the server has no way to determine whether the request was intentional.

How does a CSRF attack work?

At a high level, attackers do two things:

  • Create the malicious code
  • Use social engineering to trick the victim

 

CSRF attacks rely on:

  • Web browsers handling session-related information
  • Attackers’ knowledge of web application URLs, requests, or functionality
  • Application session management only using browser information
  • HTML tags that provide immediate HTTP[S] resource access

 

By clicking the malicious URL or script, the victim sets up the attacker’s ability to exploit:

  • GET requests: The browser submits the unauthorized request.
  • POST requests: The victim clicking a link or submit button executes the action.
  • HTTP methods: APIs using PUT or DELETE could have requests embedded into an exploit page, but browsers’ same-origin policy restrictions protect against these unless the website explicitly allows such requests.
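As an illustration of the GET case above, an attacker’s page needs nothing more than a zero-size image tag pointing at a state-changing URL; the browser fetches it automatically and attaches the victim’s session cookie. The endpoint and parameters below are hypothetical.

```python
# Hypothetical vulnerable endpoint that changes state via GET.
vulnerable_url = "https://bank.example/transfer?to=attacker&amount=1000"

# The attacker embeds the URL in an invisible image on a page they control;
# simply rendering the page fires the forged, cookie-bearing request.
exploit_page = (
    "<html><body>"
    f'<img src="{vulnerable_url}" width="0" height="0" alt="">'
    "</body></html>"
)
print(exploit_page)
```

This is why state-changing operations should never be exposed over GET in the first place.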

 

How is Cross-Site Request Forgery Different from Cross-Site Scripting (XSS)?

 

These attacks exploit different aspects of web interactions:

  • Cross-Site Request Forgery: leverages the user’s identity to take state-changing actions without the victim’s consent
  • Cross-site scripting: injects malicious code into web pages to manipulate user input and access sensitive data

 

Best Practices for Mitigating CSRF Attack Risk

A successful CSRF attack exploits specific application vulnerabilities and a user’s privileges. Following some best practices, you can mitigate these risks.

 

Use Synchronizer Token Patterns

The synchronizer token pattern is the most effective mitigation, and many frameworks include CSRF protection by default, so you may not have to build it yourself. The server-side-generated CSRF tokens should be:

  • Unique per user per session
  • Secret
  • Unpredictable

 

The server-side component verifies the token’s existence and validity by comparing it to the token stored in the user session; the site should reject any request without a valid token.

 

The mitigation uses per-session tokens because they offer the end-user a better experience. A per-request token would be more secure because it narrows the time frame in which a stolen token can be used; however, the site would then need to generate a new token for every user interaction.
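A minimal sketch of the per-session variant using Python’s standard library; the in-memory store and function names are illustrative, not any specific framework’s API:

```python
import hmac
import secrets

# Illustrative in-memory session store; a real application would use its
# framework's session mechanism.
_session_tokens = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate a unique, secret, unpredictable per-session token."""
    token = secrets.token_urlsafe(32)  # cryptographically random
    _session_tokens[session_id] = token
    return token

def verify_csrf_token(session_id: str, submitted) -> bool:
    """Reject the request unless the submitted token matches the session's."""
    expected = _session_tokens.get(session_id)
    if expected is None or submitted is None:
        return False
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted)

# The server embeds the token in every form and checks it on submission:
token = issue_csrf_token("session-abc")
assert verify_csrf_token("session-abc", token)         # legitimate submit
assert not verify_csrf_token("session-abc", "forged")  # forged request
```

A framework would perform the verification step in middleware so that no state-changing handler can be reached without it.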

Alternative: Signed Double-Submit Cookie Patterns

In cases where you can’t use the synchronizer token, you could substitute the easy-to-implement, stateless double-submit cookie pattern. With the signed double-submit cookie, the server signs the token with a secret key that only it knows, mitigating cookie-injection risks that would otherwise compromise the victim’s session.

 

While the Naive Double-Submit Cookie methods may be easier to implement and scale, attackers can bypass the protection more easily through:

  • Subdomain exploitation
  • Man-in-the-middle (MitM) attacks
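The signed variant can be sketched with an HMAC over a session-bound payload; all names below are illustrative, and the token would be delivered as both a cookie and a hidden form field:

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # known only to the server

def make_signed_token(session_id: str) -> str:
    """Bind a random value to the session and sign it with the server key."""
    payload = f"{session_id}!{secrets.token_hex(16)}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{sig}.{payload}"

def verify_signed_token(session_id: str, token: str) -> bool:
    """Without SERVER_SECRET, an attacker cannot forge a valid signature."""
    try:
        sig, payload = token.split(".", 1)
    except ValueError:
        return False  # malformed token
    if payload.partition("!")[0] != session_id:
        return False  # token was minted for a different session
    expected = hmac.new(SERVER_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the token is bound to the session and signed, a cookie planted from an attacker-controlled subdomain fails verification, which is what the naive double-submit pattern cannot guarantee.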

 

Disallow Simple Requests

Simple requests are cross-origin HTTP requests that the browser sends directly to the target service without a prior CORS preflight check. If the site uses <form> tags that allow users to submit data, the application should include additional protections, such as:

  • Ensuring servers or APIs do not accept text/plain content types
  • Requiring custom request headers for AJAX/API calls, which avoids the usability issues that a double-submit cookie would create
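Both protections above can be sketched as a simple server-side check; the header names follow common convention, and the function itself is illustrative:

```python
def accept_api_request(headers: dict) -> bool:
    """Reject simple-request content types and require a custom header.

    A cross-origin caller cannot add X-Requested-With without triggering
    a CORS preflight, which the browser blocks unless the server opts in.
    """
    content_type = headers.get("Content-Type", "")
    if content_type.startswith("text/plain"):
        return False  # do not accept text/plain bodies
    return headers.get("X-Requested-With") == "XMLHttpRequest"

# A same-origin AJAX call that sets the header is accepted:
assert accept_api_request(
    {"Content-Type": "application/json",
     "X-Requested-With": "XMLHttpRequest"}
)
# A forged simple request without the custom header is rejected:
assert not accept_api_request({"Content-Type": "text/plain"})
```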

 

Implement Client-side CSRF Mitigations

Since client-side CSRF attacks bypass traditional mitigations, you should implement the following:

  • Independent requests: Ensure attacker-controllable inputs cannot generate asynchronous requests
  • Input validation: Ensure that input formats and request parameter values only work for non-state-changing operations
  • Predefined Request Data: Store safe request data in the JavaScript code
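The predefined-request-data idea can be sketched as an allow-list lookup, so that attacker-controllable input never becomes a request URL; the endpoint names below are hypothetical:

```python
# Hypothetical allow-list of predefined request targets: client code maps
# a symbolic name to a fixed path, so user-supplied input can never become
# a request URL.
SAFE_ENDPOINTS = {
    "profile": "/api/profile",
    "settings": "/api/settings",
}

def resolve_endpoint(name: str) -> str:
    """Return a predefined path, rejecting anything outside the allow-list."""
    try:
        return SAFE_ENDPOINTS[name]
    except KeyError:
        raise ValueError(f"unknown endpoint: {name!r}")

assert resolve_endpoint("profile") == "/api/profile"

try:
    resolve_endpoint("https://evil.example.net/steal")
except ValueError:
    pass  # attacker-supplied target is rejected
```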

 

SameSite (Cookie Attribute)

The browser uses this attribute to decide whether to send cookies with cross-site requests. It has three potential values:

  • Strict: prevents the browser from sending the cookie in any cross-site browsing context, even when the user follows a regular link
  • Lax: sends the cookie on top-level navigation, so a logged-in session survives following an external link, but blocks high-risk request methods such as cross-site POST
  • None: sends the cookie in all cross-site contexts, and requires the Secure attribute
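Using Python’s standard http.cookies module, a minimal sketch of emitting the attribute on a session cookie; the cookie name and value are illustrative, and Secure and HttpOnly are included because SameSite is a defense in depth, not a standalone protection:

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header for an illustrative session cookie.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["samesite"] = "Lax"  # or "Strict" for maximum protection
cookie["sessionid"]["secure"] = True     # HTTPS only
cookie["sessionid"]["httponly"] = True   # not readable from JavaScript

header = cookie.output(header="Set-Cookie:")
print(header)
```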

 

Verify Origin with Standard Headers

This method examines the HTTP request header value for:

  • Source origin: where it comes from
  • Target origin: where it’s going to

 

When these match, the site accepts the request as legitimate. If they do not match, it discards the request.
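The check above can be sketched as a comparison of the request’s Origin (or, as a fallback, Referer) header against the site’s own origin; the target origin below is hypothetical:

```python
from urllib.parse import urlsplit

TARGET_ORIGIN = "https://app.example.com"  # hypothetical target origin

def origin_of(url: str) -> str:
    """Reduce a URL (e.g. a Referer value) to its scheme://host origin."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

def request_allowed(headers: dict) -> bool:
    """Accept the request only when source origin matches target origin."""
    source = headers.get("Origin") or headers.get("Referer")
    if not source:
        return False  # fail closed when neither header is present
    return origin_of(source) == TARGET_ORIGIN

assert request_allowed({"Origin": "https://app.example.com"})
assert not request_allowed({"Referer": "https://evil.example.net/page"})
```

Failing closed when both headers are absent is a design choice; some deployments instead log and allow, trading safety for compatibility with privacy proxies that strip these headers.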

Involve the User

Involving users means requiring them to take an explicit action before a sensitive operation completes, mitigating the risk of unauthorized operations. Some examples include using:

  • Re-authentication mechanisms
  • One-time tokens

 

CAPTCHA requires user interaction, but it does not always differentiate user sessions. Although it would make attacker success more difficult, it isn’t a suggested mitigation technique.

 

Graylog Security: Mitigating CSRF Risk with High Fidelity Alerts

Graylog Security provides prebuilt content that maps security events to MITRE ATT&CK so organizations can enhance their security posture. By combining Sigma rules and MITRE ATT&CK, you can create high-fidelity alerting rules that enable robust threat detection, lightning-fast investigations, and streamlined threat hunting. For example, with Graylog’s security analytics, you can monitor user activity for anomalous behavior indicating a potential security incident. By mapping this activity to the MITRE ATT&CK Framework, you can detect and investigate adversary attempts at using Valid Accounts to gain Initial Access, mitigating risk by isolating compromised accounts earlier in the attack path and reducing impact.

Graylog’s risk scoring capabilities enable you to streamline your threat detection and incident response (TDIR) by aggregating and correlating the severity of the log message and event definitions with the associated asset, reducing alert fatigue and allowing security teams to focus on high-value, high-risk issues.

 

About Graylog 
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.
