Graylog releases new versions quite frequently; version 6.3.1 shipped on July 4. The major update of the first half of this year arrived around the RSAC conference at the end of April, when the company announced version 6.2. Its headline features are all available in the paid editions (Graylog Enterprise and Graylog Security). For example, it extends the data lake and data routing capabilities added in last fall's release: before retrieving a data set, teams can preview whether the data they need is in the data lake, and then use Selective Data Retrieval to access a small scope of log data on demand, greatly reducing license consumption.
In addition to the products above, which focus on security event log management, Graylog also offers an API security solution called Graylog API Security. By capturing API traffic, it discovers which APIs are being accessed and whether they are being used by legitimate users, malicious attackers, partners, or insiders, and it applies built-in and custom signatures to automatically detect and alert on these interactions, determining whether a network attack or data exfiltration is underway.
Graylog API Security grew out of the July 2023 acquisition of security startup Resurface. Graylog announced API Security 3.6 in January 2024 and released a free edition in February; the free edition is limited to single-node deployment and 16 GB of stored data (once that capacity is exceeded, older data is deleted to make room for incoming data).
Product information
Graylog ●Distributor: Version 2 Digital (Taiwan) ●Suggested price: Open edition free; Enterprise from US$15,000 per year; Security from US$18,000 per year; API Security from US$18,000 per year ●Operating system requirements: Linux (Ubuntu 20.04/22.04, RHEL 7/8/9, SUSE Linux Enterprise Server 12/15, Debian 10/11/12) or Docker ●Core components: Graylog, Data Node, MongoDB (5.0.7 to 7.x); optionally OpenSearch (1.1.x to 2.15.x)
Any company that processes payments knows the pain of an audit under the Payment Card Industry Data Security Standard (PCI DSS). Although the original PCI DSS went through various updates, the Payment Card Industry Security Standards Council (PCI SSC) gathered feedback from the global payments industry to address evolving security needs. The March 2022 release of PCI DSS 4.0 incorporated changes intended to promote security as a continuous process while preserving the flexibility organizations need to achieve security objectives in ways that fit their environments.
To give companies time to address the new requirements, audits will begin incorporating the majority of the changes on March 31, 2025. However, some requirements apply to audits immediately.
Why did the Payment Card Industry Security Standards Council (PCI SSC) update the standard?
At a high level, PCI DSS 4.0 responds to changes in IT infrastructures arising from digital transformation and Software-as-a-Service (SaaS) applications. According to PCI SSC’s press release, changes will enhance validation methods and procedures.
When considering PCI DSS 4.0 scope, organizations need to implement controls around the following types of account data:
Cardholder Data: Primary Account Number (PAN), Cardholder Name, Expiration Date, Service Code
Sensitive Authentication Data (SAD): Full track data (magnetic stripe or chip equivalent), card verification code, Personal Identification Numbers (PINs)/PIN blocks.
To get a sense of how the PCI SSC shifted focus when drafting PCI DSS 4.0, you can take a look at how the organization renamed some of the Requirements:
PCI Categories: PCI 3.2.1 vs. PCI 4.0

Build and Maintain a Secure Network and Systems
PCI 3.2.1:
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters
PCI 4.0:
1. Install and maintain network security controls
2. Apply secure configurations to all system components

Protect Cardholder Data (renamed Protect Account Data in 4.0)
PCI 3.2.1:
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks
PCI 4.0:
3. Protect stored account data
4. Protect cardholder data with strong cryptography during transmission over open, public networks

Maintain a Vulnerability Management Program
PCI 3.2.1:
5. Protect all systems against malware and regularly update anti-virus software or programs
6. Develop and maintain secure systems and applications
PCI 4.0:
5. Protect all systems and networks from malicious software
6. Develop and maintain secure systems and software

Implement Strong Access Control Measures
PCI 3.2.1:
7. Restrict access to cardholder data by business need to know
8. Identify and authenticate access to system components
9. Restrict physical access to cardholder data
PCI 4.0:
7. Restrict access to system components and cardholder data by business need to know
8. Identify users and authenticate access to system components
9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
PCI 3.2.1:
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes
PCI 4.0:
10. Log and monitor all access to system components and cardholder data
11. Test security of systems and networks regularly

Maintain an Information Security Policy
PCI 3.2.1:
12. Maintain a policy that addresses information security for all personnel
PCI 4.0:
12. Support information security with organizational policies and programs
While PCI SSC expanded the requirements to address larger security and privacy issues, many of them remain fundamentally the same as before. According to the Summary of Changes, most updates fall into one of the following categories:
Evolving requirement: changes that align with emerging threats and technologies or changes in the industry
Clarification or guidance: updated wording, explanation, definition, additional guidance, and/or instruction to improve people’s understanding
Structure or format: content reorganization, like combining, separating, or renumbering requirements
For organizations that have previously met PCI DSS compliance objectives, those changes place little additional burden.
However, PCI DSS 4.0 does include changes to Requirements that organizations should consider.
What new Requirements are immediately in effect for all entities?
While additions are effective beginning March 31, 2025, three primary issues affect current PCI audits.
Holistically, PCI DSS now includes the following sub-requirement across Requirements 2 through 11:
Roles and responsibilities for performing activities for each Requirement are documented, assigned, and understood.
Additionally, under Requirement 12, all entities should be:
Performing a targeted risk analysis for each PCI DSS requirement that the entity meets with the documented, customized approach
Documenting and confirming PCI DSS scope every 12 months
What updates are effective March 31, 2025 for all entities?
As the effective date for all requirements draws closer, organizations should consider the major changes that impact their business, security, and privacy operations.
Requirement 3
PCI DSS 4.0 incorporates the following new requirements:
Minimizing the SAD stored prior to completion of authorization and retaining it according to data retention and disposal policies, procedures, and processes
Encrypting all SAD stored electronically
Implementing technical controls to prevent copying or relocating PAN when using remote-access technologies, except where explicitly authorized
Rendering PAN unreadable using keyed cryptographic hashes where hashing is used
Implementing disk-level or partition-level encryption to make PAN unreadable
Requirement 4
PCI DSS 4.0 incorporates the following new requirements:
Confirming that certificates safeguarding PAN during transmission across open, public networks are valid, not expired or revoked
Maintaining an inventory of trusted keys and certificates
Requirement 5
PCI DSS 4.0 incorporates the following new requirements:
Performing a targeted risk analysis to determine how often the organization evaluates whether system components pose a malware risk
Performing targeted risk analysis to determine how often to scan for malware
Performing anti-malware scans when using removable electronic media
Implementing phishing attack detection and protection mechanisms
Requirement 6
PCI DSS 4.0 incorporates the following new requirements:
Maintaining an inventory of bespoke and custom software for vulnerability and patch management purposes
Deploying automated technologies for public-facing web applications to continuously detect and prevent web-based attacks
Managing payment page scripts loaded and executed in consumers’ browsers
Requirement 7
PCI DSS 4.0 incorporates the following new requirements:
Reviewing all user accounts and related access privileges
Assigning and managing all application and system accounts and related access privileges
Reviewing all application and system accounts and their access privileges
Requirement 8
PCI DSS 4.0 incorporates the following new requirements:
Implementing a minimum complexity level for passwords used as an authentication factor
Implementing multi-factor authentication (MFA) for all CDE access
Ensuring MFA is implemented appropriately
Managing interactive login for system or application accounts
Using passwords/passphrases for application and system accounts
Protecting passwords/passphrases for application and system accounts against misuse
Requirement 9
PCI DSS 4.0 incorporates the following new requirements:
Performing targeted risk analysis to determine how often POI devices should be inspected
Requirement 10
PCI DSS 4.0 incorporates the following new requirements:
Automating the review of audit logs
Performing a targeted risk analysis to determine how often to review system and component logs
Detecting, receiving alerts for, and addressing critical security control system failures
Promptly responding to critical security control system failures
Requirement 11
PCI DSS 4.0 incorporates the following new requirements:
Managing vulnerabilities not ranked as high-risk or critical
Performing internal vulnerability scans using authenticated scanning
Deploying a change-and-tamper-detection mechanism for payment pages
Requirement 12
PCI DSS 4.0 incorporates the following new requirements:
Documenting the targeted risk analysis performed for each PCI DSS requirement that allows flexibility in how frequently it is performed
Documenting and reviewing cryptographic cipher suites and protocols in use
Reviewing hardware and software
Reviewing security awareness program at least once every 12 months and updating as necessary
Including in training the threats to the cardholder data environment, like phishing and related attacks and social engineering
Including acceptable technology use in training
Performing targeted risk analysis to determine how often to provide training
Including in incident response plan the alerts from change-and-tamper detection mechanism for payment pages
Implementing incident response procedures and initiating them upon detecting stored PAN anywhere it is not expected
What updates are applicable to service providers only?
In some cases, new Requirements apply only to issuers and companies supporting issuing services and storing sensitive authentication data, while others apply only to service providers. Only one of these took effect immediately: the update to Requirement 12:
TPSPs support customers’ requests for PCI DSS compliance status and information about the requirements for which they are responsible
Effective March 31, 2025
Service providers should be aware of the following updates:
Requirement 3:
Encrypting SAD
Documenting the cryptographic architecture, including how the organization prevents the use of the same cryptographic keys in production and test environments
Requirement 8
Requiring customers to change passwords at least every 90 days or dynamically assessing security posture when not using additional authentication factors
Requirement 11
Multi-tenant service providers supporting customers for external penetration testing
Detecting, receiving alerts for, preventing, and addressing covert malware communication channels using intrusion detection and/or intrusion prevention techniques
Requirement 12
Documenting and confirming PCI DSS scope every 6 months or upon significant changes
Documenting, reviewing, and communicating to executive management the impact that significant organizational changes have on PCI DSS scope
Graylog Security and API Security: Monitoring, Detection, and Incident Response for PCI DSS 4.0
Graylog Security provides the SIEM capabilities organizations need to implement Threat Detection and Incident Response (TDIR) activities and compliance reporting. Graylog Security’s security analytics and anomaly detection functionalities enable you to aggregate, normalize, correlate, and analyze activities across a complex environment for visibility into and high-fidelity alerts for critical security monitoring and compliance issues like:
Access monitoring, including malicious and accidental insider threats
By incorporating Graylog API Security into your PCI DSS monitoring and incident response planning, you enhance your security and compliance program by mitigating risks and detecting incidents associated with Application Programming Interfaces (APIs). With Graylog’s end-to-end API threat monitoring, detection, and response solution, you can augment the outside-in monitoring from Web Application Firewalls (WAF) and API gateways with API discovery, request and response capture, automated risk assessment, and actionable remediation activities.
About Graylog
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.
About Version 2 Digital
Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.
Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.
If you grew up in the 80s and 90s, you probably remember your most beloved Trapper Keeper. The colorful binder contained all the folders, dividers, and lined paper to keep your middle school and high school self as organized as possible. Parsing JSON, a lightweight data format, is the modern, IT environment version of that colorful – perhaps even Lisa Frank themed – childhood favorite.
Parsing JSON involves transforming structured information into a format that can be used within various programming languages. This process can range from making JSON human-readable to extracting specific data points for processing. When you know how to parse JSON, you can improve data management, application performance, and security with structured data that allows for aggregation, correlation, and analysis.
What is JSON?
JSON, or JavaScript Object Notation, is a widely-used, human-readable, and machine-readable data exchange format. JSON structures data using text, representing it through key-value pairs, arrays, and nested elements, enabling data transfers between servers and web applications that use Application Programming Interfaces (APIs).
JSON has become a data-serialization standard that many programming languages support, streamlining programmers’ ability to integrate and manipulate the data. Since JSON represents complex objects with a clear structure while remaining readable, it helps maintain clarity across nested and intricate data models.
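For instance, a tiny (hypothetical) JSON document showing key-value pairs, an array, and a nested object:

{
  "service": "checkout",
  "port": 8080,
  "tags": ["payments", "production"],
  "owner": { "team": "platform", "on_call": true }
}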
Some of JSON’s key attributes include:
Requires minimal memory and processing power
Easy to read
Supports key-value pairs and arrays
Works with various programming languages
Offers standard format for data serialization and transmission
How to make JSON readable?
Making JSON data more readable enables you to understand and debug complex objects. Some ways to make JSON more readable include (a short example of pretty-printing follows the list):
Pretty-Print JSON: Pretty-printing JSON formats the input string with indentation and line breaks to make hierarchical structures and relationships between object values clearer.
Delete Unnecessary Line Breaks: Removing redundant line breaks while converting JSON into a single-line string literal optimizes storage and ensures consistent string representation.
Use Tools and IDEs: Tools and extensions in development environments that auto-format JSON data can offer an isolated view to better visualize complex JSON structures.
Reviver Function in JavaScript: Passing a reviver function to the parse() method modifies object values during conversion and shapes data according to specific needs.
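For instance, a minimal Python sketch of pretty-printing (the compact string below is hypothetical example data):

import json

# A compact, single-line JSON string (hypothetical example data).
raw = '{"name":"Jane Doe","skills":["JavaScript","Python"]}'

# json.loads parses the string; json.dumps re-serializes it with
# indentation and sorted keys so the structure is easier to read.
print(json.dumps(json.loads(raw), indent=2, sort_keys=True))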
What does it mean to parse JSON?
JSONs are typically read as a string, so parsing JSON is the process of converting the string into an object to interpret the data in a programming language. For example, in JSON, a person’s profile might look like this:
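{
  "name": "Jane Doe",
  "age": 30,
  "isDeveloper": true,
  "skills": ["JavaScript", "Python", "HTML", "CSS"],
  "projects": [
    { "name": "Weather App", "completed": true },
    { "name": "E-commerce Website", "completed": false }
  ]
}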
When you parse this JSON data in JavaScript, it might look like this:
Name: Jane Doe
Age: 30
Is Developer: true
Skills: JavaScript, Python, HTML, CSS
Project 1: Weather App, Completed: true
Project 2: E-commerce Website, Completed: false
Even though the information looks the same, it’s easier to read because you removed all of the machine-readable formatting.
Partial JSON parsing
Partial JSON parsing is especially advantageous in environments like Python when not all fields in the data are available or necessary. With this flexible input handling, you can give fields default values to manage missing data without causing errors.
For example, if you only want to know the developer’s name, skills, and completed projects, partial JSON parsing allows you to extract the information you want and focus on specific fields.
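A minimal Python sketch of that idea, reusing the hypothetical profile above and letting .get() supply defaults for any missing fields:

import json

# raw_json: the profile from the example above, received as a string.
raw_json = '{"name": "Jane Doe", "skills": ["JavaScript", "Python", "HTML", "CSS"], "projects": [{"name": "Weather App", "completed": true}, {"name": "E-commerce Website", "completed": false}]}'
profile = json.loads(raw_json)

summary = {
    # .get() returns a default instead of raising KeyError when a field is missing.
    "name": profile.get("name", "unknown"),
    "skills": profile.get("skills", []),
    "completed_projects": [
        p["name"] for p in profile.get("projects", []) if p.get("completed", False)
    ],
}
print(summary)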
Why is JSON parsing important?
Parsing JSON transforms the JSON data so that you can handle complex objects and structured data. When you parse JSON, you can serialize and deserialize data to improve data interchange, like for web applications.
JSON parsing enables:
Data Interchange: Allows for easy serialization and deserialization of data across various systems.
Dynamic Parsing: Streamlines integration for web-based applications because JSON derives from JavaScript object syntax
Security: Reduces injection attack risks by ensuring data conforms to expected format.
Customization: Transforms raw data into structured, usable objects that can be programmatically manipulated, filtered, and modified according to specific needs.
How to parse a JSON file
Parsing a JSON file involves transforming JSON data from a textual format into a structured format that can be manipulated within a programming environment. Modern programming languages provide built-in methods or libraries for parsing JSON data so you can easily integrate and manipulate data effectively. Once parsed, JSON data can be represented as objects or arrays, allowing operations like sorting or mapping.
Parsing JSON in JavaScript
Most people use the JSON.parse() method for converting string form JSON data into JavaScript objects since it can handle simple and complex objects. Additionally, you may choose to implement the reviver function to manage custom data conversions.
Parsing JSON in PHP
PHP provides the json_decode function so you can translate JSON strings into arrays or objects. Additionally, PHP provides functions that validate the JSON syntax to prevent exceptions that could interrupt execution.
Parsing JSON in Python
Parsing JSON in Python typically means converting JSON strings into Python dictionaries with the json module. This module provides essential functions like loads() for strings and load() for file objects, which are helpful for managing JSON-formatted API data.
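A minimal sketch of both functions (the settings.json file below is created only for the demonstration):

import json

# loads() parses a JSON document held in a string.
api_response = '{"status": "ok", "items": [1, 2, 3]}'
data = json.loads(api_response)
print(data["items"])

# load() parses a JSON document directly from a file object.
# The file is written here only so the example runs end to end.
with open("settings.json", "w") as fh:
    fh.write('{"debug": false}')
with open("settings.json") as fh:
    settings = json.load(fh)
print(settings["debug"])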
Parsing JSON in Java
Developers typically use one of the following libraries to parse JSON in Java:
Jackson: efficient for handling large files and comes with an extensive feature set
Gson: minimal configuration and setup but slower for large datasets
json: built-in package providing a set of classes and methods
JSON Logging: Best Practices
Log files often have complex, unstructured text-based formatting. When you convert them to JSON, you can store and search your logs more easily. Over time, JSON has become a standard log format because it creates structured records that let you extract the fields that matter and normalize them against the other logs your environment generates. Additionally, as an application’s log data evolves, JSON’s flexibility makes it easier to add or remove fields. Since many programming languages either include structured JSON logging in their standard libraries or offer third-party libraries for it, adopting the format requires little custom tooling.
Log from the Start
Making sure that your application generates logs is critical from the very beginning. Logs enable you to debug the application or detect security vulnerabilities. By emitting JSON logs from the start, you make your testing easier and build security monitoring into the application.
Configure Dependencies
If your dependencies can also generate JSON logs, you should consider configuring them to do so, because the structured format makes parsing and analyzing those logs (for example, database logs) easier.
Format the Schema
Since your JSON logs should be readable and parseable, you want to keep them as compact and streamlined as possible. Some best practices include:
Focusing on objects that need to be read
Flattening structures by concatenating keys with a separator (see the sketch after this list)
Using a uniform data type in each field
Parsing exception stack traces into attribute hierarchies
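As a sketch of the flattening idea (the field names here are hypothetical), a nested record can be collapsed into dotted keys before it is emitted:

def flatten(record, parent_key="", sep="."):
    """Collapse nested dictionaries into single-level keys joined by `sep`."""
    flat = {}
    for key, value in record.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat

event = {"http": {"method": "GET", "status": 200}, "user": {"id": "u-123"}}
print(flatten(event))
# {'http.method': 'GET', 'http.status': 200, 'user.id': 'u-123'}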
Incorporate Context
JSON enables you to include information about what you’re logging for insight into an event’s immediate context. Some context that helps correlate issues across your IT environment (illustrated in the sample entry after this list) includes:
User identifiers
Session identifiers
Error messages
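For example, a context-rich JSON log entry might look like this (all field names and values are illustrative):

{
  "timestamp": "2024-05-14T09:21:07Z",
  "level": "ERROR",
  "message": "Payment request failed",
  "user_id": "u-123",
  "session_id": "3f9c2a",
  "error": "connection timed out"
}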
Graylog: Correlating and Analyzing Logs for Operations and Security
With Graylog’s parsing JSON functions, you can parse out useful information, like destination address, response bytes, and other data that helps monitor security incidents or answer IT questions. After extracting the data you want, you can use the Graylog Extended Log Format (GELF) to normalize and structure all log data. Graylog’s purpose-built solution provides lightning-fast search capabilities and flexible integrations that allow your team to collaborate more efficiently.
Graylog Operations provides a cost-efficient solution for IT ops so that organizations can implement robust infrastructure monitoring while staying within budget. With our solution, IT ops can analyze historical data regularly to identify potential slowdowns or system failures while creating alerts that help anticipate issues.
With Graylog’s security analytics and anomaly detection capabilities, you get the cybersecurity platform you need without the complexity that makes your team’s job harder. With our powerful, lightning-fast features and intuitive user interface, you can lower your labor costs while reducing alert fatigue and getting the answers you need – quickly.
In today’s digital age, the internet has become an integral part of our daily lives. From working remotely to streaming movies, we rely on the internet for almost everything. However, slow internet speeds can be frustrating and can significantly affect our productivity and entertainment. Despite advancements in technology, many people continue to face challenges with their internet speeds, hindering their ability to fully utilize the benefits of the internet. In this blog, we will explore how Dan McDowell, Professional Services Engineer, decided to take matters into his own hands and gather the data over time to present to his ISP.
Over the course of a few months, I noticed slower and slower internet connectivity. Complaints from neighbors (we are all on the same ISP) led me to take some action. A few phone calls with “mixed” results were not good enough for me, so I knew what I needed: metrics!
Why Metrics?
Showing data is, without a doubt, one of the most powerful ways to prove a statement. How often do you hear one of the following when you call in for support:
Did you unplug it and plug it back in?
It’s probably an issue with your router
Oh, wireless must be to blame
Test it directly connected to your computer!
Nothing is wrong on our end, must be yours…
In my scenario, I was able to prove without a doubt that this wasn’t a “me” problem. Using data I gathered by running this script every 30 minutes over a few weeks’ time, I was able to prove:
This wasn’t an issue with my router
There was consistent connectivity slowness at the same times every single day of the week, and outside of those times my connectivity was near the offered maximums.
Something was wrong on their end
Clearly, they were not spec’d to handle the increase in traffic when people stop working and start streaming
I used their OWN speed test server for all my testing. It was only one hop away.
This was all the proof I needed.
End Result?
I sent in a few screenshots of my dashboards, highlighting the clear spikes during peak usage periods. I received a phone call not even 10 minutes later from the ISP. They replaced our local OLT and increased the pipe to their co-lo. What a massive increase in average performance!
Ookla Speedtest has a CLI tool?!
Yup. It can be configured to use the same speedtest server (my local ISP runs one) each run, meaning results are valid and repeatable. Best of all, it can output JSON, which I can convert to GELF with ease! In short, I set up a cron job to run my speed test script every 30 minutes on my Graylog server and output the results, converting the JSON message into GELF, which NetCat sends to my GELF input.
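The speedtest.sh script itself isn't shown here, but the idea looks roughly like the following minimal Python sketch. It assumes the Ookla CLI's --format=json output (field names can vary by CLI version) and a GELF TCP input listening on localhost:12201, matching the manual example later in this post:

import json
import socket
import subprocess

GRAYLOG_HOST = "localhost"   # GELF TCP input host (assumption: same as the manual example)
GRAYLOG_PORT = 12201         # GELF TCP input port

# Run the Ookla CLI and capture its JSON output; the field names used below
# are assumptions based on that JSON format and may differ between versions.
result = subprocess.run(
    ["speedtest", "--format=json"],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)

gelf_message = {
    "version": "1.1",
    "host": socket.gethostname(),
    "short_message": "speedtest result",
    # Bandwidth is reported in bytes/sec; convert to Mbps for readability.
    "_download_mbps": data["download"]["bandwidth"] * 8 / 1_000_000,
    "_upload_mbps": data["upload"]["bandwidth"] * 8 / 1_000_000,
    "_ping_ms": data["ping"]["latency"],
    "_server_name": data["server"]["name"],
}

# GELF TCP inputs expect null-byte-delimited JSON frames.
with socket.create_connection((GRAYLOG_HOST, GRAYLOG_PORT)) as sock:
    sock.sendall(json.dumps(gelf_message).encode("utf-8") + b"\0")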
Move the script to a common location and make it executable:
mkdir /scripts
mv speedtest.sh /scripts/
chmod +x /scripts/speedtest.sh
Getting Started
Login to your Graylog instance
Navigate to System → Content Packs
Click upload.
Browse to the downloaded location of the Graylog content pack and upload it to your instance
Install the content pack
This will install a Stream, pipeline, pipeline rule (routing to stream) and dashboard
Test out the script!
ssh / console to your linux system hosting Graylog/docker
Manually execute the script: /scripts/speedtest.sh localhost 12201
Script details: <path to script> <ip/dns/hostname> <port>
Check out the data in your Graylog
Navigate to Streams → Speed Tests
Useful data appears!
Navigate to Dashboards → ISP Speed Test
Check out the data!
Manually execute the script as much as you like. More data will appear the more you run it.
Automate the Script!
This is how I got the data to convince my ISP that something was actually wrong. Set up a cron job that runs every 30 minutes, and within a few days you should see some time-related patterns.
ssh or console to your linux system hosting the script / Graylog
Create a CRONTAB to run the script every 30 minutes
create crontab (this will be for the currently logged in user OR root if sudo su was used)
crontab -e
Set the script to run every 30 minutes (change as you like)
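For example, a crontab entry along these lines runs the script on the half hour, reusing the host and port from the manual test above (adjust both to match your GELF input):

*/30 * * * * /scripts/speedtest.sh localhost 12201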
That’s it! As long as the user the crontab was made for has permissions, the script will run every 30 minutes and the data will go to Graylog. The dashboard will continue to populate for you automatically.
Bonus Concept – Monitor Your Sites’ WAN Connection(s)
This same script could be used to monitor WAN connections at different sites. Without any extra fields, we could use the interface_externalIp or source fields provided by the speedtest CLI/sending host to filter by site location, add a pipeline rule that adds a field based on a lookup table, or add a single field to the speedtest GELF message (by changing the script slightly) to provide that in the original message. Use my dashboard to make a new dashboard with tabs for per-site views and a summary page! The possibilities are endless.
Most of all, go have fun!
For Star Trek fans, space may be the final frontier, but in security, discovering Application Programming Interfaces (APIs) could be the technology equivalent. In the iconic episode “The Trouble with Tribbles,” the legendary starship Enterprise discovers a space station that becomes overwhelmed by little fluffy, purring, rapidly reproducing creatures called “tribbles.” In a modern IT department, APIs can be viewed as the digital tribble overwhelming security teams.
As organizations build out their application ecosystems, the number of APIs integrated into their IT environments continues to expand. Organizations and security teams can become overwhelmed by the sheer number of these software “tribbles,” as undiscovered and unmanaged APIs create security blindspots.
API discovery is a critical component of any security program because every unknown API expands the organization’s attack surface.
What is API discovery?
API discovery is a manual or automated process that identifies, documents, and catalogs an organization’s APIs so that security teams can monitor application-to-application data transfers. To manage all of the APIs integrated into its ecosystem, an organization needs a comprehensive inventory that includes:
Internal APIs: interfaces between a company’s backend information and application functionality
External APIs: interfaces exposed over the internet to non-organizational stakeholders, like external developers, third-party vendors, and customers
API discovery enables organizations to identify and manage the following:
Shadow (“Rogue”) APIs: unchecked or unsupervised APIs
Deprecated (“Zombie”) APIs: unused yet operational APIs without the necessary security updates
What risks do undocumented and unmanaged APIs pose?
Threat actors can exploit vulnerabilities in these shadow and deprecated APIs, especially when the development and security teams have no way to monitor and secure them.
Unmanaged APIs can expose sensitive data, including information about:
Software interface: the two endpoints sharing data
Technical specifications: the way the endpoints share data
Function calls: verbs (GET, DELETE) and nouns (Data, Access) that indicate business logic
Why is API discovery important?
Discovering all your organization’s APIs enhances security by incorporating them into:
Risk assessments: enabling API vulnerability identification, prioritization, and remediation
Compliance: mitigate risks arising from accidental sensitive data exposures that lead to compliance violations, fines, and penalties
Vendor risk management: visibility into third-party security practices by understanding the services, applications, and environments that they can impact
Incident response: faster detection, investigation, and response times by understanding potential entry points, impacted services, and data leak paths
Policy enforcement: ensuring all internal and external APIs follow the company’s security policies and best practices
Training and awareness: providing appropriate educational resources for developers and IT staff
Beyond the security use case, API discovery provides these additional benefits:
Faster integrations by understanding available endpoints, methods, and data formats
Microservice architecture management by tracking services, health status, and interdependencies
Enhanced product innovation and value by understanding API capabilities and limitations
Increased revenue by understanding API usage
Using automation for API discovery
While developers can manually discover APIs, the process is expensive, inefficient, and risky. Manual API discovery processes are limited because they are:
Time-consuming: With the average organization integrating over 9,000 known APIs, manual processes for identifying unknown or unmanaged APIs can be overwhelming, even in a smaller environment.
Error-prone: Manually tracking all APIs, including undocumented ones and those embedded in code, can lead to incomplete discovery, outdated information, or incorrect documentation.
Automated tools make API discovery more comprehensive while reducing overall costs. Automated API discovery tools provide the following benefits:
Efficiency: Scanners can quickly identify APIs, enabling developers to focus on more important work.
Accurate, comprehensive inventory: API discovery tools can identify embedded and undocumented APIs, enhancing security and documentation.
Cost savings: Automation takes less time to scan for updated information, reducing maintenance costs.
What to look for in an API discovery tool
While different automated tools can help you discover the APIs across your environment, you should know the capabilities that you need and what to look for.
Continuous API Discovery
Developers can deliver new builds multiple times a day, continuously changing the API landscape and risk profile. For an accurate inventory and comprehensive visibility, you should look for a solution that:
Scans all API traffic at runtime
Categorizes API calls
Sorts incoming traffic into domain buckets
For example, when discovering APIs by domain, the solution includes cases where:
Domains are missing
Public or Private IP addresses are used
With the ability to identify shadow and deprecated APIs, the solution should give you a way to add domains to the:
Monitoring list so you can start tracking them in the system
Prohibited list so that the domain is never used
Vulnerability Identification
An API discovery solution that analyzes all traffic can also identify potential security vulnerabilities. When choosing a solution, you should consider whether it contains the following capabilities:
Captures unfiltered API request and response detail
Enhances details with runtime analysis
Creates an accessible datastore for attack detection
Identifies common threats and API failures aligned to OWASP and MITRE guidance
Provides automated remediation guidance with actionable solutions that enable teams to optimize critical metrics like Mean Time to Response (MTTR)
Risk Assessment and Scoring
Every identified API and vulnerability increases the organization’s risk. To appropriately mitigate risk arising from previously unidentified and unmanaged APIs, the solution should provide automated risk assessment and scoring. With visibility into the type of API and the high-risk areas that should be prioritized, Security and DevOps teams can focus on the most risky APIs first.
Graylog API Security: Continuous, Real-Time API Discovery
Graylog API Security is continuous API security, scanning all API traffic at runtime for active attacks and threats. Mapped to security and quality rules, Graylog API Security captures complete request and response details, creating a readily accessible datastore for attack detection, fast triage, and threat intelligence. With visibility inside the perimeter, organizations can detect attack traffic from valid users before it reaches their applications.
Graylog API Security captures details to immediately identify valid traffic from malicious actions, adding active API intelligence to your security stack. Think of it as a “security analyst in-a-box,” automating API security by detecting and alerting on zero-day attacks and threats. Our pre-configured signatures identify common threats and API failures and integrate with communication tools like Slack, Teams, Gchat, JIRA or via webhooks.
As cyber threats targeting critical infrastructure continue to evolve, the energy sector remains a prime target for malicious actors. Protecting the electric grid requires a strong regulatory framework and robust cybersecurity monitoring practices. In the United States, the Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corporation (NERC) play key roles in safeguarding the power system against cyber risks.
Compliance with the NERC Critical Infrastructure Protection (NERC CIP) standards provides a baseline for mitigating security risk, but organizations should implement security technologies that help them streamline these processes.
Who are FERC and NERC?
The Federal Energy Regulatory Commission (FERC) is the governmental agency that oversees the power grid’s reliability. FERC gained these powers under the Energy Policy Act of 2005, and as smart technologies spread across the energy industry, the Energy Independence and Security Act of 2007 (EISA) then called on FERC and the National Institute of Standards and Technology (NIST) to coordinate cybersecurity reliability standards that protect the industry.
However, to develop these reliability standards, FERC certified the North American Electric Reliability Corporation (NERC). Currently, NERC has thirteen published and enforceable Critical Infrastructure Protection (CIP) standards plus one more awaiting approval.
What are the NERC CIP requirements?
The cybersecurity Reliability Standards are broken out across separate CIP documents, each detailing the different requirements and controls for compliance.
CIP-002: BES Cyber System Categorization
This CIP creates “bright-line” criteria for how to categorize BES Cyber Systems based on impact that an outage would cause. The publication separates BES Cyber Systems into three general categories:
High Impact
Medium Impact
Low Impact
CIP-003-8: Security Management Controls
This publication, with its most recent iteration being enforceable in April 2026, requires Responsible Entities to create policies, procedures, and processes for high or medium impact BES Cyber Systems, including:
Cyber security awareness: training delivered every 15 calendar months
Physical security controls: protections for assets, locations within an asset containing low impact BES systems, and Cyber Assets
Electronic access controls: controls that limit inbound and outbound electronic access for assets containing low impact BES Cyber Systems
Cyber security incident response: identification, classification, and response to Cyber Security Incidents, including establishing roles and responsibilities, testing the plan every 36 months, and updating the Cyber Security Incident response plan within 180 days of a reportable incident
Transient cyber asset and removable media malicious code risk mitigation: Plans for implementing, maintaining, and monitoring anti-virus, application allowlists, and other methods to detect malicious code
Vendor electronic remote access security controls: processes for remote access to mitigate risks, including ways to determine and disable remote access and detect known or suspected malicious communications from vendor remote access
CIP-004-7: Personnel & Training
Every Responsible Entity needs to have one or more documented processes and provide evidence to demonstrate implementation of:
Security awareness training
Personnel risk assessments prior to granting authorized electronic or unescorted physical access
Access management programs
Access revocation programs
Access management, including provisioning, authorizing, and terminating access
CIP-005-7: Electronic Security Perimeter(s)
To mitigate risks, Responsible Entities need controls that permit only known and controlled communications, along with documented processes and evidence of:
Connection to network using a routable protocol protected by an Electronic Security Perimeter (ESP)
Permitting and documenting the reasoning for necessary communications while denying all other communications
Limiting network accessibility to management Interfaces
Performing authentication when allowing remote access through dial-up connectivity
Monitoring to detect known or suspected malicious communications
Implementation of controls, like encryption or physical access restrictions, to protect data confidentiality and integrity
Remote access management capabilities, multi-factor authentication and multiple methods for determining active vendor remote access
Multiple methods for disabling active vendor remote access
One or more methods to determine authenticated vendor-initiated remote access, terminating these remote connections, and controlling ability to reconnect
Most of these requirements fall under the umbrella of network security monitoring. For example, many organizations implement network monitoring and intrusion detection tools to help establish these baselines. Once organizations can define baselines for normal network traffic, they can implement detections that alert their security teams to potential incidents.
CIP-006-6: Physical Security of BES Cyber Systems
To prove management of physical access to these systems, Responsible Entities need documented processes and evidence that include:
Physical security plan with defined operation or procedural controls for restricting physical access
Controls for managing authorized unescorted access
Monitoring for unauthorized physical access
Alarms or alerts for responding to detected unauthorized access
Logs of entry by individuals with authorized unescorted physical access, retained for at least 90 days
Visitor control program that includes continuous escort for visitors, logging visitors, and retaining visitor logs
Maintenance and testing programs for the physical access control system
Many organizations use technologies to help manage physical security, like badges or smart alarms. By incorporating these technologies into the overarching cybersecurity monitoring, Responsible Entities can correlate activities across the physical and digital domains.
Example: Security Card Access in buildings showing entry and exit times.
By tracking both physical access and digital access to BES Cyber Systems, Responsible Entities can improve their overarching security posture, especially given the interconnection between physical and digital access to systems.
CIP-007-6: System Security Management
To prove that they have the technical, operational, and procedural system security management capabilities, Responsible Entities need documented processes and evidence that include:
System hardening: disabling or preventing unnecessary remote access, protecting against the use of unnecessary physical input/output ports used for network connectivity, and mitigating the risk of CPU or memory vulnerabilities
Patch management process: evaluating security patch applicability at least once every 35 calendar days and tracking, evaluating, and installing security patches
Malicious code prevention: methods for deterring, detecting, or preventing malicious code and mitigating the threat of detected malicious code
Monitoring for security events: logging security events per system capabilities, generating security event alerts, retaining security event logs, and reviewing summaries or samplings of logged security events
System access controls: authentication enforcement methods, identification and inventory of all known default or generic accounts, identification of people with authorized access to shared accounts, changing default passwords, technical or procedural controls for password-only authentication (including forced changes at least once every 15 calendar months), and limiting the number of unsuccessful authentication attempts or generating alerts after a threshold of unsuccessful attempts
Having a robust threat detection and incident response (TDIR) solution enables Responsible Entities to combine user and entity behavior analytics (UEBA) with the rest of their log data to support these system security management requirements.
CIP-008-6: Incident Reporting and Response Planning
To mitigate risk to reliable operation, Responsible Entities need documented incident response plans and evidence that include:
Processes for identifying, classifying, and responding to security incidents
Roles and responsibilities for the incident response groups or individuals
Incident handling procedures
Testing incident response plan at least once every 15 calendar months
Retaining records for reportable and other security incidents
Reviewing, updating, and communicating lessons learned, making changes to the plan based on those lessons, and notifying people of the changes
Security analytics enables Responsible Entities to enhance their incident detection and response capabilities. By building detections around MITRE ATT&CK tactics, techniques, and procedures (TTPs), security teams can connect the activities occurring in their environments with real-world activities to investigate an attacker’s path faster. Further, with high-fidelity Sigma rule detections aligned to the ATT&CK framework, Responsible Entities improve their incident response capabilities.
In the aftermath of an incident or incident response test, organizations need to develop reports that enable them to identify lessons learned. These include highlighting:
Key findings
Actions taken
Impact on stakeholders
Incident ID
Incident summary that includes type, time, duration, and affected systems/data
To improve processes, Responsible Entities need to organize the different pieces of evidence into an incident response report that showcases the timeline of events.
Further, they need to capture crucial information about the incident, including:
Nature of threat
Business impact
Immediate actions taken
When/how incident occurred
Who/what was affected
Overall scope
CIP-009-6: Recovery Plans for BES Cyber Systems
To support continued stability, operability, and reliability, Responsible Entities need documented recovery plans with processes and evidence for:
Activation of recovery plan
Responder roles and responsibilities
Backup and storage of information required for recovery and verification of backups
Testing recovery plan at least once every 15 calendar months
Reviewing, updating, and communicating lessons learned, making changes to the plan based on those lessons, and notifying people of the changes
CIP-010-4: Configuration Change Management and Vulnerability Assessments
To prevent and detect unauthorized changes, Responsible Entities need documentation and evidence of configuration change management and vulnerability assessment that includes:
Authorization of changes that can alter behavior of one or more cybersecurity controls
Testing changes prior to deploying them in a production environment
Verifying identity and integrity of operating systems, firmware, software, or software patches prior to installation
Monitoring for unauthorized changes that can alter the behavior of one or more cybersecurity controls at least once every 35 calendar days, including at least one control for: configurations affecting network accessibility; CPU and memory; installation, removal, or updates to operating systems, firmware, software, and cybersecurity patches; malicious code protection; security event logging or alerting; authentication methods; and enabled or disabled account status
Engaging in vulnerability assessment at least once every 15 calendar months
Performing an active vulnerability assessment in a test environment and documenting the results at least once every 36 calendar months
Performing vulnerability assessments for new systems prior to implementation
CIP-011-3: Information Protection
To prevent unauthorized access, Responsible Entities need documented information protection processes and evidence of:
Methods for identifying, protecting, and securely handling BES Cyber System Information (BCSI)
Methods for preventing the unauthorized retrieval of BCSI prior to system disposal
CIP-012-1: Communications between Control Centers
To protect the confidentiality, integrity, and availability of real-time assessment and monitoring data transmitted between Control Centers, Responsible Entities need documented processes for and evidence of:
Risk mitigation for unauthorized disclosure and modification or loss of availability of data
Identification of risk mitigation methods
Identification of where methods are implemented
Assignment of responsibilities when different Responsible Entities own or operate Control Centers
To mitigate data exfiltration risks, Responsible Entities need to aggregate, correlate, and analyze log data across:
Network traffic logs
Antivirus logs
UEBA solutions
With visibility into abnormal data downloads, they can more effectively monitor communications between control centers.
CIP-013-2: Supply Chain Risk Management
To mitigate supply chain risks, Responsible Entities need documented security controls and evidence of:
Procurement processes for identifying and assessing security risks related to installing vendor equipment and software and switching vendors
Receiving notifications about vendor-identified incidents related to products or services
Coordinating responses to vendor-identified incidents related to products or services
Notifying vendors when no longer granting remote or onsite access
Vendor disclosure of known vulnerabilities related to products or services
Verifying software and patch integrity and authenticity
Coordination controls for vendor-initiated remote access
Review and obtain approval for the supply chain risk management plan
CIP-015-1: Internal Network Security Monitoring
While this standard is currently awaiting approval by the NERC Board of Trustees, Responsible Entities should consider preparing for publication and enforcement with documented processes and evidence of monitoring internal networks’ security, including the implementation of:
Network data feeds using a risk-based rationale for monitoring network activity, including connections, devices, and network communications
Detections for anomalous network activity
Evaluating anomalous network activity
Retaining internal network security monitoring data
Protecting internal network security monitoring data
Graylog Security: Enabling the Energy Sector to Comply with NERC CIP
Using Graylog Security, you can rapidly mature your TDIR capabilities without the complexity and cost of traditional Security Information and Event Management (SIEM) technology. Graylog Security’s Illuminate bundles include detection rulesets so that you have content, like Sigma detections, enabling you to uplevel your security alerting, incident response, and threat hunting capabilities with correlations to ATT&CK tactics, techniques, and procedures (TTPs).
By leveraging our cloud-native capabilities and out-of-the-box content, you gain immediate value from your logs. Our anomaly detection ML improves over time without manual tuning, adapting rapidly to new data sets, organizational priorities, and custom use cases so that you can automate key user and entity access monitoring.
With our intuitive user interface, you can rapidly investigate alerts. Our lightning-fast search capabilities enable you to search terabytes of data in milliseconds, reducing dwell times and shrinking investigations by hours, days, and weeks.
To learn how Graylog Security can help you implement robust threat detection and response, contact us today.
Managing configurations in a complex environment can be like playing a game of digital Jenga. Turning off one port to protect an application can undermine the service of a connected device. Writing an overly conservative firewall configuration can prevent remote workforce members from accessing an application that’s critical to getting their work done. In the business world that runs on Software-as-a-Service (SaaS) applications and the Application Programming Interfaces (APIs) that allow them to communicate, a lot of your security is based on the settings you use and the code that you write.
Security misconfigurations keep creeping up the OWASP Top 10 Lists for applications, APIs, and mobile devices because they are security weaknesses that can be difficult to detect until an attacker uses them against you. With insight into what security misconfigurations are and how to mitigate risk, you can create the programs and processes that help you protect your organization.
What are Security Misconfigurations?
Security misconfigurations are insecure settings, often unchanged vendor defaults, that remain in place during and after system deployment. They can occur anywhere within the organization’s environment because they can arise in:
Operating systems
Network devices and their settings
Web servers
Databases
Applications
Organizations typically implement hardening across their environment by changing settings to limit where, how, when, and with whom technologies communicate. Some examples of security misconfigurations, a few of which are probed in the sketch after this list, include failing to:
Disable or uninstall unnecessary features, such as ports, services, accounts, API HTTP verbs, and API logging features
Change default passwords
Limit the information that error messages send to users
Update operating systems, software, and APIs with security patches
Set secure values for application servers, application frameworks, libraries, and databases
Use Transport Layer Security (TLS) for APIs
Restrict Cross-Origin Resource Sharing (CORS)
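To make a few of the items above concrete, here is a minimal sketch that probes an API for missing HTTPS redirection, overly permissive CORS, and verbose error responses. The base URL, markers, and pass/fail logic are illustrative assumptions, not part of any particular product or standard.

```python
# Minimal misconfiguration probe (illustrative sketch only).
# BASE_URL is a hypothetical endpoint; adjust checks for your environment.
import requests

BASE_URL = "https://api.example.internal"  # hypothetical API

def check_tls_redirect(host: str) -> bool:
    """Plain HTTP requests should be refused or redirected to HTTPS."""
    resp = requests.get(f"http://{host}/", timeout=5, allow_redirects=False)
    location = resp.headers.get("Location", "")
    return resp.status_code in (301, 302, 307, 308) and location.startswith("https://")

def check_cors(url: str) -> bool:
    """A wildcard Access-Control-Allow-Origin usually signals overly broad CORS."""
    resp = requests.options(url, headers={"Origin": "https://evil.example"}, timeout=5)
    return resp.headers.get("Access-Control-Allow-Origin") != "*"

def check_error_verbosity(url: str) -> bool:
    """Error responses should not leak stack traces or framework details."""
    resp = requests.get(f"{url}/definitely-not-a-real-path", timeout=5)
    leaky_markers = ("Traceback", "Exception", "at java.", "ORA-")
    return not any(marker in resp.text for marker in leaky_markers)

if __name__ == "__main__":
    host = BASE_URL.removeprefix("https://")
    results = {
        "tls_redirect": check_tls_redirect(host),
        "cors_restricted": check_cors(BASE_URL),
        "errors_sanitized": check_error_verbosity(BASE_URL),
    }
    for check, passed in results.items():
        print(f"{check}: {'OK' if passed else 'REVIEW'}")
```

A real hardening program would drive checks like these from a benchmark (for example, CIS recommendations) rather than a hand-written script, but the idea of continuously verifying settings against an expected state is the same.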
Security Misconfigurations: Why Do They Happen?
Today’s environments consist of complex, interconnected technologies. While all the different applications and devices make business easier, they make security configuration management far more challenging.
Typical reasons that security misconfigurations happen include:
Complexity: Highly interconnected systems can make identifying and implementing all possible security configurations difficult.
Patches: Updating software and systems can have a domino effect across all interconnected technologies that can change a configuration’s security.
Hardware upgrades: Adding new servers or moving to the cloud can change configurations at both the hardware and software levels.
Troubleshooting: Fixing a network, application, or operating system issue to maintain service availability may impact other configurations.
Unauthorized changes: Failing to follow change management processes for adding new technologies or fixing issues can impact interconnections, like users connecting corporate email to authorize API access for an unsanctioned web application.
Poor documentation: Failure to document baselines and configuration changes can lead to a lack of visibility across the environment.
Common Types of Security Misconfiguration Vulnerabilities
To protect your systems against cyber attacks, you should understand the most common security misconfigurations and what they look like; a short example of mitigating one of them follows the list.
Improperly Configured Databases: overly permissive access rights or lack of authentication
Unsecured Cloud Storage: lack of encryption or weak access controls
Default or Weak Passwords: failure to change default passwords or poor password hygiene, leading to credential-based attacks
Misconfigured Firewalls or Network Settings: poor network segmentation, permissive firewall settings, open ports left unsecured
Outdated Software or Firmware: failing to install software, firmware, or API security updates or patches that fix bugs
Inactive Pages: failure to include noopener or noreferrer attributes on links in a website or web application
Unneeded Services/Features: leaving network services available and ports open, like web servers, file share servers, proxy servers, FTP servers, Remote Desktop Protocol (RDP), Virtual Network Computing (VNC), and Secure Shell Protocol (SSH)
Inadequate Access Controls: failure to implement and enforce access policies that limit user interaction, like the principle of least privilege for user access, deny-by-default for resources, or lack of API authentication and authorization
Unprotected Folders and Files: using predictable, guessable file names and locations that identify critical systems or data
Improper Error Messages: API error messages returning data such as stack traces, system information, database structure, or custom signatures
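As an example of addressing the last item, improper error messages, here is a minimal sketch of an API error handler that logs full detail server-side while returning only a generic message and a correlation ID to the client. Flask is used purely for illustration; the route names and response shape are assumptions, and the same pattern applies to any framework.

```python
# Sketch: return generic API error messages while keeping detail server-side.
# Flask is used only as an illustration of the pattern.
import logging
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
logger = logging.getLogger("api")

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Correlate the client-facing message with the server-side log entry.
    error_id = str(uuid.uuid4())
    logger.exception("Unhandled error %s", error_id)  # full stack trace stays in the logs
    # The client sees only a generic message and an ID, never stack traces,
    # database structure, or system information.
    return jsonify({"error": "Internal server error", "id": error_id}), 500
```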
Best Practices for Preventing Security Misconfiguration Vulnerabilities
As you connect more SaaS applications and use more APIs, monitoring for security misconfigurations becomes critical to your security posture.
Implement a hardening process
Hardening is the process of choosing the configurations for your technology stack that limit unauthorized external access and use. For example, many organizations use the CIS Benchmarks that provide configuration recommendations for over twenty-five vendor product families. Organizations in the Defense Industrial Base (DIB) use the Department of Defense (DoD) Security Technical Implementation Guides (STIGs).
Your hardening processes should include a change management process, sketched in the example after this list, that:
Sets and documents baselines
Identifies changes in the environment
Reviews whether changes are authorized
Allows, blocks, or rolls back changes as appropriate
Updates baselines and documentation to reflect allowed changes
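The baseline and change-detection steps above can be as simple as comparing a current configuration export against the documented baseline. The sketch below assumes flat key/value JSON exports with hypothetical file names; real configuration management tools track far richer state, but the comparison logic is the core idea.

```python
# Sketch: compare a current configuration snapshot against a documented baseline.
# The flat JSON format and file names are assumptions for illustration.
import json

def load(path: str) -> dict:
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)

def diff_config(baseline: dict, current: dict) -> dict:
    """Return settings that drifted from the baseline, were added, or were removed."""
    shared = baseline.keys() & current.keys()
    drifted = {k: (baseline[k], current[k]) for k in shared if baseline[k] != current[k]}
    added = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    return {"drifted": drifted, "added": added, "removed": removed}

if __name__ == "__main__":
    # Hypothetical file names; replace with your own configuration exports.
    changes = diff_config(load("baseline.json"), load("current.json"))
    for category, items in changes.items():
        for key, value in items.items():
            print(f"{category}: {key} -> {value}")  # feed into your change-review workflow
```

Unexplained entries in the "drifted" or "added" buckets are candidates for the review, rollback, or baseline-update steps in the list above.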
Implement a vulnerability management and remediation program
Vulnerability scanners can identify common vulnerabilities and exposures (CVEs) on network-connected devices. Your vulnerability management and remediation program should (see the KPI sketch after this list):
Define critical assets: know the devices, resources, and users that impact the business the most
Assign ownership: identify the people responsible for managing and updating critical assets
Identify vulnerabilities: use penetration tests, red teaming, and automated tools, like vulnerability scanners
Prioritize vulnerabilities: combine a vulnerability’s severity and exploitability to determine the ones that pose the highest risk to the organization’s business operations
Identify and monitor key performance indicators (KPIs): set metrics to determine the program’s effectiveness, including number of assets managed, number of assets scanned per month, frequency of scans, percentage of scanned assets containing vulnerabilities, percentage of vulnerabilities fixed within 30, 60, and 90 days
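To show how the remediation KPIs in the last item might be computed, here is a minimal sketch over a hand-made list of vulnerability records. The record layout (discovery and fix dates) and the sample data are assumptions for illustration; in practice these figures would come from your scanner or ticketing system.

```python
# Sketch: compute remediation KPIs from a list of vulnerability records.
# The record layout and sample data are illustrative assumptions.
from datetime import date

vulnerabilities = [
    {"id": "CVE-2024-0001", "discovered": date(2024, 1, 5), "fixed": date(2024, 1, 20)},
    {"id": "CVE-2024-0002", "discovered": date(2024, 1, 10), "fixed": date(2024, 4, 1)},
    {"id": "CVE-2024-0003", "discovered": date(2024, 2, 1), "fixed": None},  # still open
]

def pct_fixed_within(days: int) -> float:
    """Percentage of tracked vulnerabilities remediated within `days` of discovery."""
    fixed_in_window = sum(
        1 for v in vulnerabilities
        if v["fixed"] is not None and (v["fixed"] - v["discovered"]).days <= days
    )
    return 100.0 * fixed_in_window / len(vulnerabilities)

for window in (30, 60, 90):
    print(f"Fixed within {window} days: {pct_fixed_within(window):.1f}%")
```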
Monitor User and Entity Activity
Security misconfigurations often lead to unauthorized access. To mitigate risk, you should implement authentication, authorization, and access best practices that include the following (a simple baselining sketch appears after the list):
Multifactor Authentication: requiring users to provide two or more of the following: something they know (password), something they have (token/smartphone), or something they are (fingerprint or face ID)
Role-based access controls (RBAC): granting users the least amount of access to resources needed for their job functions
Activity baselines: understanding normal user and entity behavior to identify anomalous activity
Monitoring: identifying activity spikes like file permission changes, modifications, and deletions across email servers, webmail, removable media, and DNS
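The activity-baseline and monitoring items above boil down to comparing today's behavior against historical behavior. The sketch below flags a spike in one hypothetical metric (daily file-permission changes for a user) using a simple z-score; real user and entity behavior analytics, including Graylog's anomaly detection, use far richer models, so treat this only as an illustration of the baselining idea.

```python
# Sketch: flag spikes in per-user activity against a simple statistical baseline.
# The counts and the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

# Hypothetical daily counts of file-permission changes for one user.
daily_counts = [3, 5, 4, 2, 6, 4, 3, 5, 4, 48]  # last value is today's count

history, today = daily_counts[:-1], daily_counts[-1]
baseline_mean = mean(history)
baseline_dev = stdev(history) or 1.0  # avoid division by zero on flat history

z_score = (today - baseline_mean) / baseline_dev
if z_score > 3:
    print(f"Anomalous spike: {today} changes today vs. baseline ~{baseline_mean:.1f} (z={z_score:.1f})")
else:
    print("Activity within normal range")
```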
Implement and monitor API Security
APIs are the way that applications talk to one another, often sharing sensitive data. Many companies struggle to manage the explosion of APIs that their digital transformation strategies created, which leaves security weaknesses that attackers seek to exploit. To mitigate these risks, you should implement a holistic API security monitoring program (illustrated in the sketch after this list) that includes:
Continuously discovering APIs across the environment
Scanning all API traffic at runtime
Categorizing API calls
Sorting API traffic into domain buckets
Automatically assessing risk
Prioritizing remediation action using context that includes activity and intensity
Capturing unfiltered API request and response details
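To illustrate the categorization and risk-flagging steps in the list above, here is a minimal sketch that sorts API calls from access-log-style records into domain buckets and flags one simple risky pattern. The record layout, bucket mapping, and risk rule are all assumptions for illustration and are not how any particular product, including Graylog API Security, is implemented.

```python
# Sketch: sort API calls into domain buckets and flag a risky pattern.
# The log record layout and bucket rules are illustrative assumptions.
from collections import defaultdict

api_calls = [
    {"method": "GET",  "path": "/v1/payments/123", "status": 200, "authenticated": True},
    {"method": "POST", "path": "/v1/users",        "status": 201, "authenticated": True},
    {"method": "GET",  "path": "/v1/payments",     "status": 200, "authenticated": False},
]

BUCKETS = {"/v1/payments": "payments", "/v1/users": "identity"}  # hypothetical mapping

def bucket_for(path: str) -> str:
    for prefix, bucket in BUCKETS.items():
        if path.startswith(prefix):
            return bucket
    return "uncategorized"

traffic_by_bucket = defaultdict(list)
for call in api_calls:
    traffic_by_bucket[bucket_for(call["path"])].append(call)

# Flag a simple risk pattern: unauthenticated access to a sensitive bucket.
for call in traffic_by_bucket["payments"]:
    if not call["authenticated"]:
        print(f"Review: unauthenticated {call['method']} {call['path']}")
```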
Graylog Security and Graylog API Security: Helping Detect and Remediate Security Misconfigurations
Built on the Graylog Platform, Graylog Security gives you the features and functionality of a SIEM while eliminating the complexity and reducing costs. With our easy-to-deploy, easy-to-use solution, you get the combined power of centralized log management, data enrichment and normalization, correlation, threat detection, incident investigation, anomaly detection, and reporting.
Graylog API Security is continuous API security, scanning all API traffic at runtime for active attacks and threats. Mapped to security and quality rules like OWASP Top 10, Graylog API Security captures complete request and response detail, creating a readily accessible datastore for attack detection, fast triage, and threat intelligence. With visibility inside the perimeter, organizations can detect attack traffic from valid users before it reaches their applications.
With Graylog’s prebuilt content, you don’t have to worry about choosing the server log data you want because we do it for you. Graylog Illuminate content packs automate the visualization, management, and correlation of your log data, eliminating the manual processes for building dashboards and setting alerts.