ESET Research has been in contact with the authors of an academic study titled "Ransomware 3.0: Self-Composing and LLM-Orchestrated," whose research prototype closely matches the PromptLock samples found on VirusTotal. This further supports our belief that PromptLock is only a proof of concept, but one that shows the kind of malware that could plausibly emerge. Even so, our findings still stand: the discovered samples represent the first known case of AI-powered ransomware. Ransomware 3.0 represents the first threat model and research prototype of LLM-orchestrated ransomware. Unlike traditional malware, the prototype embeds only natural-language prompts in its binary; the malicious code is synthesized dynamically by an LLM at runtime, producing polymorphic variants that adapt to the execution environment. The system performs reconnaissance, payload generation, and personalized extortion in a closed-loop attack campaign that requires no human involvement.
ESET researchers have discovered the first known AI-powered ransomware. Named PromptLock by ESET, the malware is capable of exfiltrating, encrypting, and even destroying data, although the destruction functionality does not yet appear to be implemented. While PromptLock has not been observed in real-world attacks and is believed to be a proof of concept (PoC) or a work in progress, ESET's discovery shows that malicious use of publicly available AI tools could amplify ransomware and other widespread cyberthreats.
Graylog updates its software frequently, releasing version 6.3.1 on July 4. The major release of the first half of the year came at the end of April, during the RSAC conference, when the company announced version 6.2. Its headline features are all provided in the paid editions (Graylog Enterprise and Graylog Security), for example: extending the data lake and data routing capabilities added in last fall's release so that teams can preview whether the data they need is in the data lake before retrieving a data set, and then use the Selective Data Retrieval feature to access a small, targeted slice of log data on demand, which can significantly reduce license consumption.
Beyond the products aimed at security event log management described above, Graylog also offers an API security solution, Graylog API Security. By capturing API traffic, it discovers which APIs are being accessed and whether they are being used by legitimate users, malicious attackers, partners, or insiders, and it applies built-in and custom signatures to automatically detect and alert on these interactions, determining whether network attacks or data exfiltration are taking place.
Graylog API Security originated from the July 2023 acquisition of security startup Resurface. Graylog API Security 3.6 was released in January 2024, followed by a free edition in February, which is limited to single-node deployment and 16 GB of stored data (once that capacity is exceeded, older data is deleted to make room for incoming data).
Product Information
Graylog ● Distributor: Version 2 (Taiwan) ● Suggested price: the Open edition is free; Enterprise starts at US$15,000 per year, Security at US$18,000 per year, and API Security at US$18,000 per year ● Operating system requirements: Linux (Ubuntu 20.04/22.04, RHEL 7/8/9, SUSE Linux Enterprise Server 12/15, Debian 10/11/12), Docker ● Core components: Graylog, Data Node, MongoDB (5.0.7 to 7.x), optionally OpenSearch (1.1.x to 2.15.x)
Any company that processes payments knows the pain of an audit under the Payment Card Industry Data Security Standard (PCI DSS). Although the original PCI DSS had gone through various updates, the Payment Card Industry Security Standards Council (PCI SSC) took feedback from the global payments industry to address evolving security needs. The March 2022 release of PCI DSS 4.0 incorporated changes that intend to promote security as an iterative process while ensuring continued flexibility so that organizations could achieve security objectives based on their needs.
To give companies time to address the new requirements, audits will incorporate the majority of the changes beginning March 31, 2025. However, some changes are included in audits effective immediately.
Why did the Payment Card Industry Security Standards Council (PCI SSC) update the standard?
At a high level, PCI DSS 4.0 responds to changes in IT infrastructures arising from digital transformation and Software-as-a-Service (SaaS) applications. According to PCI SSC’s press release, changes will enhance validation methods and procedures.
When considering PCI DSS 4.0 scope, organizations need to implement controls around the following types of account data:
Cardholder Data: Primary Account Number (PAN), Cardholder Name, Expiration Date, Service Code
Sensitive Authentication Data (SAD): Full track data (magnetic stripe or chip equivalent), card verification code, Personal Identification Numbers (PINs)/PIN blocks.
To get a sense of how the PCI SSC shifted focus when drafting PCI DSS 4.0, you can take a look at how the organization renamed some of the Requirements:
Build and Maintain a Secure Network and Systems
PCI 3.2.1: 1. Install and maintain a firewall configuration to protect cardholder data; 2. Do not use vendor-supplied defaults for system passwords and other security parameters
PCI 4.0: 1. Install and maintain network security controls; 2. Apply secure configurations to all system components

Protect Cardholder Data (updated to Protect Account Data in 4.0)
PCI 3.2.1: 3. Protect stored cardholder data; 4. Encrypt transmission of cardholder data across open, public networks
PCI 4.0: 3. Protect stored account data; 4. Protect cardholder data with strong cryptography during transmission over open, public networks

Maintain a Vulnerability Management Program
PCI 3.2.1: 5. Protect all systems against malware and regularly update anti-virus software or programs; 6. Develop and maintain secure systems and applications
PCI 4.0: 5. Protect all systems and networks from malicious software; 6. Develop and maintain secure systems and software

Implement Strong Access Control Measures
PCI 3.2.1: 7. Restrict access to cardholder data by business need to know; 8. Identify and authenticate access to system components; 9. Restrict physical access to cardholder data
PCI 4.0: 7. Restrict access to system components and cardholder data by business need to know; 8. Identify users and authenticate access to system components; 9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
PCI 3.2.1: 10. Track and monitor all access to network resources and cardholder data; 11. Regularly test security systems and processes
PCI 4.0: 10. Log and monitor all access to system components and cardholder data; 11. Test security of systems and networks regularly

Maintain an Information Security Policy
PCI 3.2.1: 12. Maintain a policy that addresses information security for all personnel
PCI 4.0: 12. Support information security with organizational policies and programs
While PCI SSC expanded the requirements to address larger security and privacy issues, many of them remain fundamentally the same as before. According to the Summary of Changes, most updates fall into one of the following categories:
Evolving requirement: changes that align with emerging threats and technologies or changes in the industry
Clarification or guidance: updated wording, explanation, definition, additional guidance, and/or instruction to improve people’s understanding
Structure or format: content reorganization, like combining, separating, or renumbering requirements
For organizations that have previously met PCI DSS compliance objectives, those changes place little additional burden.
However, PCI DSS 4.0 does include changes to Requirements that organizations should consider.
What new Requirements are immediately in effect for all entities?
While most additions take effect on March 31, 2025, three primary changes affect current PCI audits.
Holistically, PCI DSS 4.0 now includes the following sub-requirement across Requirements 2 through 11:
Roles and responsibilities for performing activities in the Requirement are documented, assigned, and understood.
Additionally, under Requirement 12, all entities should be:
Performing a targeted risk analysis for each PCI DSS requirement met using the documented, customized approach
Documenting and confirming PCI DSS scope every 12 months
What updates are effective March 31, 2025 for all entities?
As the effective date for all requirements draws closer, organizations should consider the major changes that impact their business, security, and privacy operations.
Requirement 3
PCI DSS 4.0 incorporates the following new requirements:
Minimizing SAD stored prior to completion of authorization and retaining it in accordance with data retention and disposal policies, procedures, and processes
Encrypting all SAD stored electronically
Implementing technical controls to prevent copying or relocating PAN when using remote-access technologies, unless explicitly authorized
Rendering PAN unreadable using keyed cryptographic hashes of the entire PAN
Implementing disk-level or partition-level encryption to make PAN unreadable
Requirement 4
PCI DSS 4.0 incorporates the following new requirements:
Confirming that certificates safeguarding PAN during transmission across open, public networks are valid, not expired or revoked
Maintaining an inventory of trusted keys and certificates
Requirement 5
PCI DSS 4.0 incorporates the following new requirements:
Performing a targeted risk analysis to determine how often the organization evaluates whether system components pose a malware risk
Performing targeted risk analysis to determine how often to scan for malware
Performing anti-malware scans when using removable electronic media
Implementing phishing attack detection and protection mechanisms
Requirement 6
PCI DSS 4.0 incorporates the following new requirements:
Maintaining an inventory of bespoke and custom software for vulnerability and patch management purposes
Deploying automated technologies for public-facing web applications to continuously detect and prevent web-based attacks
Managing payment page scripts loaded and executed in consumers’ browsers
Requirement 7
PCI DSS 4.0 incorporates the following new requirements:
Reviewing all user accounts and related access privileges
Assigning and managing all application and system accounts and related access privileges
Reviewing all application and system accounts and their access privileges
Requirement 8
PCI DSS 4.0 incorporates the following new requirements:
Implementing a minimum complexity level for passwords used as an authentication factor
Implementing multi-factor authentication (MFA) for all access into the cardholder data environment (CDE)
Ensuring MFA is implemented appropriately
Managing interactive login for system or application accounts
Using passwords/passphrases for application and system accounts
Protecting passwords/passphrases for application and system accounts against misuse
Requirement 9
PCI DSS 4.0 incorporates the following new requirements:
Performing targeted risk analysis to determine how often POI devices should be inspected
Requirement 10
PCI DSS 4.0 incorporates the following new requirements:
Automating the review of audit logs
Performing a targeted risk analysis to determine how often to review system and component logs
Detecting, receiving alerts for, and addressing critical security control system failures
Promptly responding to critical security control system failures
Requirement 11
PCI DSS 4.0 incorporates the following new requirements:
Managing vulnerabilities not ranked as high-risk or critical
Performing internal vulnerability scans using authenticated scanning
Deploying a change-and-tamper-detection mechanism for payment pages
Requirement 12
PCI DSS 4.0 incorporates the following new requirements:
Documenting a targeted risk analysis for each PCI DSS requirement that allows flexibility in how frequently it is performed
Documenting and reviewing cryptographic cipher suites and protocols in use
Reviewing hardware and software technologies in use
Reviewing security awareness program at least once every 12 months and updating as necessary
Including in training the threats to the CDE, like phishing and related attacks and social engineering
Including acceptable technology use in training
Performing targeted risk analysis to determine how often to provide training
Including in the incident response plan the alerts generated by the change- and tamper-detection mechanism for payment pages
Implementing incident response procedures and initiating them upon detection of stored PAN anywhere it is not expected
What updates are applicable to service providers only?
In some cases, new Requirements apply only to issuers and companies supporting those issuing services and storing sensitive authentication data. Only one of these took effect immediately, an update to Requirement 12:
Third-party service providers (TPSPs) support customers' requests for PCI DSS compliance status and for information about the requirements for which they are responsible
Effective March 31, 2025
Service providers should be aware of the following updates:
Requirement 3:
Encrypting SAD
Documenting the cryptographic architecture, including how it prevents the use of the same cryptographic keys in production and test environments
Requirement 8
Requiring customers to change passwords at least every 90 days or dynamically assessing security posture when not using additional authentication factors
Requirement 11
Multi-tenant service providers supporting customers for external penetration testing
Detecting, receiving alerts for, preventing, and addressing covert malware communication channels using intrusion detection and/or intrusion prevention techniques
Requirement 12
Documenting and confirming PCI DSS scope every 6 months or upon significant changes
Documenting, reviewing, and communicating to executive management the impact that significant organizational changes have on PCI DSS scope
Graylog Security and API Security: Monitoring, Detection, and Incident Response for PCI DSS 4.0
Graylog Security provides the SIEM capabilities organizations need to implement Threat Detection and Incident Response (TDIR) activities and compliance reporting. Graylog Security’s security analytics and anomaly detection functionalities enable you to aggregate, normalize, correlate, and analyze activities across a complex environment for visibility into and high-fidelity alerts for critical security monitoring and compliance issues like:
Access monitoring, including malicious and accidental insider threats
By incorporating Graylog API Security into your PCI DSS monitoring and incident response planning, you enhance your security and compliance program by mitigating risks and detecting incidents associated with Application Programming Interfaces (APIs). With Graylog’s end-to-end API threat monitoring, detection, and response solution, you can augment the outside-in monitoring from Web Application Firewalls (WAF) and API gateways with API discovery, request and response capture, automated risk assessment, and actionable remediation activities.
If you grew up in the 80s and 90s, you probably remember your most beloved Trapper Keeper. The colorful binder contained all the folders, dividers, and lined paper to keep your middle school and high school self as organized as possible. Parsing JSON, a lightweight data format, is the modern, IT environment version of that colorful – perhaps even Lisa Frank themed – childhood favorite.
Parsing JSON involves transforming structured information into a format that can be used within various programming languages. This process can range from making JSON human-readable to extracting specific data points for processing. When you know how to parse JSON, you can improve data management, application performance, and security with structured data that allows for aggregation, correlation, and analysis.
What is JSON?
JSON, or JavaScript Object Notation, is a widely-used, human-readable, and machine-readable data exchange format. JSON structures data using text, representing it through key-value pairs, arrays, and nested elements, enabling data transfers between servers and web applications that use Application Programming Interfaces (APIs).
JSON has become a data-serialization standard that many programming languages support, streamlining programmers’ ability to integrate and manipulate the data. Since JSON makes it easy to represent complex objects using a clear structure while maintaining readability, it is useful for maintaining clarity across nested and intricate data models.
Some of JSON’s key attributes include:
Requires minimal memory and processing power
Easy to read
Supports key-value pairs and arrays
Works with various programming languages
Offers standard format for data serialization and transmission
How to make JSON readable?
Making JSON data more readable enables you to understand and debug complex objects. Some ways to make JSON more readable include:
Pretty-Print JSON: Pretty-printing JSON formats the input string with indentation and line breaks to make hierarchical structures and relationships between object values clearer.
Delete Unnecessary Line Breaks: Removing redundant line breaks while converting JSON into a single-line string literal optimizes storage and ensures consistent string representation.
Use Tools and IDEs: Tools and extensions in development environments that auto-format JSON data can offer an isolated view to better visualize complex JSON structures.
Reviver Function in JavaScript: JSON.parse() accepts an optional reviver function that modifies object values during conversion, shaping the data according to specific needs.
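For example, here is a minimal Python sketch of pretty-printing and re-compacting JSON (the profile fields are illustrative; other languages offer equivalent helpers):

import json

# A compact, single-line JSON string that is hard to scan by eye.
raw = '{"name":"Jane Doe","age":30,"isDeveloper":true,"skills":["JavaScript","Python"]}'

# Parse the string, then pretty-print it with indentation so the structure is clear.
profile = json.loads(raw)
print(json.dumps(profile, indent=2, sort_keys=True))

# Dump it back to a compact single-line string (no extra whitespace) for storage or transmission.
print(json.dumps(profile, separators=(",", ":")))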
What does it mean to parse JSON?
JSON data is typically read as a string, so parsing JSON is the process of converting that string into an object so a programming language can interpret the data. For example, in JSON, a person's profile might look something like this (a reconstructed sketch; the exact field names are illustrative):
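{
  "name": "Jane Doe",
  "age": 30,
  "isDeveloper": true,
  "skills": ["JavaScript", "Python", "HTML", "CSS"],
  "projects": [
    { "name": "Weather App", "completed": true },
    { "name": "E-commerce Website", "completed": false }
  ]
}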
When you parse this JSON data in JavaScript, it might look like this:
Name: Jane Doe
Age: 30
Is Developer: true
Skills: JavaScript, Python, HTML, CSS
Project 1: Weather App, Completed: true
Project 2: E-commerce Website, Completed: false
Even though the information looks the same, it’s easier to read because you removed all of the machine-readable formatting.
Partial JSON parsing
Partial JSON parsing is especially advantageous in environments like Python, where not all fields in the data may be available or necessary. With this flexible input handling, you can ensure model fields have default values to manage missing data without causing errors.
For example, if you only want to know the developer’s name, skills, and completed projects, partial JSON parsing allows you to extract the information you want and focus on specific fields.
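A minimal Python sketch of the idea, assuming the illustrative profile above and using .get() defaults so missing fields do not raise errors:

import json

raw = '{"name": "Jane Doe", "skills": ["JavaScript", "Python"], "projects": [{"name": "Weather App", "completed": true}]}'
profile = json.loads(raw)

# Extract only the fields of interest, with defaults for anything missing (e.g. "age" is absent here).
name = profile.get("name", "unknown")
skills = profile.get("skills", [])
completed_projects = [p["name"] for p in profile.get("projects", []) if p.get("completed", False)]

print(name, skills, completed_projects)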
Why is JSON parsing important?
Parsing JSON transforms the JSON data so that you can handle complex objects and structured data. When you parse JSON, you can serialize and deserialize data to improve data interchange, like for web applications.
JSON parsing enables:
Data Interchange: Allows for easy serialization and deserialization of data across various systems.
Dynamic Parsing: Because JSON is effectively a subset of JavaScript syntax, parsing streamlines integration for web-based applications
Security: Reduces injection attack risks by ensuring data conforms to expected format.
Customization: Transforms raw data into structured, usable objects that can be programmatically manipulated, filtered, and modified according to specific needs.
How to parse a JSON file
Parsing a JSON file involves transforming JSON data from a textual format into a structured format that can be manipulated within a programming environment. Modern programming languages provide built-in methods or libraries for parsing JSON data so you can easily integrate and manipulate data effectively. Once parsed, JSON data can be represented as objects or arrays, allowing operations like sorting or mapping.
Parsing JSON in JavaScript
Most people use the JSON.parse() method to convert string-form JSON data into JavaScript objects, since it can handle both simple and complex objects. Additionally, you may choose to implement a reviver function to manage custom data conversions.
Parsing JSON in PHP
PHP provides the json_decode function so you can translate JSON strings into arrays or objects. Additionally, PHP provides functions that validate the JSON syntax to prevent exceptions that could interrupt execution.
Parsing JSON in Python
Parsing JSON in Python typically means converting JSON strings into Python dictionaries with the json module. This module provides essential functions like loads() for strings and load() for file objects, which are helpful for managing JSON-formatted API data.
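A short sketch of both functions (the file name is illustrative):

import json

# loads() parses a JSON string into a Python dictionary.
profile = json.loads('{"name": "Jane Doe", "age": 30}')
print(profile["name"])

# load() parses directly from a file object; "profile.json" is an assumed example path.
with open("profile.json") as f:
    data = json.load(f)
print(data)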
Parsing JSON in Java
Developers typically use one of the following libraries to parse JSON in Java:
Jackson: efficient for handling large files and comes with an extensive feature set
Gson: minimal configuration and setup but slower for large datasets
json (org.json): a lightweight package providing a set of classes and methods for working with JSON
JSON Logging: Best Practices
Log files often have complex, unstructured text-based formatting. When you convert them to JSON, you can store and search your logs more easily. Over time, JSON has become a standard log format because it creates a structured database that allows you to extract the fields that matter and normalize them against other logs that your environment generates. Additionally, as an application's log data evolves, JSON's flexibility makes it easier to add or remove fields. And since many programming languages either include structured JSON logging in their standard libraries or support it through third-party libraries, adopting JSON logging is usually straightforward.
Log from the Start
Making sure that your application generates logs is critical from the very beginning. Logs enable you to debug the application and detect security vulnerabilities. By emitting JSON logs from the start, you make testing easier and build security monitoring into the application.
Configure Dependencies
If your dependencies can also generate JSON logs, you should consider configuring them to do so, because the structured format makes parsing and analyzing database logs easier.
Format the Schema
Since your JSON logs should be readable and parseable, you want to keep them as compact and streamlined as possible. Some best practices include:
Focusing on objects that need to be read
Flattening structures by concatenating keys with a separator
Using a uniform data type in each field
Parsing exception stack traces into attribute hierarchies
Incorporate Context
JSON enables you to include information about what you’re logging for insight into an event’s immediate context. Some context that helps correlate issues across your IT environment include:
User identifiers
Session identifiers
Error messages
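As a rough Python sketch of these practices (the field names, separator convention, and values are illustrative), a flattened log entry with context might be emitted like this:

import json
import time

def log_event(level, message, **fields):
    # Flattened keys use an underscore separator (e.g. http_status, user_id) and a uniform type per field.
    entry = {"timestamp": int(time.time()), "level": level, "message": message, **fields}
    # In a real application this line would go through your logging handler rather than stdout.
    print(json.dumps(entry))

# Context such as user and session identifiers makes it easier to correlate events across systems.
log_event("ERROR", "payment failed", user_id="u-123", session_id="s-456",
          error_message="card declined", http_status=402)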
Graylog: Correlating and Analyzing Logs for Operations and Security
With Graylog's JSON parsing functions, you can parse out useful information, like destination address, response bytes, and other data that helps you monitor security incidents or answer IT questions. After extracting the data you want, you can use the Graylog Extended Log Format (GELF) to normalize and structure all log data. Graylog's purpose-built solution provides lightning-fast search capabilities and flexible integrations that allow your team to collaborate more efficiently.
Graylog Operations provides a cost-efficient solution for IT ops so that organizations can implement robust infrastructure monitoring while staying within budget. With our solution, IT ops can analyze historical data regularly to identify potential slowdowns or system failures while creating alerts that help anticipate issues.
With Graylog’s security analytics and anomaly detection capabilities, you get the cybersecurity platform you need without the complexity that makes your team’s job harder. With our powerful, lightning-fast features and intuitive user interface, you can lower your labor costs while reducing alert fatigue and getting the answers you need – quickly.
In today's digital age, the internet has become an integral part of our daily lives. From working remotely to streaming movies, we rely on the internet for almost everything. However, slow internet speeds can be frustrating and can significantly affect our productivity and entertainment. Despite advancements in technology, many people continue to face challenges with their internet speeds, hindering their ability to fully utilize the benefits of the internet. In this blog, we will explore how Dan McDowell, Professional Services Engineer, decided to take matters into his own hands and gather data over time to present to his ISP.
Over the course of a few months, I noticed slower and slower internet connectivity. Complaints from neighbors (we are all on the same ISP) led me to take some action. A few phone calls with "mixed" results were not good enough for me, so I knew what I needed: metrics!
Why Metrics?
Showing data is, without a doubt, one of the most powerful ways to prove a statement. How often do you hear one of the following when you call in for support:
Did you unplug it and plug it back in?
It’s probably an issue with your router
Oh, wireless must be to blame
Test it directly connected to your computer!
Nothing is wrong on our end, must be yours…
In my scenario, I was able to prove without a doubt that this wasn't a "me" problem. Using data I gathered by running this script every 30 minutes over a few weeks' time, I was able to prove:
This wasn’t an issue with my router
There was consistent connectivity slowness at the same times every single day of the week, and outside of those times my connectivity was near the offered maximums.
Something was wrong on their end
Clearly, they were not spec’d to handle the increase in traffic when people stop working and start streaming
I used their OWN speed test server for all my testing. It was only one hop away.
This was all the proof I needed:
End Result?
I sent in a few screenshots of my dashboards, highlighting the clear spikes during peak usage periods. I received a phone call not even 10 minutes later from the ISP. They replaced our local OLT and increased the pipe to their co-lo. What a massive increase in average performance!
Ookla Speedtest has a CLI tool?!
Yup. This can be configured to use the same speedtest server (my local ISP runs one) on each run, meaning results are valid and repeatable. Best of all, it can output JSON, which I can convert to GELF with ease! In short, I set up a cron job to run my speed test script every 30 minutes on my Graylog server and output the results, converting the JSON message into GELF, which NetCat sends to my GELF input.
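The script itself is a shell script, but the idea can be sketched in Python (the speedtest CLI flags and JSON field names, the GELF input type, and the host/port are assumptions based on this walkthrough; adjust them to your setup):

import json
import socket
import subprocess

# Run the Ookla speedtest CLI with JSON output (assumes the CLI is installed and accepts --format=json).
raw = subprocess.run(["speedtest", "--format=json"], capture_output=True, text=True).stdout
result = json.loads(raw)

# Build a GELF 1.1 message; custom fields must be prefixed with an underscore.
gelf = {
    "version": "1.1",
    "host": "graylog-server",
    "short_message": "ISP speed test result",
    "_download_bandwidth": result.get("download", {}).get("bandwidth"),
    "_upload_bandwidth": result.get("upload", {}).get("bandwidth"),
    "_ping_latency": result.get("ping", {}).get("latency"),
    "_interface_externalIp": result.get("interface", {}).get("externalIp"),
}

# Send the message to a GELF UDP input (localhost:12201 in this walkthrough).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(gelf).encode("utf-8"), ("localhost", 12201))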
Move the script to a common location and make it executable:
mkdir /scripts
mv speedtest.sh /scripts/
chmod +x /scripts/speedtest.sh
Getting Started
Login to your Graylog instance
Navigate to System → Content Packs
Click upload.
Browse to the downloaded location of the Graylog content pack and upload it to your instance
Install the content pack
This will install a Stream, a pipeline, a pipeline rule (routing to the Stream), and a dashboard
Test out the script!
ssh / console to your linux system hosting Graylog/docker
Manually execute the script:
/scripts/speedtest.sh localhost 12201
Script details: <path to script> <ip/dns/hostname> <port>
Check out the data in your Graylog
Navigate to Streams → Speed Tests
Useful data appears!
Navigate to Dashboards → ISP Speed Test
Check out the data!
Manually execute the script as much as you like. More data will appear the more you run it.
Automate the Script!
This is how I got the data to convince my ISP that something was actually wrong. Set up a cron job that runs every 30 minutes, and within a few days you should see some time-related changes.
ssh or console to your linux system hosting the script / Graylog
Create a crontab entry to run the script every 30 minutes
Create the crontab (this applies to the currently logged-in user, or to root if sudo su was used):
crontab -e
Set the script to run every 30 minutes (change as you like)
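For example (an illustrative entry; keep the host and port that match your GELF input):
*/30 * * * * /scripts/speedtest.sh localhost 12201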
That's it! As long as the user the crontab was made for has permissions, the script will run every 30 minutes and the data will go to Graylog. The dashboard will continue to populate for you automatically.
Bonus Concept – Monitor Your Sites' WAN Connection(s)
This same script could be used to monitor WAN connections at different sites. Without any extra fields, we could use the interface_externalIp or source fields provided by the speedtest CLI/sending host to filter by site location; add a pipeline rule that adds a field based on a lookup table; or add a single field to the speedtest GELF message (by changing the script slightly) to provide that in the original message. Use my dashboard as the basis for a new dashboard with per-site tabs and a summary page! The possibilities are endless.
Most of all, go have fun!
About Graylog
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.
About Version 2 Digital
Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.
Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.