
Strengthen Your Defenses: The Value of a Robust Vulnerability Management Program

An indispensable pillar of any modern security risk management strategy.

 

Home security used to mean walking around at night and checking by hand that every window and door was locked. It was a manual, deliberate process built on a simple truth: any unlocked entry point is an open invitation to a burglar.

Today, organizations face a similar challenge at digital scale. Cybercriminals constantly probe for unlocked digital doors and windows: the security weaknesses that exist in processes and technology. The threat is not theoretical. The 2025 Data Breach Investigations Report found that vulnerability exploitation was a factor in 20% of breaches, a striking 34% year-over-year increase.

As attackers increasingly focus on these weaknesses, a robust vulnerability management program is no longer just a best practice; it has become an indispensable pillar of any modern security risk management strategy.

What is a vulnerability management program?

A vulnerability management program establishes a standardized, proactive framework for identifying, classifying, remediating, and mitigating vulnerabilities across an organization's entire digital environment, including its systems, networks, applications, and devices. It often begins with vulnerability scanning, but a mature program is a comprehensive, continuous cycle designed to reduce risk systematically.

The core elements of a successful program include:

  • Vulnerability identification: using advanced tools and threat intelligence to discover potential weaknesses.
  • Vulnerability assessment: evaluating each vulnerability's severity and potential impact to prioritize remediation.
  • Remediation and mitigation: implementing measures to fix vulnerabilities or reduce their potential impact.
  • Continuous monitoring and reporting: ensuring ongoing assessment and clear visibility into the organization's security posture.

The vulnerability management lifecycle: a continuous defensive loop

Effective vulnerability management is not a one-off project but an ongoing lifecycle made up of distinct, interconnected stages:

  1. Discovery: proactively scan all systems to build a complete inventory of the vulnerabilities that exist across the digital infrastructure.
  2. Asset prioritization: concentrate effort on vulnerabilities affecting the most critical assets, those essential to keeping the business running.
  3. Assessment: classify and rank vulnerabilities by their potential impact on the organization to guide remediation intelligently.
  4. Remediation: mitigate risk by applying security patches, or by implementing compensating controls when the manufacturer provides no security update.
  5. Verification and monitoring: confirm that remediation succeeded and that protections are working as intended.
  6. Reporting: communicate trends and progress over time to demonstrate the program's effectiveness and identify areas for improvement.

Key terms: vulnerability vs. threat vs. risk

Vulnerability: a weakness or flaw in a system, security procedure, or internal control that a threat could exploit.

Threat: a potential event or circumstance that could adversely affect operations or assets, such as an attacker attempting to break into a system.

Risk: the potential loss or damage when a threat exploits a vulnerability. It is typically a function of the likelihood of an event and the impact it would have.

In short, a vulnerability becomes a risk when a threat actor can exploit it to achieve their goals, such as deploying ransomware or stealing data.

Vulnerability management vs. vulnerability assessment

A vulnerability assessment is a key component of vulnerability management, but the two are not the same:

  • Purpose: a vulnerability assessment is a point-in-time snapshot of current vulnerabilities. Vulnerability management is an ongoing, long-term strategic program.
  • Scope: an assessment is a one-time review. Management covers the entire lifecycle from discovery through reporting.
  • Frequency: assessments are performed periodically. Management is a continuous, never-ending process.

Common obstacles to effective vulnerability management

  • Gaining executive buy-in: because vulnerability management is a proactive control, its value is hard to quantify. Securing the necessary budget, policy, and leadership support from executives who may see it as a cost center is a major hurdle.
  • Accurately assessing risk: standard scoring systems like CVSS are useful but often lack business context. True risk assessment requires understanding how critical an asset is to the organization, which a generic score cannot provide.
  • Achieving full asset visibility: organizations cannot protect what they cannot see. The proliferation of unmanaged devices, such as employee-owned smartphones, creates blind spots that leave a large part of the attack surface unmonitored.
  • Lacking formal policies and processes: without a repeatable framework for prioritization and remediation, the work becomes manual, inconsistent, and error-prone.
  • Struggling with prioritization: the combination of poor asset visibility, generic risk scores, and inconsistent processes makes it nearly impossible to know which vulnerabilities to fix first, leaving teams overwhelmed.
  • Siloed team collaboration: vulnerability management is a team sport that requires coordination among security, DevOps, and IT operations. Without a centralized platform for communication and tracking, processes break down and remediation slows.

Graylog: context-aware risk scoring and asset prioritization

Graylog Security addresses these challenges directly by providing the context needed to drive intelligent vulnerability management. Our platform lets you classify the importance of every machine and user asset, grouping them into low, medium, high, and critical priority tiers.

That classification powers our Asset Risk Scores, which combine event-level risk with key context, including log data sources, asset priority, and associated vulnerabilities. Your security team can then focus on the security events that truly matter: the ones affecting your most critical and most vulnerable assets.

Built on the powerful Graylog platform, Graylog Security delivers the full capabilities of a SIEM without the complexity and high cost. Our easy-to-use solution combines centralized log management, data enrichment, threat detection, incident investigation, and reporting in a single platform.

With the Graylog Illuminate content packs, we automate the visualization and correlation of your most important log data, so you can focus on security rather than configuration.

About Graylog
Graylog improves enterprise cybersecurity with comprehensive SIEM, enterprise log management, and API security solutions. Graylog centralizes attack-surface monitoring and deep investigation, delivering superior threat detection and incident response. Its unique combination of AI/ML, advanced analytics, and intuitive design simplifies security operations. Unlike competitors' complex and costly setups, Graylog offers a powerful, affordable solution that helps organizations meet security challenges with ease. Founded in Hamburg, Germany, Graylog is now headquartered in Houston, Texas, and serves customers in more than 180 countries.

About Version 2 Digital
Distributor and leader in cybersecurity solutions
Version 2 Taiwan (台灣二版) is one of the most dynamic IT companies in Asia. With many years in the IT field, it delivers up-to-date security solutions (such as EDR, NDR, and vulnerability management), productivity tools (such as remote control and web filtering), and managed detection and response (MDR) services. Through an extensive network of points of sale, resellers, and partners, it provides widely acclaimed products along with customized, localized professional services.

Version 2 Taiwan's sales coverage includes Taiwan, Hong Kong, mainland China, Singapore, and Macau, and its customers span every industry, including Global 1000 multinationals, listed companies, public utilities, government agencies, countless successful SMEs, and consumer-market customers across Asian cities.

Advanced Persistent Threats (APTs): The Silent Threat Lurking in Your Network

We have all had "that cold": most of the symptoms fade, but an annoying cough lingers for weeks. In cybersecurity, an Advanced Persistent Threat (APT) is the digital equivalent of that stubborn cough, except the damage goes far beyond an irritation. It is an attack that quietly slips into your network and then lurks in the shadows, patiently waiting to achieve its objectives.

Understanding these silent, long-term threats is the first step toward building truly resilient defenses for your business.

What is an advanced persistent threat?

An Advanced Persistent Threat (APT) is a highly sophisticated, targeted cyberattack in which a malicious actor gains unauthorized access to a network and remains undetected for a long period. Unlike common cybercriminals focused on quick profits, APT attackers play the long game. The name itself tells the story:

  • Advanced: the attackers use sophisticated and often custom-built tools and techniques to breach defenses. They are methodical, well funded, and patient.
  • Persistent: this is not a one-off event. The primary goal is to establish a long-term foothold inside the target network, maintaining access for months or even years to keep gathering intelligence.
  • Threat: behind the attack is an organized human adversary, not just an automated script. These threat actors are usually well-organized groups that target high-value entities such as government agencies, defense contractors, and large enterprises for commercial or state espionage.

Their primary goals are data theft and intelligence gathering rather than disrupting systems to cause damage.

Anatomy of a Silent Intrusion: The APT Lifecycle

APT attacks unfold methodically in stages. The specific tools may vary, but the strategic flow is consistent.

Stage One: Infiltration – Slipping In Quietly

The first step is gaining initial access. Like a burglar casing a property, the attackers carefully look for a way in.

  • Reconnaissance: they scan the network for weaknesses, identify misconfigured systems, and gather intelligence about employees and infrastructure.
  • Initial access: they use what they learned during reconnaissance to breach perimeter defenses. Common methods include targeted phishing attacks to steal credentials, exploiting unpatched software vulnerabilities, and even buying access from Initial Access Brokers on the dark web.
  • Establish a foothold: once inside, they immediately deploy tools such as backdoors or rootkits. This ensures they retain access to the compromised systems even if the original entry point is discovered and closed.

Stage Two: Expansion – Mapping the Territory

With a foothold secured, the attackers begin to explore. This stage is about penetrating deeper into the network and gaining more control.

  • Lateral movement: attackers move quietly between systems, mapping the network architecture and locating where valuable data is stored.
  • Privilege escalation: the initial intrusion typically comes through a standard user account with limited rights. The attackers then work to elevate their privileges, often by targeting and taking over administrator accounts. This level of access lets them disable security controls, manipulate systems, and move freely.

Stage Three: Exfiltration – Executing the Heist

This is the culmination of all their effort. Having mapped the network and gained privileged access, the attackers begin stealing the target data.

  • Data collection and exfiltration: they gather, encrypt, and compress sensitive data, then transfer it to their own servers. To avoid detection, they often steal data slowly and in small amounts to mimic normal network traffic.
  • Covering their tracks: to distract the security team while data is being stolen, APT groups may launch diversionary attacks such as distributed denial-of-service (DDoS) or ransomware attacks.
  • Remaining embedded: even after the initial theft, the attackers may choose to stay hidden in the network to launch further attacks or keep stealing information over the long term.

Hunting Digital Phantoms: How to Detect APTs

Because APTs are designed to stay hidden, detection is extremely challenging. Security teams must shift from waiting for loud alarms to hunting for small anomalies that, when connected, tell the story of a hidden intruder. Key indicators include:

  • Unusual login activity: look for abnormal patterns, especially on privileged accounts, such as logins outside normal business hours or from unexpected geographic locations.
  • Unexpected data flows: watch for anomalous network traffic, such as large transfers to external servers or unusual internal data staging, which may indicate data being prepared for exfiltration.
  • Widespread backdoor trojans: discovering sophisticated malware designed to maintain persistent access, present on multiple machines, is a strong indicator of an APT.
  • Small, persistent problems: seemingly trivial, recurring anomalies or unexplained account lockouts can be part of a larger, coordinated attack.

Building Resilient Defenses Against APTs

Defending against a patient, well-resourced adversary requires a layered, proactive security strategy. Key best practices include:

  • Shrink the attack surface: apply security updates regularly, enforce strict firewall rules, and continuously scan for and remediate vulnerabilities to reduce the attacker's entry points.
  • Enforce strict access controls: follow the principle of least privilege so users can access only the data and systems their jobs require. Deploy a privileged access management (PAM) solution to closely monitor and control high-value accounts.
  • Adopt a proactive mindset: do not wait for an alarm. Implement and automate threat hunting to proactively search for indicators of compromise. Mapping your defenses to frameworks such as MITRE ATT&CK helps you focus on the tactics and techniques used by known APT groups.
  • Protect remote connections: use a virtual private network (VPN) to encrypt all remote connections, making it harder for attackers to intercept data in transit.

Conclusion: The Importance of Vigilance

Advanced persistent threats are not noisy smash-and-grab robberies; they are patient, methodical espionage campaigns. Detecting and mitigating them requires a fundamental shift from a reactive posture to proactive, continuous vigilance. By understanding their methods, hunting for the subtle signs of their presence, and building deep, proactive defenses, organizations can turn their networks from a hunting ground into a hardened fortress.


[Media Coverage] Graylog keeps strengthening log management, adding data-lake previews and adversary-informed defense

Small organizations and SMEs with limited budgets that want to build a professional-grade log management system (LMS), a security information and event management (SIEM) system, or even a security operations center (SOC) often consider open-source software. Popular options in this space include the ELK stack (Elasticsearch, Logstash, Kibana), OpenSearch, Wazuh, and Graylog. At the 2024 cybersecurity conference in Taiwan (臺灣資安大會), one speaker presented his experience using the free edition of Graylog to build a security operations war room.

Several vendors promote Graylog in Taiwan: one is 節省工具箱, which offers services around the commercial edition, and another is Version 2 Taiwan (台灣二版), a distributor of many IT management and security brands, which began publicizing Graylog's latest developments last year.

Log management is the flagship product: an open-source core, an Enterprise edition with advanced features, and a Security edition focused on security use cases

Graylog offers its solutions in several forms, and its long-evolving log management system is the best-known product.

First is Graylog Open, known for being free and open source, which provides centralized log management and can collect, parse, enrich, and analyze data across many kinds of IT environments. Next is Graylog Enterprise, built on Graylog Open with additional advanced features: correlation, archiving and forwarding (Forwarder), multi-user team collaboration and reporting, and integrations with a range of cloud services and applications.

Notably, Graylog Enterprise is also known as Graylog Operations: Graylog Enterprise was renamed Graylog Operations in May 2022, and two years later the name was changed back to Graylog Enterprise.

On top of Graylog Enterprise, Graylog offers Graylog Cloud, a fully managed SaaS service (launched in March 2021), as well as Graylog Security, which adds the capabilities security teams need for threat detection, investigation, and response (launched in October 2021).

To extend parsing for all kinds of log data, Graylog provides easy-to-use content packs called Graylog Illuminate (launched in June 2020), of which there are currently about 60. Through built-in processing pipelines, parsing rules, and lookup tables, they help Graylog Enterprise and Graylog Security enrich and normalize data so that logs from different sources can be analyzed more efficiently.

Graylog releases updates frequently: version 6.3.1 shipped on July 4. The major update of the first half of this year came at the end of April, during the RSAC conference, when the company announced version 6.2. Its headline features are all delivered in the paid editions (Graylog Enterprise and Graylog Security). One extends the data lake and data routing capabilities added in last autumn's release: before retrieving a data set, teams can preview whether the data they need is in the data lake, then use Selective Data Retrieval to pull small, targeted slices of log data on demand, which can substantially reduce license consumption.

  

Another new feature in 6.2 is Adversary Informed Defense, which automatically recognizes the attack tactics, techniques, and procedures (TTPs) of multiple adversary groups, runs a series of detections, and increases the risk score exponentially with each additional confirmed detection. Detections are tagged not only against the tactics and techniques listed in MITRE ATT&CK but also against detailed information about real-world threat activity, so an organization can identify attack campaigns in its own environment, even when the individual actions are spread across many days, weeks, or months.

  

Expanding into API protection: monitoring and inspecting API traffic, with alerts for attacks and problematic runtime activity

Beyond the log-management products above, Graylog also offers an API security solution called Graylog API Security. By capturing API traffic, it discovers which APIs are being accessed and whether they are being used by legitimate users, malicious attackers, partners, or insiders, and it uses built-in and custom signatures to automatically detect and alert on those interactions, confirming whether attacks or data-exfiltration behavior are present.

Graylog API Security grew out of the July 2023 acquisition of the security startup Resurface. Graylog API Security 3.6 was released in January 2024, followed in February by a free edition limited to single-node deployment and 16 GB of stored data (once that capacity is exceeded, the oldest data is deleted to make room for new data).

Product information

Graylog
● Distributor: Version 2 Taiwan (台灣二版)
● Suggested pricing: Open edition free; Enterprise from US$15,000 per year; Security from US$18,000 per year; API Security from US$18,000 per year
● Operating system requirements: Linux (Ubuntu 20.04, 22.04; RHEL 7/8/9; SUSE Linux Enterprise Server 12/15; Debian 10/11/12), Docker
● Core components: Graylog, Data Node, MongoDB (5.0.7 to 7.x); optional OpenSearch (1.1.x to 2.15.x)

[Note: Specifications and prices are provided by the vendor and may change; contact the vendor for current information.]

Media coverage reprinted from iThome ( https://www.ithome.com.tw/review/169622 )


Monitoring for PCI DSS 4.0 Compliance

Any company that processes payments knows the pain of an audit under the Payment Card Industry Data Security Standard (PCI DSS). Although the original PCI DSS had gone through various updates, the Payment Card Industry Security Standards Council (PCI SSC) took feedback from the global payments industry to address evolving security needs. The March 2022 release of PCI DSS 4.0 incorporated changes that intend to promote security as an iterative process while ensuring continued flexibility so that organizations could achieve security objectives based on their needs.

 

To give companies time to address new requirements, audits will begin incorporating the majority of the new changes beginning March 31, 2025. However, some issues will be included in audits beginning immediately.

 

Why did the Payment Card Industry Security Standards Council (PCI SSC) update the standard?

At a high level, PCI DSS 4.0 responds to changes in IT infrastructures arising from digital transformation and Software-as-a-Service (SaaS) applications. According to PCI SSC’s press release, changes will enhance validation methods and procedures.

 

When considering PCI DSS 4.0 scope, organizations need to implement controls around the following types of account data:

  • Cardholder Data: Primary Account Number (PAN), Cardholder Name, Expiration Date, Service Code
  • Sensitive Authentication Data (SAD): Full track data (magnetic stripe or chip equivalent), card verification code, Personal Identification Numbers (PINs)/PIN blocks.

 

To get a sense of how the PCI SSC shifted focus when drafting PCI DSS 4.0, you can take a look at how the organization renamed some of the Requirements:

 

 

PCI Categories | PCI 3.2.1 | PCI 4.0

Build and Maintain a Secure Network and Systems
  • PCI 3.2.1: (1) Install and maintain a firewall configuration to protect cardholder data; (2) Do not use vendor-supplied defaults for system passwords and other security parameters
  • PCI 4.0: (1) Install and maintain network security controls; (2) Apply secure configurations to all system components

Protect Cardholder Data (renamed Protect Account Data in 4.0)
  • PCI 3.2.1: (3) Protect stored cardholder data; (4) Encrypt transmission of cardholder data across open, public networks
  • PCI 4.0: (3) Protect stored account data; (4) Protect cardholder data with strong cryptography during transmission over open, public networks

Maintain a Vulnerability Management Program
  • PCI 3.2.1: (5) Protect all systems against malware and regularly update anti-virus software or programs; (6) Develop and maintain secure systems and applications
  • PCI 4.0: (5) Protect all systems and networks from malicious software; (6) Develop and maintain secure systems and software

Implement Strong Access Control Measures
  • PCI 3.2.1: (7) Restrict access to cardholder data by business need to know; (8) Identify and authenticate access to system components; (9) Restrict physical access to cardholder data
  • PCI 4.0: (7) Restrict access to system components and cardholder data by business need to know; (8) Identify users and authenticate access to system components; (9) Restrict physical access to cardholder data

Regularly Monitor and Test Networks
  • PCI 3.2.1: (10) Track and monitor all access to network resources and cardholder data; (11) Regularly test security systems and processes
  • PCI 4.0: (10) Log and monitor all access to system components and cardholder data; (11) Test security of systems and networks regularly

Maintain an Information Security Policy
  • PCI 3.2.1: (12) Maintain a policy that addresses information security for all personnel
  • PCI 4.0: (12) Support information security with organizational policies and programs

 

While PCI SSC expanded the requirements to address larger security and privacy issues, many of them remain fundamentally the same as before. According to the Summary of Changes, most updates fall into one of the following categories:

  • Evolving requirement: changes that align with emerging threats and technologies or changes in the industry
  • Clarification or guidance: updated wording, explanation, definition, additional guidance, and/or instruction to improve people’s understanding
  • Structure or format: content reorganization, like combining, separating, or renumbering requirements

 

For organizations that have previously met PCI DSS compliance objectives, those changes place little additional burden.

 

However, PCI DSS 4.0 does include changes to Requirements that organizations should consider.

 

What new Requirements are immediately in effect for all entities?

While additions are effective beginning March 31, 2025, three primary issues affect current PCI audits.

 

Holistically, PCI DSS now includes the following sub-requirement across Requirements 2 through 11:

Roles and responsibilities for performing activities in the Requirement are documented, assigned, and understood.

 

Additionally, under Requirement 12, all entities should be:

  • Performing a targeted risk analysis for each PCI DSS requirement according to the documented, customized approach
  • Documenting and confirming PCI DSS scope every 12 months

 

What updates are effective March 31, 2025 for all entities?

As the effective date for all requirements draws closer, organizations should consider the major changes that impact their business, security, and privacy operations.

 

Requirement 3

PCI DSS 4.0 incorporates the following new requirements:

  • Minimizing the SAD stored prior to completion and retaining it according to data retention and disposal policies, procedures and processes
  • Encrypting all SAD stored electronically
  • Implementing technical controls to prevent copying/relocating PAN when using remote-access technologies unless requiring explicit authorization
  • Rendering PAN unreadable with keyed cryptographic hashes
  • Implementing disk-level or partition-level encryption to make PAN unreadable

 

Requirement 4

PCI DSS 4.0 incorporates the following new requirements:

  • Confirming that certificates safeguarding PAN during transmission across open, public networks are valid, not expired or revoked
  • Maintaining an inventory of trusted keys and certificates

 

Requirement 5

PCI DSS 4.0 incorporates the following new requirements:

  • Performing a targeted risk analysis to determine how often the organization evaluates whether system components pose a malware risk
  • Performing targeted risk analysis to determine how often to scan for malware
  • Performing anti-malware scans when using removable electronic media
  • Implementing phishing attack detection and protection mechanisms

 

Requirement 6

PCI DSS 4.0 incorporates the following new requirements:

  • Maintaining an inventory of bespoke and custom software for vulnerability and patch management purposes
  • Deploying automated technologies for public-facing web applications to continuously detect and prevent web-based attacks
  • Managing payment page scripts loaded and executed in consumers’ browsers

 

Requirement 7

PCI DSS 4.0 incorporates the following new requirements:

  • Reviewing all user accounts and related access privileges
  • Assigning and managing all application and system accounts and related access privileges
  • Reviewing all application and system accounts and their access privileges

 

Requirement 8

PCI DSS 4.0 incorporates the following new requirements:

  • Implementing a minimum complexity level for passwords used as an authentication factor
  • Implementing multi-factor authentication (MFA) for all CDE access
  • Ensuring MFA is implemented appropriately
  • Managing interactive login for system or application accounts
  • Using passwords/passphrases for application and system accounts
  • Protecting passwords/passphrases for application and system accounts against misuse

 

Requirement 9

PCI DSS 4.0 incorporates the following new requirements:

  • Performing targeted risk analysis to determine how often POI devices should be inspected

 

Requirement 10

PCI DSS 4.0 incorporates the following new requirements:

  • Automating the review of audit logs
  • Performing a targeted risk analysis to determine how often to review system and component logs
  • Detecting, receiving alerts for, and addressing critical security control system failures
  • Promptly responding to critical security control system failures

 

Requirement 11

PCI DSS 4.0 incorporates the following new requirements:

  • Managing vulnerabilities not ranked as high-risk or critical
  • Performing internal vulnerability scans using authenticated scanning
  • Deploying a change-and-tamper-detection mechanism for payment pages

 

Requirement 12

PCI DSS 4.0 incorporates the following new requirements:

  • Documenting the targeted risk analysis for each PCI DSS requirement that offers flexibility in how frequently it is performed
  • Documenting and reviewing cryptographic cypher suites and protocols
  • Reviewing hardware and software
  • Reviewing security awareness program at least once every 12 months and updating as necessary
  • Including in training threats to the cardholder data environment, like phishing and related attacks and social engineering
  • Including acceptable technology use in training
  • Performing targeted risk analysis to determine how often to provide training
  • Including in incident response plan the alerts from change-and-tamper detection mechanism for payment pages
  • Implementing incident response procedures and initiating them upon PAN detection

 

What updates are applicable to service providers only?

In some cases, new Requirements apply only to issuers and companies supporting those issuing services and storing sensitive authentication data. Only one of these immediately went into effect, the update to Requirement 12:

  • TPSPs support customers’ requests for PCI DSS compliance status and information about the requirements for which they are responsible

 

Effective March 31, 2025

Service providers should be aware of the following updates:

 

  • Requirement 3:
    • Encrypting SAD
    • Documenting the cryptographic architecture that prevents people from using cryptographic keys in production and test environments
  • Requirement 8
    • Requiring customers to change passwords at least every 90 days or dynamically assessing security posture when not using additional authentication factors
  • Requirement 11
    • Multi-tenant service providers supporting customers for external penetration testing
    • Detecting, receiving alerts for, preventing, and addressing covert malware communication channels using intrusion detection and/or intrusion prevention techniques
  • Requirement 12
    • Documenting and confirming PCI DSS scope every 6 months or upon significant changes
    • Documenting, reviewing, and communicating to executive management the impact that significant organizational changes have on PCI DSS scope

 

Graylog Security and API Security: Monitoring, Detection, and Incident Response for PCI DSS 4.0

 

Graylog Security provides the SIEM capabilities organizations need to implement Threat Detection and Incident Response (TDIR) activities and compliance reporting. Graylog Security’s security analytics and anomaly detection functionalities enable you to aggregate, normalize, correlate, and analyze activities across a complex environment for visibility into and high-fidelity alerts for critical security monitoring and compliance issues like:

 

By incorporating Graylog API Security into your PCI DSS monitoring and incident response planning, you enhance your security and compliance program by mitigating risks and detecting incidents associated with Application Programming Interfaces (APIs). With Graylog’s end-to-end API threat monitoring, detection, and response solution, you can augment the outside-in monitoring from Web Application Firewalls (WAF) and API gateways with API discovery, request and response capture, automated risk assessment, and actionable remediation activities.

 

 

About Graylog 
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

Graylog: Why Patching Vulnerabilities Is Not the Ultimate Goal of Cybersecurity

A recent analysis by JPMorgan Chase criticized the CVSS scoring system, noting that its lack of context leads to misleading prioritization. In cybersecurity, patching vulnerabilities is often treated as the most important goal. Patch every CVE and you are safe, right? Not quite. Patching is neither as simple nor as effective as it looks: limited resources, business disruption, and the sheer volume of vulnerabilities make even 100% remediation of critical or high-severity flaws feel out of reach.

Patching matters, but it is not the only way to protect your environment.

Key challenges of patching

1. The number of vulnerabilities keeps surging
The number of publicly disclosed vulnerabilities grows rapidly every year, with the National Vulnerability Database (NVD) recording tens of thousands of new entries annually. When security scanners keep generating floods of "critical" alerts, how do you choose which vulnerabilities to patch first?

2. Business disruption and operational risk
Patching often requires downtime and testing, and it can break critical systems. For organizations with legacy infrastructure, patching a production server can introduce unexpected problems that outweigh the risk posed by the vulnerability itself.

3. Limited resources
Constraints on budget, skilled staff, and tooling leave security teams chronically overstretched. A team that spends most of its time patching cannot focus on other important work such as incident response, user education, or threat hunting.

4. The reality of exploitation
Not every vulnerability is weaponized, and not every vulnerability is exploitable in your environment. Yet traditional vulnerability management often treats every vulnerability as equally urgent, which leads to patch fatigue and wasted resources.

 

Why you do not need to chase 100% remediation

The truth is that trying to patch every vulnerability is not only impractical, it also burns resources unnecessarily. The point of cybersecurity is not perfection; it is making sure you prioritize the issues that genuinely threaten the business.

Reasons not to chase 100% remediation:

1. Not every vulnerability poses real risk
Vulnerabilities on unexposed systems, or vulnerabilities with no known exploit, may not need immediate attention. Over-focusing on low-risk vulnerabilities diverts attention from the threats that are genuinely high risk.

2. Attackers focus on valuable targets
Attackers do not care about your patch rate; they look for the fastest path to high-value assets. Patching everything indiscriminately can mean overlooking the core assets that actually need protection.

3. Runtime data matters more than static data
Static vulnerability scans only point to potential problems, while runtime data reveals the actual threat picture right now. Knowing whether a vulnerability is being exploited is the key to separating theoretical risk from real-world threat.

 

Graylog's approach: asset-based risk with runtime analysis

At Graylog, the goal is not to patch every vulnerability but to help organizations thoroughly understand their risk. Our asset-based approach to risk management combines vulnerability data with real-time event monitoring so you can focus on the threats that truly matter.

1. Why real-time event monitoring is essential
Traditional vulnerability management is like reading a static map: you can see the terrain but not how it is changing. Graylog brings runtime activity into the analysis, helping you answer questions such as:

  • Is this vulnerability being actively exploited by attackers?
  • Are the affected systems communicating with known malicious IPs?
  • Are systems showing unusual processes or behavior?

Real-time insight helps you prioritize the vulnerabilities that matter most and cut the wasted effort of low-value patching.

2. Graylog's advantage: focusing on threats in progress
Patching addresses what might happen; Graylog lets you focus on what is happening. By correlating log data, threat intelligence, and asset behavior, we help you identify real indicators of compromise (IOCs) and attacker tactics, techniques, and procedures (TTPs).

3. Detecting actual attacks
Graylog focuses not only on identifying potential risk but also on quickly catching and responding to real attacks. Your team can spend its time and resources on active threats instead of low-priority patching.

 

Conclusion: focus on what truly matters
In cybersecurity, chasing perfection often means missing the point. Trying to patch every vulnerability is like locking every window while leaving the front door open to intruders. With Graylog's asset-based approach to risk management, you can concentrate on current threats and high-risk vulnerabilities, cut unnecessary waste, put resources where they do the most good, and protect your business effectively.


What To Know About Parsing JSON

If you grew up in the 80s and 90s, you probably remember your most beloved Trapper Keeper. The colorful binder contained all the folders, dividers, and lined paper to keep your middle school and high school self as organized as possible. Parsing JSON, a lightweight data format, is the modern, IT environment version of that colorful – perhaps even Lisa Frank themed – childhood favorite.

 

Parsing JSON involves transforming structured information into a format that can be used within various programming languages. This process can range from making JSON human-readable to extracting specific data points for processing. When you know how to parse JSON, you can improve data management, application performance, and security with structured data that allows for aggregation, correlation, and analysis.

What is JSON?

JSON, or JavaScript Object Notation, is a widely-used, human-readable, and machine-readable data exchange format. JSON structures data using text, representing it through key-value pairs, arrays, and nested elements, enabling data transfers between servers and web applications that use Application Programming Interfaces (APIs).

 

JSON has become a data-serialization standard that many programming languages support, streamlining programmers’ ability to integrate and manipulate the data. Since JSON makes it easy to represent complex objects using a clear structure while maintaining readability, it is useful for maintaining clarity across nested and intricate data models.

 

Some of JSON’s key attributes include:

  • Requires minimal memory and processing power
  • Easy to read
  • Supports key-value pairs and arrays
  • Works with various programming languages
  • Offers standard format for data serialization and transmission

 

How to make JSON readable?

Making JSON data more readable enables you to understand and debug complex objects. Some ways to make JSON more readable include (a short Python sketch follows this list):

  • Pretty-Print JSON: Pretty-printing JSON formats the input string with indentation and line breaks to make hierarchical structures and relationships between object values clearer.
  • Delete Unnecessary Line Breaks: Removing redundant line breaks while converting JSON into a single-line string literal optimizes storage and ensures consistent string representation.
  • Use Tools and IDEs: Tools and extensions in development environments that auto-format JSON data can offer an isolated view to better visualize complex JSON structures.
  • Reviver Function in JavaScript: Passing a reviver function to the JSON.parse() method lets you modify object values during conversion and shape the data according to specific needs.
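
As an illustration of the first two points, here is a minimal Python sketch that pretty-prints a JSON string and then re-compacts it into a single-line form; the sample object is a made-up stand-in:

```python
import json

raw = '{"name": "Jane Doe", "skills": ["JavaScript", "Python"], "isDeveloper": true}'

profile = json.loads(raw)  # parse the string into a Python dict

# Pretty-print: indentation and line breaks make nested structures easier to read
print(json.dumps(profile, indent=2))

# Compact form: strip redundant whitespace for storage or transmission
print(json.dumps(profile, separators=(",", ":")))
```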

 

What does it mean to parse JSON?

JSONs are typically read as a string, so parsing JSON is the process of converting the string into an object to interpret the data in a programming language. For example, in JSON, a person’s profile might look like this:

{ "name": "Jane Doe", "age": 30, "isDeveloper": true, "skills": ["JavaScript", "Python", "HTML", "CSS"], "projects": [ { "name": "Weather App", "completed": true }, { "name": "E-commerce Website", "completed": false } ] }

When you parse this JSON data in JavaScript, it might look like this:

Name: Jane Doe
Age: 30
Is Developer: true
Skills: JavaScript, Python, HTML, CSS
Project 1: Weather App, Completed: true
Project 2: E-commerce Website, Completed: false

 

Even though the information looks the same, it’s easier to read because you removed all of the machine-readable formatting.

Partial JSON parsing

Partial JSON parsing is especially advantageous in environments like Python, where not all fields in the data may be available or necessary. With this flexible input handling, you can ensure model fields have default values to manage missing data without causing errors.

 

For example, if you only want to know the developer’s name, skills, and completed projects, partial JSON parsing allows you to extract the information you want and focus on specific fields.
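
A minimal Python sketch of that idea, using the profile JSON shown earlier; the default values here are purely illustrative:

```python
import json

raw = '''{ "name": "Jane Doe", "age": 30, "isDeveloper": true,
           "skills": ["JavaScript", "Python", "HTML", "CSS"],
           "projects": [ { "name": "Weather App", "completed": true },
                         { "name": "E-commerce Website", "completed": false } ] }'''

profile = json.loads(raw)

# Extract only the fields we care about, with defaults guarding against missing data
name = profile.get("name", "unknown")
skills = profile.get("skills", [])
completed = [p["name"] for p in profile.get("projects", []) if p.get("completed")]

print(name, skills, completed)
```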

 

Why is JSON parsing important?

Parsing JSON transforms the JSON data so that you can handle complex objects and structured data. When you parse JSON, you can serialize and deserialize data to improve data interchange, like for web applications.

 

JSON parsing enables the following (a short round-trip sketch follows this list):

  • Data Interchange: Allows for easy serialization and deserialization of data across various systems.
  • Dynamic Parsing: Streamlines integration for web-based applications because JSON is a syntactic subset of JavaScript
  • Security: Reduces injection attack risks by ensuring data conforms to expected format.
  • Customization: Transforms raw data into structured, usable objects that can be programmatically manipulated, filtered, and modified according to specific needs.
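
A tiny round-trip sketch of the data-interchange point, assuming Python on both ends of the exchange:

```python
import json

# Serialize: a native object becomes a JSON string suitable for transmission
payload = {"user": "jdoe", "roles": ["admin", "auditor"], "active": True}
wire_format = json.dumps(payload)

# Deserialize: the receiving side turns the string back into a native object
received = json.loads(wire_format)
assert received == payload  # the structure survives the round trip
print(received["roles"])
```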

 

How to parse a JSON file

Parsing a JSON file involves transforming JSON data from a textual format into a structured format that can be manipulated within a programming environment. Modern programming languages provide built-in methods or libraries for parsing JSON data so you can easily integrate and manipulate data effectively. Once parsed, JSON data can be represented as objects or arrays, allowing operations like sorting or mapping.

 

Parsing JSON in JavaScript

Most people use the JSON.parse() method for converting string form JSON data into JavaScript objects since it can handle simple and complex objects. Additionally, you may choose to implement the reviver function to manage custom data conversions.

 

Parsing JSON in PHP

PHP provides the json_decode function so you can translate JSON strings into arrays or objects. Additionally, PHP provides functions that validate the JSON syntax to prevent exceptions that could interrupt execution.

 

Parsing JSON in Python

Parsing JSON in Python typically means converting JSON strings into Python dictionaries with the json module. This module provides essential functions like loads() for strings and load() for file objects, which are helpful for managing JSON-formatted API data.
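
A brief sketch of both functions; the file name used here is just an example:

```python
import json

# loads(): parse a JSON string, e.g. the body of an API response
data = json.loads('{"status": "ok", "count": 3}')
print(data["count"])

# load(): parse JSON directly from a file object
with open("response.json") as fh:  # example file name
    payload = json.load(fh)
print(type(payload))               # typically dict or list
```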

 

Parsing JSON in Java

Developers typically use one of the following libraries to parse JSON in Java:

  • Jackson: efficient for handling large files and comes with an extensive feature set
  • Gson: minimal configuration and setup but slower for large datasets
  • json: built-in package providing a set of classes and methods

 

JSON Logging: Best Practices

Log files often have complex, unstructured text-based formatting. When you convert them to JSON, you can store and search your logs more easily. Over time, JSON has become a standard log format because it creates a structured database that allows you to extract the fields that matter and normalize them against other logs that your environment generates. Additionally, as an application's log data evolves, JSON's flexibility makes it easier to add or remove fields. Since many programming languages either include structured JSON logging in their standard libraries or offer it through third-party libraries, adopting JSON logging is usually straightforward.

Log from the Start

Making sure that your application generates logs is critical from the very beginning. Logs enable you to debug the application or detect security vulnerabilities. By inserting the JSON logs from the start, you make your testing easier and build security monitoring into the application.

Configure Dependencies

If your dependencies can also generate JSON logs, consider configuring them to do so, because the structured format makes parsing and analyzing logs such as database logs easier.

Format the Schema

Since your JSON logs should be readable and parseable, you want to keep them as compact and streamlined as possible. Some best practices include:

  • Focusing on objects that need to be read
  • Flattening structures by concatenating keys with a separator (see the sketch after this list)
  • Using a uniform data type in each field
  • Parsing exception stack traces into attribute hierarchies
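
For instance, here is a minimal sketch of flattening a nested structure by concatenating keys with a separator; the field names are made up for illustration:

```python
def flatten(obj, parent_key="", sep="."):
    """Flatten nested dicts into a single level, joining keys with `sep`."""
    flat = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

event = {"http": {"request": {"method": "GET"}, "status": 200}, "user": "jdoe"}
print(flatten(event))
# {'http.request.method': 'GET', 'http.status': 200, 'user': 'jdoe'}
```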

Incorporate Context

JSON enables you to include information about what you're logging for insight into an event's immediate context. Some context that helps correlate issues across your IT environment includes (see the example after this list):

  • User identifiers
  • Session identifiers
  • Error messages
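
A hedged example of what a context-rich JSON log line might look like, emitted with Python's standard logging module; the field names are illustrative, not a required schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object with contextual fields."""
    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            # contextual fields passed via the `extra` argument
            "user_id": getattr(record, "user_id", None),
            "session_id": getattr(record, "session_id", None),
        }
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("payment failed", extra={"user_id": "u-123", "session_id": "s-456"})
```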

 

Graylog: Correlating and Analyzing Logs for Operations and Security

 

With Graylog’s parsing JSON functions, you can parse out useful information, like destination address, response bytes, and other data that helps monitor security incidents or answer IT questions. After extracting the data you want, you can use the Graylog Extended Log Format (GELF) to normalize and structure all log data. Graylog’s purpose-built solution provides lightning-fast search capabilities and flexible integrations that allow your team to collaborate more efficiently.

Graylog Operations provides a cost-efficient solution for IT ops so that organizations can implement robust infrastructure monitoring while staying within budget. With our solution, IT ops can analyze historical data regularly to identify potential slowdowns or system failures while creating alerts that help anticipate issues.

With Graylog’s security analytics and anomaly detection capabilities, you get the cybersecurity platform you need without the complexity that makes your team’s job harder. With our powerful, lightning-fast features and intuitive user interface, you can lower your labor costs while reducing alert fatigue and getting the answers you need – quickly.

 


How I used Graylog to Fix my Internet Connection

In today’s digital age, the internet has become an integral part of our daily lives. From working remotely to streaming movies, we rely on the internet for almost everything. However, slow internet speeds can be frustrating and can significantly affect our productivity and entertainment. Despite advancements in technology, many people continue to face challenges with their internet speeds, hindering their ability to fully utilize the benefits of the internet. In this blog, we will explore how Dan McDowell, Professional Services Engineer decided to take matters into his own hands and get the data over time to present to his ISP.

Speedtest-Overview

 

Over the course of a few months, I noticed slower and slower internet connectivity. Complaints from neighbors (we are all on the same ISP) led me to take some action. A few phone calls with "mixed" results were not good enough for me, so I knew what I needed: metrics!

Why Metrics?

Showing data is, without a doubt, one of the most powerful ways to prove a statement. How often do you hear one of the following when you call in for support:

  • Did you unplug it and plug it back in?
  • It’s probably an issue with your router
  • Oh, wireless must be to blame
  • Test it directly connected to your computer!
  • Nothing is wrong on our end, must be yours…

In my scenario I was able to prove without a doubt that this wasn't a "me" problem. Using data I gathered by running this script every 30 minutes over a few weeks' time, I was able to prove:

  • This wasn’t an issue with my router
    • There was consistent connectivity slowness at the same times every single day of the week, and outside of those times my connectivity was near the offered maximums.
  • Something was wrong on their end
    • Clearly, they were not spec’d to handle the increase in traffic when people stop working and start streaming
    • I used their OWN speed test server for all my testing. It was only one hop away.
    • This was all the proof I needed:
  • End Result?
    • I sent in a few screenshots of my dashboards, highlighting the clear spikes during peak usage periods. I received a phone call not even 10 minutes later from the ISP. They replaced our local OLT and increased the pipe to their co-lo.
      What a massive increase in average performance!

Ookla Speedtest has a CLI tool?!

Yup. This can be configured to use the same speedtest server (my local ISP runs one) each run, meaning results are valid and repeatable. Best of all, it can output JSON, which I can convert to GELF with ease! In short, I set up a cron job to run my speed test script every 30 minutes on my Graylog server and output the results, converting the JSON message into GELF, which NetCat sends to my GELF input.

PORT 8080 must be open outbound!
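
The blog's actual approach is the shell script and netcat shown below, but here is a minimal Python sketch of the same flow for illustration, assuming the Ookla speedtest CLI is installed and a Graylog GELF TCP input is listening on localhost:12201 (the JSON field paths reflect the Ookla CLI output as I understand it):

```python
import json
import socket
import subprocess

# Run the Ookla CLI and capture its JSON output (assumes `speedtest` is on PATH)
result = subprocess.run(["speedtest", "-f", "json"],
                        capture_output=True, text=True, check=True)
test = json.loads(result.stdout)

# Build a GELF message from a few fields of interest
gelf = {
    "version": "1.1",
    "host": socket.gethostname(),
    "short_message": "speedtest result",
    "_download_bandwidth": test["download"]["bandwidth"],
    "_upload_bandwidth": test["upload"]["bandwidth"],
    "_ping_latency": test["ping"]["latency"],
}

# Send to a GELF TCP input; GELF TCP messages are null-terminated
with socket.create_connection(("localhost", 12201)) as sock:
    sock.sendall(json.dumps(gelf).encode("utf-8") + b"\x00")
```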

How can I even?

Prerequisites

1. Install netcat, speedtest and gron.

Debian/Ubuntu

curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash
sudo apt install speedtest gron ncat

RHEL/CentOS/Rocky/Alpine

wget https://download-ib01.fedoraproject.org/pub/fedora/linux/releases/37/Everything/x86_64/os/Packages/g/gron-0.7.1-4.fc37.x86_64.rpm
sudo dnf install gron-0.7.1-4.fc37.x86_64.rpm
curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.rpm.sh | sudo bash
sudo dnf install speedtest netcat

 

2. You also need a functional Graylog instance with a GELF input running.

3. Grab my speedtest script and Graylog content pack (contains a dashboard, route rule and a stream):

    wget https://raw.githubusercontent.com/graylog-labs/graylog-playground/main/Speed%20Test/speedtest.sh

4. Move the script to a common location and make it executable:

    mkdir /scripts
    mv speedtest.sh /scripts/
    chmod +x /scripts/speedtest.sh

Getting Started

  1. Login to your Graylog instance
  2. Navigate to System → Content Packs
  3. Click Upload.
  4. Browse to the downloaded location of the Graylog content pack and upload it to your instance
  5. Install the content pack
  6. This will install a stream, pipeline, pipeline rule (routing to the stream) and dashboard
  7. Test out the script!
    1. ssh / console to your linux system hosting Graylog/docker
    2. Manually execute the script:
      /scripts/speedtest.sh localhost 12201
      Script details: <path to script> <ip/dns/hostname> <port>
  8. Check out the data in your Graylog
    1. Navigate to Streams → Speed Tests
    2. Useful data appears!
    3. Navigate to Dashboards → ISP Speed Test and check out the data!
  9. Manually execute the script as much as you like. More data will appear the more you run it.

Automate the Script!

This is how I got the data to convince my ISP that something was actually wrong. Set up a CRON job that runs every 30 minutes and within a few days you should see some time-related changes.

  1. ssh or console to your linux system hosting the script / Graylog
  2. Create a crontab to run the script every 30 minutes
    1. Create the crontab (this will be for the currently logged in user OR root if sudo su was used):

crontab -e

    2. Set the script to run every 30 minutes (change as you like):

*/30 * * * * /scripts/speedtest.sh localhost 12201

  3. That's it! As long as the user the crontab was made for has permissions, the script will run every 30 minutes and the data will go to Graylog. The dashboard will continue to populate for you automatically.

Bonus Concept – Monitor Your Sites' WAN Connection(s)

This same script could be used to monitor WAN connections at different sites. Without any extra fields, we could use the interface_externalIp or source fields provided by the speedtest cli/sending host to filter by site location, add a pipeline rule that adds a field based on a lookup table, or add a single field to the speedtest GELF message (change the script slightly) to provide that in the original message, etc. Use my dashboard to make a new dashboard with tabs for per-site views and a summary page! The possibilities are endless.

Most of all, go have fun!


Why API Discovery Is Critical to Security

For Star Trek fans, space may be the final frontier, but in security, discovering Application Programming Interfaces (APIs) could be the technology equivalent. In the iconic episode “The Trouble with Tribbles,” the legendary starship Enterprise discovers a space station that becomes overwhelmed by little fluffy, purring, rapidly reproducing creatures called “tribbles.” In a modern IT department, APIs can be viewed as the digital tribble overwhelming security teams.

 

As organizations build out their application ecosystems, the number of APIs integrated into their IT environments continues to expand. Organizations and security teams can become overwhelmed by the sheer number of these software “tribbles,” as undiscovered and unmanaged APIs create security blindspots.

 

API discovery is a critical component of any security program because every unknown or unmanaged API expands the organization's attack surface.

 

What is API discovery?

API discovery is a manual or automated process that identifies, documents, and catalogs an organization’s APIs so that security teams can monitor the application-to-application data transfers. To manage all APIs that the organization integrated into its ecosystem, organizations need a comprehensive inventory that includes:

  • Internal APIs: interfaces between a company’s backend information and application functionality
  • External APIs: interfaces exposed over the internet to non-organizational stakeholders, like external developers, third-party vendors, and customers

 

API discovery enables organizations to identify and manage the following:

  • Shadow (“Rogue”) APIs: unchecked or unsupervised APIs
  • Deprecated (“Zombie”) APIs: unused yet operational APIs without the necessary security updates

 

What risks do undocumented and unmanaged APIs pose?

Threat actors can exploit vulnerabilities in these shadow and deprecated APIs, especially when the development and security teams have no way to monitor and secure them.

 

Unmanaged APIs can expose sensitive data, including information about:

  • Software interface: the two endpoints sharing data
  • Technical specifications: the way the endpoints share data
  • Function calls: verbs (GET, DELETE) and nouns (Data, Access) that indicate business logic

 

Why is API discovery important?

Discovering all your organization’s APIs enhances security by incorporating them into:

  • Risk assessments: enabling API vulnerability identification, prioritization, and remediation
  • Compliance: mitigate risks arising from accidental sensitive data exposures that lead to compliance violations, fines, and penalties
  • Vendor risk management: visibility into third-party security practices by understanding the services, applications, and environments that they can impact
  • Incident response: faster detection, investigation, and response times by understanding potential entry points, impacted services, and data leak paths
  • Policy enforcement: ensuring all internal and external APIs follow the company’s security policies and best practices
  • Training and awareness: providing appropriate educational resources for developers and IT staff

 

Beyond the security use case, API discovery provides these additional benefits:

  • Faster integrations by understanding available endpoints, methods, and data formats
  • Microservice architecture management by tracking services, health status, and interdependencies
  • Enhanced product innovation and value by understanding API capabilities and limitations
  • Increased revenue by understanding API usage

 

Using automation for API discovery

While developers can manually discover APIs, the process is expensive, inefficient, and risky. Manual API discovery processes are limited because they are:

  • Time-consuming: With the average organization integrating over 9,000 known APIs, manual processes for identifying unknown or unmanaged APIs can be overwhelming, even in a smaller environment.
  • Error-prone: Discovering all APIs, including undocumented ones and those embedded in code, can lead to incomplete discovery, outdated information, or incorrect documentation.
  • Resource-intensive: Manual discovery processes require manual inventory maintenance.

 

Automated tools make API discovery more comprehensive while reducing overall costs. Automated API discovery tools provide the following benefits:

  • Efficiency: Scanners can quickly identify APIs, enabling developers to focus on more important work.
  • Accurate, comprehensive inventory: API discovery tools can identify embedded and undocumented APIs, enhancing security and documentation.
  • Cost savings: Automation takes less time to scan for updated information, reducing maintenance costs.

 

 

What to look for in an API discovery tool

While different automated tools can help you discover the APIs across your environment, you should know the capabilities that you need and what to look for.

Continuous API Discovery

Developers can deliver new builds multiple times a day, continuously changing the API landscape and risk profile. For an accurate inventory and comprehensive visibility, you should look for a solution that:

  • Scans all API traffic at runtime
  • Categorizes API calls
  • Sorts incoming traffic into domain buckets

For example, when discovering APIs by domain, the solution includes cases where:

  • Domains are missing
  • Public or Private IP addresses are used

With the ability to identify shadow and deprecated APIs, the solution should give you a way to add domains to the:

  • Monitoring list so you can start tracking them in the system
  • Prohibited list so that the domain is never used

 

 

Vulnerability Identification

An API discovery solution that analyzes all traffic can also identify potential security vulnerabilities. When choosing a solution, you should consider whether it contains the following capabilities:

  • Captures unfiltered API request and response detail
  • Enhances details with runtime analysis
  • Creates an accessible datastore for attack detection
  • Identifies common threats and API failures aligned to OWASP and MITRE guidance
  • Provides automated remediation tips with actionable solutions that enable teams to optimize critical metrics like Mean Time to Response (MTTR)

Risk Assessment and Scoring

Every identified API and vulnerability increases the organization’s risk. To appropriately mitigate risk arising from previously unidentified and unmanaged APIs, the solution should provide automated risk assessment and scoring. With visibility into the type of API and the high-risk areas that should be prioritized, Security and DevOps teams can focus on the most risky APIs first.

 

Graylog API Security: Continuous, Real-Time API Discovery

Graylog API Security is continuous API security, scanning all API traffic at runtime for active attacks and threats. Mapped to security and quality rules, Graylog API Security captures complete request and response details, creating a readily accessible datastore for attack detection, fast triage, and threat intelligence. With visibility inside the perimeter, organizations can detect attack traffic from valid users before it reaches their applications.

Graylog API Security captures details to immediately identify valid traffic from malicious actions, adding active API intelligence to your security stack. Think of it as a "security analyst in-a-box," automating API security by detecting and alerting on zero-day attacks and threats. Our pre-configured signatures identify common threats and API failures and integrate with communication tools like Slack, Teams, Gchat, JIRA or via webhooks.


FERC and NERC: Cyber Security Monitoring for The Energy Sector

As cyber threats targeting critical infrastructure continue to evolve, the energy sector remains a prime target for malicious actors. Protecting the electric grid requires a strong regulatory framework and robust cybersecurity monitoring practices. In the United States, the Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corporation (NERC) play key roles in safeguarding the power system against cyber risks.

 

Compliance with the NERC Critical Infrastructure Protection (NERC CIP) standards provides a baseline for mitigating security risk, but organizations should implement security technologies that help them streamline these processes.

Who are FERC and NERC?

The Federal Energy Regulatory Commission (FERC) is the governmental agency that oversees the power grid's reliability. Since the Energy Policy Act of 2005 granted FERC these powers, smart technologies have spread rapidly across the energy industry. That growth led to the Energy Independence and Security Act of 2007 (EISA), which directed FERC and the National Institute of Standards and Technology (NIST) to coordinate cybersecurity reliability standards that protect the industry.

 

However, to develop these reliability standards, FERC certified the North American Electric Reliability Corporation (NERC). Currently, NERC has thirteen published and enforceable Critical Infrastructure Protection (CIP) standards plus one more awaiting approval.

What are the NERC CIP requirements?

The cybersecurity Reliability Standards are broken out across nine documents, each detailing the different requirements and controls for compliance.

 

CIP-002: BES Cyber System Categorization

This CIP creates "bright-line" criteria for categorizing BES Cyber Systems based on the impact an outage would cause. The publication separates BES Cyber Systems into three general categories:

  • High Impact
  • Medium Impact
  • Low Impact

 

CIP-003-8: Security Management Controls

This publication, with its most recent iteration being enforceable in April 2026, requires Responsible Entities to create policies, procedures, and processes for high or medium impact BES Cyber Systems, including:

  • Cyber security awareness: training delivered every 15 calendar months
  • Physical security controls: protections for assets, locations within an asset containing low impact BES systems, and Cyber Assets
  • Electronic access controls: controls that limit inbound and outbound electronic access for assets containing low impact BES Cyber Systems
  • Cyber security incident response: identification, classification, and response to Cyber Security incidents, including establishing roles and responsibilities for testing (every 36 months) and handling incidents, and updating the Cyber Security Incident response plan within 180 days of a reportable incident
  • Transient cyber asset and removable media malicious code risk mitigation: Plans for implementing, maintaining, and monitoring anti-virus, application allowlists, and other methods to detect malicious code
  • Vendor electronic remote access security controls: processes for remote access to mitigate risks, including ways to determine and disable remote access and detect known or suspected malicious communications from vendor remote access

 

CIP-004-7: Personnel & Training

Every Responsible Entity needs to have one or more documented processes and provide evidence to demonstrate implementation of:

  • Security awareness training
  • Personnel risk assessments prior to granting authorized electronic or unescorted physical access
  • Access management programs
  • Access revocation programs
  • Access management, including provisioning, authorizing, and terminating access

CIP-005-7: Electronic Security Perimeter(s)

To mitigate risks, Responsible Entities need documented processes and evidence of controls that permit only known and controlled communications, including:

  • Connection to network using a routable protocol protected by an Electronic Security Perimeter (ESP)
  • Permitting and documenting the reasoning for necessary communications while denying all other communications
  • Limiting network accessibility to management Interfaces
  • Performing authentication when allowing remote access through dial-up connectivity
  • Monitoring to detect known or suspected malicious communications
  • Implementation of controls, like encryption or physical access restrictions, to protect data confidentiality and integrity
  • Remote access management capabilities, multi-factor authentication and multiple methods for determining active vendor remote access
  • Multiple methods for disabling active vendor remote access
  • One or more methods to determine authenticated vendor-initiated remote access, terminating these remote connections, and controlling ability to reconnect

 

Most of these requirements fall under the umbrella of network security monitoring. Once organizations define baselines for normal network traffic, they can implement detections that alert their security teams to potential incidents.
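For example, a team might build a simple per-host traffic baseline and alert on large deviations. The sketch below is illustrative only (the flow-record fields such as src_host and bytes are hypothetical), but it shows the basic baseline-then-detect pattern:

```python
# Hypothetical sketch: derive a per-host traffic baseline from historical
# flow records, then flag new records that deviate far from that baseline.
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(flow_records):
    """Map each host to (mean, stdev) of bytes transferred per interval."""
    per_host = defaultdict(list)
    for rec in flow_records:
        per_host[rec["src_host"]].append(rec["bytes"])
    return {host: (mean(v), pstdev(v)) for host, v in per_host.items()}

def detect_anomalies(baseline, new_records, threshold=3.0):
    """Return records whose volume exceeds mean + threshold * stdev."""
    alerts = []
    for rec in new_records:
        avg, sd = baseline.get(rec["src_host"], (0.0, 0.0))
        if sd and rec["bytes"] > avg + threshold * sd:
            alerts.append(rec)
    return alerts

history = [{"src_host": "scada-hmi-01", "bytes": 1_000_000 + 50_000 * i} for i in range(24)]
baseline = build_baseline(history)
live = [{"src_host": "scada-hmi-01", "bytes": 9_500_000}]
print(detect_anomalies(baseline, live))
```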

CIP-006-6: Physical Security of BES Cyber Systems

To prove management of physical access to these systems, Responsible Entities need documented processes and evidence that include:

  • Physical security plan with defined operation or procedural controls for restricting physical access
  • Controls for managing authorized unescorted access
  • Monitoring for unauthorized physical access
  • Alarms or alerts for responding to detected unauthorized access
  • Logging entry of individuals with authorized unescorted physical access, with logs retained for at least 90 days
  • Visitor control program that includes continuous escort for visitors, logging visitors, and retaining visitor logs
  • Maintenance and testing programs for the physical access control system

 

Many organizations use technologies to help manage physical security, like badges or smart alarms. By incorporating these technologies into the overarching cybersecurity monitoring, Responsible Entities can correlate activities across the physical and digital domains.

Example: security card access records in buildings showing entry and exit times.

 

By tracking both physical access and digital access to BES Cyber Systems, Responsible Entities can improve their overarching security posture, especially given the interconnection between physical and digital access to systems.
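One way to act on that interconnection is to correlate badge logs with system logins. The following is a minimal sketch, assuming hypothetical badge and login event records, that flags console logins with no matching badge-in within a time window:

```python
# Hypothetical sketch: flag interactive logins to a BES Cyber System that are
# not preceded by a badge-in event for the same person within a window.
from datetime import datetime, timedelta

badge_events = [
    {"user": "jlee", "time": datetime(2025, 3, 4, 7, 55)},
]
login_events = [
    {"user": "jlee", "host": "ems-console-02", "time": datetime(2025, 3, 4, 8, 5)},
    {"user": "rkim", "host": "ems-console-02", "time": datetime(2025, 3, 4, 8, 10)},
]

def logins_without_badge(badges, logins, window=timedelta(hours=2)):
    """Return console logins with no badge-in for the same user within the window."""
    suspicious = []
    for login in logins:
        matched = any(
            b["user"] == login["user"]
            and timedelta(0) <= login["time"] - b["time"] <= window
            for b in badges
        )
        if not matched:
            suspicious.append(login)
    return suspicious

for event in logins_without_badge(badge_events, login_events):
    print(f"Review: {event['user']} logged into {event['host']} with no badge entry")
```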

CIP-007-6: System Security Management

To prove that they have the technical, operational, and procedural system security management capabilities, Responsible Entities need documented processes and evidence that include:

  • System hardening: disabling or preventing unnecessary remote access, protection against physical input/output ports used for network connectivity, risk mitigation to prevent CPU or memory vulnerabilities
  • Patch management process: evaluating security patch applicability at least once every 35 calendar days and tracking, evaluating, and installing security patches
  • Malicious code prevention: methods for deterring, detecting, or preventing malicious code and mitigating the threat of detected malicious code
  • Monitoring for security events: logging security events per system capabilities, generating security event alerts, retaining security event logs, and reviewing summaries or samplings of logged security events
  • System access controls: authentication enforcement methods, identification and inventory of all known default or generic accounts, identification of people with authorized access to shared accounts, changing default passwords, technical or procedural controls for password-only authentication (including forced changes at least once every 15 calendar months), and limiting the number of unsuccessful authentication attempts or generating alerts after a threshold of unsuccessful attempts

 

Having a robust threat detection and incident response (TDIR) solution enables Responsible Entities to leverage user and entity behavior analytics (UEBA) with the rest of their log data so they can handle security functions like:

  • Privileged access management (PAM)
  • Password policy compliance
  • Abnormal privilege escalation
  • Time spent accessing a resource
  • Brute force attack detection
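As one illustration, brute force detection often boils down to counting failed authentication attempts in a sliding window, echoing the CIP-007 requirement to limit attempts or alert after a threshold. The sketch below uses hypothetical event fields and thresholds:

```python
# Hypothetical sketch: alert when a source exceeds a threshold of failed
# authentication attempts inside a sliding time window.
from collections import defaultdict, deque
from datetime import datetime, timedelta

class FailedAuthDetector:
    def __init__(self, threshold=5, window=timedelta(minutes=10)):
        self.threshold = threshold
        self.window = window
        self.attempts = defaultdict(deque)  # (user, source) -> timestamps

    def record_failure(self, user, source, when):
        """Record a failed login; return True if the threshold is crossed."""
        q = self.attempts[(user, source)]
        q.append(when)
        while q and when - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

detector = FailedAuthDetector()
start = datetime(2025, 3, 4, 2, 0)
for i in range(6):
    if detector.record_failure("svc-hmi", "10.0.8.23", start + timedelta(minutes=i)):
        print("Alert: possible brute-force attempt against svc-hmi from 10.0.8.23")
        break
```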

 

CIP-008-6: Incident Reporting and Response Planning

To mitigate risk to reliable operation, Responsible Entities need documented incident response plans and evidence that include:

  • Processes for identifying, classifying, and responding to security incidents
  • Roles and responsibilities for incident response groups or individuals
  • Incident handling procedures
  • Testing the incident response plan at least once every 15 calendar months
  • Retaining records for reportable and other security incidents
  • Reviewing the plan, documenting lessons learned, updating the plan based on those lessons, and notifying people of changes

 

Security analytics enables Responsible Entities to enhance their incident detection and response capabilities. By building detections around MITRE ATT&CK tactics, techniques, and procedures (TTPs), security teams can connect the activities occurring in their environments with real-world activities to investigate an attacker’s path faster. Further, with high-fidelity Sigma rule detections aligned to the ATT&CK framework, Responsible Entities improve their incident response capabilities.

 

In the aftermath of an incident or incident response test, organizations need to develop reports that enable them to identify lessons learned. These include highlighting:

  • Key findings
  • Actions taken
  • Impact on stakeholders
  • Incident ID
  • Incident summary that includes type, time, duration, and affected systems/data

 

To improve processes, Responsible Entities need to organize the different pieces of evidence into an incident response report that showcases the timeline of events.

 

Further, they need to capture crucial information about the incident, including:

  • Nature of threat
  • Business impact
  • Immediate actions taken
  • When/how incident occurred
  • Who/what was affected
  • Overall scope

 

CIP-009-6: Recovery Plans for BES Cyber Systems

To support continued stability, operability, and reliability, Responsible Entities need documented recovery plans with processes and evidence for:

  • Activation of recovery plan
  • Responder roles and responsibilities
  • Backup and storage of information required for recovery and verification of backups
  • Testing the recovery plan at least once every 15 calendar months
  • Reviewing the plan, documenting lessons learned, updating the plan based on those lessons, and notifying people of changes

 

CIP-010-4: Configuration Change Management and Vulnerability Assessments

To prevent and detect unauthorized changes, Responsible Entities need documentation and evidence of configuration change management and vulnerability assessment that includes:

  • Authorization of changes that can alter behavior of one or more cybersecurity controls
  • Testing changes prior to deploying them in a production environment
  • Verifying identity and integrity of operating systems, firmware, software, or software patches prior to installation
  • Monitoring at least once every 35 calendar days for unauthorized changes that can alter the behavior of one or more cybersecurity controls, including at least one control for configurations affecting: network accessibility; CPU and memory; installation, removal, or updates of operating systems, firmware, software, and cybersecurity patches; malicious code protection; security event logging or alerting; authentication methods; and enabled or disabled account status
  • Engaging in vulnerability assessment at least once every 15 calendar months
  • Performing an active vulnerability assessment in a test environment and documenting the results at least once every 36 calendar months
  • Performing vulnerability assessments for new systems prior to implementation
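Because several of these obligations recur on fixed cadences (35 calendar days, 15 calendar months, 36 calendar months), some teams track them programmatically. The sketch below is a rough illustration using approximate calendar-month arithmetic; the obligation names and history are hypothetical:

```python
# Hypothetical sketch: check whether recurring CIP-010 assessment obligations
# are due, using simple calendar-month arithmetic.
from datetime import date

def months_between(earlier: date, later: date) -> int:
    """Whole calendar months elapsed between two dates."""
    months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
    if later.day < earlier.day:
        months -= 1
    return months

OBLIGATIONS = {
    "paper vulnerability assessment": 15,              # calendar months
    "active assessment in test environment": 36,
}

def overdue_items(last_performed: dict, today: date):
    """Yield obligations whose cadence has elapsed since they were last performed."""
    for name, cadence in OBLIGATIONS.items():
        last = last_performed.get(name)
        if last is None or months_between(last, today) >= cadence:
            yield name

history = {"paper vulnerability assessment": date(2023, 11, 1)}
for item in overdue_items(history, date(2025, 3, 4)):
    print(f"Due: {item}")
```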

 

CIP-011-3: Information Protection

To prevent unauthorized access, Responsible Entities need documented information protection processes and evidence of:

  • Methods for identifying, protecting, and securely handling BES Cyber System Information (BCSI)
  • Methods for preventing the unauthorized retrieval of BCSI prior to system disposal

CIP-012-1: Communications between Control Centers

To protect the confidentiality, integrity, and availability of assessment and monitoring data transmitted between Control Centers, Responsible Entities need documented processes for and evidence of:

  • Risk mitigation for unauthorized disclosure and modification or loss of availability of data
  • Identification of risk mitigation methods
  • Identification of where methods are implemented
  • Assignment of responsibilities when different Responsible Entities own or operate Control Centers

 

To mitigate data exfiltration risks, Responsible Entities need to aggregate, correlate, and analyze log data across:

  • Network traffic logs
  • Antivirus logs
  • UEBA solutions

 

With visibility into abnormal data downloads, they can more effectively monitor communications between control centers.
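A simple way to combine those sources is to escalate only when more than one of them flags the same host in the same period. The following sketch is illustrative; the event format and source names are assumptions:

```python
# Hypothetical sketch: correlate signals from several log sources and escalate
# when at least two sources flag the same host.
from collections import defaultdict

signals = [
    {"source": "network", "host": "cc-gw-01", "signal": "unusually large outbound transfer"},
    {"source": "ueba", "host": "cc-gw-01", "signal": "operator account active outside shift"},
    {"source": "antivirus", "host": "hmi-12", "signal": "quarantined file"},
]

def correlate(events, min_sources=2):
    """Group events by host and escalate hosts flagged by multiple sources."""
    by_host = defaultdict(set)
    details = defaultdict(list)
    for e in events:
        by_host[e["host"]].add(e["source"])
        details[e["host"]].append(f'{e["source"]}: {e["signal"]}')
    return {h: details[h] for h, srcs in by_host.items() if len(srcs) >= min_sources}

for host, reasons in correlate(signals).items():
    print(f"Escalate {host}: " + "; ".join(reasons))
```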

 

CIP-013-2: Supply Chain Risk Management

To mitigate supply chain risks, Responsible Entities need documented security controls and evidence of:

  • Procurement processes for identifying and assessing security risks related to installing vendor equipment and software and switching vendors
  • Receiving notifications about vendor-identified incidents related to products or services
  • Coordinating responses to vendor-identified incidents related to products or services
  • Notifying vendors when no longer granting remote or onsite access
  • Vendor disclosure of known vulnerabilities related to products or services
  • Verifying software and patch integrity and authenticity
  • Coordination controls for vendor-initiated remote access
  • Reviewing and obtaining approval for the supply chain risk management plan

 

CIP-015-1: Internal Network Security Monitoring

While this standard is currently awaiting approval by the NERC Board of Trustees, Responsible Entities should consider preparing for publication and enforcement with documented processes and evidence of monitoring internal networks’ security, including the implementation of:

  • Network data feeds using a risk-based rationale for monitoring network activity, including connections, devices, and network communications
  • Detections for anomalous network activity
  • Evaluating anomalous network activity
  • Retaining internal network security monitoring data
  • Protecting internal network security monitoring data

 

Graylog Security: Enabling the Energy Sector to Comply with NERC CIP

Using Graylog Security, you can rapidly mature your TDIR capabilities without the complexity and cost of traditional Security Information and Event Management (SIEM) technology. Graylog Security’s Illuminate bundles include detection rulesets, like Sigma detections, enabling you to uplevel your security alerting, incident response, and threat hunting capabilities with correlations to ATT&CK tactics, techniques, and procedures (TTPs).

By leveraging our cloud-native capabilities and out-of-the-box content, you gain immediate value from your logs. Our anomaly detection ML improves over time without manual tuning, adapting rapidly to new data sets, organizational priorities, and custom use cases so that you can automate key user and entity access monitoring.

With our intuitive user interface, you can rapidly investigate alerts. Our lightning-fast search capabilities enable you to search terabytes of data in milliseconds, reducing dwell times and shrinking investigations by hours, days, and weeks.

To learn how Graylog Security can help you implement robust threat detection and response, contact us today.

 

About Graylog 
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

Security Misconfigurations: A Deep Dive

Managing configurations in a complex environment can be like playing a game of digital Jenga. Turning off one port to protect an application can undermine the service of a connected device. Writing an overly conservative firewall configuration can prevent remote workforce members from accessing an application that’s critical to getting their work done. In the business world that runs on Software-as-a-Service (SaaS) applications and the Application Programming Interfaces (APIs) that allow them to communicate, a lot of your security is based on the settings you use and the code that you write.

 

Security misconfigurations keep creeping up the OWASP Top 10 Lists for applications, APIs, and mobile devices because they are security weaknesses that can be difficult to detect until an attacker uses them against you. With insight into what security misconfigurations are and how to mitigate risk, you can create the programs and processes that help you protect your organization.

What are Security Misconfigurations?

Security misconfigurations are insecure settings, often defaults left unchanged during and after system deployment. They can occur anywhere within the organization’s environment because they can arise from:

  • Operating systems
  • Network devices and their settings
  • Web servers
  • Databases
  • Applications

 

Organizations typically implement hardening across their environment by changing settings to limit where, how, when, and with whom technologies communicate. Some examples of security misconfigurations may include failing to:

  • Disable or uninstall unnecessary features, like ports, services, accounts, API HTTP verbs, API logging features
  • Change default passwords
  • Limit the information that error messages send to users
  • Update operating systems, software, and APIs with security patches
  • Set secure values for application servers, application frameworks, libraries, and databases
  • Use Transport Layer Security (TLS) for APIs
  • Restrict Cross-Origin resource sharing (CORS)
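Some of these misconfigurations can be caught with simple automated checks. The sketch below runs a few static checks (TLS, wildcard CORS, version-leaking headers, verbose errors) against hypothetical captured endpoint metadata rather than any real traffic:

```python
# Hypothetical sketch: static checks against captured endpoint metadata,
# mirroring items from the list above. Endpoint records are illustrative.
endpoints = [
    {
        "url": "http://api.example.internal/v1/orders",
        "headers": {"Access-Control-Allow-Origin": "*", "Server": "Apache/2.4.49 (Unix)"},
        "error_body": "java.sql.SQLException at OrderDao.find(OrderDao.java:88)",
    },
]

def check_endpoint(ep):
    """Return a list of likely misconfigurations for one endpoint record."""
    findings = []
    if ep["url"].startswith("http://"):
        findings.append("TLS not enforced")
    if ep["headers"].get("Access-Control-Allow-Origin") == "*":
        findings.append("CORS allows any origin")
    if "/" in ep["headers"].get("Server", ""):
        findings.append("Server header leaks version information")
    if "Exception" in ep.get("error_body", ""):
        findings.append("Error responses expose stack traces")
    return findings

for ep in endpoints:
    for finding in check_endpoint(ep):
        print(f'{ep["url"]}: {finding}')
```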

 

Security Misconfigurations: Why Do They Happen?

Today’s environments consist of complex, interconnected technologies. While all the different applications and devices make business easier, they make security configuration management far more challenging.

 

Typical reasons that security misconfigurations happen include:

  • Complexity: Highly interconnected systems can make identifying and implementing all possible security configurations difficult.
  • Patches: Updating software and systems can have a domino effect across all interconnected technologies that can change a configuration’s security.
  • Hardware upgrades: Adding new servers or moving to the cloud can change configurations at the hardware and software levels.
  • Troubleshooting: Fixing a network, application, or operating system issue to maintain service availability may impact other configurations.
  • Unauthorized changes: Failing to follow change management processes for adding new technologies or fixing issues can impact interconnections, like users connecting corporate email to authorize API access for an unsanctioned web application.
  • Poor documentation: Failure to document baselines and configuration changes can lead to a lack of visibility across the environment.

Common Types of Security Misconfiguration Vulnerabilities

To protect your systems against cyber attacks, you should understand what some common security misconfigurations are and what they look like.

  • Improperly Configured Databases: overly permissive access rights or lack of authentication
  • Unsecured Cloud Storage: lack of encryption or weak access controls
  • Default or Weak Passwords: failure to change passwords or poor password hygiene leading to credential-based attacks
  • Misconfigured Firewalls or Network Settings: poor network segmentation, permissive firewall settings, open ports left unsecured
  • Outdated Software or Firmware: failing to install software, firmware, or API security updates or patches that fix bugs
  • Inactive Pages: failure to include noopener or noreferrer attributes in a website or web application
  • Unneeded Services/Features: leaving network services available and ports open, like web servers, file share servers, proxy servers, FTP servers, Remote Desktop Protocol (RDP), Virtual Network Computing (VNC), and Secure Shell Protocol (SSH)
  • Inadequate Access Controls: failure to implement and enforce access policies that limit user interaction, like the principle of least privilege for user access, deny-by-default for resources, or lack of API authentication and authorization
  • Unprotected Folders and Files: using predictable, guessable file names and locations that identify critical systems or data
  • Improper Error Messages: API error messages returning data such as stack traces, system information, database structure, or custom signatures

Best Practices for Preventing Security Misconfiguration Vulnerabilities

As you connect more SaaS applications and use more APIs, monitoring for security misconfigurations becomes critical to your security posture.

Implement a hardening process

Hardening is the process of choosing the configurations for your technology stack that limit unauthorized external access and use. For example, many organizations use the CIS Benchmarks that provide configuration recommendations for over twenty-five vendor product families. Organizations in the Defense Industrial Base (DIB) use the Department of Defense (DoD) Security Technical Implementation Guides (STIGs).

 

Your hardening processes should include a change management process that:

  • Sets and documents baselines
  • Identifies changes in the environment
  • Reviews whether changes are authorized
  • Allows, blocks, or rolls back changes as appropriate
  • Updates baselines and documentation to reflect allowed changes
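Baseline comparison is often the mechanical heart of that change management process: snapshot the approved configuration, then diff the current state against it. A minimal sketch, with hypothetical setting names:

```python
# Hypothetical sketch: compare a current configuration snapshot against a
# documented baseline and report drift.
baseline = {
    "ssh.PermitRootLogin": "no",
    "ssh.PasswordAuthentication": "no",
    "tls.MinimumVersion": "1.2",
}
current = {
    "ssh.PermitRootLogin": "yes",   # drifted from baseline
    "ssh.PasswordAuthentication": "no",
    "tls.MinimumVersion": "1.2",
    "telnet.Enabled": "true",       # unexpected new setting
}

def detect_drift(baseline_cfg, current_cfg):
    """Return settings that changed from, or were added beyond, the baseline."""
    drift = {}
    for key, value in current_cfg.items():
        expected = baseline_cfg.get(key)
        if expected is None:
            drift[key] = ("<not in baseline>", value)
        elif value != expected:
            drift[key] = (expected, value)
    return drift

for key, (expected, actual) in detect_drift(baseline, current).items():
    print(f"{key}: expected {expected}, found {actual}")
```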

Implement a vulnerability management and remediation program

Vulnerability scanners can identify common vulnerabilities and exposures (CVEs) on network-connected devices. Your vulnerability management and remediation program should:

  • Define critical assets: know the devices, resources, and users that impact the business the most
  • Assign ownership: identify the people responsible for managing and updating critical assets
  • Identify vulnerabilities: use penetration tests, red teaming, and automated tools, like vulnerability scanners
  • Prioritize vulnerabilities: combine a vulnerability’s severity and exploitability to determine the ones that pose the highest risk to the organization’s business operations
  • Identify and monitor key performance indicators (KPIs): set metrics to determine the program’s effectiveness, including number of assets managed, number of assets scanned per month, frequency of scans, percentage of scanned assets containing vulnerabilities, percentage of vulnerabilities fixed within 30, 60, and 90 days
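The time-to-fix KPIs in that last item are straightforward to compute once discovery and remediation dates are tracked. A small sketch with hypothetical vulnerability records:

```python
# Hypothetical sketch: compute remediation KPIs from vulnerability records
# with discovery and fix dates.
from datetime import date

vulns = [
    {"id": "CVE-2024-1111", "found": date(2025, 1, 5), "fixed": date(2025, 1, 20)},
    {"id": "CVE-2024-2222", "found": date(2025, 1, 5), "fixed": date(2025, 3, 20)},
    {"id": "CVE-2024-3333", "found": date(2025, 1, 5), "fixed": None},
]

def pct_fixed_within(records, days):
    """Percentage of vulnerabilities remediated within the given number of days."""
    if not records:
        return 0.0
    fixed = sum(
        1 for r in records
        if r["fixed"] is not None and (r["fixed"] - r["found"]).days <= days
    )
    return 100.0 * fixed / len(records)

for window in (30, 60, 90):
    print(f"Fixed within {window} days: {pct_fixed_within(vulns, window):.0f}%")
```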

 

Monitor User and Entity Activity

Security misconfigurations often lead to unauthorized access. To mitigate risk, you should implement authentication, authorization, and access best practices that include:

  • Multifactor Authentication: requiring users to provide two or more of the following: something they know (password), something they have (token/smartphone), or something they are (fingerprint or face ID)
  • Role-based access controls (RBAC): granting users the least amount of access to resources based on their job functions
  • Activity baselines: understanding normal user and entity behavior to identify anomalous activity
  • Monitoring: identifying activity spikes like file permission changes, modifications, and deletions across email servers, webmail, removable media, and DNS

 

Implement and monitor API Security

APIs are the way that applications talk to one another, often sharing sensitive data. Many companies struggle to manage the explosion of APIs that their digital transformation strategies created, which leaves security weaknesses that attackers seek to exploit. To mitigate these risks, you should implement a holistic API security monitoring program that includes:

  • Continuously discovering APIs across the environment
  • Scanning all API traffic at runtime
  • Categorizing API calls
  • Sorting API traffic into domain buckets
  • Automatically assessing risk
  • Prioritizing remediation action using context that includes activity and intensity
  • Capturing unfiltered API request and response details
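Putting a few of those steps together, the sketch below buckets captured API calls by domain and applies crude activity/intensity heuristics to prioritize review. The paths, buckets, and scoring weights are all assumptions for illustration:

```python
# Hypothetical sketch: bucket captured API calls by domain and score them with
# simple activity/intensity heuristics.
from collections import defaultdict

calls = [
    {"path": "/v1/payments/refund", "status": 403, "count_last_hour": 240},
    {"path": "/v1/users/export", "status": 200, "count_last_hour": 15},
    {"path": "/health", "status": 200, "count_last_hour": 3000},
]

BUCKETS = {"/v1/payments": "payments", "/v1/users": "identity", "/health": "infrastructure"}

def bucket_for(path):
    """Assign an API path to a domain bucket by prefix."""
    for prefix, bucket in BUCKETS.items():
        if path.startswith(prefix):
            return bucket
    return "uncategorized"

def risk_score(call):
    """Crude score: denied requests and bursts of activity raise the priority."""
    score = 1
    if call["status"] in (401, 403):
        score += 3
    if call["count_last_hour"] > 100:
        score += 2
    return score

by_bucket = defaultdict(list)
for call in calls:
    by_bucket[bucket_for(call["path"])].append((risk_score(call), call["path"]))

for bucket, items in by_bucket.items():
    score, path = max(items)
    print(f"{bucket}: highest-risk call {path} (score {score})")
```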

 

 

Graylog Security and Graylog API Security: Helping Detect and Remediate Security Misconfigurations

Built on the Graylog Platform, Graylog Security gives you the features and functionality of a SIEM while eliminating the complexity and reducing costs. With our easy to deploy and use solution, you get the combined power of centralized log management, data enrichment and normalization, correlation, threat detection, incident investigation, anomaly detection, and reporting.

 

Graylog API Security is continuous API security, scanning all API traffic at runtime for active attacks and threats. Mapped to security and quality rules like OWASP Top 10, Graylog API Security captures complete request and response detail, creating a readily accessible datastore for attack detection, fast triage, and threat intelligence. With visibility inside the perimeter, organizations can detect attack traffic from valid users before it reaches their applications.

 

With Graylog’s prebuilt content, you don’t have to worry about choosing the server log data you want because we do it for you. Graylog Illuminate content packs automate the visualization, management, and correlation of your log data, eliminating the manual processes for building dashboards and setting alerts.

 

About Graylog 
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.