
Understanding Coordinated Inauthentic Behavior (CIB): What It Is and How It Affects the Public

The term Coordinated Inauthentic Behavior (CIB) is used frequently in the news to describe the propagation of misinformation, misrepresentation, and other types of negative online influence operations. Reports of CIB have recently led to the large-scale removal of accounts and pages on social media platforms. An example of CIB could be a political news site that claims to be headquartered in America but actually operates from Macedonia, or a Russian-created social media account that uses a fictitious name and random images to pose as an American blogging about US politics.

CIB can take two forms:

  1. Coordinated inauthentic behavior by domestic, non-government campaigns
  2. Coordinated inauthentic behavior by a foreign or government actor, termed Foreign or Government Interference (FGI)

The objectives of both variants are the same: they are part of larger coordinated campaigns that seek to influence public opinion across social media platforms in order to further political or social agendas.

What is Coordinated Inauthentic Behavior (CIB)?

Any domestic, non-government initiative or campaign that uses groups of accounts and pages on the internet, especially on social media, to deceive people about who is behind them and what they are doing is often regarded as Coordinated Inauthentic Behavior (CIB). Whether they are accounts, pages, or groups, such behavior occurs when numerous bogus identities or personas collaborate to promote a specific idea, product, or media subject with an ulterior intent. It comprises influence operations aimed at manipulating public opinion for a strategic purpose, which may be financial or political. For instance, during the Covid-19 outbreak, a network of web pages was active in spreading coronavirus misinformation.

What Impact Does CIB Have on the Regular Public?

Coordinated Inauthentic Behavior aims to manipulate public debate, push users towards political and social extremes, and ultimately provoke clashes between communities and religious groups. Depending on the campaign, the goal may be to sway public opinion or, when the objective is financial exploitation, to coerce users with scams.

The potential for misinformation to affect international politics and public opinion is large and has been proven time and time again. CIB goes a step further, intentionally targeting and misleading individuals instead of merely propagating false news. A major problem with CIB lies in its ability to shift public opinion in a short period of time, which makes the removal of the accounts involved almost useless in the long term, since their original goal has already been accomplished.

Identifying CIB on Facebook and Other Social Media Platforms

In recent years, the global increase of trolls and bots that manipulate public discussions on social media has caused significant challenges for political elections, natural-disaster communication systems, and global health emergencies such as the Covid-19 pandemic. However, progress has been made in using standard supervised learning to detect such adversaries.
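As a rough illustration of what such supervised learning can look like in this context, the sketch below trains a simple classifier to separate coordinated accounts from genuine ones using a handful of hypothetical per-account features (posting rate, account age, share of duplicated text, follower-to-following ratio). The features, labels, and data are synthetic and invented for illustration; they are not drawn from any platform’s actual detection pipeline.

```python
# Minimal sketch: supervised classification of accounts as "coordinated/inauthentic"
# vs. "genuine". All features, labels, and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-account features:
#   posts_per_day, account_age_days, duplicate_text_ratio, follower_following_ratio
genuine = np.column_stack([
    rng.gamma(2.0, 2.0, n),          # modest posting rate
    rng.uniform(200, 3000, n),       # older accounts
    rng.beta(1, 10, n),              # little copy-pasted text
    rng.lognormal(0.0, 1.0, n),      # balanced follower ratio
])
coordinated = np.column_stack([
    rng.gamma(10.0, 3.0, n),         # very high posting rate
    rng.uniform(1, 120, n),          # recently created accounts
    rng.beta(8, 2, n),               # mostly duplicated text
    rng.lognormal(-1.5, 1.0, n),     # few followers, many followees
])

X = np.vstack([genuine, coordinated])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = coordinated/inauthentic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Real detection systems use far richer behavioral and network features, but the basic pattern of labeling known cases and training a classifier is the same.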

If you know where to look, coordinated inauthentic behavior by people and organizations on social media is relatively simple to spot. Several indicators on Facebook pages and groups, like those listed below, can help users better understand the content they’re viewing and the intentions of those behind it.

  1. The Section on ‘Page Transparency’

Every Facebook page features a “Page Transparency” section that lets viewers see the countries from which the page’s admins operate. The section is available in both mobile and desktop views. However, this option does not apply to Facebook groups.

  2. Posts with Multiple ‘Like and Share’ Requests Might Signal a Problem

It might indicate coordinated inauthentic behavior if a page is overloaded with photographs and memes urging users to like and share the content. According to Snopes, although this does not always point towards questionable activity, an overload of this type of media is frequently associated with inauthentic pages trying to gain more traction.

  3. ‘Blue Ticked’ Verified Pages

Blue badges appear next to the group or profile name on verified pages. Whether on Facebook, Twitter, or Instagram, the blue tick next to a profile name represents an authenticated account, meaning the page or profile belongs to the genuine individual or organization. An unverified page without the blue badge that claims to represent your favorite celebrity and asks for money for some social cause is unlikely to be genuine. Being cautious about which accounts claim to act on behalf of particular organizations or people is an important part of staying safe online.

  4. Check the ‘Page Creation Date’

Check the date the page, group, or profile was created, especially for politically focused forums hosting serious debates. For instance, it is a red flag if a page about a hot-button American political issue was created merely a week ago and its transparency information shows that the real page managers are located in another country. It takes time for outsiders to get involved in a country’s debate on a serious domestic issue. You can click the “Page Transparency” link on a page or the “About” tab in a group to find the creation date.

  5. Examine the Administrators and Moderators of a Facebook Community

Since Facebook groups (but not pages) disclose their administrators, moderators, and members, you can look at the “Members” section of a group to see who is operating it and whether the admins appear to be authentic individuals.

Examples/Case Studies of CIB

The following are a few well-known campaigns involving CIB from recent years.

  1. #SaveTheChildren Campaign

The #SaveTheChildren campaign purposefully propagated the notion that a “cabal” of celebrities and political figures participated in the satanic, ritualistic sexual abuse of children worldwide.

In 2020, a conspiracy protest movement known as #SaveTheChildren surged throughout the United States, Canada, the United Kingdom, and Europe, sparking hundreds of in-person marches and protests. The #SaveTheChildren campaign’s claimed purpose was to raise awareness about the atrocities of “child sex trafficking.” 

The main inspiration behind the campaign was the QAnon conspiracy movement, which was started in October 2017 by an anonymous user of the 4chan website later known as “Q.” This user claimed to be privy to top-secret government intelligence suggesting, among other fraudulent theories, that Hillary Clinton was wanted by the federal government and was about to be arrested.

  2. Ebola and the United States Border

Brian Kolfage, a Trump supporter and anti-immigrant activist, raised millions of dollars in internet donations to build a wall at the US-Mexico border. When the US government ordered the work to be stopped after two days, he tweeted that an “insider” had notified him that construction had been halted because there were nine migrants with “proven” Ebola cases at the Texas border. The assertion was false, but the Ebola hoax quickly spread across the country on social media and in right-wing organizations.

He used disinformation to stoke panic as a way to exploit the issue of immigration and gather support for his political aim of curbing it, a long-standing pledge of then-US President Donald Trump.

  3. The Milk Tea Alliance

The Milk Tea Alliance is a multinational online network of young people manipulating media narratives under the hashtag #MilkTeaAlliance. Its supporters include youngsters from Thailand, Hong Kong, Taiwan, and Myanmar, who use the hashtag to combat what they see as authoritarianism, whether from the CCP (Chinese Communist Party) or from their own governments.

It surfaced in April 2020, after pro-Chinese Communist Party (CCP) accounts launched an online campaign to harass a Thai celebrity and his fans. A loosely organized group of young, largely Southeast Asian, pro-democracy netizens banded together in response, culminating in a meme war between the two sides on Twitter.

  4. The Antifa Fires Rumor

During the Oregon wildfires in September 2020, allegations circulated locally and globally that left-wing activists were to blame. The evidence alleging “antifa” involvement was based on a series of misinterpretations by public authorities. The rumor was amplified by far-right political influencers, bogus Antifa Twitter accounts, and various anonymous trolling communities on the 4chan website.

  5. ‘Hammer’ and ‘Scorecard’

The 2020 US presidential election was disturbed by unfounded accusations of widespread voting fraud, promoted by then-President Donald Trump, whose allegations came to be known as “the big lie.” One strand of this coordinated behavior involved two elements, “Hammer” and “Scorecard”: an alleged government-run supercomputer called “Hammer” and its companion software, “Scorecard,” said to work in tandem. The allegation was that the “Hammer and Scorecard” operation altered real votes across the country in favor of Joe Biden.

Final Words

With the ever-increasing accessibility and popularity of the internet and social media, influence operations and new deceptive behaviors will continue to emerge and spread despite pertinent regulations. Social media networks must keep working to identify and stop Coordinated Inauthentic Behavior (CIB) campaigns and other kinds of large-scale misinformation campaigns. However, as noted above, users must also stay educated and cautious about the phenomenon; doing so will help them recognize CIB activity and take precautions to avoid falling into its traps.

References

  1. Aziz, Z. (2020, November 2). What is Coordinated Inauthentic Behavior? Nisos. https://www.nisos.com/blog/what-is-coordinated-inauthentic-behavior/
  2. Meta. (2018, December 6). Coordinated inauthentic behavior. https://about.fb.com/news/tag/coordinated-inauthentic-behavior/
  3. Graham, T. (2020, May 29). Detecting and analyzing coordinated inauthentic behavior on social media. QUT Centre for Data Science. https://research.qut.edu.au/qutcds/events/detecting-and-analysing-coordinated-inauthentic-behaviour-on-social-media/
  4. Gleicher, N. (2018, December 6). Coordinated inauthentic behavior explained. Meta. https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/
  5. Johnson, S. (2021, December 21). How to spot ‘coordinated inauthentic behavior’ on Facebook, according to Snopes. Lifehacker. https://lifehacker.com/how-to-spot-coordinated-inauthentic-behavior-on-faceb-1848253059
  6. McGregor, S. (2020, September 17). What even is ‘coordinated inauthentic behavior’ on platforms? Wired. https://www.wired.com/story/what-even-is-coordinated-inauthentic-behavior-on-platforms/

#CIB #Facebook #vicarius_blog

Ripple20: Exploring the Real Impact

As part of our mission to secure the world’s OT, IoT, and cyber-physical infrastructures, we invest resources into offensive research of vulnerabilities and attack techniques.

Ripple20 is a set of 19 vulnerabilities revealed by the Israeli firm JSOF that affect millions of OT and IoT devices. The vulnerabilities reside in a TCP/IP stack developed by Treck, Inc. This TCP/IP stack is widely used by manufacturers in the OT and IoT industries and thus affects a tremendous number of devices.

Among the affected devices are Cisco routers, HP printers, Digi IoT devices, PLCs by Rockwell Automation, and many more. Official advisories by companies that confirmed having affected devices can be found here, in the “More Information” section.

The most critical are three vulnerabilities that can lead to stable remote code execution (CVE-2020-11896, CVE-2020-11897, CVE-2020-11901) and one that can leak the contents of the target device’s heap memory (CVE-2020-11898).

On behalf of our customers, we set out to explore the real impact of these vulnerabilities, which we’re now sharing with the public.

The research was conducted by researchers Maayan Fishelov and Dan Haim and was managed by SCADAfence’s Co-Founder and CTO, Ofer Shaked.

Exploitability Research
We set out to check the exploitability of these vulnerabilities, starting with CVE-2020-11898 (the heap memory leak vulnerability), one of the 19 published vulnerabilities.

We created a Python PoC script based on JSOF’s official whitepaper for this vulnerability. According to JSOF, the implementation is very similar to that of CVE-2020-11896, the RCE vulnerability described in the whitepaper. The whitepaper also notes about the RCE vulnerability: “Variants of this issue can be triggered to cause a denial of service or a persistent denial of service, requiring a hard reset.”
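For readers unfamiliar with the attack surface, the sketch below shows only the general shape of such a probe built with Scapy: a fragmented IPv4-in-IPv4 (protocol 4) packet carrying an inner UDP datagram, which is the tunneling code path the Treck stack mishandles. It is not the SCADAfence PoC and will not trigger CVE-2020-11898; the specific fragment offsets and length-field inconsistencies required are documented in the JSOF whitepaper and deliberately omitted here, and the target address is a placeholder.

```python
# Illustrative only: the general shape of a fragmented IPv4-in-IPv4 probe in Scapy.
# This is NOT the SCADAfence PoC and does NOT trigger CVE-2020-11898; a real trigger
# relies on carefully mismatched length fields across the tunneled fragments, which
# are documented in the JSOF whitepaper and intentionally not reproduced here.
from scapy.all import IP, UDP, Raw, fragment, send

TARGET = "192.0.2.10"   # placeholder lab address

# Inner packet: a UDP datagram addressed to the device itself.
inner = IP(dst=TARGET) / UDP(sport=40000, dport=40000) / Raw(b"A" * 32)

# Outer packet: IPv4 with protocol 4 (IPv4-in-IPv4 tunneling), wrapping the inner one.
outer = IP(dst=TARGET, proto=4) / inner

# Send the tunneled packet as IP fragments; the vulnerable Treck code path is the
# reassembly and decapsulation of exactly this kind of traffic.
for frag in fragment(outer, fragsize=24):
    send(frag, verbose=False)
```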

Trial Results:
Test 1 target: Samsung ProXpress printer model SL-M4070FR firmware version V4.00.02.18 MAY-08-2017. This device is vulnerable according to the HP Advisory.

Test 1 result: The printer’s network stack crashed and required a hard reset to recover. We were unable to reproduce the heap memory leak as described; on this specific printer, the vulnerability would instead be tagged as an unauthenticated remote DoS.

Test 2 target: HP printer model M130fw. This device is vulnerable according to the HP Advisory.

Test 2 result: Although reported as vulnerable by the manufacturer, we were unable to reproduce the vulnerability, and we believe this device isn’t affected by it. We believe that’s because the IPinIP feature isn’t enabled on this printer, which we verified with a specially crafted packet (a sketch of this kind of check appears below).
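The following is a sketch of one way to check whether a device decapsulates IPv4-in-IPv4 traffic: send a tunneled ICMP echo request and watch for a plain echo reply. It is an assumption about what such a crafted packet could look like, not the actual SCADAfence test packet, and the target address is a placeholder.

```python
# Sketch: probe whether a device processes IPv4-in-IPv4 (IPIP) traffic at all.
# Assumed approach for illustration; not the actual SCADAfence test packet.
import time
from scapy.all import IP, ICMP, AsyncSniffer, send

TARGET = "192.0.2.10"   # placeholder printer address on an isolated lab network

# ICMP echo request tunneled inside an outer IPv4 packet (IP protocol 4 = IPIP).
probe = IP(dst=TARGET, proto=4) / IP(dst=TARGET) / ICMP()

# Capture any plain ICMP echo reply coming back from the target: a reply means the
# device unwrapped the tunnel and processed the inner packet.
sniffer = AsyncSniffer(filter=f"icmp and src host {TARGET}", timeout=3)
sniffer.start()
time.sleep(0.2)                # give the capture thread a moment to come up
send(probe, verbose=False)
sniffer.join()

echo_replies = [p for p in sniffer.results if p.haslayer(ICMP) and p[ICMP].type == 0]
if echo_replies:
    print("IP-in-IP appears to be enabled: the tunneled echo request was answered.")
else:
    print("No echo reply to the tunneled probe: IP-in-IP handling is likely disabled.")
```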

Test 3 target: Undisclosed at this stage due to disclosure guidelines. We will reveal this finding in the near future.

Test 3 result: We found an unreported vendor and device, on which we can use CVE-2020-11898 to remotely leak 368 bytes from the device’s heap, disclosing sensitive information. No patch is available for this device. In line with our strict policy of following Google’s responsible disclosure guidelines, we’ve reported this to the manufacturer to allow them to make a patch available prior to the publication date.

Key Takeaways
We’ve confirmed the exploitability of these vulnerabilities on our IoT lab devices.

On the negative side: The vulnerabilities exist on additional products that are unknown to the public. Attackers are likely to use this information gap to attack networks.
On the positive side: Some devices that are reported as affected by the manufacturers are actually not affected, or are affected by other vulnerabilities. This may force attackers to tailor their exploits to specific products, increasing the cost of exploitation and preventing them from blindly using the vulnerability against every product that is reported as vulnerable.

SCADAfence Research Recommendations
  1. Check your asset inventory and vulnerability assessment solutions for unpatched products affected by Ripple20. The SCADAfence Platform builds an asset inventory with product and software versions, both passively and actively, and allows you to manage your CVEs across all embedded and Windows devices.
  2. Prioritize patching or other mitigation measures based on exposure to the internet, exposure to insecure networks (the business LAN and others), and the criticality of the asset. This prioritization can be obtained automatically from tools such as the SCADAfence Platform.
  3. Detect exploitation based on network traffic analysis. The SCADAfence Platform detects usage of these exploits in network activity by searching for patterns that indicate usage of this vulnerability in TCP/IP communications (a simplified illustration of the idea follows below).
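SCADAfence’s actual detection signatures are not public, but as a simplified, standalone illustration of the idea, the sketch below flags IPv4-in-IPv4 (IP protocol 4) traffic sent to a watchlist of embedded devices, since the published Ripple20 triggers abuse the Treck stack’s IP tunneling code path. The device addresses and the alerting logic are placeholders.

```python
# Simplified heuristic: flag IPv4-in-IPv4 (IP protocol 4) traffic sent to monitored
# embedded devices, since the published Ripple20 triggers abuse Treck's IP tunneling
# code path. SCADAfence's actual detection logic is not public; the device list and
# alerting below are placeholders for illustration.
from scapy.all import sniff, IP

MONITORED_DEVICES = {"192.0.2.10", "192.0.2.11"}   # placeholder printer/PLC addresses

def check_packet(pkt):
    if IP in pkt and pkt[IP].proto == 4 and pkt[IP].dst in MONITORED_DEVICES:
        print(f"[ALERT] IP-in-IP packet from {pkt[IP].src} to {pkt[IP].dst} "
              f"({len(pkt)} bytes) - possible Ripple20 probing")

# BPF filter keeps only IPv4 packets whose protocol field is 4 (IPIP).
sniff(filter="ip proto 4", prn=check_packet, store=False)
```

In production, such a check would normally run on a network tap or SPAN port rather than on the devices themselves.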
If you have any questions or concerns about Ripple20, please contact us and we’ll be happy to assist you and share our knowledge with you or with your security experts.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, points of sale, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which includes Global 1000 enterprises, regional listed companies, public utilities, government, a vast number of successful SMEs, and consumers in various Asian cities.

About SCADAfence
SCADAfence helps companies with large-scale operational technology (OT) networks embrace the benefits of industrial IoT by reducing cyber risks and mitigating operational threats. Our non-intrusive platform provides full coverage of large-scale networks, offering best-in-class detection accuracy, asset discovery and user experience. The platform seamlessly integrates OT security within existing security operations, bridging the IT/OT convergence gap. SCADAfence secures OT networks in manufacturing, building management and critical infrastructure industries. We deliver security and visibility for some of the world’s most complex OT networks, including Europe’s largest manufacturing facility. With SCADAfence, companies can operate securely, reliably and efficiently as they go through the digital transformation journey.
