Focus
Hybrid Intelligence as a Carrier of Disinformation and Hybrid Threats in Cyberspace
(Volume 26, No. 1, 2025)

Author: Nikola Mlinac

 

DOI: https://doi.org/10.37458/nstf.26.1.2

Review paper
Received: 23 October 2024

Accepted: 20 February 2025

 

 

Abstract: Social networks have become powerful media and communication tools that provide state actors with effective support in the planning and execution of influence operations in cyberspace. In this context, new patterns in the planning and conduct of covert offensive information operations will be presented, in which the artificial intelligence systems used by social networks play a crucial role. At the tactical level, these systems are used to exploit social network users' personal data on their political, ideological and religious beliefs, as well as their tendencies towards violent extremism, radicalism and terrorism, in order to create hybrid threats. The main hybrid threat presented here is automated and anonymous disinformation that adapts to these beliefs and tendencies. Hybrid intelligence is depicted as the key factor that has enabled the use of this category of user data for the creation of hybrid threats in cyberspace.


The article aims to underscore that the artificial intelligence systems used by social networks have enabled more effective exploitation of weaknesses in political and social systems, based on personal data about the beliefs and tendencies of social media users who are insufficiently aware of this exploitation. The application of hybrid intelligence has further complicated the timely recognition, mitigation and deterrence of the potentially harmful consequences of hybrid threats.

 

Keywords: artificial intelligence systems, social networks, influence operations, cyberspace, social vulnerabilities, disinformation, hybrid threats.

 

Introduction
This article aims to demonstrate that machine learning, deep learning, recommendation algorithm systems and automated fake accounts (bots) constitute the key artificial intelligence systems through which different cyberspace actors use social networks as tactical tools supporting the planning and execution of influence operations. The intention is to underscore that the political, ideological and religious beliefs, principles and values of social network users, as well as their affinities for different forms of violent extremism, radicalism and terrorism, are of great importance to the aforementioned systems when it comes to creating hybrid threats through pre-planned, covert and targeted offensive information-psychological operations.
Hybrid intelligence is considered the application of said artificial intelligence systems in creating automated and anonymous disinformation. Such disinformation is used for the targeted creation of hybrid threats and steering them in desired directions. Such threats are not a novelty when it comes to resolving international disputes and conflicts. The novelty, rather, lies in the tools and possibilities for their creation as well as the increased complexity of their timely recognition and deterrence. Threats that are reinforced by hybrid intelligence in cyberspace are considered hybrid threats. The context of hybrid threats will primarily be presented within the information domain of confrontations between international state actors. In that context, hybrid threats will be presented through the utilisation of the aforementioned artificial intelligence systems and user data on social networks in order to exploit social vulnerabilities for the creation of such threats.
The application of the aforementioned artificial intelligence systems on social networks has, with the possibility of automated and anonymous offensive activity adapted to social vulnerabilities, brought about a paradigm shift in the planning and execution of covert offensive information operations. Such offensive activities have become anonymous and automated, with social networks and hybrid intelligence becoming tools wielded by state actors to implement their own policies in an efficient manner and, consequently, create hybrid threats. Target audiences may, in that context, include states, political decision-makers, the general population, communities, groups or individuals that use social networks to express their political, ideological and religious beliefs and tendencies towards different forms of violent extremism, radicalism and terrorism.
Hybrid conflicts are observed through continuous economic, social, political and security crises which, as a rule, precede any hybrid warfare and primarily take place, and are contained, within cyberspace through the creation of hybrid threats, where – due to their many advantages – the artificial intelligence systems used by social networks play a key role (Mlinac, 2022). In such periods, interference in electoral processes is viewed as a major hybrid threat with potential strategic consequences. Hybrid warfare is considered a means of resolving international disputes in which force of arms is applied only as a last resort (Mlinac, 2022).
The United States of America and the Russian Federation are seen as the key actors which, in times of hybrid conflict and hybrid warfare, use social networks and hybrid intelligence to create hybrid threats. The creation of hybrid threats will be presented within the context of the continuous conflicts that preceded the ongoing war in Ukraine, where the United States and Russia employed social networks in different geographical areas to support the planning and execution of influence operations. U.S. hybrid threats in the context of hybrid warfare will be exemplified by the 2015-2020 civil and proxy war in Syria, whereas those in the context of hybrid conflict will be illustrated through the examples of 2021-2022 influence operations in Central Asian states.

Russia’s hybrid threats in hybrid warfare will be exemplified by the 2014-2015 military intervention in Ukraine, while hybrid conflict will be illustrated through the examples of interference in the 2016 U.S. presidential elections and the 2017-2018 parliamentary elections in the Baltic states (Estonia, Lithuania and Latvia), as well as in France and Germany. These examples illustrate the role of artificial intelligence systems in cyberspace at the tactical level of offensive activity with potential and actual strategic consequences in terms of providing efficient support for planning and executing influence operations. Different levels of applying hybrid intelligence depended on the context of given operational, tactical and strategic goals (Mlinac, 2022).

 

The notion of hybridity in international conflicts and influence operations in cyberspace 

For roughly the past fifteen years, the academic, scientific, political and military/security communities have used the term “hybridity” to describe international economic, social, political and security crises and upheavals that, in some cases, escalated into open armed conflicts. There are many examples of wars and conflicts in which new information and communication technologies (ICTs), administered by the artificial intelligence (AI) systems used by social networks, assumed a key role in providing adequate and efficient information support to the planning and execution of influence operations (Tuđman, 2009, pp. 25-45).
The AI systems used by social networks operate on principles that serve commercial interests rather than fundamental ethical and moral norms. They thereby offer new possibilities for planning and creating various threats, and contribute to the effective reinforcement of such threats through the dissemination of automated and anonymous disinformation which may additionally – should someone so desire – be tailored to the political, religious and ideological preferences of target audiences (TAs), as well as to specific categories of tendencies among them, such as violent radicalism, terrorism and violent extremism.
Hybridity is not a novel concept. It emerged as far back as ancient times to depict the application of technological solutions to support conflict and warfare strategies (Popescu, 2015). Accordingly, the concept of hybridity indicates conflict and warfare tactics that are as old as the very phenomenon of conflicts and wars. The Western academic, scientific, political and military/security communities have reinvented the concept of hybridity to describe in the best possible manner the growing role of cyberspace and its related ICTs in the warfare model applied by Russia, first in its military intervention in Georgia in 2008 and then again in Ukraine in 2014-2015. However, the notion of hybridity appeared somewhat earlier in Nemeth’s 2002 study entitled “Future War and Chechnya: A Case for Hybrid Warfare” (Nemeth, 2002). The author utilised the concept of hybridity to depict the dependence of combat effectiveness on the capacity to exploit cyberspace and new ICTs in the war waged by Chechen rebels against the Russian authorities. 
Following the emergence of the first social networks – Facebook in 2004, YouTube in 2005 and Twitter in 2006 – it became clear that they could be used beyond the scope and purpose for which they were originally designed, i.e., connecting friends and families, pursuing business opportunities, sharing ideas and providing global networked communication. Thus, in many subsequent instances of armed conflicts and economic, social, political and security crises and upheavals, the aforementioned social networks proved to be efficient tools for achieving political goals.
In cyberspace, the notion of hybridity in the form of “new old conflicts and wars” is understood to imply any exploitation of the power of AI systems to manage and plan covert offensive information operations, where the planners and executors of such operations aim to utilise social network users' personal data and AI systems to pursue political goals. Personal data primarily refers to the aforementioned political, religious and ideological beliefs as well as tendencies towards different forms of violent extremism, radicalism and terrorism. In the context of exploiting said systems and personal data, political goals can be recognised in the interest in exerting a short- or long-term influence on the outcomes of international economic, social, political and security crises and upheavals.

We may say that hybridity in cyberspace is a term that basically describes the technological power of AI systems to exert influence, whereby different actors can, in the short or long run, efficiently shape or reshape value and belief systems and tendencies among their TAs in line with their own needs. Thanks to such possibilities, globally accessible social networks have become strong tactical influencing tools providing effective support in the planning and execution of offensive information operations.

 

Principal artificial intelligence systems used in planning and conducting offensive information operations on social networks

 

The principal artificial intelligence systems used to plan and conduct covert offensive information operations on social networks include, as mentioned earlier, machine learning, deep learning, recommendation algorithm systems and automated fake accounts (bots). Machine learning adds to the efficiency of such operations in that it helps their planners and executors by capturing huge amounts of personal data where it identifies “useful patterns and correlations among different data,” on which basis it “draws conclusions on future behaviour and, in accordance with such conclusions, determines further human behaviour” (Crnčić, 2020, p. 29). 
Machine learning provides a better and faster understanding of different situations, ensures greater precision, accelerates decision-making processes and, thus, complements human evaluation and prediction. Deep learning is used to predict desired outcomes. Machine learning and deep learning select TAs based on their value, belief and principle systems, tendencies, interests, motives, identified weaknesses and vulnerabilities, and recognise their decision-making drivers. Recommendation algorithm systems arouse user interest in, and – in the long run – focus their attention only on a specific set of information items, limiting their access to new knowledge, whereas bots ensure automated and anonymous dissemination of a limited set of data that suit the interests of attackers.
Owing to the above-described capabilities, AI systems have allowed cyberspace to accommodate new, efficient patterns for planning and executing covert offensive information-psychological operations. New patterns of psychological operations have become globally accessible; they can be planned and executed at all influence levels, in individual, group and mass settings. Outside the context of wars and armed conflicts, the objectives of such activities may be directed towards the creation of disinformation and hybrid threats at local, regional or global levels. AI systems have enabled the automation and anonymity of offensive activity and its adaptation to social vulnerabilities. The immediacy, anonymity, automation and adaptation of activity with a view to generating desired processes (Nadler, Crain and Donovan 2018; Stoica 2020; RPA 2021) constitute a new pattern of creating meta-propaganda, pseudo-events and pseudo-knowledge, that is, information superiority (Akrap 2011, p. 310; Tuđman 2008, p. 13 and pp. 124-125; Tuđman 2013, p. 19).
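The narrowing effect attributed to recommendation algorithm systems above can be illustrated with a minimal, hypothetical sketch. The scoring rule, item topics and names below are invented purely for illustration and do not model any real platform's system:

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Toy engagement-maximising recommender: rank catalog items by
    overlap with topics the user has already engaged with.
    (Illustrative only; not any real platform's algorithm.)"""
    seen_topics = Counter(t for item in history for t in item["topics"])
    def score(item):
        # Items sharing topics with past engagement score higher.
        return sum(seen_topics[t] for t in item["topics"])
    return sorted(catalog, key=score, reverse=True)[:k]

catalog = [
    {"id": 1, "topics": ["politics", "conspiracy"]},
    {"id": 2, "topics": ["sports"]},
    {"id": 3, "topics": ["politics"]},
    {"id": 4, "topics": ["cooking"]},
]
# The user has engaged with one politically charged item.
history = [{"id": 1, "topics": ["politics", "conspiracy"]}]

feed = recommend(history, catalog)
# The feed contains only items matching the engaged topics (ids 1 and 3);
# unrelated content (sports, cooking) is crowded out.
```

Because each click feeds back into `history`, repeated rounds of this loop concentrate attention on an ever-narrower topic set, which is a minimal model of the "limiting access to new knowledge" dynamic described above.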

Types of hybrid threats, critical social vulnerabilities and social network user data used to create hybrid threats in cyberspace

Cyberspace is not adequately regulated by law, and the AI systems operating within it are not adequately constrained by moral and ethical norms; nor do such norms offer adequate protection of user data on the beliefs and preferences of social network users. Combined with the aforementioned advantages offered by AI systems, this has made cyberspace an ideal environment in which social networks grow into a powerful and efficient tool for creating disinformation and hybrid threats. In the context of the abuse of machine learning, deep learning, recommendation algorithm systems and bots for the creation of efficient disinformation and hybrid threats, we can recognise the tactical and strategic benefits such systems offer in their planning and execution.
The tactical benefits are reflected in the fact that AI systems can expose social network users to constant, automated and anonymous disinformation which, when this is in someone’s interest, may accordingly be tailored to their political, ideological and religious beliefs as well as tendencies towards terrorism and violent radicalism and extremism. This opens possibilities for AI systems to use the aforementioned social network user data to create disinformation and hybrid threats in the pursuance of political agendas.
The utilisation of user data and AI systems to create disinformation and hybrid threats constitutes a major novelty in the shifting paradigm of international conflicts and wars. Specifically, the concept of hybrid threats implies a reality whereby actions and processes at the tactical level can yield significant results at the strategic level (Akrap and Mandić, 2020, p. 14). This key paradigm shift in offensive activity has been driven by AI systems, which have made it possible to identify social vulnerabilities based on social network users' preferences and tendencies and to tailor disinformation accordingly. Following that pattern, they have increased the efficiency of offensive activities through hybrid threats. Their efficiency in hybrid conflicts relies on the technological exploitation of social vulnerabilities identified by AI systems through users' political, ideological and religious beliefs as well as their tendencies towards terrorism and different forms of violent radicalism and extremism. This is best reflected in the definition of hybrid threats “as a set of potential manifestations of particular hybrid operations which entail targeted and organised action towards a TA in order to exploit (incite, deepen) existing and create new vulnerabilities and foster feelings of division, insecurity, defeatism, powerlessness, hopelessness, ambiguity, suspicion, disruption and collapse of democratic structures and processes as well as the attenuation and control of the defence system” (Akrap, 2019, pp. 37-39).
The exploitation of user data on political, ideological and religious beliefs and tendencies towards terrorism and different forms of violent radicalism and extremism as well as the use of AI systems to identify social vulnerabilities based on the described category of user data within a targeted political or social setting constitute the aforementioned key paradigm shift in offensive information and psychological activity in cyberspace. Owing to this capability, the aforementioned AI systems, which manage information operations on social networks as part of international conflicts, offer state actors efficacy in providing information support when planning and executing influence operations. 

Table 1. Basic types of hybrid threats and the purposes for their creation in the context of influence operations (Heap, Hansen and Gill, 2021, pp. 10-11).