Talking about AI: Definition
Artificial Intelligence (AI) – AI refers to simulations of human intelligence on machines, allowing them to perform tasks that normally require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in the field, covering a wide range of technologies and methodologies, including machine learning (ML) and deep learning.
Machine Learning (ML) – ML is a subset of AI focused on developing algorithms and statistical models that allow machines to learn based on data and make predictions or decisions. ML is a specific approach within AI, highlighting data-driven learning and improvement over time.
Deep Learning (DL) – Deep learning is a special subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced format of ML is particularly effective for tasks such as image and speech recognition, and has become an important component of many AI applications.
Major Language Models (LLM) – LLMS is a type of AI model designed to understand and generate human-like text by being trained on a wide range of text datasets. These models are specific applications for deep learning focused on natural language processing tasks and are essential for many modern AI-driven language applications.
Generation AI (genai) – Genai refers to an AI system that allows you to create new content such as text, images, music, and more based on trained data. This technology often leverages LLM and other deep learning technologies to generate original and creative outputs, and introduces the advanced capabilities of AI in content generation.
Summary: The good and bad of AI
Almost every day, the sacred milestones of the Turing Test become almost naive irrelevant as computer interfaces evolve from comparable to human language to comparable to similar ones, indistinguishable and undoubtedly superior. (1).
The development of large-scale language models (LLMS) began with advances in natural language processing (NLP) in the early 2000s, but a major breakthrough occurred in Ashish Vaswani’s 2017 paper. This allows us to train larger models on a vast dataset, greatly improving language understanding and generation.
Like other technologies, LLM is neutral and can be used by both attackers and defenders. The key question is which will benefit more, faster or faster?
Let me explain that question in a bit more detail. This is just an excerpt from our coverage in Security Navigator 2025, but covers some of the key points that should be relevant to everyone who works in the context of security or technology. If you want to read more about “fast injection” techniques and how to use them productively in AI security technologies, we recommend getting a full report.
AI in defense operations
- It could improve general office productivity and communication
- Possible to improve search, research, and open source intelligence
- Possible to enable efficient international and intercultural communication
- It may help collate and summarise diverse and unstructured text data sets
- May help with security intelligence and event information documentation
- We may assist in the analysis of potentially malicious emails and files
- We may assist in identifying fraudulent, fake or deceptive text, images or video content.
- It may assist in security testing functions such as reconnaissance and vulnerability detection.
Some form of AI has been used for a long time in a variety of security technologies.
As an example:
- Intrusion Detection System (IDS) and Threat Detection. Security vendor Darktrace uses ML to autonomously detect and respond to threats in real time by leveraging ML algorithms that train behavioral analysis and historical data to flag suspicious deviations from normal activity.
- Detection and prevention of phishing. ML models are used in products such as Proofpoint and Microsoft Defender, which use ML algorithms to identify and block phishing attacks to analyze email content, metadata, and user behavior using ML algorithms to identify phishing attempts.
- Endpoint detection and response (EDR). EDR products like Crowdstrike Falcon leverage ML to identify abnormal behavior and detect and mitigate cyber threats at endpoints.
- Microsoft Copilot for security. Microsoft’s AI-driven solutions are designed to assist security professionals by leveraging generated AI, including OpenAI’s GPT model, to streamline threat detection, incident response, and risk management.
AI in attack operations
- Possible to improve general office productivity and communication for bad actors
- Possible to improve search, research, and open source intelligence
- Possible to enable efficient international and intercultural communication
- It may help collate and summarize diverse and unstructured text datasets (such as social media profiles for phishing/spear phishing attacks)
- It may assist in attack processes such as reconnaissance and vulnerability detection.
- We may help you create reliable texts for cyberattack methods such as phishing, waterholes, and fraud.
- fraudulent, fake or deceptive text, images, or
- Video content.
- It may facilitate accidental data leaks and unauthorized access
- It may present a new, vulnerable and attractive attack surface.
Real-world examples of AI in attack operations are relatively rare. Notable instances include automatic exploit generation (AEG) for MIT(2) IBM Deep Rocker(3)has demonstrated malware equipped with AI. These remain proof of concept for now. In 2019, our research team published two AI-based attacks using topic modeling(4), It shows the unpleasant potential of AI for network mapping and email classification. Although we have not seen extensive use of such features, in October 2024, our CERT reported(5) Rhadamanthys Malware-as-a-Service (MAAS) incorporates AI to perform optical character recognition (OCR) on images containing sensitive information such as passwords, marking the closest real-world instance of AI-driven attack capabilities.
Security Navigator 2025 is here – Download now
The newly released Security Navigator 2025 provides important insights into current digital threats, documenting 135,225 incidents and 20,706 confirmed violations. It serves as a guide to navigating a safer digital landscape than just a report.
What is inside?#
- Depted Detailed Analysis: Statistics from Cybersocks, Vulnerability Scans, Pen Tests, Certificates, Cy-X and Ransomware Observations.
- Future Future: Learn security predictions and stories from the field.
- Security Deep-Dives: Be informed of Hacktivist activities and emerging trends related to LLMS/generated AI.
Take one step ahead with cybersecurity. Your essential guide is waiting for you!
Get copy now
LLM is increasingly used aggressively, especially in scams. A notable example is the UK engineering group Arup(6)They reportedly lost $25 million to a scammer who used their senior manager’s digital clone audio to order financial remittances during a video conference.
Is AI threatening?
To systematically consider the potential risks of LLM technology, we examine four perspectives: risks that do not adopt LLMS, existing AI threats, new threats unique to LLMS, and broader risks as LLMS is integrated into business and society. These aspects are visualized with the following graphics:

Branch 1: Inappropriate risks
Many of the clients we talk about feel pressured to adopt LLM. CISO is particularly concerned about the “risk of non-employment” driven by three main factors.
- Loss of Efficiency: Leaders believe that LLMs like Copilot and ChatGpt will increase workers’ efficiency and lag behind their competitors who employ them.
- Loss of Opportunity: LLM is seen as unable to uncover new business opportunities, products or market channels and take advantage of the risk of losing competitiveness.
- Loss of Marketability: With AI dominating the debate, companies worry that they will remain irrelevant in the market by not introducing the AI they offer.
These concerns are valid, but assumptions are often untested. For example, a survey conducted by Upwork Research Agency in July 2024 (7) “96% of C-Suite leaders hope that AI tools will increase productivity.” However, the report states that “nearly half (47%) of employees using AI don’t know how to achieve the productivity their employers expect, with 77% saying that these tools actually reduced productivity and added them to their workloads.
The marketing value of “equipped with AI” is still under discussion. A recent FTC report noted that consumers have expressed concerns about the entire AI lifecycle, particularly the limited attractive pathways of AI-based product decisions.
Companies need to consider the true costs of adopting LLM, including direct costs such as licensing, implementation, testing, training, and more. There is also an opportunity cost, as resources allocated to LLM adoption may have been invested elsewhere.
Security and privacy risks, along with broader economic externalities, require significant power and water use, including the large resource consumption of LLM training. According to one article (8)Microsoft’s AI data centers could consume more electricity than all of India within the next six years. Apparently, millions of people are cooled by millions of gallons of water.
Beyond resource tensions, there is ethical concern as creative works are often used to train models without the consent of creators. It inspires artists, writers and scholars. Furthermore, AI concentrations in a small number of owners can affect business, society and geopolitics as these systems accumulate wealth, data, and control. While LLMS promises productivity improvements, companies risk sacrificing direction, vision and autonomy for convenience. When weighing the risk of non-diagnostics, potential benefits should be carefully balanced with direct, indirect, and external costs, including security. Without a clear understanding of the value LLMS brings, companies may find that risk and costs outweigh rewards.
Branch 2: Existing threats from AI
In mid-October 2024, the World Watch security intelligence feature published an advisory summarizing the use of AI by offensive actors as follows: One of the most common ways national alliances and state-sponsored threat groups employ AI in their kill chains is to use generated AI chatbots such as ChatGPT for malicious purposes. These uses are assessed as varying depending on the individual group’s own capabilities and interests.
- North Korean threat actors are said to be using LLM to better understand publicly reported vulnerabilities (9)basic script tasks and target reconnaissance (including dedicated content creation used in social engineering).
- Iranian groups were seen generating phishing emails and using LLMS for web scraping (10).
- Chinese groups such as charcoal typhoons have abused LLMS for advanced commands representing post-commercial actions (10).
Openai disclosed on October 9th (11) Since the beginning of the year, it has disrupted over 20 ChatGPT abuse aimed at debugging and developing malware, spreading false information, avoiding detection and launching spear phishing attacks. These malicious uses were attributed to Chinese (sweet customers) and Iranian threat actors (CyberAv3ngers and Storm-0817). China’s cluster sweet spectrum (tracked by the Palo Alto Network as TGR-STA-0043) even targeted Openai employees in a spear phishing attack.
Recently, it has been observed that state-sponsored threat groups also run uninformed and influential campaigns targeting, for example, US presidential elections. Several campaigns stemming from threat actors in Iran, Russia and China have leveraged AI tools to erode public trust in the US democratic system or to distrust candidates. In Digital Defense Report 2024, Microsoft confirmed this trend, adding that these threat actors are leveraging AI to create fake text, images and videos.
Cybercrime
In addition to leveraging legitimate chatbots, cybercriminals have also created “dark LLMS” (models trained for fraudulent purposes) such as fraud, wormmpt, darkgemini. These tools are used to automate and enhance phishing campaigns, help less skilled developers create malware and generate fraud-related content. They are usually promoted to Darkweb and Telegram with an emphasis on the crime feature of the model.
Some financially motivated threat groups are also adding AI to malware stocks. The recent World Watch Advisory for newer versions of Rhadamanthys Infostealer describes new AI-dependent features and analyzes images that contain important information such as passwords and recovery phrases.
Continuing surveillance of Cybercrime forums and marketplaces observed a clear increase in malicious services supporting social design activities, including:
- Deepfakeespecially for sexttortor and romance schemes. Over time, this technology is becoming more convincing and cheaper.
- AI-equipped phishing and BEC tools It is designed to make it easier to create phishing pages, social media content and email copies.
- Audio phishing with AI. In a report released on July 23, Google revealed (12) AI-powered vising (or voice spoofing) facilitated by commercial voice synthesizers was a new threat.
Exploitation of vulnerabilities
AI still faces limitations when used to create exploit code based on CVE descriptions. As technology improves and becomes more readily available, it will be interesting for both cybercriminals and state support actors. LLM, which can autonomously locate critical vulnerabilities, can leverage code and use them against targets, and can have a deep impact on the threat situation. Therefore, leveraging development skills could make it accessible to anyone who has access to advanced AI models. Fortunately, the source code for most products is not readily available for training such models, but open source software may present useful test cases.
Branch 3: New Threats from LLMS
The new threats born from the widespread adoption of LLM depend on how and where the technology is used. This report focuses strictly on LLMS and should consider whether it is in the hands of attackers, businesses, or society as a whole. In the case of businesses, are they consumers of LLM services or providers? For providers, are they building their own models, procuring models, or procuring full functionality from other models?
Each scenario poses a variety of threats and requires customized controls to mitigate the risks inherent to that use case.
Threats to consumers
While consumers use external provider Genai products and services, providers create or enhance consumer services that leverage LLM, whether they develop an internal model or use third-party solutions. Many companies may employ both roles over time.
It is important to recognize that employees are already using public or local genai for work or personal purposes and almost certainly using it as they are pose additional challenges to the company. For those who consume external LLM services, whether they are businesses or individual employees, the key risks revolve around data security, taking into account additional compliance and legal concerns. Key data-related risks include:
Data leak: Workers may unintentionally disclose sensitive data to LLM systems, such as CHATGPT, either directly or through the nature of the query.
Hallucinations: Genai can generate inaccurate, misleading or inappropriate content that employees may incorporate into their work, creating legal liability. When generating code, it can be buggy or unsettling (13).
Intellectual Property Rights: As companies use data to train LLMS and incorporate production volume into intellectual property, unresolved ownership questions can be liable for violations of rights.
Genai production only increases productivity when it is accurate, appropriate and legal. Output generated in unregulated AI can pose false information, liability, or legal risk to your business.
Threats to providers
If a company chooses to integrate LLM into its own systems or processes, a completely different set of threats will emerge. These can be broadly categorized as follows:
Model-related threats
Trained or coordinated LLMs are highly valuable to developers and are subject to threats to confidentiality, integrity and availability.
In the latter case, threats to your own model include:
- Model theft.
- Hostile “addictions” to negatively affect the accuracy of the model.
- Destruction or confusion of the model.
- The legal liability that may emerge from the model produces false, misrepresentation, misleading, inappropriate or illegal content.
However, when an organization implements Genai within a technology environment, it rates the most meaningful new threats as emerge from an increase in attack surface.
Genai as an offensive surface
Genai is a complex new technology consisting of millions of codelines that extend the attack surface and introduce new vulnerabilities.
Common genai tools such as ChatGpt and Microsoft Copilot become widely available, and thus no longer offer a significant competitive advantage on their own. The true power of LLM technology lies in integrating it with business-specific data or systems to improve customer service and internal processes. One important way is to use an interactive chat interface with Genai. Here, the user interacts with a chatbot that generates coherent, context-aware responses.
To enhance this, the chat interface must take advantage of features such as searched generation (RAG) and APIs. Genai processes user queries, RAG retrieves related information from its own knowledge base, and API connects Genai to the backend system. This combination allows chatbots to provide contextually accurate output while interacting with complex back-end systems.
However, Genai is often traced back as a security perimeter between users and corporate backend systems, often introducing a critical new attack surface directly to the Internet. Like the graphical web application interface that emerged in the 2000s, chat interfaces have the potential to convert digital channels to make business clients simple and intuitive access. Unlike graphical web interfaces, the non-deterministic nature of Genai means that even developers don’t fully understand the internal logic, creating great opportunities for vulnerability and exploitation. Attackers have already developed tools to take advantage of this opacity, leading to similar potential security challenges that are found in early web applications that still plague security defenders today.
Trick llms from “guardrail”
The Open Web Application Security Project (OWASP) identifies “fast injection” as the most important vulnerability in Genai applications. This attack manipulates the language model by embedding specific instructions in user input, triggering unintended or harmful responses, revealing sensitive information and bypassing safeguards. The attacker creates input that overrides the standard behavior of the model.
Just like in the early days of web application hacking, tools and resources quickly emerge to discover and take advantage of rapid injection. Given the complexity of LLM and the digital infrastructure needed to connect the chat interface to its own system, we expect chat interface hacking to remain a critical cybersecurity issue for many years.
As these architectures grow, traditional security practices such as secure development, architecture, data security, identity and access management become even more important to ensure proper approval, access control and privilege management in this evolving situation.
When the “NSFW” AI chatbot site Muah.AI was compromised in October 2024, hackers described the platform as “a handful of open source projects duct tape together.” Apparently, according to reports, “finding vulnerabilities that provided access to the platform’s database was not an issue at all.” We expect such reports to become common in the coming years.
Conclusion: The same is not a new dimension
Like other powerful technologies, we naturally fear the impact that LLMS has on the enemy’s hands. Much attention is paid to the question of how AI “accelerates threats.” The uncertainty and anxiety that arises from this obvious change in the threat landscape is of course utilised to insist on greater investment in security.
But while some are certainly changing, many of the threats highlighted by today’s warning scholars are facing LLM technology and require us less than we continue to do consistently what we already know. For example, all of the following threat actions have already been implemented with support for ML and other forms of AI, although perhaps enhanced by LLMS. (14) (or in fact, no AI):
- Online impersonation
- Cheap and trustworthy phishing emails and sites
- Fake voice
- translation
- Predictive password cracking
- Discovering vulnerabilities
- Technology hacking
The notion that the enemy could carry out such activities more frequently or more easily is a source of concern, but does not necessarily require fundamental changes in security practices and technology.
On the other hand, LLMS as an offensive surface is highly underrated. It is important to learn the lessons of previous technology revolutions (such as web applications and APIs) so that they do not repeat them by recklessly employing somewhat untestable technologies that have not been tested at the boundaries of open cyberspace and our key internal assets. Companies explicitly advise being extremely careful and hardworking in assessing the potential benefits of deploying Genai as an interface, with the potential risks that such complex, untested technologies will ensure. Essentially, we already know at least the same access and data safety issues from the dawn of the cloud era and the erosion of the boundaries of the classic company that followed.
Despite the groundbreaking innovations we are observing, security “risks” still consist of fundamentally the products of threats, vulnerabilities and impacts, and if LLM is not there yet, they cannot be magically created. If these elements already exist, the risks that businesses must deal with are largely independent of the existence of AI.
This is just an excerpt from research conducted with AI and LLMS. To read the expert stories about how full story and more detailed advisory, as well as quick injections work to operate the LLMS and work outside the safety guardrail, or how defenders use AI to detect subtle signals of compromise in vast networks: it’s all in Security Navigator 2025.