Safe and Inclusive E‑Society: How Lithuania Is Bracing for AI‑Driven Cyber Fraud


Presentation of the KTU Consortium Mission ‘A Safe and Inclusive Digital Society’ at the Innovation Agency event ‘Innovation Breakfast: How Mission-Oriented Science and Innovation Programmes Will Address Societal Challenges’.

Technologies are evolving fast, reshaping economies, governance, and daily life. Yet as innovation accelerates, so do digital risks. For a country like Lithuania, technological change is no longer abstract: from e-signatures to digital health records, the country depends on secure systems.

Cybersecurity has become not only a technical challenge but a societal one, demanding the cooperation of scientists, business leaders, and policymakers. In Lithuania, this cooperation has taken concrete form: a government-funded national initiative, coordinated by the Innovation Agency Lithuania, that aims to strengthen the country’s e-security and digital resilience.

Under this umbrella, universities and companies with long-standing expertise are working hand in hand to transform scientific knowledge into market-ready, high-value innovations. Several of these solutions are already being tested in real environments, for example, in public institutions and critical infrastructure operators. As Martynas Survilas, Director of the Innovation Development Department at the Innovation Agency Lithuania, explains:

“Our goal is to turn Lithuania’s scientific potential into real impact – solutions that protect citizens, reinforce trust in digital services, and help build an inclusive, innovative economy. The era of isolated research is over. In practice, science and business must work together to keep pace with complex, multilayered threats.”

A National Mission: Safe and Inclusive E-Society

Among the three strategic national missions launched under this program, one stands out for its relevance to the global digital landscape: “Safe and Inclusive E-Society”, coordinated by Kaunas University of Technology (KTU).


The mission aims to increase cyber resilience and reduce the risks of personal data breaches, with a focus on everyday users of public and private e-services, contributing directly to Lithuania’s transformation into a secure, digitally empowered society. Its total value exceeds €24.1 million.

The KTU consortium includes top Lithuanian universities – Vilnius Tech and Mykolas Romeris University – as well as leading cybersecurity companies and research organizations such as NRD Cyber Security, Elsis PRO, Transcendent Group Baltics, and the Baltic Institute of Advanced Technology, together with the industry association Infobalt and the Lithuanian Cybercrime Competence, Research and Education Center.

The mission’s research and development efforts cover a broad spectrum of cybersecurity challenges that define today’s digital landscape. Teams are developing smart, adaptive, and self-learning buildings. In the financial sector, new AI-driven defense systems are being built to protect FinTech companies and their users from fraud and data breaches. Industrial safety is strengthened through prototypes of threat-detection sensors for critical infrastructure, while hybrid threat management systems are being tailored for use in public safety, education, and business environments. Other research focuses on combating disinformation through AI models that automatically detect coordinated bot and troll activity, as well as on creating intelligent platforms for automated cyber threat intelligence and real-time analysis. 

AI Fraud: A New Kind of Threat

According to Dr. Rasa Brūzgienė, Associate Professor at the Department of Computer Sciences at Kaunas University of Technology, the emergence of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) has fundamentally changed the logic of fraud against e-government services.

“Until now, the main defense relied on pattern-based detection – for example, automated filters and firewalls could recognize recurring fraud patterns, typical phrases or structures,” she explains. “However, GenAI has eliminated that ‘pattern’ boundary. Today, criminals can use generative models to create contextually accurate messages. Models know how to write without grammatical errors, use precise terminology, and even replicate the communication style of institutions. This means that modern phishing emails no longer resemble ‘classic fraud’ but become difficult to recognize even for humans, let alone automated filters.”
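The pattern-based detection Dr. Brūzgienė describes can be illustrated with a minimal sketch. The keyword list, scoring threshold, and sample messages below are assumptions for illustration only, not a real filter:

```python
import re

# Illustrative patterns typical of "classic" phishing; production filters
# use far larger rule sets, but the principle is the same.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent action required",
    r"click (the )?link below",
    r"your account (will be|has been) suspended",
]

def looks_like_phishing(message: str, threshold: int = 1) -> bool:
    """Flag a message if it matches enough known fraud patterns."""
    hits = sum(bool(re.search(p, message, re.IGNORECASE))
               for p in SUSPICIOUS_PATTERNS)
    return hits >= threshold

# A "classic" scam trips the filter...
classic = "URGENT ACTION REQUIRED: verify your account or it will be suspended."

# ...while a fluent, context-aware rewrite of the same lure sails through,
# because it shares no formal features with the known patterns.
genai_style = ("Dear colleague, following last week's migration of the "
               "self-service portal, please re-confirm your login details "
               "at your convenience.")
```

The second message carries the same malicious intent as the first, yet matches none of the formal features the filter keys on – which is exactly the “pattern boundary” the quote says generative models have eliminated.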

She emphasizes that both the scale and the quality of attacks have evolved: “The scale has increased because GenAI allows for the automated generation of thousands of different, non-repeating fraudulent messages. The quality has increased because these messages are personalized, multilingual, and often based on publicly available information about the victim. The result: traditional firewalls and spam filters lose their effectiveness because their detectors can no longer rely on formal features of words, phrases, or structure. The main change is no longer mass scale, but realism. In other words, modern attacks don’t look like fraud – they look like normal legal communication.”


Criminals today, Dr. Brūzgienė warns, have access to a broad arsenal of AI tools. They use models such as GPT-4, GPT-5, Claude, and open-source alternatives like Llama, Falcon, and Mistral – as well as darker variants such as FraudGPT, WormGPT, or GhostGPT, specifically designed for malicious activities. “They can clone voices using ElevenLabs or Microsoft’s VALL-E from just a few seconds of someone speaking. For creating fake faces and videos, they use StyleGAN, Stable Diffusion, DALL-E, and DeepFaceLab, along with lip-sync solutions like Wav2Lip and First-Order-Motion,” she notes.

Even more concerning, she adds, is how these tools are orchestrated together: “Criminals produce photorealistic face photos, deepfake videos, and document copies with meticulously edited metadata. LLMs generate high-quality, personalized phishing texts and onboarding dialogues, TTS and voice-cloning models recreate a victim’s or employee’s voice, and image generation tools produce ‘liveness’ videos that fool verification systems. Automated AI agents then handle the rest – creating accounts, uploading documents, and responding to challenges. These multimodal chains can bypass both automated and human verification based on trust.”

“The scary part,” Dr. Brūzgienė concludes, “is how accessible all of this has become. Commercial TTS solutions like ElevenLabs and open-source implementations of VALL-E provide high-quality voice cloning to anyone. Stable Diffusion, DeepFaceLab, and similar tools make it easy to generate photorealistic images or deepfakes quickly. Because of this accessibility, a single operator can create hundreds of convincing, different, yet interconnected fake profiles in a short time. We are already seeing such cases in attempts to open fake accounts in financial institutions and crypto platforms.”

AI-Powered Social Engineering

Another new frontier is adaptive AI-driven social engineering. Attackers no longer rely on static scripts – they use LLMs that adapt to a victim’s reactions in real time.

Bots start with automated reconnaissance, scraping social media, professional directories, and leaked databases to build personalized profiles. Then, the LLM crafts initial messages that mirror a person’s professional tone or institutional language. If there’s no response, the system automatically switches channels – from email to SMS or Slack – and changes tone from formal to urgent. If a target hesitates, the AI generates plausible reassurance, quoting real internal policies or procedures.
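For defenders modeling this behavior, the channel-and-tone escalation described above can be sketched as a trivial state machine. The channel order and tone labels are illustrative assumptions, not taken from any real toolkit:

```python
from dataclasses import dataclass

# Illustrative escalation chain: each non-response moves the outreach
# to the next channel and hardens the tone, as described in the text.
CHANNELS = ["email", "SMS", "Slack"]
TONES = ["formal", "urgent"]

@dataclass(frozen=True)
class OutreachState:
    channel_idx: int = 0
    tone_idx: int = 0

    def escalate(self) -> "OutreachState":
        """On no response, switch channel and shift tone, capped at the last option."""
        return OutreachState(
            channel_idx=min(self.channel_idx + 1, len(CHANNELS) - 1),
            tone_idx=min(self.tone_idx + 1, len(TONES) - 1),
        )

    def describe(self) -> str:
        return f"{TONES[self.tone_idx]} message via {CHANNELS[self.channel_idx]}"
```

Security teams can use this kind of model in tabletop exercises: a first contact is a “formal message via email”, and each ignored attempt predictably escalates, which is itself a detectable signature when the same persona reappears across channels.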

In one typical scenario, a “colleague” writes via work email, follows up on LinkedIn, and then calls using a cloned voice – all orchestrated by connected AI tools. Dr. Brūzgienė describes this as a new stage of cybercrime evolution: “Social engineering has become scalable, intelligent, and deeply personal. Each victim experiences a unique, evolving deception designed to exploit their psychological and behavioral weak points.”

Lithuania’s Cyber Defense Leadership

Lithuania’s digital ecosystem – known for its advanced e-government architecture and centralized electronic identity (eID) systems – faces unique challenges. However, it also demonstrates remarkable progress. The country has risen steadily in international indices, ranking 25th globally in the Chandler Good Government Index (CGGI) and 33rd in the Government AI Readiness Index (2025).

Lithuania’s AI strategy (2021–2030), updated in 2025, has prioritized AI-driven cyber defense, anomaly detection, and resilience-building. The National Cyber Security Centre (NKSC) integrates AI into threat monitoring, cutting ransomware incidents fivefold between 2023 and 2024. Collaboration with NATO, ENISA, and EU partners further enhances Lithuania’s hybrid defense capabilities.

“We see cyber resilience not just as a technical task but as a foundation for democracy and economic growth,” says Survilas. “Through the safe and inclusive e-society mission, we are not only protecting our digital infrastructure but also empowering citizens to trust and participate in the digital world. AI will inevitably be used for malicious purposes, but we can also use AI to defend. The key is collaboration across sectors and continuous education. This mission is one of the tools helping us turn that idea into concrete projects, pilots, and services for people in Lithuania.”

This article is a contributed piece from one of our valued partners.