Lovable AI Found Most Vulnerable to VibeScamming, Letting Anyone Build a Live Scam Page

6 Min Read

Lovable, a generative artificial intelligence (AI)-powered platform that allows for creating full-stack web applications using text-based prompts, has been found to be susceptible to jailbreak attacks, allowing novices and aspiring cybercriminals to set up lookalike credential-harvesting pages.

“As a purpose-built tool for creating and deploying web apps, its capabilities line up perfectly with every scammer’s wish list,” Guardio Labs’ Nati Tal said in a report shared with The Hacker News. “From pixel-perfect scam pages to live hosting, evasion techniques, and even an admin dashboard to track stolen data, Lovable did it all without hesitation.”

The technique has been codenamed VibeScamming, a play on the term vibe coding, which refers to an AI-dependent programming technique where software is produced by describing the problem statement in a few sentences as a prompt to a large language model (LLM) tuned for coding.

The abuse of LLMs and AI chatbots for malicious purposes is not a new phenomenon. In recent weeks, research has shown that threat actors are abusing popular tools such as OpenAI ChatGPT and Google Gemini to assist with malware development, research, and content creation.

What’s more, LLMs such as DeepSeek have been found susceptible to prompt attacks and jailbreaking techniques like Bad Likert Judge, Crescendo, and Deceptive Delight, which allow the models to bypass safety and ethical guardrails and generate otherwise prohibited content. This includes creating phishing emails, keyloggers, and ransomware samples, albeit with additional prompting and debugging.

In a report released last month, Broadcom-owned Symantec demonstrated how AI agents could be used to automate the entire process of finding a particular person’s email address, creating PowerShell scripts that gather system information, uploading them to Google Drive, and executing them.

Lovable AI VibeScamming

The growing popularity of AI tools means they can significantly lower the barrier to entry for attackers, enabling them to leverage coding capabilities to create functional malware with little to no technical expertise of their own.


A case in point is a new jailbreak approach dubbed Immersive World that makes it possible to create an information stealer capable of harvesting credentials and other sensitive data stored in the Google Chrome browser. The technique “uses narrative engineering to bypass LLM security controls” by creating a detailed fictional world and assigning roles with specific rules so as to get around restricted operations.

Guardio Labs’ latest analysis takes this a step further, showing that platforms like Lovable and Anthropic Claude can be weaponized to generate complete scam campaigns, replete with SMS text message templates, Twilio-based SMS delivery of the fake links, content obfuscation, defense evasion, and Telegram integration.


VibeScamming starts with a direct prompt asking the AI tool to automate each step of the attack cycle, evaluating its initial response, and then adopting a multi-prompt approach to gently steer the LLM into generating the intended malicious response. This phase, known as “level up,” involves enhancing the phishing page, refining delivery methods, and increasing the legitimacy of the scam.

Lovable, Guardio found, not only produces a convincing login page mimicking the real Microsoft sign-in page, but also auto-deploys the page on a URL hosted on its own subdomain (i.e., *.lovable.app) and redirects victims to office(.)com after their credentials are entered.

On top of that, both Claude and Lovable appear to comply with prompts asking for help in preventing the fraudulent pages from being flagged by security solutions.

“What’s even more striking is not just the graphical similarity, but also the user experience,” Tal said. “It mimics the real thing so well that it’s arguably smoother than the actual Microsoft login flow. This demonstrates the raw power of task-focused AI agents and how, without strict hardening, they can unknowingly become tools of abuse.”


“Not only did it generate a scam page with full credential storage, but it also presented us with a fully functional admin dashboard to review all the captured data (credentials, IP addresses, timestamps, and full plaintext passwords).”

Alongside the findings, Guardio has also released the first version of what it calls the VibeScamming Benchmark to put generative AI models through the wringer and test their resilience against potential abuse in phishing workflows. ChatGPT scored 8 out of 10, Claude 4.3, and Lovable 1.8, indicating high exploitability.

“ChatGPT, while arguably the most capable general-purpose model, also turned out to be the most cautious one,” Tal said. “Claude, by contrast, began with solid pushback but proved easily persuadable. Once the requests were framed as ‘ethical’ or ‘security research,’ it offered surprisingly robust guidance.”
