{"id":7043,"date":"2026-05-11T14:17:41","date_gmt":"2026-05-11T14:17:41","guid":{"rendered":"https:\/\/delimiter.online\/blog\/hugging-face-malware\/"},"modified":"2026-05-11T14:17:41","modified_gmt":"2026-05-11T14:17:41","slug":"hugging-face-malware","status":"publish","type":"post","link":"https:\/\/delimiter.online\/blog\/hugging-face-malware\/","title":{"rendered":"Fake OpenAI Privacy Filter Tops Hugging Face, Steals Windows Data"},"content":{"rendered":"<p>A fraudulent repository on the <a href=\"https:\/\/delimiter.online\/blog\/softbank-robotics-data-center-ipo\/\" title=\"Hugging Face\">Hugging Face<\/a> platform climbed to the number one spot on the trending list by posing as <a href=\"https:\/\/delimiter.online\/blog\/purple-team-challenges\/\" title=\"OpenAI\">OpenAI<\/a>\u2019s official Privacy Filter model, ultimately delivering Rust-based information-stealing <a href=\"https:\/\/delimiter.online\/blog\/cybersecurity-threats-15\/\" title=\"malware\">malware<\/a> to Windows users.<\/p>\n<p>The repository, identified as \u201cOpen-OSS\/privacy-filter,\u201d was a direct impersonation of the legitimate model released by OpenAI late last month under the name \u201copenai\/privacy-filter.\u201d The malicious copy included the original project\u2019s entire description and documentation to appear authentic.<\/p>\n<p>According to security researchers at ReversingLabs, who first disclosed the incident, the fake repository garnered more than 244,000 downloads before it was taken down. The campaign targeted developers and AI enthusiasts who were eager to test OpenAI\u2019s new privacy-focused tool.<\/p>\n<h2>How the Attack Worked<\/h2>\n<p>The malicious repository did not contain the actual AI model weights. Instead, it hosted a Python package that, when installed, triggered a chain of payloads. 
The initial script downloaded a Rust-based binary designed to collect sensitive information from infected Windows systems.<\/p>\n<p>Researchers stated that the malware targeted credentials, browser cookies, cryptocurrency wallet data, and other personal files. The use of Rust, a compiled language still uncommon in malware development, helped the payload evade detection tools tuned to traditional C++ or Python payloads.<\/p>\n<p>The attackers replicated every detail of the legitimate repository, including the license file, model card, and example scripts. This level of precision made it difficult for casual users to differentiate the fake from the official release.<\/p>\n<h2>Platform Response and Implications<\/h2>\n<p>Hugging Face removed the malicious repository shortly after being notified by ReversingLabs. The platform has not yet released a detailed statement on the incident, but the event has raised concerns about the security of open-source AI model distribution.<\/p>\n<p>Security experts noted that this incident mirrors a broader trend of supply chain attacks targeting AI and machine learning platforms. As open-weight models grow in popularity, threat actors are increasingly exploiting user trust in official repositories.<\/p>\n<p>\u201cThis attack demonstrates how easily a well-crafted impersonation can bypass user scrutiny on popular AI hosting platforms,\u201d said a ReversingLabs analyst. \u201cUsers must verify the publisher\u2019s identity and check for digital signatures before downloading any model files.\u201d<\/p>\n<p>OpenAI has not commented directly on the malicious repository. However, the company\u2019s official Privacy Filter model remains available on Hugging Face, and the company has encouraged users to download only from verified channels.<\/p>\n<h2>Implications for the Developer Community<\/h2>\n<p>The incident has prompted calls for stronger verification mechanisms on Hugging Face and similar platforms. 
Suggestions include mandatory code signing, two-factor authentication for repository ownership, and real-time scanning of uploaded packages for known malicious patterns.<\/p>\n<p>For developers, this case serves as a reminder to inspect repository metadata, check the publisher\u2019s history, and avoid running code from unverified sources. Even high-ranking trending repositories can harbor malicious code.<\/p>\n<p>The use of Rust for malware development is also a growing concern. While the language offers performance and safety advantages for legitimate applications, it simultaneously makes reverse engineering and detection more challenging for cybersecurity firms.<\/p>\n<p>The investigation into the fake repository\u2019s origin is ongoing. Researchers have not yet identified the individual or group behind the campaign, nor have they determined the full scope of data theft. Affected users are advised to run full antivirus scans and rotate passwords for any accounts accessed from the compromised system.<\/p>\n<p>As the AI ecosystem expands, similar attacks will likely increase. Platforms will need to adapt their security postures to protect users from impersonation and supply chain attacks. Developers are urged to adopt a zero-trust approach when downloading third-party models or packages.<\/p>\n<p>Source: GeekWire<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A fraudulent repository on the Hugging Face platform climbed to the number one spot on the trending list by posing as OpenAI\u2019s official Privacy Filter model, ultimately delivering Rust-based information-stealing malware to Windows users. 
The repository, identified as \u201cOpen-OSS\/privacy-filter,\u201d was a direct impersonation of the legitimate model released by OpenAI late last month under [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":7044,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[505],"tags":[3354,544,265,8254,951],"class_list":["post-7043","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-security","tag-hugging-face","tag-malware","tag-openai","tag-rust","tag-supply-chain-attack"],"_links":{"self":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts\/7043","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/comments?post=7043"}],"version-history":[{"count":0,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts\/7043\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/media\/7044"}],"wp:attachment":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/media?parent=7043"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/categories?post=7043"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/tags?post=7043"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}