Data poisoning can be used to make the AI model do your bidding.
Alternatively, an AI model can be tricked into giving erroneous output by manipulating the data sent into it after it has already been trained.
Both are incredibly difficult to ferret out and guard against.
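To make the distinction concrete, the sketch below illustrates both attack styles against a toy scikit-learn classifier. It is an illustrative assumption, not drawn from any real incident, and every dataset, model and variable name in it is made up for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for whatever data a real model would be trained on.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# 1) Training-time data poisoning: flip the labels on a slice of the training
#    set so the learned decision boundary does the attacker's bidding.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))

# 2) Inference-time manipulation: leave the trained model alone and instead
#    perturb the inputs sent to it, nudging each feature against the model's
#    weight vector so that predictions flip (a crude evasion-style attack).
w = clean_model.coef_[0]
X_perturbed = X_test - 2.0 * np.sign(w)
print("accuracy on perturbed inputs:",
      accuracy_score(y_test, clean_model.predict(X_perturbed)))
```

In both cases the model's own code is untouched, which is what makes these attacks so hard to spot from the inside.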
That was not the case with the nefarious models uploaded to Hugging Face's AI collaboration repository.
Most attention focuses on the training-data stage and the algorithms themselves. But if the training data is corrupted, swapping in an alternative algorithm still produces a compromised model once it is deployed. Even a model built with 100% cybersecure processes can be poisoned through its training data.
There is no defense other than validating all the predictive output, which is very expensive computationally.
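As a rough illustration of what validating the predictive output could look like in practice, the sketch below cross-checks a production model's predictions against a reference model trained on a small slice of data assumed to have been manually vetted. The models, names and thresholds are assumptions for demonstration, not a prescribed defense, and the double scoring of every request is exactly why this approach is computationally expensive.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for production traffic and training history.
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=1)

# The production model trains on the full (possibly poisoned) training set.
production = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The reference model trains only on a small slice we pretend has been
# manually vetted as clean (an assumption made for this sketch).
X_vetted, y_vetted = X_train[:300], y_train[:300]
reference = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_vetted, y_vetted)

# Every incoming request is scored twice; this duplication is the
# computational cost referred to above.
prod_pred = production.predict(X_new)
ref_pred = reference.predict(X_new)

# Disagreements are flagged for review rather than served directly.
flagged = np.flatnonzero(prod_pred != ref_pred)
print(f"{len(flagged)} of {len(X_new)} predictions flagged for extra validation")
```

Even this simple cross-check roughly doubles the inference bill, and it only works to the extent that the vetted data slice really is clean.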
Data poisoning needs to be treated as part of the overall threat universe.
The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.
If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro