# Mithridatium: An Open-Source Toolkit for Verifying the Integrity of Pretrained Machine Learning Models
## Why Mithridatium?

Today’s ML ecosystem assumes that pretrained models are safe. In reality, the model file itself can be a silent attack vector:

- poisoned tra...