
Securing AI Model Weights
As frontier artificial intelligence (AI) models, that is, models that match or exceed the capabilities of the most advanced models at the time of their development, become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights, the learnable parameters that encode the core intelligence of an AI, from theft by a variety of potential attackers. Specifically, the authors (1) identify 38 meaningfully distinct attack vectors, (2) explore a variety of potential attacker operational capacities, from opportunistic (often financially driven) criminals to highly resourced nation-state operations, (3) estimate the feasibility of each attack vector's execution by different categories of attackers, and (4) define five security levels and recommend preliminary benchmark security systems that roughly achieve those levels. This report can help security teams in frontier AI organizations update their threat models and inform their security plans, and it can help policymakers who work with AI organizations better understand how to engage on security-related topics.