Backdoor:
A hidden trigger in an AI that activates harmful behavior, like a secret password that unlocks trouble.
Fine-tune:
Teaching an already-trained AI new skills or knowledge, like giving advanced lessons to someone who knows the basics.
Model Weights:
The "memory muscles" of an AI that determine how it responds to information based on its training.
Adversarial Testing:
Deliberately trying to trick an AI with challenging inputs to find weaknesses, like testing locks before thieves do.