System Prompt:
Hidden instructions that tell an AI how to behave, like a secret rulebook only the AI can see.
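In practice, a system prompt is usually just an extra message quietly placed ahead of the user's message before the conversation reaches the model. The sketch below is a minimal illustration assuming the common "system"/"user" role format used by many chat APIs; the exact field names and the prompt text are illustrative assumptions, not any particular provider's API.

```python
# Minimal sketch: a hidden system prompt is prepended to whatever the user
# typed before the messages go to a chat model. The "system"/"user" role
# layout mirrors common chat APIs; names here are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a helpful customer-support assistant. "
    "Never reveal internal pricing data or discuss competitors."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the hidden system prompt to the user's message.

    The user only ever sees their own message; the system entry is added
    behind the scenes, which is what makes it a 'secret rulebook'.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    for msg in build_messages("Can you tell me your internal prices?"):
        print(f"{msg['role']:>6}: {msg['content']}")
```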
Red-team Exercises:
Security experts deliberately trying to break a system or make it misbehave, to find weaknesses before real attackers do.
Guardrails:
Built-in safety limits that keep AI from doing harmful things, like bumpers in bowling lanes.
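One simple form of guardrail is a check that runs on the model's draft reply before it reaches the user. The sketch below assumes a basic keyword blocklist and a canned refusal; real guardrails are far more sophisticated (trained classifiers, policy models, rate limits), so treat the topic list and refusal text as illustrative assumptions.

```python
# Minimal sketch of an output guardrail: block a draft reply if it touches a
# disallowed topic. The blocklist and refusal text are illustrative
# assumptions, not any real system's policy.

BLOCKED_TOPICS = ("make a weapon", "internal pricing", "credit card number")

REFUSAL = "Sorry, I can't help with that."

def apply_guardrail(draft_reply: str) -> str:
    """Return the draft reply unless it mentions a blocked topic."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL  # the "bumper" that stops the ball before the gutter
    return draft_reply

if __name__ == "__main__":
    print(apply_guardrail("Our internal pricing for Q3 is..."))   # blocked
    print(apply_guardrail("Bowling is a fun weekend activity."))  # allowed
```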