Human-in-the-loop (HITL) safeguards that AI agents rely on can be subverted, allowing attackers to weaponize them to run malicious code, new research from Checkmarx shows.