This release extends AI governance controls with policy-based access, approvals, and audit trails across the model lifecycle. Policies define which users can perform which actions at each stage, with approval workflows tiered by risk level. Changes to datasets, prompts, and model parameters are tracked so that outcomes remain explainable and reproducible. Evaluation suites run both on a schedule and as a pre-deployment gate to catch regressions early. The aim is practical oversight without unnecessary friction, supporting both innovation and accountability. Customers can adapt the included templates to their own policies, and worked examples show how to document decisions so they are useful during reviews.
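As a minimal sketch of how a risk-tiered approval gate with an audit trail might fit together: the tier names, the `REQUIRED_APPROVALS` table, and the `AuditTrail` class below are illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical mapping from risk tier to the number of distinct approvers
# required; a real deployment would load this from a policy template.
REQUIRED_APPROVALS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AuditTrail:
    """In-memory audit log; production systems would persist entries."""
    entries: list = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
        }
        self.entries.append(entry)
        return entry

def can_deploy(risk_tier: str, approvals: list, trail: AuditTrail) -> bool:
    """Gate a deployment on distinct approvals for its risk tier,
    recording the decision in the audit trail either way."""
    needed = REQUIRED_APPROVALS[risk_tier]
    distinct = set(approvals)
    allowed = len(distinct) >= needed
    trail.record(
        actor="policy-engine",
        action="deployment-check",
        detail=f"tier={risk_tier} approvals={len(distinct)}/{needed} allowed={allowed}",
    )
    return allowed

trail = AuditTrail()
print(can_deploy("high", ["alice", "bob"], trail))  # only 2 of 3 required approvals
print(can_deploy("low", ["alice"], trail))          # 1 of 1 required approvals
```

Because every check, allowed or denied, writes an audit entry, a reviewer can later reconstruct who requested what and why it was permitted, which is the property the release's audit trails aim for.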