AI’s growing role in enterprise environments has heightened the urgency for Chief Information Security Officers (CISOs) to drive effective AI governance. With any emerging technology, governance is hard – and making it effective is even harder. The first instinct for most organizations is to respond with rigid policies: write a policy document, circulate a set of restrictions, and hope the risk is contained. Effective governance doesn’t work that way. It must be a living system that shapes how AI is used every day, guiding organizations through safe, transformative change without slowing the pace of innovation.

For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and the greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slowly, and competitors pull ahead with transformative efficiencies that are difficult to match. Either path carries ramifications that can cost CISOs their jobs.

As a result, CISOs cannot lead a “department of no” where AI adoption initiatives are stymied by the organization’s security function. Instead, it is crucial to find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. Over the course of this article, I’ll share three components that can help CISOs make that shift and drive AI governance programs that enable safe adoption at scale.

1. Understand What’s Happening on the Ground

When ChatGPT first arrived in November 2022, most CISOs I know scrambled to publish strict policies that told employees what not to do. The intent was positive – sensitive data leakage was a legitimate concern – but policies written with that “document backward” approach rarely work in practice, however sound they look on paper. Because AI is evolving so quickly, AI governance must be designed with a “real-world forward” mindset that accounts for what’s really happening on the ground inside an organization. That requires CISOs to have a foundational understanding of AI: the technology itself, where it is embedded, which SaaS platforms are enabling it, and how employees are using it to get their jobs done.

AI inventories, model registries, and cross-functional committees may sound like buzzwords, but they are practical mechanisms that can help security leaders develop this AI fluency. For example, an AI Bill of Materials (AIBOM) offers visibility into the components, datasets, and external services that will feed an AI model. Just as a software bill of materials (SBOM) clarifies third-party dependencies, an AIBOM ensures leaders know what data is being used, where it came from, and what risks it introduces.
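To make that concrete, the sketch below shows what a single AIBOM entry might capture in practice. The fields and the example records are illustrative assumptions, not a formal AIBOM schema or any particular vendor’s format.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIBOMEntry:
        """One line item in an illustrative AI Bill of Materials."""
        component: str            # e.g., a base model, fine-tuning dataset, or hosted API
        component_type: str       # "model", "dataset", or "external_service"
        source: str               # vendor, repository, or internal team it came from
        data_classification: str  # sensitivity of the data the component touches
        known_risks: list[str] = field(default_factory=list)
        last_reviewed: date | None = None

    # Hypothetical dependencies for an internal customer-support assistant
    support_assistant_aibom = [
        AIBOMEntry("hosted LLM API", "external_service", "third-party provider",
                   "confidential", ["prompt data leaves the tenant"], date(2025, 6, 1)),
        AIBOMEntry("support-tickets-2024", "dataset", "CX data lake",
                   "restricted", ["may contain customer PII"], date(2025, 5, 15)),
    ]

Even a lightweight record like this forces the questions that matter: what feeds the model, who owns it, and what risk rides along with it.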

Model registries serve a similar role for AI systems already in use. They track which models are deployed, when they were last updated, and how they’re performing to prevent “black box sprawl” and inform decisions about patching, decommissioning, or scaling usage. AI committees ensure that oversight doesn’t fall on security or IT alone. Often chaired by a designated AI lead or risk officer, these groups include representatives from legal, compliance, HR, and business units – turning governance from a siloed directive into a shared responsibility that bridges security concerns with business outcomes.
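As a rough illustration of the registry idea, a record per deployed model plus a simple review check can go a long way. The schema and the 180-day staleness threshold below are assumptions for the sketch, not a prescribed standard.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRecord:
        """Illustrative registry entry for a model already in production."""
        name: str
        owner: str            # accountable business or engineering team
        last_updated: date    # last retrain, patch, or version bump
        eval_score: float     # latest score on the team's own evaluation suite
        status: str = "active"

    def flag_stale_models(registry: list[ModelRecord],
                          max_age_days: int = 180) -> list[ModelRecord]:
        """Surface active models that have gone too long without an update,
        so owners can decide whether to patch, retrain, or decommission."""
        today = date.today()
        return [m for m in registry
                if m.status == "active" and (today - m.last_updated).days > max_age_days]

The point is less the tooling than the discipline: every deployed model has an owner, a last-touched date, and a trigger for review.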

2. Align Policies to the Speed of the Organization

Without real-world forward policies, security leaders often fall into the trap of codifying controls they cannot realistically deliver. I’ve seen this firsthand through a CISO colleague of mine. Knowing employees were already experimenting with AI, he worked to enable the responsible adoption of several GenAI applications across his workforce. However, when a new CIO joined the organization and felt there were too many GenAI applications in use, the CISO was directed to ban all GenAI until one enterprise-wide platform was selected. A year later, that single platform still hadn’t been implemented, and employees were using unapproved GenAI tools that exposed the organization to shadow AI vulnerabilities. The CISO was stuck trying to enforce a blanket ban he couldn’t execute, fielding criticism without the authority to implement a workable solution.

This kind of scenario plays out when policies are written faster than they can be executed, or when they fail to anticipate the pace of organizational adoption. Policies that look decisive on paper can quickly become obsolete if they don’t evolve with leadership changes, embedded AI functionality, and the organic ways employees integrate new tools into their work. Governance must be flexible enough to adapt, or else it risks leaving security teams enforcing the impossible.

The way forward is to design policies as living documents. They should evolve as the business does, informed by actual use cases and aligned to measurable outcomes. Governance also can’t stop at policy; it needs to cascade into standards, procedures, and baselines that guide daily work. Only then do employees know what secure AI adoption really looks like in practice.

3. Make AI Governance Sustainable

Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren’t formally approved. The goal for security leaders shouldn’t be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.

Sustainable governance also stems from Utilizing AI and Protecting AI, two pillars of the SANS Institute’s recently published Secure AI Blueprint[1]. To govern AI effectively, CISOs should empower their SOC teams to put AI to work for cyber defense – automating noise reduction and enrichment, validating detections against threat intelligence, and ensuring analysts remain in the loop for escalation and incident response. They should also ensure the right controls are in place to protect AI systems from adversarial threats, as outlined in the SANS Critical AI Security Guidelines[2].
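For illustration only, here is a minimal sketch of that “analyst in the loop” pattern. The helper functions and routing rules are hypothetical stand-ins, not real SOC tooling, vendor APIs, or anything prescribed by the SANS blueprint.

    # All helpers below are hypothetical stubs, not real SOC tooling or vendor APIs.

    def enrich_alert(alert: dict) -> dict:
        """Stub: in practice, pull asset owner, geolocation, and prior incident history."""
        return {"asset_owner": "unknown", "prior_incidents": 0}

    def matches_threat_intel(context: dict) -> bool:
        """Stub: in practice, compare observed indicators against threat intel feeds."""
        return context.get("prior_incidents", 0) > 0

    def triage_alert(alert: dict) -> dict:
        """Reduce noise, enrich what remains, validate it, and route the rest to a human."""
        # Noise reduction: suppress low-severity alerts with no novel indicators
        if alert.get("severity") == "low" and not alert.get("novel_indicators"):
            return {"action": "suppress", "reason": "known benign pattern"}

        context = enrich_alert(alert)
        # Escalation and response decisions stay with the analyst, not the automation
        action = "escalate_to_analyst" if matches_threat_intel(context) else "queue_for_review"
        return {"action": action, "context": context}

However the pipeline is built, the design choice is the same: automation handles volume, while judgment and response authority remain with people.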

Learn More at SANS Cyber Defense Initiative 2025

This December, SANS will be offering LDR514: Security Strategic Planning, Policy, and Leadership[3] at SANS Cyber Defense Initiative 2025[4] in Washington, D.C. This course is designed for leaders who want to move beyond generic governance advice and learn how to build business-driven security programs that steer organizations to safe AI adoption. It will cover how to create actionable policies, align governance with business strategy, and embed security into culture so you can lead your enterprise through the AI era securely.

If you’re ready to turn AI governance into a business enabler, register for SANS CDI 2025 here[5].

Note: This article was contributed by Frank Kim, SANS Institute Fellow.

