Whenever a powerful new technology appears on the enterprise landscape, it’s easy to get lost in the buzz and ignore the basics, particularly IT governance. That was the mistake some companies made decades ago with the advent of commercial virtualization technology. We saw it again when cloud computing went mainstream circa 2010. And for some businesses, it’s happening now, in the midst of AI adoption.
It’s easy enough to understand why organizations overlook governance in the context of technologies such as AI. In the face of endless pressure to adopt new technology, lest they fall behind the competition, they have a tendency to focus on building more than on maintaining.
But this is a dangerous mistake. The longer it takes to bring governance policies up to speed with innovations like AI, the more businesses risk failing to design and operate new solutions in ways that mitigate technical, security and cost-management challenges.
This is why, although AI may still feel like a fledgling technology to many businesses, now is the time to begin updating governance strategies for the AI era. If you wait until AI is already powering mission-critical business operations to think about how you’ll manage AI governance risks, it’s too late.
To that end, here’s a practical guide to establishing an AI governance strategy.
The basics of AI governance
When I talk about AI governance, I’m referring to the policies and procedures that businesses employ to use AI technology effectively, efficiently and securely. Like any technology, AI poses a variety of risks, and governance practices help keep those risks in check.
Importantly, establishing AI governance doesn’t require rethinking your company’s IT governance strategy from the ground up. It’s about building upon or extending existing governance policies and processes to address AI.
Indeed, part of the very point of a governance strategy is that it provides a foundation for managing all types of technologies by establishing best practices and keeping them in place as a business’s technology portfolio evolves. You shouldn’t have to reinvent the wheel to accommodate AI governance, although you should expect to have to make some changes.
A practical approach to AI governance
For AI governance in particular, there are three main areas where the typical business will likely need to update its governance policies and practices.
SysOps and AI
The first is system operations, or SysOps. SysOps governance for AI encompasses managing the various types of technical risks and challenges that could negatively impact AI investments.
Specifically, the questions that an effective AI SysOps governance strategy should address include:
- Data availability: Is sufficient data available to support AI needs like training and inference?
- Data quality: Is the data that powers AI workloads of appropriate quality?
- AI architectures: Which large language models (LLMs) will the business use?
- AI implementation: Will the business implement AI models in-house, rely on third-party vendors or use a combination of these approaches?
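To make the data availability and data quality questions above concrete, here is a minimal sketch of a pre-training "data gate" in Python. The thresholds and field names are illustrative assumptions, not recommendations; a real policy would set them based on the workload.

```python
# Minimal sketch of a pre-training data gate: verify that a dataset
# meets basic availability and quality thresholds before an AI
# workload is allowed to consume it. Thresholds are illustrative.

MIN_ROWS = 10_000        # assumed minimum sample size for training
MAX_NULL_RATE = 0.05     # assumed tolerance for missing values per field

def passes_data_gate(rows: list[dict], required_fields: list[str]) -> bool:
    """Return True if the dataset is large enough and complete enough."""
    if len(rows) < MIN_ROWS:
        return False  # data availability check
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if nulls / len(rows) > MAX_NULL_RATE:
            return False  # data quality check
    return True
```

A gate like this can run in a pipeline step before training or fine-tuning jobs are scheduled, turning the governance policy into an automated control rather than a manual review.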
SecOps for AI
In the context of AI, security operations, or SecOps, must grapple with some special security requirements that don’t apply to other types of workloads, such as:
- LLM selection: Which criteria will define whether a model is acceptable for use from a security standpoint? For example, should the business only use models that offer an assurance that they won’t leak data to third parties?
- Prompt filtering: How will the business monitor prompts that users input into AI models? How will it handle issues like malicious prompts injected by attackers to circumvent the model's safety controls?
- Output filtering: Similarly, how will the organization monitor model output and filter it to control for risks like exposing sensitive data to a user who shouldn’t be able to view it?
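As a rough illustration of the prompt- and output-filtering controls above, the sketch below checks inbound prompts against known injection patterns and redacts sensitive-looking strings from model output. The patterns are placeholder assumptions; a production rule set would be far broader and regularly updated.

```python
import re

# Minimal sketch of prompt and output filtering at an LLM gateway.
# The patterns are illustrative placeholders, not a production rule set.

BLOCKED_PROMPT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSNs, as one example

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that match known injection patterns."""
    return not any(p.search(prompt) for p in BLOCKED_PROMPT_PATTERNS)

def redact_output(text: str) -> str:
    """Mask sensitive data (here, SSN-shaped strings) in model output."""
    return SSN_PATTERN.sub("[REDACTED]", text)
```

Placing both checks in a single gateway in front of the model gives the SecOps team one enforcement point and one audit log, rather than scattering controls across applications.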
AI FinOps
FinOps, which is short for financial operations, is a well-established governance practice in the realm of cloud computing, where organizations use FinOps to help rein in unnecessary cloud spending. As AI becomes increasingly important, extending FinOps governance to manage AI costs will become critical, too.
To that end, businesses that use AI must ensure that their FinOps strategies can accommodate priorities like the following:
- Query cost tracking: How will the organization monitor the cost of every query that users submit to an LLM?
- Quality vs. cost comparison: How will the company evaluate whether the quality of LLM output justifies the cost of running a query? This is critical because model costs and quality can vary widely.
- Rebilling: If it’s necessary to integrate AI costs into a rebilling process so that the specific user or group within the organization is charged, how will the organization implement that process?
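The three FinOps priorities above can be combined in a simple cost ledger: compute each query's cost from token counts, then aggregate by team for rebilling. The per-token prices and model names below are assumptions for illustration only; real rates vary by model and vendor.

```python
from collections import defaultdict

# Minimal sketch of per-query LLM cost tracking with rebilling by team.
# Prices and model names are placeholders, not real vendor rates.

PRICE_PER_1K_TOKENS = {            # assumed (input, output) USD rates
    "model-small": (0.0005, 0.0015),
    "model-large": (0.01, 0.03),
}

class CostLedger:
    def __init__(self) -> None:
        self.by_team: dict[str, float] = defaultdict(float)

    def record(self, team: str, model: str,
               in_tokens: int, out_tokens: int) -> float:
        """Record one query's cost, attribute it to a team, return the cost."""
        in_rate, out_rate = PRICE_PER_1K_TOKENS[model]
        cost = (in_tokens / 1000) * in_rate + (out_tokens / 1000) * out_rate
        self.by_team[team] += cost
        return cost
```

Keeping per-query costs alongside the team that incurred them also supports the quality-versus-cost comparison: the same ledger can be joined against output-quality metrics to ask whether a cheaper model would have sufficed.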
Evolving IT governance for the AI era
AI governance can seem like a daunting topic because AI in the enterprise poses novel types of risks that many businesses have never encountered before, at least not in the particular form that they take within AI workloads.
But when you break it down, you realize that those risks are not as unfamiliar as they may seem. With a few changes or additions, existing governance policies can address them.
But again, the critical thing is to ensure that your business begins making these governance changes at the same time that it adopts AI. Don’t wait until you’re already experiencing unchecked risks to implement governance controls designed to mitigate them. Build governance into your AI strategy from day one.
Eamonn O’Neill is the co-founder and CTO of Lemongrass.