Without clear guardrails, AI systems can introduce bias, security vulnerabilities, legal exposure, and reputational risk. In fact, a lack of oversight is one of the leading reasons AI initiatives stall or fail to scale.
AI governance provides the structure needed to responsibly manage AI technologies—defining roles, setting ethical standards, ensuring regulatory compliance, and aligning AI with business goals. It helps organizations mitigate risk while building trust among stakeholders, employees, and customers.
As AI becomes deeply embedded in decision-making processes, from hiring to healthcare, governance ensures systems are fair, secure, transparent, and aligned with human values.
With governance, companies can confidently innovate, scale solutions responsibly, and unlock AI’s full enterprise value.
Adopt a responsible AI governance program that establishes accountability and escalation paths, decision rights, and oversight structures across your AI lifecycle.
Evaluate risks across your AI use cases, applications, and systems using qualitative and quantitative assessments to identify and mitigate threats while strengthening security and ensuring compliance.
Provide comprehensive AI ethics and literacy training for all your employees and relevant stakeholders across the AI value chain, enabling them to understand AI’s opportunities, risks, security, privacy, legal obligations, and potential harms.
Engage independent audits to evaluate AI systems for fairness, accuracy, security, and compliance—ensuring accountability and informed governance.
Mitigate risk and ensure compliance
Navigate requirements like the EU AI Act, NIST AI RMF, and internal policies to avoid reputational damage.
Drive the development and maturity of AI governance
Advance accountability and escalation paths, decision rights, and oversight structures across your AI lifecycle.
AI ethics education, training, and awareness (AIETA)
Build a strong, AI-capable workforce that recognizes opportunities and risks and is ready to advance organizational goals.
SB21-169 is Colorado's groundbreaking insurance law requiring insurers to govern their use of external consumer data and algorithms, including artificial intelligence, to prevent unfair discrimination in insurance practices.
Its purpose is to ensure that the increasing use of AI, machine learning, and big data in insurance underwriting, pricing, and claims does not result in bias, unfair discrimination, or harm to consumers, particularly protected classes.
The law applies to all life insurers operating in Colorado that use external consumer data and information sources (ECDIS), algorithms, and predictive models to make decisions about consumers. Other lines of insurance may follow.
Requires insurers to establish a governance and risk management framework for ECDIS and algorithms.
Insurers must demonstrate that their systems do not result in unfair discrimination.
Applies to third-party models and vendor tools used in decision-making.
Insurers must submit reports and documentation to Colorado’s Division of Insurance (DOI).
Rulemaking was finalized in 2023.
Insurers must begin compliance activities and submit their compliance plan in 2024.
Enforcement and evaluation of plans begin shortly after submissions.
It currently applies only in Colorado, but it sets a precedent that other U.S. states may follow, especially as AI regulation gains traction.
Regulatory action from the Colorado Division of Insurance
Potential suspension or revocation of licenses
Civil penalties or financial enforcement actions, depending on the violation severity
AI used in underwriting, pricing, marketing, or claims must be explainable and auditable.
Requires documentation of data sources, model design, training, testing, and monitoring; a minimal record sketch follows this list.
Bias audits and fairness assessments must be conducted.
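To make the documentation requirement above more concrete, the sketch below shows one way an internal model record might be structured. This is an illustrative sketch only: the ModelRecord fields, the example values, and the inventory list are hypothetical assumptions, not items taken from SB21-169 or the Division of Insurance rules.

```python
# Illustrative sketch only: one way an insurer might record the documentation
# items named above (data sources, model design, training, testing, monitoring)
# for each model in its inventory. Field names and values are assumptions,
# not requirements taken from SB21-169 or DOI rules.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRecord:
    name: str
    business_use: str                 # e.g., underwriting, pricing, claims
    ecdis_sources: list               # external consumer data and information sources
    design_summary: str               # model type and key assumptions
    training_notes: str               # training data coverage and known limitations
    test_results: dict                # e.g., accuracy and fairness metrics
    monitoring_plan: str              # drift checks, review cadence, owners
    third_party_vendor: Optional[str] = None

# Hypothetical entry in a model inventory.
inventory = [
    ModelRecord(
        name="underwriting-risk-v2",
        business_use="underwriting",
        ecdis_sources=["credit attributes", "public records"],
        design_summary="Gradient-boosted trees; applicant-level features only.",
        training_notes="Applications from 2019-2023; sparse data for some age bands.",
        test_results={"auc": 0.81, "approval_rate_gap": 0.04},
        monitoring_plan="Quarterly drift and fairness review by the oversight committee.",
    )
]
```

Keeping a record like this for every model also supports the inventory and vendor-review steps described next.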
Inventory all models and ECDIS in use – especially those affecting consumer outcomes.
Develop a governance framework – include oversight committees, testing protocols, and bias detection methods.
Document model development lifecycle – training data, assumptions, limitations, and testing.
Conduct bias impact assessments – ensure fairness and non-discrimination (see the sketch after this list).
Review contracts with third-party vendors – ensure they meet the Act’s compliance standards.
Submit required documentation – align with DOI reporting deadlines and formats.
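As a concrete illustration of the bias impact assessment step, the sketch below compares approval rates across groups and flags any group whose rate falls below an assumed 0.8 ratio relative to a reference group. The metric choice, the threshold, the group labels, and the sample data are illustrative assumptions only; neither SB21-169 nor the DOI prescribes this particular check.

```python
# Minimal sketch of a bias impact check: compare approval rates across groups.
# The metric, the 0.8 review threshold, and the sample data are illustrative
# assumptions, not requirements from SB21-169 or the DOI.

from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group from (group, approved) records."""
    counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def impact_ratios(decisions, reference_group):
    """Each group's approval rate relative to the reference group's rate."""
    rates = approval_rates(decisions)
    reference = rates[reference_group]
    return {g: rate / reference for g, rate in rates.items()}

# Hypothetical decision log: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

for group, ratio in impact_ratios(decisions, reference_group="A").items():
    flag = "refer for fairness review" if ratio < 0.8 else "within assumed threshold"
    print(f"group {group}: ratio {ratio:.2f} - {flag}")
```

A real assessment would go further (larger samples, statistical testing, and review of the underlying features and outcomes), but even a simple check like this makes the audit trail concrete.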
Ultimately, Colorado's SB21-169 signals to the insurance industry that AI and algorithmic systems must be fair, transparent, and accountable. Proactive compliance today can position organizations as trustworthy leaders in a rapidly evolving regulatory environment.
For more 5-minute reads that matter, stay tuned for further insights on AI, risk, and governance from Obi Ogbanufe, PhD.