AI Governance Consulting
Without clear guardrails, AI systems can introduce bias, security vulnerabilities, legal exposure, and reputational risk. A lack of oversight is one of the leading reasons AI initiatives stall or fail to scale.
Adopt a responsible AI governance program that establishes accountability, escalation paths, decision rights, and oversight structures across your AI lifecycle.
Evaluate risks across your AI use cases using qualitative and quantitative assessments to identify, assess, and mitigate threats while ensuring compliance.
Deliver comprehensive AI ethics and literacy training to employees and stakeholders, enabling them to understand AI's opportunities, risks, and obligations.
Conduct independent audits that evaluate AI systems for fairness, accuracy, security, and compliance, ensuring accountability and informed governance.
🛡️
Navigate the EU AI Act, NIST AI RMF, and internal policies to reduce regulatory and reputational risk.
📈
Advance accountability, decision rights, and oversight structures across your AI lifecycle.
👥
Create an AI-capable workforce that recognizes opportunities and risks while advancing business goals.

SB21-169 is Colorado's groundbreaking insurance law requiring insurers to govern their use of external consumer data and algorithms, including artificial intelligence, to prevent unfair discrimination in insurance practices.
Its purpose is to ensure that the increasing use of AI, machine learning, and big data in insurance underwriting, pricing, and claims does not result in bias, unfair discrimination, or harm to consumers, particularly members of protected classes.
It applies to all life insurers operating in Colorado that use external consumer data and information sources (ECDIS), algorithms, and predictive models to make decisions about consumers. Other lines of insurance may follow.
Requires insurers to establish a governance and risk management framework for ECDIS and algorithms.
Insurers must demonstrate that their systems do not result in unfair discrimination.
Applies to third-party models and vendor tools used in decision-making.
Insurers must submit reports and documentation to Colorado’s Division of Insurance (DOI).
Rulemaking was finalized in 2023.
Insurers must begin compliance activities and submit their compliance plan in 2024.
Enforcement and evaluation of plans begin shortly after submissions.
The Act applies only in the state of Colorado, but it sets a precedent that other U.S. states may follow, especially as AI regulation gains traction.
Regulatory action from the Colorado Division of Insurance
Potential suspension or revocation of licenses
Civil penalties or financial enforcement actions, depending on the severity of the violation
AI used in underwriting, pricing, marketing, or claims must be explainable and auditable.
Requires documentation of data sources, model design, training, testing, and monitoring.
Bias audits and fairness assessments must be conducted.
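SB21-169 does not prescribe a specific fairness metric, but bias audits in practice often begin with simple outcome-rate comparisons across groups. The sketch below is an illustration only, not a regulatory requirement: it computes an adverse-impact ratio (the widely used "four-fifths" heuristic), where group names and decision data are hypothetical.

```python
# Illustrative only: SB21-169 does not mandate this metric.
# Compares favorable-outcome rates across two applicant groups
# using the "four-fifths" adverse-impact ratio, a common
# first-pass fairness check.

def approval_rate(decisions):
    """Fraction of favorable outcomes (True = approved)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 are a common flag for further review."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decision data for two applicant groups.
group_a = [True, True, True, True, False]   # 80% approved
group_b = [True, False, True, False, False] # 40% approved

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
print("Flag for review" if ratio < 0.8 else "Within threshold")
```

A full fairness assessment would go well beyond this single ratio (e.g., controlling for legitimate actuarial factors), but a simple rate comparison like this is a common starting point for the audits the Act contemplates.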
Inventory all models and ECDIS in use, especially those affecting consumer outcomes.
Develop a governance framework: include oversight committees, testing protocols, and bias detection methods.
Document the model development lifecycle: training data, assumptions, limitations, and testing.
Conduct bias impact assessments: ensure fairness and non-discrimination.
Review contracts with third-party vendors: ensure they meet the Act's compliance standards.
Submit required documentation: align with DOI reporting deadlines and formats.
Ultimately, Colorado's SB21-169 is a signal to the insurance industry that AI and algorithmic systems must be fair, transparent, and accountable. Proactive compliance today can position organizations as trustworthy leaders in a rapidly evolving regulatory environment.
For more 5-minute reads that matter, stay tuned for further insights on AI, risk, and governance from Obi Ogbanufe, PhD.
Helping professionals build meaningful careers in AI and AI governance, and helping organizations build AI systems people can trust.
© 2026 Obi Ogbanufe. All rights reserved.