Systematizing Responsible AI & Governance

49% of organizations say they'll institute an AI ethics program

33% of organizations audit their AI systems for bias

35% of organizations are prepared to meet the EU's regulatory requirements

Why AI governance matters

Without clear guardrails, AI systems can introduce bias, security vulnerabilities, legal exposure, and reputational risk. In fact, a lack of oversight is one of the leading reasons AI initiatives stall or fail to scale.

AI governance provides the structure needed to responsibly manage AI technologies—defining roles, setting ethical standards, ensuring regulatory compliance, and aligning AI with business goals. It helps organizations mitigate risk while building trust among stakeholders, employees, and customers.

As AI becomes deeply embedded in decision-making processes, from hiring to healthcare, governance ensures systems are fair, secure, transparent, and aligned with human values.

With governance, companies can confidently innovate, scale solutions responsibly, and unlock AI’s full enterprise value.

What you can do

Establish an AI governance program in your organization

Adopt a responsible AI governance program that establishes accountability and escalation paths, decision rights, and oversight structures across your AI lifecycle.

Implement AI risk and impact assessments

Evaluate risks across your AI use cases, applications, and systems, using qualitative and quantitative assessments to identify, prioritize, and mitigate threats while strengthening security and supporting compliance.
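The quantitative side of such an assessment often starts with a simple likelihood-impact matrix. Below is a minimal Python sketch; the scales, tier labels, and escalation thresholds are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical 5x5 likelihood-impact matrix; scales and thresholds are
# illustrative assumptions, not drawn from any particular framework.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_tier(likelihood: str, impact: str) -> tuple[int, str]:
    """Score a risk as likelihood x impact and map it to an action tier."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 15:
        return score, "high: escalate to the governance board"
    if score >= 8:
        return score, "medium: mitigate before deployment"
    return score, "low: accept and monitor"

# Example: a likely (4) x major (4) risk scores 16, landing in the high tier.
print(risk_tier("likely", "major"))
```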

Train users in AI ethics and literacy

Provide comprehensive AI ethics and literacy training for all employees and relevant stakeholders across the AI value chain, enabling them to understand AI's opportunities and risks, including security, privacy, legal obligations, and potential harms.

Commission independent audits of AI systems

Engage independent audits to evaluate AI systems for fairness, accuracy, security, and compliance—ensuring accountability and informed governance.

What you'll achieve

Mitigate risk and ensure compliance

Navigate requirements such as the EU AI Act, the NIST AI RMF, and internal policies to avoid penalties and reputational damage.

Drive the development and maturity of AI governance

Advance accountability and escalation paths, decision rights, and oversight structures across your AI lifecycle.

AI ethics education, training, and awareness (AIETA)

Build a strong, AI-capable workforce that recognizes opportunities and risks and is ready to advance organizational goals.

Trending in Responsible AI

Fast Facts: Understanding the New York AI Regulation (NYC Local Law 144-21)

May 23, 2025 · 2 min read

What is NYC Local Law 144-21?

NYC Local Law 144 of 2021 regulates the use of automated employment decision tools (AEDTs) in New York City. The law is designed to ensure fairness, transparency, and accountability in AI systems used in hiring and promotion decisions. It is often referred to as the AI-in-hiring law.

Who does it affect?

The regulation applies to employers and employment agencies in New York City using AEDTs to screen candidates or employees. It impacts HR professionals, recruiters, AI vendors, and any organization deploying algorithmic tools in the hiring process.

Why was it introduced?

Concerns over algorithmic bias, discrimination, and lack of transparency in AI-driven hiring prompted the regulation. The goal is to protect candidates from unfair treatment and to increase accountability in AI usage.

How does it work?

Key provisions include:

  1. Mandatory bias audits by independent auditors before using AEDTs.

  2. Annual audits to assess disparate impact; see the impact-ratio sketch after this list.

  3. Notification to candidates at least 10 business days before use.

  4. Disclosure of job qualifications and characteristics used by the AEDT.

  5. Public summary of audit results.
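At the core of the disparate-impact analysis is the impact ratio: a category's selection rate divided by the selection rate of the most-selected category. Here is a minimal Python sketch on toy data; the actual categories, intersectional groupings, and reporting format an auditor must use are specified in the DCWP rules.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Selection rate and impact ratio per group.

    outcomes: iterable of (group, selected) pairs, where selected is True
    when the AEDT advanced the candidate.
    """
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        hits[group] += int(selected)
    rates = {g: hits[g] / totals[g] for g in totals}
    top = max(rates.values())  # rate of the most-selected group
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Toy data: group A selected 60/100, group B selected 45/100.
data = [("A", True)] * 60 + [("A", False)] * 40 \
     + [("B", True)] * 45 + [("B", False)] * 55
for group, (rate, ratio) in impact_ratios(data).items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
# A: selection rate 0.60, impact ratio 1.00
# B: selection rate 0.45, impact ratio 0.75
```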

When does it take effect?

Enforcement of NYC Local Law 144 began on July 5, 2023, and the law is considered a model for broader state regulations.

Where does it apply?

Currently enforced within New York City, the regulation is influencing discussions and similar proposals at the state and national levels.

What are the penalties?

Violations can incur civil penalties:

  • $500 for a first violation.

  • $1,500 for subsequent violations, including each day an AEDT is used in non-compliance (see the worked example after this list).
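Because each day of non-compliant AEDT use counts as a separate violation, penalties accumulate quickly. A small illustrative calculation, assuming one violation per day of use (how violations are actually counted and assessed is up to the enforcing agency):

```python
def estimated_penalty(violations: int) -> int:
    """$500 for the first violation, $1,500 for each subsequent one.
    Each day an AEDT is used out of compliance counts as a violation.
    Illustrative only; actual assessments are made by the city."""
    if violations <= 0:
        return 0
    return 500 + 1_500 * (violations - 1)

print(estimated_penalty(1))   # 500
print(estimated_penalty(30))  # 44000: a month of daily non-compliant use
```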

How can organizations comply?

  1. Inventory AI tools used in hiring.

  2. Conduct independent bias audits annually.

  3. Update candidate notifications to meet disclosure requirements (see the deadline sketch after this list).

  4. Publish audit results on the company website.

  5. Train HR teams on AI compliance and ethics.

  6. Review contracts with AI vendors to ensure compliance responsibilities are clear.
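For step 3, the 10-business-day notice window is straightforward to operationalize. A minimal sketch that computes the latest candidate-notification date, counting Monday through Friday and ignoring holidays (a simplifying assumption; this is not legal advice):

```python
from datetime import date, timedelta

def notice_deadline(use_date: date, business_days: int = 10) -> date:
    """Latest date to notify candidates: business_days weekdays before
    the AEDT is used. Holidays are ignored here for simplicity."""
    d = use_date
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

print(notice_deadline(date(2025, 7, 18)))  # 2025-07-04
```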

Finally, New York's AI regulation signals a growing trend toward responsible AI governance. Organizations should proactively adopt transparent and ethical AI practices, not just for compliance but to build trust and equity in their workforce.

For more five-minute reads that matter, stay tuned for insights on AI, risk, and governance from Obi Ogbanufe, PhD.

Tags: AI in Hiring Law, US AI Regulation, automated employment decision tools, AEDT

Copyright 2025. Obi Ogbanufe. All Rights Reserved.