AI Regulation and Risk Management: The Current Landscape (2025)

Background on AI

AI technology is evolving rapidly and permeating society and the workplace. As adoption accelerates, risk management, ethics, and regulatory compliance for AI are maturing in parallel. With Japan’s enactment of the AI Promotion Act (formally, the “Act on Promotion of Research, Development, and Utilization of AI-Related Technologies”), this is a timely opportunity to survey AI-related regulations and frameworks.

This article organizes the major guidelines, regulatory frameworks, and fundamental structures of AI governance and AI security that practitioners should understand as of June 2025. (Given the rapid pace of AI development, regulations are expected to move equally fast.)

What Is AI Security?

AI security refers to comprehensive efforts to protect AI systems and their operating environments from various threats, ensuring safe and ethical operation. This encompasses the models themselves, training data, API integrations, output content, and more.

Guidelines and Regulatory Structures

Japan’s Major Guidelines

  • AI Business Operator Guidelines (METI/MIC): Common principles including “human-centered,” “transparency,” “safety,” “accountability,” “education,” and “privacy protection.” Continuously updated as a living document.
  • Generative AI Risk Countermeasures Guidebook (alpha) [Digital Agency]: Covers four typical use cases for generative AI in the public sector (chatbots, summarization, search assistance, code generation). A precursor document to planned comprehensive guidelines.
  • AI Promotion Act (enacted May 2025, promulgated June): Enshrines “human dignity,” “diversity and inclusivity,” and “sustainability” as basic principles. Japan’s first AI-specific law, adopting a promotion-oriented approach without penalties.

International Frameworks and Standards

  • OECD AI Principles (G20 common foundation): Five principles: inclusive growth, human-centered values and fairness, transparency and explainability, robustness/security/safety, and accountability.
  • ISO/IEC 42001: An AI-specific management system standard with a structure similar to ISO/IEC 27001 (ISMS), introducing AI-specific governance elements.
  • NIST AI RMF: A four-layer structure (GOVERN / MAP / MEASURE / MANAGE) for organizing AI risk.
  • EU AI Act (world’s first AI law): Risk categories from unacceptable (social scoring) through high-risk (law enforcement, employment), limited risk (chatbots), to minimal risk (games, etc.). GPAI regulations phased in from August 2025, covering general-purpose AI like ChatGPT.
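The tiered structure above can be sketched as a simple lookup. This is a minimal illustration, not a legal classification method: the use-case keys and the default-to-high-risk rule are assumptions for the sake of example, and real classification requires legal review against the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk"

# Illustrative mapping of use cases to tiers, following the examples
# named in the EU AI Act summary above. The keys are hypothetical labels.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "game_ai": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unknown cases default to HIGH pending review
    (a conservative assumption chosen for this sketch)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown cases to high-risk is a deliberately conservative design choice for this sketch; it forces an explicit review before a new use case is treated as low-risk.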

Security Knowledge Bases

  • MITRE ATLAS: The AI version of MITRE ATT&CK, systematizing attack techniques and risk patterns targeting AI systems.
  • OWASP Top 10 for LLM Applications: Organizes LLM-specific threats, including confidential information leakage, prompt injection, data poisoning, misinformation generation, and resource exhaustion.
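As a minimal illustration of the prompt-injection item, a first-pass input screen might look like the sketch below. The patterns are hypothetical examples; deny-list matching alone is easy to bypass and would be only one layer among several controls in practice.

```python
import re

# Illustrative deny-list patterns for a first-pass prompt-injection
# screen. These two phrasings are assumptions chosen for the example,
# not a complete or recommended pattern set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def screen_input(user_text: str) -> bool:
    """Return True if the input passes the screen, False if it matches
    a known injection pattern and should be blocked or escalated."""
    return not any(p.search(user_text) for p in INJECTION_PATTERNS)
```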

Core Keywords for Accountability and Governance

Transparency: Disclosing purpose, functionality, limitations, and algorithmic overviews. Appropriate information must be provided for each stakeholder group.

Explainability: Technical explanations (internal logic), user-facing explanations (understandable by non-technical audiences), and legal explanations (regulatory compliance).

Accountability: Establishing organizational governance structures, ensuring monitoring, documentation, and auditability, and operating continuous risk management and corrective measures.
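The documentation and auditability requirements above can be made concrete with a per-decision record. The schema below is a sketch under stated assumptions: every field name is illustrative, and an actual schema depends on each organization's regulatory obligations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One auditable record of a model-assisted decision
    (all fields are illustrative assumptions)."""
    model_id: str
    input_summary: str
    output_summary: str
    human_reviewer: Optional[str] = None  # None means no human review yet
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical usage: log a reviewed summarization decision.
record = DecisionRecord(
    model_id="summarizer-v2",        # hypothetical model name
    input_summary="contract draft",
    output_summary="3-point summary",
    human_reviewer="legal-team",
)
```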

Conclusion: Comprehensive Strategic and Institutional Measures

AI governance and AI security are no longer “just for engineers” — they are themes that management, legal, and field implementation must all engage with. As of June 2025, what is needed includes:

  • The ability to understand the intent behind guidelines and regulations and translate them into practice
  • A perspective that integrates technical and organizational risk
  • Design and operations that are conscious of transparency, accountability, and governance

Each organization must reaffirm what it should do from its own position, pursuing both healthy AI utilization and risk control.

Inquiries

For consultations regarding AI adoption, regulatory compliance, or speaking engagements, please contact us.