“The technological advances in AI continue to fuel the digital transformation, enabling new applications and benefits that a mere few years ago would not have been possible,” said Wael William Diab, Chair of SC 42.
He stressed that SC 42 is uniquely positioned to address these emerging areas by collaborating with the diverse portfolio of committees in IEC and ISO that cover many of these domains. “Moreover, our unique holistic approach to looking at the entire AI ecosystem has enabled us to dynamically react to emerging AI requirements and expand the work programme accordingly. The participating members, the diversity of stakeholders and the growth in our work programme continue to be strong despite the pandemic,” he continued.
Closer to home, Singapore is positioning itself as a central hub for global AI‑testing standards, championing a new international benchmark that aims to make generative AI systems more trustworthy, comparable and safe.
At the 17th ISO/IEC JTC 1/SC 42 plenary meeting in Singapore, the country put forward ISO/IEC 42119‑8, the first international standard dedicated to testing methodologies for generative AI systems, with a specific focus on benchmarking and red teaming.
The draft standard is designed to create a structured, reproducible testing framework so that AI outputs can be assessed consistently across organisations, which in turn boosts transparency, trust and adoption.
Why a global testing standard matters
As AI moves beyond point‑product use into embedded, agentic workflows, there is growing recognition that globally recognised standards are essential for reliability, safety and cross‑border interoperability. “Standards are the quiet infrastructure that enables interoperability, consistency and trust at scale,” noted Ng Cher Pong, chief executive of the Infocomm Media Development Authority (IMDA), in his opening address to the SC 42 plenary.
He drew a parallel with telecoms, where de facto standards such as 3GPP’s LTE and 5G ensure that devices and networks work together seamlessly, and argued that AI needs the same stable, shared foundation.
Standards like ISO/IEC 42001 on AI Management Systems already show how frameworks can translate broad principles into concrete governance. Ng cited Changi Airport Group (CAG) in Singapore as an early adopter that used 42001 certification to institutionalise clearer accountability, risk assessment and oversight of AI use cases across the organisation.
“This is one example of how AI standards can help translate broad AI principles into concrete actionables and controls,” he said, underlining how certification can drive internal discipline and stakeholder confidence.
Singapore’s AI‑testing and assurance push
The proposed ISO/IEC 42119‑8 standard builds on IMDA’s earlier work on domestic testing frameworks, such as the AI Verify Toolkit and the Starter Kit for Testing of LLM‑Based Applications for Safety and Reliability, as well as on the Global AI Assurance Sandbox.
The Sandbox, run under the AI Verify Foundation, is already testing AI systems against real‑world problem statements, generating findings that IMDA views as “pre‑standardisation material” that can feed into broader international efforts.
Ng highlighted that standards must keep pace with AI’s rapid evolution, which has moved from generative to multimodal and now to agentic AI in little over three years. “Standards setting cannot move at a glacial pace,” he said, warning that overly slow processes risk irrelevance.
His remarks echo wider industry concern that AI‑governance frameworks must evolve as quickly as the technology itself, particularly in testing, where methods for benchmarking and red teaming need to stay ahead of adversarial and operational misuse.
Inclusiveness, testing and real‑world practice
A second theme in the address focused on inclusivity: standards should be representative across sectors, cultures and languages, and Southeast Asia, one of the world’s most diverse regions, must be plugged into the standards‑development process.
To that end, IMDA and Enterprise Singapore co‑organised a foundational AI‑standards workshop with the American National Standards Institute (ANSI), helping ASEAN member states build capacity and develop tailored national action plans on AI standards.
Ng also stressed the need to strengthen the link between standards and testing in practice. “Testing assures users that the product or system meets certain standards, and if done properly, encourages more widespread usage,” he said, pointing to ISO/IEC 42119‑8 on generative‑AI testing and 42119‑7 on AI red teaming as cornerstones for more trustworthy and repeatable assessment.
The Global AI Assurance Sandbox and the AI Assurance Exchange, where global standard‑setters, policymakers and industry leaders will discuss implementation, are intended to turn those standards into tangible practices rather than theoretical checklists.
Ultimately, he framed standards as a means to an end: “Their value lies in how they are put to action, in real world applications and use cases, to solve problems and enhance trust.”
Singapore’s role in hosting SC 42 and proposing ISO/IEC 42119‑8 signals a strategy to shape AI governance from the testing and assurance layer upward, giving COOs and AI deployers a clearer, more predictable framework for safe, scalable AI‑enabled operations.