As the development and use of AI systems expand, policymakers increasingly recognise the need for targeted actions that promote beneficial outcomes while mitigating potential harms. Yet there is often a gap between these policy goals and the technical knowledge required for effective implementation, risking ineffective or actively harmful results.
To address this issue, we are hosting a workshop on Technical AI Governance, a nascent field focused on providing analyses and tools to guide policy decisions and enhance policy implementation. With this workshop, we aim to provide a venue that fosters interdisciplinary dialogue between machine learning researchers and policy experts.
Technical AI Governance is a broad field encompassing a range of distinct subareas. Reuel et al. taxonomise the field according to a set of ‘capacities’, which can apply across the AI value chain. These are:
Assessment: The ability to evaluate AI systems, involving both technical analyses and consideration of broader societal impacts;
Access: The ability to interact with AI systems, including model internals, and to obtain relevant data and information without incurring unacceptable privacy costs;
Verification: The ability of developers or third parties to verify claims made about AI systems’ development, behaviours, capabilities, or safety;
Security: The development and implementation of measures to protect AI system components from unauthorised access, use, or tampering;
Operationalisation: The translation of ethical principles, legal requirements, and governance objectives into concrete technical strategies, procedures, or standards;
Ecosystem Monitoring: Understanding and studying the evolving landscape of AI development and application, and associated impacts.
Speakers and Panelists
Our speakers and panelists come from the University of Oxford, Cohere for AI, the University of Cambridge, and Princeton University.