The Beijing Key Laboratory of AI Safety and Superalignment has been established, with Professor Chao Liu serving as Deputy Director
During the Zhongguancun Forum, Beijing announced the establishment of the Collaborative Innovation Matrix for AI Safety Governance. As a member of this matrix, the Beijing Key Laboratory of AI Safety and Superalignment was officially unveiled. The laboratory is led by the Institute of Automation, Chinese Academy of Sciences, and jointly built with Peking University and Beijing Normal University. Official homepage: https://beijing.ai-safety-and-superalignment.cn/
Laboratory Introduction
The Beijing Key Laboratory of AI Safety and Superalignment focuses on addressing the safety risks and governance challenges posed by cutting-edge AI technologies. It aims to make breakthroughs in AI safety and superalignment, explore safe and controllable solutions for AI systems, and establish effective oversight and risk management for advanced AI models and superintelligence. The laboratory will conduct pilot applications on frontier AI models, building a leading-edge safety and governance framework to support the responsible development of artificial general intelligence (AGI) and guide superintelligence toward harmonious coexistence with humanity.
The laboratory is led by the Institute of Automation, Chinese Academy of Sciences (CASIA), with collaborative support from Peking University and Beijing Normal University. It brings together an interdisciplinary research team spanning artificial intelligence, cognitive psychology, brain science, ethics and safety governance, and systems science. By leveraging and consolidating Beijing’s strengths in cross-disciplinary research, the laboratory aims to tackle key scientific and practical challenges in AI safety and superalignment. Its mission is to reshape and optimize the current scientific and technological landscape of AI safety and superalignment, ultimately developing a “Beijing Solution” to ensure the safe and controllable development of AI and superintelligence.
Scientific Research Objectives
- Developing ethical and safe AI models and systems.
- Developing new theories and models that integrate passive and proactive risk prevention for AI.
- Exploring and advancing superalignment theories and technologies that combine alignment based on external supervision with alignment grounded in internal mechanisms.
- Creating a controllable and sustainable AI safety framework for superalignment.
Laboratory Development Goals
- Establishing an internationally renowned interdisciplinary research base for AI safety and superalignment.
- Consolidating Beijing’s cross-disciplinary strengths to lead cutting-edge research in AI safety and superalignment.
- Reconstructing and optimizing existing scientific theories and technological paradigms of superalignment.
- Cultivating innovative talent specialized in AI safety and superalignment for artificial general intelligence and superintelligence.
Introduction to the Beijing Normal University Team
Deputy Director of the Beijing Key Laboratory of AI Safety and Superalignment
Chao Liu
Professor at the Faculty of Psychology, Beijing Normal University
Chang Jiang Young Scholar
National “Ten Thousand Talents Program” Young Top-Notch Talent
Core Research Fellow
Yin Wang
Professor, Beijing Normal University
PhD Supervisor
Recipient of the National Science Fund for Excellent Young Scholars
Core Research Fellow
Qinghua Chen
Professor, Beijing Normal University
PhD Supervisor
Core Research Fellow
Honghong Tang
Associate Professor, Beijing Normal University
Master’s Supervisor