
NExT++ Changsha Workshop 2025 on Responsible AI

  • Writer: NExT plusplus
  • Jun 13
  • 3 min read

The NExT Changsha Workshop on Responsible AI was held on June 5-6, 2025, at the historic Yuelu Academy in Changsha. The two-day event brought together researchers, industry experts, and policy leaders from around the world to foster dynamic dialogue and cross-disciplinary collaboration on emerging trends in large foundation models and their real-world applications across science, healthcare, and the humanities. The event was jointly organized by academics from the National University of Singapore and Tsinghua University, with support from Hunan University.


The workshop commenced with welcome remarks by Prof. Jiang Jianhui, Vice President of Hunan University.


This was followed by opening addresses from Prof. Tat-Seng Chua of the National University of Singapore and Prof. Maosong Sun of Tsinghua University, who outlined the key themes and objectives that would guide the two-day program.


The first session of the workshop focused on AI Safety. Dr. Chao Li from Tencent Cloud Tianyu delivered an insightful talk on financial risk management of AI models, and Prof. Minlie Huang from Tsinghua University introduced safety problems of LLMs.


The Large-Scale AI Model in Financial Risk Management, Why and How - Dr Chao Li (Tencent Cloud Tianyu)


Safety Problems of LLMs - Prof Minlie Huang (Tsinghua)


The second session continued to focus on AI Safety. Prof. Xiang Wang from the University of Science and Technology of China delivered a talk on controllable safety of LLMs, and Prof. Yuxiao Dong from Tsinghua University gave a talk on agent abilities of LLMs.


Controllable Safety of Large Models - Prof Xiang Wang (USTC)


Advancing Agent Abilities of LLMs through RL and Inference Scaling - Prof Yuxiao Dong (Tsinghua)


A poster and demo session followed, where researchers from NUS and Tsinghua University showcased their latest work, sparking vibrant discussions among attendees. From Tsinghua University, Zijun Yao, Xiaozhi Wang, Yujia Zhou, Xinyi Li, Zhexin Zhang, Huiyuan Xie, and Ruikun Li presented their research, while Junfeng Fang, Leheng Sheng, Xiaohao Liu, and Xinyu Lin from NUS introduced their exciting work.



The afternoon sessions explored selected topics—education, culture, and agentic AI—where Prof Zhiyuan Liu, Prof Chong Wah Ngo, Prof Min Zhang and Prof Yong Li exchanged insights on the latest advances in LLMs and their societal impact.


Towards a Future Ready AI Campus: Agent Powered Education for Personalized Student Growth - Prof Zhiyuan Liu (Tsinghua)


What LLMs See: Text-to-Visual Reasoning in Cultural Contexts - Prof Chong Wah Ngo (SMU)


LLM and Users: Evaluation, Profiling, and Satisfaction - Prof Min Zhang (Tsinghua)


Navigating the Societal and Cognitive Risks of AI Agents - Prof Yong Li (Tsinghua)


After the presentations, a distinguished panel—Prof. Tat-Seng Chua, Dr. Chao Li, Prof. Xiangnan He, and Prof. Minlie Huang—gathered for a discussion titled "Challenges of AI Safety in the Agentic AI Era", exchanging perspectives on future research directions.


On the second day, the workshop delved into multimodal analysis, generation, and reasoning. Dr. Hanwang Zhang, Prof. Cathal Gurrin, and PhD candidate Leigang Qu presented their latest research on multimodal applications.


Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning - Dr Hanwang Zhang (Huawei, Singapore)


Visual PoV and Progress in Understanding and Retrieving – Prof Cathal Gurrin (Dublin City University, Ireland)


Quality Assessment and Controllable Generative Videos – Leigang Qu (NUS)


The engaging talks were followed by an insightful panel on content generation and quality evaluation, where Prof. See-Kiong Ng, Dr. Hanwang Zhang, Prof. Cathal Gurrin, Prof. Min Zhang, and Prof. Chong Wah Ngo engaged in a deep, spirited discussion.


The workshop concluded with final remarks from Prof. See-Kiong Ng (NUS), reflecting on key insights and future collaborations in AI research.

