Ningning Wang (王宁宁)

AI Researcher & Undergraduate Student

Senior student at Dalian University of Technology, majoring in Artificial Intelligence. Admitted to the University of Science and Technology of China (USTC) for a Master's degree in Information and Communication Engineering (081000). Passionate about VLM applications and Reinforcement Learning.

Research Interests

Vision-Language Models · Reinforcement Learning · Reasoning Models

Academic Journey

University of Science and Technology of China

Master's in Information and Communication Engineering

2026 - 20xx (expected)

Admitted · Incoming Master's Student

Dalian University of Technology

Bachelor of Engineering in Artificial Intelligence

2022 - 2026 (expected)

Senior Student · Bachelor's · Gap Year

News & Blog

News

Admitted to USTC Master's Program (expected Fall 2026)

December 2025

Publications & Projects

Publications


Efficient Agents: Building Effective Agents While Reducing Cost

N Wang, X Hu, P Liu, H Zhu, Y Hou, H Huang, S Zhang, J Yang, J Liu, et al.


The remarkable capabilities of Large Language Model (LLM)-driven agents have enabled sophisticated systems to tackle complex, multi-step tasks, but their escalating costs threaten scalability and accessibility. This work presents the first systematic study of the efficiency-effectiveness trade-off in modern agent systems, addressing the critical need for cost-effective designs without sacrificing performance. We investigate three key questions: (1) How much complexity do agentic tasks inherently require? (2) When do additional modules yield diminishing returns? (3) How much efficiency can be gained through the design of efficient agent frameworks? Through an empirical analysis on the GAIA benchmark, we evaluate the impact of LLM backbone selection, agent framework designs, and test-time scaling strategies. Using the cost-of-pass metric, we quantify the efficiency-performance trade-off across these dimensions. Our findings inform the development of Efficient Agents, a novel agent framework that matches its complexity to task requirements. Efficient Agents retains 96.7% of the performance of OWL, ...
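For readers unfamiliar with the cost-of-pass metric mentioned above: it measures the expected cost of obtaining one correct solution. The minimal sketch below is an illustration of that idea, not the paper's implementation; the `Run`, `cost_usd`, and `solved` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Run:
    cost_usd: float  # API cost of one attempt at a task
    solved: bool     # whether the attempt passed the benchmark check

def cost_of_pass(runs: list[Run]) -> float:
    """Expected cost to obtain one correct solution:
    mean cost per attempt divided by the success rate."""
    mean_cost = sum(r.cost_usd for r in runs) / len(runs)
    success_rate = sum(r.solved for r in runs) / len(runs)
    # No passing attempts means the expected cost diverges.
    return float("inf") if success_rate == 0 else mean_cost / success_rate

# Example: 10 attempts at $0.30 each, 7 passing -> 0.30 / 0.7 ≈ $0.43
runs = [Run(cost_usd=0.30, solved=(i < 7)) for i in range(10)]
print(f"cost-of-pass: ${cost_of_pass(runs):.2f}")
```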


Kongzi: A Historical Large Language Model with Fact Enhancement

J Yang, N Wang, Y Zhao, C Feng, J Du, H Pang, Z Fang, X Cheng


The capabilities of the latest large language models (LLMs) have been extended from pure natural language understanding to complex reasoning tasks. However, current reasoning models often exhibit factual inaccuracies in longer reasoning chains, which poses challenges for historical reasoning and limits the potential of LLMs in complex, knowledge-intensive tasks. Historical studies require not only the accurate presentation of factual information but also the ability to establish cross-temporal correlations and derive coherent conclusions from fragmentary and often ambiguous sources. To address these challenges, we propose Kongzi, a large language model specifically designed for historical analysis. Through the integration of curated, high-quality historical data and a novel fact-reinforcement learning strategy, Kongzi demonstrates strong factual alignment and sophisticated reasoning depth. Extensive experiments on tasks such as historical question answering and narrative generation demonstrate that Kongzi outperforms existing models in both factual accuracy and reasoning depth...

Projects