Trending Papers - Hugging Face

by AK and the research community

Submitted by nielsr

Geometric Context Transformer for Streaming 3D Reconstruction

LingBot-Map is a feed-forward 3D foundation model that reconstructs scenes from video streams using a geometric context transformer architecture with specialized attention mechanisms for coordinate grounding, dense geometric cues, and long-range drift correction, achieving stable real-time performance at 20 FPS.

Submitted by unilm

VibeVoice Technical Report

VibeVoice synthesizes long-form multi-speaker speech using next-token diffusion and a highly efficient continuous speech tokenizer, achieving superior performance and fidelity.

Submitted by jianchen0311

DFlash: Block Diffusion for Flash Speculative Decoding

DFlash is a speculative decoding framework that uses a lightweight block diffusion model for parallel token drafting, achieving significant speedup over existing autoregressive methods while maintaining high-quality outputs.

Z Lab (z-lab)

· Published on Feb 5, 2026
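For readers new to the technique, the draft-then-verify idea behind speculative decoding can be sketched in a few lines. This is a toy illustration with invented stand-in models, not DFlash's actual block-diffusion drafter: a cheap drafter proposes k tokens at once, and the target model keeps the longest agreeing prefix, so several tokens can be accepted per verification pass.

```python
def draft_model(prefix, k=4):
    # Hypothetical cheap drafter: usually guesses last+1, last+2, ...,
    # but deliberately misses on multiples of 5 to show rejection.
    last = prefix[-1]
    return [99 if (last + i + 1) % 5 == 0 else last + i + 1 for i in range(k)]

def target_model(prefix):
    # Hypothetical expensive model: always emits last token + 1.
    return prefix[-1] + 1

def speculative_step(prefix, k=4):
    accepted = []
    for tok in draft_model(prefix, k):
        if target_model(prefix + accepted) == tok:
            accepted.append(tok)  # target agrees: draft token kept for free
        else:
            accepted.append(target_model(prefix + accepted))  # correction
            break  # remainder of the draft is discarded
    return accepted

out = [0]
while len(out) < 12:
    out.extend(speculative_step(out))
print(out)  # 0..14, generated in 5 verification passes
```

The speedup comes from the verification pass scoring all drafted positions at once; output quality matches the target model because every kept token is one the target would have produced.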

Submitted by taesiri

LightRAG: Simple and Fast Retrieval-Augmented Generation

LightRAG improves Retrieval-Augmented Generation by integrating graph structures for enhanced contextual awareness and efficient information retrieval, achieving better accuracy and response times.

  • 5 authors

· Published on Oct 8, 2024
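The core idea of pairing keyword retrieval with a graph of entity relations can be caricatured in a few lines. This is a toy sketch with invented data and scoring, not LightRAG's actual index: seed entities are matched against the query, then expanded one hop along the graph so related context is retrieved even when it shares no keywords with the query.

```python
# Toy graph-augmented retrieval. All entities, edges, and passages
# below are invented for illustration.
graph = {
    "insulin":  {"pancreas", "glucose"},
    "glucose":  {"insulin", "liver"},
    "pancreas": {"insulin"},
    "liver":    {"glucose"},
}
passages = {
    "insulin":  "Insulin lowers blood sugar.",
    "glucose":  "Glucose is the body's main fuel.",
    "pancreas": "The pancreas secretes insulin.",
    "liver":    "The liver stores glucose as glycogen.",
}

def retrieve(query):
    seeds = {e for e in graph if e in query.lower()}          # keyword match
    expanded = seeds | {n for e in seeds for n in graph[e]}   # one-hop neighbors
    return sorted(passages[e] for e in expanded)

ctx = retrieve("How does insulin work?")
print(ctx)
```

Note how the pancreas passage is pulled in purely through the graph edge, not through any term overlap with the query — that is the contextual-awareness gain the summary refers to.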

Submitted by akhaliq

Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory

Mem0, a memory-centric architecture with graph-based memory, enhances long-term conversational coherence in LLMs by efficiently extracting, consolidating, and retrieving information, outperforming existing memory systems in terms of accuracy and computational efficiency.

· Published on Apr 28, 2025
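The extract, consolidate, retrieve loop the summary describes can be sketched with trivial stand-ins. In a real system the extractor would be an LLM and retrieval would use embeddings; here both are toy rules, and the "key: value" format is an invented convention for illustration only.

```python
memory = {}  # fact key -> latest value

def extract(utterance):
    # Hypothetical extractor: "key: value" statements become facts.
    if ":" in utterance:
        k, v = utterance.split(":", 1)
        return {k.strip().lower(): v.strip()}
    return {}

def consolidate(facts):
    memory.update(facts)  # newer facts overwrite stale ones

def retrieve(query):
    q = query.lower()
    return {k: v for k, v in memory.items() if k in q}

for turn in ["favorite city: Paris", "favorite city: Lisbon", "job: baker"]:
    consolidate(extract(turn))

print(retrieve("what is my favorite city?"))
```

The consolidation step is where long-term coherence comes from: the second "favorite city" fact supersedes the first instead of accumulating a contradiction, so later retrieval returns only the current state.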

Submitted by akhaliq

Very Large-Scale Multi-Agent Simulation in AgentScope

Enhancements to the AgentScope platform improve scalability, efficiency, and ease of use for large-scale multi-agent simulations through distributed mechanisms, flexible environments, and user-friendly tools.

· Published on Jul 25, 2024

Submitted by taesiri

Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems

The study analyzes Claude Code's architecture, identifying five motivating human values and tracing them through thirteen design principles to specific implementation choices, including a core while-loop architecture and supporting systems for safety, context management, and extensibility.

  • 4 authors

· Published on Apr 14, 2026
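The "core while-loop architecture" mentioned in the summary is the standard agent shape: ask the model for an action, execute it, feed the result back into context, and stop when the model declares completion. A minimal sketch with fake model and tools (not Claude Code's actual implementation):

```python
def fake_model(history):
    # Hypothetical model: requests a tool once, then finishes.
    if "listed" not in history:
        return ("tool", "list_files")
    return ("done", "2 files found")

def run_tool(name):
    # Hypothetical tool registry with a single canned tool.
    tools = {"list_files": lambda: "listed: a.py, b.py"}
    return tools[name]()

def agent_loop(task):
    history = task
    while True:
        kind, payload = fake_model(history)
        if kind == "done":
            return payload                    # final answer to the user
        history += "\n" + run_tool(payload)   # tool result re-enters context

print(agent_loop("How many files are here?"))
```

The safety, context-management, and extensibility systems the study describes all hang off this loop: they decide which tools may run, what stays in `history`, and which tools exist at all.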

Submitted by Rbin

RAG-Anything: All-in-One RAG Framework

RAG-Anything is a unified framework that enhances multimodal knowledge retrieval by integrating cross-modal relationships and semantic matching, outperforming existing methods on complex benchmarks.

Submitted by taesiri

Fish Audio S2 Technical Report

Fish Audio S2 is an open-source text-to-speech system with multi-speaker capabilities, multi-turn generation, and instruction-following control through natural-language descriptions, utilizing a multi-stage training approach and production-ready inference engine.

Submitted by akhaliq

AutoDev: Automated AI-Driven Development

AutoDev is an AI-driven software development framework that automates complex engineering tasks within a secure Docker environment, achieving high performance in code and test generation.

  • 5 authors

· Published on Mar 13, 2024

Submitted by fdugyt

MOSS-TTS Technical Report

MOSS-TTS is a speech generation model using discrete audio tokens and autoregressive modeling with capabilities for voice cloning, pronunciation control, and long-form generation across multiple languages.

Submitted by taesiri

GLM-5: from Vibe Coding to Agentic Engineering

GLM-5 advances foundation models with DSA for cost reduction, asynchronous reinforcement learning for improved alignment, and enhanced coding capabilities for real-world software engineering.

· Published on Feb 17, 2026

Submitted by YYF42

Introspective Diffusion Language Models

Introspective Diffusion Language Models address quality gaps with autoregressive models by enforcing introspective consistency through novel decoding algorithms and optimized inference engines.

  • 15 authors

· Published on Apr 13, 2026

Submitted by hao-li

Agent READMEs: An Empirical Study of Context Files for Agentic Coding

Agentic coding tools receive goals written in natural language as input, break them down into specific tasks, and write or execute the actual code with minimal human intervention. Central to this process are agent context files ("READMEs for agents") that provide persistent, project-level instructions. In this paper, we conduct the first large-scale empirical study of 2,303 agent context files from 1,925 repositories to characterize their structure, maintenance, and content. We find that these files are not static documentation but complex, difficult-to-read artifacts that evolve like configuration code, maintained through frequent, small additions. Our content analysis of 16 instruction types shows that developers prioritize functional context, such as build and run commands (62.3%), implementation details (69.9%), and architecture (67.7%). We also identify a significant gap: non-functional requirements like security (14.5%) and performance (14.5%) are rarely specified. These findings indicate that while developers use context files to make agents functional, they provide few guardrails to ensure that agent-written code is secure or performant, highlighting the need for improved tooling and practices.

  • 11 authors

· Published on Nov 17, 2025
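A hypothetical context file in the shape the study describes — the functional sections developers usually write (build commands, architecture, implementation notes), plus the security and performance guardrails the study finds are rarely specified. Every name, path, and command below is invented for illustration:

```markdown
# Agent context (hypothetical example)

## Build & run
- Install: `pip install -e .`
- Tests: `pytest -q` must pass before any commit.

## Architecture
- `api/` is the public surface; `core/` holds business logic.
- Never import `api` from `core`.

## Security (one of the categories the study finds rare)
- Never log credentials, tokens, or request bodies.

## Performance (the other rarely specified category)
- Avoid quadratic passes over `events`; it can hold millions of rows.
```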

Self-Supervised Prompt Optimization

A self-supervised framework optimizes prompts for both closed and open-ended tasks by evaluating LLM outputs without external references, reducing costs and required data.

· Published on Feb 7, 2025
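The reference-free idea — scoring candidate prompts by comparing their outputs against each other rather than against labeled answers — can be sketched with stand-ins. The model and the judge below are invented toys, not the paper's method; a real system would use an LLM for both roles.

```python
def model(prompt, task):
    # Hypothetical LLM: more guidance yields proportionally more detail.
    return task + " " + "detail " * (len(prompt) // 10)

def judge(a, b):
    # Reference-free pairwise judge: prefer the more detailed output.
    return a if len(a) >= len(b) else b

def optimize(candidates, task):
    best = candidates[0]
    for cand in candidates[1:]:
        if judge(model(cand, task), model(best, task)) == model(cand, task):
            best = cand  # candidate's output won the pairwise comparison
    return best

prompts = [
    "Answer.",
    "Answer step by step.",
    "Answer step by step, citing assumptions.",
]
best = optimize(prompts, "Explain tides")
print(best)
```

Because the judge only compares outputs to each other, no ground-truth references are needed — the cost and data savings the summary claims.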

Submitted by chengtim

VOID: Video Object and Interaction Deletion

VOID is a video object removal framework that uses vision-language models and video diffusion models to generate physically plausible scenes through causal and counterfactual reasoning.

Submitted by Jeff-Wang

GigaWorld-Policy: An Efficient Action-Centered World-Action Model

GigaWorld-Policy introduces an action-centered World-Action Model that improves robotic policy learning by decoupling visual and motion representations, enabling faster inference and better task performance through dual supervision from action prediction and video generation.

GigaAI (open-gigaai)

· Published on Mar 18, 2026
