Universal LLM Deployment Engine with ML Compilation

blog.mlc.ai

17 points by ruihangl 2 years ago · 7 comments

zhye 2 years ago

Glad to see MLC is becoming more mature :) I can imagine the unified engine could help build agents on multiple devices.

Any ideas on how those edge and cloud models could collaborate on compound tasks? (e.g., compound AI systems: https://bair.berkeley.edu/blog/2024/02/18/compound-ai-system...)

ruihanglOP 2 years ago

A unified, efficient, open-source LLM deployment engine for both cloud-server and local use cases.

It comes with a full OpenAI-compatible API that runs directly in Python, iOS, Android, and browsers, and supports deploying the latest large language models such as Qwen2, Phi-3, and more.
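
For a concrete sense of that OpenAI-compatible API, here is a minimal Python sketch assuming the mlc_llm package's MLCEngine class; the HF://mlc-ai/... model string follows MLC's prebuilt-weights naming and is only illustrative.

    from mlc_llm import MLCEngine

    # Illustrative model string; MLC publishes pre-quantized weights under the mlc-ai org.
    model = "HF://mlc-ai/Qwen2-7B-Instruct-q4f16_1-MLC"
    engine = MLCEngine(model)

    # Streamed chat completion through the OpenAI-style interface.
    for response in engine.chat.completions.create(
        messages=[{"role": "user", "content": "What is ML compilation?"}],
        model=model,
        stream=True,
    ):
        for choice in response.choices:
            if choice.delta.content:
                print(choice.delta.content, end="", flush=True)
    print()

    engine.terminate()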

yongwww 2 years ago

MLCEngine presents an approach to universal LLM deployment; glad to know it works for both local and cloud deployments with competitive performance. Looking forward to exploring it further!

neetnestor 2 years ago

Looks cool. I'm looking forward to trying to build some interesting apps with the SDKs.

CharlieRuan 2 years ago

From first-hand experience, the all-in-one framework really helps reduce engineering effort!

cyx6 2 years ago

AI ALL IN ONE! Super universal and performant!

crowwork 2 years ago

Runs Qwen2 on iPhone at 26 tok/sec with an OpenAI-style Swift API.
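
(The Swift API itself isn't quoted in this thread, so as a rough sketch of the same OpenAI-compatible surface, the snippet below talks to a locally served engine instead; it assumes a server started with mlc_llm serve on its default local port and uses the standard openai Python client, with an illustrative model string.)

    from openai import OpenAI

    # Assumes an MLC server started with `mlc_llm serve <model>`; the endpoint
    # and model string below are illustrative.
    client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="none")

    completion = client.chat.completions.create(
        model="HF://mlc-ai/Qwen2-7B-Instruct-q4f16_1-MLC",
        messages=[{"role": "user", "content": "Summarize ML compilation in one sentence."}],
    )
    print(completion.choices[0].message.content)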
