Universal LLM Deployment Engine with ML Compilation (blog.mlc.ai)
Glad to see MLC is becoming more mature :) I can imagine the unified engine could help build agents on multiple devices.
Any ideas on how those edge and cloud models could collaborate on compound tasks? (e.g., compound AI systems: https://bair.berkeley.edu/blog/2024/02/18/compound-ai-system...)
A unified, efficient open-source LLM deployment engine for both cloud server and local use cases.
It comes with a full OpenAI-compatible API that runs directly in Python, iOS, Android, and browsers, and supports deploying the latest large language models such as Qwen2, Phi3, and more.
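Since the engine exposes an OpenAI-compatible API, any client that can speak that wire format should work against it. A minimal sketch of the request body such an endpoint accepts (the model ID and endpoint path are illustrative assumptions, not taken from the post):

```python
import json

def build_chat_request(model: str, prompt: str, stream: bool = False) -> str:
    # Shape of a POST body for an OpenAI-compatible /v1/chat/completions
    # endpoint; an MLCEngine server would accept the same fields.
    body = {
        "model": model,  # hypothetical model ID for illustration
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }
    return json.dumps(body)

# Build (but don't send) a request for a locally served model.
req = build_chat_request("Qwen2-1.5B-Instruct-q4f16_1-MLC", "Hello!")
print(req)
```

Because the format matches OpenAI's, existing SDKs can be pointed at the local server by changing only the base URL.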
MLCEngine presents an approach to universal LLM deployment; glad to know it works for both cloud servers and local devices with competitive performance. Looking forward to exploring it further!
Looks cool. I'm looking forward to trying to build some interesting apps using the SDKs.
From first-hand experience, the all-in-one framework really helps reduce engineering effort!
AI ALL IN ONE! Super universal and performant!
Runs Qwen2 on iPhone at 26 tok/sec, with an OpenAI-style Swift API.