robertnishihara
- Karma
- 462
- Created
- 8 years ago
Recent Submissions
- 1. Major upgrades to Ray Serve: 88% lower latency and 11.1x higher throughput (anyscale.com)
- 2. SkyRL brings Tinker to your GPUs (2025) (novasky-ai.notion.site)
- 3. vLLM large-scale serving: DeepSeek 2.2k tok/s/H200 with Wide-EP (blog.vllm.ai)
- 4. Massively Parallel Agentic Simulations with Ray (anyscale.com)
- 5. Deploy DeepSeek‑R1 with vLLM and Ray Serve on Kubernetes (anyscale.com)
- 6. An Open Source Stack for AI Compute: Kubernetes, Ray, PyTorch, and vLLM (anyscale.com)
- 7. Native LLM APIs in Ray Data and Ray Serve (anyscale.com)
- 8. Joins and Hash-Shuffle in Ray Data (anyscale.com)
- 9. AsyncFlow: An Asynchronous Streaming RL Framework for LLM Post-Training (arxiv.org)
- 10. Open Source RL Libraries for LLMs (anyscale.com)