Most ML applications are just request routers

onecontext.ai

10 points by rossamurphy 2 years ago · 5 comments


rossamurphyOP 2 years ago

Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard. We think latency is one of the biggest problems. We built OneContext to solve that problem. We launched today. Would love your feedback + feature requests!

  • harindirand 2 years ago

    Looks super interesting! This could be really helpful for us. Will drop your team a note :)

cwmdo 2 years ago

“simply by cutting out the network latency between the steps, OneContext reduces the pipeline execution time by 57%”

How does this fit in with a barebones LangChain/Bedrock setup?
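To make the quoted claim concrete: when each pipeline step sits behind its own service, every step pays a network round trip before any real work happens; co-locating the steps removes those hops. A toy sketch of the effect, with entirely invented latency numbers (this is not OneContext's actual architecture):

    import time

    # All numbers are made up for illustration.
    NETWORK_RTT = 0.05   # assume ~50 ms round trip per remote call
    STEP_WORK = 0.02     # assume ~20 ms of real compute per step

    def step(x):
        # One pipeline stage: pretend to do some work.
        time.sleep(STEP_WORK)
        return x + 1

    def run_as_separate_services(x, n_steps):
        # Each step behind its own service: one network hop per step.
        for _ in range(n_steps):
            time.sleep(NETWORK_RTT)  # simulated request/response over the wire
            x = step(x)
        return x

    def run_colocated(x, n_steps):
        # All steps in one process: no inter-step network hops.
        for _ in range(n_steps):
            x = step(x)
        return x

    for runner in (run_as_separate_services, run_colocated):
        start = time.perf_counter()
        runner(0, n_steps=5)
        print(runner.__name__, round(time.perf_counter() - start, 3), "s")

With 5 steps at a 50 ms round trip each, the remote version spends ~250 ms on the wire on top of ~100 ms of compute, which is the order of saving a claim like the 57% figure implies.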

georgespencer 2 years ago

Amazing! Congrats on launching. Company motto: "dumb enough to actually have attempted this already".

the_async 2 years ago

Seems like a great product!
