Amazon Titan Text Premier LLM with SOTA Common Sense Reasoning (aws.amazon.com)
---
UPDATE: As Onawa points out in his comment below, the OP is showing benchmarks.
I couldn't find any mention of model performance on standard benchmarks, nor any mention of model scale (number of parameters, MoE setup, etc.).
How come? Does Amazon not want customers to know how much better/worse, or how much larger/smaller, this model is compared to other models, proprietary and open?
They must be updating the blog post, because I just checked and saw benchmark results listed for MMLU, ARC-Challenge, BIG-Bench Hard, DROP (F1 score), and HellaSwag. However, their link https://aws.amazon.com/machine-learning/responsible-machine-... is showing a 404 for me.
Thanks. I updated my comment.
The context window is 32k (stated in the blog). I suppose the number of parameters and the MoE setup are intentionally not revealed. Numbers on well-known benchmarks and comparisons with Google and OpenAI are also published later in the blog.
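Since the blog post is about availability on Bedrock rather than model internals, here is a minimal sketch of what calling the model might look like. The model ID and request fields follow Bedrock's Titan text request schema as I understand it and are assumptions, not details from this thread; check the current Bedrock docs before relying on them:

```python
import json

# Assumed Bedrock model ID for Titan Text Premier; verify against the docs.
MODEL_ID = "amazon.titan-text-premier-v1:0"
# The blog post mentions a 32k context window for the model.
MAX_CONTEXT_TOKENS = 32_000

def build_request(prompt: str, max_gen_tokens: int = 512) -> str:
    """Serialize a Titan-style text-generation request body (sketch)."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_gen_tokens,
            "temperature": 0.7,
            "topP": 0.9,
        },
    })

# Actually invoking it would look roughly like this (needs AWS credentials):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(modelId=MODEL_ID, body=build_request("Hello"))
# print(json.loads(resp["body"].read())["results"][0]["outputText"])
```

The network call is left commented out since it requires AWS credentials and Bedrock model access in the account.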