Visual Reasoning Benchmark
Clock Bench
ClockBench evaluates whether models can read analog clocks, a task that is trivial for humans but one that current frontier models struggle with.
Leaderboard
Results Summary
Despite showing strong reasoning skills, mathematical ability, and visual understanding across multiple benchmarks, frontier models still struggle to read analog clocks.
One hypothesis is that this task sets a high bar for reasoning within the visual space (as opposed to the text space).
More research is needed to understand whether these capabilities can be obtained by scaling existing paradigms, or whether a novel approach is required.
Dataset
Sample Clocks
A few examples of the clocks used in the benchmark.

Questions
Reading Time
Models are asked to determine whether a given clock shows a valid time. If it does, they report the hours, minutes, seconds, date, month, and day of the week (whichever are present on the face) in a structured JSON format.
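As a rough illustration, an answer in this style might look like the following. The field names here are hypothetical, not the official ClockBench schema:

```python
import json

# Hypothetical answer object (field names are illustrative only):
# report each component the clock face shows, or mark it invalid.
answer = {
    "valid": True,          # does the clock show a readable, valid time?
    "hours": 10,
    "minutes": 9,
    "seconds": 37,
    "date": 14,             # day of month, if the clock has a date window
    "month": 6,
    "weekday": "Saturday",  # if the clock has a weekday indicator
}

print(json.dumps(answer))
```

A clock with an impossible configuration (e.g. hands that no real time produces) would instead be reported with `"valid": False` and the remaining fields omitted or null.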
Adding or Subtracting Time
Models are asked to add or subtract varying amounts of time to or from the displayed time.
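The underlying arithmetic can be sketched as modular arithmetic on a 12-hour dial. This is a minimal illustration, not the benchmark's evaluation code:

```python
# Sketch of the time-arithmetic task: shift a 12-hour clock reading
# by a signed number of minutes, wrapping around the dial.
def shift_time(hours, minutes, delta_minutes):
    """Return (hours, minutes) after moving by delta_minutes (may be negative)."""
    total = (hours % 12) * 60 + minutes + delta_minutes
    total %= 12 * 60                       # wrap around the 12-hour dial
    return (total // 60) or 12, total % 60  # display 12 instead of 0

print(shift_time(10, 50, 25))   # 25 minutes after 10:50 -> (11, 15)
print(shift_time(1, 5, -20))    # 20 minutes before 1:05 -> (12, 45)
```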
Rotating Hands
Models are asked to rotate one of the hands (hour, minute, or second) by a specified angle, clockwise or counterclockwise.
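Under the usual geometry of a clock face, the minute hand sweeps 360 degrees in 60 minutes, so rotating it by an angle moves the time by angle/6 minutes. The sketch below assumes the hands stay mechanically coupled (a full minute-hand turn advances the hour by one); whether ClockBench uses exactly this convention is an assumption:

```python
# Sketch of the hand-rotation task: the minute hand covers
# 6 degrees per minute (360 degrees / 60 minutes).
DEGREES_PER_MINUTE = 360 / 60

def rotate_minute_hand(hours, minutes, degrees):
    """Rotate the minute hand clockwise by `degrees` (negative = counterclockwise)."""
    delta_minutes = round(degrees / DEGREES_PER_MINUTE)
    total = (hours % 12) * 60 + minutes + delta_minutes
    total %= 12 * 60                        # wrap around the 12-hour dial
    return (total // 60) or 12, total % 60  # display 12 instead of 0

print(rotate_minute_hand(3, 0, 90))     # quarter turn clockwise -> (3, 15)
print(rotate_minute_hand(11, 50, 120))  # third of a turn -> (12, 10)
```

The hour and second hands work the same way with 30 degrees per hour and 6 degrees per second, respectively.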
Shifting Time Zone
Models are asked to assume they are in New York during summer and report the corresponding time in various locations worldwide.
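In summer, New York observes daylight saving time (EDT, UTC-4), so the conversion is a fixed-offset shift per target zone. A minimal sketch using Python's standard `zoneinfo` module, with an illustrative date not taken from the benchmark:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Illustrative example: 9:30 in New York on a summer date (EDT, UTC-4),
# converted to a few other locations. The date is arbitrary.
ny = datetime(2024, 7, 1, 9, 30, tzinfo=ZoneInfo("America/New_York"))

for zone in ("Europe/London", "Asia/Tokyo", "Australia/Sydney"):
    local = ny.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%H:%M"))
```

Note that the target zones may be on different offsets than in winter (London is on BST, while Sydney, in its winter, is on standard time), which is part of what makes the task non-trivial.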
Try Yourself
Interested in trying out ClockBench?
A small public dataset and sample evaluation code are available to everyone.
Please reach out to [email protected] with ideas, suggestions, questions, or any other inquiries.