Show HN: Spelling Riddle – Think Spellbee with crossword clue and visual hint

ravel.live

2 points by nohat 6 days ago · 4 comments

I wanted a daily spelling game that tested semantic knowledge rather than vocab, and a riddle game with both text and visual hints. So I combined them.

Each day you get 9 letters and up to 15 hidden words to find. Every word has two orthogonal clues: a text hint and an image hint. The hints are crossword-puzzle style: you need to figure out what the clues are pointing at, then spell it.

The site is fully static.

## How the clues get made

The pipeline has three stages:

1. Dictionary curation — Start from raw word lists, normalize, dedupe, filter plurals/profanity, and add frequency data.

2. Word quality tournament — Words are ranked for puzzle suitability using grouped LLM matches in a Swiss-style tournament. Performance is converted to a 0–100 quality rating using Wilson lower-bound scoring (a small sketch of that step follows this list).

3. Hint generation — For each puzzle word set, generate multiple text + image hint candidates, lint-filter weak ones (e.g. text hints that contain the answer, image prompts that would render as text; a rough filter sketch is further below), rank the survivors, then generate the final images.
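
For illustration, here is a minimal sketch of the Wilson lower-bound step in stage 2, assuming each word's tournament record reduces to a (wins, games) pair. The function names, the z = 1.96 confidence level, and the example numbers are my own choices, not necessarily what the site uses.

```python
import math

def wilson_lower_bound(wins: int, games: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a word's win rate.

    Pessimistic estimate: a word with few games is pulled toward 0
    until it accumulates enough evidence.
    """
    if games == 0:
        return 0.0
    p = wins / games
    z2 = z * z
    centre = p + z2 / (2 * games)
    margin = z * math.sqrt(p * (1 - p) / games + z2 / (4 * games * games))
    return (centre - margin) / (1 + z2 / games)

def quality_rating(wins: int, games: int) -> float:
    """Map the lower bound onto a 0-100 quality scale."""
    return round(100 * wilson_lower_bound(wins, games), 1)

# Same raw win rate, very different confidence:
print(quality_rating(7, 8))    # ~52.9
print(quality_rating(70, 80))  # ~78.5
```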

Text hints: GPT 5.2; image hints: Qwen Image.
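
And a rough sketch of the kind of lint-filtering in stage 3. The candidate structure, rule list, and crude stem heuristic here are hypothetical; the real checks are presumably more involved.

```python
import re

def lint_text_hint(answer: str, hint: str) -> bool:
    """Reject text hints that leak the answer or an obvious stem of it."""
    stem = answer.lower()[:4]  # crude stem check; a real linter might lemmatize
    words = re.findall(r"[a-z]+", hint.lower())
    return not any(w == answer.lower() or w.startswith(stem) for w in words)

def lint_image_prompt(prompt: str) -> bool:
    """Reject image prompts that would likely render literal text."""
    banned = ("text", "letters", "caption", "typography", "sign that says")
    p = prompt.lower()
    return not any(b in p for b in banned)

candidates = [
    {"answer": "theorem", "text": "Heron proved one about triangles",
     "image": "ancient geometer measuring a triangle with a rope"},
    {"answer": "theorem", "text": "A theorem about triangle area",
     "image": "a sign that says THEOREM"},
]
survivors = [c for c in candidates
             if lint_text_hint(c["answer"], c["text"])
             and lint_image_prompt(c["image"])]
print(len(survivors))  # 1 -- the second candidate fails both checks
```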

The clue generation is the hardest part and still not perfect (though model improvements might make it trivial soon).

vunderba 6 days ago

Nice job. Unfortunately I'd say (just like with AI-generated crosswords) we're not quite at the point where we can automate this without any manual supervision.

Case in point:

The image clue for "Alexandrian who found area from mere sides" is more likely to confuse the player than help them.

Same with the image clue for "Fewer candles to count" - it looks to me like a woodcut maze and led me completely in the wrong direction.

If you're pre-generating these, I'd suggest using a more powerful image model such as NB Pro.

  • nohatOP 6 days ago

    Fair points, thank you for the very thoughtful feedback! "Alexandrian who found area from mere sides" is a bad hint because it is fairly obscure and really just a history trivia check. And the image is a bit random. If you are curious about the model's reasoning, you can view the explanation for the hint by clicking the ? button in the upper-right corner after a word is completed.

    As a bit of an explanation: I generated these before Nano Banana Pro came out, and at the time I made a large comparison grid of various image and text models. For this style, Qwen Image performed very well. LLM-wise I started with 5.1 and updated to 5.2. Of course, with the rate of model releases my choices are pretty much already obsolete... Expense is also a factor for a hobby project, and NB Pro is 7.5x more expensive than Qwen Image.

    • vunderba 6 days ago

      Yeah understandable. If you've got the hardware for it, consider looking into Flux.2 Dev with the Turbo LoRA. It's probably the best-in-class for sheer prompt adherence right now, though Qwen-Image is still very respectable.

      And of course, we should see Qwen-Image 2.0 become open-weight in 4-6 weeks as well - so good things on the horizon!

      • nohatOP 5 days ago

        Yep, I need to compare the new models and update everything. Foolish of me to generate ahead at all!
