Show HN: Turn any webpage into structured data via LLM codegen

github.com

2 points by goncharom a month ago · 2 comments

Hey HN! I wanted to share a little library I've been working on to leverage AI to get structured data from arbitrary pages. Instead of sending the page's HTML to an LLM, Hikugen asks it to generate Python code to fetch the data, and enforces that the generated data conforms to a Pydantic schema defined by the user. I'm using this to power yomu (https://github.com/goncharom/yomu), a personal email newsletter built from arbitrary websites.

Hikugen's main features:

- Automatically generates, runs, regenerates, and caches the LLM-generated extraction code.
- It uses sqlite to save the current working code for each page so it can be reused across executions (see the sketch after this list).
- It uses OpenRouter (https://openrouter.ai/) to call the LLM.
- It can fetch the page automatically (it can even reuse Netscape-formatted cookies), but you can also just feed it the raw HTML and leverage the rest of its functionality.

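The per-page caching is easy to picture. Here's a minimal sketch of what such a cache could look like, assuming a simple url-to-code table; the table and column names are illustrative, not Hikugen's actual schema:

  import sqlite3

  # Illustrative cache of working extraction code, keyed by page URL.
  # Table/column names are assumptions, not Hikugen's actual schema.
  conn = sqlite3.connect("hikugen_cache.db")
  conn.execute(
      "CREATE TABLE IF NOT EXISTS extraction_code (page_url TEXT PRIMARY KEY, code TEXT)"
  )

  def load_cached_code(url):
      # Return previously working code for this page, or None on a cache miss.
      row = conn.execute(
          "SELECT code FROM extraction_code WHERE page_url = ?", (url,)
      ).fetchone()
      return row[0] if row else None

  def save_working_code(url, code):
      # Overwrite the cached code once a generated script passes validation.
      conn.execute("INSERT OR REPLACE INTO extraction_code VALUES (?, ?)", (url, code))
      conn.commit()
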
Here's a snippet of what it looks like:

  from hikugen import HikuExtractor
  from pydantic import BaseModel
  from typing import List
  
  class Article(BaseModel):
      title: str
      author: str
      published_date: str
      content: str
  
  class ArticlePage(BaseModel):
      articles: List[Article]
  
  extractor = HikuExtractor(api_key="your-openrouter-api-key")
  
  result = extractor.extract(
      url="https://example.com/articles",
      schema=ArticlePage
  )
  
  for a in result.articles:
      print(a.title, a.author)

Hikugen is intentionally minimal: it doesn't attempt website navigation, login flows, headless browsers, or large-scale crawling. Just "given this HTML, extract this structured data".
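
For example, if you already have the HTML from your own fetching pipeline, you should be able to hand it straight to the extractor (reusing extractor and ArticlePage from the snippet above). The html= keyword here is an assumption for illustration; check the README for the actual signature:

  import urllib.request

  # Fetch the page yourself (or get the HTML from anywhere else)...
  html = urllib.request.urlopen("https://example.com/articles").read().decode("utf-8")

  # ...then pass the raw HTML instead of a URL. The `html=` keyword is an
  # assumption for illustration; see Hikugen's README for the real signature.
  result = extractor.extract(html=html, schema=ArticlePage)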

A good chunk of this was built with Claude Code (shoutout to Harper's blog: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/).

Would love feedback or ideas, especially from others playing with codegen for scraping tasks.

matthewsanders a month ago

How does Hikugen sandbox, constrain, and audit the LLM-generated Python extraction code to prevent arbitrary code execution (e.g., filesystem writes, network egress, or imports outside the stdlib)? And does it enforce a deterministic execution environment (via AST rewriting, syscall filtering, seccomp profiles, or a WASM/Python sandbox) to guarantee that regenerated scrapers cannot drift into unsafe or non-reproducible states?

goncharomOP a month ago

The regeneration loop was probably the most interesting part to work on: you need very strict constraints on what "good" output looks like, and a precise description of the specific issue when codegen fails. I found Pydantic annotations particularly useful for this.
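
To make that concrete, here's a minimal sketch of such a loop; generate_code stands in for the LLM call (via OpenRouter or anything else), and none of the names are Hikugen's actual internals:

  from pydantic import ValidationError

  def extract_with_retries(html, schema, generate_code, max_attempts=3):
      # Sketch of a generate-run-validate-regenerate loop. `generate_code`
      # stands in for the LLM call; it is an assumption, not Hikugen's API.
      feedback = ""
      for _ in range(max_attempts):
          # Ask the model for extraction code, including the previous failure.
          code = generate_code(html, schema, feedback)
          try:
              namespace = {}
              exec(code, namespace)  # run the generated code (no sandboxing shown)
              # Convention assumed here: the generated code defines
              # extract(html) -> dict.
              raw = namespace["extract"](html)
              return schema.model_validate(raw)  # enforce the Pydantic (v2) schema
          except ValidationError as err:
              feedback = f"Output did not match the schema: {err}"
          except Exception as err:
              feedback = f"Generated code failed: {err!r}"
      raise RuntimeError("no schema-conforming extraction after retries")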
