Ask HN: How is the community using LLMs for data cleaning/enriching/structuring?

7 points by jarulraj 2 years ago · 10 comments


Would love to learn how the community is using LLMs for data wrangling, and to exchange prompts [1]. For example, we iterated several times on the following "data structuring" prompt for our GitHub Stargazers app [2,3]:

--- Prompt to GPT-3.5

You are given a block of disorganized text extracted from the GitHub user profile of a user using an automated web scraper. The goal is to get structured results from this data. Extract the following fields from the text: name, country, city, email, occupation, programming_languages, topics_of_interest, social_media. If some field is not found, just output fieldname: N/A. Always return all the 8 field names. DO NOT add any additional text to your output. The topic_of_interest field must list a broad range of technical topics that are mentioned in any portion of the text. This field is the most important, so add as much information as you can. Do not add non-technical interests. The programming_languages field can contain one or more programming languages out of only the following 4 programming languages - Python, C++, JavaScript, Java. Do not include any other language outside these 4 languages in the output. If the user is not interested in any of these 4 programming languages, output N/A. If the country is not available, use the city field to fill the country. For example, if the city is New York, fill the country as United States. If there are social media links, including personal websites, add them to the social media section. Do NOT add social media links that are not present. Here is an example (use it only for the output format, not for the content):

  name: Pramod Chundhuri
  country: United States
  city: Atlanta
  email: pramodc@gatech.edu
  occupation: PhD student at Georgia Tech
  programming_languages: Python, C++
  topics_of_interest: PyTorch, Carla, Deep Reinforcement Learning, Query Optimization
  social_media: https://pchunduri6.github.io
----
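
Roughly, the prompt is applied to each scraped profile and the "field: value" lines are parsed back into columns. Below is a simplified plain-Python sketch of that step (the actual app runs these calls from inside EvaDB queries, see [2,3]; the helper names and the OpenAI Python client usage here are just for illustration):

  from openai import OpenAI

  FIELDS = ["name", "country", "city", "email", "occupation",
            "programming_languages", "topics_of_interest", "social_media"]

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def parse_fields(text: str) -> dict:
      # Turn the model's "field: value" lines back into a dict, defaulting to N/A.
      record = {f: "N/A" for f in FIELDS}
      for line in text.splitlines():
          key, _, value = line.partition(":")
          key = key.strip().lower()
          if key in record and value.strip():
              record[key] = value.strip()
      return record

  def structure_profile(profile_text: str, prompt: str,
                        model: str = "gpt-3.5-turbo") -> dict:
      resp = client.chat.completions.create(
          model=model,
          temperature=0,
          messages=[{"role": "system", "content": prompt},
                    {"role": "user", "content": profile_text}],
      )
      return parse_fields(resp.choices[0].message.content)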

  [1] https://en.wikipedia.org/wiki/Data_wrangling
  [2] https://github.com/pchunduri6/stargazers-reloaded
  [3] https://medium.com/evadb-blog/stargazers-reloaded-llm-powered-analyses-of-your-github-community-aef9288eb8a5
nbrad 2 years ago

In general, just providing a schema and asking for the response in JSON with few-shot examples is extremely (99%+) reliable in my experience.
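
Roughly, the pattern looks like the sketch below. The schema and the single few-shot example are made up for illustration (this is not the TalkForm code); the only validation is that json.loads rejects anything that isn't a JSON object:

  import json
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set

  SYSTEM = (
      "Extract the fields {name, country, email} from the user's message. "
      "Respond with a single JSON object and nothing else; use null for "
      "any field that is missing."
  )

  # One few-shot example pinning down the exact output shape.
  FEW_SHOT = [
      {"role": "user", "content": "Hi, I'm Ada, writing from London: ada@example.org"},
      {"role": "assistant",
       "content": '{"name": "Ada", "country": "United Kingdom", "email": "ada@example.org"}'},
  ]

  def extract(text: str) -> dict:
      resp = client.chat.completions.create(
          model="gpt-3.5-turbo",
          temperature=0,
          messages=[{"role": "system", "content": SYSTEM}, *FEW_SHOT,
                    {"role": "user", "content": text}],
      )
      return json.loads(resp.choices[0].message.content)  # raises if the output is not JSON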

I've found GPT-3.5 more than adequate at inferring schemas and filling them for conventional use cases like chat-based forms (as an alternative to Google Forms/TypeForm); my code and prompts are available at: https://github.com/nsbradford/talkformai. I've also used this to extract structured data from code for LLM coding agents (e.g., "return the names of every function in this file").

In my opinion, more and more APIs are likely to become unstructured and be reduced to LLM agents chatting with each other; I wrote a brief blog post about this here: https://nickbradford.substack.com/p/llm-agents-behind-every-...

AlwaysNewb23 2 years ago

I've tried doing things like this and found that it's often not totally reliable. I've had a hard time getting consistent output and will randomly get variations I did not expect. I've found it's useful if you're cleaning up data as a manual task, but not if you're trying to automate a process.

  • jarulraj (OP) 2 years ago

    Yes, there is definitely a human-in-the-loop element here.

    It would be great if you could share an example of the inconsistent output problem -- we also faced it. GPT-4 was much better than GPT-3.5 in output quality.

PaulHoule 2 years ago

How successful was this effort? How did you know how successful it was?

  • jarulraj (OP) 2 years ago

    Great question! We iterated on the prompt for several days and manually verified the results for ~100 users.

    The results were pretty good: https://gist.github.com/gaurav274/506337fa51f4df192de78d1280...

    Another interesting aspect was the money spent on LLMs. We could have directly used GPT-4 to generate the "golden" table; however, it's a bit expensive, costing $60 to process the information of 1,000 users. To maintain accuracy while reducing costs significantly, we set up an LLM model cascade in the EvaDB query, running GPT-3.5 before GPT-4, leading to an 11x cost reduction ($5.50).

    Query 1: https://github.com/pchunduri6/stargazers-reloaded/blob/228e8...

    Query 2: https://github.com/pchunduri6/stargazers-reloaded/blob/228e8...
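
    In plain Python, the cascade is roughly the sketch below (the queries above express this declaratively in EvaDB; the structure_profile helper is the one sketched in the post, and the "count the N/A fields" escalation rule is just one possible trigger, used here for illustration):

      def structure_with_cascade(profile_text: str, prompt: str,
                                 max_missing: int = 5) -> dict:
          # Cheap pass with GPT-3.5 first.
          record = structure_profile(profile_text, prompt, model="gpt-3.5-turbo")
          missing = sum(1 for v in record.values() if v == "N/A")
          if missing <= max_missing:
              return record
          # Too many empty fields: escalate this profile to the expensive model.
          return structure_profile(profile_text, prompt, model="gpt-4")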

    • PaulHoule 2 years ago

      Did you compile accuracy, F1 numbers, or anything like that? Do you have quantitative comparisons of results you got w/ different models?

      • jarulraj (OP) 2 years ago

        As we do not have ground truth, we only qualitatively checked for accuracy -- no quantitative metrics. We did find a significant drop in accuracy with GPT-3.5 as opposed to GPT-4.

        Are you measuring accuracy with data wrangling prompts? Would love to learn more about that.

        • PaulHoule 2 years ago

          Everything I do now is classification, and AUC-ROC is my metric. For your problem, my first thought is an up-down accuracy metric, but the tricky problem you might have is "do you accept both 'United States' and 'USA' as a correct answer?" The trouble of dealing with that is one reason I stick to classification problems.

          I'm skeptical of any claim that "A works better than B" without some numbers to back it up.
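
          A rough sketch of such an up-down accuracy check, with an illustrative alias table (the alias map is the part that gets messy; it has to grow as mismatches show up during labeling):

            # Illustrative alias table, not a complete normalization scheme.
            COUNTRY_ALIASES = {"usa": "united states", "us": "united states",
                               "uk": "united kingdom"}

            def normalize(value: str) -> str:
                v = value.strip().lower()
                return COUNTRY_ALIASES.get(v, v)

            def field_accuracy(predicted: list[str], gold: list[str]) -> float:
                # Fraction of rows where the normalized prediction matches the label.
                hits = sum(normalize(p) == normalize(g) for p, g in zip(predicted, gold))
                return hits / len(gold)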

tmaly 2 years ago

I have had some good results processing survey data.

I have had the LLM generalize responses, look for patterns, and rank them by frequency.
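
A generic sketch of that kind of prompt (illustrative only, not the exact prompt used; assumes the OpenAI Python client):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set

  SURVEY_PROMPT = (
      "Below are free-text survey responses, one per line. Group responses "
      "that make the same underlying point, give each group a short generic "
      "label, and list the groups from most to least frequent with counts."
  )

  def summarize_survey(responses: list[str]) -> str:
      resp = client.chat.completions.create(
          model="gpt-3.5-turbo",
          temperature=0,
          messages=[{"role": "system", "content": SURVEY_PROMPT},
                    {"role": "user", "content": "\n".join(responses)}],
      )
      return resp.choices[0].message.content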

  • jarulraj (OP) 2 years ago

    Nice :)

    What were the interesting problems you faced in processing the survey data?

    If possible, can you share the prompt?
