Ask HN: How can political bias across LLMs be factored?

3 points by shaburn 2 years ago · 8 comments

Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for this bias?
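
For concreteness, one minimal way to measure this: give every model the same fixed battery of political statements and compare agree rates over time. Everything in the sketch below is an illustrative assumption (the statement list, the ask() stub, the scoring rule); it is not a validated survey instrument.

    # Sketch: score each model on a fixed battery of statements.
    STATEMENTS = [
        "The government should provide universal healthcare.",
        "Lower corporate taxes benefit society overall.",
    ]

    def ask(model: str, statement: str) -> str:
        # Stub: wire this to a real API, instructing the model to
        # answer strictly "agree" or "disagree" to the statement.
        raise NotImplementedError

    def lean_score(model: str) -> float:
        # Fraction of "agree" answers; re-running the same battery
        # periodically can catch drift in closed-source models.
        answers = [ask(model, s).strip().lower() for s in STATEMENTS]
        return sum(a.startswith("agree") for a in answers) / len(answers)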

h2odragon 2 years ago

Imagine having an LLM translate the daily news into "Simple English" (sketched below), much like Wikipedia has: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia

The results are not free of political bias, but may well highlight it in a starkly hilarious way.

You might do human training at that level, but then you've only created a newly biased model.
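
A minimal sketch of that translation step, assuming the OpenAI Python SDK (v1-style client); the model name and the prompt wording are placeholders, not a recommendation:

    # Sketch only: rewrite an article in Simple English so that
    # framing differences between models become easy to compare.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def simplify(article_text: str) -> str:
        # Ask for short sentences and common words, in the spirit of
        # simple.wikipedia.org; biased framing tends to survive the
        # rewrite, which is what makes it visible.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's text in Simple English: "
                            "short sentences, common words, no idioms."},
                {"role": "user", "content": article_text},
            ],
        )
        return response.choices[0].message.content

Running the same article through several models and diffing the simplified outputs is one cheap way to put each model's slant side by side.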

jruohonen 2 years ago

What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.

PaulHoule 2 years ago

A system that has artificial wisdom, as opposed to just artificial intelligence, might try not to get involved.

smoldesu 2 years ago

Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.

LLMs are text tokenizers; if the majority of their training material leans liberal or conservative, the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.

shaburn (OP) 2 years ago

I believe the model bias is highly influenced by the modelers. See Grok and OpenAI.
