SOTA Code Retrieval with Efficient Code Embedding Models (qodo.ai)

11 points by jimminyx a year ago · 3 comments

timbilt a year ago

anyone else concerned that training models on synthetic, LLM-generated data might push us into a linguistic feedback loop? relying on LLM text for training could bias the next model towards even more overuse of words like "delve", "showcasing", and "underscores"...

lenerdenator a year ago

SOTA? LoRa? Seems like people are trying to usurp ham radio names for things.
