Benchmarking LLMs against human expert-curated biomedical knowledge graphs
sciencedirect.com

Academic writing 101: the abstract is NOT meant to be written as a cliffhanger! You will not believe what is all you need!
Since the abstract is a cliffhanger, here is a part of the discussion that may help.
> In our case, the manual curation of a proportion of triples revealed that Sherpa was able to extract more triples categorized as correct or partially correct. However, when compared to the manually curated gold standard, the performance of all automated tools remains subpar.
I didn't see UMLS in the paper, but I've tried some of their human-created biomedical knowledge graphs, and they were too full of errors to be used. I imagine different ones have different levels of accuracy.
I was right; LLMs need two major components added before we can swan-dive into the humanistic aspects of medicine/psychology/politics:
1) a weighting of each statement for probability of correctness, and
2) a citation for each source.
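A minimal sketch of what those two components could look like attached to an extracted triple — all field names and values here are illustrative assumptions, not anything from the paper:

```python
from dataclasses import dataclass

@dataclass
class Triple:
    # Hypothetical record for one extracted statement.
    subject: str
    predicate: str
    obj: str
    confidence: float  # estimated probability the statement is correct (0..1)
    citation: str      # pointer back to the source, e.g. a PMID or DOI string

def filter_reliable(triples, threshold=0.9):
    """Keep only triples whose confidence meets the threshold."""
    return [t for t in triples if t.confidence >= threshold]

# Illustrative data only; the citations are placeholders, not real references.
extracted = [
    Triple("aspirin", "inhibits", "COX-1", 0.97, "PMID:placeholder-1"),
    Triple("aspirin", "treats", "hypertension", 0.35, "PMID:placeholder-2"),
]
reliable = filter_reliable(extracted)
```

With both fields present, a downstream consumer can threshold on `confidence` and a human curator can follow `citation` to verify any statement.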