Search ArXiv Fluidly

searchthearxiv.com

51 points by Exorust a year ago · 15 comments

Syzygies a year ago

Wow! My first test query immediately found me a paper relevant to research code I'm writing. Down that rabbit hole, I almost forgot to come back and praise this project.

sitkack a year ago

It is pretty good, https://searchthearxiv.com/?q=https%3A%2F%2Farxiv.org%2Fabs%...

If this is your project, please talk about how you made it.

elashri a year ago

This seems interesting. I always thought of doing something similar, but for a specific topic in two specific arXiv categories. But for the ~300k papers indexed here, does anyone have an estimate of how much this would cost using OpenAI's ada embeddings model?

  • stephantul a year ago

    Ada is deprecated; use the newer text-embedding models instead.

    It depends on whether you embed the full text or just the abstracts. For full text, I’d guess about 1k tokens per page and 10 pages per paper, so roughly 3B tokens, which would cost you about $60 with the cheapest embedder.

    If you just do abstracts, the costs will be negligible.
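
    A back-of-envelope version of that estimate in Python (assuming OpenAI's listed price of $0.02 per 1M tokens for text-embedding-3-small and ~250 tokens per abstract; both figures are assumptions, not numbers from the project):

      papers = 300_000
      tokens_per_page = 1_000          # rough guess
      pages_per_paper = 10             # rough guess
      usd_per_million_tokens = 0.02    # assumed text-embedding-3-small pricing

      full_text_tokens = papers * pages_per_paper * tokens_per_page  # 3B tokens
      abstract_tokens = papers * 250                                 # ~75M tokens

      print(f"full text: ${full_text_tokens / 1e6 * usd_per_million_tokens:,.0f}")  # ~$60
      print(f"abstracts: ${abstract_tokens / 1e6 * usd_per_million_tokens:,.2f}")   # ~$1.50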

    • eden-u4 a year ago

      This project only uses the Kaggle metadata and abstracts from arXiv. Moreover, it is "focused" on only 5-6 arXiv categories, so the costs are marginal.

      Plus you could use a mixed system: first retrieve the 50 most relevant papers by their abstract embeddings, then embed the full text of those 50 to assess which are truly relevant and/or meaningful.
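
      A minimal sketch of that mixed (retrieve-then-re-rank) approach, assuming precomputed abstract embeddings in a NumPy array and a hypothetical embed() helper wrapping whichever embedding model is used:

        import numpy as np

        def top_k_cosine(query_vec, matrix, k):
            # cosine similarity between one query vector and each row of `matrix`
            sims = matrix @ query_vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec))
            return np.argsort(-sims)[:k]

        def two_stage_search(query, papers, abstract_embs, embed, k=50, final_k=10):
            # stage 1 (cheap): rank every paper by its precomputed abstract embedding
            q = embed(query)
            candidates = top_k_cosine(q, abstract_embs, k)

            # stage 2 (costlier, but only k documents): embed the full text of the candidates
            full_embs = np.stack([embed(papers[i]["full_text"]) for i in candidates])
            reranked = top_k_cosine(q, full_embs, final_k)
            return [papers[candidates[i]] for i in reranked]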

Euphorbium a year ago

Did they calculate embeddings for the entire archive? That must have cost a fortune.

  • hskalin a year ago

    arXiv has about 2.6M articles; assuming about 10 pages per article, that's 26M pages. According to OpenAI, their cheapest embedding model (text-embedding-3-small) costs a dollar per 62.5K pages, so calculating embeddings for the whole of arXiv would come to about $416.

    I think doing it locally with an open-source model would be a lot cheaper as well, especially because they wouldn't have to keep calling OpenAI's API for each new query.

    Edit: I overlooked the about page (https://searchthearxiv.com/about). It seems they *are* using OpenAI's API, but they only have 300K papers indexed, use an older embedding model, and only calculate embeddings for the abstracts, so this should be pretty cheap.
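
    For the local, open-source route, a minimal sketch with sentence-transformers (the model name here is just an example, not what the site uses):

      # pip install sentence-transformers
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

      abstracts = [
          "We propose a novel method for ...",
          "This paper studies the asymptotic behaviour of ...",
      ]

      # one vector per abstract; batching keeps memory bounded on a laptop
      embeddings = model.encode(abstracts, batch_size=64, normalize_embeddings=True)
      print(embeddings.shape)  # (2, 384) for this model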

  • machiaweliczny a year ago

    Embeddings are very cheap to generate

  • HeatrayEnjoyer a year ago

    What are embeddings and why are they expensive?

    • Euphorbium a year ago

      Embeddings are vector representations of chunks of documents: lists of, say, 1024 floats (depending on the model) that represent a short snippet of text. This kind of search works by finding the most similar vectors. Computing a single embedding costs a fraction of a cent, but when you need to do it billions to trillions of times, it adds up.

      • orf a year ago

        You could likely calculate them all on a modern MacBook easily enough.

        Searching the embeddings is a different problem, but there are lots of specialised databases that can make it efficient.

        • Euphorbium a year ago

          You can, but it is a scale problem: doing that would take an unreasonable amount of time for a corpus this size.
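
      To make the above concrete, here is a minimal brute-force version of embedding search over precomputed vectors using NumPy; the corpus size and dimensions are placeholders, and specialised vector libraries such as FAISS do essentially this with approximate-nearest-neighbour indexes so it stays fast at millions of documents:

        import numpy as np

        # stand-in for precomputed abstract embeddings; the real index holds ~300K vectors
        rng = np.random.default_rng(0)
        doc_embs = rng.normal(size=(10_000, 1024)).astype(np.float32)
        doc_embs /= np.linalg.norm(doc_embs, axis=1, keepdims=True)

        def search(query_emb, k=10):
            # cosine similarity = one matrix-vector product over the whole corpus
            q = query_emb / np.linalg.norm(query_emb)
            scores = doc_embs @ q
            top = np.argpartition(-scores, k)[:k]         # k best, unordered
            return top[np.argsort(-scores[top])]          # k best, ordered

        query = rng.normal(size=1024).astype(np.float32)  # stand-in for an embedded query
        print(search(query, k=5))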

basedrum a year ago

I asked for papers from 2024, got two, and then one from 2018.

canadiantim a year ago

Now do the same with bioRxiv and you'd be a hero.
