Show HN: I Built a Metasearch Engine with React, Redux, Express and TypeScript
github.com

This is a neat project.
I would encourage more work to have the types cover more of the front-end code. Your development environment can't be of much help given the heavy use of the "any" type throughout the codebase, which essentially opts out of the type system.
Thanks, it started out as regular, valid JS, then I figured I might as well convert it to TypeScript, and I did so (perhaps too) quickly. I do plan to add stronger typing later.
What advantages did you get from TypeScript? Type safety seems like a poor trade-off for the complexity you added, especially when you're not really utilising it.
You'd get more benefit if you were developing an API where others consume your code, since the types help them use the API properly.
But even for a one-man project, it can still save you from obvious mistakes.
I agree but I’d pick graphql for that instead of typescript.
Not sure TypeScript and GraphQL are mutually exclusive.
They aren’t, but graphql would give you safer types in your api than typescript.
Not sure one gives you safer typing than the other in absolute terms. They have different contexts/purposes. TypeScript is great for typing a codebase's function-call style of APIs, whereas GraphQL is great for expressing types for inter-system communication (usually over HTTP).
Using GraphQL to type your in-process API would be overkill and would require quite a bit of contortion. While sharing the same TypeScript types between your endpoint interfaces can be a good first step, inter-process communication is better expressed with something like GraphQL or Protobuf.
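To illustrate the "good first step" mentioned above, here is a minimal sketch of sharing one TypeScript interface between an Express handler and a React client. The names (`SearchResult`, `buildResponse`) are hypothetical, not from the project:

```typescript
// shared/types.ts -- one interface imported by both the server
// handler and the client fetch wrapper (names are illustrative).
interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}

interface SearchResponse {
  query: string;
  results: SearchResult[];
}

// Server side: the handler's return value is checked against the
// shared type, so renaming a field breaks the build, not production.
function buildResponse(query: string, results: SearchResult[]): SearchResponse {
  return { query, results };
}

const res = buildResponse("typescript", [
  { title: "TypeScript", url: "https://www.typescriptlang.org", snippet: "Typed JS" },
]);
console.log(res.results[0].title); // prints "TypeScript"
```

Note the gap this leaves: nothing verifies that the JSON actually on the wire matches the interface at runtime, which is exactly what a schema-first tool like GraphQL or Protobuf closes.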
Honestly, I'm using it for its marketability. I haven't yet found it particularly useful.
I see that you are scraping the results directly. Curious if you'd tried out Google's JSON search API? Any thoughts about it? I was about to need it, or something similar, for a project.
Google's JSON search API gives you a consistent, easy-to-parse API.
However it requires a Google API key, which limits free usage.
Client-side scraping scales effectively, as it distributes the reads across all the clients. But it's also more brittle, as a small change to Google's search markup could break all the scraping functions.
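For anyone weighing the API route, a minimal sketch of building a request URL for Google's Custom Search JSON API. `API_KEY` and `ENGINE_ID` are placeholders for credentials from the Google Cloud console, and the free tier is quota-limited (roughly 100 queries/day at the time of writing):

```typescript
// Build a request URL for Google's Custom Search JSON API.
// key/cx/q are the documented query parameters; the values
// here are placeholders, not working credentials.
function buildSearchUrl(query: string, apiKey: string, engineId: string): string {
  const params = new URLSearchParams({ key: apiKey, cx: engineId, q: query });
  return `https://www.googleapis.com/customsearch/v1?${params}`;
}

const url = buildSearchUrl("metasearch engine", "API_KEY", "ENGINE_ID");
console.log(url);
// → https://www.googleapis.com/customsearch/v1?key=API_KEY&cx=ENGINE_ID&q=metasearch+engine
```

The response is plain JSON with an `items` array, which is what makes it easier to parse than scraped HTML, at the cost of the key requirement mentioned above.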
I've tried it with the C# wrapper library, so using plain old methods and C# objects rather than touching the JSON directly.
It's not too bad, but not free as far as I'm aware and the result snippets they give you are quite limited in length. Also I found it a little tricky to get good results on some queries when I tried it out.
I haven't tried it, I'm afraid.
Nice to see a project using React, Typescript and Redux in the wild.
Believe me, there are plenty of companies using that trio in production.
This is cool...
You mean you built a frontend for Google, not a search engine.
That's literally in the title. It says METASearch engine, not search engine. I have never claimed it was a search engine.
OP is being a bit dismissive, but IMHO the key feature of a metasearch engine is the processing of results, not the interface. Filtering Google's results is technically a metasearch engine... but it's a fairly trivial example.
https://github.com/JoshuaScript/spresso-search/blob/master/s... - this will be blocked by Google if used at any kind of scale.
I was looking into how Google was implemented initially with the PageRank algorithm, and I think I'm getting closer. Do they still use the same algorithm? I have no concept of how things behave at scale, or how to scale for that matter.
I think they just use RankBrain for search now. It still incorporates the basic premise of PageRank, but the rules are not hand-written anymore; it's all done with TensorFlow.
Well, now they're baking things like piracy and fake-news signals into the mix, and I've noticed a loss in quality for regular search results over the years.