Show HN: JobLander – Chrome Extension for Real-Time Job Interview Hints
joblander.app

It sounds like fun, but is it possible to hook up your prompts and RAG db (if used) to GPT-4?
I've used GPT-4 extensively for interviews and business communication tasks like writing sales comms or coordinating meetings and projects. Given how easy it is to create effective prompts and examples, I have the intuitive sense that GPT-4 plus decent prompting skills would still take the cake, despite the fine-tuning and augmentation you might perform with an off-the-shelf open model.
Care to compare the two approaches?
Would you be interested in extending it with a GPT-4 mode?
And why not open source the product? Are you prepared for an open-source competitor using GPT-4 to take the market? It feels to me like being the open-source provider here would be the safer bet, given how easy it is to reverse engineer fixed prompts and even RAG engines. In any case, what's your expected timeline for an open competitor to reach the market?
Great questions!
Let me try to answer them:
1. Intuitively, the use case "I don't have time to type a prompt, but I want to query AI in real time" does make sense, but whether people would actually find it useful and beneficial enough to pay for is unknown. We decided to test it in this specific niche with a very narrow use case.
2. Currently we use the groq-sdk with Mixtral-8x7b (rough sketch after this list). We tried GPT-3.5, GPT-4 Turbo, and Claude 3, but for our specific use case the speed and the ease of getting predictable output made Groq our choice for now.
3. RAG is coming, but first we will try something much simpler and launch a feature where you can upload a CV. We will parse it and add it to the system prompt.
4. Open source is a great idea, but I personally believe this whole family of voice-driven, real-time AI assistants (which are definitely coming, no question about it) needs a lot of product refinement. Sharing our code now would give immediate value to developers who want to build the same thing, but not to end users. And to be honest, even though our code is thoughtful and efficient at solving specific challenges (like async orchestration of many background processes), I don't think it would bring a lot of value right now. So yes, once the product side has more data from users and the product itself is a bit more shaped - sure.
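To make points 2 and 3 concrete: this is not our actual code, just a minimal sketch of how the Node groq-sdk call could look, with the parsed CV text dropped into the system prompt and the interviewer's question as the user message. The model id, prompt wording, and variable names (cvText, question) are illustrative assumptions, not JobLander internals.

```typescript
// Minimal sketch, not the actual JobLander implementation.
// Node-style example; in the extension this would sit behind a backend.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

// cvText would come from the planned CV-upload feature,
// question from the live transcription of the interviewer.
async function interviewHint(cvText: string, question: string): Promise<string> {
  const completion = await groq.chat.completions.create({
    model: "mixtral-8x7b-32768", // Mixtral-8x7b as hosted by Groq (assumed id)
    temperature: 0.3,
    max_tokens: 200,
    messages: [
      {
        role: "system",
        content:
          "You are an interview assistant. Answer in 2-3 short bullet points. " +
          "Candidate CV:\n" + cvText,
      },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```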
Cool project but doesn’t this weaken the “assessing skills” part of the hiring process? (Or maybe we can call it cheating?) The end result would be that hiring managers would just end up relying more on lame signals like which school you went to or where you worked.
Look, surely it would, if the product became mainstream.
However, I recently suggested GitHub Copilot to a dev who was not really comfortable with filtering an array (an easy task for anyone who codes). This developer was on salary at a product company (I have no idea how he achieved that). Two weeks after I made the suggestion, he was fired. Copilot completely spoiled his underdeveloped cognitive skills and he started to work even worse than before. Does the GitHub Copilot team bear responsibility for what happened - I doubt it.
So yeah, tech is a double-edged sword, 100%. Use it wisely.
We're still considering the ethical issue and are open to discussion. IT interviews often quiz you on facts irrelevant to daily tasks, while HR folks like to check how well you can memorize answers to their classic questions. We believe recruiters should be assessing thinking and problem-solving skills. Also, using AI assistants to get more productivity out of everything you do will be one of the key skills.
Works pretty fast and the hints are kind of helpful. What LLM are you using?
Currently, it's Mixtral-8x7b. We are thinking about giving users an option to choose a model, but so far nobody has asked us to do so :)
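If we do add a model picker, it would probably just be a per-user setting threaded into the same Groq call. A hypothetical sketch (the model ids and Settings shape are made up for illustration):

```typescript
// Hypothetical per-user model selection; ids and Settings shape are illustrative.
type HintModel = "mixtral-8x7b-32768" | "llama3-70b-8192" | "gemma-7b-it";

interface Settings {
  model: HintModel;
}

// Chrome extension storage keeps the choice; defaults to Mixtral.
async function getModel(): Promise<HintModel> {
  const { model } = await chrome.storage.sync.get({ model: "mixtral-8x7b-32768" });
  return model as HintModel;
}
```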
The login button in the extension doesn't work.
A little bug report would be super helpful: https://forms.gle/zH6t7Tw2DhbcKCCL6
Hey! Thanks for the comment! Could you help us investigate the issue?