Launch HN: Defer (YC W23) – Zero-infrastructure background jobs for Node.js

202 points by wittydeveloper 3 years ago · 113 comments

Hi HN! We are Charly and Bryan, founders of Defer (https://www.defer.run/). Defer is a zero-infrastructure background jobs platform for Node.js developers. As a managed platform that brings modern development standards to background jobs (ex: multi-env support, zero-API design), we enable Node.js developers to build products faster and scale without effort and infrastructure knowledge.

Background jobs, while used in virtually all web applications (processing webhooks, interacting with 3rd-party APIs, or powering core features), did not benefit from the developer experience improvements that reached every other layer of the Node.js stack: quick and reliable databases with Supabase, or easy serverless deployment with Vercel.

Today, even for simple use cases, working with background jobs in Node.js necessarily requires some infrastructure knowledge—either by deploying and scaling an open source solution (ex: BullMQ) or using an IaaS such as AWS SQS with Lambdas, which comes with complexity and limited features (no support for dead letter queues, dynamic concurrency, or throttling).

At a large scale, you will need to solve how to handle rolling restarts, how to auto-scale your workers, how to safely deploy without interrupting long-running jobs, how to safely encrypt jobs’ data, and how to version them. Once deployed, your background job’s code lives in a separate part of your codebase, with its own mental model (queues and workers). Finally, most solutions provide technical dashboards which are not always helpful in debugging production issues, so you end up having to build custom dashboards.

Most companies we talked to try to handle those different aspects by building similar custom solutions, spending developer time that could have gone to user-facing features.

Bryan and I are technical founders with 10+ years of experience working at start-ups of all stages (e.g. Algolia, home of HN Search!), from tech lead to CTO roles. Like many developers, we got asked many times to work on background job stacks and invest time into tailoring and scaling them for product needs.

I even dedicated most of my time at Algolia to building a custom background jobs pipeline to power the Algolia Shopify integration: ingesting partial webhooks from Shopify, enriching them based on each customer's configuration, in FIFO order per shop, against Shopify's rate-limited API, for thousands of shops and the equivalent of 3 million jobs per day. Given the complex and unique product requirements of the Algolia Shopify ingestion pipeline, the only solution (at the time and in that context) was to build a custom background jobs stack combining Redis and Kubernetes.

While consulting with startups, we saw developers choose to keep slow API routes calling 3rd-party APIs synchronously rather than invest time in setting up background jobs. Looking at the recent wave of productive zero-infrastructure solutions in the Node.js ecosystem, we were surprised that the background jobs experience remained unchanged. We decided to build Defer so that working with background jobs, CRONs, and workflows matches the current standard of Node.js developer experience.

Inspired by the design of Next.js, Remix, and Netlify, background jobs in Defer become background functions that live in your application's code, with direct access to all configuration options: retry, concurrency, and more (https://docs.defer.run/features/retries-concurrency/), and no specific mental model to learn. Your background functions get continuously deployed from GitHub with support for branch-based environments, allowing you to test new background jobs in no time before safely moving to production.
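
For illustration, here is a minimal sketch of what such a background function might look like. This is a hedged example: it assumes the client package is `@defer/client` (per the defer.client repository) and option names roughly matching the retries/concurrency docs linked above; exact names and signatures may differ.

```
// defer/importContacts.ts — lives in the application's defer/ folder
import { defer } from "@defer/client";

async function importContacts(companyId: string, userId: string) {
  // ...fetch contacts from a 3rd-party API and import them
}

// Wrapping the function turns it into a background function and attaches
// its execution options (option names shown here are illustrative).
export default defer(importContacts, { retry: 5, concurrency: 10 });

// Anywhere else in the app: calling the wrapped function enqueues an
// execution on Defer instead of running the work inline.
// import importContacts from "./defer/importContacts";
// await importContacts("company_1", "user_123");
```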

Defer works for all kinds of Node.js projects, not only serverless ones. It does not require you to learn any new architecture or adapt your system design—you just turn your code into background functions using coding patterns you already know, such as map-reduce or recursion. Defer brings features such as configurable retries (advanced backoff options), throttling, and concurrency at the background job level, which other solutions either require you to implement yourself or simply do not offer. Finally, the Defer Dashboard is the only background jobs dashboard that lets developers quickly find executions based on business/product metadata (e.g. "show all executions for `user_id=123`") to debug product issues.

Defer's infrastructure, written in Go, is composed of 3 main components: a Build pipeline, a Scheduler, and a Runner. The Build pipeline enables us to build any Node.js project without requiring any configuration file (https://docs.defer.run/platform/builds/). The Scheduler relies on Postgres for persistent storage of your jobs (no risk of losing any)—all job data is encrypted—and on Redis as an atomic counter to handle features such as concurrency and throttling (https://docs.defer.run/platform/executions/). Our infrastructure runs on AWS EC2, leveraging auto-scaling groups and using the containerd API directly from Go.
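
To give a rough sense of the Redis atomic-counter idea mentioned above, here is a simplified TypeScript sketch of concurrency limiting (using ioredis). It illustrates the concept only and is not Defer's actual Go implementation.

```
import Redis from "ioredis";

const redis = new Redis();

// Try to claim a concurrency slot for a background function; returns true
// if the execution may start, false if the configured limit is reached.
async function tryAcquireSlot(fnName: string, limit: number): Promise<boolean> {
  const key = `concurrency:${fnName}`;
  const current = await redis.incr(key); // atomic increment
  if (current > limit) {
    await redis.decr(key); // over the limit: release the claimed slot
    return false;
  }
  return true;
}

// Called when an execution finishes (success or failure).
async function releaseSlot(fnName: string): Promise<void> {
  await redis.decr(`concurrency:${fnName}`);
}
```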

We run a progressive deployment approach to enable uninterrupted long-running jobs (some of our customers' jobs run for more than 5h) while releasing updates multiple times a day. Once your application is up and running, the Defer dashboard gives you all the essential information to operate background jobs: activity histograms, performance, and Slack alerting upon failures. The executions list comes with rich filters, allowing you to quickly find all the executions linked to a specific customer or other business metadata.

In short, we ensure that you get all the essential features, with the best developer experience, and with a fully managed infrastructure and observability tools so you can focus on building your product.

All of this would be meaningless without a free plan for small and side projects and usage-based pricing, so that’s what we offer: https://www.defer.run/pricing. If you want to give Defer a try, you can get started with a simple GitHub login, without any credit card information required, and our docs are at https://docs.defer.run.

We would love to hear about your experience with background jobs in Node.js and get your feedback on what we've built. We look forward to your comments!

jamesmcintyre 3 years ago

You mention managing webhooks as another tedious part of integrating with third-party APIs at scale, but I'm not seeing any consideration for this in the Defer docs.

I'm guessing that while I could use Defer to create a job that calls a 3rd-party API to register a webhook (later triggered by some activity on their side), I would still be building a webhook endpoint in my app to receive that request and then optionally hand that data off to another Defer job? In other words, Defer does not magically create and handle ingesting webhooks as a way to initiate a job, right?

Another question regarding memory / hard disk limits for executions (500 MB / 10 MB on hobby): does the 10 MB "storage" limit pertain to, for instance, the final build size of that function, or does that fall under memory? I ask because for my use case there are potentially scenarios where the Defer function would use numerous npm libraries and local files, so I'm curious what limits I'm looking at there. (EDIT: Just noticed the build limits; I'm assuming that's what would pertain to "bundle" size for bundled code?)

Lastly in limits does “concurrency” mean how many executions across the entire account can run at once or is that just for that execution?

Product looks awesome by the way, great job solving a real pain point for devs!

johtso 3 years ago

This looks really exciting! This is a space that's desperate for innovation. Will be interested to have a play and see what the dashboard looks like. Other options in this field like Temporal have a high learning curve and DevOps cost. Something like Cloudflare Durable Objects is also a very tempting option but requires rolling your own visibility / dashboard stuff.

I'm guessing this is a "just make sure everything you do is idempotent" approach? Is there any kind of state between attempts?

Also how would you have a task with a unique id / parameter so that when you call it it will only be scheduled once and returns the result of the existing scheduled job?

Another question, how would you have custom retry logic in your job without blocking your concurrency? Or if using the platforms retry mechanism, how can you run some logic when the job fails? Can the job know what attempt it is on? Is there context of any sort? The only solution I can see from the docs would be to execute jobs recursively but that seems like it would be ugly and mess up the observability? I'm looking for a feature like Temporal's sleeping that actually does a delayed reexecution of the job (I think).

Also I can't get the discord link to work, it seems to link to a discussion rather than being an invite link (I'll give it another try on desktop)

  • wittydeveloperOP 3 years ago

    Thank you!

    Right now, you will need to ensure that your background functions are idempotent, but we plan to introduce an API for “retried execution” so you can clean up. Also, by default, retry is an opt-in configuration option to avoid any unwanted side effects; the same goes for concurrency.

bluelightning2k 3 years ago

If we all have at least one cloud, why would we separate out this one piece and run it in Defer instead?

I don't mean to be unkind; it's a legitimate question, not sarcasm.

It just seems like a subset of Cloud Functions or a subset of Lambda - without being connected to the rest or any of our existing workflow, monitoring, secrets, etc.

  • bayesian_horse 3 years ago

    You already set up that workflow. That can be a major PITA.

    Also I don't think this is as much about the "compute" of the background job as much as the nodejs app saying "take care of this for me please, got this? Don't bother me again until I ask for the result".

    Endpoint handlers are flaky. They aren't guaranteed to run to completion even through no fault of their own. They should ideally have short and predictable run times, otherwise scaling is much more painful. That's why people separate out certain stuff into those background jobs. And getting that just right, scale it and so on can be quite a bit of work, but yes, something like Cloud Functions or Lambda would serve very similar functions. If you set up all the CI/CD pipelines to make that work.

    • lytefm 3 years ago

      > If you set up all the CI/CD pipelines to make that work.

      I've been "abusing" GitLab CI pipelines a lot for running periodic background jobs that I'd like to have separated from the "regular" backend and without worrying much about deployment.

      It supports Cron Syntax for scheduling, manual re-runs, provides secret management and download of artifacts, alerts me on failure and allows me to easily use the docker images in our registry.

      Sure, I'm not storing anything in a database or calling the pipeline via an endpoint. But if that's needed, it should probably be "part of the regular backend".

  • wittydeveloperOP 3 years ago

    We primarily focus on helping Node.js developers to build products without investing time in infrastructure while keeping it configurable.

    As @thdxr mentioned, our goal is also to provide top-notch queueing/scheduling features that are complicated to achieve with Lambda/SQS/Cloud Functions, such as dead letter queue support, throttling, or a product-oriented dashboard. Along the same lines, we plan to provide better integrations, for example on secrets management, as you mentioned, by integrating with Doppler, AWS Secrets Manager, and more; the same goes for linked deployment pipelines.

  • thdxr 3 years ago

    think we see a lot of unbundling happening right now (I work on https://sst.dev/ so see a lot of different setups)

    a lot of times these specialized services can really nail a more narrowly scoped feature set to the point where it's worth having stuff running outside of your primary cloud

    it always depends, for me personally that bar is very high because like you said, now you have to copy paste secrets into yet another service among other things

    some services are so good, like planetscale, especially compared to the cloud option that I'm willing to do that

consequential 3 years ago

What gets run on your infrastructure and what gets run locally?

This is important as I'm not sure I'd want to trust yet another service provider with my customers' data, especially one that is VC-backed and therefore inherently less trustworthy in the long term than others.

  • wittydeveloperOP 3 years ago

    When deployed in non-local environments, background functions get executed on our infrastructure with their (securely stored) arguments, spawned in a dedicated, isolated, temporary container.

    For our users concerned about data locality, we recommend pushing minimal data as arguments (ex: ids or external ids) and fetching the data during the execution on our side, if needed, through a dedicated SSH tunneling setup. Once an execution is done, its associated isolated container (which gets a dedicated VPC and disk) is destroyed permanently. We will also provide on-premises solutions for Enterprises.

    In a local environment (dev), background functions run completely synchronously and locally; no call is made to Defer.

netfortius 3 years ago

This reminds me of the jobs where I had to come in "post-mortem" to identify the source(s) of, and fix, performance, outage, cost and - most importantly - security issues, after "agile" work in some DevOps orgs which had no consideration for / interest in at least understanding what was "underneath". In my opinion, adding yet another level of abstraction, and asking the creators of the high(est) tier to not [have to] know what's running their "stuff", is like giving ChatGPT the task of architecting your next app(s).

  • bayesian_horse 3 years ago

    Yes, if some projects depend on Defer and that startup blows up because it's not profitable enough or by one of the myriad reasons why most of these companies fail, that's going to be majorly painful.

    I mean, basically anyone paying for this service would have to scramble to find and implement another solution once Defer shuts down.

  • gearnode 3 years ago

    I can see your point; sometimes, there is little consideration/interest in such issues. We created Defer to try to fix that gap and make it easier to understand and identify problems related to your long-running tasks, not the other way around.

brap 3 years ago

I only skimmed through the landing page so maybe I missed it, but the value proposition isn't clear to me.

If you're going to `await` for the contacts import to finish anyway, what's the advantage of separating the import logic from your main API? It's blocked, so might as well be part of the same service, no?

I could see maybe if the API returned right away with a pointer the user can later poll for task progress, but it doesn't seem like this is the case?

Side note: I like this type of web design, is it an in-house job or did you hire someone external?

  • gearnode 3 years ago

    > I only skimmed through the landing page so maybe I missed it, but the value proposition isn't clear to me.

    > If you're going to `await` for the contacts import to finish anyway, what's the advantage of separating the import logic from your main API? It's blocked, so might as well be part of the same service, no?

    > I could see maybe if the API returned right away with a pointer the user can later poll for task progress, but it doesn't seem like this is the case?

    As you suggest, the API returns right away with a pointer the user can later poll to get the function result. Also, using `await` ensures the function is enqueued on our system.
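
    For illustration, the enqueue-then-poll flow could look roughly like this from the caller's side. This is a hedged sketch: the `getExecution` helper and the returned shape are assumptions for the example, not necessarily the real client API.

    ```
    import { getExecution } from "@defer/client"; // hypothetical polling helper
    import importContacts from "./defer/importContacts";

    // Resolves once the execution is accepted by Defer, not when it finishes.
    const { id } = await importContacts("company_1", "user_123");

    // Later (e.g. from a /status API route), poll the execution by its ID.
    const execution = await getExecution(id);
    console.log(execution.state); // e.g. "created" | "running" | "succeeded"
    ```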

    > Side note: I like this type of web design, is it an in-house job or did you hire someone external?

    Happy to hear this! We are working with a friend who is a professional designer.

    • brap 3 years ago

      I see, makes perfect sense then. I would clarify that in the example (maybe add another snippet for querying the status).

      If your friend is open for business, maybe give them a shoutout ;)

      • gearnode 3 years ago

        > I see, makes perfect sense then. I would clarify that in the example (maybe add another snippet for querying the status).

        It is true that we don't elaborate on this point on our landing page. We will take that into consideration, thanks.

        > If your friend is open for business, maybe give them a shoutout ;)

        Again, glad you like his work! He's not available at the moment, but we will stay in touch :)

    • bayesian_horse 3 years ago

      That got me stuck too and it wasn't obvious.

      I used to use Celery a lot and find Hangfire lacking. To get the same value out of Defer, you'd need to "listen" to those handles.

      And listening may actually mean writing another local background job...

  • johtso 3 years ago

    Yeah it's not clear how you uniquely reference a task, a super core feature and something that the Temporal documentation for example covers very early on.

    • gearnode 3 years ago

      When enqueuing an execution, you get back a unique ID referencing it. We will make that clearer in our documentation, thanks.

      • johtso 3 years ago

        Often you want to specify your own unique id based on some property of the job, like a transaction reference, or just a unique combination of parameters. This means you can later refer to it without having to store the reference somewhere. Is this possible?

        • gearnode 3 years ago

          Our unique IDs are based on the KSUID specification. In addition, we are soon releasing the tag feature, which will enable you to store your references on your Defer executions to identify them your way.

          • johtso 3 years ago

            Just ideally the dev experience should be `result = await somethingThatMayHaveBeenCalledBefore(id=txnref, params)`

            You shouldn't need to first do a check to see if the task already exists using a different api, and then choose whether or not to run a new task.

            At least this is my preference.. a-la Durable Objects.

            If you don't specify a custom unique id when calling the task it would then be treated as a task that can (and would make sense to be) run multiple times (i.e. not idempotent).

            • gearnode 3 years ago

              Completely agree with you on the importance of idempotency. We will release our implementation shortly with a pattern close to the following:

              ```
              const somethingThatMayHaveBeenCalledBefore = idempotent(myFunc, txnref);
              const result = await somethingThatMayHaveBeenCalledBefore(params);
              ```

  • ntonozzi 3 years ago

    Designers sure love gradients+noise. And dang does it look good.

swyx 3 years ago

livestreamed my run thru the docs and gave first impressions as someone quite familiar with the background jobs and workflows as code space: https://www.youtube.com/watch?v=iGccpHaB1hA

jsnk 3 years ago

Not fully related to this post but ...

I think there's a huge opportunity for whoever can create a Spark- or Flink-like framework that works in Node.js or at least has a Node.js API.

It's a huge challenge to create a tool like that, but I believe there's an appetite for such a tool. The JS community is huge. JS is a flexible language that can conceptualize distributed computing well. The JS community and developer ecosystem generally place a stronger emphasis on developer experience, outreach and education than the communities around existing tools.

Given how economically valuable companies that exist around big data distributed computing are, I think a company that can create an open source Node.js distributed computing tool will be like the next MongoDB, Elastic and Databricks.

Aeolun 3 years ago

Why am I paying more for concurrent executions? I'm already paying for more processing time. Why does it matter if I execute 10x10 in sequence or 100 in parallel?

lucasfcosta 3 years ago

I'd be curious to hear about how you've got your first customers. Would you be willing to share that?

I can't think of a top-down approach working for technical folks, especially cold outreach.

  • wittydeveloperOP 3 years ago

    Sure!

    We got our first customers from both channels: network (sales) and inbound (Twitter discussions, etc.). I agree that top-down does not work well for a newcomer but gets better at a later stage.

bastawhiz 3 years ago

I don't understand how the example on this page works:

https://docs.defer.run/features/delay/

Does the "defer" directory make the exports of those modules special? Because the defer function isn't called until the endpoint is hit, there's no way to really know (without static analysis, which is a can of worms) which functions will actually get deferred.

Moreover, how does the defer function know, at runtime, which function on the Defer platform needs to run? I could pass any arbitrary function to defer(), which surely can't be run externally. How does the defer platform know which function it can see (by importing my code) is the function my application has called defer() on?

Without knowing exactly the constraints of this, I have a lot of FUD.

  • wittydeveloperOP 3 years ago

    Here’s how our Builder works: When a commit is pushed, we fetch your application’s repository from GitHub and compile (if TypeScript) all the files in the first `defer/` folder found. Then, we require each of those files to retrieve the metadata exposed by the `defer()` helper on the `default` export (whether it is a CRON function or not, the function name, concurrency, retries, etc.).

    When a background function gets a call from your application, the `defer()` wrapper intercepts this call and pushes an execution to the Defer API with the function’s name and serialized arguments.
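
    Conceptually, the wrapper can be pictured like this. This is a deliberately simplified sketch of the mechanism described above, not the real client code; `enqueueOnDeferApi` is a hypothetical stand-in for the call to the Defer API.

    ```
    // Hypothetical stand-in for the HTTP call to the Defer API.
    async function enqueueOnDeferApi(name: string, args: unknown[]) {
      return { id: `exec_${Date.now()}`, functionName: name, args };
    }

    function defer<A extends unknown[]>(fn: (...args: A) => Promise<unknown>) {
      const wrapped = async (...args: A) => {
        if (!process.env.DEFER_TOKEN) {
          return fn(...args); // local/dev: run the function in-process
        }
        // Deployed: intercept the call, serialize the arguments, enqueue it.
        return enqueueOnDeferApi(fn.name, args);
      };
      // Metadata the Builder reads when it requires the `default` export.
      return Object.assign(wrapped, { __metadata: { name: fn.name } });
    }
    ```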

    I hope this makes things clearer, let me know!

    [update: grammar]

johtso 3 years ago

How do I test my jobs?

Is there a local runtime a-la miniflare?

Or do I have to deploy and then run my unit tests against your APIs?

  • gearnode 3 years ago

    Locally (without the `DEFER_TOKEN` environment variable), your functions run synchronously, but apart from this, you get the same API behavior (argument serialization, execution id, etc.).
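
    For example, a plain unit test can exercise the function body directly. A small sketch, assuming vitest as the test runner and an `importContacts` background function like the one on the landing page:

    ```
    // importContacts.test.ts — with no DEFER_TOKEN set, the call below runs
    // the function body synchronously instead of enqueuing it on Defer.
    import { test } from "vitest";
    import importContacts from "../defer/importContacts";

    test("imports contacts inline in dev", async () => {
      // The test passes if the function body runs without throwing.
      await importContacts("company_1", "user_123");
    });
    ```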

    • johtso 3 years ago

      This is fantastic! I was hoping it would "just work" and seems like it does! Unless I missed it I think the docs should explain this. It would definitely be a selling point for me compared with having to run a local server like with Temporal.

      • bayesian_horse 3 years ago

        That was one of my questions too. Without local execution as a fallback there would be all sorts of problems with this, especially in development.

KRAKRISMOTT 3 years ago

Congrats on the launch! Temporal.io needs more competition. They won't even talk to you unless you sign a sales contract first.

chronicom 3 years ago

A couple of days ago, I was actually thinking about the impact long-running analytical queries would have on the costs when using edge functions that are billed by GB-s, so this is a rather interesting solution.

Just a small note that on the landing page, the snippet under Define has the file path `defer/helloWorld`, while the snippet in Enqueue is importing `importContacts` from `defer/importContacts`. As I was reading it, I thought it seemed as if the Enqueue snippet was supposed to be importing from the snippet written under Define. Just thought I'd mention it in case that's what it's supposed to show.

  • gearnode 3 years ago

    Thanks for your interest! Also thanks for the side note on the snippet, we just fixed it.

fernandopj 3 years ago

This is an awesome value proposition. Congratulations on launching!

I have a project that I've been dreading on how I would tackle its background processing. It is a long-running process that should start and scale based on a user starting his work. I'm AWS certified and still wasn't thrilled about setting this up. Defer would solve this for me.

I can see myself suggesting this solution to clients who want to run complex jobs without the hassle from cloud infra.

codegeek 3 years ago

This may be a bit of a simplified take, but is this sort of like Lambda functions / serverless functions as a service where you can also build the code (Node.js only)?

  • wittydeveloperOP 3 years ago

    The Builder part is clearly a differentiator; however, background functions are different from Serverless functions on many points - on top of being natively integrated with our Scheduler.

    First, they allow a maximum execution time comparable to “pure server” environments, instead of 15min.

    Then, the code-first approach allows configuring the execution parameters (concurrency, retries, and more) from the function’s code and enables you to write workflows by applying well-known code patterns such as recursion or map-reduce (calling child background functions is a workflow); see the sketch at the end of this comment.

    Finally, you can write a background function that requires any kind of dependencies, whether they be internal, external, or even native.
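
    As a hedged illustration of the map-reduce / child-function point above (assuming the `@defer/client` wrapper and illustrative file names), a workflow can be plain code in which one background function fans out to another:

    ```
    // defer/processAllShops.ts — parent background function
    import { defer } from "@defer/client";
    import processShop from "./processShop"; // another defer()-wrapped function

    async function processAllShops(shopIds: string[]) {
      // Each call enqueues a child execution with its own retry/concurrency
      // settings; the parent just awaits the enqueues like ordinary code.
      await Promise.all(shopIds.map((shopId) => processShop(shopId)));
    }

    export default defer(processAllShops);
    ```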

    • johtso 3 years ago

      Oh interesting! So say you wanted to call a 3rd-party API in your function, with some exponential backoff retry, without blocking your concurrency allowance, would you model that as recursively calling the same function with an attempt argument?

      Does this look nice in the observability layer?

      • wittydeveloperOP 3 years ago

        Yeah, that will be the way to go. This is actually how one of our customers achieves this behaviour for polling analytics.

        Right now our Executions list is not ideal for such a pattern, but we will soon release filtering based on arguments, which will help get all the executions linked to a specific sequence.
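
        For reference, the recursive pattern discussed here could look roughly like the sketch below. It assumes a `delay()` helper along the lines of the /features/delay docs; the exact signature and duration format may differ, and the endpoint URL is made up.

        ```
        // defer/callThirdParty.ts — recursive retry with exponential backoff
        import { defer, delay } from "@defer/client";

        async function callThirdParty(payload: object, attempt = 1): Promise<void> {
          try {
            // Node 18+ global fetch.
            const res = await fetch("https://api.example.com/endpoint", {
              method: "POST",
              body: JSON.stringify(payload),
            });
            if (!res.ok) throw new Error(`HTTP ${res.status}`);
          } catch (err) {
            if (attempt >= 5) throw err;
            // Re-enqueue the same background function with a growing delay
            // instead of sleeping in-process and holding a concurrency slot.
            const retryLater = delay(callThirdPartyFn, `${2 ** attempt}m`);
            await retryLater(payload, attempt + 1);
          }
        }

        const callThirdPartyFn = defer(callThirdParty);
        export default callThirdPartyFn;
        ```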

bayesian_horse 3 years ago

Interesting you can make money from this sort of thing.

  • rozap 3 years ago

    Every time a crazy workaround for node.js's shortcomings arises I think about how hardware engineers must feel. Your magnum opus is giving people 2, 4, even 16 core supercomputers and then the software engineers come along and start using this hot new single threaded tech called node. Sure, run multiple processes and then you get to spend most of the cycles passing stuff around. I think if I was a hardware engineer I'd want to quit and become a potato farmer.

    • bayesian_horse 3 years ago

      Nobody is stopping you from running multiple node workers on the same machine. Singlethreadedness actually makes things easier, and even with my limited experience in Dotnet I have already had issues with thread-safety...

  • margorczynski 3 years ago

    How do you know they're making money?

bluelightning2k 3 years ago

I'm kind of perplexed by this.

It's very clearly infrastructure. Putting "zero infrastructure" in the title and using the same API as Lambda invoke etc. doesn't make it true.

I also get double the risk of downtime, security as it's a third party running on top of AWS vs. just running on AWS.

The API looks nice - but no mention of typescript at all in this post or the website, so presumably type-safety isn't the thing.

  • gearnode 3 years ago

    > It's very clearly infrastructure. Putting "zero infrastructure" in the title and using the same API as Lambda invoke etc. doesn't make it true.

    > I also get double the risk of downtime, security as it's a third party running on top of AWS vs. just running on AWS.

    You are right. We provide an infrastructure service. We mean by "zero infrastructure" that you don't have to implement and/or manage your own. Our service could run on a platform other than AWS, though, as we are not relying on AWS-only specific services (e.g., lambda, SQS, etc.). Of course, like any other cloud or on-premises service, we could have downtime.

    > The API looks nice - but no mention of typescript at all in this post or the website, so presumably type-safety isn't the thing.

    Glad to hear this. Although we don't mention it, our client is written in TypeScript. If you want to know more, you can check out the code: https://github.com/defer-run/defer.client.

    • johtso 3 years ago

      I do wish people had less of a tendency to be shy about Typescript in their product documentation. It's a selling point, show it off! Examples end up being a little more verbose, but they actually end up being useful for people that are writing Typescript (probably a closely overlapping group with those that care about their background jobs not disappearing).

      • Aeolun 3 years ago

        I use included Typescript types on npm as a sort of unofficial filter for what libraries I want to use now.

    • bluelightning2k 3 years ago

      Simple type safety from front end to long running jobs is actually a big deal. You should highlight this.

      Achieving type safety normally involves a shared interfaces folder and has to be specifically implemented.
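
      For instance, sharing one interface between the API layer and the background function keeps job arguments typed end to end. A hedged sketch, assuming the `@defer/client` wrapper and illustrative file names:

      ```
      // shared/interfaces.ts — the shared interfaces folder mentioned above
      export interface ImportContactsJob {
        companyId: string;
        userId: string;
      }

      // defer/importContacts.ts — the argument type travels with the function,
      // so call sites elsewhere in the app are type-checked against it.
      import { defer } from "@defer/client";
      import type { ImportContactsJob } from "../shared/interfaces";

      async function importContacts(job: ImportContactsJob) {
        // ...import contacts for job.companyId / job.userId
      }

      export default defer(importContacts);
      ```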

pseudostem 3 years ago

>As a managed platform that brings modern development standards to background jobs (ex: multi-env support, zero-API design), we enable Node.js developers to build products faster and scale without effort and infrastructure knowledge.

Can you please elaborate more on the modern development standards bit? I've always been intrigued by changes on the development forefront since the early '90s.

  • wittydeveloperOP 3 years ago

    Sure. Please note that I was referring to “modern development standards” in the Node.js ecosystem.

    Node.js, with the flexibility of the JavaScript language, allowed the rise of great abstraction and other domain-oriented API designs, a bit like Ruby on Rails did with Ruby.

    The arrival of the React Server Components pattern enabled the isomorphic approach (first applied to mobile and front-end apps) to reach the server side of things, with patterns such as server-side rendering popularized by frameworks like Next.js and Remix.

    Those new patterns and abstractions allow Node.js developers to move faster while building more complex applications to match users’ requirements: real-time, performant apps, richly integrated with third-party products.

    Beyond code, those new coding habits came with new products such as Vercel or Supabase that help them to get the infrastructure done in no time, without any DevOps knowledge (good article on this topic: https://vercel.com/blog/framework-defined-infrastructure).

    “modern development standards”, applied to Node.js, do not only apply to coding experience and productivity (ex: the rise of monorepos, TypeScript, SSR) but also to enabling developers to configure their infrastructure from the code.

    [update: typo]

chipgap98 3 years ago

This looks awesome. I'm really excited about trying this.

I'm curious who your target customer is for this? As someone who likes to work on side projects, this seems ideal for me, but I would imagine people will graduate to more mature solutions once they have decent volume. The concurrency limit is something that seems especially concerning in that regard.

  • wittydeveloperOP 3 years ago

    Thank you, good question!

    We want to enable developers working on a side project to build apps faster and new startups to have a faster time to market by avoiding spending time on infrastructure and building custom layers of API and dashboard.

    We also want to help bigger companies take back control of their custom background jobs stack and be able to ship new features or sub-products quickly. Again, for growing companies, we provide custom pricing, which comes with degressive pricing and custom concurrency to support, for example, high-throughput needs.

fswd 3 years ago

Why a company with VC backing for something so simple? I am confused why somebody would think this is so important that they would need funding. Anybody with 1-2 years of development experience should be able to hack something up with Redis or even just crontab and flag files.

I use postgraphile graphile-worker https://github.com/graphile/worker for this.

For example, every month we roll over credits. For each user, when they signed up, 30 days from that, check. If they are available for roll over, reset and email. Then we have drip campaigns for alerts like running low on credit.

Also, if you upgraded your account, then pause payment, it uses a worker to schedule the date they are paid up to run the SQL to downgrade. With a simple API called 'addJob' that looks for a JavaScript file in a folder called task.

  • Aeolun 3 years ago

    To be fair, background jobs in Node.js kind of suck right now. The existing solutions aren't really robust enough to use without constant babysitting (at least the ones I've tried, for the volume I'm sending through them).

  • revskill 3 years ago

    Because I guess you've never worked in a company with VC backing ;)

  • anon223345 3 years ago

    Most corporate developers don’t really, really suck, but they’re really bad at new things

    I could see this selling fast to corporate

  • programmarchy 3 years ago

    With your solution you need to manage infrastructure. Why not abstract that away?

    • atleta 3 years ago

      Because it's one more service that you have very little control over and one more service that you have to manage over a web interface or (maybe) their own command line tool.

      It's on a separate network so they may go down and cause outage in your own app, go out of business, etc. (Yes, your IaaS service provider can go down as well, but that's a much larger operation and something that's very-very likely to have a much better uptime than what you could do on-site. And whatever the case is with that, it's still on top of that.)

      Abstractions aren't free. (BTW, I don't see it as an abstraction, it's simply outsourcing. You'd use some kind of API anyway even if you hosted the job queue yourself.)

      • programmarchy 3 years ago

        "It may go down" vs. "Save X hours of dev"

        Depends what you're building, but likely a good trade if it's an MVP.

        At $30/mo you're already ahead if it saves you 1 hour of dev time.

        Most of the time, I don't want control over my infrastructure (boring, low value); I'd rather focus on building features (fun, high value).

        • atleta 3 years ago

          Sure, it might be good for an MVP where you don't care much about reliability, developer experience, and how long you can expect the service to stay in business.

          If these start to matter, even picking a service from a list of possible providers takes time. Learning their tools, their APIs takes time, etc. OTOH if you have a project template, a setup you know, doing an MVP with that can be really cheap/quick. If not, I can see how this makes sense. But I'm not sure you can have a sustainable business just by serving people building MVPs.

        • bayesian_horse 3 years ago

          And when I host stuff myself, it still goes down from time to time and I have to put in additional time to fix it.

          You need to put up significant resources in order to beat a dedicated team running this. Many companies can do that. Many can't. Even Twitter increasingly runs into problems with site reliability...

          • atleta 3 years ago

            I never had any issue where the task queue would go down but not the rest of our infrastructure. If it's a large system (which is definitely relative), then it can become a job of its own to keep this part of the system running. But by then, I guess, you'll have the resources too.

            Also, you don't know the team size behind this service, as I mentioned in my original comment.

          • Aeolun 3 years ago

            At least when I host this I’m in a position to fix it, instead of twiddle my thumbs and hope they’ll get around to fixing my environment soon.

            It’s just scary to hand off such an important part of your system to a third party.

  • intelVISA 3 years ago

    Simpler is better for VC funding: easier to demonstrate the value prop.

  • andrewmcwatters 3 years ago

    *Pointing Rick Dalton* This is the bookmark comment.

    Something something Dropbox, something something rsync.

    Charly, Bryan consider yourself flattered.

buglungtung 3 years ago

I have two questions

1. Does the retry function work based on the logic that the same input will always return the same output, even if the code uses non-deterministic methods like "new Date()" or "Math.random"? If not, your retry logic may produce unpredictable results and therefore not be safe.

2. According to the article "How Defer Works" (https://docs.defer.run/platform/how-defer-works/), you will be running our code on your platform. How do you ensure the protection of sensitive data during the execution process? For example, how will you prevent access tokens or user/password information for the database from showing up in logs due to exceptions or bad configurations?

  • wittydeveloperOP 3 years ago

    1. Yes, we currently provide a simple approach to retries but are already working on providing features for idempotency.

    2. All sensitive data (tokens, env vars) is encrypted on our side; however, we don't prevent users from printing those values in the logs yet - it's planned for our next releases.

    • buglungtung 3 years ago

      I see. I hope you have a great journey. I used to want to implement a system like that for my import/export tasks because my workload heavily involves importing and exporting CSV files, but the deadline defeated me ;) I had no time to design a system like that. By the way, I really love how simple the Defer API is to use.

anikdas 3 years ago

I think BullMQ deployed on Kubernetes gets people off the ground quite quickly and scales out quite well without needing to worry about fine-tuning the solution. RabbitMQ has also worked pretty well for us, handling around 150k messages per minute with the delayed exchange enabled while running in Kubernetes without any tweaks. As a dev working on a solution used in the customer support industry, I am more worried about:

1. Data locality

2. Privacy

3. SLAs and Uptimes

While SaaS solutions like these can get us to MVPs/PoCs fast, a home-grown solution is preferable when SLOs are tight and security is a huge concern.

[edit: formatting]

  • wittydeveloperOP 3 years ago

    I agree with you; as mentioned in the post, I've built solutions similar to yours at Algolia, using Redis and Kubernetes. However, not all developers know or want to run Kubernetes or RabbitMQ, and not all companies can afford to invest the time to set them up and manage them (https://docs.bullmq.io/guide/going-to-production).

    We address privacy by encrypting all the data on our side (doing a second pass with a symmetric PGP key for tokens such as GitHub tokens, and environment variables) and advise companies that want to keep their data on their infra to push as little data as possible in arguments while leveraging a dedicated SSH tunneling setup between our infra and theirs.

    When it comes to the SLA/up-time of home grown, my POV would be that, again, achieving good results on those often requires SRE engineers, which is an investment.

benatkin 3 years ago

Checks to see if it uses export or module.exports in the example

OK, tell me more...

  • wittydeveloperOP 3 years ago

    We support both CJS and ESM but showcase ESM examples in our docs and on our landing page.

    • benatkin 3 years ago

      Not about that haha. I mean I'm still interested in Defer, because if your example had archaic JS I'd think you didn't get me as a JS developer. :)

pphysch 3 years ago

Where are the encryption keys stored? Is this CronAAS suitable for sensitive workloads?

  • gearnode 3 years ago

    > Where are the encryption keys stored?

    We use AWS KMS to perform data-at-rest encryption on all data we store. We perform another encryption pass for sensitive data (e.g., Github Token, Secrets, etc.) before storing the data with a symmetric PGP key.

    > Is this CronAAS suitable for sensitive workloads?

    It should be. Could you elaborate?

krashidov 3 years ago

This is awesome! Congrats on shipping. Can't wait to use this in a side project

g_delgado14 3 years ago

Your logo is awfully close to Sequoia Capital's logo: https://www.sequoiacap.com/

  • andrewmcwatters 3 years ago

    Sequoia: Up and to the right

    Defer: Down and to the right

    Not a good look. Calling it now.

    • benatkin 3 years ago

      Either you're mistaken or that's the quickest ninja-edit I've ever seen.

      Sequoia's has a diagonal line through it. Defer's does not.

      • andrewmcwatters 3 years ago

        I'm teasing of course. :)

        • benatkin 3 years ago

          Haha, I fell for it. Nice :)

          Thanks for pointing that out g_delgado14 but I still think it could work as their logo, because with a simple logo it's hard to not make it look a little bit like some other logo.

futhey 3 years ago

This is really amazing from my perspective: I have few or almost no background jobs that would justify building my own flavor of this setup. Love the API design around this.

baudic_julien 3 years ago

Very useful. We are using Defer to run our background tasks.

numinoid 3 years ago

Not sure if this is intentional, but your blog posts seem to be ordered alphabetically rather than by time. Having that as default behavior feels a bit weird.

  • wittydeveloperOP 3 years ago

    The first blog post is actually pinned but an icon or label to highlight this is missing, you're right! I'll fix it soon.

abdellah123 3 years ago

I don't see myself using a non open source solution for such a critical part of my app

Why not open source it and use an open core model?

  • bayesian_horse 3 years ago

    I'd guess their main work and IP at the moment revolves around the devops and autoscaling parts. Defer's main value proposition seems to be in the reduced workload on developers and their infrastructure team (if they have any).

    If you are able to run and maintain a continuously integrated microservice on a Kubernetes cluster (or something similar, maybe serverless), then you really don't need Defer, as far as I understand it. There's already a lot of CI/CD and autoscaling open sourced.

    I haven't tried it yet, but I guess most of the "runtime" is already open sourced, otherwise the background jobs can't run locally...

carlosdp 3 years ago

Really nice API design! Also love the way you're handling workflow jobs, super intuitive. Definitely going to try this out!

debarshri 3 years ago

Is it comparable to Sidekiq, or to a more complex orchestration framework like Temporal?

  • wittydeveloperOP 3 years ago

    It is actually comparable to both.

    We aim to provide a similar feature set as Sidekiq (throttling, unique jobs) but with a complete hosted solution.

    While Defer and Temporal can both be used for writing background jobs, workflows, and CRONs, there are some core design differences.

    Temporal has been created as the Kubernetes of highly distributed systems, enabling developers to write code that runs on multiple regions without worrying about possible termination of the program and interruption of workflows spanning across multiple steps.

    While Temporal can be used for background jobs, workflows, and CRONs, its main goal is to ensure that highly distributed tasks will reliably be executed. That's the main reason why the Temporal API is so verbose, with many concepts to deal with.

    Defer, on the other hand, provides comparable reliability while focusing on the developer experience.

    You can write workflows, CRONs, and jobs that run for hours without worrying about them being terminated.

    All this, with a simple API that enables you to write some workflows (background functions calling other background functions) in plain TypeScript, with no mental model to fit in.

pastacacioepepe 3 years ago

Is this built on top of Bull? Interesting idea.

  • gearnode 3 years ago

    Our product is not built on top of Bull. Instead, as briefly explained in the post, our scheduler is coded in Go and leverages Docker, PostgreSQL, and Redis.

danielmakestech 3 years ago

Wow, this is exactly what I was looking for!

mart1 3 years ago

Awesome! Love it!

andrewski77 3 years ago

this is awesome, congrats on the launch!

kundi 3 years ago

Yet another attempt to use JavaScript for something it was never meant to do? Can we come up with a better approach and better languages to finally fix the Node issues and use the appropriate tech for this purpose?
