Show HN: Sandglass – Distributed, scalable, persistent time-sorted message queue

github.com

188 points by celrenheit 8 years ago · 71 comments

menacingly 8 years ago

It raises my eyebrow when a highly available message queue offers either exactly-once delivery or reliable time sorting.

I'm not saying I wouldn't give this project a closer look, but I would much rather a product make it painfully obvious what compromises were made to offer its availability.

My instinct is: either you need strict constraints on ordering and delivery, in which case you use rabbit, or you need at-least-once-no-matter-what semantics, in which case you use something else and make your app less fragile.

manigandham 8 years ago

Looks interesting; it might be a good replacement for the more enterprise message/queue systems that have all the typical ack/redelivery/scheduling features, as seen here.

It's worth mentioning the new Apache Pulsar messaging system which can replace Kafka with pub/sub and queueing semantics while providing better scalability and per-message acks, probably better suited to those who want a combined system.

  • arcbyte 8 years ago

    I checked out Pulsar and got completely lost in the multiple hierarchical Zookeeper clusters.

    • manigandham 8 years ago

      Pulsar supports multiple regions natively which requires separate Zookeeper clusters for each region to manage the global and local cluster state (ie: replicating messages from DC1 => DC2 but not DC3).

      If you don't need/want that, then it's just a single ZK cluster, as with Kafka or anything else. ZK + Brokers + Bookies = Pulsar.

  • aphistic 8 years ago

    I hadn't heard of Pulsar before but just checked it out as we use Kafka. Does it support replaying messages from an offset (or even better, a time) like Kafka does? I looked at the docs but it didn't seem like it did.

  • danellis 8 years ago

    So many Apache projects, and with a lot of overlap, too. It would be nice if someone made some kind of infographic that gives a high level overview of them all.

adrinavarro 8 years ago

This looks promising.

I'm currently dealing with a queuing-related issue.

I have a series of workers running across servers that consume from a queue and run tasks.

Often, these tasks die mid-execution (but can be resumed by any other server). So the queue is a database, and running tasks "touch" a timestamp in the database while they are still executing. When a document hasn't been updated for a while, the "consumption query" 'redelivers' it to an available server listening to the "queue".

Of course, this is subpar, but we haven't yet come across an elegant (and not too over-engineered) way to replace this.
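A minimal sketch of this claim-and-heartbeat pattern, assuming a SQLite-backed task table (the table layout, staleness window, and function names are all illustrative, not the poster's actual schema):

```python
import sqlite3
import time

STALE_SECS = 300  # redeliver if no heartbeat for 5 minutes (illustrative)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, "
           "claimed_by TEXT, heartbeat REAL)")
db.execute("INSERT INTO tasks (payload) VALUES ('migrate-shard-1')")

def claim(worker):
    """The 'consumption query': grab one task that is unclaimed or stale."""
    now = time.time()
    row = db.execute(
        "SELECT id, payload FROM tasks "
        "WHERE claimed_by IS NULL OR heartbeat < ? LIMIT 1",
        (now - STALE_SECS,)).fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET claimed_by = ?, heartbeat = ? WHERE id = ?",
               (worker, now, row[0]))
    return row

def touch(task_id):
    """Long-running workers periodically refresh their heartbeat."""
    db.execute("UPDATE tasks SET heartbeat = ? WHERE id = ?",
               (time.time(), task_id))
```

Note the SELECT-then-UPDATE here is racy with multiple concurrent workers; a real RDBMS deployment would claim atomically (e.g. `SELECT ... FOR UPDATE SKIP LOCKED` in Postgres, or the READPAST pattern mentioned below).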

  • taspeotis 8 years ago

    > we haven't yet come across an elegant (and not too over-engineered) way to replace this.

    It's built into some RDBMS. SQL Server has READPAST [1, 2], so you can do:

        BEGIN TRANSACTION;
        WITH NextItem AS (
            SELECT TOP (1) * FROM QueueTable WITH (ROWLOCK, READPAST)
            ORDER BY QueueId
        )
        DELETE FROM NextItem OUTPUT deleted.*;
        -- Do your work
        COMMIT TRANSACTION;
    
    And if your process dies midway through, the transaction is rolled back and the row is immediately visible to another worker.

    [1] https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-tra... READPAST is primarily used to reduce locking contention when implementing a work queue that uses a SQL Server table. A queue reader that uses READPAST skips past queue entries locked by other transactions to the next available queue entry, without having to wait until the other transactions release their locks.

    [2] https://docs.microsoft.com/en-us/sql/t-sql/queries/output-cl...

  • Bogdanp 8 years ago

    Unless I'm misunderstanding, it sounds like your use case fits RabbitMQ perfectly. Pseudocode:

        while True:
          message = queue.consume()
          process(message)
          message.ack()
    
    
    RabbitMQ will automatically put the message back on the queue when the consumer that pulled it disconnects w/o acknowledging it first. Alternatively, you could explicitly reject messages:

        while True:
          message = queue.consume()
          try:
            process(message)
          except Exception:
            message.reject()
            raise
    
    If you're using Python you might want to check out Dramatiq[1]

    [1]: https://dramatiq.io

    • adrinavarro 8 years ago

      Not exactly. Tasks can last for weeks. A task can run fine for several days and then die, and it needs to be requeued until it is explicitly finished. In fact, we do use RabbitMQ to emit "status updates".

      With RabbitMQ, I'd need to ack right away; otherwise it would re-send the message again after a while.

      • spyspy 8 years ago

        Very curious what type of work you're doing where atomic tasks can run for weeks at a time.

        • softawre 8 years ago

          We have something like this, not weeks but days. Linear programs, integer math, using IBM Cplex to schedule "people" to do "things" at the ideal time.

        • adrinavarro 8 years ago

          In our case, data migration. Resumable, sometimes dies because of various reasons (or just infra rescaling happening), but takes very long to run.

      • Bogdanp 8 years ago

        > With RabbitMQ, I'd need to ack right away; otherwise it would re-send the message again after a while.

        This is not exactly the case. RMQ will only re-enqueue the message when the consumer disconnects. If you're able to keep the consumer connection alive (this is easy to do with the heartbeat mechanism) for the processing duration, even if it takes a long time, RMQ should handle it fine. That said, if the connection between your consumers and RMQ is flaky, you'll have to make your tasks re-entrant.

        • adrinavarro 8 years ago

          Good point. I'd rather have something explicit going on. In other places where we do use RabbitMQ (for short-lived, non-critical tasks), the listening processes log reconnects every once in a while, even with heartbeat.

      • jacques_chester 8 years ago

        I've seen folk talk about progress messages for long-running tasks and jobs. If you have checkpointing then they play well together.

  • exhilaration 8 years ago

    We use Azure Storage Queues for this: https://azure.microsoft.com/en-us/services/storage/queues/ It's all done via REST calls but we use their C# API so it's all transparent to us.

    An item is queued with a specific visibility timeout (it should take 10 seconds to process so we give it 10 minutes), a job picks up that item and it disappears from the queue for 10 minutes. If the job succeeds, the job explicitly deletes that queue item. If the job fails, the item reappears after 10 minutes for another instance of the job to pick up.

    We've been using it since April 2014. There's more information built into the queue item, like the number of times dequeued and original queued timestamp, so we can send trigger alerts if items are getting old.
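    The visibility-timeout behaviour described above can be modelled with a small in-memory sketch (this is not the Azure SDK; the class and method names are illustrative):

```python
import time

class VisibilityQueue:
    """Toy model of a queue with visibility timeouts and dequeue counts."""

    def __init__(self):
        self._items = []  # dicts: body, invisible_until, dequeue_count

    def enqueue(self, body):
        self._items.append({"body": body, "invisible_until": 0.0,
                            "dequeue_count": 0})

    def dequeue(self, visibility_timeout):
        now = time.time()
        for item in self._items:
            if item["invisible_until"] <= now:
                # Hide the item instead of removing it; if the worker
                # never deletes it, it reappears after the timeout.
                item["invisible_until"] = now + visibility_timeout
                item["dequeue_count"] += 1
                return item
        return None

    def delete(self, item):
        # Explicit delete on success; a crashed worker never calls this,
        # so the item becomes visible again for another worker.
        self._items.remove(item)
```

    A worker would dequeue with, say, a 600-second timeout, do the work, then delete; the `dequeue_count` field mirrors the "number of times dequeued" metadata used for alerting on old items.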

    • ddorian43 8 years ago

      Jobs expire after 1 week, though.

      • jsudhams 8 years ago

        Like others mentioned, you can use SQL transactions, but in either case I used Aerospike wherever I needed ActiveMQ, RabbitMQ or ApolloMQ, and it works like a champ with no connection/disconnection issues. It is super fast compared to a DB, though slower than the MQ stuff.

        Note: I did not like messaging because of these connection timeouts, and it was also very confusing to get the last message, etc. Not a pro programmer; I use VB.NET with .NET Core.

  • manigandham 8 years ago

    Messaging systems with individual message acknowledgements, redelivery, dead-letter features are common in many enterprise systems. If you just need ack/retry ability, then Google's Cloud PubSub is about as cheap, fast and scalable as you can get. Otherwise look at Azure's Service Bus for more complicated routing.

  • cmsd2 8 years ago

    If you're in AWS land, try a step function triggered from SNS. Executions can last up to a year. To shell out to processes that actually perform the work either invoke a lambda or start an ECS task.

    • RHSman2 8 years ago

      Can you go a bit deeper into this please?

      • csears 8 years ago

        Not the GP author, but he's talking about using 4 different AWS services in a particular architectural pattern. SNS topics give you a triggering mechanism to start the long running task. Step Functions give you light-weight flow control and state management, but don't directly perform any interesting work. Instead, the step function steps can invoke Lambda functions or jobs in Elastic Container Service to do the actual work. When they finish, the step function can move on to the next step or retry things as needed.
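        The pattern described above might be expressed as an Amazon States Language definition, built here as a Python dict for illustration; the Lambda ARN and the retry numbers are placeholders, not real resources:

```python
import json

# Hedged sketch of an Amazon States Language state machine: the Step
# Function handles flow control, retries and state, while the actual
# work runs in a Lambda function (or an ECS task integration instead).
state_machine = {
    "Comment": "Flow control lives here; the work happens in Lambda/ECS",
    "StartAt": "DoWork",
    "States": {
        "DoWork": {
            "Type": "Task",
            # Placeholder ARN for illustration only.
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:do-work",
            "Retry": [{
                "ErrorEquals": ["States.ALL"],
                "IntervalSeconds": 60,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "End": True,
        }
    },
}

# Serialize to the JSON that Step Functions actually consumes.
definition = json.dumps(state_machine, indent=2)
```

        An SNS subscription (or EventBridge rule) would then start an execution of this machine, which can run for up to a year.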

eddd 8 years ago

These buzzword product descriptions are terrible. Documentation should first say what this piece of code does and when I should use it. The actual rationale for this project is buried in the middle of the documentation, in a sparse three sentences.

> The first is to be able to track each message individually (i.e. not using a single commit offset) to make suitable for asynchronous tasks.

> The second is the ability to schedule messages to be consumed in the future. This make it suitable for retries.

That's a start, but I'd love to see how you solved the problem. How does your solution compare to other similar products? Why should I care about the things you mention? It's not a matter of the product's maturity: one should start by defining exactly and precisely what is being solved here.

When solving a technical problem you always have to tailor your solution to a specific set of requirements and it is never like: "distributed, horizontally scalable, persistent, time sorted message queue." So please stop using such buzzwords.

  • aquadrop 8 years ago

    Those aren't buzzwords when used right. They're dense, searchable technical terms and are perfect for a title. Do you think a title should consist of two paragraphs?

    • falsedan 8 years ago

      I think that product pages should clearly communicate what the product is to the intended audience. These are 100% buzzwords, so the message I get is, "you will need to spend more time with this project to see whether it is worth looking into".

      The most convincing technologies have clear, simple value that's immediately apparent (like zeromq not needing a central coordinator/exchange), or some real-world use-cases which have been improved by using this technology (actual, already-happened use-cases, not hypothetical: I'm more convinced by words from people who've integrated with the product rather than the authors + their innate bias).

      I would have put the title as, 'open-source proof-of-concept messaging queue (Go, single author)'

    • yeukhon 8 years ago

      Searchable terms are fine, but a product pitch is what people want. How many times have you searched for a concept or a tool only to find that the official documentation/site offers a vague introduction? Or it takes some effort to find a meaningful description because the site has some weird layout, so you end up searching the Internet for alternate explanations from other sites?

      I have had this happen too many times, exactly what the OP said, even in areas I am strong in... often I have little or no clue what to expect without doing additional research.

  • manigandham 8 years ago

    Those words are very descriptive and I got the entire purpose from just the title. The only other detail was individual message acks.

    What do you feel is missing or miscommunicated?

    • eddd 8 years ago

      I'd say they are too descriptive. I don't believe this project provides such universal solutions, and I expect the author to clearly state what problem exactly it solves. Ideally, as a picture/graph.

  • j_s 8 years ago

    > How your solution compares to other similar products?

    This would be a clear violation of the unwritten open source policy to never mention similar/competing projects!

  • pas 8 years ago

    So, is it a job queue or a message queue then?

    • notheguyouthink 8 years ago

      Pardon the ignorant question, but what is the difference, in brief? I'm clearly not familiar with queue software. I imagine the difference is that a job/task queue has mechanisms to track the state of a job (e.g. completed, etc.), whereas a message queue does not track state?

      • pas 8 years ago

        A message queue can be extremely simple, like Redis pub-sub. Fire and forget, if you missed something, that's it. No guarantees.

        Kafka is a bit more durable; you can build exactly-once delivery with it (but only in order), though there's no selective ACK. RabbitMQ (AMQP in general) has selective ACK, so it has a state DB and needs compacting.

        A task/job queue is of course pretty meaningless without a task/job executor. So there, every message has some metadata, such as when to execute it, how many times to retry, a timeout, and where to execute (if you have labels or otherwise tagged workers/executors; you can think of this as channels, of course).

        The distinction is not very clear, just as you implied. A message queue with selective ACK can be used to build a job queue, but then you need to create a small library to serialize and deserialize your arguments, to register workers/tasks, and you need an event loop that listens for messages, unpacks each one, loads the task, runs it, and ACKs it.

        In advanced (complicated/complex) job queues you can report progress (which is handy for checkpointing - but that's very much just a DB that the running task uses to persist some data, so almost orthogonal to the task scheduling function).
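        The wrapping described above (a small library that serializes arguments, registers tasks, and runs an event loop that unpacks, executes, and ACKs) might be sketched like this; the `ToyQueue` is a hypothetical stand-in for a real message-queue client with per-message acks:

```python
import json

class ToyQueue:
    """Toy stand-in for a message queue with selective (per-message) acks."""
    def __init__(self):
        self.messages, self.acked = [], []
    def publish(self, body):
        self.messages.append(body)
    def consume(self):
        return self.messages.pop(0) if self.messages else None
    def ack(self, body):
        self.acked.append(body)

TASKS = {}  # task registry: name -> function

def task(fn):
    """Register a function so workers can look it up by name."""
    TASKS[fn.__name__] = fn
    return fn

def enqueue(q, name, *args):
    # Serialize the task name and arguments into a message body.
    q.publish(json.dumps({"task": name, "args": args}))

def run_once(q):
    """One iteration of the event loop: receive, unpack, dispatch, ack."""
    body = q.consume()
    if body is None:
        return None
    msg = json.loads(body)
    result = TASKS[msg["task"]](*msg["args"])
    q.ack(body)  # only acknowledge after the task has succeeded
    return result

@task
def add(a, b):
    return a + b
```

        If the worker dies before `q.ack(...)`, a real broker with selective ACK would redeliver just that message, which is exactly the job-queue behaviour being built on top.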

    • dullgiulio 8 years ago

      The title clearly says message queue. It doesn't seem to automatically do any task management.

      • kachnuv_ocasek 8 years ago

        The tagline clearly says

        > It was developed to support asynchronous tasks and message scheduling which makes it suitable for usage as a task queue.

jwr 8 years ago

Aphyr's Jepsen or it didn't happen :-)

agnivade 8 years ago

Seems similar to faktory ? https://github.com/contribsys/faktory

  • richardknop 8 years ago

    Obligatory plug of my own job queue in Go: https://github.com/RichardKnop/machinery

    • espadrine 8 years ago

      Faktory is CouchDB, Machinery is Redis/Memcache/MongoDB, Sandglass is a custom Raft on top of RocksDB…

      I'd love to see comparative stress tests, Jepsen-like, to assess the ability to survive partitions, corruption, node restart and node loss, to better estimate the probability that a job gets lost.

      • richardknop 8 years ago

        Machinery is basically a Go implementation of celery (popular Python task queue).

        It has two core components: broker (AMQP or Redis supported) and backend (Memcache, Redis or MongoDB or even no backend if you don't care about storing task states).

        I think comparison with Sandglass would not be valid as Machinery is a higher level job/task queue while Sandglass is a lower level library (basically a message queue such as RabbitMQ which would be just a component of Machinery).

        Faktory vs Machinery could be compared, as they are at more or less the same level of abstraction.

      • agnivade 8 years ago

        Faktory is based on RocksDB, actually.

nicois 8 years ago

I would think the imminent Redis streams data type would provide this better. It is battle-tested and allows great customisation to a range of use cases.

dvdplm 8 years ago

What does Sandglass use for persistence? Is it using something like Rocksdb under the hood or is the WAL and VL "homegrown"?

tmp123tmp123 8 years ago

Node.js server? A segfault will cause data loss.

  • erulabs 8 years ago

    Er? This appears to be all Go, but regardless, I dunno how a segfault means more or less data loss for Node than for anything else?

lloydatkinson 8 years ago

Why does everyone keep alluding to this replacing kafka? If anything this is more similar to RabbitMQ.

velodrome 8 years ago

How does this compare with NATS?

ninjamayo 8 years ago

Looking good. The question is how it's going to convince people to move over from Kafka.

  • buro9 8 years ago

    > Question is how is it going to convince people to move over from Kafka

    That's a bit unfair given that the project didn't mention Kafka in its README, and different products have different suitability at different scales. This could simply be a "if your traffic is low and you need this functionality, this will suffice" thing, or just an academic interest in producing an ordered distributed message queue.

    But as you're asking: proven ability to consume ~5+ million messages per second with similar or smaller hardware requirements than Kafka, and high reliability. A well-documented set of edge cases/compromises where applicable, and a high degree of observability. Well-understood operational requirements and SRE runbooks (or just a lot of GitHub issues that go into how to handle various scenarios). An active community of people to assist, and more than one committer.

    That's the "off the top of my head" thing. YMMV.

    • moreless 8 years ago

      I have no opinion about either of these projects, but this caught my eye:

      > A well-documented set of edge cases/compromises where applicable, and a high degree of observability. Well-understood operational requirements and SRE runbooks (or just a lot of GitHub issues that go into how to handle various scenarios). An active community of people to assist, and more than one committer.

      Are you talking about Sandglass or Kafka? Because Sandglass seems to be 3 months old, has 1 contributor, and is featured here as a "Show HN"... so it probably isn't as mature a solution as Kafka. Or am I missing something?

  • sz4kerto 8 years ago

    Kafka is not a task queue. Well, it is, but it's also an extremely scalable, distributed, schemaless, persistent data store; that trait probably differentiates it from other fast commit logs.

  • qaq 8 years ago

    by not having to run Zookeeper :)?

yassinebenyahia 8 years ago

This is awesome. Is it meant to replace Kafka?

  • ddorian43 8 years ago

    No:

    > The first is to be able to track each message individually (i.e. not using a single commit offset) to make suitable for asynchronous tasks.

    > The second is the ability to schedule messages to be consumed in the future. This make it suitable for retries.
