IceRPC: RPC framework for the QUIC era (zeroc.com)
Slice looks like a saner alternative to Protobuf. Proper nullability support alone is a big enough reason to consider this instead of Protobuf.
I would suggest the ability to also generate a "builder" variant of structs, where every field is optional. That would allow cases where one service only partially builds a response and another service completes it. It would also help solve the issues explained here:
https://capnproto.org/faq.html#how-do-i-make-a-field-require...
DUs would also be very useful. Have you considered it?
Overall, it looks very promising. I would like to try it, but my stack's currently C# & Dart. I see Rust is already on the roadmap.
Hey, hi, hello, I’m the Slice guy at ZeroC, and I’m glad to see the appreciation! Honestly, a saner alternative to Protobuf is kind of what we’re going for : vP
> DUs would also be very useful. Have you considered it?
Well, if DU is ‘discriminated union’ and not ‘diagnosis undetermined’, we’re literally releasing support for them tomorrow! Although we call them “enums with fields” because everyone needs their own names for things you know… But I totally agree, as someone who’s become pretty infatuated with Rust, I don’t know how I lived without them before!
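To give you a taste, the definitions will look something along these lines (syntax approximate, give or take what actually ships tomorrow):

    // An enum where each enumerator can carry its own fields,
    // i.e. a discriminated union.
    enum Shape {
        Circle(radius: float64)
        Rectangle(width: float64, height: float64)
    }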
> I would suggest the ability to also generate a "builder" variant of structs, where every field is optional
One of the big differences between Protobuf and Slice is that fields in Slice are `required` by default. You can still opt in to Protobuf-like behavior by marking your fields with a modifier like `tag(2)`, but… it’s opt-in. This would also give you a struct like you described, where you could ‘build’ it piece by piece.
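Rough sketch (field names made up, and the exact syntax may differ slightly from the docs):

    struct Person {
        name: string            // required (untagged) field
        tag(1) email: string?   // optional tagged field, Protobuf-style
    }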
I’ve read similar posts to the one you linked about how dangerous `required` can be, and I understand where they’re coming from. Required fields definitely reduce your flexibility in the future. But as someone who’s been using IDLs for a while, I personally never found myself needing it tbh. And making it opt-in goes a good way towards two of the design goals of Slice:
- It feels natural to anyone who programs, and in most languages (especially the cool new ones), fields are ‘required’ by default.
- It’s more explicit. Unless you mark your type nullable, it’s non-null. Unless you tag it with an id, it’s required, etc.
Have your experiences differed, where you’re frequently removing fields from existing types? Just an honest question from someone trying to fine-tune their language!
Hi! Thanks for the in-depth explanation.
I overlooked that all tagged fields are optional. Builders would only make sense if tagged fields could be required. Also, I'm not sure whether they would be possible for structs with non-tagged fields, or whether they would even be useful in that case. So, you can ignore that idea.
The way I see it, tagged fields give you the flexibility at the cost of a slight size/performance overhead. And they also require additional manual labor for validation.
Non-tagged fields are inflexible (changes are not backward/forward compatible), but they are simpler to work with and more performant. Changes require API versioning.
For simple types it often doesn't make sense to use tagged fields. For example, for types like GeoCoordinates(Latitude, Longitude), tagged optional fields only add unnecessary complexity, without any real benefits. Another Protobuf limitation I dislike.
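For instance, something like this (sketching in Slice syntax; if I read the docs right, a compact struct can't even have tagged fields):

    // A fixed pair of required fields; tags would add only overhead here.
    compact struct GeoCoordinates {
        latitude: float64
        longitude: float64
    }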
> Have your experiences differed, where you’re frequently removing fields from existing types?
Not really (but I don't have much experience with large systems). I think that adding new fields is the most common change by far. Adding a new required field can be "dangerous", as an updated server won't be able to read old messages anymore (unless the API is versioned, of course).
I think the ability to have either tagged or untagged fields in Slice is great.
> we’re literally releasing support for them tomorrow! Although we call them “enums with fields”
That's great! I don't care what they are called as long as they are available :)
Regarding varints, how does your QUIC-like implementation compare to something like this[0]? Any thoughts on pros and cons?
Yeah, I pretty much completely agree with you on tags!
While required-by-default is more natural IMO (and more performant, etc.), it does require a little more foresight from the programmer, for sure. But honestly, for those who want to play it safe (or not think about it), you can just mark everything tagged and pretend you're still using Protobuf : vP Although it would hurt me a little bit on the inside...
The way we typically see definitions evolving is you'll create a struct with required fields first. Then if you need to augment it later on, you can add tagged fields, in a fully compatible manner. If one day you're in the rarer case where you want to remove a required field, you still can, of course; you just need to be careful and keep your endpoints in sync!
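As a made-up sketch of that evolution:

    // Version 1: required fields only.
    struct Item {
        id: int32
        name: string
    }

    // Version 2: augmented later with a tagged field; old and new
    // peers stay wire-compatible.
    struct Item {
        id: int32
        name: string
        tag(1) description: string?
    }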
> Regarding varints, how does your QUIC-like implementation compare to something like this
So, the most obvious difference is in the supported ranges. An advantage of their varint is that it can hold a full 64-bit value, whereas ours can only go to 62 bits. We make this clear in the type names (`varint62` and `varuint62`) to avoid surprises, but it's a limitation.
I bet this is negligible in practice though; 62 bits is still a LOT of range. It's the difference between [0 .. 18,446,744,073,709,551,615] and [0 .. 4,611,686,018,427,387,903]. Odds are 4 quintillion is enough for what you're doing.
More on the nitty-gritty side, I think their encoding is actually pretty neat! The biggest difference is in granularity. They have 9 different step sizes, whereas we only have 4. So, on average, they'll achieve a better 'compression ratio' than us, but we're basically tied up until 14 bits of precision. Then it alternates: for 15~21 bits theirs is more efficient, for 29~30 ours is, for 31~49 theirs is, then after 50 ours is. So like I said, on average they edge out the QUIC specification's encoding, due to the more granular sizing.
Then on the performance side, I expect ours is slightly more efficient. But without a benchmark that's just pure hearsay! Our size is encoded as `2^(the 2 least significant bits)`. Theirs is encoded by the number of leading zeros in binary. These are both a single instruction on any modern architecture, but counting the leading zeros is just a more complex operation.
But this just lets me plug another cool feature in Slice: `custom` types[1] (shameless, I know :^)). These let you hook your own types (with their own custom encodings) into all our machinery. In Rust, you just implement a trait; in C# you write an extension function.
So, if you're in a really bandwidth-constrained environment, and know that `vint64` will be better than our built-in types... you can totally use it with Slice!
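For illustration (the type name here is made up), the Slice side could be as small as:

    // Hypothetical: a 64-bit prefix varint exposed as a custom type,
    // mapped to a plain C# long; the actual encode/decode logic is
    // supplied by your own code.
    [cs::type("long")]
    custom Vint64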
[1] https://docs.icerpc.dev/slice2/language-guide/custom-types
> The way we typically see definitions evolving is you'll create a struct with required fields first. Then if you need to augment it later on, you can add tagged fields, in a fully compatible manner.
I think this is spot on. Each data container usually has "core" fields that are unlikely to change. Making the core fields required and the others tagged gives you simplicity for the former and flexibility for the latter.
I really appreciate the flexibility of the Slice IDL. FlatBuffers has table vs struct types, but that just doesn't offer the same fine-grained control over tagged vs untagged fields.
> But this just lets me plug another cool feature in Slice: `custom` types[1] (shameless, I know :^)). These let you hook your own types (with their own custom encodings) into all our machinery. In Rust, you just implement a trait; in C# you write an extension function.
Can I map multiple custom types to the same C# type? :)
    [cs::type("NodaTime.Instant")] custom UnixEpochMinutes
    [cs::type("NodaTime.Instant")] custom UnixEpochMilliseconds

or

    [cs::type("long[]")] custom Int64ArrayDelta // delta encoding
    [cs::type("long[]")] custom Int64ArrayDoubleDelta // double delta encoding

That would be really useful.
> Can I map multiple custom types to the same C# type? :)
Yes, that is totally fine. On the C# side both would be represented as `long[]`; the generated code would use the appropriate encoding for each custom type, as provided by the user's methods.
That's so awesome! This looks like the most well-designed IDL-based binary serialization format I have ever seen. Well done!
I guess brownie points for using .NET; it's not often that we see this kind of framework pick .NET as its implementation infrastructure.
ZeroC developer here. We picked .NET as our first language for two main reasons. Firstly, it is an excellent language for rapid iteration; the design and implementation of IceRPC evolved significantly throughout its development, so having a more flexible language greatly sped up the refactoring and development process. Secondly, it has mature async-await support, which is a natural fit for an RPC framework. The next language we are adding support for is Rust!
All the best on your efforts.
booooo .net
Where's Python? Where's browser JavaScript that's limited to HTTP/1?