Encrypted messengers: Riot, not Signal, is the future (titus-stahl.de)

This topic has been beaten to death on HN over the last year (other people can provide links to discussions, with Moxie participating).
I think something worth keeping in mind is that almost everyone who works in secure messaging agrees on one thing: that electronic mail is not the future of secure communication.
There's no fundamental reason why that should be the case. The store-and-forward model used by SMTP could be made to work for asynchronous secure group messaging. You can get forward and future security with it. It can interoperate with existing email addresses. All of that can be made to work.
But it is the case. Email won't be a secure group communication system. The reason for that is that email is federated and thus permanently mired in the lowest common denominator of mainstream email clients.
I think reasonable people can disagree about whether it's tractable to create a federated secure group messaging system with what we know right now. But I do not think it's reasonable to suggest that the concern (federation = lowest common denominator security) is invalid. And that's what this piece does.
"lowest common denominator" security is not necessarily that bad, as long as that denominator ends up being a relatively high value and there's a way to ratchet it upwards overtime (by excommunicating obsolete/broken implementations).
My assumption with Matrix is that if some fatal flaw is found in the Olm/Megolm E2E implementations, we'll work with the major clients/bots/etc to implement a (if necessary) incompatible fix... and fork the community. Folks stuck on old insecure conversations will be isolated and shamed into upgrading - much like insecure HTTPS algorithms get killed off by pressure from browser vendors.
Yes, this process takes longer than a centralised solution which can flip the switch serverside and then worry only about upgrading all the apps, but in exchange you get freedom, as well as some level of security.
That's the problem: there isn't a way to ratchet it up over time. It stays anchored at the lowest common denominator. It's tough to find counterexamples; see, for instance, the waking nightmare that is the XMPP ecosystem.
Obviously, email is the best case in point. There's been a decade and a half of concerted effort to get some baseline level of crypto security for email, and all of it has run aground on the installed base of dumb email clients.
To believe we should be cavalier about the risk of this happening again is to assert that we now know enough, not only about how to design a secure group messaging system but also about how to safely implement one, to justify freezing the current state of the art in amber by standardizing it.
Personally, I can resolve this for myself quickly. I log into my Linux server, type "man 4 random", see that we can't even properly standardize the secure way to generate a random number, and quickly conclude that I'd rather use a single system that Moxie and Trevor are actively designing and evolving than adopt the consensus protocol of a menagerie of different unrelated messaging projects.
Firstly, I completely agree that it's easier to flip a centralised switch, make an incompatible change to fix a security issue, and force all their clients to upgrade to continue working.
However, I don't think that decentralised solutions have to end up in the worst-case scenario we see with SMTP (or even random(4)), or the medium-case nightmare of (say) phasing out SHA-1 TLS certs. There's a spectrum of nightmare here, and you can engineer both the tech and the governance to enforce a culture of security awareness and aggressive upgrading until things move fast enough. In practice, this means:
* Set a cultural precedent that obsolete clients are a bug, not a feature, and should be killed off or upgraded.
* Ensure that the most popular clients are actively supported to implement security features. If necessary, the standards body itself should get off its ass to do the work in order to protect the integrity of the ecosystem.
* Set a cultural norm of shaming users and developers who don't upgrade for security features - the same societal pressure that generally stops people wandering around in public if they're covered with chickenpox.
* Enforce it in the largest public communities of the ecosystem - in Matrix, this would be #matrix:matrix.org and a bunch of other huge >5000 user rooms, where if one day we had to make an incompatible protocol change, it'd suddenly be abundantly clear that folks on old clients would be excommunicated until they upgraded. We believe that users would rapidly find a way to switch client or encourage their developers to upgrade if it meant they were no longer able to participate in the biggest rooms!
* It's obvious, but: layer the protocol so that functions can be swapped out easily, just as Matrix allows arbitrary future E2E protocols in addition to today's m.megolm.v1.aes-sha2.
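To make that last point concrete: in Matrix, the encryption algorithm for a room is declared in a state event, so a future protocol can be rolled out per room rather than baked into the wire format. A sketch of such an event (the algorithm name is the real current one; the surrounding fields are a minimal illustration of the event shape):

```
{
  "type": "m.room.encryption",
  "state_key": "",
  "content": {
    "algorithm": "m.megolm.v1.aes-sha2"
  }
}
```

A client that doesn't recognize the algorithm string simply can't participate in the encrypted room, which is exactly the upgrade pressure described above.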
SMTP's woes stem from innocently trying for backwards compatibility at all costs, from not layering the protocol, and from having no governance model or culture that shamed ancient or obnoxious MUAs into being abandoned or fixed.
The slowness of the SHA-1 phase-out in HTTPS is again a case of the community being conservative about backwards compatibility, plus a rather cautious attitude from the browser vendors in deploying the upgrade for fear of upsetting everyone's expectation that The Web Never Breaks. Again, with more of a willingness to accept that sometimes there's a security disaster and everyone has to upgrade (assuming the software mechanisms to upgrade are in place and you haven't baked things into hardware, etc.), perhaps people would move faster.
So, with Matrix, we're trying to instil that attitude from the outset, whilst ensuring that the tech can support it. Time will tell whether it will work :) Better to try, though, than to give up on federation and decentralisation entirely: privacy has no value without freedom.
"non-federated" and "secure" in the same sentence is a joke. Signal's other problem is Google Play Services which has absolutely no place in a supposedly secure system.
Federation concerns availability, not security. "unplug the ethernet and write plaintext to /dev/null" is extraordinarily secure and 100% decentralized, though badly unavailable.
It can also affect security depending on what jurisdiction the servers fall under. Federation means that while it may be illegal to run the service in, say, China, it can be run elsewhere without those concerns. This is becoming more apparent with the widespread use of National Security Letters.
Sorry, am I missing something? It's my understanding that Signal is E2E encrypted. All an NSL would get you is ciphertext and metadata.
The scenario I'm imagining is that Google and OWS receive NSLs requiring them to push a modified APK that could do nefarious things.
This is a common misconception: NSLs are a legal tool that can be used to extract certain types of information (such as subscriber information and maybe a little bit of transactional information) that a service provider already has stored on their servers [0]. However, they cannot be used to force a service provider to write and deploy code.
[0] NSLs are not magic - https://www.youtube.com/watch?v=YN_qVqgRlx4&t=20m16s
He mentions "technical assistance orders" but doesn't really elaborate any more on them. I'm having a difficult time finding any information on these orders, does anyone else have information on the capability of these orders?
Replying to my own comment, as I found some more information in a Black Hat talk regarding technical assistance orders:
I was wondering about the extent of NSLs in that regard. Thanks for clarifying.
Google can't push a new Signal APK; it's signed by OWS, not Google.
3rd parties can download the Signal source and compile it. Not sure if there's enough information available to produce a bit-identical (and thus verifiable) binary.
I guess an NSL might compel OWS to push a binary specifically for a targeted user. If that's in your threat model you definitely need to take additional steps.
> I guess an NSL might compel OWS to push a binary specifically for a targeted user.
To my (admittedly fairly limited) knowledge, that's something the courts have yet to rule on. They can definitely compel you to hand over any data you store about your users (and force you to keep quiet about it), but whether they could force you to develop a backdoor (and ship it to someone) remains to be seen. That's basically what the FBI vs. Apple case was about, which the FBI sadly pulled before the courts got to rule on it.
So Android checks the APK is signed by the same publisher on update? What about for new users? Nothing stops Google from just changing which package is on Play Store, right? Where does signature validation come in, and how would a user tell?
> When the system is installing an update to an app, it compares the certificate(s) in the new version with those in the existing version. The system allows the update if the certificates match. If you sign the new version with a different certificate, you must assign a different package name to the app—in this case, the user installs the new version as a completely new app.
https://developer.android.com/studio/publish/app-signing.htm...
If you want to install an update that was signed with a different private key, the app would need to be uninstalled first, which would also delete any sensitive data in private app storage.
This is enforced at the platform framework level, from what I loosely remember of scanning the AOSP source code.
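The rule quoted from the Android docs can be modeled in a few lines. This is a toy sketch of the policy, not Android's actual code, and the certificate names are placeholders:

```python
def update_allowed(installed_certs, update_certs):
    """Model of Android's update rule: an update is accepted only if it
    is signed with the same certificate(s) as the installed version."""
    return set(installed_certs) == set(update_certs)

# An update signed by the original publisher (OWS) is accepted...
assert update_allowed({"ows_cert"}, {"ows_cert"})
# ...while one re-signed by anyone else (e.g. a coerced store) is
# rejected, and could only land as a new package after an uninstall.
assert not update_allowed({"ows_cert"}, {"google_cert"})
```

This is why a coerced update to existing installs would have to come from the holder of the original signing key, not merely from whoever controls the distribution channel.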
Yes, Google could hijack packages sent to first-time downloaders. That's the usual downside of trust on first use: if the initial download isn't trustworthy, the whole verification scheme falls apart. It would be better if Android had the APK equivalent of Certificate Transparency. That, and if Google Play made all developer-uploaded APK builds available to users, for awareness.
Can I just chime in again and say, if your threat model includes an adversary who could compromise the Google Play Store deployment process, then you should be comfortable with validating the SHA hash on your APK binaries.
Android is pretty open about letting you sideload and run binaries, which you can do easily as a non-rooted end user. You can personally GitHub pull & compile the Signal app and you're good to go (w/r/t compromised software download).
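Validating a downloaded APK against a published hash is straightforward. A minimal sketch (the file path and the expected digest are placeholders you'd fill in from the developer's published checksum):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large APKs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "<digest published by the developer>"
# assert sha256_of("Signal.apk") == expected
```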
Google/Apple could also receive an NSL ordering them to write and install a keylogger on your specific device in their next OS update. There's really not much you can do about that.
If Signal were federated, anyone could write a Signal client for OpenBSD or Alpine Linux.
This does somewhat break Signal's security model.
Why? Google doesn't know who you are chatting with, or even the size of the messages you are sending. Google just sends a "wake up and check for messages" signal. That's it.
Keep reading the thread.
Your argument on this thread is incoherent. You begin by suggesting that GCM is problematic because it's a component of a larger platform library that gives Google control of Android phones. When it's pointed out that GCM push can be supported without that platform library, your argument shifts: it's the messages themselves that are dangerous. When it's pointed out to you that the messages are empty, you invent a scenario in which GCM push messages enable a kind of traffic analysis that on-the-wire traffic analysis can't already accomplish.
Were this my argument, rather than pointing me to a thread where my points were continually and reliably refuted, I'd take this opportunity to instead restate my argument clearly.
I think you're misinterpreting my arguments because it suits your preconceptions. GCM is not just client code. I clarified when someone said the client could be replaced.
Same issue, and here's Moxie himself on it:
I have read this; I don't generally comment on issues I'm not informed on.
Moxie is simply wrong on many of these points, and has been for a while and has had this repeatedly pointed out with no change in opinion. I would rehash this here but you'd be better off simply reading the linked thread and looking at other rebuttals. In particular I remember a 500+ comment GitHub thread on the Play Services issue where Moxie was repeatedly dismissive and rude to those who take issue with the glaring security problems in Signal.
"The glaring security problems in Signal"?
Signal's use of GCM has also been beaten to death: it's a platform issue that has no impact on security (but does make it harder to deploy Signal on nonstandard Android platforms).
That is not true. It's (1) a remotely exploitable rootkit and (2) a tracking system that's (3) operated by a multi-billion dollar company whose entire business model is invading your privacy.
Google Play Services is not the standard Android platform. AOSP is the standard Android platform.
You're conflating GCM and the wider Google Play Services that is installed on most Android devices. As someone who actually uses AOSP without Google, it weakens your argument when you conflate the two.
GCM depends on Play Services, so I'm really not.
GCM is merely one minor component. The existence of the microg project proves that you can implement one without the other.
MicroG does not implement GCM.
MicroG does implement the GCM client! I've been getting push notifications through that thing for several months.
No, it implements one half of it.
Signal still has to be linked with a proprietary binary, which contains lots of tracking code.
Through Google, yes, but you can't bring along your own push notification delivery service.
Your argument has become circular, because what Signal uses push notifications for isn't security-relevant: the messages are empty and used only as a wakeup.
They still contain information, though. They say when you're talking on Signal. Matched with someone else's messages at about the right frequency to indicate a conversation, they give a pretty decent idea of who you're talking to.
They give the same information that TCP/IP traffic analysis does.
Perhaps, but we can at least start to explore solutions to that if we can work on the server too.
You're ping-ponging all over the place. Which is it? "Glaring security problems", or impediments to fully exploring the solutions space?
If browsers are able to deprecate old encryption layers, I don't see a reason why Matrix clients wouldn't be able to do the same. And as with browsers, if the clients or servers don't get upgraded then at some point they will stop working.
That is only possible at all because web browsers are an oligopoly. There are only four organisations whose opinions matter, so they can coordinate to make breaking changes. (Even so, SHA1 deprecation is happening 1000x slower than, say, Whatsapp's rollout of E2E encryption.)
This level of oligopoly would not be tolerable to those who want to federate Signal-like apps. The whole point is to make it practical to use a small operator that's not such an easy target for one government's intervention (eg an NSL). But that ecosystem looks much more like email than web browsing - diverse, but fragmented, and impossible to upgrade in this way.
In the end it's a governance problem. If you create an ecosystem which sets a precedent of taking security seriously and thoroughly excommunicating insecure clients, netsplitting them out of the mainstream, then I think you'd see a lot more interest in client vendors and users routing around obsolescence and upgrading to whatever the current best practices are. This is particularly nice if the protocol is designed to let you enforce this, by ratcheting up to new versions.
Email never had this, and it shows. SHA-1 in HTTPS is an intermediate example; browser and server vendors have been petrified to break legacy systems and have generally not given security updates much priority. In Matrix, we hope to avoid this by setting a precedent that if Olm/Megolm is found broken tomorrow, we'd work with the major client authors to upgrade them, patching their clients ourselves if we have to, or providing a localhost shim or whatever, and then take the biggest community anchor points (e.g. #matrix:matrix.org) and throw the switch to the new protocol, making it abundantly clear that folks on old clients have been left out in the cold for security reasons and need to get upgraded immediately. If you're a big enterprise with a private deployment who doesn't want to upgrade rapidly, that's fine. But the societal pressure to get with the program and upgrade will be enormous. Let's see how well that works though - we haven't really had to make any backwards-incompatible changes yet since we started in Sep 2014.
(p.s. hi! :D)
A reasonable upgrade path is to do "room versioning". Each server would have a maximum supported room version, and publish it to each room they participate in, and every room has a version. When every server in the room agrees on a new version, they can publish a message to update the room version, and start talking over the new protocol. Older servers then can't join the room unless they agree to the room's new protocol.
Clients can then warn when they're in rooms with older versions than the latest supported, and since nobody wants the people they're talking to to receive scary warnings about insecurity, they'll upgrade.
And, of course, we can do similar with client versioning.
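The negotiation described above can be sketched in a few lines. This is a hypothetical model, not the actual Matrix implementation: each server advertises the maximum room version it supports, and the room can only move to a version every participating server supports, never downgrading below its current one:

```python
def negotiated_version(current, max_versions):
    """current: the room's current version.
    max_versions: the maximum room version each participating
    server supports. The room can upgrade to the highest version
    supported by all of them, but never downgrades."""
    candidate = min(max_versions)
    return max(current, candidate)

# Three servers support up to versions 6, 5 and 6: the room can
# upgrade from 4 to 5, and the server stuck on 5 holds everyone back...
assert negotiated_version(4, [6, 5, 6]) == 5
# ...but once it upgrades (or is excluded), the room can move to 6.
assert negotiated_version(5, [6, 6]) == 6
```

The `min()` is exactly the lowest-common-denominator effect under discussion; the social mechanism (warnings, kicking laggards) is what moves that minimum upward.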
This is remarkably contrary to how people actually use software. What people see is "click this button to make annoying red flashing shit stop so I can do what I want to do".
There's a reason web browsers just don't allow users to easily get past the annoying pages when there is a chance they're being attacked. I see no reason that Matrix clients would be required to allow users to break security without having a persistent banner saying "this room is insecure".
Which they will ignore.
Cool. If they ignore the great big banner which says "do not enter any personal info, bank info, etc etc into this window" and they're attacked, obviously they didn't care much. In the meantime, people who actually understand security can make a reasonable decision.
Not cool.
First, that's why people like Signal: it just works (TM) encryption with no user gotchas.
Second, any communication is only as encrypted/safe as the minimum of the people with access to it. So if someone ignores warnings and enters that chatroom, he or she puts everyone at risk. Because sometimes they really are being MITMed or surveilled by someone/the oppressive government du jour.
The point is that you wouldn't be able to enter a chatroom at a higher version than your server+client supports - how would the old code be able to understand it, after all? You'd be in pre-upgrade chatrooms, which would display the banner for everyone until relevant people upgrade/get kicked, and you could possibly start new chats with people, which would display the banner for all participants, but if you were on version 5 and #megolm:matrix.org was on version 6, you just couldn't join it until you upgraded.
It is especially dishonest of Riot promoters to introduce it to "normal" users at this very moment, because
"Riot’s encryption is not yet fully stable and, more importantly, it is not yet enabled by default in chats (you have to enable it manually). This will be changed in the future, but makes it more likely for users to make mistakes until then."
Users "make mistakes"? By using the defaults? I consider it a mistake to promote it to the users with such defaults. A "secure" product which "doesn't encrypt by default"? And "it's not stable"? What does that mean? The encryption either works or not. "Almost working" is still "not working."
Then please don't write
"An alternative to Signal is Riot." It is not. As far as I understand it just "could once be an alternative."
But based on the responses I've received here to my questions about Riot, it's promising: according to them, I will be able to set up my own network of people (e.g. just my family) with which I'd like to communicate. Yay! (thanks to mxuribe and NoGravitas for the answers)
Right now, in practice, Matrix is "a better IRC". It provides bouncer-like functionality by default, federation across the whole network so you only have one identity vs having to register with each server on which there's a community you want to talk to, file sharing, voice/video chat, proper message formatting, and more.
Encryption currently works on Riot Web, iOS and Android, certain bugs excluded - but it's missing a lot of UX work. (Among other things, you have to manually verify each and every device the people you talk to use, there's no way for them to say "these are all my devices, if you trust me, you trust them" yet. You also lose chat history at present if you switch devices or log out.) If you're able to work around the UX, the underlying protocol is fine and has been audited, with certain tradeoffs discussed in the report.
Thank you. This is exactly what I wanted to read: a clear explanation of what works, and surely not "it's not stable." What are the current encryption-related bugs, that is, what is their worst consequence?
I surely don't have a problem with the manual verification.
The current bugs are basically that occasionally, you can't decrypt a message. Supposedly this has actually been fixed (and wasn't a security issue), but I've seen it once or twice since. And as I say, you lose your chat history if you log out or bring in a new device. This is an important bug to fix, but it requires some UX work.
We are still chasing down the final unknown session ID bugs actually, although many have been fixed. The other big issue is to warn when unverified devices are added to a room. We are working on them all currently.
Just to be clear, the blog post here is not connected to the Matrix.org and Riot teams and is entirely independent of us. We've tried to be crystal clear that E2E is still in beta, as per https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-en.... We are not recommending or introducing it yet to normal users.
I think the intention of Titus' article is to comment on where things are going in future... hence the title: "Why Riot (and not Signal) is the future".
I think you make good points, but I think email works well as a secure group communication system for private organizations, and may improve over time as a federated one.
One important benefit of email is that you're not reliant on any single provider for your communication. Slack and Signal and systems like them are provided by single companies, and the continued function of those systems depends on those companies. Your long-term happiness with those systems could be influenced by how those companies respond to foreign or domestic governments' requests for surveillance, censorship, etc. What will happen if one of these companies is ordered by a government to backdoor their system, like Lavabit, or to defeat the encryption on a device, like Apple? Perhaps it will happen through a secret court order and we'll never know, or perhaps the company will decide to shut down like Lavabit did - as a user, both are bad options.
With email, you can host your own email server, and take the fate of your group's secure communication into your own hands. For anyone to get your data they'll need to come after your devices directly; they can't just send a court order somewhere to obtain information about your communication - not even metadata. Although systems like Signal may not have the preexisting ability to eavesdrop on the substance of your communication centrally, they may be able to eavesdrop on your metadata and contact list, and it's possible that they or any vendor involved in app distribution such as app stores, OS vendors, telecom providers, device manufacturers, could be ordered to collect your data or metadata, or even build and deploy a backdoor.
As far as security goes, I think email is adequately secure for private group communication for most purposes. Many of us rely on email for just that purpose within our organizations (e.g. companies). If your organization has a central mail server that authenticates its members, then you can get to a pretty good state fairly easily. Microsoft Exchange and Active Directory are an example of this. Gmail for business. Postfix with an LDAP server.
All modern email clients communicate with mail servers over TLS and will authenticate the server, just like a browser authenticates an HTTPS website. Modern email servers authenticate the client. What important security property do we get from the alternatives that we don't get from email? End-to-end encryption is the only one that occurs to me, and in an organization where we can trust our central server that's OK not to have. I suppose this is an easier proposition for companies than for loose groups of people organizing over the Internet. If you use a hosted email provider then obviously you must trust them, but self-hosted options are available and realistic (Exchange, Postfix).
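For instance, a self-hosted Postfix submission server can require TLS and authentication from the organization's own users with a few lines in main.cf. A sketch only: the paths and hostname are placeholders, these are real Postfix parameters, but the exact policy is a deployment decision:

```
# main.cf (excerpt)
smtpd_tls_cert_file = /etc/ssl/certs/mail.example.com.pem
smtpd_tls_key_file  = /etc/ssl/private/mail.example.com.key

# Require TLS before accepting mail from clients on the submission port
smtpd_tls_security_level = encrypt

# Require SASL authentication, and only offer it over TLS,
# so only the organization's members can send
smtpd_sasl_auth_enable = yes
smtpd_tls_auth_only = yes
```

(Requiring TLS unconditionally is appropriate for the submission port; on port 25, where arbitrary Internet servers connect, opportunistic TLS is the usual compromise.)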
Now, granted, what I am describing is a private organization rather than a federation. What's really cool though about email is that we get secure group communication within the organization, and also federated communication to other organizations on the Internet.
The security story for communication between organizations is weak, I'll grant. The primary weak link I see is not end user clients, but the communication between servers. Specifically, the lack of support in standards and servers for authentication of receiver by the sender, and for the receiver to declare that all incoming connections should come over TLS with a path-validated certificate. There is an Internet-Draft out for this called SMTP Strict Transport Security [1], but I don't know where it stands as far as support and adoption. DMARC+DKIM already provides authentication of the sender's organization by the receiver, and for declaring that all outbound messages from the sender's organization must carry a signature to be valid.
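For reference, the mechanism in that draft (later standardized as RFC 8461, MTA-STS) has the receiving domain publish a policy over HTTPS, discovered via a DNS TXT record. Roughly, with example.com as a placeholder:

```
# DNS: _mta-sts.example.com. IN TXT "v=STSv1; id=20170101000000Z"
# Policy served at https://mta-sts.example.com/.well-known/mta-sts.txt:
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```

In `enforce` mode, a sending server that supports the standard must refuse to deliver to example.com unless the connection uses TLS with a certificate valid for one of the listed MX hosts, which closes the downgrade hole described above.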
There's a fair amount of surface area in this model though: in particular, both end users and their mail servers need to be trusted. If in your communication between two organizations, you are willing to trust both organizations' central mail servers, then basic email setups will get you to a pretty good situation.
Some colleagues have recently made the case to me that S/MIME could fill in these gaps, and mentioned that folks in the industry are working to flesh out support for it. There are a number of problems to solve to make S/MIME practical, but with the will of a few major players I think we could get there. Some pieces that are missing include first-class client support, a key exchange/discovery/revocation system, and a scalable certificate issuance system. But solving S/MIME is only necessary to get to end-to-end encryption, and for most purposes it's practical to trust your organization's and partner organizations' email servers.
Even if email is not the secure group communication system of choice, it's still used for a lot of secure communication and is worth improving IMO. I'm glad that Google and others have been investing in this: https://blog.google/products/gmail/making-email-safer-for-yo...
It's all well and good to run your own mail server. The biggest problem is that 99% of the people you send to/receive from will be on one of the big 3 mail providers.
Could you elaborate on why that's a problem?
If the threat model for the communication within your group is concerned with attack or coercion from hostile governments, then you have the ability to set up your own email server and convince your participants to switch to it. It's not really much to ask -- it's fairly routine for companies and organizations to issue email accounts to employees/members and require their use for official business.
Consider that it may be easier to ask your communication partners to start using a new email account you've provided them than to adopt a new communication platform.
On the other hand, the people you're communicating with have chosen to trust the organizations that they're using to provide them with those services. Whatever service they're using is not radically different to trust than it is to trust the Signal developers, or the Google Play Store or Apple App Store to distribute the Signal application to your mobile device, and so on. Even if your participants are using Signal to communicate, they may have online backups enabled with their device to Apple, Google, etc., such that without your awareness your communication partners are trusting them just as much as if they were email providers.
I just had a quick look at an email list I run for a local sporting organization, and it looks like the big 3 are less than 50% of addresses. There were a lot of local ISP addresses, businesses and educational organizations.
Two things to consider: a lot of people use more disposable accounts for mailing lists, and a lot of businesses and educational orgs use Google Apps or MS's equivalent with their own domain name.
Are you basing the 50% on what comes after the "@", i.e. gmail.com and yahoo.com? Or are you looking at what server the MX points at? It's very easy to have the MX for your own domain point to a Google mail server.
Nothing against Signal, but I sure hope Matrix-based platforms and clients (like riot.im) keep growing. The folks who work on both matrix.org and riot.im have done so much work in such a short time... not just in developing the protocol/server/apps... but also in education. They really have helped people like me set up our own little home servers (i.e. private networks)... which ultimately helps the entire federated network. Signal - while it can certainly be set up/hosted by anyone separate from OpenWhisper - leaves something to be desired in the actual self-implementation details; there just aren't enough tutorials out there. (Or maybe it's just me?)
> Signal - while it can certainly be set up/hosted by anyone separate from OpenWhisper - leaves something to be desired in the actual self-implementation details; there just aren't enough tutorials out there.
That is the problem at hand, that Signal _does_not_federate_. You could modify your Signal app to connect to your own server, but then you would not be able to talk to anybody else.
For the record, I prefer usability and walled-garden security over federation, even though it hurts to admit as a long-time FOSS user.
I think Signal made the right decision in raising the bar for encryption while maintaining extreme ease of use. Random Joe can click on it in the app store and chat with anyone in his address book that runs Signal within a minute or so, with no expertise whatsoever.
However, I see no reason why a similar P2P app couldn't manage the same without a central server. The trick is that cell phones (at least on WAN) do not accept incoming connections. Additionally, Apple/Android push aren't good for a P2P transport.
However, adding supernodes (like the original Skype had) that could run on Raspberry Pis, open-source routers, and similar embedded devices might just bridge the gap. After all, the CPU, bandwidth, and memory needs for instant messaging are pretty modest, even with many people sharing a Raspberry Pi.
The idea of supernodes is what got everyone paranoid (with reason) that they could now be spied upon.
The Signal server software _does_ federate. It has had federation support since the first commit in the git history. Whisper Systems' server federated with Cyanogen's server for a while. Moxie has said it was a disaster and that is part of why the official Signal server won't federate.
You can, right now, run your own server and get other people to also run their own servers. Fork the Signal app, modify to ask for a server to use, and try and get people to use it instead of (or as well as) Signal itself.
It should be noted that Riot is just the first Matrix client to support end-to-end encryption, but there will be more in the future. The thing you want to bet on is the Matrix protocol, not necessarily Riot. (Although both are a safe bet since Riot is developed by the same team that built Matrix.)
I'm not part of the Matrix or Riot teams, but I'm convinced enough that Matrix is a great way forward for modern messaging. I started my own Matrix homeserver (as well as other Matrix libraries, eventually to include a Matrix client) written in Rust. If you're interested in Matrix, Rust, or both, I encourage you to get involved! https://www.ruma.io/
Are there any plans to do a security audit on Riot? The useful report by NCC [1] looks at libolm (which implements the end-to-end encryption) but of course that's only part of the whole product.
[1] https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-en...
Note that that report explains that the Double Ratchet E2E algorithm is used in Matrix, in large part because of the Open Whisper Systems implementation in Signal and subsequent licensing. So we're looking at an apples-to-apples comparison, at least with respect to this one piece.
Yes, it seems like a good choice of algorithm. And it seems like the implementation is also decent - they looked at that as well. It's a useful report and kudos to Open Technology Fund for funding it and to Matrix for making it public!
Still, this is only one piece of the overall security of Riot, so I'm still interested in knowing if there's any work going on looking at the bigger picture.
Yes. Once the E2E implementation is fully finished and out of beta, and once we have a non-beta homeserver (as Synapse is still technically in beta, albeit very late beta), we'll be going to NCC and working out how to do an audit of the whole enchilada (homeserver + olm + matrix-js-sdk + matrix-react-sdk + riot-{web,ios,android}). This may well end up being broken down into separate components, much as the Olm audit was limited to the Olm component. At the current rate this should happen at some point in 2017.
Great to hear, thanks!
> The most important concern is that Signal is a silo [...] you have to connect to OpenWhisperSystems servers to communicate with other users.
You can run your own private Signal service with OpenWhisperSystems' tools [1].
It's also worth noting that Signal - as a protocol - could easily be federated. (As others have mentioned, Moxie has chimed in on why the app is centralized [2]).
If federated messaging is important, why not use the existing Signal protocol implementations (including the X3DH key exchange, ratcheting protocol, etc.), which are all F/LOSS and have already been widely reviewed (as the article mentions)?
[1] https://github.com/WhisperSystems/libsignal-service-java
[2] https://whispersystems.org/blog/the-ecosystem-is-moving/
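For readers who haven't seen the ratcheting idea mentioned above, here's a toy sketch of a symmetric KDF chain. This is an illustration only, not the actual Double Ratchet (which also mixes fresh Diffie-Hellman output into the chain), and the constants are made up:

```python
# Toy symmetric ratchet: each message key comes from a chain key that is
# then advanced one-way, so a compromised current key can't be "rewound"
# to recover earlier message keys (forward secrecy).
import hashlib

def advance(chain_key):
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

ck = b"\x00" * 32          # stand-in for a shared secret from the key exchange
k1, ck = advance(ck)       # key for message 1
k2, ck = advance(ck)       # key for message 2
assert k1 != k2            # every message gets a fresh key
```

The one-way hash is what buys forward secrecy: knowing `ck` at step N tells you nothing about the keys used before step N.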
> You can run your own private Signal service
A distinction without a difference. I use Signal because people use Signal. People do not use 'the Signal service'. They use OWS's app and OWS's servers and moxie has explained he will not federate.
The fact that OWS goes to all the effort of creating this excellent protocol, and then insists on only deploying it to insecure devices (with direct-memory-access baseband radios) baffles me, but I hope that things move in a saner direction with time.
The biggest benefit I think OWS has provided is the ability for other platforms (e.g. Whatsapp) to use their protocols. I daydream about a day when all these competing messaging services realize they would stand to gain a lot by federating, but I know it won't happen in my lifetime.
I'm not a fan of opaque baseband firmwares either, don't get me wrong, but what's the alternative? Not for the DoD, I mean for union organizers making $50k a year -- people who aren't going to get murdered by Mossad, but still need to authenticate and encrypt their communication channels. What device would you recommend?
Who's the likely threat to union organisers? I suspect a pair-locked iPhone with Signal or Whatsapp would be more than secure enough.
The most prominent example would be https://en.wikipedia.org/wiki/Jimmy_Hoffa
and then the long, storied history of American strike-breaking &c.
Well, either the threat is a private group, then WhatsApp or even Google Hangouts is secure enough.
Or the threat is a government, then Signal is not secure enough either, because the US govt can just force Google and OWS to ship modified APKs.
That's conflating the specific binary instantiation with the general cryptosystem. Regardless, depending on your threat model, you can take increasingly { reasonable | paranoid } precautions, like manually compiling and loading Signal, as it's OSS.
edit: "private group" can encompass a lot, especially in other ecosystems like Google and FB. If said "private group" adversary is, say, a prominent and wealthy Silicon Valley businessman and enterprising vampire who collaborates with fascists, then you can see the potential of compromising someone's security by coercing Google or Facebook engineers to run you a Hadoop query or conditionally inject malicious JS.
> like manually compiling and loading Signal, as it's OSS.
Except, I’d have to modify the code, as the current version depends on Google’s proprietary libs, which I can’t inspect. And I lose half of the functionality, as RedPhone is also proprietary.
> by coercing Google or Facebook engineers to run you a Hadoop query or conditionally inject malicious JS.
The same can be done by coercing OWS engineers to backdoor their services.
And in any case, Signal can start collecting metadata any minute now, and there’s nothing we could do against it.
> And I lose half of the functionality, as RedPhone is also proprietary.
The source code for the Redphone client is here: https://github.com/WhisperSystems/Signal-Android/tree/master...
The source code for the redphone-audio library is here: https://github.com/WhisperSystems/Signal-Android/tree/master...
Stop spreading misinformation.
So it finally got opened? Still doesn’t help me, considering that the Firebase Messaging library compiled into the client is still proprietary.
I can not build Signal from source today.
I believe Riot is the future not because of its security (its attention to which is a great, great bonus) but because it's positioned itself so well as a credible successor to IRC.
And an open replacement for Slack.
Yes. I don't see Signal and Matrix in direct competition, just like WhatsApp and Slack are not in direct competition. The technology is very similar (main difference seems to be the size of chat rooms), but the use case and marketing is very different. Signal/WhatsApp is for casual mobile texting, while Matrix/Slack is for working.
I think Riot is a better comparison to Slack than Matrix is. Riot is essentially the Slack experience built on the Matrix protocol, but Matrix can certainly work just as well for clients that present a Signal/WhatsApp/iMessage/SMS-style interface.
It might be the case, but I sure would prefer being able to stick to just one messenger in the end. And I don't see why it couldn't be Matrix as opposed to WhatsApp. (If we ignore the network-effect factor, of course.)
After a recent discussion of the issues with XMPP on mobile, some obvious questions:
1. How well does Riot deal with changing network connections? Does it have problems when a mobile device switches between, say, WiFi and 4G? How well does it deal with a complete loss of connectivity?
2. How well does Riot deal with power management on mobile devices? Can it spend time in the background while getting message alerts while not running down the battery?
It's really good. I'll vouch for it as someone who has been using the mobile apps on both platforms, and the web application(s) on the desktop for over a year now. They're great. Battery isn't a problem; messages NEVER get lost.
I actually came to Matrix after trying to write an XMPP client, believe it or not. The Matrix protocol is WAY better equipped for the future than XMPP is: it simply has the core design necessary to make it federate well and do message sync without losses. (XMPP doesn't. (Unless you count a half-dozen XEPs, none of which are reliably implemented in all clients. But we're getting increasingly parenthetical here; by comparison, Matrix Just Works.))
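To make the sync point concrete: a Matrix client resumes from a single token rather than juggling per-feature extensions. A toy model of the idea (invented names; the real API is a single `/sync` endpoint with a `since` token, but the shape is the same):

```python
# Token-based sync: a client resumes from its last token, so messages
# that arrived while it was offline are replayed, not lost.
class EventLog:
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def sync(self, since=0):
        """Return everything after `since`, plus a new resume token."""
        return self.events[since:], len(self.events)

log = EventLog()
log.append("hello")
batch, token = log.sync()             # client syncs, gets "hello"
log.append("are you there?")          # arrives while the client is offline
missed, token = log.sync(since=token)
assert missed == ["are you there?"]   # nothing lost across the reconnect
```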
I've been using Vector/Riot using the Android version in F-Droid on Blackberry 10 for 6-9 months, connecting to my own server on my DSL. I haven't noticed a single issue with battery or lost messages.
OK, I did a better search and found something relevant in the fricken Matrix FAQ:
* https://matrix.org/docs/guides/faq.html#i-installed-riot-via...
>I installed Riot via F-Droid, why is it draining my battery?
>The F-Droid release of Riot does not use Google Cloud Messaging. This allows users that do not have or want Google Services installed to use Riot.
>The drawback is that Riot has to poll for new messages, which can drain your battery. To counter this, you can change the delay between polls in the settings. Higher delay means better battery life (but may delay receiving messages). You can also disable the background sync entirely (which means that you won’t get any notifications at all).
>If you don’t mind using Google Services, you might be better off installing the Google Play store version.
The permissions Signal asks for do seem excessive (both on iPhone and even more so on Android).
Can anyone justify why they are necessary?
Developers who are unfamiliar with the Intents system? That's a common reason for applications requiring a laundry list of permissions.
http://stackoverflow.com/questions/6578051/what-is-an-intent...
The developers have feature justifications for every permission requested: https://support.whispersystems.org/hc/en-us/articles/2125358...
Edit: Reading your link now, as I didn't see it before I made my comment. Was that added in as an edit?
These are the justifications of developers who are unfamiliar with the Intent system. Were I unaware of Intent, I would make the same design decisions.
Correct. Someone should point the Signal developers to the following pages:
https://developer.android.com/training/contacts-provider/mod...
https://developer.android.com/guide/topics/providers/calenda...
and so on and so forth for almost any permission the Signal app needs on Android.
I agree. Mostly because they're asking for all of those permissions prematurely.
What if I never want to share my location, take pictures, or send files?
And then some things, like calendar access, aren't even used right now.
> What if I never want to share my location, take pictures, or send files?
Don't use these features and / or disable the corresponding permissions.
I get that. I'm just saying that it wouldn't look as bad if they just didn't ask for the permissions upfront.
If I'm about to take a picture for the first time using it then I'll understand it asking for camera access.
Are intents the same as the run-time permissions? I might still be misunderstanding, but I think they have an issue open for this already: https://github.com/WhisperSystems/Signal-Android/issues/3983
Intents are the idea that your app doesn't directly request camera permission, for example, but instead asks the system to open a camera screen, and another app (or even your own app) can then respond to that.
This provides more choice, and reduces the amount of needed permissions.
Signal used to use the camera intent, it was changed to use direct capture so the camera opens faster and so captured photos aren't stored in the camera roll.
OWS already has:
https://support.whispersystems.org/hc/en-us/articles/2125358...
> Riot is based on the so-called Matrix protocol which is a federated protocol
> In addition, people are writing alternative clients to access the Matrix/Riot network, implementing their favorite features and workflows. As users can vote with their feet for their own interests and choose providers and apps of their liking
Can I run my own network which is not part of other networks (i.e. not "federated")? Can I tell somebody "call with your Riot client 'acqq at server ip nnnnnn' and we can talk"?
There are 2 "ways" to do this:
1. You and your friend both use the riot (actually matrix.org) server/network and you both choose any Matrix client (doesn't even have to be the Riot client), but only make use of private rooms on that server. This avoids any system setup overhead whatsoever...but the private room(s) that you create would still be on a server that is not controlled by you.
2. You can of course set up and run your own little private network, on your own domain name/IP address. This is what I do with my family; only my wife and daughter have access (I've even disabled registration). I have not yet connected my little network to the greater Matrix network, for 2 reasons: I wanted to beta test this internally so I could learn; also, I wanted to be sure my family does not get exposed to any spam (if there is any, that is).
Good luck; cheers!
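For anyone wanting a similarly closed setup with the Synapse homeserver, the key knob is registration. A sketch of the relevant fragment (check the sample homeserver.yaml shipped with your Synapse version for the exact option names):

```yaml
# homeserver.yaml (fragment) - a locked-down private server
enable_registration: false   # nobody new can sign up
# To stay off the wider Matrix network, simply don't expose the
# federation port (8448 by default) through your firewall/router.
```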
The Ruma homeserver implementation is specifically supporting this use case. The federation system will run as a separate application alongside the client-server API, so if you don't want to federate with other homeservers, you just don't run the federation component. https://www.ruma.io/
Yes, I set up my own Synapse server and connected to it from the official Riot app on phone and web. Worked fine. Also connected with another Matrix client and that also worked.
I believe so, yes. If not with the standard homeserver (synapse), than with a custom homeserver.
Note: my question is, with a plain client, downloadable from the app store, not with some special custom build of the client.
Also, how does puringpanda's question fit with your claim?
You can connect to any homeserver with the default client; it's not tied to the default homeserver.
If I understand puringpanda's question, the idea is that your domain and your homeserver are seized, but you have contacts on other homeservers. At this point, it's just like losing your email server. You lose your existing ID, and probably your message history, but you can reach your contacts from a new ID you create someplace else.
Wow, riot has a ways to go. With signal you install it from the app store, it creates an icon, you click on it. Similar for the desktop client, go to the chrome app store, click on it, and it tells you to use your phone to scan a barcode.
In both cases you can start chatting with any signal user in your address book in a minute or so, no expertise (other than using an app store) needed.
Tracked down http://riot.im; it has a "try now" button that just scrolls you to the top. Didn't see any way to actually try it.
I tried the Ubuntu app. They make you manually create your own /etc/apt/sources.list.d entry, from only the base URL. Then you have to know how to add the PGP key. Then apt-get update, apt-get install riot-web. Then... nothing. Nothing called riot or riot-web in the path. Thought maybe there would be a daemon running (it's called riot-web after all). Can't find any processes running, nothing listening on a new socket. I track down /var/lib/dpkg/info/riot-web.list, look through the list, and find they dropped a dir in /opt. So I run /opt/Riot/riot-web.
It worked, not exactly the kind of thing I'd ask random friends/family/colleagues to do though.
Does anyone know if they are planning to add a way to change home server. If they take your domain (With your Matrix server on it), you have no way of communicating with other people over riot anymore.
There are plans to support 3rd party identification (such as an e-mail address or a phone number) and use that as a basis for looking up users across the network, but I don't think it is currently usable. Account migration was brought up recently in the chat room, but it is not defined anywhere in the spec or reference implementations AFAIK. I agree that these are both important features, but I wouldn't worry too much about them unless they are left out of the 1.0 spec.
In the meantime, it's not like you can migrate your Signal, Telegram, iMessage, or even Gmail/Hotmail accounts. I think Matrix needs a few more client/server implementations before the spec can truly be set in stone.
Email identifiers work fine today, actually. They don't solve the problem of migrating accounts, but at least they abstract the discovery process away, as you say.
MSISDN (phone number) identifiers landed on the backend this afternoon; implementation in the Riot clients will be coming very shortly.
Account migration is Hard, and we deliberately descoped it from the original design of Matrix in a bid to ship stuff sooner than later. https://github.com/matrix-org/GSoC/blob/master/IDEAS.md#dece... is a quick description of the problem.
There are broadly two ways of solving it:
1. have a naive implementation where users can configure their accounts to replicate between sets of servers, and clients have primary and fallback servers they can talk to if the primary isn't available.
2. switch to a p2p model where each device has its own server, and so account data is automatically replicated across multiple devices. This has a host of other advantages too (e.g. you can own your data without running your own server; you can adopt metadata-protecting federation transports; you can still use all the existing apps today as the client-server API remains the same; you can still bridge with the Matrix network of today).
#2 is obviously way more work, and is effectively rewriting the federation side of Matrix. However, Matrix is designed to evolve and we're not ruling this out from happening at some point. Meanwhile #1 is more likely to land in the nearer future. We haven't got it scheduled in yet, but it's very much on our minds!
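Approach #1 above is essentially what failover-aware clients already do for other protocols. A rough sketch (function names invented for illustration; a real probe would be an HTTP GET to `/_matrix/client/versions`):

```python
# Sketch of approach #1: a client configured with a primary and fallback
# homeserver tries each in turn and uses the first one that answers.
def first_reachable(servers, probe):
    """Return the first server the probe succeeds against."""
    for server in servers:
        try:
            if probe(server):
                return server
        except OSError:
            continue  # unreachable, try the next one
    raise ConnectionError("no configured homeserver reachable")

# Simulated outage: the primary is down, the fallback answers.
up = {"fallback.example.net"}
server = first_reachable(
    ["primary.example.org", "fallback.example.net"],
    probe=lambda s: s in up,
)
assert server == "fallback.example.net"
```

The hard part approach #1 leaves unsolved, of course, is keeping account data replicated between those servers, which is where approach #2's per-device servers come in.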
I believe you need to create a new account on another homeserver.
Hear hear. Down with silos.
I just hope matrix ends up working better than xmpp.
A question I've had about Signal is what is stopping Apple from modifying and rebuilding the source with a backdoor in it? Is this technically possible (seems like it would be since they control distribution of the binary to devices)? The article is correct in stating that web based chat is inherently insecure but it seems all iOS apps are also inherently insecure. I'm by no means an expert though so would love to hear from someone with more knowledge.
EDIT: Thank you for the responses! It pretty much confirms what I thought; Apple _could_ access your communication (either through keylogging at the OS level or backdooring Signal) but this solution is better than everyone use plain text communication. I personally would not trust Apple with my life if I needed that level of protection but maybe that's not the main use case for Signal.
Technically? There's nothing stopping them. For that matter, there's no stopping Google from doing the same. There's also no stopping Apple from patching LLVM so that only patched versions of OpenSSL are ever compiled against. The question is how paranoid are you and what is your threat model?
We have to trust someone, eventually. This is especially true for the 99% of the population who doesn't have the skill to compile source themselves (nor should they have to).
Just in case nobody has gotten to enjoy this gem:
http://wiki.c2.com/?TheKenThompsonHack
Ken describes how he injected a virus into a compiler. Not only did his compiler know it was compiling the login function and inject a backdoor, but it also knew when it was compiling itself and injected the backdoor generator into the compiler it was creating. The source code for the compiler thereafter contains no evidence of either virus.
Which is why standardization is just as important as, if not more important than, openness in making sure things stay secure. Such an attack is made a lot more difficult if you have a second toolchain you can use to verify things, and even more so if you have a third.
> For that matter, there's no stopping Google from doing the same.
That's the exact reason why package signing is decentralized in the Android ecosystem. All apps in the Play Store are signed by their developers.
With such a system, you must end up trusting a certain entity; it's turtles all the way down otherwise. No system is independently secure.
Similar questions include: What if a CA is compromised? What if Apple/MS bundles unwanted certs with the OS? What if Intel/AMD biases the on-die hardware RNG or other hardware crypto primitives? What if Apple/MS bundles a backdoored compiler a la "Reflections on Trusting Trust"? What if MS/Apple backdoor the entire network stack, including the physical and data link layers? etc. etc.
Does Signal support reproducible builds, at least? Real question, I don't know.
Partially. They're moving towards it, but it obviously doesn't help that only half of the app is actually open source.
The way the App Store works is that the app is signed by both Apple and the developer's private key. Without the developer's private key, Apple is not supposed to be able to sign the app in a way that the App Store would upgrade an app and consider it to be the same app. But of course Apple could modify the App Store or iOS in a way to remove those restrictions.
I'm not sure it really matters -- if Apple wants to log your conversations they don't have to put a backdoor into Signal, they could just put a backdoor into iOS itself. An attacker with privileged access to the guts of the operating system doesn't have much need to muck around with hacking the applications that run on it.
Which is to say, security-minded users should strive to trust as few parties as possible. But since at the end of the day you have to trust somebody, if you don't trust Apple the only really secure move would be to not use iOS devices at all.
Exactly: as soon as you use Apple, you can as well use iMessage and FaceTime with the other iOS users. You just need something to be able to communicate a bit safer with the users who don't have iOS.
But if the user has another OS, then you can believe those who get control of that OS/device can read your messages to that user and record your calls to him/her.
It's turtles all the way around. The more you communicate, the less you can expect to remain "private." Come to think of it, that holds without computers too.
But one should never consider oneself secure from targeted attacks. What Signal et al. protect against is dragnet surveillance, which Apple can perform remotely with iMessage without having to install an exploit on every iOS device. They do not have that opportunity with Signal.
> What Signal et. al. protects from is dragnet surveillance
Can that be claimed, given that
- the user has to log in with his phone number to Signal's servers in order to communicate, and
- no user can use any servers other than Signal's, which are hardcoded in the apps?
It seems that it's perfectly positioned to at least collect the metadata, and the owners don't want to let you change these rules.
Yes you depend on Apple's cooperation on iOS & Google's cooperation on Android. Even if you flash your phone with your own custom OS, your radio chips will still run proprietary firmware that can be updated over the air without the OS even knowing.
These aren't high priority problems right now for mass surveillance, as we have people using plain text chat. If you're expecting a wealthy adversary to directly target you, then the only safe move is to avoid technology.
There is no technical restriction. But if Apple did do this and someone figured it out it would probably hurt their perception by their users and it would be a bunch of bad PR.
What's to stop them from modifying the OS itself to spy on you? On a closed platform, you don't have peace of mind from spying.
On an open platform, you don't have peace of mind from spying...
Unless every single part of your platform is open and you build it yourself. But then make sure the parts you use to build it aren't tampered with.
I think somewhere along the conclusion of that train of thought you'll need to build a fab to make sure there aren't silicon level backdoors in your hardware.
If you care that much, just run your own OS. Many android phones have quite a few options available.
A custom Android OS is the last thing I'd consider secure...
++
Riot? Can we talk about the edgy names? Think about how much differently history might have been if Napster was named Library of Alexandria.
"According to Galen, any books found on ships that came into port were taken to the library, and were listed as 'books of the ships'. Official scribes then copied these writings; the originals were kept in the library, and the copies delivered to the owners."
https://en.wikipedia.org/wiki/Library_of_Alexandria
Sounds like Napster, yes? Think about how much harder it would be for Congress to pass laws shutting down the digital equivalent to a library sharing the world's music.
But no. We get names like Riot, and Felony (https://github.com/henryboldi/felony). Congress sends you a "Thank you" every time you put an edgy name on something disruptive.
> If OpenWhisperSystems adopts any policy that goes against users’ interests in the future, users cannot switch providers without losing all their contacts.
Is this correct? I've never bothered looking it up, but Signal was connecting me with people in my phone's address-book.
It's semi-correct? You don't lose your contacts, as they're still stored in your phone's address book. But all of your contacts will have to jump ship at the same time as you, to the same silo. And your old contacts will only still be usable if the new silo also uses phone numbers as userids.
On Matrix/Riot, userids are federated in the same way as emails. So when you change providers, your userid changes, but your contacts' stay the same, and you can still connect with them from your new provider.
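To illustrate: a Matrix user ID embeds its homeserver the way an email address embeds its mail domain, so each contact's reachability is independent of your own provider. A tiny hypothetical helper:

```python
# A Matrix ID names its own homeserver, like an email address names its
# mail domain (ignoring port suffixes for simplicity).
def homeserver_of(user_id):
    """Return the server part of an ID like '@alice:example.org'."""
    if not user_id.startswith("@") or ":" not in user_id:
        raise ValueError("not a Matrix user ID: %r" % user_id)
    return user_id.split(":", 1)[1]

# Your own ID changes when you switch providers...
old_me, new_me = "@bob:matrix.org", "@bob:myserver.net"
# ...but each contact's ID still points at *their* homeserver:
assert homeserver_of("@alice:example.org") == "example.org"
```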
I have the impression that Signal by now has such a great brand name that mere technical objections won't affect its growth for a very long time.
I wonder if the name of Riot will be a hindrance for widespread adoption. Most people don't like riots.
I actually just learned today that Riot (the IM app) is not related to Riot (makers of hugely popular video game League of Legends). I thought the occasional mentions I was seeing of "Riot chat" meant that LoL's mobile chat client was gaining traction among people who don't play the game.
Or the fact that searching for it usually results in the company riot behind league of legends.
Yeah, it's a bad name. Don't think I could get my parents or colleagues to use it.
Vector.im was much better - I don't really get why they changed it.
How about Discord? It's very popular among gamers despite its name.
You might have a point there. "Signal" feels much smoother than "Riot".
It's far from perfect, but it's better than any other option for most. Go to the Android/Apple store, click install, start chatting with anyone in your address book that uses it.
Anything else is either much less secure, or much harder to install/use.
The question I would ask is: given the list of capabilities presented in this post, is there even any difference between Riot and e-mail? Or are you reinventing the wheel?
Riot is most comparable to Slack or Discord. It has chat rooms. It supports voice, image posts, file transfer, etc. It stores conversation history. You can private message people.
Matrix is a generalized protocol for decentralized and federated communications; it's agnostic to the application layer provided by Riot. Something Matrix doesn't have, but is on the issue backlog, is support for email-esque thread contexts. [0]
I would love to see an email-like application implemented on top of Matrix.
we're working on it :>
Discord/slack isn't federated, is it?
no
End-to-end encryption. Even if you encrypt email with PGP, which no one has come up with a satisfactorily easy interface for, it leaks a lot of metadata. Riot gives essentially the same privacy guarantees as Signal, but with email-like federation.
PGP can be used to encrypt at the ends, in which case it is end-to-end encryption. So that's not a different feature.
Care to share what you mean by PGP leaks a lot of metadata? You might be right, I'm just not aware of such details.
If two users PGP email to each other, anyone who's monitoring either mail server or the network in between can tell A) when they are talking, B) who they are talking to, C) what the subject of the email is, and D) how big the email is.
With Signal this is much harder; for instance, Google/Apple do not know who you are chatting with. They also don't handle the encrypted message transport/delivery.
None of the email headers are protected in any way for a PGP-encrypted email. All the same metadata that is collected from plaintext email is still available on "encrypted" email. You can literally only protect the body of the email. In surveillance, that is often the least interesting or valuable piece of information.
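This is easy to see with Python's stdlib: build a PGP-style message and everything except the body is ordinary plaintext on the wire (the addresses are examples, and the "ciphertext" is just a placeholder, not real PGP output):

```python
# With PGP, only the body is encrypted; the headers travel in the clear
# past every relaying server.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Strike vote on Friday"   # leaks, even when "encrypted"
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n"
)

wire = msg.as_string()
assert "Strike vote on Friday" in wire     # subject visible to every relay
assert "bob@example.net" in wire           # recipient visible to every relay
```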
As mentioned, Signal doesn't hide contact discovery. But it does a pretty good job of hiding who you are chatting with from everyone but OWS.
OWS received a grand jury subpoena and stated that "the only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user's connectivity to the Signal service."
Certainly a NSL might compel OWS to add additional logging (and not talk about it). With that they could tell who messaged who, when the message was sent, and how big the message was.
> Certainly a NSL might compel OWS to add additional logging
NSLs cannot be used for that. They're a legal tool that can be used to extract certain types of information (such as subscriber information and maybe a little bit of transactional information) that a service provider already has stored on their servers [0]. However, they cannot be used to force a service provider to write and deploy code.
[0] NSLs are not magic - https://www.youtube.com/watch?v=YN_qVqgRlx4&t=20m16s
>As mentioned signal doesn't hide contact discovery. But it does a pretty good job of hiding who you are chatting to from everyone but OWS.
Anything that does TLS to connect to a single server can do that. Heck, if you do email to a single email server using secure IMAP and secure SMTP then PGP no longer leaks metadata.
> OWS received a Grand jury subpoena ( ... )
Link please
Signal does not solve the metadata problem either. Discovery is an open problem [0]. Signal messages leak the recipient, which is the most important metadata. You would have to use Tor/onion routing, which is inefficient.
PGP encrypts the contents of the message, but not the headers.
I support good alternatives to Signal that also have other goals in mind. Signal's goal is to become basically as mainstream as Whatsapp is, and to get there it needs to make a few compromises for usability's sake.
Whatsapp has already backtracked on some major privacy promises, and who's to say it won't backtrack on the end-to-end encryption support eventually, after everyone is baited and switched to it? Or worse, it could start to decrypt E2E communications in secret for governments.
So we need a "mainstream" alternative that's actually trustworthy and can at least protect the security of the communications, if not the relationships between users.
However, I support applications that aim to offer even better privacy and security compared to Signal, that are aimed at more opsec-sensitive targets, such as journalists. Signal may be the best tool journalists have right now, but it's probably not the best one they could have, as it doesn't do a great job at protecting sources. Perhaps Ricochet or the Tor Messenger may be better for that.
What I'm worried about though is that even if these apps offer better security/privacy features, the various federated applications that use an E2EE protocol may not have too much of a security mindset. For instance, sure, Riot may adopt a better protocol, but is Riot itself using all modern security best practices? Can we trust the Riot developers just as much as we do the OWS developers? etc
Finally, I'd much rather see Signal become a P2P application than a federated one, if that would even be possible.
The trick to P2P is that generally you have to accept incoming connections for it to work. In Signal's case, that's the Signal servers.
Originally Skype did this: Skype users with a good network connection, good uptime, and who accepted incoming connections could self-promote to a supernode. This allowed async messaging for others and helped introduce peers who couldn't talk directly because of IP masquerading/NAT, etc.
So it's possible that Signal could write a small application that could act as a supernode. Ideally it could run on a Raspberry Pi, plug computer, or even any of the numerous open-source routers. That way your battery-sensitive phone wouldn't get run down by participating in a DHT or similar, but your Raspberry Pi could act like your inbox and facilitate incoming and outgoing messages.
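Schematically, the store-and-forward role such a supernode would play is tiny (toy sketch, all names made up; a real supernode would also need authentication, transport encryption, and peer introduction):

```python
# A supernode holds (already end-to-end encrypted) messages for offline
# peers and drains the queue when they reconnect.
from collections import defaultdict, deque

class Supernode:
    def __init__(self):
        self.inboxes = defaultdict(deque)

    def store(self, recipient, ciphertext):
        """Queue a message for a peer that may be offline."""
        self.inboxes[recipient].append(ciphertext)

    def forward(self, recipient):
        """Drain the peer's queue when it comes back online."""
        return list(self.inboxes.pop(recipient, deque()))

node = Supernode()
node.store("bob", b"...")              # bob's phone is asleep
assert node.forward("bob") == [b"..."]  # delivered on reconnect
assert node.forward("bob") == []        # and delivered only once
```

Since the supernode only ever sees ciphertext and queue lengths, the CPU and memory cost per user really is modest, which is the point made above.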