BearSSL – Smaller SSL/TLS
bearssl.org

The last thing the world needs is another immature SSL/TLS implementation. However, this is what makes it very interesting:
> No dynamic allocation whatsoever. There is not a single malloc() call in all the library. In fact, the whole of BearSSL requires only memcpy(), memmove(), memcmp() and strlen() from the underlying C library. This makes it utterly portable even in the most special, OS-less situations. (On “big” systems, BearSSL will automatically use a couple more system calls to access the OS-provided clock and random number generator.)
> On big desktop and server OS, this feature still offers an interesting characteristic: immunity to memory leaks and memory-based DoS attacks. Outsiders cannot make BearSSL allocate megabytes of RAM since BearSSL does not actually know how to allocate RAM at all.
Edit: Just discovered what makes this an even more interesting one to watch: it's the work of this wizard: http://security.stackexchange.com/users/655/thomas-pornin
> The last thing the world needs is another immature SSL/TLS implementation...
No, the world needs as many of those as it can get (as long as they are tagged as such). And then all the authors need to discuss with each other what they learned from the implementation. And then they compare their code with existing code bases and discuss differences. They review their patches, as well as patches from other projects.
They discuss the diffs, learn from others, and bring in new ways to look at things.
All bugs are shallow given enough eyes, and yet one of our biggest mantras exists solely to limit the number of eyes. We need people who are familiar with crypto codebases and their subtleties, because we need reviewers for our established projects. And for this reason, we need people to write and publish crypto-related code. Not so we can push yet another new, excitingly half-baked TLS stack into a product, but to foster the code review process we all rely on.
"The last thing the world needs is another immature SSL/TLS implementation"
Are you saying we should live forever with the established SSL libraries?
The only way software can mature is to write it, release it, ship it, fix it, repeat.
Assuming you have the resources to do so correctly. Which was the issue with OpenSSL.
Completely unrelated, but:
Another small-footprint SSL/TLS library with very readable code, and a pleasure to work with.
I'm not sure that was the issue with OpenSSL. According to the LibreSSL folks, the OpenSSL team was spending massive amounts of time on FIPS support at the expense of known serious issues the OpenBSD team had raised.
And it seems the firewall here has made a clbuttic mistake, as that page is blocked due to the url containing "porn".
So you're pretty much out of luck if you want to visit a page that contains some analysis?
Well, presumably, if the author's last name is Pornin and your spam filter is pretty basic (or overzealous).
"contains some analysis"
I wonder how it would cope with Scunthorpe
Or your name is "Olivia Gray". (True story, corporate mail filter blocked all that person's email)
Why? 50 Shades? Something else?
Remove the space from her name and then look for a word in the middle that is commonly associated with spam emails.
For anyone unaware: https://en.wikipedia.org/wiki/Scunthorpe_problem
security.se works over HTTPS, might bypass your firewall: https://security.stackexchange.com/users/655/thomas-pornin
What sorts of applications are written for OS-less systems that require a TLS library?
EDIT: Thanks for the sincere responses. In retrospect my question might have appeared smarmy, but that wasn't my intent and I really appreciate the responses.
Reasons for SSL:
• Security
• The server you speak with requires SSL
Reasons for no general purpose OS:
• Limited power (battery, solar)
• Extreme cost pressures (Linux needs about $5 of hardware)
• Security (smaller code to audit)
• Extreme reliability requirements
So anything that ticks a bullet in each category is a candidate.
• Remote sensors
• Radio gateway, say LoRa to an internet server
• A device which keeps a secret for you and provides it to a server on command, perhaps something in a 2FA vein.
• Remotely triggerable actuators (door locks, parking lot lights)
I very badly need a TLS library in my embedded firmware so I can accept new firmware updates over HTTPS.
You don't need TLS for that; you could simply use an HMAC and a shared secret, assuming you're not worried about people with physical access (and the ability to extract the secret) being able to create updates. Of course, if you've got multiple instances of the device (not some hobby thing where they are all owned by you), then the secret for each device should be different, so someone can't buy a device, determine the secret, and then push updates to other people's devices.
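A minimal sketch of that approach, assuming libsodium is available on the device (crypto_auth_hmacsha256_verify is its real constant-time verifier; the trailing-tag layout and all names here are made up for illustration):

```c
#include <stddef.h>
#include <sodium.h>   /* crypto_auth_hmacsha256_verify() */

/* Hypothetical layout: update image body followed by a 32-byte HMAC tag. */
#define TAG_LEN crypto_auth_hmacsha256_BYTES

/* Per-device shared secret, provisioned at manufacturing time. */
static const unsigned char device_key[crypto_auth_hmacsha256_KEYBYTES] = { 0 /* ... */ };

/* Returns 1 if the update is authentic, 0 otherwise. */
int update_is_authentic(const unsigned char *image, size_t image_len)
{
    if (image_len < TAG_LEN)
        return 0;
    const unsigned char *tag = image + image_len - TAG_LEN;
    /* Constant-time verification of HMAC-SHA256 over the image body. */
    return crypto_auth_hmacsha256_verify(tag, image,
                                         image_len - TAG_LEN,
                                         device_key) == 0;
}
```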
Wouldn't signing each release with a private key be the simplest solution here?
(that can take many forms, but that general idea is how most software updates currently work)
RSA means big-integer arithmetic, which means unhappy performance on devices that often don't even have hardware floating point. I think elliptic curve could be faster?
> I think elliptic curve could be faster?
Yes, EdDSA is faster, with 64 byte signatures. Recommended.
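To make that concrete, here is a sketch of signed-update verification using libsodium's Ed25519 API (the appended-signature layout is an assumption, not a standard format):

```c
#include <stddef.h>
#include <sodium.h>   /* crypto_sign_verify_detached() */

/* Vendor public key baked into the firmware; no per-device secret needed. */
static const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES] = { 0 /* ... */ };

/* Hypothetical layout: image body followed by a 64-byte detached signature. */
int update_is_signed(const unsigned char *image, size_t image_len)
{
    if (image_len < crypto_sign_BYTES)   /* 64 */
        return 0;
    const unsigned char *sig = image + image_len - crypto_sign_BYTES;
    return crypto_sign_verify_detached(sig, image,
                                       image_len - crypto_sign_BYTES,
                                       vendor_pk) == 0;
}
```

Unlike the HMAC scheme above, nothing extracted from one device lets an attacker sign updates for others, since devices hold only the public key.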
Verifying a signature is not the simplest thing to do on hardware that doesn't even support a normal OS.
I see, that makes sense. Let's say you implement verification as:
1. Hashing the incoming data
2. Decrypting an attached signature
3. Verifying the decrypted and calculated hash are the same
Even though Step 2 would involve RSA or ECC, wouldn't Step 1 be the most expensive part regardless?
Yup you are right.
Not a very good idea, for the very reasons you point out. Signed releases with public keys, as conradev points out below, are the far better approach.
Besides what was already mentioned:
Many automotive or industrial communication buses are currently unencrypted, but could surely benefit from encryption.
probably could be useful in some small "internet of things" device
This. Internet of Things especially can benefit from adding TLS.
However, you would often want DTLS there, which is, for example, what CoAP (an HTTP-like IoT protocol based on UDP) uses.
I would imagine as some sort of secure-boot or trusted-hardware process.
Lots of things with microcontrollers.
> The last thing the world needs is another immature SSL/TLS implementation
Here's an interesting thought: you don't become mature without starting somewhere.
Is there any evidence that memcpy/memmove outperform malloc?
That question doesn't make any sense. They don't do the same thing, so you can't compare their relative performance. What's faster: an Intel i7 or a BMW i8?
Malloc is potentially troublesome for two reasons. First, its performance is potentially unpredictable. It depends on the current state of the heap at the time of the call, which you can't know in advance except in some very rare situations. It can also fail entirely, and that is likewise unpredictable.
memcpy and memmove, ultimately being byte-copying loops, don't suffer from these problems. Their performance is consistent and they always succeed if your pointers and lengths are valid.
On PCs these days, the troubles of malloc don't matter much. You have so much performance margin that occasional slow calls don't matter, and virtual memory with a big address space means that it almost never fails. If it does fail, it's OK if the program crashes and you have to restart it. But many systems are much more constrained.
I think the implication is that without malloc you will remove a slew of potential bugs related to memory management, making the software more stable.
IMHO you're converting your heap buffer overflows into stack buffer overflows, which are even easier to exploit.
Stack usage is also much, much, much easier to characterize. In systems where stack depth is well-controlled (i.e. most embedded systems that don't have dynamic process/thread creation), very simple analysis will suffice to identify places where you blow your stack.
Not using the heap != everything is allocated on the stack. In situations where you want to avoid dynamic allocation, memory for most things that would otherwise have been dynamically allocated ends up being statically allocated at compile time.
Absolutely none of this is immune to buffer overflows.
No, but you're a lot more likely to be able to overwrite the return address via a stack-based buffer overflow, and that is generally a much more serious kind of attack.
Exploiting systems without dynamic memory is pretty meh.. that's some NSA level Stuxnet bespoke shit.
But no, judging from the code, you just give it one big fat I/O buffer that will usually come from .bss
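Roughly the shape of the BearSSL sample client code (the br_* names are from the public bearssl.h; the trust-anchor list TAs/TAs_NUM is assumed to be generated elsewhere, and error handling is elided):

```c
#include <bearssl.h>

/* Trust anchors, assumed generated offline from a CA certificate. */
extern const br_x509_trust_anchor TAs[];
extern const size_t TAs_NUM;

/* All memory supplied up front: contexts plus one big I/O buffer,
 * which can live in .bss (as here), on the stack, or anywhere else. */
static br_ssl_client_context   sc;
static br_x509_minimal_context xc;
static unsigned char           iobuf[BR_SSL_BUFSIZE_BIDI];

void start_client(const char *host)
{
    br_ssl_client_init_full(&sc, &xc, TAs, TAs_NUM);

    /* Hand the engine its one and only buffer; the library never mallocs. */
    br_ssl_engine_set_buffer(&sc.eng, iobuf, sizeof iobuf, 1);

    br_ssl_client_reset(&sc, host, 0);
    /* ...then pump records through br_ssl_engine_* (or the br_sslio_*
     * helpers) against whatever transport the platform provides. */
}
```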
It depends, does malloc have some form of hardening? does the compiler insert stack canaries?
And moves and copies won't have potential for other memory-management bugs? You'll still have memory-management overhead, and now additional complexity.
At a high level, isn't this like implementing your own "malloc" and "free" that just pulls from your process's own memory pool instead of the OS? Or is there more to it than that?
No, it's just placing the appropriate structs and buffers on the stack (when not provided by the caller).
It does eliminate a couple of classes of errors, and makes some others less likely.
I didn't read all the code, but I don't think it's using alloca or the like. So the stack allocation sizes are known at compile time, and bounded unless there's some recursion going on (which is unlikely).
Many real time systems and applications disallow heap usage, because they have formal verification requirements that can't be met with dynamic memory that may "run out" depending on run time state.
Exactly this.
Almost always. Any sane implementation/system is going to need to zero memory, so you're going to write to it 2x at a minimum.
I love the idea of a zero-allocation crypto library, but isn't the fact that this is also in C going to eventually lead down a similar path as that of OpenSSL?
I'm personally really excited for this: https://github.com/briansmith/ring
It's a Rust oxidization of the BoringSSL library, meaning that parts of BoringSSL are being rewritten in Rust, with the eventual goal of being pure Rust.
> with the eventual goal of being pure Rust
No, being pure Rust is not the goal. It aims to use Rust as much as possible for the parts that Rust is good at. But core crypto algorithms generally need to be written in assembler, to avoid various timing attacks that could be introduced by optimization. And for things that would require large amounts of `unsafe` in Rust, there's less reason to port that to Rust, and leaving it in C can be more clear.
See the style guidelines from the project:
https://github.com/briansmith/ring/blob/master/STYLE.md#unsa...
The things that are most appropriate for Rust are parsing, protocol implementation, higher-level code that uses core crypto primitives, and providing a safe API to client code. But the core crypto primitives themselves will remain written in C and/or assembler, as appropriate.
> to avoid various timing attacks that could be introduced by optimization
Assembler only goes so far. Until you figure out how the processor's front end will decode the machine code and run the underlying RISC program, or how the hypervisor will schedule your program on shared machines (e.g. in EC2), you're susceptible to a different class of side-channel attacks.
This is true but not relevant to the discussion: whatever side channel problems you'd have in pure assembly, you're practically certain to have more of them if you implement crypto in a high-level language with an aggressive optimizer.
Note I never disagreed with GGP's premise - I'm only pointing out that you can go further than assembler.
Is there any way to write programs that bypass the processor's front end for most of the processors in use? If not, in what sense is it true that "you can go further than assembler"?
You don't have much choice with commodity hardware - but you could perform crypto on hardware/chips where the behavior is completely specified.
Note that this is not very practical, and impractical crypto is almost as good as no crypto.
This is often, though not always, the case with deeply embedded processors of the kind BearSSL seems to be designed for. There's generally some way to get predictable cycle-exact performance on them because it matters for some embedded applications.
The problem with OpenSSL is less about the language it's written in and more about the age of the project, the discipline of the developers, the quality of the codebase, and its prevalence, which leads to its vulnerabilities having a high impact. OpenSSL's code is a heap of trash and that's why it's vulnerable, not necessarily because it's written in C.
Sure. Though, C does nothing to help prevent you from creating that same garbage again.
So while there are excellent examples of C projects out there, there are many more that show why it's important to provide developers (even the good ones) with guard rails.
Libraries like this are almost invariably a terrible idea: none of the more recent alternatives to OpenSSL I've seen have avoided resurrecting crypto bugs OpenSSL fixed years ago.
But: Thomas Pornin!
So, this is pretty neat. I hope lots of crypto people take a very hard look at it.
Yeah, the general wisdom is, basically, "if you don't know what you're doing, leave the crypto to the experts".
I don't know the guy but, from what I gather, he is considered to be one of these experts, yes?
(Edit: If I would have read further comments before replying, I would've found the answer to my question.)
Besides being a crypto professor, he managed to guess what CRIME was about, after it was announced that some bad OpenSSL advisory was imminent, but before it came out.
Hence the proposed bug-squashing strategy of "just claim that there's a bug in XYZ and let him oracle what it is".
Yep!
One of the cliches about crypto is that you should not implement your own crypto. Not to suggest that the authors don't know what they are doing, but they mention 'alpha' quality themselves on the site. I wonder, how long does it take until a new library is deemed secure? What does the process look like? Trial and error? Or do they compare notes with vulnerabilities found in e.g. openssl?
It is a combination of "many eyes" and "good documentation". What is needed is some good text that describes the design choice, the rationale, and all the tricky details; and then people who read it and think about it. I'll write and publish such text within the next few months.
So I guess it is going to take years.
Generally, open source software benefits from more users. But having a huge number of users makes it more difficult to improve and clean up, because you can't just deprecate stuff easily (like SSL2/3).
Also, having 100% of the internet using openssl makes the impact of a vulnerability in that library huge. Some diversity is probably a good thing.
I appreciate the time and effort that you are putting into it, good luck.
You could use some of your 233,693 reputation points as bounties on security.SE to attract people to read and think about bearssl.
Any RSS feed to be able to catch that?
I was skeptical, but then saw:
Yeah, "should not implement your own crypto" doesn't apply to him.author Thomas PorninFrom his CV http://www.bolet.org/~pornin/cv-en.html :
Yeah, doesn't apply to him.AES (Advanced Encryption Standard, 1997 to 2000): co-author of the block cipher DFC eSTREAM (ECRYPT Stream Cipher Project, 2004 to 2008): co-author of the stream cipher SOSEMANUK (admitted in the final portfolio) SHA-3 (2007 to 2012): co-author of the cryptographic hash function Shabal (selected for second round) PHC (Password Hashing Competition, 2013 to 2015): author of the password hashing function Makwa (finalist, was awarded a "special recognition") Author of the sphlib library: optimized implementations of many cryptographic hash functions, both in C and Java. Author of RFC 6979: Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA).Definitely, most of my crypto searches ends up landing on one of his answers :)
And you are Dmitry! We use your TweetNacl-js implementation for https://github.com/wallix/PEPS!
After a little bit of looking around on the internet, I think I agree.
This is a very important point. When it comes to implementing new software, especially in a critical field like crypto, it takes a long time for acceptance to build.
As alpha software, I'd be shocked if anyone was using it in a production capacity, but it could be useful for early investigations into issues like timing attacks; clearly, it's better to get them sorted before a "final" release. I'd hope that any project planning to adopt this (or any) crypto code was first getting it analysed carefully.
On the other hand, quite a bit of existing crypto is there because it was the first implementation on a given platform, despite potentially having issues. Think about Heartbleed, which was missed for quite a while in very heavily used software. It's not always bad to have fresh alternatives, as long as they are approached cautiously.
Generally the cliche is about not implementing your own cryptographic algorithms. As long as they only implement existing algorithms and don't generate new ones, I don't think this applies.
I used to think that, but I'm following Dan Boneh's crypto course on Coursera at the moment, and he specifically notes that you should not even try to implement known algorithms yourself (for production; you could of course do it for the learning experience).
The reason is that there are subtle attacks on the implementation, such as timing attacks, which can leak information.
One of the points of the exercise is to provide constant-time implementations that provably do not leak such information. The hash functions, two AES, one DES/3DES, RSA and elliptic curve implementations in BearSSL have been written to be constant-time. I still have to write some document that explains how such things are done.
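To give a flavor of what "constant-time" means in practice (a generic sketch, not BearSSL's actual code): a comparison whose running time depends only on the length, never on the contents.

```c
#include <stddef.h>

/* Constant-time equality check: no data-dependent branches or early
 * exits, so timing reveals only len, not where the buffers differ.
 * Returns 1 if equal, 0 otherwise. */
static int ct_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= a[i] ^ b[i];   /* accumulate differences without branching */
    }
    /* Map diff (0 vs. nonzero) to 1 vs. 0 without a branch. */
    return (int)(((unsigned)diff - 1U) >> 8) & 1;
}
```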
This thinking is why Heartbleed was such a disaster. Everyone left it to someone else, and the end result was OpenSSL being the only serious choice. Yet even the 'experts' got it wrong -- way wrong.
I'm not advocating everyone and their mother implement their own crypto. But some software diversity is a good thing. These algorithms aren't quite as scary or fickle as the documentation and existing implementations make them seem. Especially if you stick to good software like DJB's stuff.
For instance, using montgomery/edwards curves instead of weierstrass curves eliminates a lot of the difficulty in writing a constant-time implementation of ECC. And the 25519 implementation comes with a fast, constant-time implementation of a prime field type.
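For instance, the heart of a constant-time X25519 Montgomery ladder step is just a branch-free conditional swap driven by a secret bit; a sketch over a hypothetical ten-limb field element (layout borrowed from the ref-style implementations):

```c
#include <stdint.h>

/* Branch-free conditional swap of two field elements (ten 32-bit limbs
 * each, as in ref-style code). If bit == 1 the contents are swapped;
 * if bit == 0 they are untouched. The memory access pattern is
 * identical either way, so the secret bit does not leak via timing. */
static void fe_cswap(uint32_t a[10], uint32_t b[10], uint32_t bit)
{
    uint32_t mask = (uint32_t)0 - bit;   /* 0x00000000 or 0xFFFFFFFF */
    for (int i = 0; i < 10; i++) {
        uint32_t t = mask & (a[i] ^ b[i]);
        a[i] ^= t;
        b[i] ^= t;
    }
}
```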
Yet even DJB's stuff can be simplified. You can knock off a good 80% of the scary code at a cost of a mere 10% of performance. To Google or Facebook, that may be unacceptable. But to me, that's entirely worth it. Now you have a tiny library that is easy to understand, and easy to audit.
> Yet even DJB's stuff can be simplified. You can knock off a good 80% of the scary code at a cost of a mere 10% of performance.
Wouldn't you say DJB (et al) did that themselves? https://tweetnacl.cr.yp.to/
I haven't benchmarked TweetNaCl's performance yet, but I think that goes way too far into making the code completely unreadable. The 80%/10% number I gave is against the ref10 implementation.
Compare theirs: https://tweetnacl.cr.yp.to/20140427/tweetnacl.c
To mine:
https://gitlab.com/higan/higan/blob/master/nall/elliptic-cur...
https://gitlab.com/higan/higan/blob/master/nall/elliptic-cur...
https://gitlab.com/higan/higan/blob/master/nall/cipher/chach...
https://gitlab.com/higan/higan/blob/master/nall/mac/poly1305...
https://gitlab.com/higan/higan/blob/master/nall/hash/sha256....
Please note that like BearSSL, my implementations are alpha-quality. Further, I'm not suggesting anyone use these in production. If I do so myself and it blows up in my face, it'll only have harmed me, and I'll only have myself to blame.
(Also, I'm really bad when it comes to source code comments, sorry. The why really needs you to read the research papers; the how is mostly self-evident. The remaining one-letter variable names were used to match the papers, and because I couldn't think of more descriptive terms.)
If you're using modern C++ you might consider using SaferCPlusPlus[1] for improved memory safety. It offers safe, fast, compatible direct substitutes for C++'s unsafe elements (pointers, arrays, vectors; even references are technically unsafe). Small, easy to use, no dependency risk, and it can be optionally "disabled" with a compile-time directive. It's the best answer to the Rust crowd who (justifiably) point out the inevitability of memory bugs in large enough C++ code bases.
[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus
To me, C++ with operator overloading looks scary. I think a reviewer would choose to review the generated binary instead of the source code.
Well if operator overloading looks scary you haven't seen the assembly generated for C++ programs ;)
It's just full of constructor/destructor noise, and don't even get me started on virtual function calls.
Can't be any worse than OpenSSL.
They have all code running in constant time from the alpha version.
Next comes making sure there are no buffer overflows and that the code is stable and compatible.
If everyone leaves it to someone else, who does it, exactly?
Obviously it's not ready for use in production until it's been audited.
Remind me again where I can download an audited SSL implementation?
> cant be any worse than openssl.
Not sure I'd agree with that. OpenSSL is very far from perfect and obviously contains many many security bugs, but it also has a very long history of fixes, knowledge, etc. and has a large number of eyes on it. It's more of a known quantity than something new.
OpenSSL does not have half the fixes people think it does. Generally what gets fixed is the ciphersuites some people happen to use, and the rest remain broken. For example, the constant-time ECDSA signing only works with some curves, not all. It still supports horrific hacks for random number generation, instead of using OS-provided interfaces. NSS is not much better on this front, but does a far better job of parsing TLS records in a sane way.
Well, actually, it looks better from the start:
> There is not a single malloc() call in all the library. In fact, the whole of BearSSL requires only memcpy(), memmove(), memcmp() and strlen() from the underlying C library. This makes it utterly portable even in the most special, OS-less situations. (On “big” systems, BearSSL will automatically use a couple more system calls to access the OS-provided clock and random number generator.)
> On big desktop and server OS, this feature still offers an interesting characteristic: immunity to memory leaks and memory-based DoS attacks. Outsiders cannot make BearSSL allocate megabytes of RAM since BearSSL does not actually know how to allocate RAM at all.
Whoah, no. It's about building crypto code at all.
Given a choice between building something with Nacl and a bespoke stream cipher and building something with a bespoke cryptosystem and AES, I would have a hard time picking, but I'd lean towards Nacl.
This might give people a false sense of security. Implementing your own crypto exposes you and those few souls that bought into your story. Implementing bug-free "proven crypto" is a near-impossible task, but the "proven crypto" part sells much better, and so exposes many more unsuspecting victims.
Nobody gets fired for linking openssl.
...yet. Not sure about 10 years from now.
Your own implementation of a proven algorithm is also covered by the cliche.
Use a proven and battle-tested library. You won't do any better on your own.
I still think it applies.
My understanding is it generally takes years. First, it is reviewed by multiple security professionals who look for known attacks. Once it is generally thought to be OK, it sees limited real-world use. From there, it generally just takes trial-and-error time.
The page claims: "[...] insecure protocol versions and choices of algorithms are not supported, by design",
followed by:
"TLS 1.0, TLS 1.1 and TLS 1.2 are supported", "3DES/CBC encryption algorithms are supported", and "SHA-1 [is supported]"
Sad-face.
While those algorithms all have weaknesses, they are not yet completely broken and are still in wide (if declining) use. TLS 1.0 is most affected by BEAST, but all modern clients have mitigations that have proven effective against it. The biggest issue with 3DES is its use of 64-bit blocks, making it vulnerable to the SWEET32 attack; however, that requires a huge amount of traffic under the same key (birthday collisions among 64-bit blocks only become likely around 2^32 blocks, i.e. tens to hundreds of gigabytes). SHA-1 has been shown to be weak, but as far as I know there are no practical attacks against it yet.
These algorithms should obviously only be used in fallback to stronger ones but they are not broken to the point where they should never be used as SSL3, RC4, and MD5 have been.
There is not supporting insecure protocols and then there is living in fantasy land away from everyone else.
You can't drop all of these things and end up with something generally useful.
Sure, I agree -- but that's not what the page claims. It says "insecure protocol versions and choices of algorithms are not supported, by design" -- the protocols and modes that I listed are known to have various insecurities, and it still supports them. I agree that to be useful it's necessary to support old, less secure or even insecure modes, but this is at odds with the above stated goal.
My point is about the imprecise description.
You're right technically but don't you think that's a little bit pedantic?
If your goal is to truly improve the state of the art in the ecosystem, dropping anything that is even remotely insecure is appealing. I get that, and I do believe the people behind BearSSL would love to do it. However, to truly improve anything you need two things: popularity and improved security.
There is a conflict there, because popularity requires at least some compatibility with what already exists. You need to balance security and compatibility, and I think there is room for discussion about where precisely that balance lies. You could further tilt it towards security by helping users of the library get a sense of what they need to support. Ultimately, though, you can't just blindly drop everything that's somehow not perfectly secure. Doing so would not improve security at all.
It's a small sacrifice to have one library be a little less secure than it could be, if that helps make everything more secure overall.
This. I recently worked on updating an embedded TLS implementation from TLS 1.0 to TLS 1.2. I was told that it didn't need to implement TLS 1.0 or TLS 1.1, but once deployed we found a lot of non-HTTPS servers still using TLS 1.0. In particular, Microsoft's Hotmail/MSN SMTP servers and multiple RADIUS servers on WPA/WPA2 Enterprise networks. It now allows for client connections to TLS 1.0 servers, but will only serve TLS 1.2 itself.
Why not? TLS 1.3 is dropping those algorithms.
See how much of the Internet you can talk to if you only support TLS 1.3.
Cloudflare isn't the internet?
You kind of have to support SHA-1 still. Even with the browsers moving to deprecate it, many of the root certificates valid for another 10-20 years are still using it. (since the root certs ship with the browser, the security risk is lessened.)
If this is to be a general library that validates the entire certificate chain, then you'll need SHA-1.
Now if the library tries to advertise SHA1 in ServerHello by default, then that is indeed unfortunate.
Given various clues, such as:
• "OS-less"
• small memory footprint
• going out of their way to include discussion of legal jurisdictions
• the author and past activities
• performance seems not a major concern
...it makes me suspect that a major goal is anonymity. It's less aimed at users installing it on their non-anonymous Windows/Mac/phone, and more at leveraging generic/commodity hardware to communicate over SSL. Throwaway burner phones.
This just looks like someone who is aware of the complex legal history associated with cryptography and has considered embedded devices in the design of the library.
Considering the recent events around IoT having good crypto libraries for that seems like it could be useful.
Oh man, can't wait for the docs. TLS 1.2 on an ESP8266 would make it possible to use them with AWS IoT.
From what I understand, you can use them already with AWS IoT but I'm not sure if this solution is ideal or secure enough. Haven't used it personally but it's there:
https://github.com/SuperHouse/esp-open-rtos/blob/master/exam...
I'm wondering why this isn't on GitHub.
I wanted a clear situation with regards to laws on cryptographic software distribution and export. With my own server, I can keep everything in Canada, which makes things simpler.
Why not run Gitlab on that server?
It's hard to get by without all the fancy CVE features in GitLab, but software written in Ruby on Rails is banned in Canada, so they had to use good ol' reliable gitweb.
->Rails is banned in Canada...
wohaa. what?
Wow, an opportunity for me to use Google. If you are actually serious?
edit: lol, my phone defaults to searching with Bing... didn't find anything about Ruby on Rails being banned in Canada.
haha, I did the same thing. Must be trolling, or sarcasm
If Thomas has enough time to finish it (bearing in mind the hundreds of hours it will take to bring it to production quality) - and if he has time to maintain it over the years - and if he is able to commit to years' worth of future maintenance - then this could become a very nice and usable implementation.
I think it's great when people write something for the love of it.
For a high quality TLS implementation that is production-ready, and has had extensive third-party verification and certification, I'd recommend mbedTLS from ARM.
Seems like it is trying to replace PolarSSL (now called mbed TLS). It even derives the name from PolarSSL (Polar Bear). It is nice to have multiple options, however it can make vulnerability management a nightmare. How many more SSL libraries do we need (OpenSSL, LibreSSL, s2n, GnuTLS), not to mention native SSL libraries (Secure Transport, SChannel)?
We need a crypto library that is easy to use in other applications that want security.
This trainwreck of an API is the opposite of what we want: https://gnutls.org/reference/gnutls-gnutls.html
It would probably be faster to write your own TLS library than learn all of that.
OpenSSL doesn't fare much better. So far, libtls looks the most promising. But last I checked, it was still a bit too spartan and couldn't operate in non-blocking mode, which kills you if you want an event-driven server.
I, for one, am still waiting for the TLS library that's the spiritual equivalent of NaCl or libsodium; where the integration surface is narrowed to the essentials, and sensible and secure (internal) defaults predominate.
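libtls is probably the closest existing thing to that ideal; a blocking-client sketch using its real API (error handling and configuration mostly elided, and note this is exactly the blocking mode mentioned above):

```c
#include <string.h>
#include <tls.h>     /* libtls, from LibreSSL */

int fetch(const char *host)
{
    tls_init();
    struct tls_config *cfg = tls_config_new();
    struct tls *ctx = tls_client();

    /* Sensible defaults: certificate and hostname verification are on. */
    if (tls_configure(ctx, cfg) != 0 ||
        tls_connect(ctx, host, "443") != 0)
        return -1;

    const char *req = "GET / HTTP/1.0\r\n\r\n";
    tls_write(ctx, req, strlen(req));   /* may write partially; elided */

    char buf[1024];
    ssize_t n = tls_read(ctx, buf, sizeof buf);
    (void)n;

    tls_close(ctx);
    tls_free(ctx);
    tls_config_free(cfg);
    return 0;
}
```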
> It even derives the name from PolarSSL (Polar Bear)
I thought it was a play on 'bare' - e.g. only the basic features needed.
Probably a triple entendre, as the author is a frequent poster and high-rep user on Security StackExchange, where his profile pic has been a bear for years. The community even has a couple of in-jokes about it, because one of the other high-rep users also has a picture of a bear as his profile picture; the two are affectionately referred to as "big bear" and "little bear".
@pornin
Thanks, we definitely need a secure and reputable TLS implementation with small footprint for IoT devices.
[invalid issue removed]
q starts out larger than len, but is decremented inside the inner for loop.
Thanks, it seems the decrement operator written separately from its variable (q --) broke my "internal parser", as I completely missed it somehow. I'll edit my post to remove the noise.
I wonder if the Bear reference is in relation to the other smaller SSL implementation in WolfSSL https://www.wolfssl.com/wolfSSL/Home.html
The bear is more a favourite animal of the author
I first thought of PolarSSL, and thought there must be a connection there. (hmm apparently is now renamed to mbed TLS)
An obvious dig at wolfSSL - I'd be curious to see a comparison between the two. wolfSSL appears to be the more mature of the two (which makes sense, it's been in use for over a decade, and has a business dedicated to its development).
Not quite, Thomas's avatar in many places is that of a bear. He has named a few of his works after bears, in particular his password hashing function Makwa.
Huh, the first thing that came to my mind when I saw the name was this thing: https://matt.ucc.asn.au/dropbear/dropbear.html . Apparently forest animals are popular in the SSL implementation writing crowd.
I'll admit, they're a local company that I have a fair bit of respect for, so I'm a bit protective about people attacking them. That said, BearSSL is just too close to be a coincidence.
The author could have easily continued with their trend of using alternative languages and ended up with something just as unique. UrsidSSL, BhaalooSSL, XiongSSL; all could have continued the trend without coming across as a sly attack.
It's not as if wolfSSL is a new kid on the block in the world of embedded SSL.
It has nothing to do with WolfSSL; Thomas uses a bear as his avatar, done. I've never heard of WolfSSL, but Thomas's name and avatar were instantly familiar as someone who's contributed a wealth of invaluable crypto knowledge to the world.
But then you don't have the homophone bear/bare, with "bare" having some nice connotations when your context of the topic includes OpenSSL.
Needs some Galois SAW tests. Also using OpenSSL for testing wouldn't be a bad idea.
This sounded very interesting.
However, it seems it's been developed "in secret" and the only public commit is a huge import of all of it. :/
Too bad; the development history would have been very interesting to read, and digesting it all at once is harder.
Honestly, all my internal git commit messages are "...".
I intend to write (in many details) how the whole thing is designed. Give me a couple of months.
> Honestly, all my internal git commit messages are "...".
Very insightful about how security experts write security code, thanks! All that documentation which you hope to write some time - half of it should have been in your commit messages (that's what they teach on StackOverflow, no?)
Okay, that explains it, anyway. :)
I have absolutely nothing to say when it comes to crypto, but as a C dork I found it ... quirky that the encoding/decoding functions in inner.h (https://bearssl.org/gitweb/?p=BearSSL;a=blob;f=src/inner.h;h...) seem to use e.g. uint32_t to get a 32-bit type, while assuming that char is 8 bits (rather than using uint8_t). This seems strange.
I don't think it assumes it's 8 bits, but rather it assumes the values won't be outside the range of an 8-bit number, which should be fine, given that it's an octet-oriented protocol?
Using char pointers presumably is to get correct aliasing analysis?
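For context, the pattern being discussed looks roughly like this (a generic sketch, not the exact inner.h code):

```c
#include <stdint.h>

/* Big-endian 32-bit decode from an untyped buffer. Going through an
 * unsigned char pointer is what keeps the aliasing rules happy; the
 * arithmetic only assumes each byte holds values 0..255, not that
 * char is exactly 8 bits. */
static uint32_t dec32be(const void *src)
{
    const unsigned char *buf = src;
    return ((uint32_t)buf[0] << 24)
         | ((uint32_t)buf[1] << 16)
         | ((uint32_t)buf[2] << 8)
         | (uint32_t)buf[3];
}
```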
I've been reading the code for T0 (a FORTH->C thingy), and I arrived at the conclusion that C# is a terrible language. The code itself is nice and clean if you assume it, though.
Finally might be able to handle SSL requests on memory-constrained Arduino / ESP8266 projects?! Yes please!
Another OpenSSL alternative:
This could be an interesting code base to use in a kernel. It should just work.
Edit: http://www.bolet.org/~pornin/cv-en.html
Disregard me.
Says who? Maybe the cruft of the old ones means it's impractical to fix them. Maybe the leadership of the old ones means they can't be fixed, due to incompetence or toxic politics. Sometimes rolling your own or forking is the smart move. There's a reason you use X.Org and not XFree86, for example.
The idea that no one should ever roll their own cryptography is a cutesy warning for amateurs, not an absolute rule. If no one ever did, we would never have any.
Also, projects like OpenSSL don't have third-party quarterly audits or other formal practices. They're "rolling their own" as much as the other guy.
Doesn't matter anymore -- the credibility of the person leading the project is thoroughly established, so I'm retracting my comment.
But generally, "says who" is answerable as "says any reputable applied cryptographer, established audit/research team, etc. who's thoroughly cut their teeth on crypto and security in general."
Why bother with TLS 1.0 and TLS 1.1 anymore? I imagine by the time this project is "stable" Google would have already deprecated those two in Chrome.
And 3DES?
Please make an OpenSSL compat API if possible; it's incredibly hard porting n programs to $ssl.
No, really, he should not. A separate translation layer is free for anyone to write, though. The OpenSSL design, from an API perspective, is basically as far from "user friendly" as possible. Having a hard-to-use API means that it's hard to get things right. If things are hard to get right, it leads to more bugs. You get where I'm going with this.
A clean, simple to use, or rather hard-to-use-in-a-wrong-way API is very much needed (and there are some libraries that are nice to use but not very proven).
I don't really know what you mean about a hard-to-use API, as an OpenSSL compat layer would just wrap the BearSSL API and not make any difference to consumers of BearSSL.
The problem is that OpenSSL's API itself is quite complicated, and probably not in an entirely necessary way. As but one concrete example, there's almost no particularly satisfying way to handle the "error queue" in a world with imperfect software. I also recently found out how insanely impractical it is in practice to perform one's own certificate validation (e.g. so I could interpret the CN field) if a TLS connection is abstracted even a tiny bit (e.g. in a database driver).
Little doubt that someone would find it useful, but it is An Undertaking to preserve something which is not that desirable.
I'm pretty sure that this library would lose many of its advantages (small footprint, no allocations) when you slap an OpenSSL facade on it.