
OpenSSL Analysis Paper

CSOL560 Secure Software Design and Development


Professor Ashton Mozano
University of San Diego

Marc Leeka

Module 7 Assignment

May 1, 2017
CSOL 560 Secure Software Design and Development Final Assignment Marc Leeka

Table of Contents and Figures

Executive Summary
OpenSSL and its flaws
DROWN is found
    How DROWN works
        Figure 1: Which servers are vulnerable to a DROWN attack?
    Some takeaway lessons
    Why OpenSSL is particularly vulnerable
    More OpenSSL problems
    Keep it or rewrite from scratch
    LibreSSL promises not to make the same mistakes
    Testing can improve LibreSSL
        Figure 2: A Security Testing Taxonomy

References


Executive Summary

OpenSSL is one of the most popular open-source cryptographic libraries, found on more than one-third of
websites. The software has a long history of fixes. Because OpenSSL suffers from a number of design
flaws, architectural weaknesses and bad programming practices, it is reasonable to assume that bug fixes
will continue to be issued well into the future. The OpenSSL code base is so large and convoluted that it
is also reasonable to assume that more severe bugs on the scale of Heartbleed remain to be discovered.

This paper singles out one recent vulnerability significant enough to be given a name, not just a number,
and describes it in detail.

The key principles and best practices for designing software in general are summarized and compared to
the steps the LibreSSL development team has taken, or promises to take, with its replacement for
OpenSSL. The paper concludes by describing testing and evaluation steps that should be standard for all
software, and that LibreSSL must follow if it is to become a successful replacement.


OpenSSL and its flaws

OpenSSL, the open-source warhorse approaching its 20th birthday, has long been a favorite
cryptographic library. Extraordinarily popular, it is used on 35% of websites.1 With such a long
history, OpenSSL code has been rigorously examined by the software and security community. That
scrutiny has resulted in the discovery of many notable flaws, including timing attacks on RSA keys,
denial-of-service vulnerabilities, poor random number generation and, perhaps most famously, a memory
buffer over-read vulnerability called Heartbleed.2

OpenSSL is not unique in having a long, checkered record of serious vulnerabilities; many software
packages (Microsoft operating systems, for example) require constant patches for the never-ending
stream of vulnerabilities discovered in them. OpenSSL is not a commercial product with a large support
budget, however, so its small programming staff largely relies on the user community to find problems
and suggest remediation. And, until Heartbleed surfaced, most users assumed it had been so thoroughly
tested that it was safe to use and all significant flaws had been found. Unfortunately, they were wrong.
OpenSSL suffers from a number of architectural flaws so ingrained that problems are masked from even
the most determined testers. Those flaws include:
- code that relies on external data to control its behavior;
- code that depends on properties of the data that are not verified locally;
- code that is so complex that a programmer cannot accurately predict its behavior;
- support for hardware platforms that no longer exist;
- support for weak cryptographic protocols; and
- lack of documentation.

DROWN is found

One year ago, the Common Vulnerabilities and Exposures (CVE) project exposed another OpenSSL
vulnerability that had been hidden for many years as a consequence of these structural flaws.
CVE-2016-0800 describes the DROWN attack, which decrypts TLS sessions on servers that support SSLv2
and use RSA key exchange.3

DROWN (Decrypting RSA with Obsolete and Weakened eNcryption) was discovered in early 2016. The
researchers behind the project published their findings and later presented their work at the USENIX
Security Symposium in August. They devised a cross-protocol attack that took advantage of the
architectural weaknesses within OpenSSL. In particular, they were interested in how inexpensively the
exploit could be performed (what level of financial commitment would be necessary to acquire the
computing platform) and how quickly it could be accomplished. Hackers are drawn to vulnerabilities that
can be breached quickly for a small investment, and the researchers wanted to emphasize the magnitude
of the vulnerability by demonstrating the ease of attack.

How DROWN works

The researchers first discovered a previously unknown vulnerability in SSLv2, the version of SSL that
was released in 1995 and declared dead before it was one year old.4 This old version of SSL is rarely
used, yet it remains installed on many servers. Even its successor is barred from government use; NIST
guidance states: "While SSL 3.0 is the most secure of the SSL protocol versions, it is not approved for
use in the protection of Federal information because it relies in part on the use of cryptographic
algorithms that are not FIPS-Approved."5 (For security, the United States government mandates the use
of TLS because it supports AES encryption.)


The researchers then made the troubling discovery that the attack can be used to exploit TLS even when
client devices don't support SSLv2, and sometimes even when the servers don't support SSLv2 but use the
same RSA key as some other server that does.

The researchers estimated more than a quarter of servers worldwide could be impacted by this problem.6

In technical terms, DROWN is a new form of cross-protocol Bleichenbacher RSA padding-oracle attack.
It allows an attacker to decrypt intercepted TLS connections by making specially crafted connections to
an SSLv2 server that uses the same private key.

The attacker begins by passively observing several hundred connections between the victim client and a
victim server. The researchers established that roughly one out of every 1,000 full TLS handshakes can
be decrypted, eventually leading to the compromise of the entire TLS session.

Collecting 1,000 connections takes an average of 18 hours and requires 40,000 SSLv2 probe connections
plus about 2^50 offline symmetric cryptographic operations on an optimized computing platform. To
shorten the 18-hour capture period, the researchers proposed that the attacker could trick the user into
visiting a website that quickly makes many connections to the victim server in the background.

The connections can use any version of the SSL/TLS protocol, including TLS 1.2, so long as they employ
the commonly used RSA key exchange method. In an RSA key exchange, the client picks a random
session key and sends it to the server, encrypted using RSA and the server's public key. The researchers
quickly found that one-third of all HTTPS servers were susceptible to this attack, including 22% of those
using browser-trusted certificates. Many HTTPS servers that do not directly support SSLv2 share RSA
keys with other services that do; it is this widespread reuse of keys and certificates that leaves so many
servers vulnerable.
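The RSA key-exchange flow described above can be sketched with toy numbers. This is an illustration only: the parameters are hypothetical and far too small to be secure, and real TLS applies PKCS#1 v1.5 padding before encryption.

```python
# Toy sketch of an RSA key exchange as used in TLS. Tiny demonstration
# parameters only -- NOT secure; real keys are 2048 bits or more.
import random

# Server's toy RSA key pair: modulus n = p*q, public exponent e, private d.
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse of e (Python 3.8+)

# Client picks a random session key and encrypts it with the public key.
session_key = random.randrange(2, n)
ciphertext = pow(session_key, e, n)

# Server recovers the session key with its private key.
recovered = pow(ciphertext, d, n)
assert recovered == session_key
```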

Then, employing a previously unknown OpenSSL vulnerability present in releases from 1998 to early
2015, the researchers found a way to break TLS using only about half as many connections to the victim
server and no large amount of computation. They repeatedly connected to the SSLv2 server and sent
specially crafted handshake messages containing modifications of the RSA ciphertext taken from the
victim's connections. (This is possible because unpadded RSA is malleable.) The way the server responds
to each of these probes depends on whether the modified ciphertext decrypts to a plaintext message with
the right form. Since the attacker doesn't know the server's private key, he doesn't know exactly what the
plaintext will be, but the way the server responds ends up leaking information to the attacker about the
secret keys used for the victim's TLS connections.7
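The malleability these probes rely on can be demonstrated directly: with unpadded RSA, multiplying a ciphertext by s^e multiplies the hidden plaintext by s, and the attacker never needs the private key to build the modified ciphertext. A toy sketch with insecure demonstration parameters:

```python
# Textbook (unpadded) RSA is malleable: multiplying a ciphertext by s^e
# multiplies the underlying plaintext by s. DROWN's crafted handshake
# messages exploit this kind of algebraic relationship. Toy numbers only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # server's private exponent

m = 42                              # secret plaintext (e.g. a session key)
c = pow(m, e, n)                    # intercepted ciphertext

s = 7                               # attacker-chosen multiplier
c_mod = (c * pow(s, e, n)) % n      # built using ONLY the public key

# The server would decrypt the modified ciphertext to m * s mod n:
assert pow(c_mod, d, n) == (m * s) % n
```

How the server reacts to this related plaintext is what leaks information back to the attacker.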

Using this modified DROWN technique, the researchers were able to decrypt a TLS ciphertext in 60
seconds on a single inexpensive CPU. That is fast enough to perform man-in-the-middle attacks on live
TLS sessions before the handshake times out. Only unpatched servers were vulnerable to this modified
technique, yet that pool still constituted 26% of HTTPS servers worldwide. An OpenSSL patch released
in 2015 for an unrelated problem unknowingly fixed the exploited flaw, but a server is protected only if it
runs a current version with the patch applied.

The researchers also tested QUIC, a Google cryptographic protocol implementation analogous to TLS.
The QUIC design still allowed a successful attack, but only with considerably more computational
resources: they estimated that a $10,000,000 investment would break the handshake key in about
30 days.

Some takeaway lessons

DROWN illustrates the cryptographic principle that keys should be single-use: employing the same keys
across different protocols and protocol versions can be a serious security risk.

Many protocols and cryptographic primitives that were demonstrated to be weak decades ago are
surprisingly common in real-world systems.

It is important to recognize and remove deprecated technologies before they become exploitable
vulnerabilities.

The legacy of deliberately weakened export-grade cryptography (512-bit RSA key exchange, 512-bit
Diffie-Hellman key exchange, and 40-bit symmetric encryption) still inflicts harm and has become the
cornerstone of many high-profile attacks, e.g. FREAK and Logjam.

The importance of regularly installing critical operating system and application patches cannot be
overstated.

Why OpenSSL is particularly vulnerable

OpenSSL relies on an architecture that has many flaws. In the DROWN example, OpenSSL suffers from:
- Code that relies on external data to control its behavior. Even Google's QUIC could be attacked,
  although the theoretical hack required $10,000,000 and a month of computation.
- Code that is so complex that a programmer cannot accurately predict its behavior. The DROWN
  problem existed for 18 years before it was accidentally and unknowingly patched.
- Support for weak cryptographic protocols. OpenSSL has default, native support for many broken
  protocols; only the most recent releases disable SSLv2 at build time by default.
- Lack of documentation. The average system operator does not have the training or background to
  understand cryptography or the complexity of communications between servers and clients. Many
  operators remember obstacles they encountered when first installing OpenSSL and hesitate to apply
  patches or install new versions lest they have to make configuration changes.
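The weak-protocol point has a modern counterpoint worth sketching: Python's standard ssl module (itself an OpenSSL wrapper) lets an application pin a protocol floor in one line. This sketch assumes Python 3.7 or later, where the minimum_version attribute exists:

```python
# Refusing legacy protocol versions with Python's standard ssl module.
# Requires Python 3.7+ for TLSVersion / minimum_version.
import ssl

ctx = ssl.create_default_context()            # context with secure defaults
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0, TLS 1.1

# Certificate and hostname verification are on by default in this context.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

A library that ships with secure defaults like these spares the average operator from needing to understand the protocol zoo at all.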


More OpenSSL problems

SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. SSL connections
depend on validating the public-key certificates presented when the connection is established. Researchers
have demonstrated that SSL certificate validation is completely broken in many security-critical
applications and libraries. The root causes of these vulnerabilities are the badly designed APIs of SSL
implementations and data-transport libraries, which present developers with a confusing array of settings
and options. OpenSSL was one of a group of non-browser libraries that the researchers labeled "the most
dangerous code in the world."8

Those researchers confirmed that SSL connections established by OpenSSL were insecure against a man-
in-the-middle attack. Typical of most low-level libraries, OpenSSL provides only chain-of-trust
verification; applications must supply their own hostname verification code. Different application-layer
protocols such as HTTPS and LDAP have different notions of what constitutes a valid hostname and
what it means for a hostname to match the name(s) listed in the certificate. Therefore, hostname
verification must be managed either by the application itself or by a data-transport wrapper. A program
using OpenSSL performs the SSL handshake through a complex series of interactions that has proven
error-prone. In general, disabling proper certificate validation appears to be developers' preferred solution
to any problem with SSL libraries.

The researchers' conclusion was to recommend a complete redesign of the SSL libraries' APIs. Instead of
asking application developers to manage incomprehensible options such as SSL_get_verify_result (found
in the OpenSSL source files ssl_lib.c, easy-tls.c and s_server.c), the libraries should present high-level
abstractions that explicitly express the security properties of network connections in terms close to
application semantics: for example, "a confidential and authenticated tunnel." The library should also be
explicit about the security consequences of any application-controlled option: for example, instead of
"verify hostname?", it could ask "Anyone can impersonate the server. Ok or not?"
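Python's ssl module is one example of the kind of high-level API the researchers call for: a default context means, in effect, a confidential and authenticated tunnel, and permitting impersonation must be spelled out explicitly. A sketch (the open_tunnel helper is illustrative, not a library function):

```python
# High-level TLS API sketch: secure by default, insecure only on request.
import socket
import ssl

# One call yields a context with chain-of-trust AND hostname checks enabled.
ctx = ssl.create_default_context()

def open_tunnel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Illustrative helper: open an authenticated TLS connection.
    A certificate or hostname mismatch raises ssl.SSLError."""
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# Opting out of authentication takes two explicit steps -- the modern
# equivalent of "Anyone can impersonate the server. Ok or not?"
insecure = ssl.create_default_context()
insecure.check_hostname = False           # must be disabled first
insecure.verify_mode = ssl.CERT_NONE      # then verification can be dropped
```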

Keep it or rewrite from scratch

When OpenSSL was first developed, Internet commerce was new and computer security (outside the
finance and national-defense sectors) was nascent. OpenSSL became popular because it was free, solved
a need for web developers who were untrained in security matters, and was assumed to be secure because
it was open source. Even with its limitations, OpenSSL remains popular because most alternatives are
not fully developed. Even after witnessing revelations of so many critical security flaws in OpenSSL,
many developers seem resigned to follow an "if it ain't broke" approach and resist changing a familiar
library.9

But it is broken. Knowledgeable experts have said OpenSSL contains such a multitude of poor design
decisions and bad coding practices that it would be easier to start from scratch and design security into
the software from the foundation up. Key programming principles and best practices have not changed
greatly in decades. They include simplicity; documentation; excluding unrelated features; limiting
nesting so procedures are easy to follow; building everything within a module to avoid calls into other
programs; relying on the processor for randomness and entropy; explicitly clearing sensitive data
structures; and always defaulting to the highest security level.
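The "explicitly clear sensitive data" principle can be sketched even in a garbage-collected language. This hypothetical Python fragment keeps the secret in a mutable bytearray so it can be overwritten in place; C code would use explicit_bzero or memset_s, which compilers are forbidden to optimize away:

```python
# Sketch of explicitly clearing sensitive data. Immutable bytes/str objects
# cannot be scrubbed, so the secret lives in a mutable bytearray instead.
import os

key = bytearray(os.urandom(32))   # 256-bit session key in mutable storage

# ... use the key for encryption/decryption here ...

# Overwrite the key in place before releasing the object, so the plaintext
# secret does not linger in memory waiting to be leaked by the next bug.
for i in range(len(key)):
    key[i] = 0
assert all(b == 0 for b in key)
```

In pure Python this is best-effort (copies may exist elsewhere), which is exactly why LibreSSL does it at the C level with explicit_bzero.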


LibreSSL promises not to make the same mistakes

LibreSSL was announced in spring 2014 in response to the OpenSSL Heartbleed discovery. The
LibreSSL developers promised their software would not perpetuate bad OpenSSL programming design,
and they pledged these improvements:10
- building LibreSSL on a mature operating system (OpenBSD) rather than trying to support older
  compilers and older OSes, a practice they felt had, over time, degraded OpenSSL to lowest-common-
  denominator C;
- replacing OpenSSL's custom memory and string calls (malloc, calloc, realloc, snprintf, strlcat, etc.)
  with calls to the standard library;
- relying on the kernel itself for entropy generation instead of improperly seeding a software random
  number generator;
- using explicit_bzero (replacing memset) to clear sensitive data structures, which incidentally degrades
  LibreSSL performance by an acceptable 4%;
- introducing new, safer cipher suites based on the Brainpool, ChaCha, Poly1305 and ANSSI
  FRP256v1 algorithms; and
- eliminating support for the FIPS standard because it is considered harmful and inconvenient for
  library development (including its mandated Dual_EC_DRBG algorithm, which is suspected of
  containing a backdoor).

Furthermore, the LibreSSL developers will:
- fix bugs early in development rather than bolting on fixes after release;
- give priority to bug fixes over new-feature implementation;
- avoid spaghetti code by encapsulating all platform-dependent code in platform-specific layers and
  forcing all platforms to use them;
- prune code to reduce duplication and remove unused routines, making the code easier to follow; and
- adhere to a single reference platform, introducing portability only after a stable version is reached.

LibreSSL did not start from scratch as many experts suggested; rather, its developers forked OpenSSL in
order to provide a fully compatible, drop-in replacement. LibreSSL therefore inherited unknown
vulnerabilities from the parent program but, by adhering to key principles and best practices, the
developers hope to reduce the number of critical vulnerabilities. So far, LibreSSL has had about half as
many high-criticality CVE reports as its parent.

Testing can improve LibreSSL

There are three quick and simple rules that, when followed, will improve the security of any software:
- Move security assessment into the design phase. The additional cost overhead is as little as 2% at the
  front end, but it saves far more than that by reducing future fixes.
- Incorporate a security role into the development process. Adopt pair programming and give the
  security personnel the same sign-off authority as the coders.
- Test thoroughly during all design, development and deployment phases.

Testing throughout the software development life cycle is the most effective method to reduce or
eliminate vulnerabilities. Verification is the process of evaluating work-products of a development phase
to determine whether they have been built to meet the specified requirements for that phase. Plans,
requirements and specifications are created in the design cycle and later compared to the code that has
been produced for that phase.


Figure 2: A Security Testing Taxonomy

LibreSSL was not built from the ground up but was rewritten from the original source. Its developers,
wanting a drop-in replacement for OpenSSL, skipped the requirements and design stage of the
development life cycle. They had to work backwards to create the specifications and requirements against
which they could verify the code.

Validation evaluates software during and at the end of the development process to ensure that it meets the
users' needs and fulfills its intended use when placed in its intended environment. Validation is a check
that the specifications were correct in the first place; it is entirely possible for software to pass
verification yet fail validation. LibreSSL is a critical library whose malfunction would have severe
consequences, so thorough validation testing is an essential step.

Source code analysis, which includes both static and dynamic analysis, is one of the most effective of
these testing measures.

Static code analysis is a method of debugging that examines the code without executing the program. The
process provides an understanding of the code structure and can help ensure that the code adheres to
the original specifications. Manual inspection can be mind-numbing and subject to developer biases, and
compilers identify only language-rule violations such as type errors and syntax errors. Automated static
analysis checks the source code for semantic errors that pass through compilers and result in problems
such as buffer overflows, invalid pointer references, uninitialized variables and other vulnerabilities. It is
still necessary for experienced developers to analyze the results and examine any suspect source code to
remove the coding errors.11
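As a concrete illustration of a semantic error that a parser or compiler accepts but a static analyzer flags, consider this hypothetical fragment, where a variable is assigned on only one branch:

```python
# A bug the Python parser accepts without complaint but static analysis
# (e.g. a pyflakes-style checker) flags: "label" is assigned on only one
# branch, so the return statement can hit an unbound local at runtime.
def describe(code: int) -> str:
    if code == 0:
        label = "ok"
    return label   # UnboundLocalError whenever code != 0

# The failure only surfaces dynamically, and only on the untested path:
assert describe(0) == "ok"
try:
    describe(1)
except UnboundLocalError:
    pass           # the latent error, found only when this path executes
```

Static analysis finds such paths without needing a test input that happens to exercise them.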

Static analysis can reveal deep errors that might otherwise stay hidden for years after release. However,
some problems are difficult to foresee during static analysis. Interactions among multiple functions can
generate unanticipated errors that only become apparent during component-level integration, system
integration or deployment. Therefore, once the software is functionally complete, dynamic analysis
should be performed. Dynamic analysis reveals how the application behaves when executed and how it
interacts with other processes and the operating system itself, and it can find security issues caused by
the code's interaction with other system components. While static analysis can find errors early in the
software development life cycle, dynamic analysis tests the code in real-life attack scenarios.

Fully automated random testing can be an effective tool for detecting inconsistencies between a
specification and its implementation: it eliminates the subjectivity of hand-constructed test cases and
increases the variety of input values. It therefore has the potential to find errors that are difficult to find
in other ways, such as when testing very complex systems. Concolic testing, a different but
complementary approach, covers more branch execution paths than randomized testing and explores
them far more deeply. If a symbolic representation becomes overly complex, concolic execution replaces
it with concrete values so the path can still be exercised. Concolic testing uses fewer inputs to achieve
greater branch coverage and covers boundary cases.
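A minimal version of such random testing can be sketched in a few lines: generate many random inputs and check the implementation against a reference specification. The two clamp functions here are hypothetical stand-ins, not part of any real test suite:

```python
# Minimal random-testing harness: compare an implementation against a
# reference specification over many random inputs. Illustrative only.
import random

def spec_clamp(x: int) -> int:
    """Reference specification: clamp x into the range [0, 255]."""
    return min(255, max(0, x))

def impl_clamp(x: int) -> int:
    """Candidate implementation under test."""
    if 0 <= x <= 255:
        return x & 0xFF
    return 0 if x < 0 else 255

random.seed(1)  # reproducible run
for _ in range(10_000):
    x = random.randint(-10**6, 10**6)
    assert impl_clamp(x) == spec_clamp(x), f"mismatch at input {x}"
```

A concolic tester would go further, solving for inputs that force each branch (for example the exact boundary values 0, 255 and 256) instead of hoping randomness finds them.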

Dynamic code reviews, presented with a wide range of inputs and security tests, will generally pick up
about 85% of the flaws present in the code. Static and dynamic analysis go hand in hand, as do random
and concolic testing.

Any change to a system build must also be evaluated if the system is to remain secure. Given its
complexity, its origin and its importance, one would expect the LibreSSL code to be kept under
continuous static and dynamic analysis.

References

1. https://www.datanyze.com/market-share/ssl/openssl-market-share/
2. https://en.wikipedia.org/wiki/OpenSSL
3. https://nvd.nist.gov/vuln/detail/CVE-2016-0800
4. Ristic, I. (2016, March 1). DROWN Abuses SSL v2 to Attack TLS. Retrieved April 29, 2017, from
   https://blog.qualys.com/securitylabs/2016/03/01/drown-abuses-ssl-v2-to-attack-rsa-keys-and-tls
5. Polk, T., McKay, K. & Chokhani, S. (2014, April). NIST Special Publication 800-52 Revision 1:
   Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations. Retrieved
   April 30, 2017, from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r1.pdf
6. https://drownattack.com/top-sites.html
7. Aviram, N., Schinzel, S., Somorovsky, J., Heninger, N., Dankel, M., Steube, J., . . . Shavitt, Y.
   (2016, August). DROWN: Breaking TLS using SSLv2. In: Proceedings of the 25th USENIX Security
   Symposium (Austin, TX). Retrieved April 29, 2017, from https://drownattack.com/drown-attack-paper.pdf
8. Georgiev, M., Iyengar, S., Jana, S., Anubhai, R., Boneh, D. & Shmatikov, V. (2012, October). The
   most dangerous code in the world: validating SSL certificates in non-browser software. In:
   Proceedings of the 2012 ACM Conference on Computer and Communications Security, pp. 38-49
   (Raleigh, NC). Retrieved May 1, 2017, from https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
9. Rowe, W. (2017, February 9). LibreSSL Replacement for OpenSSL. Retrieved April 30, 2017, from
   https://www.cursivesecurity.com/blog/2017/libressl-replacement-openssl/
10. De Simone, S. (2014, May 19). LibreSSL, OpenSSL Replacement: The First 30 Days. Retrieved
    April 30, 2017, from https://www.infoq.com/news/2014/05/libre-ssl-first-30-days
11. Cobb, M. (2008, January). Static and dynamic code analysis: A key factor for application security
    success. Retrieved April 30, 2017, from http://searchfinancialsecurity.techtarget.com/tip/Static-and-
    dynamic-code-analysis-A-key-factor-for-application-security-success
