Date: 1999-09-16

Open Source & Crypto - Evaluation


-.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.-

Here, someone marvels at the "fuss" currently being made about open
source software, especially since in his field freely available
source code is practically a prerequisite for systems that work and
are affordable.

Bruce Schneier, reasoning and evaluating, on the interplay of
openness and secrecy, cryptography and open source.

-.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.-
As a cryptography and computer security expert, I have never
understood the current fuss about the open source software
movement. In the cryptography world, we consider open source
necessary for good security; we have for decades. Public security is
always more secure than proprietary security. It's true for
cryptographic algorithms, security protocols, and security source
code. For us, open source isn't just a business model; it's smart
engineering practice.

Open Source Cryptography

Cryptography has been espousing open source ideals for decades,
although we call it "using public algorithms and protocols." The idea
is simple: cryptography is hard to do right, and the only way to know
if something was done right is to be able to examine it.

This is vital in cryptography, because security has nothing to do with
functionality. You can have two algorithms, one secure and the other
insecure, and they both can work perfectly. They can encrypt and
decrypt, they can be efficient and have a pretty user interface, they
can never crash. The only way to tell good cryptography from bad
cryptography is to have it examined.
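The point can be made concrete with a small sketch (mine, not the
essay's): two XOR-based routines that both pass every functional test
-- encrypt, decrypt, round-trip -- yet differ completely in security.

```python
import os

# Illustrative sketch (not from the essay): two ciphers that both
# "work" -- they encrypt and decrypt perfectly -- yet one is trivially
# breakable. No functional test can tell them apart; only analysis can.

def weak_encrypt(plaintext: bytes, key: int) -> bytes:
    # XOR with a single repeated key byte: round-trips correctly, but
    # an attacker can brute-force all 256 keys or use frequency
    # analysis on any realistic message.
    return bytes(b ^ key for b in plaintext)

def weak_decrypt(ciphertext: bytes, key: int) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(b ^ key for b in ciphertext)

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: the very same XOR operation, but with a fresh
    # random key as long as the message. Provably secure -- provided
    # the key is truly random, kept secret, and never reused.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

msg = b"attack at dawn"
assert weak_decrypt(weak_encrypt(msg, 0x5A), 0x5A) == msg
ct, key = otp_encrypt(msg)
assert bytes(c ^ k for c, k in zip(ct, key)) == msg
```

Both pass the identical round-trip test; the difference between them
only shows up under expert scrutiny, which is exactly the point.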

Even worse, it doesn't do any good to have a bunch of random people
examine the code; the only way to tell good cryptography from bad
cryptography is to have it examined by experts. Analyzing
cryptography is hard, and there are very few people in the world who
can do it competently. Before an algorithm can really be considered
secure, it needs to be examined by many experts over the course of
years.

This argues very strongly for open source cryptographic algorithms.
Since the only way to have any confidence in an algorithm's security
is to have experts examine it, and the only way they will spend the
time necessary to adequately examine it is to allow them to publish
research papers about it, the algorithm has to be public. A
proprietary algorithm, no matter who designed it and who was paid
under NDA to evaluate it, is much riskier than a public algorithm.

The counter-argument you sometimes hear is that secret
cryptography is stronger because it is secret, and public algorithms
are riskier because they are public. This sounds plausible, until you
think about it for a minute. Public algorithms are designed to be
secure even though they are public; that's how they're made. So
there's no risk in making them public. If an algorithm is only secure if
it remains secret, then it will only be secure until someone reverse-
engineers and publishes the algorithms. A variety of secret digital
cellular telephone algorithms have been "outed" and promptly broken,
illustrating the futility of that argument.

Instead of using public algorithms, the U.S. digital cellular companies
decided to create their own proprietary cryptography. Over the past
few years, different algorithms have been made public. (No, the cell
phone industry didn't want them made public. What generally
happens is that a cryptographer receives a confidential specification
in a plain brown wrapper.) And once they have been made public,
they have been broken. Now the U.S. cellular industry is considering
public algorithms to replace their broken proprietary ones.

On the other hand, the popular e-mail encryption program PGP has
always used public algorithms. And none of those algorithms has
ever been broken. The same is true for the various Internet
cryptographic protocols: SSL, S/MIME, IPSec, SSH, and so on.

The Best Evaluation Money Can't Buy

Right now the U.S. government is choosing an encryption algorithm
to replace DES, called AES (the Advanced Encryption Standard).
There are five contenders for the standard, and before the final one is
chosen the world's best cryptographers will spend thousands of
hours evaluating them. No company, no matter how rich, can afford
that kind of evaluation. And since AES is free for all uses, there's no
reason for a company to even bother creating its own standard.
Open cryptography is not only better -- it's cheaper, too.

The same reasoning that leads smart companies to use published
cryptography also leads them to use published security protocols:
anyone who creates his own security protocol is either a genius or a
fool. Since there are more of the latter than the former, using
published protocols is just smarter.

Consider IPSec, the Internet IP security protocol. Beginning in 1992,
it was designed in the open by committee and was the subject of
considerable public scrutiny from the start. Everyone knew it was an
important protocol and people spent a lot of effort trying to get it
right. Security technologies were proposed, broken, and then
modified. Versions were codified and analyzed. The first draft of the
standard was published in 1995. Different aspects of IPSec were
debated on security merits and on performance, ease of
implementation, upgradability, and use.

In November 1998, the committee published a slew of RFCs -- one in a
series of steps to make IPSec an Internet standard. And it is still
being studied. Cryptographers at the Naval Research Laboratory
recently discovered a minor implementation flaw. The work continues,
in public, by anyone and everyone who is interested. The result,
based on years of public analysis, is a strong protocol that is
trusted by many.

On the other hand, Microsoft developed its own Point-to-Point
Tunneling Protocol (PPTP) to do much the same thing. They invented
their own authentication protocol, their own hash functions, and
their own key-generation algorithm. Every one of these items was
badly flawed. They used a known encryption algorithm, but they used
it in such a way as to negate its security. They made implementation
mistakes that weakened the system even further. But since they did
all this work internally, no one knew that PPTP was weak.
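One classic mistake in that family -- offered here as a hypothetical
sketch, not a reconstruction of PPTP's actual flaws -- is taking a
perfectly sound stream cipher and reusing its keystream for two
messages. The cipher is fine; the way it is used negates its security:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream generator (SHA-256 in counter mode); a stand-in
    # for RC4 or any stream cipher. The flaw demonstrated below is
    # independent of which cipher generates the keystream.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR the message with the keystream derived from the key.
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

# Protocol mistake: two messages encrypted under the same keystream.
key = b"session key"
m1 = b"send 100 dollars to alice"
m2 = b"the password is hunter2!!"
c1 = encrypt(key, m1)
c2 = encrypt(key, m2)

# XORing the two ciphertexts cancels the keystream completely,
# leaking the XOR of the plaintexts -- no key recovery needed.
xor_of_plaintexts = bytes(a ^ b for a, b in zip(c1, c2))
assert xor_of_plaintexts == bytes(a ^ b for a, b in zip(m1, m2))
```

An attacker who can guess or learn part of one message immediately
reads the corresponding part of the other -- a protocol-level break
that no amount of cipher strength can repair.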

Microsoft fielded PPTP in Windows NT and 95, and used it in their
virtual private network (VPN) products. Eventually they published
their protocols, and in the summer of 1998, the company I work for,
Counterpane Systems, published a paper describing the flaws we found.
Once again, public scrutiny paid off. Microsoft quickly posted a
series of fixes, which we evaluated this summer and found improved,
but still flawed.

Like algorithms, the only way to tell a good security protocol from a
broken one is to have experts evaluate it. So if you need to use a
security protocol, you'd be much smarter taking one that has already
been evaluated. You can create your own, but what are the odds of it
being as secure as one that has been evaluated over the past several
years by experts?

Securing Your Code

The exact same reasoning leads any smart security engineer to demand
open source code for anything related to security. Let's review:
Security has nothing to do with functionality. Therefore, no amount
of beta testing can ever uncover a security flaw. The only way to
find security flaws in a piece of code -- such as in a cryptographic
algorithm or security protocol -- is to evaluate it. This is true for
all code, whether it is open source or proprietary. And you can't
just have anyone evaluate the code, you need experts in security
software evaluating the code. You need them evaluating it multiple
times and from different angles, over the course of years. It's
possible to hire this kind of expertise, but it is much cheaper and
more effective to let the community at large do this. And the best
way to make that happen is to publish the source code.

But if you want your code to be truly secure, you'll need to do more
than just publish it under an open source license. There are two
obvious caveats you should keep in mind.

First, simply publishing the code does not automatically mean that
people will examine it for security flaws. Security researchers are
fickle and busy people. They do not have the time to examine every
piece of source code that is published. So while opening up source
code is a good thing, it is not a guarantee of security. I could name
a dozen open source security libraries that no one has ever heard of,
and no one has ever evaluated. On the other hand, the security code
in Linux has been looked at by a lot of very good security engineers.

Second, you need to be sure that security problems are fixed promptly
when found. People will find security flaws in open source security
code. This is a good thing. There's no reason to believe that open
source code is, at the time of its writing, more secure than
proprietary code. The point of making it open source is so that many,
many people look at the code for security flaws and find them.
Quickly. These then have to be fixed. So a two-year-old piece of open
source code is likely to have far fewer security flaws than
proprietary code, simply because so many of them have been found and
fixed over that time. Security flaws will also be discovered in
proprietary code, but at a much slower rate.

Comparing the security of Linux with that of Microsoft Windows is
not very instructive. Microsoft has done such a terrible job with
security that it is not really a fair comparison. But comparing Linux
with Solaris, for example, is more instructive. People are finding
security problems with Linux faster and they are being fixed more
quickly. The result is an operating system that, even though it has
only been out a few years, is much more robust than Solaris was at
the same age.

Secure PR

One of the great benefits of the open source movement is the positive-
feedback effect of publicity. Walk into any computer superstore
these days, and you'll see an entire shelf of Linux-based products.
People buy them because Linux's appeal is no longer limited to
geeks; it's a useful tool for certain applications. The same feedback
loop works in security: public algorithms and protocols gain
credibility because people know them and use them, and then they
become the current buzzword. Marketing people call this mindshare.
It's not a perfect model, but hey, it's better than the alternative.

Source
http://www.counterpane.com

- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.-
edited by Harkank
published on: 1999-09-16
comments to office@quintessenz.at
subscribe Newsletter
- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.- -.-. --.-