Randomness and cryptography overview

Cryptography is the science and art of secret writing and communication. In the 21st century it is most commonly performed using computers and specialized cryptographic programs. These programs typically use one or more algorithms that have been designed in light of accumulated knowledge about cryptographic attacks, extensively tested and standardized. These algorithms almost invariably require as input some data that is random or unpredictable to achieve a safe level of security. One example is the key used for a symmetric cipher such as NIST's Advanced Encryption Standard (AES).

Generating random data for cryptographic purposes seems simple enough, but both theory and practice have shown that there are many pitfalls that can compromise security. What constitutes best or even acceptable practice has been the subject of heated debate in on-line forums such as the cryptography list. This is in part because of the subtle technical issues involved, in part because of differing security requirements reflecting different viewpoints on trust and threat, and in part because old arguments keep resurfacing and getting rehashed without ever being recorded in one place.

This Wiki is intended to address the last issue by providing a place to either resolve arguments or civilly present both sides where consensus is not possible.

Sources of randomness and unpredictability

The notion of randomness has long occupied philosophers. For our purposes we take it as a given that randomness exists in the world. Quantum mechanics, in its most widely accepted interpretations, states this, and there are experimental tests that confirm it.

Even without quantum mechanics, there are theoretical bases for unpredictability in the world. Newton's equations of motion appear to be deterministic. They are a set of differential equations which, given exact knowledge of a set of initial conditions, determine the state of the system exactly over all time. However, initial conditions can never be known exactly. For some systems this does not matter: the errors in the initial conditions grow slowly over time, so if we have sufficiently accurate knowledge of the initial conditions, we can predict the future with good accuracy. Other systems exist, however, where errors increase exponentially over time, so no matter how small the error in the initial conditions, after the system has evolved for some time its state cannot be predicted. Such systems are termed chaotic.
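
As a toy illustration of this sensitivity (not part of the original article, and not a cryptographic randomness source), the logistic map in its chaotic regime shows two nearly identical starting points diverging until they are uncorrelated:

<syntaxhighlight lang="python">
# Toy illustration of sensitive dependence on initial conditions using
# the logistic map x -> r*x*(1-x) with r = 4.0 (its chaotic regime).

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000, 50)
b = logistic_trajectory(0.300000001, 50)  # initial error of only 1e-9

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.3e}")
# The difference grows roughly exponentially until the two trajectories
# bear no resemblance to each other.
</syntaxhighlight>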

A third source of randomness comes from thermodynamics, a central theory in physics that was originally developed in the 1800s to calculate the efficiency of steam engines, and later explained by statistical mechanics in terms of the random motion of the large number of molecules in macroscopic matter.

In particular, classical thermodynamics predicts that any electrical circuit observed in some band of frequencies, <math>\Delta f</math>, will exhibit thermal or Johnson–Nyquist noise with an available noise power of

<math>P = k_B \,T \,\Delta f,</math>

where T is the absolute temperature and <math>k_B</math> is Boltzmann's constant, 1.38×10<sup>−23</sup> J/K. (Equivalently, a resistance R observed in that band shows a mean-square noise voltage of <math>\overline{V^2} = 4 k_B \,T R \,\Delta f</math>.)
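
As a quick numerical illustration of the magnitudes involved (the temperature, bandwidth and resistance below are assumed example values, not taken from the article):

<syntaxhighlight lang="python">
# Rough numerical illustration of Johnson-Nyquist noise magnitudes.
import math

k_B = 1.38e-23      # Boltzmann's constant, J/K
T = 300.0           # room temperature, K
delta_f = 1.0e6     # 1 MHz bandwidth
R = 1.0e3           # 1 kilohm resistor

available_power = k_B * T * delta_f             # watts
v_rms = math.sqrt(4.0 * k_B * T * R * delta_f)  # volts (RMS)

print(f"available noise power: {available_power:.2e} W")  # ~4.1e-15 W
print(f"RMS noise voltage:     {v_rms:.2e} V")            # ~4.1e-6 V
</syntaxhighlight>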

When thermodynamics and statistical mechanics are redone in a quantum mechanical framework, the above formula for thermal noise is replaced by Planck's law, and the noise power drops off exponentially at higher frequencies (shorter wavelengths), making the total noise power over all frequencies finite, instead of infinite as the classical thermodynamic formula above predicts. However, the classical formula is usable into the tens of gigahertz region.

Finally there are phenomena that are purely quantum mechanical in nature. Radioactive decay is one example. Atoms of unstable isotopes spontaneously decay into other isotopes or elements. The timing of this decay depends on forces between the particular configuration of protons and neutrons in the atom's nucleus. Each radioactive isotope has a half-life, a time interval during which each atom has a 50% chance of decaying, but the decay time of an individual atom cannot be predicted. Thus a device like a Geiger counter that detects individual decay particles from a radioactive source can provide truly random data by measuring the times between successive detection events.
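
A minimal sketch of one common way to turn such timings into bits follows; the exponential inter-arrival simulation stands in for a real detector, and comparing successive pairs of intervals is just one possible debiasing scheme:

<syntaxhighlight lang="python">
# Sketch: extracting random bits from the times between detection events.
# A real Geiger counter is replaced by simulated exponentially distributed
# inter-arrival times; pairwise comparison of intervals is a simple
# debiasing scheme, not the only possible one.
import random

def simulated_intervals(n, rate_hz=100.0):
    """Simulate n inter-arrival times of a Poisson decay process."""
    return [random.expovariate(rate_hz) for _ in range(n)]

def bits_from_intervals(intervals):
    """Compare intervals pairwise: emit 1 if the first is longer,
    0 if shorter, and discard ties so the output is unbiased."""
    bits = []
    for t1, t2 in zip(intervals[0::2], intervals[1::2]):
        if t1 > t2:
            bits.append(1)
        elif t1 < t2:
            bits.append(0)
    return bits

print(bits_from_intervals(simulated_intervals(2000))[:32])
</syntaxhighlight>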

Critical nature of random number generation

Unpredictable bit streams, often referred to as random numbers, are critical to the security of most cryptographic systems. Cryptography does not create secrets, it amplifies them. Thus a secret key consisting of 128 bits can protect, say, a health care database containing billions of bits of data. However, if an adversary were able to predict those 128 bits, or even two thirds of them, they would be able to decrypt all the data. There are a variety of ways an adversary might get hold of a key, such as inserting a Trojan horse computer program that looks for secret keys and covertly transmits them back to the adversary. Such attacks have in fact become common, but they are difficult to mount and require a channel that, at least potentially, can be traced back to the adversary.
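
To make the "even two thirds of them" point concrete (the figures below are assumed for illustration): if an adversary learns 85 of the 128 key bits, only 43 unknown bits remain, a search space that brute force exhausts in hours on modest hardware.

<syntaxhighlight lang="python">
# Illustration (assumed numbers): remaining work if an adversary already
# knows most of a 128-bit key.
key_bits = 128
known_bits = 85                       # roughly two thirds of the key
unknown_bits = key_bits - known_bits  # 43 bits left to guess

remaining_keys = 2 ** unknown_bits
print(f"keys left to try: 2^{unknown_bits} = {remaining_keys:.3e}")
# At a modest 10^9 trial decryptions per second:
print(f"time at 1e9 keys/s: {remaining_keys / 1e9 / 3600:.1f} hours")
</syntaxhighlight>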

If the adversary can somehow control our random number generation, then there is no need for a back channel. If it is done in some clever way that only the adversary knows about, it may not even reduce our security with respect to other adversaries. See Random number generator attack.

True random vs. pseudo-random vs. unpredictable vs. entropy

The term random is used in several ways in cryptography and it is important to keep them straight.

  • True random (or sometimes physical random or hardware random) means data generated from some physical source of randomness as described above.
  • Pseudo-random means data generated from an algorithm whose output passes statistical tests for randomness, typically controlled by a seed or key that is much shorter than the output string.
  • Unpredictable means there is no way an adversary could know the output in advance without knowledge of the seed or key. Note that if the adversary knows all but n bits of the seed, they can narrow the output string down to one of 2<sup>n</sup> possibilities.
  • Entropy is a term that comes from classical thermodynamics, where it was a measure of thermal energy that was in some sense unusable in a steam engine. In information theory, entropy is a measure of uncertainty. (It turns out the two notions of entropy are the same at some deep level.) The simplest example is a fair coin flip. The outcome may be heads or tails, each with a 50-50 chance. That outcome can be represented as one bit of computer data, e.g. 1=heads, 0=tails, and we say a coin flip generates one bit of entropy. One could make up a 6-bit binary number by flipping a coin six times and that would generate 6 bits of entropy.
A random string of n bits has n bits of entropy. But what if the string isn't completely random? Consider a string of ten lower-case letters from the English alphabet represented in ordinary 8-bit byte characters of computer data. This string of characters will occupy 80 bits of computer memory, but it won't represent 80 bits of entropy. To begin with, there are 2<sup>8</sup> = 256 possible 8-bit codes. Therefore we don't really need 8 bits to store one out of 26 letters; in fact 5 bits will do, because there are 2<sup>5</sup> = 32 possible 5-bit patterns. So there can be no more than 50 bits of entropy in our ten-character random letter string. But we don't need all 32 possible patterns to represent 26 letters. The way to compute the precise entropy in a case like this is to take the base-2 logarithm of the number of possible letters or symbols. So each letter in a random string of letters has log<sub>2</sub>(26) ≈ 4.70 bits of entropy, and our ten-character random string has about 47 bits of entropy, a little less than 50 (a short calculation is sketched below).
There are more complicated situations. For example, how much entropy is there in a video image or an English sentence? Both have some uncertainty but also much structure. Even if it is not feasible to calculate entropy exactly in such situations, it may be possible to give good estimates, or at least a lower bound.
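
The calculation behind the figures above, as a minimal sketch using the same string length and alphabet size as the example:

<syntaxhighlight lang="python">
# Entropy of a random string of ten lower-case English letters,
# following the example above.
import math

alphabet_size = 26
string_length = 10

bits_per_letter = math.log2(alphabet_size)
total_entropy = string_length * bits_per_letter

print(f"entropy per letter: {bits_per_letter:.4f} bits")  # ~4.7004
print(f"entropy of string:  {total_entropy:.1f} bits")    # ~47.0
print(f"storage used:       {string_length * 8} bits")    # 80
</syntaxhighlight>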

An alternate definition of randomness is due to Gregory Chaitin and Andrei Kolmogorov. A bit stream is random if it cannot be produced by a computer program (Turing machine) shorter than the stream itself. This definition clearly distinguishes a PRNG from an unbiased, uncorrelated source of digitized noise. However, the practical significance of this distinction is challenged in the following section.

Indistinguishability of true and cryptographically generated random bits

There is another approach to generating unpredictable bits for cryptography that follows from the strength of cryptographic primitives such as ciphers and hashes. Here is the argument in its favor:

Standard ciphers and hashes, such as AES and the SHA family, are designed and exhaustively tested to generate output that is indistinguishable from perfectly random bits by any statistical test. Thus if one is relying on the ability of these ciphers to protect confidentiality based on a secret key, one should be able to rely on the same ciphers, operating as PRNGs, to generate unpredictable bits for cryptographic purposes when seeded with a key drawn from a true random source.

This leads to the suggestion that, say, 256 bits of high quality entropy be gathered at system startup and then used to seed a strong PRNG to generate all subsequent random bits needed for cryptographic purposes.
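
A minimal sketch of this idea, assuming Python's standard hashlib and os modules; the hash-in-counter-mode construction here is purely illustrative and is not a vetted DRBG such as those specified in NIST SP 800-90A:

<syntaxhighlight lang="python">
# Sketch of a once-seeded PRNG: gather 256 bits of entropy at startup,
# then derive all later random bytes by hashing a counter with the seed.
# An illustration of the argument above, not a vetted DRBG.
import hashlib
import os

class SeededPRNG:
    def __init__(self, seed: bytes):
        assert len(seed) >= 32, "want at least 256 bits of seed entropy"
        self._seed = seed
        self._counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")
            ).digest()
            self._counter += 1
            out += block
        return out[:n]

# Seed once at startup from the operating system's entropy source.
prng = SeededPRNG(os.urandom(32))
print(prng.random_bytes(16).hex())   # e.g. a fresh 128-bit IV or nonce
</syntaxhighlight>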

Different approaches to generating unpredictable bits for cryptography

There are three major approaches for creating unpredictable bits for cryptography:

  1. Produce all bits from a tested, purpose-built true RNG.
  2. Collect sufficient high quality entropy, say 256 bits, at system startup and use it to seed a high quality PRNG once. Then use its output to generate all subsequent unpredictable bits required, after processing through a cryptographic hash that prevents an attacker who sees some of the unpredictable bits, say in an IV or nonce, from recovering the PRNG internal state.
  3. Seed a PRNG as in 2, but keep stirring fresh entropy into its internal state as the system runs.

The argument for approach 2 is that if you trust a cipher for confidentiality, then why not trust the same cipher running as a PRNG? Furthermore, if an attacker has sufficient access to compromise the internal state of the PRNG, they will be able to do more serious damage, such as planting a rootkit.

The counter-argument goes like this: many critical systems operate continuously for years after initial startup. Would we really use the same PRNG seed for such a long time? If not, and if there are good sources of entropy available to refresh the PRNG, say weekly, why not every second, or even every millisecond?


A variant of 3 is to collect entropy in batches, to ensure that an attacker who has somehow compromised the PRNG cannot incrementally recover its internal state as small amounts of entropy are stirred in.
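
A minimal sketch of that batching idea, building on the hypothetical SeededPRNG class sketched earlier; the 256-bit batch threshold is an assumed parameter:

<syntaxhighlight lang="python">
# Sketch of batched reseeding: new entropy accumulates in a pool and is
# only mixed into the PRNG state once a full batch (here 256 bits) has
# been collected, so an attacker cannot chase the state a few bits at a
# time. Builds on the illustrative SeededPRNG class shown earlier.
import hashlib

class BatchedReseeder:
    BATCH_BITS = 256  # assumed threshold; mix in nothing smaller

    def __init__(self, prng):
        self._prng = prng
        self._pool = hashlib.sha256()
        self._pool_bits = 0

    def add_entropy(self, data: bytes, estimated_bits: int):
        """Accumulate entropy; reseed only when a full batch is ready."""
        self._pool.update(data)
        self._pool_bits += estimated_bits
        if self._pool_bits >= self.BATCH_BITS:
            # Mix the whole batch into the PRNG seed at once.
            self._prng._seed = hashlib.sha256(
                self._prng._seed + self._pool.digest()
            ).digest()
            self._pool = hashlib.sha256()
            self._pool_bits = 0
</syntaxhighlight>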

  • Belt and suspenders and safety pin
  • Linux /dev/random vs /dev/urandom

So what's the problem?

  • Different goals
    • Avoiding corporate data loss
    • Limiting legal liability
    • Meeting standards, such as HIPAA, Sarbanes-Oxley, ...
    • Preserving or restoring individual privacy
    • Thwarting the surveillance state
  • Traditional sources of entropy are drying up
    • Hard disks being replaced by Solid State Drives
    • Servers with no sound, camera or even USB input
  • Internet of Things based on SoCs with no good entropy source
  • Standards often duck the hard issues
  • Auditability is not a widely recognized requirement for random bit generators
    • Standards permit whitening inside a sealed compartment, making audit impossible (e.g. the NIST SP 800-90B draft)
    • Intel's x86 random number generator design RdRand is opaque and potentially subverted
  • Certification processes that reject use of multiple sources of entropy
  • Terminology