# Chapter 5: Pseudorandom Generators

We have already seen that randomness is essential for cryptographic security. Following Kerckhoffs' principle, we assume that an adversary knows everything about our cryptographic algorithms except for the outcome of the internal random choices made when running the algorithm. Private randomness truly is the only resource honest parties can leverage for security in the presence of an adversary.

One-time secrecy for encryption is a very demanding requirement, when we view randomness as a resource. One-time secrecy essentially requires one to use independent, uniformly random bits to mask every bit of plaintext that is sent. As a result, the textbook one-time pad is grossly impractical for most real-world use.

In this chapter, we ask whether it might be beneficial to settle for a distribution of bits which is not uniform but merely “looks uniform enough.” This is exactly the idea behind pseudorandomness. A pseudorandom distribution is not uniform in a strict mathematical sense, but is indistinguishable from the uniform distribution by polynomial-time algorithms (in the sense defined in the previous chapter).

Perhaps surprisingly, a relatively small number of truly uniform bits can be used to generate a huge number of pseudorandom bits. Pseudorandomness therefore leads to an entire new world of cryptographic possibilities, an exploration that we begin in this chapter.

## 5.1: Definition

As mentioned above, a pseudorandom distribution “looks uniform” to all polynomial-time computations. We already know of a distribution that “looks uniform”: namely, the uniform distribution itself! A more interesting case is when λ uniform bits are used to deterministically (i.e., without further use of randomness) produce λ + ℓ pseudorandom bits, for ℓ > 0. The process that “extends” λ bits into λ + ℓ bits is called a pseudorandom generator. More formally:

### Definition 5.1: PRG Security

Let G : {0,1}^λ → {0,1}^{λ+ℓ} be a deterministic function with ℓ > 0. We say that G is a pseudorandom generator (PRG) if ℒGprg-real ≋ ℒGprg-rand, where:

The value ℓ is called the stretch of the PRG. The input to the PRG is typically called a seed.
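To make the interface concrete, here is a sketch in Python of the two libraries from the definition. The function `toy_prg` is a deliberately insecure stand-in for G, and the parameter values are illustrative, not part of the definition:

```python
import secrets

LAMBDA = 16  # seed length in bits (illustrative)
ELL = 16     # stretch in bits (illustrative)

def toy_prg(seed: int) -> int:
    # Stand-in for G: output the seed twice. Deterministic with stretch ELL,
    # but NOT a secure PRG -- a distinguisher can just compare the halves.
    return (seed << LAMBDA) | seed

def query_prg_real() -> int:
    # L_prg-real: sample a uniform seed s and answer with G(s).
    s = secrets.randbits(LAMBDA)
    return toy_prg(s)

def query_prg_rand() -> int:
    # L_prg-rand: answer with a uniform (LAMBDA + ELL)-bit string.
    return secrets.randbits(LAMBDA + ELL)
```

A distinguisher is then any polynomial-time program that calls QUERY and outputs a bit; G is a secure PRG when no such program's probability of outputting 1 changes noticeably between the two implementations.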

#### Discussion:

• Is 0010110110 a random string? Is it pseudorandom? What about 0000000001? Do these questions make any sense?

Randomness and pseudorandomness are not properties of individual strings; they are properties of the process used to generate the string. We will try to be precise about how we talk about these things (and you should too). When we have a value z = G(s) where G is a PRG and s is chosen uniformly, we can say that z was “chosen pseudorandomly”, but not that z “is pseudorandom”. On the other hand, a distribution can be described as “pseudorandom.” The same goes for describing a value as “chosen uniformly” and describing a distribution as “uniform.”

• Pseudorandomness can happen only in the computational setting, where we restrict focus to polynomial-time adversaries. The exercises ask you to prove that for all functions G (with positive stretch), ℒGprg-real ≢ ℒGprg-rand (note the use of ≡ rather than ≋). That is, the output distribution of a PRG can never actually be uniform in a mathematical sense. Because it has positive stretch, the best it can hope to be is pseudorandom.
• It’s sometimes convenient to think in terms of statistical tests. When given access to some data claiming to be uniformly generated, your first instinct is probably to perform a set of basic statistical tests: Are there roughly an equal number of 0s as 1s? Does the substring 01010 occur with roughly the frequency I would expect? If I interpret the string as a series of points in the unit square [0,1)^2, is it true that roughly π/4 of them are within Euclidean distance 1 of the origin?1

1For a list of statistical tests of randomness that are actually used in practice, see http://csrc.nist.gov/publications/nistpubs/800-22-rev1a/SP800-22rev1a.pdf.

The definition of pseudorandomness is kind of a “master” definition that encompasses all of these statistical tests and more. After all, what is a statistical test, but a polynomial-time procedure that obtains samples from a distribution and outputs a yes-or-no decision? Pseudorandomness implies that every statistical test will “accept” when given pseudorandomly generated inputs with essentially the same probability as when given uniformly sampled inputs.
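For instance, the “roughly equal number of 0s and 1s” check above can be phrased as exactly such a yes-or-no procedure. This sketch is illustrative; the tolerance threshold is an arbitrary choice, not a standardized one:

```python
def monobit_test(sample_bits: str, tolerance: float = 0.05) -> bool:
    """A statistical test as a yes-or-no decision: accept (output 1/True)
    if the fraction of 1 bits is within `tolerance` of 1/2."""
    ones = sample_bits.count("1")
    frac = ones / len(sample_bits)
    return abs(frac - 0.5) <= tolerance
```

PRG security implies that this test, like any other polynomial-time test, accepts pseudorandomly generated samples with essentially the same probability as uniformly sampled ones.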

• Consider the case of a length-doubling PRG (so ℓ = λ; the PRG has input length λ and output length 2λ). The PRG only has 2^λ possible inputs, and so there are at most 2^λ possible outputs. Among all of {0,1}^{2λ}, this is a minuscule fraction indeed. Almost all strings of length 2λ are impossible outputs of G. So how can outputs of G possibly “look uniform?”

The answer is that it’s not clear how to take advantage of this observation. While it is true that most strings (as a fraction of 2^{2λ}) are impossible outputs of G, it is also true that 2^λ of them are possible, which is certainly a lot from the perspective of a program that runs in time polynomial in λ. Recall that the problem at hand is designing a distinguisher that behaves as differently as possible in the presence of the pseudorandom and uniform distributions. It is not enough to behave differently on just a few strings here and there: an individual string can contribute at most 1/2^λ to the final distinguishing advantage. A successful distinguisher must be able to recognize a huge number of outputs of G so it can behave differently on them, but there are exponentially many G-outputs, and they may not follow any easy-to-describe pattern.

Below is an example that explores these ideas in more detail.

#### Example

Let G be a length-doubling PRG as above, let t ∈ {0,1}^{2λ} be an arbitrary string, and consider the following distinguisher 𝒜t, which has the value t hard-coded: 𝒜t requests a sample x from the library and outputs 1 if and only if x = t.

What is the distinguishing advantage of 𝒜t?

It is easy to see what happens when 𝒜t is linked to ℒGprg-rand. The library responds with a uniform sample from {0,1}^{2λ}, which equals the fixed string t with probability exactly 1/2^{2λ}. We get: Pr[𝒜t ◊ ℒGprg-rand outputs 1] = 1/2^{2λ}.

What happens when linked to ℒGprg-real depends on whether t is a possible output of G, which is a simple property of G. We always allow distinguishers to depend arbitrarily on G. In particular, it is fine for a distinguisher to “know” whether its hard-coded value t is a possible output of G. What the distinguisher “doesn’t know” is which library it is linked to, and the value of s that was chosen in the case that it is linked to ℒGprg-real.

Suppose for simplicity that G is injective (the exercises explore what happens when G is not). If t is a possible output of G, then there is exactly one choice of s such that G(s) = t, so we have: Pr[𝒜t ◊ ℒGprg-real outputs 1] = 1/2^λ.

Hence, the distinguishing advantage of 𝒜t is |1/2^λ − 1/2^{2λ}| ≤ 1/2^λ. If t is not a possible output of G, then we have: Pr[𝒜t ◊ ℒGprg-real outputs 1] = 0.

Hence, the distinguishing advantage of 𝒜t is |1/2^{2λ} − 0| = 1/2^{2λ}.

In either case, 𝒜t has negligible distinguishing advantage. This merely shows that 𝒜t (for any hard-coded t) is not a particularly helpful distinguisher against any PRG. Of course, any candidate PRG might be insecure because of other distinguishers, but this example should serve to illustrate that PRG security is at least compatible with the fact that some strings are impossible outputs of the PRG.
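The example above can be checked by brute force at a toy scale. Here G is a small injective length-doubling function (not a secure PRG; purely illustrative), and 𝒜t outputs 1 exactly when its sample equals t:

```python
LAM = 4  # tiny seed length so every seed and output can be enumerated

def G(s: int) -> int:
    # Toy injective length-doubling function: the seed, repeated twice.
    return (s << LAM) | s

def adv_of_A_t(t: int) -> float:
    """Exact distinguishing advantage of the hard-coded distinguisher A_t."""
    # Pr[A_t outputs 1 | prg-real]: fraction of seeds s with G(s) == t.
    p_real = sum(1 for s in range(2 ** LAM) if G(s) == t) / 2 ** LAM
    # Pr[A_t outputs 1 | prg-rand]: t is one string among 2^(2*LAM).
    p_rand = 1 / 2 ** (2 * LAM)
    return abs(p_real - p_rand)
```

For t in the image of G the computed advantage is 1/2^λ − 1/2^{2λ}; for t outside the image it is 1/2^{2λ}. Both are negligible as functions of λ, matching the calculation above.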

#### Related Concept: Random Number Generation

The definition of a PRG includes a uniformly sampled seed. In practice, this PRG seed has to come from somewhere. Generally a source of “randomness” is provided by the hardware or operating system, and the process that generates these random bits is (confusingly) called a random number generator (RNG).

In this course we won’t cover low-level random number generation, but merely point out what makes it different than the PRGs that we study:

• The job of a PRG is to take a small amount of “ideal” (in other words, uniform) randomness and extend it.
• By contrast, an RNG usually takes many inputs over time and maintains an internal state. These inputs are often from physical/hardware sources. While these inputs are “noisy” in some sense, it is hard to imagine that they would be statistically uniform. So the job of the RNG is to “refine” (sometimes many) sources of noisy data into uniform outputs.
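As a toy illustration of this division of labor, a program might ask the OS RNG for a short uniform seed and then expand it deterministically. The SHA-256 counter-mode expander below is only a sketch of the PRG side, not a vetted design:

```python
import os
import hashlib

def os_seed(nbytes: int = 32) -> bytes:
    """Ask the operating system's RNG for a fresh seed."""
    return os.urandom(nbytes)

def expand(seed: bytes, out_len: int) -> bytes:
    """Illustrative PRG-style expansion: hash the seed with a counter until
    enough output bytes are produced. Deterministic in the seed."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]
```

The RNG (`os.urandom`) refines noisy physical inputs into short uniform strings; the PRG-style `expand` stretches one such string into as many bytes as needed.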

#### Perspective on this Chapter

PRGs are a fundamental cryptographic building block that can be used to construct more interesting things. But you are unlikely to ever find yourself designing your own PRG or building something from a PRG that couldn’t be built from some higher-level primitive instead. For that reason, we will not discuss specific PRGs or how they are designed in practice. Rather, the purpose of this chapter is to build your understanding of the concepts (like pseudorandomness, indistinguishability, proof techniques) that will be necessary in the rest of the class.

## 5.2: Application: Shorter Keys in One-Time-Secret Encryption

PRGs essentially convert a smaller amount of uniform randomness into a larger amount of good-enough-for-polynomial-time randomness. So a natural first application of PRGs is to obtain one-time-secure encryption but with keys that are shorter than the plaintext. This is the first of many acts of crypto-magic which are impossible except when restricting our focus to polynomial-time adversaries.

The main idea is to hold a short key k, expand it to a longer string using a PRG G, and use the result as a one-time pad on the (longer) plaintext. More precisely, let G : {0,1}^λ → {0,1}^{λ+ℓ} be a PRG, and define the following encryption scheme:

### Construction 5.2: Pseudo-OTP
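The construction can be sketched in Python as follows. The SHA-512-based stand-in for G and the byte-level parameters are illustrative choices only, not part of the construction itself:

```python
import secrets
import hashlib

LAM = 16  # key length in bytes (illustrative)
ELL = 16  # stretch in bytes (illustrative)

def G(seed: bytes) -> bytes:
    """Stand-in PRG mapping LAM bytes to LAM + ELL bytes (illustrative only)."""
    assert len(seed) == LAM
    return hashlib.sha512(seed).digest()[:LAM + ELL]

def keygen() -> bytes:
    # k <- {0,1}^lam: a short uniform key.
    return secrets.token_bytes(LAM)

def enc(k: bytes, m: bytes) -> bytes:
    # Plaintexts are lam + ell bits long: longer than the key.
    assert len(m) == LAM + ELL
    pad = G(k)  # expand the short key with the PRG
    return bytes(a ^ b for a, b in zip(pad, m))  # one-time pad with G(k)

def dec(k: bytes, c: bytes) -> bytes:
    pad = G(k)
    return bytes(a ^ b for a, b in zip(pad, c))
```

Note that encryption is exactly one-time pad, except the pad is G(k) rather than a uniformly chosen string of the same length.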

The resulting scheme will not have (perfect) one-time secrecy. That is, encryptions of mL and mR will not be identically distributed in general. However, the distributions will be indistinguishable if G is a secure PRG. The precise flavor of security obtained by this construction is the following.

### Definition 5.3: Computational One-Time Secrecy

Let Σ be an encryption scheme, and let ℒΣots-L and ℒΣots-R be defined as in Definition 2.5 (and repeated below for convenience). Then Σ has (computational) one-time secrecy if ℒΣots-L ≋ ℒΣots-R. That is, if for all polynomial-time distinguishers 𝒜, we have Pr[𝒜 ◊ ℒΣots-L = 1] ≈ Pr[𝒜 ◊ ℒΣots-R = 1].

### Claim 5.4

Let pOTP[G] denote Construction 5.2 instantiated with PRG G. If G is a secure PRG then pOTP[G] has computational one-time secrecy.

#### Proof:

We must show that ℒpOTP[G]ots-L ≋ ℒpOTP[G]ots-R. As usual, we will proceed using a sequence of hybrids that begins at ℒpOTP[G]ots-L and ends at ℒpOTP[G]ots-R. For each hybrid library, we will demonstrate that it is indistinguishable from the previous one. Note that we are allowed to use the fact that G is a secure PRG. In practical terms, this means that if we can express some hybrid library in terms of ℒGprg-real (one of the libraries in the PRG security definition), we can replace it with its counterpart ℒGprg-rand (or vice-versa). The PRG security of G says that such a change will be indistinguishable.

The starting point is ℒpOTP[G]ots-L, shown here with the details of pOTP[G] filled in.

The first hybrid step is to factor out the computations involving G, in terms of the ℒGprg-real library.

From the PRG security of G, we may replace the instance of ℒGprg-real with ℒGprg-rand. The resulting hybrid library ℒhyb-2 is indistinguishable from the previous one.

A subroutine has been inlined. Note that the resulting library is precisely ℒOTPots-L! Here, OTP is instantiated on plaintexts of length λ + ℓ (and the variable k has been renamed to z).

The (perfect) one-time secrecy of OTP allows us to replace ℒOTPots-L with ℒOTPots-R; they are interchangeable.

The rest of the proof is essentially a “mirror image” of the previous steps, in which we perform the same steps but in reverse (and with mR hanging around instead of mL).

A statement has been factored out into a subroutine, which happens to exactly match ℒGprg-rand.

From the PRG security of G, we can replace ℒGprg-rand with ℒGprg-real. The resulting library is indistinguishable from the previous one.

A subroutine has been inlined. The result is ℒpOTP[G]ots-R .

Summarizing, we showed a sequence of hybrid libraries satisfying the following:

ℒpOTP[G]ots-L ≡ ℒhyb-1 ≋ ℒhyb-2 ≡ ℒhyb-3 ≡ ℒhyb-4 ≡ ℒhyb-5 ≋ ℒhyb-6 ≡ ℒpOTP[G]ots-R.

Hence, ℒpOTP[G]ots-L ≋ ℒpOTP[G]ots-R, and pOTP[G] has (computational) one-time secrecy.

## 5.3: Taking the Contrapositive Point-of-View

We just proved the statement “if G is a secure PRG, then pOTP[G] has one-time secrecy,” but let’s also think about the contrapositive of that statement:

If the pOTP[G] scheme is not one-time secret, then G is not a secure PRG.

If the pOTP scheme is not secure, then there is some distinguisher 𝒜 that can distinguish the two ℒots-* libraries with better-than-negligible advantage. Knowing that such an 𝒜 exists, can we indeed break the security of G?

Imagine going through the sequence of hybrid libraries with this hypothetical 𝒜 as the calling program. We know that one of the steps of the proof must break down, since 𝒜 successfully distinguishes between the endpoints of the hybrid sequence. Some of the steps of the proof were unconditional; for example, factoring out and inlining subroutines never has an effect on the calling program. These steps of the proof always hold; they are the steps where we write ℒhyb-i ≡ ℒhyb-(i+1).

The steps where we write ℒhyb-i ≋ ℒhyb-(i+1) are conditional. In our proof, the steps ℒhyb-1 ≋ ℒhyb-2 and ℒhyb-5 ≋ ℒhyb-6 relied on G being a secure PRG. So if a hypothetical 𝒜 was able to break the security of pOTP, then that same 𝒜 must also successfully distinguish between ℒhyb-1 and ℒhyb-2, or between ℒhyb-5 and ℒhyb-6, the only conditional steps of the proof.

Let’s examine the two cases:

• Suppose the hypothetical 𝒜 distinguishes ℒhyb-1 from ℒhyb-2. These two libraries differ only in whether ℒGprg-real or ℒGprg-rand is linked to a common library ℒ. But with a change of perspective, this means that 𝒜 ◊ ℒ is a calling program that successfully distinguishes ℒGprg-real from ℒGprg-rand. In other words, 𝒜 ◊ ℒ breaks the PRG security of G!

• Suppose the hypothetical 𝒜 only distinguishes ℒhyb-5 from ℒhyb-6. Going through the reasoning above, we will reach a similar conclusion but with a different ℒ than before (in fact, it will be mostly the same library but with mL replaced by mR).

So you can think of our security proof as a very roundabout way of saying the following:

If you give me an adversary/distinguisher 𝒜 that breaks the one-time secrecy of pOTP[G], then I can use it to build an adversary that breaks the PRG security of G. More specifically, G is guaranteed to be broken by (at least) one of the two distinguishers:2

2The statement is somewhat non-constructive, since we don’t know for sure which of the two distinguishers will be the one that actually works. But a way around this is to consider a single distinguisher that flips a coin and acts like the first one with probability 1/2 and acts like the second one with probability 1/2.
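The coin-flipping trick from the footnote is easy to write down generically. In this sketch, `A1` and `A2` stand for the two candidate distinguishers (hypothetical callables that map the library's QUERY interface to an output bit):

```python
import secrets

def coin_flip_distinguisher(A1, A2):
    """Combine two distinguishers into one: flip a fair coin, then behave
    like A1 or A2 for the rest of the interaction."""
    def A(query):
        if secrets.randbits(1):
            return A1(query)
        return A2(query)
    return A
```

The combined distinguisher's output probability in each library is the average of the two components' probabilities, which is how the non-constructive "one of the two works" guarantee is converted into a single concrete adversary.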

In fact, this would be the “traditional” method for proving security that you might see in many other cryptography texts. You would suppose you are given an adversary 𝒜 breaking pOTP, then you would demonstrate why at least one of the adversaries given above breaks the PRG. Unfortunately, it is quite difficult to explain how one is supposed to come up with these PRG-adversaries in terms of the one-time-secrecy adversaries. For that reason, we will reason about proofs in the “if this is secure then that is secure” direction, rather than the contrapositive “if that is insecure then this is insecure.”

## 5.4: Extending the Stretch of a PRG

Recall that the stretch of a PRG is the amount by which the PRG’s output length exceeds its input length. A PRG with very long stretch seems much more useful than one with small stretch. Is there a limit to the stretch of a PRG? Using only λ bits of true uniform randomness, can we generate 100λ, or even λ^3, pseudorandom bits?

In this section we will see that once you can extend a PRG a little bit, you can also extend it a lot. This is another magical feat that is possible with pseudorandomness but not with truly uniform distributions. We will demonstrate the concept by extending a PRG with stretch λ into one with stretch 2λ, but the idea can be used to increase the stretch of any PRG indefinitely (see the exercises).

### Construction 5.5: PRG Feedback

Let G : {0,1}^λ → {0,1}^{2λ} be a length-doubling PRG (i.e., a PRG with stretch λ). When x ∈ {0,1}^{2λ}, we write xleft to denote the leftmost λ bits of x and xright to denote the rightmost λ bits. Define the length-tripling function H : {0,1}^λ → {0,1}^{3λ} as follows:
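A sketch of this feedback construction in Python. The SHA-512-based stand-in for the length-doubling G is illustrative only; H is defined purely in terms of G:

```python
import hashlib

LAM = 16  # block length in bytes (illustrative)

def G(s: bytes) -> bytes:
    """Stand-in length-doubling PRG: LAM bytes in, 2*LAM bytes out
    (illustrative only, not a vetted PRG)."""
    assert len(s) == LAM
    return hashlib.sha512(s).digest()[:2 * LAM]

def H(s: bytes) -> bytes:
    """Length-tripling PRG built from G by feedback."""
    x = G(s)
    x_left, x_right = x[:LAM], x[LAM:]
    y = G(x_right)        # feed the right half of x back into G
    return x_left + y     # output length: LAM + 2*LAM = 3*LAM
```

The left half of the first G-output is emitted directly, and the right half is recycled as a fresh seed; iterating this idea extends the stretch indefinitely (see the exercises).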

### Claim 5.6:

If G is a secure length-doubling PRG, then H (defined above) is a secure length-tripling PRG.

#### Proof:

We want to show that ℒHprg-real ≋ ℒHprg-rand. As usual, we do so with a hybrid sequence. Since we assume that G is a secure PRG, we are allowed to use the fact that ℒGprg-real ≋ ℒGprg-rand. In this proof, we will use the fact twice: once for each occurrence of G in the code of H.

The starting point is ℒHprg-real, shown here with the details of H filled in.

The first invocation of G has been factored out into a subroutine. The resulting hybrid library includes an instance of ℒGprg-real.

From the PRG security of G, we can replace the instance of ℒGprg-real with ℒGprg-rand. The resulting hybrid library is indistinguishable.

A subroutine has been inlined.

Choosing 2λ uniformly random bits and then splitting them into two halves has exactly the same effect as choosing λ uniformly random bits and independently choosing λ more.

The remaining appearance of G has been factored out into a subroutine. Now ℒGprg-real makes its second appearance.

Again, the PRG security of G lets us replace ℒGprg-real with ℒGprg-rand. The resulting hybrid library is indistinguishable.

A subroutine has been inlined.

Similar to above, concatenating λ uniform bits with 2λ independently uniform bits has the same effect as sampling 3λ uniform bits. The result of this change is ℒHprg-rand.

Through this sequence of hybrid libraries, we showed that:

ℒHprg-real ≡ ℒhyb-1 ≋ ℒhyb-2 ≡ ℒhyb-3 ≡ ℒhyb-4 ≡ ℒhyb-5 ≋ ℒhyb-6 ≡ ℒhyb-7 ≡ ℒHprg-rand.

Hence, H is a secure PRG.

## Exercises:

5.1: Let G : {0,1}^λ → {0,1}^{λ+ℓ}. Define im(G) = {y ∈ {0,1}^{λ+ℓ} | ∃s : G(s) = y}, the set of possible outputs of G. For simplicity, assume that G is injective (i.e., 1-to-1).

(a). What is |im(G)|, as a function of λ and ℓ?

(b). A string y is chosen randomly from {0,1}^{λ+ℓ}. What is the probability that y ∈ im(G), expressed as a function of λ and ℓ?

5.2: Let G : {0,1}^λ → {0,1}^{λ+ℓ} be an injective PRG. Consider the following distinguisher:

(a). What is the advantage of 𝒜 in distinguishing ℒGprg-real and ℒGprg-rand? Is it negligible?

(b). Does this contradict the fact that G is a PRG? Why or why not?

(c). What happens to the advantage if G is not injective?

5.3: Let G : {0,1}^λ → {0,1}^{λ+ℓ} be an injective PRG, and consider the following distinguisher:

What is the advantage of 𝒜 in distinguishing ℒGprg-real from ℒGprg-rand?

Hint: When computing Pr[𝒜 ◊ ℒGprg-rand outputs 1], separate the probabilities based on whether x ∈ im(G) or not. (im(G) is defined in a previous problem.)

5.4: For any PRG G : {0,1}^λ → {0,1}^{λ+ℓ} there will be many strings in {0,1}^{λ+ℓ} that are impossible to get as output of G. Let S be any such set of impossible G-outputs, and consider the following adversary that has S hard-coded:

What is the advantage of 𝒜 in distinguishing ℒGprg-real from ℒGprg-rand? Why does an adversary like this one not automatically break every PRG?

5.5: Show that the scheme from Construction 5.2 does not have perfect one-time secrecy, by showing that there must exist two messages m1 and m2 whose ciphertext distributions differ.

Hint: There must exist strings s1, s2 ∈ {0,1}^{λ+ℓ} where s1 ∈ im(G) and s2 ∉ im(G). Use these two strings to find two messages m1 and m2 whose ciphertext distributions assign different probabilities to s1 and s2. Note that it is legitimate for an attacker to “know” s1 and s2, as these are properties of G alone, and independent of the random choices made when executing the scheme.

5.6: (a). Let G′ be any function. Show that the following function G is not a secure PRG (even if G′ is a secure PRG!). Describe a successful distinguisher and explicitly compute its advantage:

(b). Let G : {0,1}^λ → {0,1}^{λ+ℓ} be a candidate PRG. Suppose there is a polynomial-time algorithm V with the property that it inverts G with non-negligible probability. That is,

Show that if an algorithm V exists with this property, then G is not a secure PRG. In other words, construct a distinguisher contradicting the PRG-security of G and show that it achieves non-negligible distinguishing advantage.

Note: Don’t assume anything about the output of V other than the property shown above. In particular, V might very frequently output the “wrong” thing.

5.7: In the “PRG feedback” construction H in Section 5.4, there are two calls to G. The security proof applies the PRG security rule to both of them, starting with the first. Describe what happens when you try to apply the PRG security of G to these two calls in the opposite order. Does the proof still work, or must it be in the order that was presented?

5.8: Let ℓ′ > ℓ > 0. Extend the “PRG feedback” construction to transform any PRG of stretch ℓ into a PRG of stretch ℓ′. Formally define the new PRG and prove its security using the security of the underlying PRG.

5.9: Let G : {0,1}^λ → {0,1}^{3λ} be a length-tripling PRG. For each function below, state whether it is also a secure PRG. If the function is a secure PRG, give a proof. If not, then describe a successful distinguisher and explicitly compute its advantage.

5.10: Let G : {0,1}^λ → {0,1}^{3λ} be a length-tripling PRG. Prove that each of the following functions is also a secure PRG:

Note that H includes half of its input directly in the output. How do you reconcile this fact with the conclusion of Exercise 5.6(b)?

5.11: A frequently asked question in cryptography forums is whether it’s possible to determine which PRG implementation was used by looking at output samples.

Let G1 and G2 be two PRGs with matching input/output lengths. Define two libraries ℒG1which-prg and ℒG2which-prg as follows:

Prove that if G1 and G2 are both secure PRGs, then ℒG1which-prg ≋ ℒG2which-prg — that is, it is infeasible to distinguish which PRG was used simply by receiving output samples.

5.12: Prove that if PRGs exist, then P ≠ NP.

Hint: {y | ∃s : G(s) = y} ∈ NP

Note: This implies that ℒGprg-real ≢ ℒGprg-rand for all G. That is, there can be no PRG that is secure against computationally unbounded distinguishers.