Security Primer

Key concepts behind security definitions and proofs

Overview

The security notions in this zoo are all stated in terms of a common vocabulary drawn from provable security. This page introduces the key concepts needed to read and interpret these definitions: what kind of adversary we consider, how we measure advantage, and what the standard proof techniques look like.

Probabilistic Polynomial-Time (PPT)

A probabilistic polynomial-time adversary is the standard model of an efficient attacker in modern cryptography.

  • Polynomial-time means the adversary’s running time is bounded by some polynomial in the security parameter (e.g., the key length). This captures the intuition that the attacker is computationally bounded - it cannot, for instance, exhaustively try all keys.
  • Probabilistic means the adversary has access to a source of randomness and can make random choices during its execution. This models the fact that real attackers can flip coins, sample random values, and run randomized algorithms.

Formally, we write A(1^λ) for an adversary A that takes the security parameter λ in unary as input. Security is required to hold for all PPT adversaries, which means no efficient algorithm can break the scheme, regardless of its strategy.

Negligible Functions

When we say a scheme is secure, we do not require the adversary’s success probability to be exactly 1/2 (for indistinguishability) or exactly 0 (for key recovery). Instead, we allow a small advantage - but it must be negligible.

A function ε : ℕ → ℝ≥0 is negligible if it vanishes faster than the inverse of any polynomial: for every polynomial p, there exists λ₀ such that ε(λ) < 1/p(λ) for all λ ≥ λ₀.

Intuitively, a negligible quantity is so small that multiplying it by any polynomial in λ still gives something that goes to zero. This means even if the adversary repeats its attack polynomially many times, its cumulative success probability remains negligible. Typical examples of negligible functions are 2^(-λ) and 2^(-√λ).

A function that is not negligible is called non-negligible. For example, 1/λ² is non-negligible: a scheme that can be broken with probability 1/λ² is considered insecure, because an attacker repeating the attack λ³ times would succeed with overwhelming probability.
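The distinction can be checked numerically; a short sketch (with λ = 40, 80, 128 as sample security parameters):

```python
# Numerical illustration (not a proof): a negligible quantity stays tiny even
# after multiplying by a polynomial, while a non-negligible one does not.

def negligible(lam):        # 2^(-lambda)
    return 2.0 ** -lam

def non_negligible(lam):    # 1/lambda^2
    return 1.0 / lam ** 2

for lam in (40, 80, 128):
    poly = lam ** 3         # polynomially many repetitions of the attack
    print(lam, poly * negligible(lam), poly * non_negligible(lam))
# The first product goes to zero; the second grows linearly in lambda.
```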

Game-Based Security Definitions

Security notions are named XXX-YYY, where XXX denotes the security property the adversary tries to break (e.g., IND for indistinguishability) and YYY denotes the capabilities granted to the adversary (e.g., CPA, CCA1, CCA2). This naming convention is used throughout the zoo.

A security game (also called a security experiment) is a formal protocol between a challenger C and an adversary A. The challenger simulates the “honest” world - it generates keys, answers oracle queries, and issues challenges. The adversary tries to win the game by guessing a hidden bit, recovering a key, or producing a forgery.

The advantage of A in a game G is a measure of how much better it performs than a trivial (random) strategy. Let Pr[A wins G] be the probability that A wins game G:

  • For search games (like key recovery or forgery), the baseline is typically 0, so Adv_G(A) = Pr[A wins G].
  • For indistinguishability games (like CPA or CCA), the baseline is 1/2, so Adv_G(A) = |Pr[A wins G] − 1/2|.

A scheme is secure with respect to game G if Adv_G(A) is negligible for every PPT adversary A.
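To make the advantage definition concrete, here is a toy indistinguishability experiment in Python; the one-byte “schemes” and the distinguisher are illustrative only, not taken from this zoo:

```python
# Toy indistinguishability experiment: estimate |Pr[A wins] - 1/2| empirically.
import random

def ind_game(adversary, encrypt, trials=20000):
    wins = 0
    for _ in range(trials):
        m0, m1 = 0x00, 0xFF              # adversary's chosen message pair
        b = random.randrange(2)          # challenger's hidden bit
        c = encrypt(m1 if b else m0)
        wins += (adversary(c) == b)
    return abs(wins / trials - 0.5)      # empirical advantage

otp = lambda m: m ^ random.randrange(256)   # fresh one-time pad: hides m
identity = lambda m: m                      # "encryption" that does nothing
guess_lsb = lambda c: c & 1                 # a simple distinguisher

print(ind_game(guess_lsb, otp))         # close to 0: negligible advantage
print(ind_game(guess_lsb, identity))    # 0.5: the maximal advantage
```

The perfectly secret pad leaves the distinguisher at the 1/2 baseline, while the broken identity “scheme” lets it win every run.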

Reductions

A reduction is the primary tool for proving security. To show that scheme Π is secure under assumption P (e.g., the hardness of LWE), we construct an algorithm B (the reduction) that:

  1. receives an instance of problem P as input,
  2. simulates the security game for Π toward the adversary A,
  3. uses A’s output to solve the instance.

If A breaks Π with non-negligible advantage, then B solves P with non-negligible probability - contradicting the assumption that P is hard. Therefore, no PPT adversary can break Π with non-negligible advantage.

The quality of a reduction is measured by how much it degrades the advantage: if A wins with advantage ε, does B solve P with advantage ε, or only ε/q, where q is the number of queries? Tight reductions (where the loss factor is constant) are preferred because they allow smaller security parameters without sacrificing concrete security.
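The three-step template can be made runnable on a toy example; here the “hard problem” (telling uniform bytes from even bytes), the scheme, and all names are invented for illustration:

```python
# Toy reduction: the scheme pads a 1-bit message with a byte drawn from the
# problem distribution, so an IND adversary against the scheme becomes a
# distinguisher for the problem, with no loss in advantage (a tight reduction).
import random

def uniform_sample():    return random.randrange(256)       # "hard" world
def structured_sample(): return random.randrange(128) * 2   # even: LSB leaks

def adversary(c):
    return c & 1            # IND adversary for the toy scheme: read the LSB

def reduction(sample):
    """B: embed the problem instance as the pad, run the IND game for A."""
    b = random.randrange(2)         # 1. instance arrives via sample()
    c = b ^ sample()                # 2. simulate the game: encrypt m_b = b
    return adversary(c) == b        # 3. A correct => guess "structured"

def success_rate(sample, trials=20000):
    return sum(reduction(sample) for _ in range(trials)) / trials

print(success_rate(structured_sample))  # 1.0: A's advantage transfers to B
print(success_rate(uniform_sample))     # ~0.5: the pad carries no information
```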

Game Hopping

Game hopping (also called game playing) is a proof technique introduced by Bellare and Rogaway (2004) and Shoup (2004) in which a proof proceeds through a sequence of games G_0, G_1, …, G_n:

  • G_0 is the original security game.
  • G_n is a trivial game in which the adversary’s advantage is clearly 0 (e.g., because its winning probability is exactly 1/2 for an indistinguishability game).
  • Each consecutive pair (G_i, G_{i+1}) differs by a single conceptual step, and the change in the adversary’s advantage between consecutive games is bounded.

By a triangle inequality argument, and since Adv_{G_n}(A) = 0:

Adv_{G_0}(A) ≤ Σ_{i=0}^{n−1} |Adv_{G_i}(A) − Adv_{G_{i+1}}(A)|

Each hop is justified either by arguing the games are perfectly indistinguishable (so the adversary cannot tell the difference and has identical advantage), or by a reduction showing that distinguishing the two games would solve a hard problem.
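The bound can be sanity-checked on made-up numbers for a hypothetical three-hop proof:

```python
# Sanity check: the end-to-end advantage is at most the sum of per-hop gaps.
win = [0.73, 0.71, 0.62, 0.50]                     # Pr[A wins G_i], G_3 trivial
hops = [abs(a - b) for a, b in zip(win, win[1:])]  # per-hop differences
assert abs(win[0] - 0.5) <= sum(hops) + 1e-9       # triangle inequality
```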

Hybrid Arguments

A hybrid argument is a special case of game hopping used to handle multiple independent instances of the same operation (e.g., handling multiple challenge ciphertexts).

Consider proving that a scheme secure against a single challenge is also secure against n challenges. Let b be the challenger’s hidden bit. In the multi-challenge game, the adversary can submit n pairs of messages (m_0^j, m_1^j) and receive encryptions of m_b^j. We define a sequence of hybrid games H_0, H_1, …, H_n. In game H_i, the challenger encrypts the “right” message m_1^j for the first i queries, and the “left” message m_0^j for the remaining n − i queries.

Notice that H_0 corresponds to encrypting all “left” messages (the real game with b = 0), and H_n corresponds to encrypting all “right” messages (b = 1). Furthermore, consecutive hybrids H_{i−1} and H_i differ only in the encryption of the i-th ciphertext. Any adversary that can distinguish H_{i−1} from H_i can be used to break the single-challenge security of the scheme. By a triangle inequality (union bound) argument over the hybrids:

Adv^{multi}(A) ≤ Σ_{i=1}^{n} |Pr[A wins H_i] − Pr[A wins H_{i−1}]| ≤ n · ε,

where ε is an upper bound on the single-challenge advantage.

If n is polynomial in λ and the single-challenge advantage ε is negligible, the total multi-challenge advantage n · ε remains negligible. This is the standard argument establishing the equivalence of the single-challenge (Find-Then-Guess, FTG) and multi-challenge (Left-or-Right, LOR) formulations for standard CPA and CCA security.
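The hybrid reduction can be sketched end to end on a deliberately broken toy scheme (all names and parameters below are illustrative); note how the measured single-challenge advantage is the multi-challenge advantage divided by n, the classic hybrid loss:

```python
# Runnable sketch of the hybrid reduction: the single-challenge adversary
# embeds its one challenge at a random hybrid position i and fills in the
# other n-1 slots itself.
import random

n = 8

def enc(m):
    return m          # maximally leaky toy "encryption" of one bit

def multi_D(cts):
    # Multi-challenge distinguisher: with this toy scheme it wins with
    # advantage 1/2 by checking whether the majority of slots are 1.
    return int(2 * sum(cts) > len(cts))

def single_adversary(challenge_enc):
    i = random.randrange(n)                 # pick a random hybrid index
    cts = [enc(1)] * i                      # slots < i: "right" messages
    cts.append(challenge_enc(0, 1))         # slot i: the embedded challenge
    cts += [enc(0)] * (n - i - 1)           # slots > i: "left" messages
    return multi_D(cts)

def estimate(trials=40000):
    """Estimate Pr[guess == b] - 1/2 in the single-challenge game."""
    wins = 0
    for _ in range(trials):
        b = random.randrange(2)
        guess = single_adversary(lambda m0, m1: enc(m1 if b else m0))
        wins += (guess == b)
    return wins / trials - 0.5

print(estimate())   # close to 1/16 = (multi advantage 1/2) / (n = 8)
```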

Notably, for some FHE-specific notions (such as CPAD and vCCAD in the approximate setting), this hybrid argument fails. The simulated games are not perfectly indistinguishable across hybrids because the decryption of approximate ciphertexts leaks varying amounts of noise depending on the exact queries. This is precisely why the multi-challenge variants of those notions are strictly stronger than their single-challenge counterparts.

Circular Security

Standard security definitions (like IND-CPA) guarantee that ciphertexts hide the underlying plaintext, provided the adversary’s chosen messages do not depend on the secret key. Circular security (also called Key-Dependent Message or KDM security) is the property that a scheme remains secure even when the adversary is explicitly given encryptions of the secret key or functions of the secret key under its corresponding public key.

Formally, an n-cycle arises when there exist key pairs (pk_1, sk_1), …, (pk_n, sk_n) such that sk_i is encrypted under pk_{i+1} (with indices wrapping around modulo n). The most critical case in FHE is a 1-cycle, where sk is encrypted under its own pk.

Circular security assumptions are ubiquitous and mandatory in FHE because fundamental homomorphic operations require publishing evaluation keys that consist of encrypted secret-key material:

  • Bootstrapping: The operation that refreshes a noisy ciphertext to evaluate circuits of arbitrary depth. A bootstrapping key contains encryptions of the secret key bits under the same public key.
  • Relinearization: Used after homomorphic multiplication to reduce the expanded ciphertext back to its original dimension. Relinearization keys encrypt the tensor product of the secret key with itself (s ⊗ s).
  • Rotation (Galois) keys: Used in schemes like BGV, BFV, and CKKS to perform cyclic permutations on plaintext slots. These keys encrypt a Galois automorphism applied to the secret key.

Without circular security, standard IND-CPA proofs provide absolutely no guarantees once these evaluation keys are published. Circular security is a strictly stronger assumption than standard CPA security: Black, Rogaway, and Shrimpton (2002) and subsequent works demonstrated artificial encryption schemes that are provably CPA-secure but trivially leak the secret key if it is encrypted under itself. In practice, mainstream FHE schemes (BFV, BGV, CKKS, TFHE) are heuristically believed to be circularly secure, but no unconditional proof exists reducing this property to standard lattice assumptions like LWE.
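The separation can be made concrete with a miniature version of such an artificial scheme (a toy one-time construction, purely illustrative):

```python
# Miniature counterexample: a toy one-time scheme patched with a pathological
# key-dependent branch. On key-independent messages it behaves like a one-time
# pad, but the 1-cycle ciphertext enc(k, k) hands the adversary the key.
import random

KEYBITS = 8

def keygen():
    return random.randrange(2 ** KEYBITS)

def enc(k, m):
    if m == k:                  # pathological branch: self-encryption...
        return ("leak", k)      # ...outputs the key in the clear
    return ("ct", m ^ k)        # otherwise: ordinary one-time pad

k = keygen()
print(enc(k, k))                # ("leak", k): circular security fails
```

Since an IND-CPA adversary chooses its messages without knowing k, it hits the leaky branch only with negligible probability, so the patch does not hurt standard security, yet publishing enc(k, k) is fatal.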

Further Reading

The game-playing technique was systematized by Bellare and Rogaway (2004). Shoup (2004) presents a great introduction to game hopping proofs. Hybrid arguments and their role in proving CPA security appear in most cryptography textbooks; see Katz and Lindell, Introduction to Modern Cryptography (3rd ed.), or Boneh and Shoup, A Graduate Course in Applied Cryptography. For the failure of hybrid arguments in the approximate FHE setting, see Brzuska et al. (CIC 2025). For a comprehensive treatment of security notions for FHE - including definitions, relations, separations, and constructions - see the PhD thesis of Renard (2025).