# Entropy Estimation

Providing robust communication over a noisy channel is the traditional application of error-correction techniques. As an example, the code rates as well as the error correction and detection capabilities of BCH codes (generic C[255,k,d]) are given in Figure 6.28. With respect to security applications, however, the priority lies on the remaining conditional min-entropy that an adversary is unable to recover, rather than on the classical correction capabilities, as we show next.

We start by analyzing the secure sketch approach under best-case assumptions. This means that there are (1) no statistical defects in the preliminary key material, (2) no entropy losses due to public communication of quantizers, and (3) no potential eavesdropper measuring correlated channel profiles. The bitwise min-entropy of the code word $\displaystyle y_a$ can therefore be assumed to be $\displaystyle H_\infty(X) = 1$. The amount of information an attacker can infer from eavesdropping $\displaystyle syn(y_a)$ corresponds to the number of transmitted bits, $\displaystyle p = n-k$. Therefore, $k$ bits of entropy are retained for each block.
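This best-case accounting can be sketched numerically. The helper below is purely illustrative (the function name and return structure are not from the text); the parameters correspond to a C[255,131,37] code from the BCH family mentioned above.

```python
def best_case_entropy(n: int, k: int, h_min: float = 1.0) -> dict:
    """Best-case secure-sketch accounting: the syndrome syn(y_a)
    reveals p = n - k bits, so with H_min(X) = 1 per bit,
    k bits of min-entropy remain per block."""
    p = n - k                 # bits of syn(y_a) sent over the public channel
    retained = h_min * n - p  # min-entropy left after the leak
    return {"leaked_bits": p,
            "retained_bits": retained,
            "retained_per_bit": retained / n}

# Example with the BCH code C[255, 131, 37]:
acc = best_case_entropy(255, 131)
print(acc["leaked_bits"], acc["retained_bits"])  # 124 131.0
```

With perfect preliminary entropy, the retained entropy per block equals the code dimension k, matching the text.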

Based on the chosen parameters, this algorithm is able to correct bit streams with no more than 25% disagreeing bits. The scheme is especially advantageous since it only requires a single syndrome to be transmitted from Alice to Bob and does not incur a high entropy loss by revealing information to an eavesdropper. Moreover, the computational load can be balanced according to the available resources: Alice only needs to calculate one syndrome, whereas Bob computes two syndromes and performs the computationally expensive syndrome decoding.
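A minimal sketch of this syndrome exchange, using a toy Hamming(7,4) code in place of the C[255,k,d] BCH codes from the text; the parity-check matrix, function names, and single-bit error model are illustrative assumptions, not the scheme's actual parameters.

```python
# Toy syndrome-based reconciliation (secure-sketch style) with a
# Hamming(7,4) code. All names and the matrix are illustrative.

H = [  # parity-check matrix of Hamming(7,4); rows are parity checks
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """syn(word) = H * word over GF(2)."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

# Precompute the bit position for every single-bit-error syndrome.
ERROR = {syndrome([1 if i == j else 0 for i in range(7)]): j
         for j in range(7)}

def reconcile(y_a, y_b):
    """Bob corrects y_b towards Alice's y_a using only syn(y_a):
    Alice computes one syndrome; Bob computes syn(y_b), XORs the two
    (= syn of the disagreement pattern), and decodes it."""
    s = tuple(a ^ b for a, b in zip(syndrome(y_a), syndrome(y_b)))
    if s == (0, 0, 0):
        return list(y_b)      # no disagreement within decoding range
    y = list(y_b)
    y[ERROR[s]] ^= 1          # flip the located disagreeing bit
    return y

y_a = [1, 0, 1, 1, 0, 1, 0]
y_b = [1, 0, 1, 0, 0, 1, 0]   # one-bit disagreement at index 3
assert reconcile(y_a, y_b) == y_a
```

Note the asymmetry the text describes: Alice's side is a single matrix-vector product, while Bob's side additionally runs the (here table-based, in general expensive) syndrome decoding.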

In the next step, we consider statistical defects in the preliminary key material and estimate them by applying on-line entropy estimation, e.g., according to NIST draft SP 800-90B [1] as proposed above. For simplicity, we assume that no potential attacker is close enough to measure correlated channel profiles, and further that a secure quantization scheme is applied. The entropy per bit $\displaystyle H_\infty(X)$, as provided by the draft, may vary within [0, 1]. Accounting for the information loss Y due to the reconciliation interactions in combination with the (estimated) min-entropy $\displaystyle H_\infty(X)$, the leftover conditional min-entropy $\displaystyle H_\infty(X|Y)$ is given for the average case (AC) as well as for the worst case (WC):

Eq. 6.2
$\displaystyle H_{\infty}^{AC}(X|Y)=H_\infty(X)\cdot\frac{k}{n}$

Eq. 6.3
$\displaystyle H_{\infty}^{WC}(X|Y)=H_\infty(X)-\frac{n-k}{n}$
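Equations 6.2 and 6.3 translate directly into code; the function names below are illustrative.

```python
def leftover_entropy_avg(h_min: float, n: int, k: int) -> float:
    """Eq. 6.2: average-case leftover conditional min-entropy per bit."""
    return h_min * k / n

def leftover_entropy_worst(h_min: float, n: int, k: int) -> float:
    """Eq. 6.3: worst-case leftover conditional min-entropy per bit."""
    return h_min - (n - k) / n

# With perfect preliminary entropy, H_min(X) = 1, both cases
# coincide at k/n; with defective material they diverge:
print(leftover_entropy_avg(1.0, 255, 131))    # 131/255
print(leftover_entropy_worst(1.0, 255, 131))  # 1 - 124/255 = 131/255
print(leftover_entropy_worst(0.8, 255, 131))  # pessimistic: full leak assumed
```

The worst case charges the full n-k revealed bits against the (possibly already reduced) per-bit entropy, which is why it drops faster than the average case as $H_\infty(X)$ decreases.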

for a code word of length $n$ bits and dimension $k$. A visualization of Eqs. 6.2 and 6.3 is given in Figure 6.29. In order not to reveal the entire information to an adversary when applying the scheme by Zhang et al. [2], the code limitation $\displaystyle k > (n-k)$ has to hold. Note that this is for the case of $\displaystyle H_\infty(X) = 1$. Otherwise, the formulas for the average-case and worst-case $\displaystyle H_\infty(X|Y)$ are given as follows:

Eq. 6.4
$\displaystyle H_{\infty}^{AC}(X|Y)=(H_\infty(X)\cdot k)\cdot\frac{k-(n-k)}{k}$

Eq. 6.5
$\displaystyle H_{\infty}^{WC}(X|Y)=\frac{(H_{\infty}(X)\cdot k)-(n-k)}{n}$
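A sketch of Eqs. 6.4 and 6.5 as stated, including the code limitation $k > (n-k)$; the assertion message and function names are illustrative.

```python
def general_leftover_avg(h_min: float, n: int, k: int) -> float:
    """Eq. 6.4: average-case leftover min-entropy, H_min(X) in [0, 1]."""
    assert k > n - k, "code limitation k > (n-k) must hold"
    return (h_min * k) * (k - (n - k)) / k

def general_leftover_worst(h_min: float, n: int, k: int) -> float:
    """Eq. 6.5: worst-case leftover min-entropy per bit."""
    assert k > n - k, "code limitation k > (n-k) must hold"
    return ((h_min * k) - (n - k)) / n

# Example with C[255, 131, 37], i.e. k = 131 > n - k = 124:
print(general_leftover_worst(1.0, 255, 131))  # (131 - 124) / 255 = 7/255
```

For $k \le n-k$ the guard fires: the transmitted bits would reveal at least as much information as the block retains, which is exactly the limitation stated in the text.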

Further, as an example, an illustration of the leftover conditional min-entropy for parity-check-bit based reconciliation (cf. approach [2]) using C[255,k,d] BCH codes is given in Figure 6.30(a). The results for the syndrome-based approach (cf. [3]) are illustrated in Figure 6.30(b). In future work, $\displaystyle H_\infty(X)$ could also account for losses due to correlated observations of an eavesdropper and for information revealed in further public interactions.

## References

1. Elaine Barker and John Kelsey. Recommendation for the entropy sources used for random bit generation. Draft NIST Special Publication 800-90B, 2012.
2. Junxing Zhang, Sneha Kumar Kasera, and Neal Patwari. Mobility assisted secret key generation using wireless link signatures. In INFOCOM 2010. 29th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 15-19 March 2010, San Diego, CA, USA, pages 261–265. IEEE, 2010.
3. Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin, and Adam Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. SIAM J. Comput., 38(1):97–139, 2008.