# Quantization - Analysis of Protocols

The potential success of a passive attacker E (estimating the channel from A to B) was evaluated in the previous section for distances between 0 cm and 30 cm. As the metric of success, we again chose the blockwise Pearson correlation coefficient.

Figure 0: BER over time

Evaluation results for BER over time, applying ASBG multi-bit quantization [1]. Alice and Bob show a relatively low BER; Eve and Bob show a BER around 0.5. A dot represents the correlation coefficient of a block. The line represents the correlation coefficient of the whole series. The distribution of the blockwise BER is given on the right-hand side (cf. Figure 0).

Blockwise evaluation introduces errors into the entropy estimation

## Monte Carlo Simulation: Part 1

We perform Monte Carlo simulations based on the simulation environment shown in Figure 6.5. Two independent random sequences of length M = 1 000 000, $\displaystyle x_\alpha= [x_\alpha(1), x_\alpha(2), \ldots, x_\alpha(M)]^T$ and $\displaystyle x_\beta = [x_\beta(1), x_\beta(2), \ldots, x_\beta(M)]^T$ , are modeled as temporally correlated Rayleigh distributed random variables. The maximum Doppler shift specifies the assumed Jakes Doppler spectrum. To obtain a quantitative measure for the grade of reciprocity, we define $\displaystyle \rho_{\alpha\beta}\in[0,1]$ as the Pearson cross-correlation coefficient between the channel measurements $\displaystyle h_{\beta\alpha}$ and $\displaystyle h_{\alpha\beta}$ of nodes $\displaystyle \alpha$ and $\displaystyle \beta$ , respectively, with $\displaystyle \alpha, \beta \in \{A,B,E\}$ and $\displaystyle \alpha\neq\beta$ . For a given $\displaystyle \rho_{\alpha\beta}$ , the correlator correlates the random variables $\displaystyle x_\alpha$ and $\displaystyle x_\beta$ to create imperfect reciprocal estimates $\displaystyle h_{\beta\alpha}$ and $\displaystyle h_{\alpha\beta}$ . As shown in [2], this can be done by applying a Cholesky decomposition of a symmetric and positive definite covariance matrix C that depends on $\displaystyle \rho_{\alpha\beta}$ [3]. With $\displaystyle h_{\beta\alpha}= x_\alpha$ it follows:

$\displaystyle h_{\alpha\beta}=\rho_{\alpha\beta}h_{\beta\alpha}+x_\beta\sqrt{1-\rho_{\alpha\beta}^2}$



Subsequently, $\displaystyle h_{\beta\alpha}$ and $\displaystyle h_{\alpha\beta}$ are given into quantization blocks to generate binary sequences $\displaystyle y_\alpha$ and $\displaystyle y_\beta$ .
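The correlation step above can be sketched in a few lines of Python (a simplified stand-in for the Matlab environment: unit-scale Rayleigh variates, and the Jakes Doppler filtering for temporal correlation is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100_000            # the text uses M = 1 000 000; shortened here
rho = 0.9              # target correlation coefficient rho_ab

# Two independent Rayleigh-distributed sequences (temporal correlation
# via a Jakes Doppler filter is omitted in this sketch).
x_a = rng.rayleigh(scale=1.0, size=M)
x_b = rng.rayleigh(scale=1.0, size=M)

# Correlation step from the equation above.
h_ba = x_a
h_ab = rho * h_ba + np.sqrt(1.0 - rho**2) * x_b

# The empirical Pearson coefficient comes out close to rho.
rho_hat = np.corrcoef(h_ba, h_ab)[0, 1]
```

Because `x_a` and `x_b` are independent with equal variance, the linear combination has covariance $\rho\,\mathrm{Var}(x_\alpha)$ with `h_ba` and unchanged variance, so the empirical Pearson coefficient matches the target $\rho$.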

To evaluate and compare the quantization schemes introduced in Section 2.4.2, we implemented the quantization protocols in Matlab and applied the Monte-Carlo simulation environment. We introduced this approach and the accompanying results in [4] and extended it in [5] by analyzing more quantization schemes from the literature. In this work, we again extend the number of quantization schemes and increase the resolution of the evaluation results by analyzing parameter dependencies.

Figure 6.5: Simulation model: Two independent random variates $\displaystyle x_\alpha$ and $\displaystyle x_\beta$ are mutually correlated with correlation coefficient $\displaystyle \rho_{\alpha\beta}$ . Subsequently, the resulting random variates $\displaystyle h_{\beta\alpha}$ and $\displaystyle h_{\alpha\beta}$ are separately quantized.
Figure 6.6: Simulation results for BER versus correlation coefficient

Figure 6.6 illustrates a raw summary of the BER as a function of the correlation coefficient $\displaystyle \rho$ for our set of quantization schemes (cf. Table 2.3). Basically, two curve progressions can be observed:

1. regressive curve progressions with increasing correlation coefficient (e.g., Aono et al., cf. Figure 6.6)
2. degressive curve progressions with increasing correlation coefficient (e.g., Zenger et al., cf. Figure 6.6)

Due to the regressive curve progressions and, therefore, low BER (< 0.1) even for lower correlation (< 0.75), it can be noticed that the schemes proposed by Mathur et al. [6], Jana et al. [7] (single-bit), and Aono et al. [8] achieve more reliability. Thus, legitimate nodes are likely to generate identical secret keys when applying reliable quantizers for a broad range of $\displaystyle \rho$ . However, from a security perspective, this also holds for passive attackers, who can easily deduce a considerable amount of the legitimate nodes’ key if robust quantizers are used. For the multi-bit quantizers and non-guard-band-based threshold methods, e.g., Zenger et al. [9], Tope et al. [10], Patwari et al. [11], and Jana et al. [12] (multi-bit), this robust reliability does not hold. Here, the degressive curve progressions yield higher BER (> 0.25) for lower correlation (< 0.75). Our results show that the schemes by Ambekar et al. [13] do not lead to a BER of 0.5 for zero-correlated channel profiles. This leads to a serious security problem, insofar as even with totally uncorrelated measurements an adversary may recover partial information of the initial key material. The schemes introduced by Jana et al. [14] and Mathur et al. [15] show a regressive curve progression of BER with increasing correlation. As this linear behavior lets an adversary learn a lot of secret information already with reasonably good correlations, it could corrupt the security of the scheme. The scheme of Zenger et al. [16] shows strong security properties against potential attackers with low correlated observations; however, it also shows bad performance for non-perfectly correlated channel profiles.

## Monte Carlo Simulation: Part 2

In this section, we analyze the influence of the different parameters of the quantization schemes in detail. To this end, we again utilize the Monte Carlo simulation and apply the quantization schemes with different parameter sets. The results of the scheme by Tope et al. [17] show an almost linear behavior between BER and correlation (cf. Figure 6.7). However, with decreasing correlation the curve shows a slightly degressive behavior of the BER. Interestingly, the parameters M, $\displaystyle \alpha$ , and block size do not significantly influence the results. This might be because of the element-wise subtraction of two channel measurement vectors, which smooths the distribution. The scheme by Aono et al. [18] shows different curve progression behaviors depending on the block size (cf. Figure 6.8). Using block sizes smaller than 64, the curve progression is degressive. This behavior might be because the variances of small channel profile blocks are not meaningful due to uncertainty. Therefore, the statistics (variance) cannot increase the robustness against noise. Using a block size of 64 shows a linear behavior between BER and correlation. With increasing block sizes (larger than 64) the curve progression changes towards a regressive behavior. The negative slope of the curve rapidly stagnates at a block size of 512. Further, our simulation shows that the parameter $\displaystyle \beta$ does not significantly influence the results. With increasing values of the parameter $\displaystyle \alpha$ the regressive behavior increases and the scheme provides high reliability. The scheme by Azimi et al. [19] also has a degressive curve progression with increasing correlation coefficient (cf. Figure 6.9). The behavior for correlation values between 0.6 and 1 is almost independent of the block size. However, the scheme provides a BER of 0.48 for zero correlation at a block size of 16. The BER decreases with increasing block size (BER of 0.41 for a block size of 8192).

Figure 6.7: Evaluation results based on the Monte Carlo simulation of the quantization scheme of Tope et al. [20]. The BER versus correlation coefficient $\displaystyle \rho$ is presented.

The scheme by Mathur et al. [21] has block-size-dependent curve progression features similar to Aono et al.’s scheme (cf. Figure 6.10). Using block sizes smaller than 64, the curve progression is degressive. This behavior might be because the variances of small channel profile blocks are not meaningful due to uncertainty. Using a block size of 64 shows a linear behavior between BER and correlation. With increasing block sizes (larger than 64) the curve progression changes towards a regressive behavior. Here, the negative slope of the curve rapidly stagnates at a block size of 256. The parameter $\displaystyle m$ is a protocol parameter giving the length of excursion sequences. It can be seen that increasing the value of $\displaystyle m$ lowers the probability of mismatch.

Figure 6.8: Evaluation results based on the Monte Carlo simulation of the quantization scheme of Aono et al. [22]. The BER versus correlation coefficient $\displaystyle \rho$ is presented.

Furthermore, with increasing parameter $\displaystyle \alpha$ , the regression increases and, therefore, the BER decreases. The results of the single-bit quantization scheme of Jana et al. [23] are similar to the results of Mathur et al.’s and Aono et al.’s schemes (cf. Figure 6.11). This is not a surprise, since all three schemes are based on guard bands to extract key material. Here, the regression of the curve rapidly stagnates at a block size of 256. For $\displaystyle \alpha = 0.4$ the curve is almost linear. If $\displaystyle \alpha < 0.4$ , the curve has a degressive progression with increasing correlation coefficient. If $\displaystyle \alpha > 0.4$ , the curve has a regressive progression with increasing correlation coefficient. The multi-bit quantization scheme of Jana et al. [24] provides a strongly degressive curve progression with increasing correlation coefficient (cf. Figure 6.12). The scheme has an interesting block size dependency. For zero correlation, only the curves for a block size of 128 end at a BER of 0.5. Increasing or decreasing the block size leads to a decreasing BER. Furthermore, the larger the symbol size, the more degressive (and less linear) the curve progression looks.
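The guard-band principle shared by these schemes can be illustrated with a simplified Python model. This is a sketch, not any of the exact protocols: the thresholds mean ± α·σ and the naive index-agreement step are illustrative assumptions. It reproduces the qualitative behavior discussed above, i.e., a low BER already for moderate correlation and a BER near 0.5 only for uncorrelated observations:

```python
import numpy as np

def guardband_quantize(h, alpha):
    # Drop samples inside the guard band [mu - alpha*sigma, mu + alpha*sigma];
    # emit 1 above the upper and 0 below the lower threshold (-1 = dropped).
    mu, sigma = h.mean(), h.std()
    return np.where(h > mu + alpha * sigma, 1,
                    np.where(h < mu - alpha * sigma, 0, -1))

rng = np.random.default_rng(1)
M, alpha = 100_000, 0.5
x_a = rng.rayleigh(size=M)
x_b = rng.rayleigh(size=M)

bers = {}
for rho in (0.0, 0.9, 0.99):
    h_ba = x_a                                        # Alice's estimate
    h_ab = rho * h_ba + np.sqrt(1 - rho**2) * x_b     # Bob's correlated estimate
    b_a = guardband_quantize(h_ba, alpha)
    b_b = guardband_quantize(h_ab, alpha)
    keep = (b_a >= 0) & (b_b >= 0)                    # samples both parties kept
    bers[rho] = float(np.mean(b_a[keep] != b_b[keep]))
```

The guard band is exactly what makes the quantizer robust: a disagreement requires the two estimates to fall on opposite sides of the entire band, which becomes very unlikely once the correlation is high.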

The quantization scheme of Hamida et al. [25] also has an interesting block size dependent curve behavior (cf. Figure 6.13). For a block size of {16, 32, 64, 128, 256} the BER is never higher than {0.01, 0.09, 0.24, 0.34, 0.40}. The highest BER of 0.43 is given for the block size of 512 at zero correlation. That leads to a serious security problem, in so far as even with zero correlated measurements an adversary may recover large parts of the initial key material. The performance parameter k represents the ratio between extracted key material and raw input material. However, it does not affect the curve behavior significantly.

The quantization scheme of Wallace et al. [26] also possesses the block-size-dependent curve behavior known from Jana et al.’s and Hamida et al.’s schemes (cf. Figure 6.14). The optimum block size is 128, because here the curve has the maximum BER of 0.5 at zero correlation. Larger or smaller block sizes lead to lower BER curves, introducing potential security defects. The symbol size has (similar to Jana et al.’s scheme) an effect on the strength of the degressive curve behavior.

The scheme by Patwari et al. [27] also has a degressive curve progression (cf. Figure 6.15). Again the block size plays an important role. However, here a block size larger than 64 provides 0.5 BER for zero correlation. Security defects are only given for smaller sizes.

The quantization scheme introduced by Ambekar et al. [28] also has a degressive curve progression (cf. Figure 6.16). The BER-to-correlation curve is also block size dependent: the larger the block size, the higher the BER at zero correlation. Unfortunately, at a block size of 2048 the scheme stagnates at a BER of 0.27. For block sizes of {16, 32, 64, 128} the BER is even lower: {0.05, 0.08, 0.14, 0.19}. As described before, this leads to a serious security problem, insofar as even with zero-correlated measurements an adversary may recover large parts of the initial key material.

The quantization scheme of Zenger et al. [29] also has a degressive curve progression (cf. Figure 6.17). The BER-to-correlation curve is also block size dependent: the larger the block size, the higher the BER at zero correlation. A block size larger than 128 provides a BER of 0.5 for zero correlation; security defects are given for smaller sizes. Interesting to mention is the very sharp degressive behavior. This means that attackers (or even legitimate parties) whose observations are not almost perfectly correlated achieve high BERs. This is a good security feature; however, such high correlations might not be given consistently in real environments.

Figure 6.10: Evaluation results based on the Monte Carlo simulation of the quantization scheme of Mathur et al. [30]. The BER versus correlation coefficient $\displaystyle \rho$ is presented.

The quantization scheme using Lloyd-Max thresholding also has a degressive curve progression (cf. Figure 6.18). The BER-to-correlation curve is slightly block size dependent: the larger the block size, the more degressive the curve. The same holds for increasing symbol sizes. Therefore, with smaller symbol sizes and smaller block sizes the scheme becomes more robust. The quantization scheme using mean-value thresholding has a degressive curve progression (cf. Figure 6.19). The scheme provides a secure behavior if the block size is smaller than 512 samples per block.

The quantization scheme using median-value thresholding has a degressive curve progression (cf. Figure 6.19). The scheme provides a secure behavior if the block size is larger than 64 samples per block. Furthermore, for this scheme the following holds: the larger the block size, the less robust and the more secure the correlation behavior.
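The two single-bit thresholding rules can be sketched in a few lines of Python (a simplified per-block model; the block size of 256 is an illustrative choice):

```python
import numpy as np

def threshold_quantize(block, method="mean"):
    # Single-bit quantization of one block of channel samples against a
    # per-block mean or median threshold (lossless: one bit per sample).
    t = np.mean(block) if method == "mean" else np.median(block)
    return (block > t).astype(int)

rng = np.random.default_rng(2)
block = rng.rayleigh(size=256)     # one block of Rayleigh channel samples

bits_mean = threshold_quantize(block, "mean")
bits_median = threshold_quantize(block, "median")
```

One design difference worth noting: the median threshold yields balanced output bits by construction (exactly half the samples of a block lie above the median), whereas the mean threshold does not, because the Rayleigh distribution is skewed. Balanced output bits directly benefit the bit-entropy of the preliminary key material.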

## Key Extraction Efficiency

Figure 6.21 illustrates the IKGR as a function of the correlation coefficient $\displaystyle \rho$ for our set of quantization schemes. The IKGR is defined as the average ratio $\displaystyle IKGR = \frac{|q|}{|h|}$ of the length of the emitted bit stream $\displaystyle q$ to the number of samples $\displaystyle h$ provided as input. As expected, the lossless quantization techniques always generate one bit per sample in the case of single-threshold techniques, or two bits per sample in the case of multi-bit quantization. The lossy quantization schemes drop various samples and, thus, cannot extract the same amount of possibly valuable information. Especially the excursion detection technique proposed by Mathur reduces the IKGR to values below 0.2 bit per sample. All schemes are fairly unaffected by the correlation coefficient $\displaystyle \rho$ . Next, we would like to extend the evaluation of the key extraction performance by introducing information-theoretic measures between the channel profiles and the preliminary key material (the quantizer’s output). To this end, we apply the k-NNE implementation of Kraskov et al. [31] to calculate the mutual information, as well as the entropies, the equivocation, and the irrelevance. For evaluating lossy quantization schemes, we apply the following strategy: to calculate the mutual information properly, we skip dropped symbols. The mutual information is then calculated only from the corresponding channel profiles and the quantizer’s output. The ratio between the dropped profiles and all profiles is 1 − IKGR. The result of the analyses is illustrated using Venn diagrams. For example, in Figure 6.22 the results of three schemes are illustrated. On the top, the results of two lossy schemes (Mathur et al.’s and Jana et al.’s single-bit) are given. Generally, coarser quantization thresholds result in a lower mutual information. \\TODO ADD IMAGE p.152

Tope et al.’s scheme [32] achieves an IKGR of 0.125 bit per channel profile. The Venn diagram shows a mutual information of 0.46 between the initial key material and the extracted bits as well as an equivocation of 0.535, while the entropy of the channel profiles is 4.23 bits per channel profile. The large share of equivocation originates from the specific key extraction method that combines two channel profiles and uses the combination for key extraction. The results of Jana et al.’s single-bit scheme [33] are given on the top left side of Figure 6.22. Here the entropy of the channel profiles is 3.82 bits per channel profile. The difference between the input entropies for Jana et al. and Mathur et al. is due to the selected channel profiles. The mutual information is 0.97 and the equivocation is 0. The result shows that the entropy of the extracted material is almost perfect. Note that the k-NNE can only estimate the Shannon entropy.
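The IKGR definition can be made concrete with a small Python sketch contrasting a lossless single-threshold quantizer with a lossy guard-band one (the threshold choices are illustrative assumptions, not a specific published scheme):

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.rayleigh(size=10_000)          # channel samples (input material)

# Lossless single-threshold quantizer: one output bit per input sample,
# so IKGR = |q| / |h| = 1.
bits_lossless = (h > h.mean()).astype(int)
ikgr_lossless = len(bits_lossless) / len(h)

# Lossy guard-band quantizer (illustrative alpha = 0.5): samples inside
# the guard band are dropped, so IKGR = |q| / |h| falls below 1.
alpha = 0.5
mu, sigma = h.mean(), h.std()
kept = (h > mu + alpha * sigma) | (h < mu - alpha * sigma)
ikgr_lossy = kept.sum() / len(h)
```

The fraction of dropped samples, 1 − IKGR, is exactly the share of profiles that must be excluded when computing the mutual information for lossy schemes, as described above.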

Two results for Jana et al.’s multi-bit scheme [34] are given at the bottom of Figure 6.22. Here the parameters N = 1 and N = 4 are chosen to demonstrate the relationship between the increasing number of thresholds $\displaystyle (2^2 - 1$ and $\displaystyle 2^4 - 1)$ and the increasing amount of mutual information. Here a small amount of equivocation is given. This might be caused by round-off errors. Table 6.3 summarizes the results of the information-theoretic evaluation of the key extraction. Furthermore, we provide an illustration of the key extraction efficiency in Figure 6.23. The scheme by Zenger et al. [35] shows very small mutual information and large equivocation.

This is attributed to the fact that the correlation of the input data is not high enough and, therefore, the entropy-maximizing behavior (equally distributed output symbols) generates random allocations to the symbols.

## Online Entropy Estimation

The results of the on-line bit entropy estimation of the preliminary key material over time are given in Figure 6.24(a). Here the draft 800-90B of NIST [20] is applied and the worst-case estimation result of the five tests is used, as recommended by the draft. We investigated whether those tests are applicable in an on-line scenario in terms of performance and implemented them. As we already stated, the on-line estimation ensures the current security level by responding to statistical defects of the material. As an example, we evaluated the influence of the channel sampling rate. Because of the high channel sampling rate $\displaystyle r_s$ of 100 Hz, the key material is highly correlated in time and therefore the estimated entropy is relatively low. Reducing the sampling rate (by applying downsampling in our framework) helps to find the optimal sampling rate at the maximum estimated entropy, as demonstrated in Figure 6.24(b). Interesting to mention is the different behavior of the different quantization schemes. This may be because of statistical defects of the schemes themselves, because of the amount of input material, and/or because of the sample selectivity of the (lossy) schemes.
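The effect of oversampling on the estimated entropy can be reproduced with a small Python sketch. Note that the block-entropy estimator below is a crude stand-in for the five NIST SP 800-90B estimators used in the text, and the AR(1) process is an illustrative model of a strongly oversampled channel:

```python
import numpy as np

def block_entropy(bits, k=4):
    # Empirical Shannon entropy per bit over non-overlapping k-bit patterns.
    n = len(bits) // k
    patterns = bits[:n * k].reshape(n, k) @ (1 << np.arange(k))
    p = np.bincount(patterns, minlength=1 << k) / n
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / k)

rng = np.random.default_rng(4)
N = 200_000
noise = rng.standard_normal(N)
x = np.zeros(N)
for i in range(1, N):                  # oversampled channel: AR(1) process
    x[i] = 0.99 * x[i - 1] + noise[i]
bits = (x > np.median(x)).astype(int)  # median-threshold quantization

h_raw = block_entropy(bits)            # low: samples are correlated in time
h_down = block_entropy(bits[::100])    # downsampled: far less correlated
```

As in Figure 6.24(b), downsampling the highly correlated raw material raises the estimated entropy per bit considerably.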

// ADD 5 IMAGES 153

## Summary

Designing an optimal quantization scheme requires solving a multi-variable optimization problem.

The first task of a quantizer is to make the key extraction robust. In general, this means that the mutual information between A’s and B’s quantized channel profiles needs to be maximized while, simultaneously, the equivocation and the irrelevance need to be minimized. In other words, a low BER is required. However, for known quantization schemes this step decreases the amount of entropy: the more robust the alignment, the higher the entropy loss.

To increase the bit-entropy, the quantization scheme can modify the probability distribution from a Rayleigh distributed input to a uniformly distributed output. High bit-entropy is required for secure information reconciliation, as we will see later on. However, this step also increases the BER.

A third feature of a quantization scheme is to provide a high BER for low correlated channel profiles. Preliminary key material with zero bit errors is of little worth if a potential adversary can also guess the correct key material with little effort.

// ADD 2 IMAGES p Figure6.20

A perfect quantization scheme would:

1. minimize the BER between both legitimate nodes,
2. maximize the BER for low correlated observations,
3. maximize the amount of mutual information between channel profile and quantization output for a given BER, and
4. maximize the bit-entropy.

1. 103
2. 221
3. 73
4. 76
5. 231
6. 132
7. 103
8. 15
9. 227
10. 191
11. 148
12. 103
13. 11
14. 103
15. 132
16. 227
17. 191
18. 15
19. 17
20. 191
21. 132
22. 15
23. 103
24. 103
25. 81
26. 204
27. 148
28. 11
29. 227
30. 132
31. 113
32. 191
33. 103
34. 103
35. 227