Codebook design, generation, and adaptation, based on matching to stochastic source examples or on prior knowledge of the source distribution, have played a central role in many applications of source coding. The original iterative "natural type selection" (NTS) algorithm performs stochastic codebook generation for memoryless sources and achieves the rate-distortion bound, as it asymptotically converges to the optimal codebook reproduction distribution, Q*. However, these optimality results are subject to significant limitations that compromise the practical applicability of NTS, namely: i) the string length L is required to go to infinity at the outset, before NTS iterations begin, whereas the iteration complexity is exponential in L; and ii) the algorithm is only applicable to discrete, memoryless sources, thus precluding a vast portion of important lossy coding applications. This thesis eliminates or circumvents these critical shortcomings by proposing new and enhanced NTS algorithms, complemented by optimality proofs that are not subject to the above limitations. To circumvent the need to start with asymptotically large string length L, the approach leverages a maximum likelihood framework to estimate, at each NTS iteration n, the reproduction distribution most likely to generate the sequence of K length-L codewords that respectively and independently "d-match" (i.e., lie within distortion d of) a sequence of K length-L source words. The reproduction distribution estimated at iteration n is used to regenerate the codebook for iteration n+1. The sequence of reproduction distributions estimated by NTS is shown to converge, asymptotically in K, n, and L (in that order), to the optimal distribution that achieves the rate-distortion bound; thus, the string length L is the last parameter sent to infinity. Moreover, it is established that, for finite length L, the new NTS algorithm converges to the best distribution achievable under the string-length constraint. Details are provided for various types of sources, and numerical simulations show that the algorithm's rate of convergence in n at finite L is at least as fast as its convergence in n at infinite L. To handle sources with memory, NTS is further generalized by considering source sub-vectors, or "super-symbols", of memory depth M during the d-match search in the codebook, the maximum likelihood estimation of the reproduction distribution, and the codebook regeneration. Asymptotic convergence, in L and M, to the optimal reproduction distribution is also established for sources with memory. Finally, for the perhaps more challenging case of sources over continuous alphabets, which are inconsistent with the traditional concepts of "type" and "typical sequence", the proposed asymptotically optimal approach employs empirical probability measures to estimate the codebook reproduction distribution.
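As a concrete illustration, the sketch below shows one iteration of the finite-L NTS loop for a discrete memoryless source. All names (nts_iteration, d_match), the Hamming distortion measure, and the one-codebook-per-source-word matching are illustrative assumptions rather than the thesis's exact construction; for a memoryless source, the maximum likelihood estimate of the reproduction distribution given the matched codewords reduces to their empirical symbol distribution.

```python
# Minimal sketch of one NTS iteration (discrete memoryless source);
# names, distortion measure, and matching protocol are assumptions.
import numpy as np

def d_match(codebook, source_word, d):
    """Return the first codeword within per-letter Hamming distortion d
    of source_word, or None if no codeword d-matches."""
    for codeword in codebook:
        if np.mean(codeword != source_word) <= d:
            return codeword
    return None

def nts_iteration(Q, source_words, alphabet_size, codebook_size, d, rng):
    """One NTS step: for each of the K length-L source words, generate a
    fresh codebook i.i.d. from Q and search it for a d-match; then
    ML-estimate the next reproduction distribution as the empirical
    symbol distribution of all matched codewords."""
    L = source_words.shape[1]
    matches = []
    for x in source_words:
        codebook = rng.choice(alphabet_size, size=(codebook_size, L), p=Q)
        y = d_match(codebook, x, d)
        if y is not None:
            matches.append(y)
    if not matches:
        return Q  # no d-match found at this iteration; keep current estimate
    counts = np.bincount(np.concatenate(matches), minlength=alphabet_size)
    return counts / counts.sum()  # reproduction distribution for iteration n+1
```

In the full algorithm, this update is repeated over n iterations, and the estimated distributions converge, in K, n, and L, to the optimum; the super-symbol generalization for sources with memory replaces single symbols with length-M sub-vectors in both the matching and the estimation steps.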
Methodologies for optimal codebook generation and adaptation are further developed and employed in two promising example applications, in the areas of i) wireless communications and ii) machine learning. In particular, for 5G cellular systems and next-generation wireless local area networks, directional beamforming with large antenna arrays is key to mitigating the substantial signal loss experienced in the millimeter wave frequency band, but it entails a significant increase in the number of beams required to maintain cell coverage, and hence in the beam management overhead necessary to maintain links with mobile users. This observation, in turn, suggests that the underlying problem of finding the optimal set of beam steering directions will benefit from fundamental signal processing and codebook design methodologies, and specifically from basic principles and algorithms for cluster analysis. This part of the thesis establishes and exploits the equivalence between the problem of optimizing a set of beam steering directions and the classical problems of clustering in pattern recognition and codebook design in data compression, albeit with an unusual distortion measure. A global optimization approach within the deterministic annealing framework is then derived to circumvent the poor local optima that riddle the cost surface and trap classical gradient-descent clustering techniques. System simulation results show that the proposed approaches deliver considerable gains over baseline beam steering techniques in terms of average signal-to-noise ratio.
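To make the deterministic annealing component concrete, the following sketch implements the generic DA clustering loop under squared-error distortion; the beam steering application replaces this with its own (unusual) distortion measure and steering-direction centroids, so all names and parameter values here are illustrative assumptions.

```python
# Generic deterministic-annealing clustering sketch (squared-error
# distortion assumed; the thesis's beam-steering distortion differs).
import numpy as np

def da_cluster(points, n_clusters, T_init=10.0, T_min=1e-3, alpha=0.9,
               n_inner=50, rng=None):
    """At each temperature T, alternate soft (Gibbs) assignments and
    centroid updates, then cool T <- alpha * T."""
    rng = rng or np.random.default_rng(0)
    centroids = points[rng.choice(len(points), n_clusters, replace=False)]
    T = T_init
    while T > T_min:
        for _ in range(n_inner):
            # squared-error distortion between each point and each centroid
            dist = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            # Gibbs association probabilities at temperature T
            logits = -dist / T
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # centroid update: association-probability-weighted means
            w = p.sum(axis=0)
            centroids = (p.T @ points) / w[:, None]
        T *= alpha  # cooling schedule
    return centroids
```

The key design choice is the temperature schedule: at high T the Gibbs associations are nearly uniform and the cost surface is effectively smoothed, and gradual cooling tracks the solution toward a hard clustering while avoiding many of the poor local optima that trap greedy descent.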
The third part of the thesis is concerned with codebook design and adaptation for machine learning and artificial intelligence. Machine learning applications have exploded in recent years due to the availability of huge data sets, as well as advances in computational and storage capabilities. Although successful methods have been proposed to reduce learning system complexity while maintaining required accuracy levels, theoretical understanding of the underlying trade-offs remains elusive. In this work, the classical supervised learning problem is reformulated within a rate-distortion framework, which provides insight into the underlying accuracy-complexity trade-offs by viewing the overall learning system as consisting of two components. The first component extracts (learns) from the source the minimal number of information bits necessary to ultimately achieve the prescribed output accuracy. The learned bits are then used to retrieve the desired output from the second component, an appropriately designed codebook. The premise is that an optimal system learns the minimum amount of information from the source that is just sufficient to yield the system output at the desired precision, which implies efficiency in terms of system complexity, generalization, and training data requirements. The design and training of such a reformulated system are detailed, and asymptotically optimal performance, achieving the rate-distortion bound, is established.
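The sketch below illustrates the two-component decomposition described above with a deliberately simple instantiation: the "encoder" is a nearest-prototype quantizer of the input, its index constitutes the learned bits, and a jointly trained codebook maps that index to the output. The Lloyd-style alternation, the names, and the squared-error criterion are assumptions for illustration; the thesis develops more general design and training procedures.

```python
# Toy two-component learner: encoder -> n_bits index -> codebook lookup.
# Lloyd-style training and squared-error criterion are assumptions.
import numpy as np

def train_two_component(x, y, n_bits, n_iters=50, rng=None):
    """Jointly fit an input quantizer (prototypes) and an output codebook,
    so that predict(x) = codebook[encode(x)]."""
    rng = rng or np.random.default_rng(0)
    K = 2 ** n_bits  # number of codebook entries, i.e. rate = n_bits
    idx0 = rng.choice(len(x), K, replace=False)
    prototypes, codebook = x[idx0].copy(), y[idx0].copy()
    for _ in range(n_iters):
        # encoder: nearest prototype index = the "learned bits" for each input
        d = ((x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        # update: each cell's prototype and codebook entry become cell means
        for k in range(K):
            mask = idx == k
            if mask.any():
                prototypes[k] = x[mask].mean(axis=0)
                codebook[k] = y[mask].mean(axis=0)
    return prototypes, codebook

def predict(x_new, prototypes, codebook):
    """Encode each input to its nearest prototype's index, then retrieve
    the corresponding codebook entry as the output."""
    d = ((x_new[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(axis=1)]
```

Under the rate-distortion view, n_bits plays the role of the rate and the output error that of the distortion; the premise above corresponds to operating at the smallest rate sufficient for the prescribed accuracy.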