Massive multiple-input multiple-output (MIMO) is a promising technology for next-generation communication systems, where the base station (BS) is equipped with a large number of antenna elements to serve multiple user equipments (UEs). With such a large array, the BS can perform multi-user beamforming with much narrower beams, thereby simultaneously serving more users with less mutual interference. Furthermore, the large antenna array provides a large array gain, which lowers the required radiated energy. However, efficient beamforming relies on the availability of channel state information at the BS. In a frequency-division duplexing (FDD) massive MIMO system, channel estimation is challenging because a high-dimensional unknown channel vector must be estimated, which requires large training and feedback overhead for conventional channel estimation algorithms. Moreover, a massive MIMO system with a fully digital architecture, where a dedicated radio frequency chain and a high-resolution analog-to-digital converter (ADC) are connected to each antenna element, incurs prohibitive power consumption and hardware cost as the antenna array grows.
To reduce the training and feedback overhead, compressive sensing methods and sparse recovery algorithms have been proposed to robustly estimate the downlink and uplink channels by exploiting the sparse representation of the massive MIMO channel. Previous works model this sparse representation with a predefined matrix, while in this dissertation a dictionary learning based channel model is proposed that learns an efficient and robust representation from data. Furthermore, a joint uplink/downlink dictionary learning framework is proposed by observing the reciprocity between the angle of arrival in the uplink and the angle of departure in the downlink, which enables a joint channel estimation algorithm.

To save power and hardware cost, a hardware-efficient architecture that combines hybrid analog-digital processing with low-resolution ADCs is proposed. This architecture poses significant challenges to channel estimation due to the reduced dimension and precision of the measured signal. To address this problem, the sparse nature of the channel is exploited and the transmitted data symbols are utilized as "virtual pilots", both of which are treated in a unified Bayesian formulation. We formulate channel estimation as a quantized compressive sensing problem under the sparse Bayesian learning framework, and develop a variational Bayesian algorithm for inference. The performance of compressive sensing can be further improved by a well-structured sensing matrix, and we propose a sensing matrix design algorithm that exploits partial knowledge of the channel support.
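The sparse Bayesian learning idea underlying the quantized compressive sensing formulation can be illustrated with a minimal numerical sketch. The code below is an assumption-laden toy, not the dissertation's algorithm: it uses the classic unquantized EM updates for sparse Bayesian learning (per-coefficient Gaussian priors whose variances are re-estimated until most shrink toward zero), a random stand-in for the learned dictionary, and a random pilot matrix.

```python
import numpy as np

def sbl_estimate(Phi, y, noise_var, n_iter=50):
    """EM-based sparse Bayesian learning: recover sparse x from y = Phi @ x + n.

    Each coefficient x_i gets a zero-mean Gaussian prior with variance gamma_i;
    the EM iterations drive most gamma_i toward zero, pruning inactive atoms.
    """
    M, N = Phi.shape
    gamma = np.ones(N)                      # prior variances (hyperparameters)
    for _ in range(n_iter):
        gamma = np.maximum(gamma, 1e-12)    # numerical floor before inversion
        # Posterior covariance and mean of x given the current hyperparameters.
        Sigma = np.linalg.inv(np.diag(1.0 / gamma) + Phi.T @ Phi / noise_var)
        mu = Sigma @ Phi.T @ y / noise_var
        # EM update of the per-coefficient prior variances.
        gamma = mu**2 + np.diag(Sigma)
    return mu

# Toy example: a channel that is sparse in an (assumed) dictionary domain.
rng = np.random.default_rng(0)
N, M, K = 64, 24, 3                                 # atoms, pilots, active paths
D = np.linalg.qr(rng.standard_normal((N, N)))[0]    # stand-in unitary dictionary
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = 2.0 + rng.random(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)        # random pilot/sensing matrix
noise_var = 1e-4
y = A @ (D @ x_true) + np.sqrt(noise_var) * rng.standard_normal(M)

x_hat = sbl_estimate(A @ D, y, noise_var)           # sparse coefficient estimate
h_hat = D @ x_hat                                   # reconstructed channel
```

Even with far fewer pilot measurements than antenna elements (24 versus 64 here), the sparsity prior lets the estimator recover the few active dictionary coefficients; the dissertation's variational Bayesian algorithm additionally handles the low-resolution quantization of `y`, which this sketch ignores.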