We present a joint deep-learning model designed to inpaint the missing-wedge sinogram in electron tomography and to reduce the residual artifacts in the reconstructed tomograms. Traditional methods such as weighted back projection (WBP) and the simultaneous algebraic reconstruction technique (SART) cannot recover the projection information left unacquired because of the limited tilt range; consequently, tomograms reconstructed with these methods are distorted and contaminated by elongation, streaking, and ghost-tail artifacts. To tackle this problem, we first design a sinogram-filling model built from Residual-in-Residual Dense Blocks within a Generative Adversarial Network (GAN). We then use a U-Net-structured GAN to reduce the residual artifacts. Together they form a two-step model that performs information recovery and artifact removal, each in its most suitable domain. Our method achieves higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) than WBP and SART; even with a missing wedge of 45°, it produces reconstructed images that closely resemble the ground truth with almost no artifacts. In addition, our model requires no input from human operators and no hyperparameter selection, such as the number of iteration steps or the relaxation coefficient used in TV-based methods, choices that rely heavily on human experience and parameter fine-tuning.
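To make the two-step idea concrete, below is a minimal, hypothetical PyTorch sketch of the first stage: a sinogram-inpainting generator assembled from simplified residual dense blocks. The class names (`ResidualDenseBlock`, `SinogramInpaintGenerator`), block counts, and channel sizes are illustrative assumptions and not the authors' released architecture; the actual model uses full Residual-in-Residual Dense Blocks trained adversarially, and a second U-Net-structured generator (not shown) would then clean the tomogram reconstructed from the inpainted sinogram.

```python
# Hypothetical sketch of stage 1 (sinogram inpainting); names and sizes are assumptions.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Simplified dense block; the paper uses full Residual-in-Residual Dense Blocks (RRDB)."""

    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        y = self.act(self.conv1(x))
        y = self.conv2(torch.cat([x, y], dim=1))
        return x + 0.2 * y  # residual scaling, as commonly used in RRDB-style generators


class SinogramInpaintGenerator(nn.Module):
    """Step 1: fill the missing-wedge region of a sinogram (tilt angles x detector bins)."""

    def __init__(self, blocks: int = 8, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, sinogram_with_wedge):
        x = self.head(sinogram_with_wedge)
        return self.tail(self.body(x) + x)  # global residual connection


if __name__ == "__main__":
    # Usage sketch: a single-channel sinogram of shape (batch, 1, n_angles, n_detectors),
    # where the missing-wedge rows are zero-filled before being passed to the generator.
    g1 = SinogramInpaintGenerator()
    filled_sinogram = g1(torch.randn(1, 1, 180, 256))
    print(filled_sinogram.shape)  # torch.Size([1, 1, 180, 256])
```

In the full pipeline described above, the filled sinogram would be reconstructed into a tomogram (e.g., with WBP or SART) and then passed through the U-Net-structured GAN to suppress the remaining elongation, streaking, and ghost-tail artifacts.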