Detecting and anticipating the global evolution of proliferation expertise and capabilities from unstructured, noisy, and incomplete public data streams is a highly desirable but extremely challenging task. In this article, we present our pioneering data-driven approach to supporting the non-proliferation mission: detecting and explaining the worldwide evolution of proliferation expertise and capability development from terabytes of publicly available information (PAI), with a focus on our knowledge extraction pipeline and descriptive analytics. We first describe how we fuse nine open-source data streams, including multilingual data, to convert 4 TB of unstructured data into structured knowledge and to encode dynamically evolving representations of proliferation expertise: content and context graphs. For this, we rely on natural language processing (NLP) and deep learning (DL) models for information extraction, topic modeling, and distributed text representation (embedding) learning. We then present interactive, usable, and explainable descriptive analytics that refine domain knowledge and present it in a human-understandable form. Finally, we outline avenues for future work that will build on our dynamic knowledge representations and descriptive analytics to enable predictive and prescriptive inference, supporting real-time domain understanding and contextual reasoning about the global evolution of proliferation expertise and capabilities.
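To make two of the pipeline steps named above concrete, the following is a minimal, illustrative sketch of topic modeling and document embedding on a toy corpus. It is not the authors' pipeline: it uses classical scikit-learn stand-ins (LDA and TF-IDF followed by truncated SVD) rather than the NLP and DL models applied at terabyte scale in the article, and the example documents and parameter values are invented for illustration.

```python
# Minimal sketch (not the authors' pipeline): classical stand-ins for two steps
# named in the abstract -- topic modeling and embedding learning -- on a toy corpus.
# All document text and parameter values below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD

# Toy stand-in for documents drawn from open-source data streams.
documents = [
    "Conference proceedings describe advances in isotope separation techniques.",
    "A research group publishes results on centrifuge materials science.",
    "A new preprint reports progress in laser enrichment experiments.",
    "A journal article reviews dual-use additive manufacturing methods.",
]

# Topic modeling: LDA over raw term counts groups documents into latent themes.
counts = CountVectorizer(stop_words="english").fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)        # shape: (n_docs, n_topics)

# Embedding learning (classical stand-in for a DL embedding model): TF-IDF
# followed by truncated SVD yields a dense distributed vector per document.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
svd = TruncatedSVD(n_components=2, random_state=0)
doc_embeddings = svd.fit_transform(tfidf)     # shape: (n_docs, 2)

print("Topic mixture per document:\n", doc_topics.round(2))
print("Dense document embeddings:\n", doc_embeddings.round(2))
```

In the approach described in the article, these steps are carried out with NLP and deep learning models over 4 TB of fused, multilingual open-source data; the sketch above only illustrates the shape of the inputs (unstructured text) and outputs (topic mixtures and dense document vectors) involved.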