In today’s complex networked world, graph-structured data has emerged as a fundamental element across a multitude of fields, including social networks, biological systems, and communication infrastructures. The ability of graphs to represent complex relationships and interactions makes them invaluable for understanding and leveraging data in these domains. As artificial intelligence continues to advance, deep learning has become a pivotal technology, driving breakthroughs across a wide range of applications. In particular, deep learning techniques have transformed data analysis by automating feature extraction and improving predictive accuracy. Among these techniques, Graph Neural Networks (GNNs) have gained prominence for their ability to handle graph-structured data effectively. By leveraging the intrinsic relationships between nodes and edges, GNNs extend the power of deep learning to graph-based tasks such as node classification, link prediction, and graph generation. Through their message-passing mechanisms, these models enable sophisticated learning from the rich structural information present in graphs.

Despite their success, GNNs face significant challenges, most notably their vulnerability to adversarial attacks. These attacks, which involve deliberate modifications to graph structures, such as adding, removing, or rewiring edges, can severely compromise model performance and security, particularly in sensitive applications like financial systems and network security. To address these vulnerabilities, this dissertation develops methodologies that enhance the robustness of GNNs against adversarial threats. It introduces novel attack models that leverage meta-learning and convex relaxation techniques to generate and evaluate adversarial perturbations effectively. These models employ advanced optimization strategies, including Focal Loss Projected Momentum and Projected Metattack, to expose the vulnerabilities of GNNs.
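To make the projected-momentum idea concrete, the sketch below shows one generic attack step: gradients of an attack loss with respect to the adjacency matrix are accumulated with momentum, then projected onto a discrete edge-flip budget by keeping only the highest-scoring candidate edges. The function name, the top-k projection, and the toy inputs are illustrative assumptions for exposition, not the dissertation's actual Focal Loss Projected Momentum or Projected Metattack implementation.

```python
import numpy as np

def projected_momentum_attack_step(grad, velocity, momentum=0.9, budget=2):
    """One hypothetical step of a momentum-accumulated edge-flip attack.

    grad     : gradient of the attack loss w.r.t. the adjacency matrix
    velocity : running momentum buffer (same shape as grad)
    budget   : maximum number of undirected edge flips allowed

    Returns a binary symmetric perturbation matrix and the updated
    momentum buffer. The projection keeps only the `budget` entries
    with the largest accumulated scores (an L0-budget projection).
    """
    velocity = momentum * velocity + grad
    # Undirected graph: score each candidate edge once (upper triangle).
    scores = np.abs(np.triu(velocity, k=1))
    top = np.argsort(scores.ravel())[::-1][:budget]
    rows, cols = np.unravel_index(top, grad.shape)
    perturbation = np.zeros_like(grad)
    perturbation[rows, cols] = 1.0
    perturbation[cols, rows] = 1.0  # symmetrize the selected flips
    return perturbation, velocity

# Toy surrogate gradient: edges (0,1) and (2,3) carry the largest signal.
grad = np.zeros((4, 4))
grad[0, 1] = grad[1, 0] = 5.0
grad[2, 3] = grad[3, 2] = 3.0
grad[0, 2] = grad[2, 0] = 1.0
pert, vel = projected_momentum_attack_step(grad, np.zeros_like(grad))
```

With a budget of two, the projection selects exactly the two edges with the strongest accumulated gradient signal, mirroring how budgeted structure attacks stay within a fixed perturbation size.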
In parallel, a defense mechanism based on Non-Negative Matrix Factorization (NMF) is developed to purify graph structures. This approach employs matrix decomposition to separate the true graph structure from adversarial noise, yielding significant improvements in resilience against poisoning attacks. Additionally, the dissertation improves the interpretability of graph learning methods through graph kernels based on Optimal Transport (OT) theory. These kernels enhance node embeddings and predictive modeling by offering a clearer view of graph data, particularly in biological applications, and provide a transparent and efficient alternative to traditional deep learning methods, facilitating better insight into the decision-making processes of graph-based systems. Collectively, these contributions advance the state of the art in robust graph learning by addressing critical issues of model security, interpretability, and optimization, thereby enhancing the reliability and applicability of graph-based systems across various domains.

Beyond these core contributions on robust and interpretable graph learning, the dissertation explores advances in related fields. It applies deep unrolling techniques to improve high dynamic range (HDR) imaging from low dynamic range (LDR) inputs, developing a novel algorithm for ghost-free HDR synthesis. It also introduces a self-supervised learning method that combines contrastive learning with optimal transport for more efficient and scalable clustering. Finally, it addresses challenges in genomics by presenting an optimization framework for detecting structural variants, improving detection accuracy in low-coverage sequencing scenarios. Together, these contributions extend the dissertation's impact beyond graph-based learning, advancing both theoretical and practical aspects across multiple domains.
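The NMF-based purification idea can be sketched in a few lines: the (possibly poisoned) adjacency matrix is approximated by a low-rank non-negative factorization, and the reconstruction serves as the cleaned structure, since low-rank community structure is preserved while isolated adversarial edges are suppressed. This is a minimal sketch using standard multiplicative updates for the Frobenius objective; the function name, rank choice, and toy two-cluster graph are assumptions for illustration, not the dissertation's exact defense.

```python
import numpy as np

def nmf_purify(adj, rank=2, n_iter=300, seed=0, eps=1e-9):
    """Hypothetical NMF-based graph purification sketch.

    Factors the adjacency matrix as adj ~ W @ H with non-negative
    factors (multiplicative updates minimizing ||adj - WH||_F^2),
    then returns the low-rank reconstruction as the cleaned graph.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    W = rng.random((n, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Standard multiplicative update rules; eps avoids division by zero.
        H *= (W.T @ adj) / (W.T @ W @ H + eps)
        W *= (adj @ H.T) / (W @ H @ H.T + eps)
    return W @ H

# Toy poisoned graph: two dense clusters plus one adversarial cross edge.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
A[0, 3] = A[3, 0] = 1.0  # injected adversarial edge between clusters
A_clean = nmf_purify(A, rank=2)
```

In this toy example the rank-2 reconstruction keeps within-cluster edges strong while the injected cross-cluster edge is reconstructed with a much weaker weight, so a simple threshold would discard it.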
The developed methods were rigorously evaluated on real-world graph and imaging datasets, demonstrating their effectiveness and robustness. Comparative studies show that they match, and often exceed, the performance of state-of-the-art approaches. These evaluations highlight the ability of the proposed techniques to address complex challenges in robust and interpretable learning.