Efficient image annotation is necessary for applying deep learning object recognition networks in nuclear safeguards, for example to detect and localize target objects such as nuclear material containers (NMCs). This capability can help automate inventory accounting of the different types of NMCs stored in nuclear storage facilities. The conventional manual annotation process is labor-intensive and time-consuming, hindering the rapid deployment of deep learning models for NMC identification. This paper introduces a novel semi-automatic method for annotating 2D images of NMCs by combining 3D light detection and ranging (LiDAR) data with color and depth camera images collected from a handheld scanning system. In the annotation pipeline, an operator manually marks new target objects on a LiDAR-generated map; these 3D locations are then projected into the camera images, and annotations are created automatically from the projections. The semi-automatic approach significantly reduces the manual effort and image-annotation expertise required, allowing deep learning models to be trained on-site within a few hours. The paper compares the performance of models trained on datasets annotated by the semi-automatic method, by hand, and by commercial annotation services. The evaluation demonstrates that the semi-automatic method achieves comparable or superior results, with a mean average precision (mAP) above 0.9, showing its efficiency for training object recognition models. Additionally, the paper extends the proposed method to instance segmentation, achieving promising results in detecting multiple types of NMCs in various formations.
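
The core projection step described in the abstract can be sketched as follows. This is a minimal pinhole-camera illustration, not the authors' implementation: the intrinsics `K`, the extrinsics `R`, `t`, and the helper function names are assumptions introduced for clarity.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world-frame points (e.g. a marked NMC's corners from the
    LiDAR map) into pixel coordinates using a pinhole camera model.
    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                              # camera frame -> homogeneous pixels
    return (uv[:2] / uv[2]).T                 # perspective divide -> Nx2 pixels

def bbox_from_points(uv, img_w, img_h):
    """Axis-aligned 2D bounding-box annotation from projected points,
    clipped to the image bounds."""
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return (max(0.0, x0), max(0.0, y0), min(float(img_w), x1), min(float(img_h), y1))
```

Under this sketch, one 3D mark on the LiDAR map yields a bounding box in every camera frame whose pose is known, which is what lets a single manual action annotate many images.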