Maintaining computational load balance is critical to the performance of
codes that operate under a distributed computing model. This is
especially true for GPU architectures, which can suffer from memory
oversubscription if improperly load balanced. We present enhancements to
traditional load balancing approaches and explicitly target GPU architectures,
exploring the resulting performance. A key component of our enhancements is the
introduction of several GPU-amenable strategies for assessing compute work.
These strategies are implemented and benchmarked to identify the
best-performing data collection methodology for in situ assessment of GPU
compute work. For the
fully kinetic particle-in-cell code WarpX, which supports MPI+CUDA parallelism,
we investigate the performance of the improved dynamic load balancing via a
strong scaling-based performance model and show that, for a laser-ion
acceleration test problem run with up to 6144 GPUs on Summit, the enhanced
dynamic load balancing achieves 62%--74% (88% when running on 6 GPUs) of
the theoretically predicted maximum speedup; for the 96-GPU case, we find that
dynamic load balancing improves performance relative to baselines without load
balancing (3.8x speedup) and with static load balancing (1.2x speedup). Our
results provide important insights into dynamic load balancing and performance
assessment, and are particularly relevant in the context of distributed memory
applications run on GPUs.
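To make the notion of dynamic load balancing concrete, the following is a minimal illustrative sketch (not the WarpX implementation) of a greedy "knapsack"-style assignment of measured per-box compute costs to ranks, a common redistribution strategy in this class of codes; the function name and interface are hypothetical.

```python
import heapq

def knapsack_assign(costs, nranks):
    """Illustrative greedy load balancer (hypothetical, for exposition only):
    assign each work unit ("box"), indexed by position in `costs`, to one of
    `nranks` ranks by placing the heaviest remaining box on the currently
    least-loaded rank."""
    # Min-heap of (accumulated cost, rank), so the least-loaded rank pops first.
    heap = [(0.0, r) for r in range(nranks)]
    heapq.heapify(heap)
    assignment = [None] * len(costs)
    # Process boxes from most to least expensive.
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, rank = heapq.heappop(heap)
        assignment[i] = rank
        heapq.heappush(heap, (load + costs[i], rank))
    return assignment
```

For example, costs `[4, 3, 3, 2]` on 2 ranks yield per-rank loads of 6 and 6; in a dynamic scheme the `costs` input would come from in situ measurement of GPU compute work each rebalance interval.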