As demand for big-data processing grows faster than traditional CPUs can scale, new computing paradigms and architectures are needed to meet the needs of our data-hungry world. Ising computing and probabilistic computing have emerged as methods to solve NP-hard optimization problems (such as logistics and circuit place-and-route), perform machine learning training and inference, model decision making in animal brains, and much more.
This work centers on parallelized computing algorithms and hardware based on probabilistic formulations of the Ising Model, known as Boltzmann Machines. The algorithms demonstrated draw on machine learning, structured algorithmic mapping, and mixed approaches. Using these algorithms, we demonstrate solutions to 16-bit integer factorization, 3-SAT, and LDPC decoding, along with scalable techniques applicable to a variety of problems.
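To make the Boltzmann Machine formulation concrete, the sketch below shows the standard probabilistic update at its core: each spin is resampled from a sigmoid of its local field, so the network samples from the Boltzmann distribution of an Ising energy. This is a minimal illustration of the general technique, not the specific parallelized algorithms of this work; the couplings in the toy example are hypothetical.

```python
import numpy as np

def gibbs_sweep(s, J, h, rng, beta=1.0):
    """One asynchronous Gibbs sweep over the spins of a Boltzmann machine.

    s : spin vector in {-1, +1}^n
    J : symmetric coupling matrix with zero diagonal
    h : bias vector
    beta : inverse temperature
    """
    for i in rng.permutation(len(s)):
        # Local field felt by spin i from its neighbors and bias
        field = J[i] @ s + h[i]
        # Probability that s_i = +1 under the Boltzmann distribution
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_up else -1
    return s

def ising_energy(s, J, h):
    # E(s) = -1/2 s^T J s - h^T s
    return -0.5 * s @ J @ s - h @ s

# Toy example (hypothetical couplings): a 2-spin ferromagnet,
# whose low-energy states are the two aligned configurations.
rng = np.random.default_rng(0)
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.zeros(2)
s = np.array([1, -1])
for _ in range(100):
    gibbs_sweep(s, J, h, rng)
print(s, ising_energy(s, J, h))
```

Lowering the temperature (raising `beta`) over time turns this sampler into simulated annealing; the hardware in this work parallelizes and accelerates updates of exactly this form.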
We map these algorithms to a variety of hardware, from small- and medium-scale (thousands of nodes) FPGA implementations to large- and ultra-large-scale (>100,000 nodes) GPU and TPU instances. These accelerated implementations demonstrate state-of-the-art performance on the MAX-CUT benchmark problem.
Finally, we demonstrate the Parallel Asynchronous Stochastic Sampler (PASS), a neuromorphic, clock-free accelerator that mimics brain-like asynchronous computation using the Ising Model. This architecture offers the potential for orders-of-magnitude speedups over traditional methods for solving these problems, while being the first on-chip, fully CMOS demonstration of such an architecture.