Task-specific tangible input devices, like video game controllers, let users perform input tasks faster and more accurately than general-purpose devices such as touchscreens or mice and keyboards. However, while modifying a graphical user interface (GUI) to accept mouse and keyboard input for new, specific tasks is relatively easy and requires only software knowledge, tangible input devices are challenging to prototype and build.
Rapid prototyping digital fabrication machines, such as vinyl cutters, laser cutters, and 3D printers, now permeate the design process for such devices, letting designers realize new tangible designs faster than ever. In a typical design process, however, these machines do not create the interaction in these interactive product prototypes: they merely create the shell, case, or body, leaving the designer to assemble and program sensing electronics in an entirely separate process. What are the most cost-effective, fast, and flexible ways of sensing rapid-prototyped input devices? In this dissertation, we investigate how 2D and 3D models for input devices can be automatically generated or modified to employ standard, off-the-shelf sensing techniques for adding interactivity to those objects: we call this ``fabbing to sense.''
We describe the capabilities of modern rapid prototyping machines, linking these capabilities to potential sensing mechanisms where possible. We then examine three examples of sensing/fabrication links in depth, building analysis and design tools that help users design, fabricate, assemble, and \emph{use} input devices sensed through these links. First, we discuss Midas, a tool for building capacitive sensing interfaces on non-screen surfaces, like the back of a phone. Second, we describe Lamello, a technique that generates laser-cut and 3D-printed tine structures and simulates their vibrational frequencies for training-free audio sensing. Finally, we present Sauron, a tool that automatically modifies the interior of a 3D input model to allow sensing via a single embedded camera. We demonstrate each technique's flexibility across many types of input devices through a series of example objects.
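To give a sense of the physical model behind Lamello-style tines, the fundamental resonant frequency of a cantilevered tine can be estimated from Euler--Bernoulli beam theory (a standard textbook result, stated here for illustration rather than as the exact model Lamello implements):
\[
f_1 = \frac{\beta_1^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}, \qquad \beta_1 \approx 1.875,
\]
where $E$ is the material's Young's modulus, $I$ the cross-section's second moment of area, $\rho$ the density, $A$ the cross-sectional area, and $L$ the tine length. Because $f_1 \propto 1/L^2$ for a fixed cross-section, varying tine length yields well-separated frequencies that audio sensing can distinguish without per-device training.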