Even though wireless and mobile devices have evolved with richer capabilities, their resources remain limited and, in many cases, insufficient to accomplish the tasks entrusted to them. The goal of my work is to improve the efficiency of applications on wireless and mobile devices, with a focus on resource-constrained settings.
First, we develop TIDE, a user-centric framework that helps identify high-energy-consuming applications on users' smartphones. TIDE identifies energy-hungry applications by examining the correlation between application activity and periods of high battery drain. Experiments on Android smartphones show that TIDE correctly identifies ≈ 90% of energy-hungry applications while imposing reasonably low energy overhead.
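To illustrate the kind of correlation-based flagging TIDE performs, the following is a minimal sketch: it correlates each application's per-window activity trace with the device's battery-drain trace and flags apps whose correlation exceeds a cutoff. All names, the Pearson-correlation heuristic, and the threshold are illustrative assumptions, not TIDE's actual algorithm.

```python
import numpy as np

def flag_energy_hungry(app_activity, drain_rate, threshold=0.7):
    """Flag apps whose activity correlates with high battery drain.

    app_activity: dict mapping app name -> 1-D array of activity per
                  sampling window (e.g., CPU-seconds).
    drain_rate:   1-D array of battery drain per sampling window.
    threshold:    correlation cutoff above which an app is flagged.
    (Illustrative sketch; not TIDE's actual detection logic.)
    """
    hungry = []
    for app, activity in app_activity.items():
        # Pearson correlation between this app's activity and drain rate.
        r = np.corrcoef(activity, drain_rate)[0, 1]
        if r > threshold:
            hungry.append((app, r))
    # Most suspicious apps first.
    return sorted(hungry, key=lambda x: -x[1])
```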
Subsequently, we develop a framework to identify redundant images uploaded from multiple wireless devices in bandwidth-constrained networks, e.g., networks damaged at natural-disaster scenes. Our framework intelligently combines state-of-the-art vision techniques to identify redundant images uploaded to a server. Suppressing the transfer of redundant content significantly lowers the network load, reducing the delay in transferring unique and important content in such critical scenarios by up to ≈ 44%.
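As a rough illustration of redundancy suppression, the sketch below uses a simple perceptual (average) hash to skip uploading images that nearly duplicate ones already sent. This stands in for the framework's actual vision pipeline, which the text describes only as a combination of state-of-the-art techniques; the hash, distance cutoff, and function names are all assumptions.

```python
import numpy as np
from PIL import Image

def ahash(path, size=8):
    """Average-hash: downscale to size x size grayscale, threshold at mean."""
    img = Image.open(path).convert("L").resize((size, size))
    px = np.asarray(img, dtype=np.float32)
    return (px > px.mean()).flatten()

def is_redundant(candidate_path, uploaded_hashes, max_dist=5):
    """Treat the candidate as redundant if it is within Hamming distance
    max_dist of any image already uploaded to the server."""
    h = ahash(candidate_path)
    return any(int(np.sum(h != u)) <= max_dist for u in uploaded_hashes)
```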
We then design ACTION, a framework for accurate and timely object (e.g., human) detection in bandwidth-constrained settings. In ACTION, objects of interest are first detected at individual camera sensors. Metadata of the detected objects is then aggregated at a designated fusion node to improve detection accuracy. The most accurate information about each detected object is then selected for upload to a central controller, while adhering to the bandwidth constraints. We show that ACTION reduces the amount of transferred data by up to a factor of three while still delivering the important information to the central node.
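The sketch below conveys the fusion-and-selection step in spirit: per-object detections from multiple cameras are reduced to the most confident one, and uploads are chosen greedily under a byte budget. The data layout, greedy policy, and names are assumptions for illustration, not ACTION's exact protocol.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    object_id: int     # assumed: the fusion node has already associated
                       # detections of the same physical object
    camera: str
    confidence: float
    size_bytes: int    # payload size (metadata, possibly an image crop)

def select_uploads(detections, budget_bytes):
    """Keep the most confident detection per object, then upload the
    highest-confidence ones first until the bandwidth budget is spent."""
    best = {}
    for d in detections:
        if d.object_id not in best or d.confidence > best[d.object_id].confidence:
            best[d.object_id] = d
    chosen, used = [], 0
    for d in sorted(best.values(), key=lambda d: -d.confidence):
        if used + d.size_bytes <= budget_bytes:
            chosen.append(d)
            used += d.size_bytes
    return chosen
```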
Finally, we design EECS, a framework for adaptively selecting detection algorithms in multi-camera settings. In EECS, only a subset of the camera sensors is chosen to detect objects; furthermore, the most energy-efficient algorithm is assigned to each camera to reduce energy consumption while still ensuring the desired accuracy. We show that EECS can be tuned to achieve the right trade-off between energy efficiency and detection accuracy.
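A minimal sketch of the per-camera assignment step follows: among the algorithms that meet the accuracy target, pick the one with the lowest energy cost. The algorithm names and energy/accuracy numbers are made up for illustration and do not come from EECS's actual profiles.

```python
# Each algorithm characterized by (energy per frame in mJ, expected accuracy).
# Values are hypothetical placeholders, not measured profiles.
ALGORITHMS = {
    "hog":      {"energy_mj": 12.0,  "accuracy": 0.78},
    "tiny_cnn": {"energy_mj": 45.0,  "accuracy": 0.88},
    "full_cnn": {"energy_mj": 180.0, "accuracy": 0.95},
}

def assign_algorithm(target_accuracy):
    """Return the most energy-efficient algorithm meeting the accuracy
    target; fall back to the most accurate one if none qualifies."""
    feasible = [(p["energy_mj"], name)
                for name, p in ALGORITHMS.items()
                if p["accuracy"] >= target_accuracy]
    if feasible:
        return min(feasible)[1]
    return max(ALGORITHMS, key=lambda n: ALGORITHMS[n]["accuracy"])

# Raising the accuracy target trades energy for accuracy:
print(assign_algorithm(0.75))  # -> "hog"
print(assign_algorithm(0.90))  # -> "full_cnn"
```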