To enable a robotic platform that is low-cost yet useful for manipulation tasks in household and low-volume production settings without prior calibration, two algorithms for grasp viewpoint planning are proposed. Both algorithms perform motions along a virtual sphere in an object-centric workspace, using only RGB images from an eye-in-hand camera to reach an optimal grasping viewpoint. The first method uses image moments to compute an optimal grasp pose and succeeds in grasping more than 80\% of the time on the Rethink Robotics Baxter platform; the second method uses a Convolutional Neural Network (CNN) to orient the end effector to within 5 degrees of an object on a simulated Universal Robots UR5 platform. The experimental results for these two algorithms show that an RGB camera is sufficient for viewpoint planning and grasping of novel objects with simple geometries. Additionally, a grasping model can be trained for a new object to increase alignment accuracy and grasp success rates.
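As an illustrative sketch only (not the thesis implementation), the snippet below shows how image moments of the kind used by the first method could yield an object centroid and an in-plane grasp orientation from a single RGB-derived binary mask; the function name, the OpenCV-based mask input, and the mask-segmentation step are assumptions introduced for illustration.

```python
import numpy as np
import cv2


def grasp_orientation_from_mask(mask):
    """Illustrative sketch: estimate object centroid and in-plane orientation
    from a binary object mask using image moments.

    `mask` is assumed to be a single-channel uint8 image in which object
    pixels are nonzero (e.g., obtained by thresholding the RGB image).
    """
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("Empty mask: no object pixels found")

    # Centroid from raw (zeroth- and first-order) moments.
    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]

    # Principal-axis angle from second-order central moments.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return cx, cy, theta
```

Under these assumptions, rotating the gripper about the camera's optical axis by `theta` would align it with the object's principal axis before the approach motion, which is the kind of moment-based pose alignment the first method relies on.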