Traditionally, stochastic models in operations research rely on specific probabilistic assumptions to model random phenomena, and optimal policies or decisions are determined on that basis. Often, these probabilistic assumptions are parametric and entail estimating parameters from very small samples of data. In many cases, the available information is not sufficient to postulate a model with any degree of certainty, and policies derived from parametric assumptions are then highly sensitive to the particular assumptions made. One of the goals of this thesis is therefore the development of objective, adaptive, data-driven approaches that learn the objective functions directly, make as few parametric assumptions as possible, and give rise to policies that perform well for small samples without compromising large-sample performance. While this is clearly a difficult problem, it arises in nearly every operations management setting and is the right problem to pursue. In this thesis, we develop novel learning approaches to specific problems in inventory control, call center staffing, and dynamic assortment optimization. We test these approaches computationally and provide strong evidence for adopting our general approach to tackling model uncertainty in operations management problems.