This paper introduces the paradigm of in-context operator learning and the corresponding model, In-Context Operator Networks (ICON), which simultaneously learns operators from prompted data and applies them to new questions during the inference stage, without any weight update. Existing methods are limited to using a neural network to approximate the solution of a specific equation or a specific operator, and thus require retraining when switching to a new problem with different equations. By training a single neural network as an operator learner, rather than a solution/operator approximator, we not only avoid retraining (or even fine-tuning) the network for new problems, but also leverage the commonalities shared across operators, so that only a few examples in the prompt are needed when learning a new operator. Our numerical results show the capability of a single neural network as a few-shot operator learner for a diverse range of differential equation problems, including forward and inverse problems of ordinary differential equations, partial differential equations, and mean-field control problems, and also show that it can generalize its learning capability to operators beyond the training distribution.
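To make the paradigm concrete, below is a minimal sketch of the inference-stage workflow: a prompt of example (condition, quantity-of-interest) pairs and a new question condition are encoded as one token sequence, and the prediction is read off the query tokens with no gradient step. This is a hypothetical illustration assuming a generic transformer-encoder backbone; the class name `ICONSketch`, the token layout, and all hyperparameters are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch of in-context operator learning (illustrative only).
import torch
import torch.nn as nn

class ICONSketch(nn.Module):
    """Maps a prompt of (condition, QoI) example pairs plus a query
    condition to a predicted QoI, with no weight update at inference."""
    def __init__(self, point_dim: int, d_model: int = 64, nhead: int = 4,
                 num_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(point_dim, d_model)       # token embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                # predicted QoI value

    def forward(self, examples: torch.Tensor, query: torch.Tensor):
        # examples: (batch, n_examples * n_points, point_dim) -- prompt pairs
        # query:    (batch, n_query_points, point_dim)        -- new question
        tokens = torch.cat([examples, query], dim=1)
        h = self.encoder(self.embed(tokens))
        # read the prediction off the query tokens only
        return self.head(h[:, -query.shape[1]:, :]).squeeze(-1)

# Few-shot inference on a new operator: only the prompt changes,
# the trained weights stay fixed.
model = ICONSketch(point_dim=3)
examples = torch.randn(1, 5 * 10, 3)   # 5 example pairs, 10 points each
query = torch.randn(1, 10, 3)          # question condition for the new case
prediction = model(examples, query)    # shape (1, 10), no gradient step
```

The key design point this sketch illustrates is that the operator is specified entirely by the prompt: switching to a different operator means swapping `examples`, not retraining the network.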