Semantic features have long played a central role in investigating the nature of our conceptual representations. Yet the time and effort required to sample features from human raters have restricted their use to a limited set of manually curated concepts. Given the recent success of transformer-based language models, we asked whether such models can automatically generate meaningful lists of properties for arbitrary object concepts, and whether the features they produce resemble those elicited from humans. We probed a GPT-3 model to generate semantic features for 1,854 objects and compared the resulting features to existing human feature norms. GPT-3 produced a similar distribution of feature types and showed comparable performance in predicting similarity, relatedness, and category membership. Together, these results highlight the potential of large language models to capture important facets of human knowledge and yield a new approach for automatically generating interpretable feature sets.
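To make the probing procedure concrete, the sketch below shows one way to elicit a property list from a generative language model using a numbered-list prompt. GPT-3 is only available through the OpenAI API, so GPT-2 (via Hugging Face transformers) stands in here to illustrate the pattern; the prompt wording, sampling settings, and the generate_features helper are illustrative assumptions, not the exact protocol used in the study.

```python
# Minimal sketch: prompt a generative language model for semantic features
# of an object concept. GPT-2 stands in for GPT-3 so the example runs
# locally; prompt wording and parsing are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_features(concept: str, n_features: int = 5) -> list[str]:
    # A numbered-list prompt nudges the model toward short property phrases.
    prompt = (
        "List properties of the following object.\n"
        f"Object: {concept}\n"
        "Properties:\n1."
    )
    out = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
    completion = out[0]["generated_text"][len(prompt):]
    # Re-attach the "1." the prompt opened with, then keep the numbered lines
    # as individual feature phrases.
    lines = ("1." + completion).splitlines()
    features = [line.split(".", 1)[1].strip()
                for line in lines
                if line.strip() and line.strip()[0].isdigit() and "." in line]
    return features[:n_features]

print(generate_features("banana"))  # e.g., ["is yellow", "has a peel", ...]
```

Repeating such a query for each concept and aggregating over multiple sampled completions would yield a feature-by-concept matrix analogous to human feature norms, which can then be compared on feature-type distributions and on predictions of similarity, relatedness, and category membership.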