Most frame-based knowledge representation (KR) systems have two strange features. First, the concepts represented by the nodes are nouns rather than verbs; verbal ideas appear mostly in describing roles or slots. Thus the systems are asymmetric. Second, and more seriously, the slot names on frames are arbitrary and not defined within the system, and usually no metasystem is given to account for them. Thus the systems are not closed. Both of these features can be avoided by structures inspired by case-based linguistic theories. The basic ideas are that an ontology consists of two separate, parallel lattices of verbal and nominal concepts, and that the slots of the concepts in each lattice are defined by reference to concepts in the other lattice. Slots of verbal concepts are derived from cases and restricted by nominal concepts. Slots of nominal concepts include conducts (verbal concepts) and derivatives of the slots of verbal concepts. Our objective in this paper is not to define a new KR language, but to use input from the study of natural cognition (case grammar) to refine technology for artificial cognition.
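The parallel-lattice architecture described above can be sketched in code. The following is a minimal, illustrative Python sketch, not a definitive implementation: the class names, the sample concepts (person, food, eat), and the particular case names (agent, patient) are all assumptions chosen for demonstration. It shows the two cross-referencing lattices: a verbal concept's slots are case slots restricted by nominal concepts, and a nominal concept's slots include its conducts (verbal concepts).

```python
from dataclasses import dataclass, field

@dataclass
class Nominal:
    """A nominal concept; 'conducts' are the verbal concepts it takes part in."""
    name: str
    parents: list = field(default_factory=list)   # links upward in the nominal lattice
    conducts: list = field(default_factory=list)  # slots pointing into the verbal lattice

@dataclass
class Verbal:
    """A verbal concept; each slot is a case restricted by a nominal concept."""
    name: str
    parents: list = field(default_factory=list)   # links upward in the verbal lattice
    cases: dict = field(default_factory=dict)     # case name -> restricting Nominal

# A tiny pair of parallel lattices (all concept names are illustrative).
thing = Nominal("thing")
person = Nominal("person", parents=[thing])
food = Nominal("food", parents=[thing])

# Case slots are not arbitrary labels: each is a named case restricted
# by a concept from the nominal lattice.
eat = Verbal("eat", cases={"agent": person, "patient": food})

# Cross-reference: register 'eat' as a conduct of each of its case fillers,
# so each lattice defines its slots in terms of the other.
for nominal in eat.cases.values():
    nominal.conducts.append(eat)
```

Because every slot name on a verbal concept must name a case and every restriction must name a nominal concept, the slot vocabulary is closed by construction rather than left arbitrary.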