In recent years, many datasets have been created that represent natural language semantics as graphs. To use these datasets effectively, we require models of graphs. In particular, we want models that are probabilistic and can be composed efficiently; for tasks such as machine translation, for example, we could compose a string-to-graph model with a graph-to-string model. In this thesis, we explore the space of formal models of graphs, including directed acyclic graph automata (DAGA), hyperedge replacement grammars (HRG), and monadic second-order logic. We argue that each of these formalisms is unsuitable for modelling natural language semantics, and provide a proof that not all DAGA can be made probabilistic. We propose that a restricted form of HRG, the regular graph grammars (RGG; Courcelle 1991), are better suited to the task. We prove that they are probabilistic and composable, and we provide a parsing algorithm for them that runs in time linear in the size of the input graph.