This paper introduces a probabilistic, two-stage classification framework for the semantic annotation of urban maps as provided by a mobile robot. During the first stage, local scene properties are considered using a probabilistic bag-of-words classifier. The second stage incorporates contextual information across a given scene via a Markov Random Field (MRF). Our approach is driven by data from an onboard camera and 3D laser scanner and uses a combination of appearance-based and geometric features. By framing the classification exercise probabilistically we are able to execute an information-theoretic bail-out policy when evaluating appearance-based class-conditional likelihoods. This efficiency, combined with the low-order MRFs resulting from our two-stage approach, allows us to generate scene labels at speeds suitable for online deployment and use. We demonstrate and analyze the performance of our technique on data gathered over almost 17 km of track through a city.
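To make the two-stage pipeline concrete, the following is a minimal sketch, not the authors' implementation. Stage 1 accumulates per-class log-likelihoods word by word over a bag-of-words representation and bails out early once the leading class exceeds the runner-up by a fixed margin, a simplified stand-in for the paper's information-theoretic bail-out policy. Stage 2 smooths the per-node scores over a neighbourhood graph with a Potts-style pairwise term, solved here by iterated conditional modes (ICM) for brevity. All function names, the margin parameter, and the ICM choice are assumptions for illustration.

```python
import numpy as np

def stage1_bailout_loglik(word_counts, log_cond, log_prior, margin=10.0):
    """Per-patch class scores with an early bail-out.

    word_counts: (W,) bag-of-words counts for one scene patch.
    log_cond:    (C, W) per-class log-likelihood of each visual word.
    log_prior:   (C,) class log-priors.
    margin:      bail out once the best class leads the runner-up
                 by this many nats (simplified bail-out criterion).
    """
    scores = log_prior.copy()
    for w in np.nonzero(word_counts)[0]:
        scores = scores + word_counts[w] * log_cond[:, w]
        top2 = np.sort(scores)[-2:]
        if top2[1] - top2[0] > margin:
            break  # remaining words cannot plausibly change the winner
    return scores

def stage2_mrf_icm(unary, edges, smooth=1.0, iters=10):
    """Contextual smoothing over a low-order MRF via ICM.

    unary: (N, C) per-node log-likelihoods from stage 1.
    edges: list of (i, j) neighbour pairs; Potts pairwise term
           rewards agreement between neighbouring labels.
    """
    n, c = unary.shape
    labels = unary.argmax(axis=1)
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        changed = False
        for i in range(n):
            # node score = data term + smoothness bonus per agreeing neighbour
            agree = np.array(
                [sum(labels[j] == k for j in nbrs[i]) for k in range(c)]
            )
            new = (unary[i] + smooth * agree).argmax()
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:
            break
    return labels
```

In this toy form, a node whose local evidence weakly favours one label can be overruled by agreeing neighbours, which is the role context plays in the second stage; the real system would use richer pairwise potentials and a proper MRF inference routine.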