Access Restriction Open

Author Vineet, Vibhav ♦ Warrell, Jonathan ♦ Sturgess, Paul ♦ Torr, Philip H. S.
Source CiteSeerX
Content type Text
File Format PDF
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Gaussian Mixture Pairwise Term ♦ Mean-field Inference ♦ Dense Random Field ♦ Hierarchical Mean-field Approach ♦ Pairwise Random Field ♦ Piecewise Learning ♦ Mean-field Approximation ♦ PascalVOC-10 Segmentation ♦ Mean-field Method ♦ Single Value ♦ Weighted Combination ♦ Good Initial Condition ♦ Mixture Model ♦ CamVid Dataset ♦ Pairwise Weight ♦ Finer Level ♦ Label Transfer ♦ Mixing Coefficient ♦ PascalVOC-10 Dataset ♦ Conditional Random Field ♦ General Gaussian Pairwise Weight ♦ Efficient Inference Method ♦ Covariance Matrix ♦ Gaussian Component ♦ Object Class Segmentation Problem ♦ Label Pair ♦ Expectation Maximization ♦ Gaussian Kernel ♦ Mixture Component ♦ Inference Time ♦ Maximum Likelihood Function ♦ State-of-the-Art Performance
Abstract Recently, Krahenbuhl and Koltun proposed an efficient inference method for densely connected pairwise random fields, using the mean-field approximation to a Conditional Random Field (CRF). However, they restrict their pairwise weights to take the form of a weighted combination of Gaussian kernels, where each Gaussian component is constrained to have zero mean and can only be rescaled by a single value for each label pair. Further, their method is sensitive to initialisation. In this paper, we propose methods to alleviate these issues. First, we propose a hierarchical mean-field approach in which the labelling from the coarser level is propagated to the finer level for better initialisation. Further, we use SIFT-flow based label transfer to provide a good initial condition at the coarsest level. Second, we allow our approach to take general Gaussian pairwise weights, where we learn the mean, the covariance matrix, and the mixing coefficient for every mixture component. We propose a variation of Expectation Maximization (EM) for piecewise learning of the parameters of the mixture model determined by the maximum likelihood function. Finally, we demonstrate the efficiency and accuracy of our method on object class segmentation problems using two challenging datasets: the PascalVOC-10 segmentation and CamVid datasets. We achieve state-of-the-art performance on the CamVid dataset and an almost 3% improvement on the PascalVOC-10 dataset compared to baseline graph-cut and mean-field methods, while also reducing the inference time by almost a factor of 3 compared to graph-cut based methods.
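For reference, the pairwise term of Krahenbuhl and Koltun that the abstract refers to is a label-compatibility weight times a weighted sum of zero-mean Gaussian kernels over feature differences. The second expression below is an illustrative sketch (not the paper's exact notation) of the generalised mixture parameterisation the abstract describes, with a learned mean, covariance matrix, and per-label-pair mixing coefficient for each component.

\[
\psi_{ij}(x_i, x_j) \;=\; \mu(x_i, x_j) \sum_{m=1}^{M} w^{(m)} \exp\!\left(-\tfrac{1}{2}\,(\mathbf{f}_i - \mathbf{f}_j)^\top \Lambda^{(m)} (\mathbf{f}_i - \mathbf{f}_j)\right)
\]

\[
\psi_{ij}(x_i, x_j) \;=\; \sum_{m=1}^{M} \pi^{(m)}_{x_i, x_j} \exp\!\left(-\tfrac{1}{2}\,\big(\mathbf{f}_i - \mathbf{f}_j - \boldsymbol{\mu}^{(m)}\big)^\top \big(\Sigma^{(m)}\big)^{-1} \big(\mathbf{f}_i - \mathbf{f}_j - \boldsymbol{\mu}^{(m)}\big)\right)
\]

Here \(\mathbf{f}_i\) is the feature vector (e.g. position and colour) at pixel \(i\). In the first form only the label compatibility \(\mu(x_i, x_j)\) and kernel weights \(w^{(m)}\) rescale the zero-mean kernels, whereas in the sketched mixture form the means \(\boldsymbol{\mu}^{(m)}\), covariances \(\Sigma^{(m)}\), and mixing coefficients \(\pi^{(m)}_{x_i, x_j}\) would be estimated, e.g. by the piecewise EM procedure mentioned in the abstract.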
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study