
Segmentation using eigenvectors.ppt
Segmentation using eigenvectors

Papers:
- "Normalized Cuts and Image Segmentation". Jianbo Shi and Jitendra Malik, IEEE Trans. PAMI, 2000.
- "Segmentation using eigenvectors: a unifying view". Yair Weiss, ICCV 1999.

Presenter: Carlos Vallespi (cvalles@cs.cmu.edu)

Image Segmentation
- How do you pick the right segmentation?
- Bottom-up segmentation: tokens belong together because they are locally coherent.
- Top-down segmentation: tokens are grouped together because they lie on the same object.

"Correct" segmentation
- There may not be a single correct answer.
- Partitioning is inherently hierarchical.
- One approach, used in this presentation: use the low-level coherence of brightness, color, texture or motion attributes to come up with partitions.

Outline
1. Introduction
2. Graph terminology and representation
3. "Min cuts" and "Normalized cuts"
4. Other segmentation methods using eigenvectors
5. Conclusions

Graph-based Image Segmentation
- Pipeline: image I → graph affinities W (built from intensity, color, edges and texture) → eigenvector X of W → discretization into segments.
(Slides from Timothee Cour, http://www.seas.upenn.edu/~timothee)

Graph terminology
(Slides from Jianbo Shi)
- Graph G = {V, E}: the nodes V are the pixels; the edges E connect nodes and are weighted by pixel similarity.
- Similarity (affinity) matrix W: reshaping an N×M image into a vector of N·M pixels, W is (N·M)×(N·M); a row of W, reshaped back to image size, shows the similarity of every pixel to a selected pixel (brighter means more similar).
- Warning: the size of W is quadratic in the number of pixels!
- Degree of node i: d(i) = Σ_j w(i, j).
- Volume of a set A ⊆ V: vol(A) = Σ_{i∈A} d(i).
- Cuts in a graph: cut(A, B) = Σ_{i∈A, j∈B} w(i, j).

Representation
- Partition matrix X (segments × pixels).
- Pair-wise similarity matrix W.
- Degree matrix D = diag(d(1), …, d(N·M)).
- Laplacian matrix L = D − W.

Pixel similarity functions
- Intensity: w(i, j) = exp(−|I(i) − I(j)|² / σ_I²).
- Texture: w(i, j) = exp(−‖c(i) − c(j)‖² / σ_c²), where c(x) is a vector of filter outputs. A natural thing to do is to square the outputs of a range of different filters at different scales and orientations, smooth the result, and stack these into a vector.
- Distance: w(i, j) = exp(−‖X(i) − X(j)‖² / σ_X²).

Definitions
- Methods that use the spectrum of the affinity matrix to cluster are known as spectral clustering.
- Normalized cuts, average cuts and average association all make use of the eigenvectors of the affinity matrix.
- Why do these methods work?

Spectral Clustering
(Figure: data points and their pairwise similarities. Slides from Dan Klein, Sep Kamvar and Chris Manning, Natural Language Group, Stanford University)

Eigenvectors and blocks
- Block matrices have block eigenvectors. For example,
    W = [1 1 0 0; 1 1 0 0; 0 0 1 1; 0 0 1 1]
  has eigenvectors (.71, .71, 0, 0) and (0, 0, .71, .71) with eigenvalues λ1 = λ2 = 2 and λ3 = λ4 = 0.
- Near-block matrices have near-block eigenvectors. For example,
    W = [1 1 .2 0; 1 1 0 −.2; .2 0 1 1; 0 −.2 1 1]
  has eigenvectors (.71, .69, .14, 0) and (0, −.14, .69, .71) with eigenvalues λ1 = λ2 = 2.02 and λ3 = λ4 = −0.02.

Spectral Space
- Items can be put into blocks by their coordinates in the leading eigenvectors (e1, e2).
- The clusters are clear regardless of row ordering: reordering the data points just permutes the entries of the eigenvectors.
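The block-eigenvector fact above is easy to check numerically. A minimal sketch with NumPy, using the 4×4 block matrix from the slide:

```python
import numpy as np

# The ideal block affinity matrix from the slide: clusters {0,1} and {2,3}.
W = np.array([[1., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])

vals, vecs = np.linalg.eigh(W)   # symmetric eigensolver, ascending eigenvalues
# Eigenvalues are 0, 0, 2, 2; the leading (lambda = 2) eigenspace is spanned
# by the block indicators, matching (.71,.71,0,0) and (0,0,.71,.71) up to a
# rotation within the degenerate eigenspace.
top = vecs[:, vals.argsort()[::-1][:2]]   # two leading eigenvectors
```

For the near-block matrix the same computation gives eigenvectors that are only approximately constant on each block, which is why a discretization step is needed later.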
"Min cuts" and "Normalized cuts"

How do we extract a good cluster?
- Simplest idea: we want a vector x giving the association between each element and a cluster.
- We want the elements within this cluster to, on the whole, have strong affinity with one another.
- We could maximize xᵀWx, but we need the constraint xᵀx = 1.
- This is an eigenvalue problem: choose the eigenvector of W with the largest eigenvalue.

Minimum cut
- Criterion for partition: minimize cut(A, B) = Σ_{i∈A, j∈B} w(i, j). First proposed by Wu and Leahy.
- Problem: the weight of a cut is directly proportional to the number of edges in the cut, so cuts with less weight than the ideal cut simply isolate small sets of nodes.

Normalized Cut
- The normalized cut (or balanced cut) finds a better cut.
- Volume of a set (or association): assoc(A, V) = Σ_{i∈A, j∈V} w(i, j).
- Define the normalized cut, "a fraction of the total edge connections to all the nodes in the graph":
    Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V).
- Define the normalized association, "how tightly on average nodes within the cluster are connected to each other":
    Nassoc(A, B) = assoc(A, A)/assoc(A, V) + assoc(B, B)/assoc(B, V).

Observations (I)
- Maximizing Nassoc is the same as minimizing Ncut, since they are related: Ncut(A, B) = 2 − Nassoc(A, B).
- How to minimize Ncut? Transform the Ncut equation into matrix form. After simplifying:
    min Ncut = min_y [yᵀ(D − W)y / yᵀDy], subject to yᵀD1 = 0 (a Rayleigh quotient).
- This is NP-hard, because the values of y are quantized (each yᵢ takes one of two discrete values).

Observations (II)
- Instead, relax y into the continuous domain by solving the generalized eigenvalue system (D − W)y = λDy.
- Note that (D − W)1 = 0, so the first eigenvector is y₀ = 1 with eigenvalue 0.
- The second smallest eigenvector is the real-valued solution to this problem!

Algorithm
1. Define a similarity function between two nodes
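A sketch of such a similarity function, assuming the intensity-times-position form from the Shi–Malik paper; the σ values and radius r here are hypothetical knobs, not from the slides:

```python
import numpy as np

def similarity(F, X, sigma_i=0.1, sigma_x=4.0, r=5.0):
    """w(i,j) = exp(-|F(i)-F(j)|^2 / sigma_i^2) * exp(-||X(i)-X(j)||^2 / sigma_x^2)
    for pixels closer than r, and 0 otherwise (Shi & Malik's form).
    F: per-pixel feature values (n,), X: per-pixel positions (n, 2)."""
    dF = np.abs(F[:, None] - F[None, :]) ** 2             # feature distances
    dX = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared spatial distances
    W = np.exp(-dF / sigma_i**2) * np.exp(-dX / sigma_x**2)
    W[dX > r**2] = 0.0            # zero out pairs farther apart than r
    return W
```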
   (e.g. w(i, j) = exp(−‖F(i) − F(j)‖²/σ_I²) · exp(−‖X(i) − X(j)‖²/σ_X²) for nearby pixels, and 0 otherwise, as in Shi and Malik).
2. Compute the affinity matrix W and the degree matrix D.
3. Solve (D − W)y = λDy.
4. Use the eigenvector with the second smallest eigenvalue to bipartition the graph.
5. Decide whether to re-partition the current partitions.
Note: since the precision requirements are low, W is very sparse and only a few eigenvectors are required, the eigenvectors can be extracted very fast using the Lanczos algorithm.

Discretization
- Sometimes there is no clear threshold to binarize at, since the eigenvectors take on continuous values.
- How to choose the splitting point?
  a) Pick a constant value (0, or 0.5).
  b) Pick the median value as the splitting point.
  c) Look for the splitting point that has the minimum Ncut value: choose n possible splitting points, compute the Ncut value for each, and pick the minimum.

Use k eigenvectors
- Recursive 2-way Ncut is slow.
- We can use more eigenvectors to re-partition the graph; however, not all eigenvectors are useful for partitioning (degree of smoothness).
- Procedure: compute k-means with a high k, then follow one of these procedures:
  a) Merge segments that minimize the k-way Ncut criterion.
  b) Use the k segments and find the partitions there using exhaustive search.
- Alternatively, compute Q (later slides).

Toy examples
(Images from Matthew Brand, TR-2002-42)

Example (I): eigenvectors and the resulting segments.

Example (II): original images and the resulting segments.
(Slide from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003)

Other segmentation methods using eigenvectors

Other methods: average association
- Use the eigenvector of W associated with the biggest eigenvalue for partitioning.
- Tries to maximize assoc(A, A)/|A| + assoc(B, B)/|B|.
- Has a bias to find tight clusters.
- Useful for Gaussian distributions.

Other methods: average cut
- Tries to minimize cut(A, B)/|A| + cut(A, B)/|B|.
- Very similar to normalized cuts, but we cannot ensure that the partitions will have a tight within-group similarity, since this equation does not have the nice properties of the normalized cut equation.

Other methods: comparison
- Experiment: 20 points randomly distributed in [0.0, 0.5] and 12 points randomly distributed in [0.65, 1.0]; compare the partitions found by normalized cut, average cut and average association.

Other methods: Scott and Longuet-Higgins (1990)
- V contains the first eigenvectors of W.
- Normalize the rows of V.
- Compute Q = VVᵀ.
- Values close to 1 belong to the same cluster.

Other applications: Costeira and Kanade (1995)
- Used to segment points in motion.
- Compute M = (X Y) from the tracked point coordinates.
- The affinity matrix is computed as W = MᵀM. This trick computes the affinity of every pair of points as an inner product.
- Compute Q = VVᵀ; values close to 1 belong to the same cluster.

Other applications: face clustering in meetings
- Grab faces from video in real time (using a face detector + face tracker).
- Compare all faces using a distance metric (e.g. projection error onto a representative basis).
- Use normalized cuts to find the best clustering.

Conclusions
- Good news:
  - Simple and powerful methods to segment images.
  - Flexible and easy to apply to other clustering problems.
- Bad news:
  - High memory requirements (use sparse matrices).
  - Very dependent on the scale factor σ for a specific problem.

Thank you! The End!

Extra slides: examples of spectral clustering
(Images from Matthew Brand, TR-2002-42)

Spectral clustering
- Makes use of the spectrum of the similarity matrix of the data to cluster the points.
- Solve the clustering for an affinity matrix W, where w(i, j) is the similarity between nodes i and j (a function of their distance).

Graph terminology (summary)
- Similarity matrix: W = [w(i, j)].
- Degree of node: d(i) = Σ_j w(i, j).
- Volume of set: vol(A) = Σ_{i∈A} d(i).
- Graph cut: cut(A, B) = Σ_{i∈A, j∈B} w(i, j).
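The whole pipeline can be illustrated end-to-end on the 1-D experiment mentioned above (20 points in [0.0, 0.5], 12 points in [0.65, 1.0]). A hedged sketch: σ is hand-picked, and SciPy's dense symmetric generalized eigensolver stands in for the Lanczos method the slides recommend:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 0.5, 20),    # first group
                    rng.uniform(0.65, 1.0, 12)])  # second group

# Gaussian affinity w(i,j) = exp(-(x_i - x_j)^2 / sigma^2); sigma is a guess.
sigma = 0.1
W = np.exp(-(x[:, None] - x[None, :])**2 / sigma**2)
D = np.diag(W.sum(axis=1))

# Relaxed Ncut: solve (D - W) y = lambda D y; eigenvalues come back ascending,
# so column 1 is the second smallest generalized eigenvector.
vals, vecs = eigh(D - W, D)
y = vecs[:, 1]

def ncut(mask):
    """Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    cut = W[np.ix_(mask, ~mask)].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()

# Discretization, option (c): scan candidate splitting points over y and
# keep the bipartition with the minimum Ncut value.
cands = np.linspace(y.min(), y.max(), 22)[1:-1]
masks = [y > t for t in cands if 0 < (y > t).sum() < len(y)]
labels = min(masks, key=ncut)
```

On this data the minimum-Ncut threshold should fall in the gap between the two groups, so `labels` recovers the bipartition that the comparison slide credits to normalized cut.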
