Vantage-point tree
A vantage-point tree (or VP tree) is a metric tree that segregates data in a metric space by choosing a position in the space (the "vantage point") and partitioning the data points into two parts: those points that are nearer to the vantage point than a threshold, and those points that are not. By recursively applying this procedure to partition the data into smaller and smaller sets, a tree data structure is created where neighbors in the tree are likely to be neighbors in the space.[1]
One generalization is called a multi-vantage-point tree, or MVP tree: a data structure for indexing objects from large metric spaces for similarity search queries. It uses more than one point to partition each level.[2]
History
Peter Yianilos claimed that the vantage-point tree was discovered independently by him and by Jeffrey Uhlmann.[1] However, Uhlmann published the method first, in 1991.[3] Uhlmann called the data structure a metric tree; the name VP-tree was proposed by Yianilos.
Vantage-point trees have been generalized to non-metric spaces using Bregman divergences by Nielsen et al.[4]
The iterative partitioning process is similar to that of a k-d tree, but uses circular (or spherical, hyperspherical, etc.) rather than rectilinear partitions. In two-dimensional Euclidean space, this can be visualized as a series of circles segregating the data.
The vantage-point tree is particularly useful for dividing data in a non-standard metric space into a metric tree.
Understanding a vantage-point tree
The way a vantage-point tree stores data can be represented by a circle.[5] Each node of the tree contains an input point (the vantage point) and a radius. All points in the left subtree of a node lie inside the circle of that radius around the vantage point, and all points in the right subtree lie outside it. The tree needs no other information about the data being stored; all it requires is a distance function that satisfies the properties of a metric space.[5]
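The sketch below illustrates this structure in Python; the names (VPNode, build_vp_tree), the random choice of vantage point, and the use of the median distance as the partitioning radius are illustrative choices, not prescriptions from the cited papers.

```python
import random

class VPNode:
    """One node of a vantage-point tree: a vantage point, a threshold radius,
    and the two subtrees (inside / outside the ball of that radius)."""
    def __init__(self, point, threshold, inside, outside):
        self.point = point          # the vantage point
        self.threshold = threshold  # the partitioning radius
        self.inside = inside        # subtree: points with dist(point, p) < threshold
        self.outside = outside      # subtree: points with dist(point, p) >= threshold

def build_vp_tree(points, dist):
    """Recursively partition `points` using `dist`, a metric on the space."""
    if not points:
        return None
    vantage = random.choice(points)
    rest = [p for p in points if p is not vantage]
    if not rest:
        return VPNode(vantage, 0.0, None, None)
    # Use the median distance to the vantage point as the threshold,
    # so the two subtrees are roughly balanced.
    distances = sorted(dist(vantage, p) for p in rest)
    threshold = distances[len(distances) // 2]
    inside = [p for p in rest if dist(vantage, p) < threshold]
    outside = [p for p in rest if dist(vantage, p) >= threshold]
    return VPNode(vantage, threshold,
                  build_vp_tree(inside, dist),
                  build_vp_tree(outside, dist))
```

Any metric can be supplied as dist; for points stored as tuples, a Euclidean metric such as lambda a, b: math.dist(a, b) (Python 3.8+) would serve.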
Searching through a vantage-point tree
A vantage-point tree can be used to find the nearest neighbor of a point x. The search algorithm is recursive. At any given step we are working with a node of the tree that has a vantage point v and a threshold distance t. The point of interest x will be some distance d from the vantage point v. If d is less than t, then recursively search the subtree of the node that contains the points closer to the vantage point than the threshold t; otherwise, recurse into the subtree of the node that contains the points farther from the vantage point than the threshold t. If the recursive search finds a neighboring point n whose distance to x is less than |t − d|, then it cannot help to search the other subtree of this node, and the discovered point n is returned. Otherwise, the other subtree also needs to be searched recursively.
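A minimal sketch of this search in Python, assuming the VPNode structure from the construction sketch above (the function name and argument order are illustrative):

```python
def nearest_neighbor(node, x, dist, best=None, best_dist=float("inf")):
    """Return (best, best_dist): the stored point closest to x and its distance."""
    if node is None:
        return best, best_dist
    d = dist(x, node.point)
    if d < best_dist:
        best, best_dist = node.point, d
    # Descend first into the subtree on x's side of the threshold.
    if d < node.threshold:
        near, far = node.inside, node.outside
    else:
        near, far = node.outside, node.inside
    best, best_dist = nearest_neighbor(near, x, dist, best, best_dist)
    # Every point in the other subtree is at least |threshold - d| away from x,
    # so it only needs to be searched if the current best is farther than that.
    if best_dist > abs(node.threshold - d):
        best, best_dist = nearest_neighbor(far, x, dist, best, best_dist)
    return best, best_dist
```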
A similar approach works for finding the k nearest neighbors of a point x. In the recursion, the other subtree is searched for k − k′ nearest neighbors of the point x whenever only k′ (< k) of the nearest neighbors found so far have distance that is less than |t − d|.
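A common heap-based rendering of this k-nearest-neighbor search, again assuming the VPNode structure sketched above, keeps the k best candidates found so far and uses the distance of the worst of them as the pruning radius; this is a sketch of the idea rather than the exact bookkeeping described above.

```python
import heapq
import itertools

_tie = itertools.count()  # tie-breaker so the heap never compares points directly

def k_nearest_neighbors(node, x, dist, k, heap=None):
    """Collect the k stored points closest to x in `heap`, a max-heap keyed on
    negated distance, so heap[0] always holds the worst of the k candidates."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    d = dist(x, node.point)
    if len(heap) < k:
        heapq.heappush(heap, (-d, next(_tie), node.point))
    elif d < -heap[0][0]:
        heapq.heapreplace(heap, (-d, next(_tie), node.point))
    if d < node.threshold:
        near, far = node.inside, node.outside
    else:
        near, far = node.outside, node.inside
    k_nearest_neighbors(near, x, dist, k, heap)
    # Search the other subtree only if fewer than k candidates have been found,
    # or the worst of them is still farther from x than |threshold - d|.
    tau = -heap[0][0] if len(heap) == k else float("inf")
    if tau > abs(node.threshold - d):
        k_nearest_neighbors(far, x, dist, k, heap)
    return heap
```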
Advantages of a vantage-point tree
- Instead of inferring multidimensional coordinates for the domain before the index is built, the index is built directly from the distances.[5] This avoids pre-processing steps.
- Updating a vantage-point tree is relatively easy compared with the FastMap approach. After data are inserted or deleted, FastMap eventually has to rescan itself; this is time-consuming, and it is unclear when the rescanning will be needed.
- Distance-based methods are flexible. They are "able to index objects that are represented as feature vectors of a fixed number of dimensions."[5]
References
^ a b Yianilos, Peter N. (1993). "Data structures and algorithms for nearest neighbor search in general metric spaces" (PDF). Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics. pp. 311–321. Retrieved 2008-08-22.
^ Bozkaya, Tolga; Ozsoyoglu, Meral (September 1999). "Indexing Large Metric Spaces for Similarity Search Queries". ACM Trans. Database Syst. 24 (3): 361–404. doi:10.1145/328939.328959. ISSN 0362-5915.
^ Uhlmann, Jeffrey (1991). "Satisfying General Proximity/Similarity Queries with Metric Trees". Information Processing Letters. 40 (4). doi:10.1016/0020-0190(91)90074-r.
^ Nielsen, Frank; et al. (2009). "Bregman vantage point trees for efficient nearest neighbor queries". Proceedings of the International Conference on Multimedia and Expo (ICME). IEEE. pp. 878–881.
^ a b c d Fu, Ada Wai-chee; Chan, Polly Mei-shuen; Cheung, Yin-Ling; Moon, Yiu Sang (2000). "Dynamic vp-tree indexing for n-nearest neighbor search given pair-wise distances". The VLDB Journal — The International Journal on Very Large Data Bases. Secaucus, NJ: Springer-Verlag. pp. 154–173. Retrieved 2012-10-02.
External links
- Understanding VP Trees