Similarity measure





In statistics and related fields, a similarity measure or similarity function is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity measure exists, such measures are usually in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. For example, two data points with nearby x, y coordinates will receive a much higher similarity score than two points separated by a greater distance.[1] In the context of cluster analysis, Frey and Dueck suggest defining a similarity measure


$$s(x, y) = -\lVert x - y \rVert_2^2$$

where $\lVert x - y \rVert_2^2$ is the squared Euclidean distance.[2]
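
As a minimal sketch in Python with NumPy (the function name and example points are illustrative), this measure is straightforward to compute:

```python
import numpy as np

def similarity(x, y):
    """Frey-Dueck similarity: negative squared Euclidean distance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return -np.sum((x - y) ** 2)

# Nearby points score higher (closer to zero) than distant ones.
print(similarity([0.0, 0.0], [1.0, 1.0]))  # -2.0
print(similarity([0.0, 0.0], [3.0, 4.0]))  # -25.0
```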


Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions.[3]
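
As an illustrative sketch (the vector values are made up, and both vectors are assumed nonzero), cosine similarity of two term-count vectors can be computed as:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between a and b; ranges over [-1, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two documents represented as term-count vectors in the vector space model.
doc1 = [2, 1, 0, 3]
doc2 = [1, 1, 0, 2]
print(cosine_similarity(doc1, doc2))  # ~0.98: very similar term profiles
```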






Use in clustering


In spectral clustering, a similarity, or affinity, measure is used to transform data to overcome difficulties related to lack of convexity in the shape of the data distribution.[4] The measure gives rise to an $n \times n$ similarity matrix for a set of $n$ points, where the entry $(i, j)$ in the matrix can be simply the (negative of the) Euclidean distance between points $i$ and $j$, or it can be a more complex measure of distance such as the Gaussian $e^{-\lVert s_1 - s_2 \rVert^2 / 2\sigma^2}$.[4] Further modifying this result with network analysis techniques is also common.[5]
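
A minimal sketch of constructing such a Gaussian affinity matrix (the function name, example points, and σ value are illustrative):

```python
import numpy as np

def gaussian_affinity(points, sigma=1.0):
    """n x n matrix with A[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    X = np.asarray(points, dtype=float)
    # Pairwise squared Euclidean distances via broadcasting.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

points = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
A = gaussian_affinity(points)
print(np.round(A, 3))  # near-1 entries for close pairs, near-0 for distant pairs
```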



Use in sequence alignment


Similarity matrices are used in sequence alignment. Higher scores are given to more-similar characters, and lower or negative scores to dissimilar characters.


Nucleotide similarity matrices are used to align nucleic acid sequences. Because there are only four nucleotides commonly found in DNA (Adenine (A), Cytosine (C), Guanine (G) and Thymine (T)), nucleotide similarity matrices are much simpler than protein similarity matrices. For example, a simple matrix will assign identical bases a score of +1 and non-identical bases a score of −1. A more complicated matrix would give a higher score to transitions (changes from a pyrimidine such as C or T to another pyrimidine, or from a purine such as A or G to another purine) than to transversions (from a pyrimidine to a purine or vice versa).
The match/mismatch ratio of the matrix sets the target evolutionary distance.[6][7] The +1/−3 DNA matrix used by BLASTN is best suited for finding matches between sequences that are 99% identical; a +1/−1 (or +4/−4) matrix is much more suited to sequences with about 70% similarity. Matrices for lower similarity sequences require longer sequence alignments.
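
The following sketch illustrates the idea (the score values here are illustrative, not the BLASTN defaults):

```python
PURINES = {"A", "G"}

def nucleotide_score(a, b, match=1, transition=-1, transversion=-2):
    """Score a pair of bases; transitions are penalised less than transversions."""
    if a == b:
        return match
    # Transition: both bases are purines, or both are pyrimidines.
    if (a in PURINES) == (b in PURINES):
        return transition
    return transversion

def alignment_score(seq1, seq2):
    """Sum of per-position scores for an ungapped, equal-length alignment."""
    return sum(nucleotide_score(a, b) for a, b in zip(seq1, seq2))

print(alignment_score("ACGT", "ACGT"))  # 4: four matches
print(alignment_score("ACGT", "GCGT"))  # 2: one A->G transition
```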


Amino acid similarity matrices are more complicated, because the genetic code encodes 20 amino acids and so there are many more possible substitutions. The similarity matrix for amino acids therefore contains 400 entries (although it is usually symmetric). The first approach scored all amino acid changes equally. A later refinement determined amino acid similarities based on how many base changes were required to change a codon to code for a given amino acid. This model is better, but it does not take into account the selective pressure on amino acid changes. Better models take into account the chemical properties of amino acids.


One approach has been to empirically generate the similarity matrices. The Dayhoff method used phylogenetic trees and sequences taken from species on the tree. This approach has given rise to the PAM series of matrices. PAM matrices are labelled based on how many accepted point mutations (amino acid changes) have occurred per 100 amino acids.
While the PAM matrices benefit from having a well understood evolutionary model, they are most useful at short evolutionary distances (PAM10–PAM120). At long evolutionary distances, for example PAM250 or 20% identity, it has been shown that the BLOSUM matrices are much more effective.


The BLOSUM series of matrices was generated by comparing a number of divergent sequences. BLOSUM matrices are labelled by the minimum percentage identity of the sequence blocks used to construct them, so a lower BLOSUM number corresponds to a higher PAM number (a greater evolutionary distance).
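
As a hedged sketch of looking up such scores in practice (this assumes Biopython is installed; `substitution_matrices.load` is available in Biopython 1.75 and later):

```python
from Bio.Align import substitution_matrices

# Load the standard BLOSUM62 matrix shipped with Biopython.
blosum62 = substitution_matrices.load("BLOSUM62")

# Identical or chemically similar residues score high; dissimilar pairs score low.
print(blosum62["W", "W"])  # 11: tryptophan is rarely substituted
print(blosum62["L", "I"])  # 2: leucine and isoleucine are chemically similar
print(blosum62["W", "G"])  # -2: a chemically dissimilar pair
```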



See also



  • Affinity propagation


  • Recurrence plot, a visualisation tool of recurrences in dynamical (and other) systems

  • Self-similarity matrix

  • Semantic similarity

  • String metric



References





  1. ^ "What is an Affinity Matrix?". deepai.org..mw-parser-output cite.citation{font-style:inherit}.mw-parser-output q{quotes:"""""""'""'"}.mw-parser-output code.cs1-code{color:inherit;background:inherit;border:inherit;padding:inherit}.mw-parser-output .cs1-lock-free a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-lock-limited a,.mw-parser-output .cs1-lock-registration a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-lock-subscription a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration{color:#555}.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration span{border-bottom:1px dotted;cursor:help}.mw-parser-output .cs1-hidden-error{display:none;font-size:100%}.mw-parser-output .cs1-visible-error{font-size:100%}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-format{font-size:95%}.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-left{padding-left:0.2em}.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-right{padding-right:0.2em}


  2. ^ Frey, Brendan J.; Dueck, Delbert (2007). "Clustering by passing messages between data points". Science. 315: 972–976. doi:10.1126/science.1136800. PMID 17218491.


  3. ^ Vert, Jean-Philippe; Tsuda, Koji; Schölkopf, Bernhard (2004). "A primer on kernel methods". Kernel Methods in Computational Biology (PDF).


  4. ^ Ng, A.Y.; Jordan, M.I.; Weiss, Y. (2001), "On Spectral Clustering: Analysis and an Algorithm" (PDF), Advances in Neural Information Processing Systems, MIT Press, 14: 849–856


  5. ^ Li, Xin-Ye; Guo, Li-Jie (2012), "Constructing affinity matrix in spectral clustering based on neighbor propagation", Neurocomputing, 97: 125–130, doi:10.1016/j.neucom.2012.06.023


  6. ^ States, D; Gish, W; Altschul, S (1991). "Improved sensitivity of nucleic acid database searches using application-specific scoring matrices". Methods: a companion to methods in enzymology. 3 (1): 66. doi:10.1016/S1046-2023(05)80165-3.


  7. ^ Eddy, Sean R. (2004). "Where did the BLOSUM62 alignment score matrix come from?" (PDF). Nature Biotechnology. 22 (8): 1035–6. doi:10.1038/nbt0804-1035. PMID 15286655. Archived from the original (PDF) on 2006-09-03.




  • F. Gregory Ashby; Daniel M. Ennis (2007). "Similarity measures". Scholarpedia. 2 (12). doi:10.4249/scholarpedia.4116.


