
Distance Metrics.

Compute the distance between two items (usually strings). As metrics, they must satisfy the following three requirements:

  1. d(a, a) = 0
  2. d(a, b) >= 0
  3. d(a, c) <= d(a, b) + d(b, c)
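As an illustration, the simplest metric in this module, binary distance, can be checked against these three requirements over a small sample of labels (a minimal sketch, not the module's test suite):

```python
from itertools import product

def binary_distance(label1, label2):
    """0.0 if the labels are identical, 1.0 if they are different."""
    return 0.0 if label1 == label2 else 1.0

labels = [1, 2, 3]
for a in labels:
    assert binary_distance(a, a) == 0.0                    # 1. d(a, a) = 0
for a, b in product(labels, labels):
    assert binary_distance(a, b) >= 0.0                    # 2. d(a, b) >= 0
for a, b, c in product(labels, labels, labels):
    assert (binary_distance(a, c)
            <= binary_distance(a, b) + binary_distance(b, c))  # 3. triangle inequality
```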
Function binary_distance: Simple equality test.
Function custom_distance: Undocumented
Function demo: Undocumented
Function edit_distance: Calculate the Levenshtein edit-distance between two strings. ...
Function edit_distance_align: Calculate the minimum Levenshtein edit-distance based alignment mapping between two strings. ...
Function fractional_presence: Undocumented
Function interval_distance: Krippendorff's interval distance metric.
Function jaccard_distance: Distance metric comparing set-similarity.
Function jaro_similarity: Computes the Jaro similarity between 2 sequences.
Function jaro_winkler_similarity: The Jaro-Winkler similarity, an extension of the Jaro similarity.
Function masi_distance: Distance metric that takes into account partial agreement when multiple labels are assigned.
Function presence: Higher-order function to test presence of a given label.
Function _edit_dist_backtrace: Undocumented
Function _edit_dist_init: Undocumented
Function _edit_dist_step: Undocumented
def binary_distance(label1, label2): (source)

Simple equality test.

0.0 if the labels are identical, 1.0 if they are different.

>>> from nltk.metrics import binary_distance
>>> binary_distance(1,1)
0.0
>>> binary_distance(1,3)
1.0
def custom_distance(file): (source)

Undocumented

def demo(): (source)

Undocumented

def edit_distance(s1, s2, substitution_cost=1, transpositions=False): (source)

Calculate the Levenshtein edit-distance between two strings. The edit distance is the number of characters that need to be substituted, inserted, or deleted, to transform s1 into s2. For example, transforming "rain" to "shine" requires three steps, consisting of two substitutions and one insertion: "rain" -> "sain" -> "shin" -> "shine". These operations could have been done in other orders, but at least three steps are needed.

Allows specifying the cost of substitution edits (e.g., "a" -> "b"), because sometimes it makes sense to assign greater penalties to substitutions.

This also optionally allows transposition edits (e.g., "ab" -> "ba"), though this is disabled by default.

:rtype: int

Parameters
  s1 (str): The first string to be analysed
  s2 (str): The second string to be analysed
  substitution_cost (int): The cost of a substitution edit (default 1)
  transpositions (bool): Whether to allow transposition edits
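The recurrence described above can be sketched as a standard dynamic-programming table (an illustrative reimplementation, not the library's internal code):

```python
def edit_distance(s1, s2, substitution_cost=1, transpositions=False):
    """Levenshtein distance via a full dynamic-programming table."""
    len1, len2 = len(s1), len(s2)
    # lev[i][j] holds the cost of transforming s1[:i] into s2[:j].
    lev = [[0] * (len2 + 1) for _ in range(len1 + 1)]
    for i in range(len1 + 1):
        lev[i][0] = i                                  # delete i characters from s1
    for j in range(len2 + 1):
        lev[0][j] = j                                  # insert j characters into s1
    for i in range(1, len1 + 1):
        for j in range(1, len2 + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else substitution_cost
            best = min(lev[i - 1][j] + 1,              # deletion
                       lev[i][j - 1] + 1,              # insertion
                       lev[i - 1][j - 1] + cost)       # substitution / match
            if (transpositions and i > 1 and j > 1
                    and s1[i - 2] == s2[j - 1] and s1[i - 1] == s2[j - 2]):
                best = min(best, lev[i - 2][j - 2] + 1)  # transposition
            lev[i][j] = best
    return lev[len1][len2]
```

With this sketch, edit_distance("rain", "shine") returns 3, matching the worked example above.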
def edit_distance_align(s1, s2, substitution_cost=1): (source)

Calculate the minimum Levenshtein edit-distance based alignment mapping between two strings. The alignment finds the mapping from string s1 to s2 that minimizes the edit distance cost. For example, mapping "rain" to "shine" would involve 2 substitutions, 2 matches and an insertion, resulting in the following mapping: [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (4, 5)]. NB: (0, 0) is the start state without any letters associated. See more: https://web.stanford.edu/class/cs124/lec/med.pdf

In case of multiple valid minimum-distance alignments, the backtrace has the following operation precedence:

  1. Skip s1 character
  2. Skip s2 character
  3. Substitute s1 and s2 characters

The backtrace is carried out in reverse string order.

This function does not support transposition.

:rtype: List[Tuple[int, int]]

Parameters
  s1 (str): The first string to be aligned
  s2 (str): The second string to be aligned
  substitution_cost (int): The cost of a substitution edit (default 1)
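The backtrace that recovers such an alignment from the DP table can be sketched as follows (illustrative, applying the precedence order above in reverse string order):

```python
def edit_distance_align(s1, s2, substitution_cost=1):
    """Return a minimum-cost alignment as (i, j) index pairs, starting at (0, 0)."""
    len1, len2 = len(s1), len(s2)
    lev = [[0] * (len2 + 1) for _ in range(len1 + 1)]
    for i in range(len1 + 1):
        lev[i][0] = i
    for j in range(len2 + 1):
        lev[0][j] = j
    for i in range(1, len1 + 1):
        for j in range(1, len2 + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else substitution_cost
            lev[i][j] = min(lev[i - 1][j] + 1,         # skip s1 character
                            lev[i][j - 1] + 1,         # skip s2 character
                            lev[i - 1][j - 1] + cost)  # substitute / match
    # Backtrace in reverse, preferring: 1. skip s1, 2. skip s2, 3. substitute.
    i, j = len1, len2
    alignment = [(i, j)]
    while i > 0 or j > 0:
        if i > 0 and lev[i][j] == lev[i - 1][j] + 1:
            i -= 1
        elif j > 0 and lev[i][j] == lev[i][j - 1] + 1:
            j -= 1
        else:
            i, j = i - 1, j - 1
        alignment.append((i, j))
    return alignment[::-1]
```

For "rain" and "shine" this reproduces the mapping shown above, [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (4, 5)].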
def fractional_presence(label): (source)

Undocumented

def interval_distance(label1, label2): (source)

Krippendorff's interval distance metric

>>> from nltk.metrics import interval_distance
>>> interval_distance(1,10)
81

Krippendorff 1980, Content Analysis: An Introduction to its Methodology
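Concretely, the metric is the squared numeric difference between the labels, so interval_distance(1, 10) = (1 - 10)^2 = 81 as in the doctest. A minimal sketch:

```python
def interval_distance(label1, label2):
    """Krippendorff's interval distance: squared difference of numeric labels."""
    return (label1 - label2) ** 2
```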

def jaccard_distance(label1, label2): (source)

Distance metric comparing set-similarity.
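Concretely, the Jaccard distance is 1 minus the ratio of intersection size to union size of the two label sets. A minimal sketch of that formula:

```python
def jaccard_distance(label1, label2):
    """1 - |A intersect B| / |A union B| for two sets of labels."""
    union = label1 | label2
    if not union:
        return 0.0  # both sets empty: treat as identical
    return (len(union) - len(label1 & label2)) / len(union)
```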

def jaro_similarity(s1, s2): (source)

Computes the Jaro similarity between 2 sequences from:

Matthew A. Jaro (1989). Advances in record linkage methodology as applied to the 1985 census of Tampa Florida. Journal of the American Statistical Association. 84 (406): 414-20.

The Jaro similarity measures agreement between two words based on the number of matching characters and the number of transpositions among them. The Jaro similarity formula from https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance :

jaro_sim = 0 if m = 0 else (1/3) * (m/|s_1| + m/|s_2| + (m-t)/m)
where:
  • |s_i| is the length of string s_i
  • m is the number of matching characters
  • t is half the number of matched characters that appear in a different order (transpositions).
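The formula can be sketched directly (an illustrative reimplementation, not the library's code; characters count as matching when equal and within a window of floor(max(|s1|, |s2|)/2) - 1 positions of each other, per the standard definition):

```python
def jaro_similarity(s1, s2):
    """Jaro similarity per the formula above."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1      # search window for matches
    flags1 = [False] * len1
    flags2 = [False] * len2
    m = 0                                  # number of matching characters
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not flags2[j] and s2[j] == ch:
                flags1[i] = flags2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # t = half the number of matched characters that are out of order.
    matched1 = [ch for ch, f in zip(s1, flags1) if f]
    matched2 = [ch for ch, f in zip(s2, flags2) if f]
    t = sum(a != b for a, b in zip(matched1, matched2)) / 2
    return (m / len1 + m / len2 + (m - t) / m) / 3
```

This sketch reproduces the published values used in the tests below, e.g. 0.822 for ("DWAYNE", "DUANE") and 0.944 for ("MARHTA", "MARTHA").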
def jaro_winkler_similarity(s1, s2, p=0.1, max_l=4): (source)

The Jaro Winkler distance is an extension of the Jaro similarity in:

William E. Winkler. 1990. String Comparator Metrics and Enhanced Decision Rules in the Fellegi-Sunter Model of Record Linkage. Proceedings of the Section on Survey Research Methods. American Statistical Association: 354-359.

such that:

jaro_winkler_sim = jaro_sim + ( l * p * (1 - jaro_sim) )

where,

  • jaro_sim is the output from the Jaro similarity, see jaro_similarity()
  • l is the length of the common prefix at the start of the strings; this implementation caps l at an upper bound (max_l), for which a common value is 4
  • p is the constant scaling factor to overweigh common prefixes. The Jaro-Winkler similarity falls within the [0, 1] bound, given that max(p) <= 0.25; the default is p = 0.1 as in Winkler (1990)
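Given a Jaro score, the Winkler adjustment is a one-liner over the capped common-prefix length. The helper name winkler_adjustment below is hypothetical (chosen for this sketch; the library exposes jaro_winkler_similarity instead), and jaro_sim is assumed to be a score computed as in jaro_similarity():

```python
def winkler_adjustment(s1, s2, jaro_sim, p=0.1, max_l=4):
    """Boost jaro_sim by the common-prefix length l, capped at max_l."""
    l = 0
    for c1, c2 in zip(s1, s2):
        if c1 != c2 or l == max_l:
            break
        l += 1
    return jaro_sim + l * p * (1 - jaro_sim)
```

For example, with the exact Jaro score 13/15 for ("TANYA", "TONYA"), l = 1 and the adjusted score is 13/15 + 1 * 0.1 * (1 - 13/15) = 0.88, matching the table below.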

Test using outputs from https://www.census.gov/srd/papers/pdf/rr93-8.pdf from "Table 5 Comparison of String Comparators Rescaled between 0 and 1"

>>> winkler_examples = [("billy", "billy"), ("billy", "bill"), ("billy", "blily"),
... ("massie", "massey"), ("yvette", "yevett"), ("billy", "bolly"), ("dwayne", "duane"),
... ("dixon", "dickson"), ("billy", "susan")]
>>> winkler_scores = [1.000, 0.967, 0.947, 0.944, 0.911, 0.893, 0.858, 0.853, 0.000]
>>> jaro_scores =    [1.000, 0.933, 0.933, 0.889, 0.889, 0.867, 0.822, 0.790, 0.000]
One way to match the values in Winkler's paper is to provide a different p scaling factor for different pairs of strings, e.g.:

>>> p_factors = [0.1, 0.125, 0.20, 0.125, 0.20, 0.20, 0.20, 0.15, 0.1]
>>> for (s1, s2), jscore, wscore, p in zip(winkler_examples, jaro_scores, winkler_scores, p_factors):
...     assert round(jaro_similarity(s1, s2), 3) == jscore
...     assert round(jaro_winkler_similarity(s1, s2, p=p), 3) == wscore

Test using outputs from https://www.census.gov/srd/papers/pdf/rr94-5.pdf from "Table 2.1. Comparison of String Comparators Using Last Names, First Names, and Street Names"

>>> winkler_examples = [('SHACKLEFORD', 'SHACKELFORD'), ('DUNNINGHAM', 'CUNNIGHAM'),
... ('NICHLESON', 'NICHULSON'), ('JONES', 'JOHNSON'), ('MASSEY', 'MASSIE'),
... ('ABROMS', 'ABRAMS'), ('HARDIN', 'MARTINEZ'), ('ITMAN', 'SMITH'),
... ('JERALDINE', 'GERALDINE'), ('MARHTA', 'MARTHA'), ('MICHELLE', 'MICHAEL'),
... ('JULIES', 'JULIUS'), ('TANYA', 'TONYA'), ('DWAYNE', 'DUANE'), ('SEAN', 'SUSAN'),
... ('JON', 'JOHN'), ('JON', 'JAN'), ('BROOKHAVEN', 'BRROKHAVEN'),
... ('BROOK HALLOW', 'BROOK HLLW'), ('DECATUR', 'DECATIR'), ('FITZRUREITER', 'FITZENREITER'),
... ('HIGBEE', 'HIGHEE'), ('HIGBEE', 'HIGVEE'), ('LACURA', 'LOCURA'), ('IOWA', 'IONA'), ('1ST', 'IST')]
>>> jaro_scores =   [0.970, 0.896, 0.926, 0.790, 0.889, 0.889, 0.722, 0.467, 0.926,
... 0.944, 0.869, 0.889, 0.867, 0.822, 0.783, 0.917, 0.000, 0.933, 0.944, 0.905,
... 0.856, 0.889, 0.889, 0.889, 0.833, 0.000]
>>> winkler_scores = [0.982, 0.896, 0.956, 0.832, 0.944, 0.922, 0.722, 0.467, 0.926,
... 0.961, 0.921, 0.933, 0.880, 0.858, 0.805, 0.933, 0.000, 0.947, 0.967, 0.943,
... 0.913, 0.922, 0.922, 0.900, 0.867, 0.000]
One way to match the values in Winkler's paper is to provide a different p scaling factor for different pairs of strings, e.g.:

>>> p_factors = [0.1, 0.1, 0.1, 0.1, 0.125, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.20,
... 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]

>>> for (s1, s2), jscore, wscore, p in zip(winkler_examples, jaro_scores, winkler_scores, p_factors):
...     if (s1, s2) in [('JON', 'JAN'), ('1ST', 'IST')]:
...         continue  # Skip bad examples from the paper.
...     assert round(jaro_similarity(s1, s2), 3) == jscore
...     assert round(jaro_winkler_similarity(s1, s2, p=p), 3) == wscore

This test-case demonstrates that the output of the Jaro-Winkler similarity depends on the product l * p, not on the product max_l * p: here max_l * p > 1, yet l * p <= 1.

>>> round(jaro_winkler_similarity('TANYA', 'TONYA', p=0.1, max_l=100), 3)
0.88
def masi_distance(label1, label2): (source)

Distance metric that takes into account partial agreement when multiple labels are assigned.

>>> from nltk.metrics import masi_distance
>>> masi_distance(set([1, 2]), set([1, 2, 3, 4]))
0.665

Passonneau 2006, Measuring Agreement on Set-Valued Items (MASI) for Semantic and Pragmatic Annotation.
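The score combines the Jaccard ratio with a monotonicity weight m. A sketch, assuming the weight constants 1, 0.67, 0.33 and 0 for identical, subset, overlapping and disjoint sets respectively (these reproduce the 0.665 value in the doctest above):

```python
def masi_distance(label1, label2):
    """1 - (|A intersect B| / |A union B|) * m, with m weighting the set relation."""
    if not label1 and not label2:
        return 0.0  # two empty sets: identical
    len_intersection = len(label1 & label2)
    len_union = len(label1 | label2)
    if label1 == label2:
        m = 1.0
    elif label1 <= label2 or label2 <= label1:
        m = 0.67    # one set is a subset of the other
    elif len_intersection > 0:
        m = 0.33    # partial overlap
    else:
        m = 0.0     # disjoint
    return 1 - (len_intersection / len_union) * m
```

For {1, 2} vs {1, 2, 3, 4}: the intersection/union ratio is 2/4 and the subset weight is 0.67, giving 1 - 0.5 * 0.67 = 0.665.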

def presence(label): (source)

Higher-order function to test presence of a given label

def _edit_dist_backtrace(lev): (source)

Undocumented

def _edit_dist_init(len1, len2): (source)

Undocumented

def _edit_dist_step(lev, i, j, s1, s2, substitution_cost=1, transpositions=False): (source)

Undocumented