class PunktTrainer(PunktBaseClass):
Constructor: PunktTrainer(train_text, verbose, lang_vars, token_cls)
Learns parameters used in Punkt sentence boundary detection.
Method | __init__ |
Undocumented |
Method | finalize_training |
Uses data that has been gathered in training to determine likely collocations and sentence starters. |
Method | find_abbrev_types |
Recalculates abbreviations given type frequencies, despite no prior determination of abbreviations. This fails to include abbreviations otherwise found as "rare". |
Method | freq_threshold |
Allows memory use to be reduced after much training by removing data about rare tokens that are unlikely to have a statistical effect with further training. Entries occurring above the given thresholds will be retained. |
Method | get_params |
Calculates and returns parameters for sentence boundary detection as derived from training. |
Method | train |
Collects training data from a given text. If finalize is True, it will determine all the parameters for sentence boundary detection. If not, this will be delayed until get_params() or finalize_training() is called. If verbose is True, abbreviations found will be listed. |
Method | train_tokens |
Collects training data from a given list of tokens. |
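A typical training round ties these methods together; a usage sketch, assuming the standard `nltk.tokenize.punkt` API (the toy corpus is illustrative, and real training needs far more text):

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

corpus = (
    "Dr. Watson arrived at noon. He greeted Mr. Holmes warmly. "
    "Mr. Holmes said nothing. Dr. Watson sat down and waited."
)

trainer = PunktTrainer()
trainer.train(corpus, finalize=False)  # gather counts only
trainer.finalize_training()            # find collocations / sent starters
params = trainer.get_params()

# PunktSentenceTokenizer accepts pre-computed parameters in place of text.
tokenizer = PunktSentenceTokenizer(params)
sentences = tokenizer.tokenize("It was late. Everyone went home.")
```

Deferring finalization with finalize=False lets you call train() repeatedly over multiple texts and pay the collocation/sentence-starter search cost only once at the end.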
Constant | ABBREV |
cut-off value for deciding whether a token is an abbreviation |
Constant | ABBREV_BACKOFF |
upper cut-off for Mikheev's (2002) abbreviation detection algorithm |
Constant | COLLOCATION |
minimal log-likelihood value that two tokens need to be considered as a collocation |
Constant | IGNORE_ABBREV_PENALTY |
allows the disabling of the abbreviation penalty heuristic, which exponentially disadvantages words that are found at times without a final period. |
Constant | INCLUDE_ABBREV_COLLOCS |
this includes as potential collocations all word pairs where the first word is an abbreviation. Such collocations override the orthographic heuristic, but not the sentence starter heuristic. This is overridden by INCLUDE_ALL_COLLOCS, and if both are false, only collocations with initials and ordinals are considered. |
Constant | INCLUDE_ALL_COLLOCS |
this includes as potential collocations all word pairs where the first word ends in a period. It may be useful in corpora where there is a lot of variation that makes abbreviations like Mr difficult to identify. |
Constant | MIN_COLLOC_FREQ |
this sets a minimum bound on the number of times a bigram needs to appear before it can be considered a collocation, in addition to log likelihood statistics. This is useful when INCLUDE_ALL_COLLOCS is True. |
Constant | SENT_STARTER |
minimal log-likelihood value that a token requires to be considered as a frequent sentence starter |
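The constants above are class attributes, so they can be overridden per trainer instance before training begins. A short sketch (assuming the standard `nltk.tokenize.punkt` API; the corpus string is purely illustrative):

```python
from nltk.tokenize.punkt import PunktTrainer

trainer = PunktTrainer()
# Consider every word pair where the first word ends in a period as a
# collocation candidate...
trainer.INCLUDE_ALL_COLLOCS = True
# ...but require a bigram to occur at least 3 times before the
# log-likelihood test may mark it as a collocation.
trainer.MIN_COLLOC_FREQ = 3
trainer.train(
    "Mr. Smith left at noon. Mr. Smith returned before dark.",
    finalize=True,
)
```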
Static Method | _col_log_likelihood |
Computes the log-likelihood estimate described in algorithms 6 and 7 of the original paper. Unlike _dunning_log_likelihood, this returns the original (unmodified) Dunning log-likelihood values. |
Static Method | _dunning_log_likelihood |
Calculates the modified Dunning log-likelihood ratio scores for abbreviation candidates. The details of how this works are available in the paper. |
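The modified score compares a null hypothesis (a period follows the type at the corpus base rate) against an abbreviation hypothesis (a period almost always follows the type). A self-contained sketch of that comparison, with the fixed 0.99 abbreviation probability used by the NLTK source (treat this as an approximation of the library's internal static method, not the method itself):

```python
import math

def dunning_log_likelihood(count_a, count_b, count_ab, n):
    """Modified Dunning log-likelihood for abbreviation candidates.

    count_a  -- occurrences of the candidate type (with or without period)
    count_b  -- occurrences of a period token
    count_ab -- occurrences of the type immediately followed by a period
    n        -- total number of tokens
    """
    # Null hypothesis: periods follow the type at the base rate p1.
    p1 = count_b / n
    # Alternative hypothesis: the type is an abbreviation, so a period
    # almost always follows it (fixed, rather than estimated from data).
    p2 = 0.99

    null_hypo = (count_ab * math.log(p1)
                 + (count_a - count_ab) * math.log(1.0 - p1))
    alt_hypo = (count_ab * math.log(p2)
                + (count_a - count_ab) * math.log(1.0 - p2))
    # Positive scores favour the abbreviation hypothesis.
    return -2.0 * (null_hypo - alt_hypo)
```

A type that is almost always period-final scores high; one that is rarely period-final scores low (negative).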
Method | _find_collocations |
Generates likely collocations and their log-likelihood. |
Method | _find_sent_starters |
Uses collocation heuristics for each candidate token to determine if it frequently starts sentences. |
Method | _freq_threshold |
Returns a FreqDist containing only data with counts below a given threshold, as well as a mapping (None -> count_removed). |
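The trimming step can be sketched with a plain Counter standing in for FreqDist (an illustrative helper, not the NLTK API):

```python
from collections import Counter

def freq_threshold(fdist, threshold):
    """Keep only entries whose count meets the threshold; record the
    number of removed entries under the key None."""
    kept = Counter()
    num_removed = 0
    for token_type, count in fdist.items():
        if count < threshold:
            num_removed += 1
        else:
            kept[token_type] = count
    kept[None] += num_removed
    return kept
```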
Method | _get_orthography_data |
Collect information about whether each token type occurs with different case patterns (i) overall, (ii) at sentence-initial positions, and (iii) at sentence-internal positions. |
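A sketch of the bookkeeping involved, using bit flags in the style of the punkt module's orthographic-context constants (flag names and values here are illustrative, and the position labels are simplified inputs rather than the trainer's internal token stream):

```python
from collections import defaultdict

# Orthographic-context flags: (position, case) combinations.
ORTHO_BEG_UC, ORTHO_MID_UC, ORTHO_UNK_UC = 1 << 1, 1 << 2, 1 << 3
ORTHO_BEG_LC, ORTHO_MID_LC, ORTHO_UNK_LC = 1 << 4, 1 << 5, 1 << 6

def collect_ortho_data(tokens):
    """OR together one flag per (position, case) context in which each
    case-normalized type was seen.

    tokens -- iterable of (word, position) pairs, position being one of
              'initial', 'internal', 'unknown'.
    """
    flags = {
        ("initial", True): ORTHO_BEG_UC, ("internal", True): ORTHO_MID_UC,
        ("unknown", True): ORTHO_UNK_UC, ("initial", False): ORTHO_BEG_LC,
        ("internal", False): ORTHO_MID_LC, ("unknown", False): ORTHO_UNK_LC,
    }
    ortho = defaultdict(int)
    for word, pos in tokens:
        ortho[word.lower()] |= flags[(pos, word[:1].isupper())]
    return ortho
```

The resulting bitmasks let later heuristics ask questions like "has this type ever been seen lower-case at the start of a sentence?" with a single AND.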
Method | _get_sentbreak_count |
Returns the number of sentence breaks marked in a given set of augmented tokens. |
Method | _is_potential_collocation |
Returns True if the pair of tokens may form a collocation given log-likelihood statistics. |
Method | _is_potential_sent_starter |
Returns True given a token and the token that precedes it if it seems clear that the token is beginning a sentence. |
Method | _is_rare_abbrev_type |
Counts a word type as a rare abbreviation if it is not already marked as an abbreviation, it occurs fewer than ABBREV_BACKOFF times, and it is either followed by a sentence-internal punctuation mark, or followed by a lower-case word that sometimes appears with upper case but never occurs with lower case at the beginning of sentences. |
Method | _reclassify_abbrev_types |
(Re)classifies each given token if (i) it is period-final and not a known abbreviation, or (ii) it is not period-final and is otherwise a known abbreviation, by checking whether its previous classification still holds according to the heuristics of section 3. Yields triples (abbr, score, is_add), where abbr is the type in question, score is its log-likelihood with penalties applied, and is_add specifies whether the type is a candidate for inclusion or exclusion as an abbreviation, such that (is_add and score >= 0.3) suggests a new abbreviation, and (not is_add and score < 0.3) suggests excluding one. |
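The yielded (abbr, score, is_add) triples translate into add/remove decisions against the 0.3 cut-off; a tiny illustrative helper (not part of the NLTK API) makes the rule explicit:

```python
def abbrev_decision(score, is_add, threshold=0.3):
    """Interpret a (score, is_add) pair from reclassification: an
    inclusion candidate is added when its penalized log-likelihood
    reaches the cut-off; an existing abbreviation is dropped when it
    falls below it."""
    if is_add and score >= threshold:
        return "add"
    if not is_add and score < threshold:
        return "remove"
    return "no change"
```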
Method | _train_tokens |
Undocumented |
Method | _unique_types |
Undocumented |
Instance Variable | _collocation_fdist |
A frequency distribution giving the frequency of all bigrams in the training data where the first word ends in a period. Bigrams are encoded as tuples of word types. Especially common collocations are extracted from this frequency distribution, and stored in _params.collocations. |
Instance Variable | _finalized |
A flag as to whether the training has been finalized by finding collocations and sentence starters, or whether finalize_training() still needs to be called. |
Instance Variable | _num_period_toks |
The number of words ending in period in the training data. |
Instance Variable | _sent_starter_fdist |
A frequency distribution giving the frequency of all words that occur in the training data at the beginning of a sentence (after the first pass of annotation). Especially common sentence starters are extracted from this frequency distribution, and stored in _params.sent_starters. |
Instance Variable | _sentbreak_count |
The total number of sentence breaks identified in training, used for calculating the frequent sentence starter heuristic. |
Instance Variable | _type_fdist |
A frequency distribution giving the frequency of each case-normalized token type in the training data. |
Inherited from PunktBaseClass:
Method | _annotate_first_pass |
Perform the first pass of annotation, which makes decisions based purely on the word type of each word: '?', '!' and '.' mark sentence breaks, runs of two or more periods mark an ellipsis, and a period-final word is marked as an abbreviation if it is a known abbreviation and as a sentence break otherwise. |
Method | _first_pass_annotation |
Performs type-based annotation on a single token. |
Method | _tokenize_words |
Divide the given text into tokens, using the punkt word segmentation regular expression, and generate the resulting list of tokens augmented as three-tuples with two boolean values for whether the given token occurs at the start of a paragraph or a new line, respectively. |
Instance Variable | _lang_vars |
Undocumented |
Instance Variable | _params |
Undocumented |
Instance Variable | _Token |
The collection of parameters that determines the behavior of the punkt tokenizer. |