class documentation

class PorterStemmer(StemmerI): (source)

Known subclasses: nltk.stem.snowball.PorterStemmer

Constructor: PorterStemmer(mode)

A word stemmer based on the Porter stemming algorithm.

    Porter, M. "An algorithm for suffix stripping." Program 14.3 (1980): 130-137.

See http://www.tartarus.org/~martin/PorterStemmer/ for the homepage of the algorithm.

Martin Porter has endorsed several modifications to the Porter algorithm since writing his original paper, and those extensions are included in the implementations on his website. Additionally, others (including NLTK contributors) have proposed further improvements to the algorithm. There are thus three modes, selected by passing the appropriate constant to the `mode` parameter of the class constructor:

PorterStemmer.ORIGINAL_ALGORITHM
    An implementation that is faithful to the original paper. Note that Martin Porter has deprecated this version of the algorithm. He distributes implementations of the Porter Stemmer in many languages, hosted at http://www.tartarus.org/~martin/PorterStemmer/, and all of those implementations include his extensions. He strongly recommends against using the original, published version of the algorithm; only use this mode if you clearly understand why you are choosing to do so.

PorterStemmer.MARTIN_EXTENSIONS
    An implementation that uses only the modifications to the algorithm that are included in the implementations on Martin Porter's website. He has declared the algorithm frozen, so the behaviour of those implementations should never change.

PorterStemmer.NLTK_EXTENSIONS (default)
    An implementation that includes further improvements devised by NLTK contributors or taken from other modified implementations found on the web.

For the best stemming, use the default NLTK_EXTENSIONS mode. However, if you need to match the results of either the original algorithm or one of Martin Porter's hosted versions, for compatibility with an existing implementation or dataset, use one of the other modes instead.
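A brief usage sketch (assuming NLTK is installed; the stem shown follows the default NLTK_EXTENSIONS behaviour):

```python
from nltk.stem.porter import PorterStemmer

# Default mode: NLTK_EXTENSIONS
stemmer = PorterStemmer()
print(stemmer.stem("caresses"))  # caress

# Faithful to the 1980 paper (deprecated by Martin Porter himself)
original = PorterStemmer(mode=PorterStemmer.ORIGINAL_ALGORITHM)

# Matches the implementations hosted on Martin Porter's website
martin = PorterStemmer(mode=PorterStemmer.MARTIN_EXTENSIONS)
```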

Method __init__ Undocumented
Method __repr__ Undocumented
Method stem :param to_lowercase: if `to_lowercase=True`, the word is converted to lowercase before stemming
Constant MARTIN_EXTENSIONS Undocumented
Constant NLTK_EXTENSIONS Undocumented
Constant ORIGINAL_ALGORITHM Undocumented
Instance Variable mode Undocumented
Instance Variable pool Undocumented
Instance Variable vowels Undocumented
Method _apply_rule_list Applies the first applicable suffix-removal rule to the word
Method _contains_vowel Returns True if stem contains a vowel, else False
Method _ends_cvc Implements condition *o from the paper
Method _ends_double_consonant Implements condition *d from the paper
Method _has_positive_measure Undocumented
Method _is_consonant Returns True if word[i] is a consonant, False otherwise
Method _measure Returns the 'measure' of stem, per definition in the paper
Method _replace_suffix Replaces `suffix` of `word` with `replacement`
Method _step1a Implements Step 1a from "An algorithm for suffix stripping"
Method _step1b Implements Step 1b from "An algorithm for suffix stripping"
Method _step1c Implements Step 1c from "An algorithm for suffix stripping"
Method _step2 Implements Step 2 from "An algorithm for suffix stripping"
Method _step3 Implements Step 3 from "An algorithm for suffix stripping"
Method _step4 Implements Step 4 from "An algorithm for suffix stripping"
Method _step5a Implements Step 5a from "An algorithm for suffix stripping"
Method _step5b Implements Step 5b from "An algorithm for suffix stripping"
def __init__(self, mode=NLTK_EXTENSIONS): (source)

Undocumented

def __repr__(self): (source)

Undocumented

def stem(self, word, to_lowercase=True): (source)

:param to_lowercase: if `to_lowercase=True`, the word is converted to lowercase before stemming

MARTIN_EXTENSIONS: str = (source)

Undocumented

Value
'MARTIN_EXTENSIONS'
NLTK_EXTENSIONS: str = (source)

Undocumented

Value
'NLTK_EXTENSIONS'
ORIGINAL_ALGORITHM: str = (source)

Undocumented

Value
'ORIGINAL_ALGORITHM'

mode = (source)

Undocumented

pool: dict = (source)

Undocumented

vowels = (source)

Undocumented

def _apply_rule_list(self, word, rules): (source)

Applies the first applicable suffix-removal rule to the word.

Takes a word and a list of suffix-removal rules, represented as 3-tuples: the first element is the suffix to remove, the second is the string to replace it with, and the third is the condition for the rule to be applicable, or None if the rule is unconditional.
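A hypothetical standalone sketch of that behaviour (the real method works on the same rule shape but uses the class's helpers):

```python
def apply_rule_list(word, rules):
    """Apply the first rule whose suffix matches `word`.

    Each rule is (suffix, replacement, condition); `condition` is a
    predicate on the stem, or None for an unconditional rule.
    """
    for suffix, replacement, condition in rules:
        if word.endswith(suffix):
            stem = word[: len(word) - len(suffix)]
            if condition is None or condition(stem):
                return stem + replacement
            return word  # a suffix matched but its condition failed: stop
    return word

# Step 1a expressed as a rule list: SSES -> SS, IES -> I, SS -> SS, S -> ''
rules = [("sses", "ss", None), ("ies", "i", None), ("ss", "ss", None), ("s", "", None)]
print(apply_rule_list("caresses", rules))  # caress
```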

def _contains_vowel(self, stem): (source)

Returns True if stem contains a vowel, else False

def _ends_cvc(self, word): (source)

Implements condition *o from the paper.

From the paper:

    *o - the stem ends cvc, where the second c is not W, X or Y (e.g. -WIL, -HOP).

def _ends_double_consonant(self, word): (source)

Implements condition *d from the paper.

Returns True if word ends with a double consonant.

def _has_positive_measure(self, stem): (source)

Undocumented

def _is_consonant(self, word, i): (source)

Returns True if word[i] is a consonant, False otherwise.

A consonant is defined in the paper as follows:

    A consonant in a word is a letter other than A, E, I, O or U, and other than Y preceded by a consonant. (The fact that the term `consonant' is defined to some extent in terms of itself does not make it ambiguous.) So in TOY the consonants are T and Y, and in SYZYGY they are S, Z and G. If a letter is not a consonant it is a vowel.
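The definition above translates almost directly into Python; a standalone sketch, not the class's actual method:

```python
def is_consonant(word, i):
    """Return True if word[i] is a consonant per the paper's definition."""
    if word[i] in "aeiou":
        return False
    if word[i] == "y":
        # Y preceded by a consonant is a vowel; otherwise it is a consonant.
        return i == 0 or not is_consonant(word, i - 1)
    return True

# Examples from the paper: in TOY the consonants are T and Y,
# and in SYZYGY they are S, Z and G.
print([c for i, c in enumerate("toy") if is_consonant("toy", i)])        # ['t', 'y']
print([c for i, c in enumerate("syzygy") if is_consonant("syzygy", i)])  # ['s', 'z', 'g']
```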

def _measure(self, stem): (source)

Returns the 'measure' of stem, per definition in the paper.

From the paper:

    A consonant will be denoted by c, a vowel by v. A list ccc... of length greater than 0 will be denoted by C, and a list vvv... of length greater than 0 will be denoted by V. Any word, or part of a word, therefore has one of the four forms:

        CVCV ... C
        CVCV ... V
        VCVC ... C
        VCVC ... V

    These may all be represented by the single form

        [C]VCVC ... [V]

    where the square brackets denote arbitrary presence of their contents. Using (VC){m} to denote VC repeated m times, this may again be written as

        [C](VC){m}[V].

    m will be called the 'measure' of any word or word part when represented in this form. The case m = 0 covers the null word. Here are some examples:

        m=0    TR, EE, TREE, Y, BY.
        m=1    TROUBLE, OATS, TREES, IVY.
        m=2    TROUBLES, PRIVATE, OATEN, ORRERY.
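A rough sketch of how m can be computed: using the paper's consonant test, count vowel-to-consonant transitions, since each completed VC group contributes one to the measure.

```python
def is_consonant(word, i):
    # Consonant test per the paper: not a, e, i, o, u, and not a 'y'
    # that follows a consonant.
    if word[i] in "aeiou":
        return False
    if word[i] == "y":
        return i == 0 or not is_consonant(word, i - 1)
    return True

def measure(stem):
    """Count m in the form [C](VC){m}[V]: each closed VC group adds one."""
    m = 0
    prev_consonant = True  # a leading [C] block does not count
    for i in range(len(stem)):
        cons = is_consonant(stem, i)
        if cons and not prev_consonant:
            m += 1  # a vowel run just ended in a consonant: one VC group
        prev_consonant = cons
    return m

for word in ("tr", "tree", "trouble", "oats", "troubles", "oaten"):
    print(word, measure(word))
```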

def _replace_suffix(self, word, suffix, replacement): (source)

Replaces `suffix` of `word` with `replacement`
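A minimal standalone sketch of such a helper (it assumes the word actually ends with the given suffix, which the rule-matching step guarantees):

```python
def replace_suffix(word, suffix, replacement):
    """Replace `suffix` of `word` with `replacement`."""
    assert word.endswith(suffix), "word must end with the given suffix"
    if suffix == "":
        return word + replacement
    return word[: -len(suffix)] + replacement

print(replace_suffix("ponies", "ies", "i"))  # poni
```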

def _step1a(self, word): (source)

Implements Step 1a from "An algorithm for suffix stripping".

From the paper:

    SSES -> SS    caresses -> caress
    IES  -> I     ponies   -> poni
                  ties     -> ti
    SS   -> SS    caress   -> caress
    S    ->       cats     -> cat
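The rules are tried in order and only the first matching suffix is applied; a minimal sketch of the paper's Step 1a:

```python
def step1a(word):
    # Rules from the paper, tried in order; the first matching suffix wins.
    for suffix, replacement in [("sses", "ss"), ("ies", "i"), ("ss", "ss"), ("s", "")]:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

for w in ("caresses", "ponies", "ties", "caress", "cats"):
    print(w, "->", step1a(w))
```

Note that this follows the published rules; the NLTK_EXTENSIONS mode additionally special-cases short -IES words so that e.g. "ties" stems to "tie" rather than "ti".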

def _step1b(self, word): (source)

Implements Step 1b from "An algorithm for suffix stripping".

From the paper:

    (m>0) EED -> EE    feed      -> feed
                       agreed    -> agree
    (*v*) ED  ->       plastered -> plaster
                       bled      -> bled
    (*v*) ING ->       motoring  -> motor
                       sing      -> sing

If the second or third of the rules in Step 1b is successful, the following is done:

    AT -> ATE    conflat(ed) -> conflate
    BL -> BLE    troubl(ed)  -> trouble
    IZ -> IZE    siz(ed)     -> size
    (*d and not (*L or *S or *Z)) -> single letter
                 hopp(ing)   -> hop
                 tann(ed)    -> tan
                 fall(ing)   -> fall
                 hiss(ing)   -> hiss
                 fizz(ed)    -> fizz
    (m=1 and *o) -> E
                 fail(ing)   -> fail
                 fil(ing)    -> file

The rule to map to a single letter causes the removal of one of the double letter pair. The -E is put back on -AT, -BL and -IZ, so that the suffixes -ATE, -BLE and -IZE can be recognised later. This E may be removed in step 4.

def _step1c(self, word): (source)

Implements Step 1c from "An algorithm for suffix stripping".

From the paper:

    (*v*) Y -> I    happy -> happi
                    sky   -> sky

def _step2(self, word): (source)

Implements Step 2 from "An algorithm for suffix stripping".

From the paper:

    (m>0) ATIONAL -> ATE     relational     -> relate
    (m>0) TIONAL  -> TION    conditional    -> condition
                             rational       -> rational
    (m>0) ENCI    -> ENCE    valenci        -> valence
    (m>0) ANCI    -> ANCE    hesitanci      -> hesitance
    (m>0) IZER    -> IZE     digitizer      -> digitize
    (m>0) ABLI    -> ABLE    conformabli    -> conformable
    (m>0) ALLI    -> AL      radicalli      -> radical
    (m>0) ENTLI   -> ENT     differentli    -> different
    (m>0) ELI     -> E       vileli         -> vile
    (m>0) OUSLI   -> OUS     analogousli    -> analogous
    (m>0) IZATION -> IZE     vietnamization -> vietnamize
    (m>0) ATION   -> ATE     predication    -> predicate
    (m>0) ATOR    -> ATE     operator       -> operate
    (m>0) ALISM   -> AL      feudalism      -> feudal
    (m>0) IVENESS -> IVE     decisiveness   -> decisive
    (m>0) FULNESS -> FUL     hopefulness    -> hopeful
    (m>0) OUSNESS -> OUS     callousness    -> callous
    (m>0) ALITI   -> AL      formaliti      -> formal
    (m>0) IVITI   -> IVE     sensitiviti    -> sensitive
    (m>0) BILITI  -> BLE     sensibiliti    -> sensible

def _step3(self, word): (source)

Implements Step 3 from "An algorithm for suffix stripping".

From the paper:

    (m>0) ICATE -> IC    triplicate  -> triplic
    (m>0) ATIVE ->       formative   -> form
    (m>0) ALIZE -> AL    formalize   -> formal
    (m>0) ICITI -> IC    electriciti -> electric
    (m>0) ICAL  -> IC    electrical  -> electric
    (m>0) FUL   ->       hopeful     -> hope
    (m>0) NESS  ->       goodness    -> good

def _step4(self, word): (source)

Implements Step 4 from "An algorithm for suffix stripping".

From the paper:

    (m>1) AL    ->    revival     -> reviv
    (m>1) ANCE  ->    allowance   -> allow
    (m>1) ENCE  ->    inference   -> infer
    (m>1) ER    ->    airliner    -> airlin
    (m>1) IC    ->    gyroscopic  -> gyroscop
    (m>1) ABLE  ->    adjustable  -> adjust
    (m>1) IBLE  ->    defensible  -> defens
    (m>1) ANT   ->    irritant    -> irrit
    (m>1) EMENT ->    replacement -> replac
    (m>1) MENT  ->    adjustment  -> adjust
    (m>1) ENT   ->    dependent   -> depend
    (m>1 and (*S or *T)) ION ->
                      adoption    -> adopt
    (m>1) OU    ->    homologou   -> homolog
    (m>1) ISM   ->    communism   -> commun
    (m>1) ATE   ->    activate    -> activ
    (m>1) ITI   ->    angulariti  -> angular
    (m>1) OUS   ->    homologous  -> homolog
    (m>1) IVE   ->    effective   -> effect
    (m>1) IZE   ->    bowdlerize  -> bowdler

The suffixes are now removed. All that remains is a little tidying up.

def _step5a(self, word): (source)

Implements Step 5a from "An algorithm for suffix stripping".

From the paper:

    (m>1) E            ->    probate -> probat
                             rate    -> rate
    (m=1 and not *o) E ->    cease   -> ceas

def _step5b(self, word): (source)

Implements Step 5b from "An algorithm for suffix stripping".

From the paper:

    (m > 1 and *d and *L) -> single letter
        controll -> control
        roll     -> roll