class documentation

class WhitespaceTokenizer(RegexpTokenizer):

Tokenize a string on whitespace (space, tab, newline). In general, users should use the string split() method instead.

>>> from nltk.tokenize import WhitespaceTokenizer
>>> s = "Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.\n\nThanks."
>>> WhitespaceTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.',
'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']
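
For comparison, the built-in str.split() with no arguments also splits on runs of whitespace and discards empty strings, so it yields the same tokens for this input:

>>> s.split()
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.',
'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']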
Method __init__: Undocumented
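
The constructor takes no arguments, as the example above shows. A minimal sketch of an equivalent definition, assuming it simply configures the inherited RegexpTokenizer to treat runs of whitespace as gaps between tokens:

class WhitespaceTokenizer(RegexpTokenizer):
    def __init__(self):
        # Assumed configuration: a run of whitespace (\s+) marks a gap
        # between tokens, so gaps=True makes the text between matches
        # the tokens rather than the matches themselves.
        RegexpTokenizer.__init__(self, r"\s+", gaps=True)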

Inherited from RegexpTokenizer:

Method __repr__: Undocumented
Method span_tokenize: Identify the tokens using integer offsets (start_i, end_i), where s[start_i:end_i] is the corresponding token (see the example after this list).
Method tokenize: Return a tokenized copy of s.
Method _check_regexp: Undocumented
Instance Variable _discard_empty: Undocumented
Instance Variable _flags: Undocumented
Instance Variable _gaps: Undocumented
Instance Variable _pattern: Undocumented
Instance Variable _regexp: Undocumented
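
For example, continuing with the string s defined above (the offsets below were worked out by hand for that string, assuming the whitespace-splitting behaviour described above):

>>> list(WhitespaceTokenizer().span_tokenize(s))
[(0, 4), (5, 12), (13, 17), (18, 23), (24, 26), (27, 30), (31, 36), (38, 44),
(45, 48), (49, 51), (52, 55), (56, 58), (59, 64), (66, 73)]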

Inherited from TokenizerI (via RegexpTokenizer):

Method span_tokenize_sents: Apply self.span_tokenize() to each element of strings, i.e. return [self.span_tokenize(s) for s in strings].
Method tokenize_sents: Apply self.tokenize() to each element of strings, i.e. return [self.tokenize(s) for s in strings].
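
For instance, with a hypothetical two-string input (both methods simply map the corresponding single-string method over their input, per the descriptions above):

>>> WhitespaceTokenizer().tokenize_sents(["Good muffins", "Thanks."])
[['Good', 'muffins'], ['Thanks.']]
>>> list(WhitespaceTokenizer().span_tokenize_sents(["Good muffins", "Thanks."]))
[[(0, 4), (5, 12)], [(0, 7)]]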