class documentation
class WhitespaceTokenizer(RegexpTokenizer):
Tokenize a string on whitespace (space, tab, newline). In general, users should use the string split() method instead.
>>> from nltk.tokenize import WhitespaceTokenizer
>>> s = "Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.\n\nThanks."
>>> WhitespaceTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.', 'Please',
'buy', 'me', 'two', 'of', 'them.', 'Thanks.']
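As the note above suggests, plain ``str.split()`` produces the same tokens for this tokenizer. A minimal stdlib-only check of that equivalence (no NLTK required):

```python
import re

s = "Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.\n\nThanks."

# With no arguments, str.split() splits on runs of whitespace (space, tab,
# newline) and drops empty strings -- the same behaviour as tokenizing on
# the gap pattern r"\s+".
tokens = s.split()

# Equivalent regex formulation: the tokens are the runs of non-whitespace.
assert tokens == re.findall(r"\S+", s)
print(tokens)
```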
Method | __init__ | Undocumented
Inherited from RegexpTokenizer:
Method | __repr__ | Undocumented
Method | span_tokenize | Identify the tokens using integer offsets (start_i, end_i), where s[start_i:end_i] is the corresponding token.
Method | tokenize | Return a tokenized copy of s.
Method | _check_regexp | Undocumented
Instance Variable | _discard_empty | Undocumented
Instance Variable | _flags | Undocumented
Instance Variable | _gaps | Undocumented
Instance Variable | _pattern | Undocumented
Instance Variable | _regexp | Undocumented
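The contract of span_tokenize listed above can be sketched with the standard library alone. This is a hypothetical re-implementation for the whitespace case, not NLTK's own code: it yields (start_i, end_i) pairs such that s[start_i:end_i] is the corresponding token.

```python
import re

def whitespace_spans(s):
    # Hypothetical sketch: each run of non-whitespace is a token, and
    # the match object's start/end give the integer offsets.
    for m in re.finditer(r"\S+", s):
        yield (m.start(), m.end())

s = "Good muffins cost $3.88\nin New York."
spans = list(whitespace_spans(s))

# Slicing with the reported offsets recovers exactly the tokens.
tokens = [s[start:end] for start, end in spans]
assert tokens == s.split()
print(spans[:3])  # offsets of the first three tokens
```

Span-based tokenization is useful when you need to map tokens back to their positions in the original string, e.g. for highlighting or alignment.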
Inherited from TokenizerI (via RegexpTokenizer):
Method | span_tokenize_sents | Apply self.span_tokenize() to each element of strings. I.e.: return [self.span_tokenize(s) for s in strings]
Method | tokenize_sents | Apply self.tokenize() to each element of strings. I.e.: return [self.tokenize(s) for s in strings]
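The batch methods inherited from TokenizerI are thin conveniences: they apply the per-string method to each element of a list of strings. A minimal sketch of that contract, with whitespace splitting standing in for self.tokenize() (illustrative code, not NLTK's implementation):

```python
def tokenize_sents(strings):
    # Stand-in for TokenizerI.tokenize_sents: per the docstring, this is
    # just [self.tokenize(s) for s in strings]; here tokenize == str.split.
    return [s.split() for s in strings]

sents = ["Good muffins cost $3.88", "Please buy me two of them."]
print(tokenize_sents(sents))
```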