ProsConsCorpusReader class documentation

Reader for the Pros and Cons sentence dataset.

>>> from nltk.corpus import pros_cons
>>> pros_cons.sents(categories='Cons')
[['East', 'batteries', '!', 'On', '-', 'off', 'switch', 'too', 'easy',
'to', 'maneuver', '.'], ['Eats', '...', 'no', ',', 'GULPS', 'batteries'],
...]
>>> pros_cons.words('IntegratedPros.txt')
['Easy', 'to', 'use', ',', 'economical', '!', ...]
Method __init__: Initialize the corpus reader (see the parameters below).
Method sents: Return all sentences in the corpus or in the specified files/categories.
Method words: Return all words and punctuation symbols in the corpus or in the specified files/categories.
Method _read_sent_block: Undocumented
Method _read_word_block: Undocumented
Method _resolve: Undocumented
Instance Variable _word_tokenizer: Undocumented

Inherited from CategorizedCorpusReader:

Method categories: Return a list of the categories that are defined for this corpus, or for the given file(s), if specified.
Method fileids: Return a list of file identifiers for the files that make up this corpus, or that make up the given category(s), if specified.
Method _add: Undocumented
Method _init: Undocumented
Instance Variable _c2f: Undocumented
Instance Variable _delimiter: Undocumented
Instance Variable _f2c: Undocumented
Instance Variable _file: Undocumented
Instance Variable _map: Undocumented
Instance Variable _pattern: Undocumented
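
The category interface inherited from CategorizedCorpusReader can be exercised directly on the loaded corpus. A brief sketch, assuming the standard pros_cons corpus has been downloaded (e.g. via nltk.download('pros_cons')) and reusing the 'Pros'/'Cons' category names and the 'IntegratedPros.txt' fileid from the examples above:

from nltk.corpus import pros_cons

print(pros_cons.categories())                      # list of category labels, e.g. ['Cons', 'Pros']
print(pros_cons.fileids(categories='Pros'))        # files mapped to the 'Pros' category
print(pros_cons.categories('IntegratedPros.txt'))  # categories of a single file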

Inherited from CorpusReader (via CategorizedCorpusReader):

Method __repr__: Undocumented
Method abspath: Return the absolute path for the given file.
Method abspaths: Return a list of the absolute paths for all fileids in this corpus, or for the given list of fileids if specified.
Method citation: Return the contents of the corpus citation.bib file, if it exists.
Method encoding: Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (bytes), return None.
Method ensure_loaded: Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus).
Method license: Return the contents of the corpus LICENSE file, if it exists.
Method open: Return an open stream that can be used to read the given file. If the file's encoding is not None, the stream automatically decodes the file's contents into unicode.
Method readme: Return the contents of the corpus README file, if it exists.
Class Variable root: Undocumented
Method _get_root: Undocumented
Instance Variable _encoding: The default unicode encoding for the fileids that make up this corpus. If encoding is None, the file contents are processed using byte strings.
Instance Variable _fileids: A list of the relative paths for the fileids that make up this corpus.
Instance Variable _root: The root directory for this corpus.
Instance Variable _tagset: Undocumented
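
The generic CorpusReader helpers provide lower-level access to the underlying files. A minimal sketch, assuming the corpus is installed locally and reusing the 'IntegratedPros.txt' fileid from the example above (actual paths depend on the local nltk_data installation):

from nltk.corpus import pros_cons

print(pros_cons.root)                            # root directory of the corpus
print(pros_cons.abspath('IntegratedPros.txt'))   # absolute path of one file
print(pros_cons.encoding('IntegratedPros.txt'))  # per-file encoding, or None

stream = pros_cons.open('IntegratedPros.txt')    # stream that decodes to unicode
print(stream.readline())                         # first raw line of the file
stream.close()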
def __init__(self, root, fileids, word_tokenizer=WordPunctTokenizer(), encoding='utf8', **kwargs): (source)
Parameters
    root: The root directory for the corpus.
    fileids: a list or regexp specifying the fileids in the corpus.
    word_tokenizer: a tokenizer for breaking sentences or paragraphs into words. Default: WordPunctTokenizer
    encoding: the encoding that should be used to read the corpus.
    **kwargs: additional parameters passed to CategorizedCorpusReader.
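
A minimal construction sketch. The root path is hypothetical and the fileid/category regexp is an assumption modelled on the 'IntegratedPros.txt' fileid shown above; in practice the corpus is normally accessed through the preloaded nltk.corpus.pros_cons reader instead.

from nltk.tokenize import WordPunctTokenizer
from nltk.corpus.reader.pros_cons import ProsConsCorpusReader

reader = ProsConsCorpusReader(
    root='/path/to/pros_cons',                  # hypothetical corpus directory
    fileids=r'Integrated(Cons|Pros)\.txt',      # assumed fileid regexp
    word_tokenizer=WordPunctTokenizer(),        # same as the signature default
    encoding='utf8',
    cat_pattern=r'Integrated(Cons|Pros)\.txt',  # category kwarg forwarded to CategorizedCorpusReader
)
print(reader.categories())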
def sents(self, fileids=None, categories=None): (source)

Return all sentences in the corpus or in the specified files/categories.

Parameters
    fileids: a list or regexp specifying the ids of the files whose sentences have to be returned.
    categories: a list specifying the categories whose sentences have to be returned.
Returns
    list(list(str)): the given file(s) as a list of sentences. Each sentence is tokenized using the specified word_tokenizer.
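
A short usage sketch for sents(), reusing the category and fileid names from the class docstring above (the corpus must be downloaded locally):

from nltk.corpus import pros_cons

all_sents = pros_cons.sents()                               # whole corpus
cons_sents = pros_cons.sents(categories='Cons')             # filter by category
pros_sents = pros_cons.sents(fileids='IntegratedPros.txt')  # filter by file
print(len(cons_sents), cons_sents[0])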
def words(self, fileids=None, categories=None): (source)

Return all words and punctuation symbols in the corpus or in the specified files/categories.

Parameters
    fileids: a list or regexp specifying the ids of the files whose words have to be returned.
    categories: a list specifying the categories whose words have to be returned.
Returns
    list(str): the given file(s) as a list of words and punctuation symbols.
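
A sketch that combines words() with a simple frequency count; the 'Cons' category name follows the docstring example above.

from nltk import FreqDist
from nltk.corpus import pros_cons

cons_words = pros_cons.words(categories='Cons')  # tokens from the 'Cons' category
fd = FreqDist(w.lower() for w in cons_words)
print(fd.most_common(5))                         # five most frequent tokens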
def _read_sent_block(self, stream): (source)

Undocumented

def _read_word_block(self, stream): (source)

Undocumented

def _resolve(self, fileids, categories): (source)

Undocumented

_word_tokenizer = (source)

Undocumented