class CategorizedSentencesCorpusReader(CategorizedCorpusReader, CorpusReader):
Constructor: CategorizedSentencesCorpusReader(root, fileids, word_tokenizer, sent_tokenizer, ...)
A reader for corpora in which each row represents a single instance, typically a sentence. Instances are divided into categories based on their file identifiers (see CategorizedCorpusReader). Since many corpora allow rows that contain more than one sentence, it is possible to specify a sentence tokenizer to retrieve all sentences instead of all rows (see the construction sketch below the examples).
Examples using the Subjectivity Dataset:
>>> from nltk.corpus import subjectivity
>>> subjectivity.sents()[23]
['television', 'made', 'him', 'famous', ',', 'but', 'his', 'biggest', 'hits', 'happened', 'off', 'screen', '.']
>>> subjectivity.categories()
['obj', 'subj']
>>> subjectivity.words(categories='subj')
['smart', 'and', 'alert', ',', 'thirteen', ...]
Examples using the Sentence Polarity Dataset:
>>> from nltk.corpus import sentence_polarity
>>> sentence_polarity.sents()
[['simplistic', ',', 'silly', 'and', 'tedious', '.'], ["it's", 'so', 'laddish', 'and', 'juvenile', ',', 'only', 'teenage', 'boys', 'could', 'possibly', 'find', 'it', 'funny', '.'], ...]
>>> sentence_polarity.categories()
['neg', 'pos']
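The reader can also be instantiated directly. The following is only an illustrative sketch: the corpus root, the fileid regexp, and the category pattern are hypothetical placeholders; PunktSentenceTokenizer is passed as sent_tokenizer so that rows containing several sentences are split up.
>>> from nltk.corpus.reader import CategorizedSentencesCorpusReader
>>> from nltk.tokenize import PunktSentenceTokenizer
>>> reader = CategorizedSentencesCorpusReader(
...     '/path/to/corpus',                        # hypothetical corpus root
...     r'.*\.txt',                               # fileids: every .txt file under the root
...     sent_tokenizer=PunktSentenceTokenizer(),  # split multi-sentence rows into sentences
...     cat_pattern=r'(\w+)\.txt',                # hypothetical: category taken from the file name
...     encoding='utf8')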
Method | __init__ | No summary |
Method | raw | No summary |
Method | readme | Return the contents of the corpus Readme.txt file. |
Method | sents | Return all sentences in the corpus or in the specified file(s). |
Method | words | Return all words and punctuation symbols in the corpus or in the specified file(s). |
Method | _read_sent_block | Undocumented |
Method | _read_word_block | Undocumented |
Method | _resolve | Undocumented |
Instance Variable | _sent_tokenizer | Undocumented |
Instance Variable | _word_tokenizer | Undocumented |
Inherited from CategorizedCorpusReader:
Method | categories | Return a list of the categories that are defined for this corpus, or for the file(s) if it is given. |
Method | fileids | Return a list of file identifiers for the files that make up this corpus, or that make up the given category(s) if specified. |
Method | _add | Undocumented |
Method | _init | Undocumented |
Instance Variable | _c2f | Undocumented |
Instance Variable | _delimiter | Undocumented |
Instance Variable | _f2c | Undocumented |
Instance Variable | _file | Undocumented |
Instance Variable | _map | Undocumented |
Instance Variable | _pattern | Undocumented |
Inherited from CorpusReader (via CategorizedCorpusReader):
Method | __repr__ | Undocumented |
Method | abspath | Return the absolute path for the given file. |
Method | abspaths | Return a list of the absolute paths for all fileids in this corpus; or for the given list of fileids, if specified. |
Method | citation | Return the contents of the corpus citation.bib file, if it exists. |
Method | encoding | Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (str), then return None. |
Method | ensure_loaded | Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method that can be used to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus). |
Method | license | Return the contents of the corpus LICENSE file, if it exists. |
Method | open | Return an open stream that can be used to read the given file. If the file's encoding is not None, then the stream will automatically decode the file's contents into unicode. |
Class Variable | root | Undocumented |
Method | _get_root | Undocumented |
Instance Variable | _encoding | The default unicode encoding for the fileids that make up this corpus. If encoding is None, then the file contents are processed using byte strings. |
Instance Variable | _fileids | A list of the relative paths for the fileids that make up this corpus. |
Instance Variable | _root | The root directory for this corpus. |
Instance Variable | _tagset | Undocumented |
Parameters | |
root | The root directory for the corpus. |
fileids | a list or regexp specifying the fileids in the corpus. |
word_tokenizer | a tokenizer for breaking sentences or paragraphs into words. Default: WhitespaceTokenizer |
sent_tokenizer | a tokenizer for breaking paragraphs into sentences. |
encoding | the encoding that should be used to read the corpus. |
**kwargs | additional parameters passed to CategorizedCorpusReader. |
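The **kwargs are the categorization options accepted by CategorizedCorpusReader (cat_pattern, cat_map, or cat_file). A minimal sketch with hypothetical file names, using an explicit cat_map:
>>> from nltk.corpus.reader import CategorizedSentencesCorpusReader
>>> reader = CategorizedSentencesCorpusReader(
...     '/path/to/corpus',                    # hypothetical corpus root
...     ['objective.txt', 'subjective.txt'],  # hypothetical fileids
...     cat_map={'objective.txt': ['obj'],    # explicit fileid -> categories mapping
...              'subjective.txt': ['subj']})
>>> reader.fileids(categories='obj')
['objective.txt']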
Parameters | |
fileids | a list or regexp specifying the fileids that have to be returned as a raw string. |
categories | a list specifying the categories whose files have to be returned as a raw string. |
Returns | |
str | the given file(s) as a single string. |
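For example, assuming the subjectivity corpus has been downloaded (nltk.download('subjectivity')), raw() returns the untokenized text of the selected files as one string:
>>> from nltk.corpus import subjectivity
>>> obj_text = subjectivity.raw(categories='obj')  # raw contents of all 'obj' files
>>> len(obj_text) > 0
True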
Return all sentences in the corpus or in the specified file(s).
Parameters | |
fileids | a list or regexp specifying the ids of the files whose sentences have to be returned. |
categories | a list specifying the categories whose sentences have to be returned. |
Returns | |
list(list(str)) | the given file(s) as a list of sentences. Each sentence is tokenized using the specified word_tokenizer. |
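For example, restricting by category returns only the rows from the matching files, each tokenized into a list of words (again assuming the subjectivity corpus is available locally):
>>> from nltk.corpus import subjectivity
>>> subj_sents = subjectivity.sents(categories='subj')  # rows from 'subj' files only
>>> isinstance(subj_sents[0], list)                     # each sentence is a list of tokens
True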
Return all words and punctuation symbols in the corpus or in the specified file(s).
Parameters | |
fileids | a list or regexp specifying the ids of the files whose words have to be returned. |
categories | a list specifying the categories whose words have to be returned. |
Returns | |
list(str) | the given file(s) as a list of words and punctuation symbols. |
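A typical use of the flat token stream is a frequency count; the sketch below assumes the sentence_polarity corpus is available locally:
>>> from nltk import FreqDist
>>> from nltk.corpus import sentence_polarity
>>> neg_words = sentence_polarity.words(categories='neg')  # tokens from all 'neg' files
>>> fdist = FreqDist(w.lower() for w in neg_words)         # lowercased token frequencies
>>> fdist.N() == len(neg_words)                            # every token was counted once
True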