class documentation

Reader for the Customer Review Data dataset by Hu and Liu (2004). Note: no sentence tokenization is applied at the moment, only word tokenization.

>>> from nltk.corpus import product_reviews_1
>>> camera_reviews = product_reviews_1.reviews('Canon_G3.txt')
>>> review = camera_reviews[0]
>>> review.sents()[0]
['i', 'recently', 'purchased', 'the', 'canon', 'powershot', 'g3', 'and', 'am',
'extremely', 'satisfied', 'with', 'the', 'purchase', '.']
>>> review.features()
[('canon powershot g3', '+3'), ('use', '+2'), ('picture', '+2'),
('picture quality', '+1'), ('picture quality', '+1'), ('camera', '+2'),
('use', '+2'), ('feature', '+1'), ('picture quality', '+3'), ('use', '+1'),
('option', '+1')]

We can also reach the same information directly from the stream:

>>> product_reviews_1.features('Canon_G3.txt')
[('canon powershot g3', '+3'), ('use', '+2'), ...]

We can compute stats for specific product features:

>>> n_reviews = len([(feat,score) for (feat,score) in product_reviews_1.features('Canon_G3.txt') if feat=='picture'])
>>> tot = sum([int(score) for (feat,score) in product_reviews_1.features('Canon_G3.txt') if feat=='picture'])
>>> mean = tot / n_reviews
>>> print(n_reviews, tot, mean)
15 24 1.6
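
The same per-feature computation generalizes with plain Python over the features() output. A minimal sketch (not a doctest, and not part of the corpus API) that accumulates the mean opinion strength for every feature in a file:

from collections import defaultdict
from nltk.corpus import product_reviews_1

totals = defaultdict(int)   # sum of opinion scores per feature
counts = defaultdict(int)   # number of mentions per feature
for feat, score in product_reviews_1.features('Canon_G3.txt'):
    totals[feat] += int(score)   # scores are strings such as '+2' or '-1'
    counts[feat] += 1

# Mean opinion strength per feature; mean_score['picture'] reproduces the
# 1.6 computed step by step above.
mean_score = {feat: totals[feat] / counts[feat] for feat in totals}
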
Method __init__: No summary
Method features: Return a list of features. Each feature is a tuple made of the specific item feature and the opinion strength about that feature.
Method raw: No summary
Method readme: Return the contents of the corpus README.txt file.
Method reviews: Return all the reviews as a list of Review objects. If fileids is specified, return all the reviews from each of the specified files.
Method sents: Return all sentences in the corpus or in the specified files.
Method words: Return all words and punctuation symbols in the corpus or in the specified files.
Method _read_features: Undocumented
Method _read_review_block: Undocumented
Method _read_sent_block: Undocumented
Method _read_word_block: Undocumented
Instance Variable _word_tokenizer: Undocumented

Inherited from CorpusReader:

Method __repr__: Undocumented
Method abspath: Return the absolute path for the given file.
Method abspaths: Return a list of the absolute paths for all fileids in this corpus; or for the given list of fileids, if specified.
Method citation: Return the contents of the corpus citation.bib file, if it exists.
Method encoding: Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (str), then return None.
Method ensure_loaded: Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method that can be used to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus).
Method fileids: Return a list of file identifiers for the fileids that make up this corpus.
Method license: Return the contents of the corpus LICENSE file, if it exists.
Method open: Return an open stream that can be used to read the given file. If the file's encoding is not None, then the stream will automatically decode the file's contents into unicode.
Class Variable root: Undocumented
Method _get_root: Undocumented
Instance Variable _encoding: The default unicode encoding for the fileids that make up this corpus. If encoding is None, then the file contents are processed using byte strings.
Instance Variable _fileids: A list of the relative paths for the fileids that make up this corpus.
Instance Variable _root: The root directory for this corpus.
Instance Variable _tagset: Undocumented
def __init__(self, root, fileids, word_tokenizer=WordPunctTokenizer(), encoding='utf8'): (source)
Parameters
root: The root directory for the corpus.
fileids: a list or regexp specifying the fileids in the corpus.
word_tokenizer: a tokenizer for breaking sentences or paragraphs into words. Default: WordPunctTokenizer
encoding: the encoding that should be used to read the corpus.
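
For review corpora in the same format stored outside the NLTK data directory, the reader can also be constructed directly. The sketch below assumes this class is nltk.corpus.reader.ReviewsCorpusReader (the reader behind product_reviews_1); the root path is hypothetical and the fileids pattern simply matches every .txt file:

from nltk.corpus.reader import ReviewsCorpusReader
from nltk.tokenize import TreebankWordTokenizer

reader = ReviewsCorpusReader(
    root='/path/to/my_reviews',              # hypothetical local copy of the data
    fileids=r'.*\.txt',
    word_tokenizer=TreebankWordTokenizer(),  # replaces the WordPunctTokenizer default
    encoding='utf8',
)
print(reader.fileids())
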
def features(self, fileids=None): (source)

Return a list of features. Each feature is a tuple made of the specific item feature and the opinion strength about that feature.

Parameters
fileids: a list or regexp specifying the ids of the files whose features have to be returned.
Returns
list(tuple): all features for the item(s) in the given file(s).
def raw(self, fileids=None): (source)
Parameters
fileids: a list or regexp specifying the fileids of the files that have to be returned as a raw string.
Returns
str: the given file(s) as a single string.
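
raw() is mostly useful for inspecting the underlying annotation format, in which opinionated lines carry feature[score] tags separated from the sentence text by '##' (the corpus README describes the full legend). A short sketch that peeks at one file:

from nltk.corpus import product_reviews_1

# Print the first few annotated lines that features() and reviews() parse.
text = product_reviews_1.raw('Canon_G3.txt')
print('\n'.join(text.splitlines()[:5]))
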
def readme(self): (source)

Return the contents of the corpus README.txt file.

def reviews(self, fileids=None): (source)

Return all the reviews as a list of Review objects. If fileids is specified, return all the reviews from each of the specified files.

Parameters
fileids: a list or regexp specifying the ids of the files whose reviews have to be returned.
Returns
the given file(s) as a list of reviews.
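
Each Review object supports the sents() and features() calls shown in the class example above; a small sketch that summarizes every review in one file:

from nltk.corpus import product_reviews_1

# Number of sentences and of opinionated features per review.
for review in product_reviews_1.reviews('Canon_G3.txt'):
    print(len(review.sents()), len(review.features()))
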
def sents(self, fileids=None): (source)

Return all sentences in the corpus or in the specified files.

Parameters
fileids: a list or regexp specifying the ids of the files whose sentences have to be returned.
Returns
list(list(str)): the given file(s) as a list of sentences, each encoded as a list of word strings.
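
Because sents() already yields tokenized sentences, corpus statistics are one comprehension away; for example, a sketch of the average sentence length in tokens for a single file:

from nltk.corpus import product_reviews_1

sents = product_reviews_1.sents('Canon_G3.txt')
# Average number of tokens (words and punctuation) per sentence.
avg_len = sum(len(sent) for sent in sents) / len(sents)
print(round(avg_len, 2))
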
def words(self, fileids=None): (source)

Return all words and punctuation symbols in the corpus or in the specified files.

Parameters
fileids: a list or regexp specifying the ids of the files whose words have to be returned.
Returns
list(str): the given file(s) as a list of words and punctuation symbols.
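
The flat token stream from words() combines directly with standard NLTK tools; for instance, a frequency distribution over one file (punctuation included, since words() returns punctuation symbols as well):

from nltk import FreqDist
from nltk.corpus import product_reviews_1

fd = FreqDist(product_reviews_1.words('Canon_G3.txt'))
print(fd.most_common(10))   # ten most frequent tokens in the file
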
def _read_features(self, stream): (source)

Undocumented

def _read_review_block(self, stream): (source)

Undocumented

def _read_sent_block(self, stream): (source)

Undocumented

def _read_word_block(self, stream): (source)

Undocumented

_word_tokenizer = (source)

Undocumented