module documentation

Utility methods for Sentiment Analysis.

Function demo_liu_hu_lexicon Basic example of sentiment classification using the Liu and Hu opinion lexicon. This function simply counts the number of positive, negative and neutral words in the sentence and classifies it depending on which polarity is more represented...
Function demo_movie_reviews Train a classifier on all instances of the Movie Reviews dataset. The corpus has been preprocessed using the default sentence tokenizer and WordPunctTokenizer. Features are composed of:
Function demo_sent_subjectivity Classify a single sentence as subjective or objective using a stored SentimentAnalyzer.
Function demo_subjectivity Train and test a classifier on instances of the Subjective Dataset by Pang and Lee. The dataset is made of 5000 subjective and 5000 objective sentences. All tokens (words and punctuation marks) are separated by whitespace, so we use the basic WhitespaceTokenizer to parse the data.
Function demo_tweets Train and test a Naive Bayes classifier on 10000 tweets, tokenized using TweetTokenizer. Features are composed of:
Function demo_vader_instance Output polarity scores for a text using the VADER approach.
Function demo_vader_tweets Classify 10000 positive and negative tweets using the VADER approach.
Function extract_bigram_feats Populate a dictionary of bigram features, reflecting the presence/absence in the document of each of the tokens in bigrams. This extractor function only considers contiguous bigrams obtained by nltk.bigrams...
Function extract_unigram_feats Populate a dictionary of unigram features, reflecting the presence/absence in the document of each of the tokens in unigrams.
Function json2csv_preprocess Convert a json file to a csv file, preprocessing each row to obtain a suitable dataset for tweets Sentiment Analysis.
Function mark_negation Append _NEG suffix to words that appear in the scope between a negation and a punctuation mark.
Function output_markdown Write the output of an analysis to a file.
Function parse_tweets_set Parse a csv file containing tweets and return the data as a list of (text, label) tuples.
Function split_train_test Randomly split n instances of the dataset into train and test sets.
Function timer A timer decorator to measure execution performance of methods.
Constant CLAUSE_PUNCT Undocumented
Constant CLAUSE_PUNCT_RE Undocumented
Constant HAPPY Undocumented
Constant NEGATION Undocumented
Constant NEGATION_RE Undocumented
Constant SAD Undocumented
Function _show_plot Undocumented
def demo_liu_hu_lexicon(sentence, plot=False): (source)

Basic example of sentiment classification using the Liu and Hu opinion lexicon. This function simply counts the number of positive, negative and neutral words in the sentence and classifies it depending on which polarity is more represented. Words that do not appear in the lexicon are considered neutral.

Parameters
sentence: a sentence whose polarity has to be classified.
plot: if True, plot a visual representation of the sentence polarity.
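
A minimal usage sketch (the sentence is an arbitrary example; the Liu and Hu lexicon must be available, e.g. via nltk.download('opinion_lexicon')):

>>> from nltk.sentiment.util import demo_liu_hu_lexicon
>>> demo_liu_hu_lexicon("It was a sad and boring movie")  # doctest: +SKIP
Negative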
def demo_movie_reviews(trainer, n_instances=None, output=None): (source)

Train a classifier on all instances of the Movie Reviews dataset. The corpus has been preprocessed using the default sentence tokenizer and WordPunctTokenizer. Features are composed of:

  • most frequent unigrams
Parameters
trainer: train method of a classifier.
n_instances: the total number of reviews to use for training and testing. Reviews will be equally split between positive and negative.
output: the output file where results have to be reported.
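
A usage sketch, assuming the movie_reviews corpus is installed; any classifier's train method can serve as trainer, e.g. NaiveBayesClassifier.train:

>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_movie_reviews
>>> demo_movie_reviews(NaiveBayesClassifier.train, n_instances=100)  # doctest: +SKIP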
def demo_sent_subjectivity(text): (source)

Classify a single sentence as subjective or objective using a stored SentimentAnalyzer.

Parameters
text: a sentence whose subjectivity has to be classified.
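
For example (a sketch; assumes a SentimentAnalyzer has previously been stored, e.g. by demo_subjectivity with save_analyzer=True):

>>> from nltk.sentiment.util import demo_sent_subjectivity
>>> demo_sent_subjectivity("The movie was too long and I fell asleep.")  # doctest: +SKIP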
def demo_subjectivity(trainer, save_analyzer=False, n_instances=None, output=None): (source)

Train and test a classifier on instances of the Subjective Dataset by Pang and Lee. The dataset is made of 5000 subjective and 5000 objective sentences. All tokens (words and punctuation marks) are separated by whitespace, so we use the basic WhitespaceTokenizer to parse the data.

Parameters
trainer: train method of a classifier.
save_analyzer: if True, store the SentimentAnalyzer in a pickle file.
n_instances: the total number of sentences to use for training and testing. Sentences will be equally split between subjective and objective.
output: the output file where results have to be reported.
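
A usage sketch, assuming the subjectivity corpus is installed:

>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_subjectivity
>>> demo_subjectivity(NaiveBayesClassifier.train, save_analyzer=True, n_instances=2000)  # doctest: +SKIP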
def demo_tweets(trainer, n_instances=None, output=None): (source)

Train and test a Naive Bayes classifier on 10000 tweets, tokenized using TweetTokenizer. Features are composed of:

  • 1000 most frequent unigrams
  • 100 top bigrams (using BigramAssocMeasures.pmi)
Parameters
trainer: train method of a classifier.
n_instances: the total number of tweets to use for training and testing. Tweets will be equally split between positive and negative.
output: the output file where results have to be reported.
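
A usage sketch, assuming the twitter_samples corpus is installed:

>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.sentiment.util import demo_tweets
>>> demo_tweets(NaiveBayesClassifier.train, n_instances=2000)  # doctest: +SKIP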
def demo_vader_instance(text): (source)

Output polarity scores for a text using the VADER approach.

Parameters
text: a text whose polarity has to be evaluated.
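
For example (a sketch; the printed scores are the neg/neu/pos/compound values produced by VADER's SentimentIntensityAnalyzer):

>>> from nltk.sentiment.util import demo_vader_instance
>>> demo_vader_instance("VADER is smart, handsome, and funny.")  # doctest: +SKIP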
def demo_vader_tweets(n_instances=None, output=None): (source)

Classify 10000 positive and negative tweets using the VADER approach.

Parameters
n_instances: the total number of tweets to classify.
output: the output file where results have to be reported.
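
For example (a sketch; assumes the twitter_samples corpus is installed):

>>> from nltk.sentiment.util import demo_vader_tweets
>>> demo_vader_tweets(n_instances=2000)  # doctest: +SKIP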
def extract_bigram_feats(document, bigrams): (source)

Populate a dictionary of bigram features, reflecting the presence/absence in the document of each of the tokens in bigrams. This extractor function only considers contiguous bigrams obtained by nltk.bigrams.

>>> bigrams = [('global', 'warming'), ('police', 'prevented'), ('love', 'you')]
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_bigram_feats(document, bigrams).items())
[('contains(global - warming)', True), ('contains(love - you)', False),
('contains(police - prevented)', False)]
Parameters
document: a list of words/tokens.
bigrams: a list of bigrams whose presence/absence has to be checked in document.
Returns
a dictionary of bigram features {bigram : boolean}.
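
In a typical pipeline this extractor is registered on a SentimentAnalyzer rather than called directly; a sketch, where top_bigrams stands in for a list of selected bigrams:

>>> from nltk.sentiment import SentimentAnalyzer
>>> from nltk.sentiment.util import extract_bigram_feats
>>> analyzer = SentimentAnalyzer()
>>> top_bigrams = [('global', 'warming'), ('climate', 'change')]  # hypothetical selection
>>> analyzer.add_feat_extractor(extract_bigram_feats, bigrams=top_bigrams)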
def extract_unigram_feats(document, unigrams, handle_negation=False): (source)

Populate a dictionary of unigram features, reflecting the presence/absence in the document of each of the tokens in unigrams.

>>> words = ['ice', 'police', 'riot']
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_unigram_feats(document, words).items())
[('contains(ice)', True), ('contains(police)', False), ('contains(riot)', False)]
Parameters
document: a list of words/tokens.
unigrams: a list of words/tokens whose presence/absence has to be checked in document.
handle_negation: if True, apply the mark_negation method to document before checking for unigram presence/absence.
Returns
a dictionary of unigram features {unigram : boolean}.
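
With handle_negation=True the document is first passed through mark_negation (documented below), so a negated occurrence no longer counts as the plain unigram; a sketch of the expected effect:

>>> from nltk.sentiment.util import extract_unigram_feats
>>> document = "I didn't like this movie .".split()
>>> sorted(extract_unigram_feats(document, ['like'], handle_negation=True).items())  # doctest: +SKIP
[('contains(like)', False)]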
def json2csv_preprocess(json_file, outfile, fields, encoding='utf8', errors='replace', gzip_compress=False, skip_retweets=True, skip_tongue_tweets=True, skip_ambiguous_tweets=True, strip_off_emoticons=True, remove_duplicates=True, limit=None): (source)

Convert a json file to a csv file, preprocessing each row to obtain a suitable dataset for tweets Sentiment Analysis.

Parameters
json_file: the original json file containing tweets.
outfile: the output csv filename.
fields: a list of fields that will be extracted from the json file and kept in the output csv file.
encoding: the encoding of the files.
errors: the error handling strategy for the output writer.
gzip_compress: if True, create a compressed GZIP file.
skip_retweets: if True, remove retweets.
skip_tongue_tweets: if True, remove tweets containing ":P" and ":-P" emoticons.
skip_ambiguous_tweets: if True, remove tweets containing both happy and sad emoticons.
strip_off_emoticons: if True, strip emoticons from all tweets.
remove_duplicates: if True, remove tweets appearing more than once.
limit: an integer capping the number of tweets to convert; the conversion stops once the limit is reached. Useful for creating subsets of the original tweet json data.
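
A usage sketch with hypothetical file names, passing an open file handle as in the related nltk.twitter json2csv utility:

>>> from nltk.sentiment.util import json2csv_preprocess
>>> with open('tweets.json') as fp:  # hypothetical input file
...     json2csv_preprocess(fp, 'tweets_clean.csv', ['id', 'text'], limit=1000)  # doctest: +SKIP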
def mark_negation(document, double_neg_flip=False, shallow=False): (source)

Append _NEG suffix to words that appear in the scope between a negation and a punctuation mark.

>>> sent = "I didn't like this movie . It was bad .".split()
>>> mark_negation(sent)
['I', "didn't", 'like_NEG', 'this_NEG', 'movie_NEG', '.', 'It', 'was', 'bad', '.']
Parameters
document: a list of words/tokens, or a tuple (words, label).
double_neg_flip: if True, double negation is considered affirmation (the negation scope is activated/deactivated every time a negation is found).
shallow: if True, the method will modify the original document in place.
Returns
if shallow == True, the method modifies the original document and returns it; if shallow == False, it returns a modified copy of the document, leaving the original unmodified.
def output_markdown(filename, **kwargs): (source)

Write the output of an analysis to a file.
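
Results are passed as keyword arguments; a sketch in which every name and value is illustrative:

>>> from nltk.sentiment.util import output_markdown
>>> output_markdown('results.md', Dataset='movie_reviews',
...                 Classifier='NaiveBayes', Accuracy=0.85)  # doctest: +SKIP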

def parse_tweets_set(filename, label, word_tokenizer=None, sent_tokenizer=None, skip_header=True): (source)

Parse a csv file containing tweets and return the data as a list of (text, label) tuples.

Parameters
filename: the input csv filename.
label: the label to be appended to each tweet contained in the csv file.
word_tokenizer: the tokenizer instance that will be used to tokenize each sentence into tokens (e.g. WordPunctTokenizer() or BlanklineTokenizer()). If no word_tokenizer is specified, tweets will not be tokenized.
sent_tokenizer: the tokenizer that will be used to split each tweet into sentences.
skip_header: if True, skip the first line of the csv file (which usually contains headers).
Returns
a list of (text, label) tuples.
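
A usage sketch with a hypothetical csv file:

>>> from nltk.tokenize import TweetTokenizer
>>> from nltk.sentiment.util import parse_tweets_set
>>> tweets = parse_tweets_set('positive_tweets.csv', label='pos',
...                           word_tokenizer=TweetTokenizer())  # doctest: +SKIP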
def split_train_test(all_instances, n=None): (source)

Randomly split n instances of the dataset into train and test sets.

Parameters
all_instances: a list of instances (e.g. documents) that will be split.
n: the number of instances to consider (in case we want to use only a subset).
Returns
two lists of instances. Train set is 8/10 of the total and test set is 2/10 of the total.
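
For example, with toy data (100 instances split 80/20):

>>> from nltk.sentiment.util import split_train_test
>>> docs = [(['good', 'movie'], 'pos'), (['bad', 'movie'], 'neg')] * 50
>>> train_docs, test_docs = split_train_test(docs)
>>> len(train_docs), len(test_docs)
(80, 20)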
def timer(method): (source)

A timer decorator to measure execution performance of methods.
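
A usage sketch; the decorated function runs unchanged while its execution time is measured:

>>> from nltk.sentiment.util import timer
>>> @timer
... def train_and_evaluate():
...     return sum(range(10**6))  # stand-in for real work
>>> result = train_and_evaluate()  # doctest: +SKIP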

CLAUSE_PUNCT: str = (source)

Regular expression pattern matching a token consisting of a single clause-level punctuation mark (. : ; ! ?).

Value
'^[.:;!?]$'
CLAUSE_PUNCT_RE = (source)

Compiled regular expression object for CLAUSE_PUNCT.

Value
re.compile(CLAUSE_PUNCT)

HAPPY = (source)

Set of happy emoticons.

Value
set([':-)',
     ':)',
     ';)',
     ':o)',
     ':]',
     ':3',
     ':c)',
...
NEGATION: str = (source)

Verbose regular expression pattern matching negation tokens.

Value
'''
    (?:
        ^(?:never|no|nothing|nowhere|noone|none|not|
            havent|hasnt|hadnt|cant|couldnt|shouldnt|
            wont|wouldnt|dont|doesnt|didnt|isnt|arent|aint
        )$
    )
...
NEGATION_RE = (source)

Compiled regular expression object for NEGATION (using re.VERBOSE).

Value
re.compile(NEGATION, re.VERBOSE)
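
A quick check of both patterns (a sketch; the NEGATION value is partially elided above, but none of the tokens below depend on the elided part):

>>> from nltk.sentiment.util import CLAUSE_PUNCT_RE, NEGATION_RE
>>> bool(CLAUSE_PUNCT_RE.match('.')), bool(CLAUSE_PUNCT_RE.match('movie'))
(True, False)
>>> bool(NEGATION_RE.search('not')), bool(NEGATION_RE.search('movie'))
(True, False)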

SAD = (source)

Set of sad emoticons.

Value
set([':L',
     ':-/',
     '>:/',
     ':S',
     '>:[',
     ':@',
     ':-(',
...
def _show_plot(x_values, y_values, x_labels=None, y_labels=None): (source)

Internal helper that plots y_values against x_values with the given labels; used by the demo functions when plot=True (requires matplotlib).