class documentation

Reader for corpora that consist of Tweets represented as line-delimited JSON.

Individual Tweets can be tokenized using the default tokenizer, or by a custom tokenizer specified as a parameter to the constructor.
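
For example, a customised TweetTokenizer can be passed in when the reader is created (a sketch; the directory path here is a placeholder):

from nltk.corpus import TwitterCorpusReader
from nltk.tokenize import TweetTokenizer

# A TweetTokenizer configured to lowercase text, drop @-handles and
# shorten runs of repeated characters (e.g. "waaaaayyy" -> "waaayyy").
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
reader = TwitterCorpusReader('/path/to/twitter-files', r'.*\.json',
                             word_tokenizer=tokenizer)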

Construct a new Tweet corpus reader for a set of documents located at the given root directory.

If you made your own tweet collection in a directory called twitter-files, then you can initialise the reader as:

from nltk.corpus import TwitterCorpusReader
reader = TwitterCorpusReader(root='/path/to/twitter-files', fileids=r'.*\.json')

However, the recommended approach is to set the relevant directory as the value of the environment variable TWITTER, and then invoke the reader as follows:

import os

root = os.environ['TWITTER']
reader = TwitterCorpusReader(root, r'.*\.json')

If you want to work directly with the raw Tweets, the json library can be used:

import json
for tweet in reader.docs():
    print(json.dumps(tweet, indent=1, sort_keys=True))
Method __init__ Construct a new Tweet corpus reader for a set of documents located at the given root directory.
Method docs Returns the full Tweet objects, as specified by Twitter documentation on Tweets
Method raw Return the corpora in their raw form.
Method strings Returns only the text content of Tweets in the file(s)
Method tokenized Returns the tokenized text content of Tweets as a list of words, screen names, hashtags, URLs and punctuation symbols.
Method _read_tweets Assumes that each line in stream is a JSON-serialised object.
Instance Variable _word_tokenizer Undocumented

Inherited from CorpusReader:

Method __repr__ Undocumented
Method abspath Return the absolute path for the given file.
Method abspaths Return a list of the absolute paths for all fileids in this corpus; or for the given list of fileids, if specified.
Method citation Return the contents of the corpus citation.bib file, if it exists.
Method encoding Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (str), then return None.
Method ensure_loaded Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method that can be used to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus).
Method fileids Return a list of file identifiers for the fileids that make up this corpus.
Method license Return the contents of the corpus LICENSE file, if it exists.
Method open Return an open stream that can be used to read the given file. If the file's encoding is not None, then the stream will automatically decode the file's contents into unicode.
Method readme Return the contents of the corpus README file, if it exists.
Class Variable root Undocumented
Method _get_root Undocumented
Instance Variable _encoding The default unicode encoding for the fileids that make up this corpus. If encoding is None, then the file contents are processed using byte strings.
Instance Variable _fileids A list of the relative paths for the fileids that make up this corpus.
Instance Variable _root The root directory for this corpus.
Instance Variable _tagset Undocumented
def __init__(self, root, fileids=None, word_tokenizer=TweetTokenizer(), encoding='utf8'): (source)

Construct a new Tweet corpus reader for a set of documents located at the given root directory.

Parameters
root: The root directory for this corpus.
fileids: A list or regexp specifying the fileids in this corpus.
word_tokenizer: Tokenizer for breaking the text of Tweets into smaller units, including but not limited to words.
encoding: The default unicode encoding for the files that make up this corpus (defaults to 'utf8').
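
NLTK also ships a small ready-made instance of this class, twitter_samples, which can be used without constructing a reader by hand (the corpus data must first be fetched with nltk.download('twitter_samples')):

from nltk.corpus import twitter_samples

# twitter_samples is a pre-built TwitterCorpusReader over NLTK's sample data,
# with fileids such as 'positive_tweets.json' and 'negative_tweets.json'.
print(twitter_samples.fileids())
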
def docs(self, fileids=None): (source)

Returns the full Tweet objects, as specified by Twitter documentation on Tweets.

Returns
list(dict): the given file(s) as a list of dictionaries deserialised from JSON.
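
Since each Tweet is returned as a plain dictionary, its fields can be read directly; 'text' and 'user' below are standard keys in Twitter's Tweet JSON (assuming reader is initialised as above):

for tweet in reader.docs():
    print(tweet['user']['screen_name'], '->', tweet['text'])
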
def raw(self, fileids=None): (source)

Return the corpora in their raw form.
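
For instance, the start of the concatenated raw JSON can be inspected directly (assuming reader is initialised as above):

# The raw file contents, concatenated into a single string of JSON lines.
print(reader.raw()[:200])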

def strings(self, fileids=None): (source)

Returns only the text content of Tweets in the file(s)

Returns
list(str): the given file(s) as a list of Tweets.
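
A short usage sketch, assuming reader is initialised as above:

# Print the text content of the first five Tweets.
for text in reader.strings()[:5]:
    print(text)
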
def tokenized(self, fileids=None): (source)

Returns the tokenized text content of Tweets as a list of words, screen names, hashtags, URLs and punctuation symbols.

Returns
list(list(str)): the given file(s) as a list of the text content of Tweets, each tokenized into words, screen names, hashtags, URLs and punctuation symbols.
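
For example, with the default TweetTokenizer (assuming reader is initialised as above):

for tokens in reader.tokenized()[:3]:
    # e.g. ['Hopeless', 'for', 'tmr', ':(']
    print(tokens)
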
def _read_tweets(self, stream): (source)

Assumes that each line in stream is a JSON-serialised object.
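
A minimal sketch of such a line-delimited JSON reader, assuming the stream yields one serialised Tweet per line (the block size and buffering in the actual implementation may differ):

import json

def read_tweets(stream, block_size=10):
    """Parse up to block_size JSON-serialised Tweets, one per line."""
    tweets = []
    for _ in range(block_size):
        line = stream.readline()
        if not line:  # end of stream
            break
        tweets.append(json.loads(line))
    return tweets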

_word_tokenizer = (source)

Undocumented