class documentation

Reader for chunked (and optionally tagged) corpora. Paragraphs are split using a block reader. They are then tokenized into sentences using a sentence tokenizer. Finally, these sentences are parsed into chunk trees using a string-to-chunktree conversion function. Each of these steps can be performed using a default function or a custom function. By default, paragraphs are split on blank lines; sentences are listed one per line; and sentences are parsed into chunk trees using nltk.chunk.tagstr2tree.
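
As an illustrative sketch of the default string-to-chunktree step, the doctest below applies nltk.chunk.tagstr2tree to a single sentence in the bracketed word/tag notation; the sample sentence and the printed tree are illustrative assumptions rather than output from any particular corpus.

>>> from nltk.chunk import tagstr2tree
>>> # One sentence per line; square brackets mark chunks, tokens are word/tag.
>>> sent = "[ the/DT little/JJ cat/NN ] sat/VBD on/IN [ the/DT mat/NN ]"
>>> print(tagstr2tree(sent))
(S (NP the/DT little/JJ cat/NN) sat/VBD on/IN (NP the/DT mat/NN))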

Method __init__: No summary
Method chunked_paras: Return the given file(s) as a list of paragraphs of chunked sentences (shallow Trees).
Method chunked_sents: Return the given file(s) as a list of sentences, each encoded as a shallow chunk Tree.
Method chunked_words: Return the given file(s) as a list of tagged words and chunks.
Method paras: Return the given file(s) as a list of paragraphs of tokenized sentences.
Method raw: Return the given file(s) as a single string.
Method sents: Return the given file(s) as a list of tokenized sentences.
Method tagged_paras: Return the given file(s) as a list of paragraphs of tagged sentences.
Method tagged_sents: Return the given file(s) as a list of tagged sentences.
Method tagged_words: Return the given file(s) as a list of (word, tag) tuples.
Method words: Return the given file(s) as a list of words and punctuation symbols.
Method _read_block: Undocumented
Instance Variable _cv_args: Arguments for corpus views generated by this corpus: a tuple (str2chunktree, sent_tokenizer, para_block_reader)

Inherited from CorpusReader:

Method __repr__: Undocumented
Method abspath: Return the absolute path for the given file.
Method abspaths: Return a list of the absolute paths for all fileids in this corpus; or for the given list of fileids, if specified.
Method citation: Return the contents of the corpus citation.bib file, if it exists.
Method encoding: Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (str), then return None.
Method ensure_loaded: Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method that can be used to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus).
Method fileids: Return a list of file identifiers for the fileids that make up this corpus.
Method license: Return the contents of the corpus LICENSE file, if it exists.
Method open: Return an open stream that can be used to read the given file. If the file's encoding is not None, then the stream will automatically decode the file's contents into unicode.
Method readme: Return the contents of the corpus README file, if it exists.
Class Variable root: Undocumented
Method _get_root: Undocumented
Instance Variable _encoding: The default unicode encoding for the fileids that make up this corpus. If encoding is None, then the file contents are processed using byte strings.
Instance Variable _fileids: A list of the relative paths for the fileids that make up this corpus.
Instance Variable _root: The root directory for this corpus.
Instance Variable _tagset: Undocumented
def __init__(self, root, fileids, extension='', str2chunktree=tagstr2tree, sent_tokenizer=RegexpTokenizer('\n', gaps=True), para_block_reader=read_blankline_block, encoding='utf8', tagset=None): (source)
Parameters
root: The root directory for this corpus.
fileids: A list or regexp specifying the fileids in this corpus.
extension: Undocumented
str2chunktree: Function used to parse each sentence string into a chunk tree (default: nltk.chunk.tagstr2tree).
sent_tokenizer: Tokenizer used to split each paragraph block into sentences (default: one sentence per line).
para_block_reader: Block reader used to split the corpus into paragraph blocks (default: split on blank lines).
encoding: The default unicode encoding for the corpus files.
tagset: Undocumented
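
A minimal construction sketch, assuming this class is NLTK's ChunkedCorpusReader (the defaults in the signature above match that reader) and that the corpus directory and file pattern shown are hypothetical placeholders:

>>> from nltk.corpus.reader import ChunkedCorpusReader
>>> # Hypothetical root directory and fileid regexp; substitute real paths.
>>> reader = ChunkedCorpusReader('/path/to/chunked_corpus', r'.*\.chunk',
...                              encoding='utf8')
>>> files = reader.fileids()  # e.g. ['train.chunk', 'test.chunk'] in this hypothetical layout
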
def chunked_paras(self, fileids=None, tagset=None): (source)
Returns
list(list(Tree)): the given file(s) as a list of paragraphs, each encoded as a list of sentences, which are in turn encoded as a shallow Tree. The leaves of these trees are encoded as (word, tag) tuples (if the corpus has tags) or word strings (if the corpus has no tags).
def chunked_sents(self, fileids=None, tagset=None): (source)
Returns
list(Tree): the given file(s) as a list of sentences, each encoded as a shallow Tree. The leaves of these trees are encoded as (word, tag) tuples (if the corpus has tags) or word strings (if the corpus has no tags).
def chunked_words(self, fileids=None, tagset=None): (source)
Returns
list(tuple(str,str) and Tree): the given file(s) as a list of tagged words and chunks. Words are encoded as (word, tag) tuples (if the corpus has tags) or word strings (if the corpus has no tags). Chunks are encoded as depth-one trees over (word, tag) tuples or word strings.
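
A sketch of how these three chunked views nest, assuming a reader constructed as in the example above; the loop is only meant to show the shape of the return values:

>>> # chunked_paras(): paragraphs -> sentences -> shallow chunk Trees
>>> for para in reader.chunked_paras():
...     for sent_tree in para:       # each sentence is a Tree
...         for piece in sent_tree:  # chunk subtrees or (word, tag) leaves
...             pass
>>> sent_trees = reader.chunked_sents()  # one shallow Tree per sentence
>>> mixed = reader.chunked_words()       # (word, tag) tuples mixed with depth-one Trees
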
def paras(self, fileids=None): (source)
Returns
list(list(list(str))): the given file(s) as a list of paragraphs, each encoded as a list of sentences, which are in turn encoded as lists of word strings.
def raw(self, fileids=None): (source)
Returns
str: the given file(s) as a single string.
def sents(self, fileids=None): (source)
Returns
list(list(str)): the given file(s) as a list of sentences or utterances, each encoded as a list of word strings.
def tagged_paras(self, fileids=None, tagset=None): (source)
Returns
list(list(list(tuple(str,str)))): the given file(s) as a list of paragraphs, each encoded as a list of sentences, which are in turn encoded as lists of (word, tag) tuples.
def tagged_sents(self, fileids=None, tagset=None): (source)
Returns
list(list(tuple(str,str))): the given file(s) as a list of sentences, each encoded as a list of (word, tag) tuples.
def tagged_words(self, fileids=None, tagset=None): (source)
Returns
list(tuple(str,str)): the given file(s) as a list of tagged words and punctuation symbols, encoded as (word, tag) tuples.
def words(self, fileids=None): (source)
Returns
list(str): the given file(s) as a list of words and punctuation symbols.
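
For comparison, a sketch of the plain and tagged token-level views on the same hypothetical reader; the example items in the comments are illustrative only:

>>> toks = reader.words()                 # word strings, e.g. ['the', 'little', 'cat', ...]
>>> tagged = reader.tagged_words()        # (word, tag) tuples, e.g. [('the', 'DT'), ...]
>>> sentences = reader.sents()            # each sentence is a list of word strings
>>> tagged_sents = reader.tagged_sents()  # each sentence is a list of (word, tag) tuples
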
def _read_block(self, stream): (source)

Undocumented

_cv_args = (source)

Arguments for corpus views generated by this corpus: a tuple (str2chunktree, sent_tokenizer, para_block_reader)