SemcorCorpusReader class documentation

Corpus reader for the SemCor Corpus. For access to the complete XML data structure, use the ``xml()`` method. For access to simple word lists and tagged word lists, use ``words()``, ``sents()``, ``tagged_words()``, and ``tagged_sents()``.
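
The reader is normally obtained through the prebuilt ``nltk.corpus.semcor`` loader rather than constructed by hand. A minimal usage sketch of the simple (untagged) views, assuming the ``semcor`` and ``wordnet`` data packages have been installed (e.g. via ``nltk.download()``):

    from nltk.corpus import semcor   # a lazily loaded SemcorCorpusReader instance

    print(semcor.fileids()[:3])      # relative paths of the underlying XML files
    print(semcor.words()[:10])       # flat list of word and punctuation strings
    print(semcor.sents()[0])         # first sentence as a list of word strings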

Method __init__ No summary
Method chunk_sents :return: the given file(s) as a list of sentences, each encoded as a list of chunks. :rtype: list(list(list(str)))
Method chunks :return: the given file(s) as a list of chunks, each of which is a list of words and punctuation symbols that form a unit. :rtype: list(list(str))
Method sents :return: the given file(s) as a list of sentences, each encoded as a list of word strings. :rtype: list(list(str))
Method tagged_chunks :return: the given file(s) as a list of tagged chunks, represented in tree form. :rtype: list(Tree)
Method tagged_sents :return: the given file(s) as a list of sentences. Each sentence is represented as a list of tagged chunks (in tree form). :rtype: list(list(Tree))
Method words :return: the given file(s) as a list of words and punctuation symbols. :rtype: list(str)
Static Method _word Undocumented
Method _items Undocumented
Method _words Helper used to implement the view methods -- returns a list of tokens, (segmented) words, chunks, or sentences. The tokens and chunks may optionally be tagged (with POS and sense information).
Instance Variable _lazy Undocumented
Instance Variable _wordnet Undocumented

Inherited from XMLCorpusReader:

Method raw Undocumented
Method xml Undocumented
Instance Variable _wrap_etree Undocumented

Inherited from CorpusReader (via XMLCorpusReader):

Method __repr__ Undocumented
Method abspath Return the absolute path for the given file.
Method abspaths Return a list of the absolute paths for all fileids in this corpus; or for the given list of fileids, if specified.
Method citation Return the contents of the corpus citation.bib file, if it exists.
Method encoding Return the unicode encoding for the given corpus file, if known. If the encoding is unknown, or if the given file should be processed using byte strings (str), then return None.
Method ensure_loaded Load this corpus (if it has not already been loaded). This is used by LazyCorpusLoader as a simple method that can be used to make sure a corpus is loaded -- e.g., in case a user wants to do help(some_corpus).
Method fileids Return a list of file identifiers for the fileids that make up this corpus.
Method license Return the contents of the corpus LICENSE file, if it exists.
Method open Return an open stream that can be used to read the given file. If the file's encoding is not None, then the stream will automatically decode the file's contents into unicode.
Method readme Return the contents of the corpus README file, if it exists.
Class Variable root Undocumented
Method _get_root Undocumented
Instance Variable _encoding The default unicode encoding for the fileids that make up this corpus. If encoding is None, then the file contents are processed using byte strings.
Instance Variable _fileids A list of the relative paths for the fileids that make up this corpus.
Instance Variable _root The root directory for this corpus.
Instance Variable _tagset Undocumented
def __init__(self, root, fileids, wordnet, lazy=True): (source)
Parameters
root: PathPointer or str
A path pointer identifying the root directory for this corpus. If a string is specified, then it will be converted to a PathPointer automatically.

fileids
A list of the files that make up this corpus. This list can either be specified explicitly, as a list of strings; or implicitly, as a regular expression over file paths. The absolute path for each file will be constructed by joining the reader's root to each file name.

wordnet
Undocumented

lazy
Undocumented

encoding
The default unicode encoding for the files that make up the corpus. The value of encoding can be any of the following:
  • A string: encoding is the encoding name for all files.
  • A dictionary: encoding[file_id] is the encoding name for the file whose identifier is file_id. If file_id is not in encoding, then the file contents will be processed using non-unicode byte strings.
  • A list: encoding should be a list of (regexp, encoding) tuples. The encoding for a file whose identifier is file_id will be the encoding value for the first tuple whose regexp matches the file_id. If no tuple's regexp matches the file_id, the file contents will be processed using non-unicode byte strings.
  • None: the file contents of all files will be processed using non-unicode byte strings.

tagset
The name of the tagset used by this corpus, to be used for normalizing or converting the POS tags returned by the tagged_...() methods.
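
For illustration, a hedged sketch of constructing the reader directly instead of going through ``nltk.corpus.semcor``; the fileid regular expression shown here is an assumption about the installed corpus layout, not necessarily the pattern used by NLTK's own loader:

    import nltk
    from nltk.corpus import wordnet
    from nltk.corpus.reader.semcor import SemcorCorpusReader

    # Locate the downloaded corpus root; raises LookupError if 'semcor' is missing.
    root = nltk.data.find('corpora/semcor')
    # Illustrative fileid pattern; adjust to the actual directory layout.
    reader = SemcorCorpusReader(root, r'brown.*/tagfiles/.*\.xml', wordnet, lazy=True)
    print(len(reader.fileids()))
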
def chunk_sents(self, fileids=None): (source)

:return: the given file(s) as a list of sentences, each encoded as a list of chunks.
:rtype: list(list(list(str)))

def chunks(self, fileids=None): (source)

:return: the given file(s) as a list of chunks, each of which is a list of words and punctuation symbols that form a unit.
:rtype: list(list(str))
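
An illustrative sketch of the shapes returned by ``chunks()`` and ``chunk_sents()``; the sample values in the comments indicate the structure, not exact output:

    from nltk.corpus import semcor

    chunks = semcor.chunks()             # list(list(str)), e.g. [['The'], ['Fulton', 'County', ...], ...]
    chunk_sents = semcor.chunk_sents()   # list(list(list(str))): one inner list of chunks per sentence
    first_sentence = chunk_sents[0]
    print(len(first_sentence), first_sentence[:3])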

def sents(self, fileids=None): (source)

:return: the given file(s) as a list of sentences, each encoded as a list of word strings.
:rtype: list(list(str))

def tagged_chunks(self, fileids=None, tag='pos' or 'sem' or 'both'): (source)

:return: the given file(s) as a list of tagged chunks, represented in tree form.
:rtype: list(Tree)
:param tag: `'pos'` (part of speech), `'sem'` (semantic), or `'both'` to indicate the kind of tags to include. Semantic tags consist of WordNet lemma IDs, plus an `'NE'` node if the chunk is a named entity without a specific entry in WordNet. (Named entities of type 'other' have no lemma. Other chunks not in WordNet have no semantic tag. Punctuation tokens have `None` for their part of speech tag.)

def tagged_sents(self, fileids=None, tag='pos' or 'sem' or 'both'): (source)

:return: the given file(s) as a list of sentences. Each sentence is represented as a list of tagged chunks (in tree form).
:rtype: list(list(Tree))
:param tag: `'pos'` (part of speech), `'sem'` (semantic), or `'both'` to indicate the kind of tags to include. Semantic tags consist of WordNet lemma IDs, plus an `'NE'` node if the chunk is a named entity without a specific entry in WordNet. (Named entities of type 'other' have no lemma. Other chunks not in WordNet have no semantic tag. Punctuation tokens have `None` for their part of speech tag.)
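
A short sketch of the tagged views; the comments describe the tree structure in hedged terms, since the exact trees depend on the installed data:

    from nltk.corpus import semcor

    # Each element is an nltk.Tree. Depending on `tag`, its labels include a POS
    # tag, a WordNet Lemma, or 'NE' for named entities without a WordNet entry.
    for chunk in semcor.tagged_chunks(tag='both')[:5]:
        print(chunk)

    # tagged_sents() wraps the same kind of trees in one list per sentence.
    first_sent = semcor.tagged_sents(tag='sem')[0]
    print(len(first_sent))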

def words(self, fileids=None): (source)

:return: the given file(s) as a list of words and punctuation symbols.
:rtype: list(str)

@staticmethod
def _word(xmlword, unit, pos_tag, sem_tag, wordnet): (source)

Undocumented

def _items(self, fileids, unit, bracket_sent, pos_tag, sem_tag): (source)

Undocumented

def _words(self, fileid, unit, bracket_sent, pos_tag, sem_tag): (source)

Helper used to implement the view methods -- returns a list of tokens, (segmented) words, chunks, or sentences. The tokens and chunks may optionally be tagged (with POS and sense information).

:param fileid: The name of the underlying file.
:param unit: One of `'token'`, `'word'`, or `'chunk'`.
:param bracket_sent: If true, include sentence bracketing.
:param pos_tag: Whether to include part-of-speech tags.
:param sem_tag: Whether to include semantic tags, namely WordNet lemma and OOV named entity status.
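
The public view methods differ only in which of these parameters they pass down (via ``_items``). A hedged sketch of the plausible mapping, given as illustrative data rather than a copy of the actual source:

    # Assumed mapping (illustration only, not the actual source) of the public
    # views onto this helper's (unit, bracket_sent) arguments; the tagged views
    # additionally set pos_tag=(tag != 'sem') and sem_tag=(tag != 'pos').
    VIEW_UNITS = {
        'words':         ('word',  False),
        'sents':         ('word',  True),
        'chunks':        ('chunk', False),
        'chunk_sents':   ('chunk', True),
        'tagged_chunks': ('chunk', False),
        'tagged_sents':  ('chunk', True),
    }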

_lazy = (source)

Undocumented

_wordnet = (source)

Undocumented