Class | CJKChars |
An object that enumerates the code points of the CJK characters as listed on http://en.wikipedia.org/wiki/Basic_Multilingual_Plane#Basic_Multilingual_Plane |
Function | align_tokens |
This function attempts to find the offsets of the tokens in s, as a sequence of (start, end) tuples, given the tokens and the source string. |
Function | is_cjk |
Python port of Moses' code to check for CJK characters. |
Function | regexp_span_tokenize |
Return the offsets of the tokens in s, as a sequence of (start, end) tuples, by splitting the string at each successive match of regexp. |
Function | spans_to_relative |
Return a sequence of relative spans, given a sequence of spans. |
Function | string_span_tokenize |
Return the offsets of the tokens in s, as a sequence of (start, end) tuples, by splitting the string at each occurrence of sep. |
Function | xml_escape |
This function transforms the input text into an "escaped" version suitable for well-formed XML formatting. |
Function | xml_unescape |
This function transforms an "escaped" string suitable for well-formed XML formatting back into a human-readable string. |
align_tokens() attempts to find the offsets of the tokens in s, as a sequence of (start, end) tuples, given the tokens and the source string.
>>> from nltk.tokenize import TreebankWordTokenizer
>>> from nltk.tokenize.util import align_tokens
>>> s = str("The plane, bound for St Petersburg, crashed in Egypt's "
...         "Sinai desert just 23 minutes after take-off from Sharm el-Sheikh "
...         "on Saturday.")
>>> tokens = TreebankWordTokenizer().tokenize(s)
>>> expected = [(0, 3), (4, 9), (9, 10), (11, 16), (17, 20), (21, 23),
...             (24, 34), (34, 35), (36, 43), (44, 46), (47, 52), (52, 54),
...             (55, 60), (61, 67), (68, 72), (73, 75), (76, 83), (84, 89),
...             (90, 98), (99, 103), (104, 109), (110, 119), (120, 122),
...             (123, 131), (131, 132)]
>>> output = list(align_tokens(tokens, s))
>>> len(tokens) == len(expected) == len(output)  # Check that length of tokens and tuples are the same.
True
>>> expected == list(align_tokens(tokens, s))  # Check that the output is as expected.
True
>>> tokens == [s[start:end] for start, end in output]  # Check that the slices of the string correspond to the tokens.
True
Parameters | |
tokens:list(str) | The list of strings that are the result of tokenization |
sentence:str | The original string |
Returns | |
list(tuple(int,int)) | The (start, end) offsets of the tokens |
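As a smaller illustrative sketch that is not part of the original docstring, align_tokens() also works with naively whitespace-split tokens, and the returned offsets slice back to the tokens:

>>> from nltk.tokenize.util import align_tokens
>>> sentence = "Good muffins cost $3.88 in New York."
>>> offsets = list(align_tokens(sentence.split(), sentence))
>>> offsets[:3]
[(0, 4), (5, 12), (13, 17)]
>>> [sentence[start:end] for start, end in offsets] == sentence.split()  # Slices match the tokens.
True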
is_cjk() is a Python port of Moses' code to check for CJK characters.
>>> from nltk.tokenize.util import CJKChars, is_cjk
>>> CJKChars().ranges
[(4352, 4607), (11904, 42191), (43072, 43135), (44032, 55215), (63744, 64255), (65072, 65103), (65381, 65500), (131072, 196607)]
>>> is_cjk(u'㏾')
True
>>> is_cjk(u'﹟')
False
Parameters | |
character:char | The character that needs to be checked. |
Returns | |
bool | Whether the character is a CJK character |
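An illustrative sketch, not from the original docstring: is_cjk() can be used to filter the CJK characters out of a mixed-script string (the sample text here is arbitrary):

>>> from nltk.tokenize.util import is_cjk
>>> ''.join(ch for ch in 'NLTK 自然言語処理 toolkit' if is_cjk(ch))  # Keep only CJK characters.
'自然言語処理'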
regexp_span_tokenize() returns the offsets of the tokens in s, as a sequence of (start, end) tuples, by splitting the string at each successive match of regexp.
>>> from nltk.tokenize.util import regexp_span_tokenize
>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
... two of them.\n\nThanks.'''
>>> list(regexp_span_tokenize(s, r'\s'))
[(0, 4), (5, 12), (13, 17), (18, 23), (24, 26), (27, 30), (31, 36), (38, 44), (45, 48), (49, 51), (52, 55), (56, 58), (59, 64), (66, 73)]
Parameters | |
s:str | the string to be tokenized |
regexp:str | regular expression that matches token separators (must not be empty) |
Returns | |
iter(tuple(int, int)) | The (start, end) offsets of the tokens |
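An illustrative sketch, not from the original docstring: slicing the string with the returned spans recovers the tokens themselves:

>>> from nltk.tokenize.util import regexp_span_tokenize
>>> s = "Good muffins cost $3.88"
>>> [s[start:end] for start, end in regexp_span_tokenize(s, r'\s')]
['Good', 'muffins', 'cost', '$3.88']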
spans_to_relative() returns a sequence of relative spans, given a sequence of spans.
>>> from nltk.tokenize import WhitespaceTokenizer
>>> from nltk.tokenize.util import spans_to_relative
>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
... two of them.\n\nThanks.'''
>>> list(spans_to_relative(WhitespaceTokenizer().span_tokenize(s)))
[(0, 4), (1, 7), (1, 4), (1, 5), (1, 2), (1, 3), (1, 5), (2, 6), (1, 3), (1, 2), (1, 3), (1, 2), (1, 5), (2, 7)]
Parameters | |
spans:iter(tuple(int, int)) | a sequence of (start, end) offsets of the tokens |
Returns | |
iter(tuple(int, int)) | The relative spans of the tokens |
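An illustrative sketch, not from the original docstring: each relative span is (distance from the previous span's end, length of the span):

>>> from nltk.tokenize.util import spans_to_relative
>>> list(spans_to_relative([(0, 4), (5, 12), (13, 17)]))  # One-character gaps between the last two tokens.
[(0, 4), (1, 7), (1, 4)]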
string_span_tokenize() returns the offsets of the tokens in s, as a sequence of (start, end) tuples, by splitting the string at each occurrence of sep.
>>> from nltk.tokenize.util import string_span_tokenize
>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
... two of them.\n\nThanks.'''
>>> list(string_span_tokenize(s, " "))
[(0, 4), (5, 12), (13, 17), (18, 26), (27, 30), (31, 36), (37, 37), (38, 44), (45, 48), (49, 55), (56, 58), (59, 73)]
Parameters | |
s:str | the string to be tokenized |
sep:str | the token separator |
Returns | |
iter(tuple(int, int)) | The (start, end) offsets of the tokens |
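An illustrative sketch, not from the original docstring: as with regexp_span_tokenize(), the offsets can be sliced back out of the string:

>>> from nltk.tokenize.util import string_span_tokenize
>>> s = "Good muffins cost $3.88"
>>> [s[start:end] for start, end in string_span_tokenize(s, " ")]
['Good', 'muffins', 'cost', '$3.88']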
xml_escape() transforms the input text into an "escaped" version suitable for well-formed XML formatting.
Note that the default xml.sax.saxutils.escape() function doesn't escape some characters that Moses does, so we have to add them manually to the entities dictionary.
>>> from xml.sax.saxutils import escape
>>> from nltk.tokenize.util import xml_escape
>>> input_str = ''')| & < > ' " ] ['''
>>> expected_output = ''')| &amp; &lt; &gt; ' " ] ['''
>>> escape(input_str) == expected_output
True
>>> xml_escape(input_str)
')&#124; &amp; &lt; &gt; &apos; &quot; &#93; &#91;'
Parameters | |
text:str | The text that needs to be escaped. |
Returns | |
str | The escaped text |
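An illustrative sketch, not from the original docstring (the entity codes follow the Moses conventions noted above): xml_escape() replaces characters such as '<' and '|' with XML entities, where plain escape() would leave the pipe alone:

>>> from nltk.tokenize.util import xml_escape
>>> xml_escape('a < b | c')
'a &lt; b &#124; c'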
This function transforms the "escaped" version suitable for well-formed XML formatting into humanly-readable string.
Note that the default xml.sax.saxutils.unescape() function don't unescape some characters that Moses does so we have to manually add them to the entities dictionary.
>>> from xml.sax.saxutils import unescape
>>> from nltk.tokenize.util import xml_unescape
>>> s = ')&#124; &amp; &lt; &gt; &apos; &quot; &#93; &#91;'
>>> expected = ''')| & < > ' " ] ['''
>>> xml_unescape(s) == expected
True
Parameters | |
text:str | The text that needs to be unescaped. |
Returns | |
str | The unescaped text |
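An illustrative sketch, not from the original docstring: xml_unescape() reverses xml_escape(), so escaping and then unescaping round-trips the original string:

>>> from nltk.tokenize.util import xml_escape, xml_unescape
>>> text = '''he said "use [x | y]"'''
>>> xml_unescape(xml_escape(text)) == text
True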