I'm trying to work with Chinese text and big data in Python.
Part of the work is cleaning the text of some unneeded data. For this I am using regexes. However, I've run into problems both with Python's regex handling and with PyCharm:
1) The data is stored in PostgreSQL and displays fine in the table columns; however, after selecting it and pulling it into a variable, it is displayed as squares in the debugger.
When the value is printed to the console it looks like:
Mentholatum 曼秀雷敦 男士 深层活炭洁面乳100g(新包装)
So I presume the problem is not with the application's encoding but with the debugger's, yet I have not found any solution for this behaviour.
2) An example of a regex I need is one that removes the values between Chinese brackets, including the brackets themselves. The code I used is:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
from pprint import pprint
import sys, locale, os

columnString = row[columnName]
startFrom = valuestoremove["startsTo"]
endWith = valuestoremove["endsAt"]
isInclude = valuestoremove["include"]
escapeCharsRegex = re.compile('([\.\^\$\*\+\?\(\)\[\{\|])')
nonASCIIregex = re.compile('([^\x00-\x7F])')
if escapeCharsRegex.match(startFrom):
    startFrom = re.escape(startFrom)
if escapeCharsRegex.match(endWith):
    endWith = re.escape(endWith)
if isInclude:
    regex = startFrom + '(.*)' + endWith
else:
    regex = '(?<=' + startFrom + ').*?(?=' + endWith + ')'
if nonASCIIregex.match(regex):
    p = re.compile(ur'' + regex)
else:
    p = re.compile(regex)
row[columnName] = p.sub("", columnString).strip()
But the regex has no effect on the given string.
I've made a test with the following code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re

reg = re.compile(ur'（(.*)）')
string = u"巴黎欧莱雅 男士 劲能冰爽洁面啫哩（原男士劲能净爽洁面啫哩）100ml"
print string
string = reg.sub("", string)
print string
And it works fine for me.
The only difference between the two code examples is that in the first one the regex values come from a text file containing JSON, encoded as UTF-8:
{
    "between": {
        "startsTo": "（",
        "endsAt": "）",
        "include": true,
        "sequenceID": "1"
    }
}, {
    "between": {
        "startsTo": "（",
        "endsAt": "）",
        "include": true,
        "sequenceID": "2"
    }
}, {
    "between": {
        "startsTo": "（",
        "endsAt": "）",
        "include": true,
        "sequenceID": "2"
    }
}, {
    "between": {
        "startsTo": "（",
        "endsAt": "）",
        "include": true,
        "sequenceID": "2"
    }
}
The Chinese brackets from the file are also displayed as squares.
I can't find an explanation or a solution for this behaviour, so I'm asking for the community's help.
Thanks.
The problem is that the text you're reading in isn't being decoded as Unicode correctly (this is one of the big gotchas that prompted sweeping changes for Python 3). Instead of:
data_file = myfile.read()
You need to tell it to decode the file:
data_file = myfile.read().decode("utf8")
Then continue with json.loads, etc, and it should work out fine. Alternatively,
data = json.load(myfile, "utf8")
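A minimal sketch of why the decode step matters (the JSON fragment here is an assumption, modeled on the bracket definitions in the question):

```python
# -*- coding: utf-8 -*-
import json

# Bytes read from a UTF-8 file must be decoded before (or while) parsing,
# so the Chinese brackets survive as real characters instead of mojibake.
raw = u'{"startsTo": "\uff08", "endsAt": "\uff09"}'.encode("utf8")
data = json.loads(raw.decode("utf8"))
print(data["startsTo"])  # the full-width opening bracket
```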
After many searches and consultations, here is a solution for Chinese text (it also works for mixed- and single-language text):
import re
import codecs

def betweencase(valuestoremove, row, columnName):
    columnString = row[columnName]
    startFrom = valuestoremove["startsTo"]
    endWith = valuestoremove["endsAt"]
    isInclude = valuestoremove["include"]
    escapeCharsRegex = re.compile('([\.\^\$\*\+\?\(\)\[\{\|])')
    if escapeCharsRegex.match(startFrom):
        startFrom = re.escape(startFrom)
    if escapeCharsRegex.match(endWith):
        endWith = re.escape(endWith)
    if isInclude:
        regex = ur'' + startFrom + '(.*)' + endWith
    else:
        regex = ur'(?<=' + startFrom + ').*?(?=' + endWith + ')'
    p = re.compile(codecs.encode(unicode(regex), "utf-8"))
    delimiter = ' '
    if localization == 'CN':  # 'localization' is defined elsewhere in the application
        delimiter = ''
    row[columnName] = p.sub(delimiter, columnString).strip()
As you can see, we encode each regex to UTF-8, so it matches the UTF-8 value coming from the PostgreSQL database.
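For reference, once pattern and data are both Unicode, stripping a pair of full-width brackets reduces to a short sketch like this (the sample string is an assumption, shortened from the product name above):

```python
# -*- coding: utf-8 -*-
import re

# Full-width (Chinese) brackets are U+FF08 and U+FF09.
pattern = re.compile(u'\uff08.*?\uff09')
text = u'Mentholatum \u66fc\u79c0\u96f7\u6566 100g\uff08\u65b0\u5305\u88c5\uff09'
print(pattern.sub(u'', text).strip())
```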
Related
I'm trying to extract text from specific parts of an MS Word document (link) - sample below. Essentially I need to write all text between the tags -- ASN1START and -- ASN1STOP to a file, excluding the tags themselves.
sample text
-- ASN1START

CounterCheck ::= SEQUENCE {
    rrc-TransactionIdentifier    RRC-TransactionIdentifier,
    criticalExtensions           CHOICE {
        c1                           CHOICE {
            counterCheck-r8              CounterCheck-r8-IEs,
            spare3 NULL, spare2 NULL, spare1 NULL
        },
        criticalExtensionsFuture     SEQUENCE {}
    }
}

CounterCheck-r8-IEs ::= SEQUENCE {
    drb-CountMSB-InfoList        DRB-CountMSB-InfoList,
    nonCriticalExtension         CounterCheck-v8a0-IEs    OPTIONAL
}

CounterCheck-v8a0-IEs ::= SEQUENCE {
    lateNonCriticalExtension     OCTET STRING             OPTIONAL,
    nonCriticalExtension         CounterCheck-v1530-IEs   OPTIONAL
}

CounterCheck-v1530-IEs ::= SEQUENCE {
    drb-CountMSB-InfoListExt-r15 DRB-CountMSB-InfoListExt-r15 OPTIONAL, -- Need ON
    nonCriticalExtension         SEQUENCE {}              OPTIONAL
}

DRB-CountMSB-InfoList ::= SEQUENCE (SIZE (1..maxDRB)) OF DRB-CountMSB-Info

DRB-CountMSB-InfoListExt-r15 ::= SEQUENCE (SIZE (1..maxDRBExt-r15)) OF DRB-CountMSB-Info

DRB-CountMSB-Info ::= SEQUENCE {
    drb-Identity                 DRB-Identity,
    countMSB-Uplink              INTEGER(0..33554431),
    countMSB-Downlink            INTEGER(0..33554431)
}

-- ASN1STOP
I have tried using docx.
from docx import *
import re
import json

fileName = './data/36331-f80.docx'
document = Document(fileName)
startText = re.compile(r'-- ASN1START')
for para in document.paragraphs:
    # look at each paragraph
    text = para.text
    print(text)
    # if startText.match(para.text):
    #     print(text)
It seems every line here with the tags mentioned above is a paragraph. I need help with extracting just the text within the tags.
You may try first reading all the document's paragraph text into a single string, and then using re.findall to find all matching text between the target tags:

text = ""
for para in document.paragraphs:
    text += para.text + "\n"

matches = re.findall(r'-- ASN1START\s*(.*?)\s*-- ASN1STOP', text, flags=re.DOTALL)

Note that we use DOTALL mode so that .*? can match content between the tags even when it spans newlines.
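The same idea as a self-contained sketch, with an inline sample standing in for the Word document:

```python
import re

# Two tagged blocks separated by unrelated prose; DOTALL lets .*? cross newlines.
text = """
-- ASN1START
CounterCheck ::= SEQUENCE {}
-- ASN1STOP
some prose in between
-- ASN1START
DRB-CountMSB-Info ::= SEQUENCE {}
-- ASN1STOP
"""
matches = re.findall(r'-- ASN1START\s*(.*?)\s*-- ASN1STOP', text, flags=re.DOTALL)
for block in matches:
    print(block)
```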
I have the string below, and I am able to grab the 'text' I want (the text is wrapped between patterns). The code is given below:
val1 = '[{"vmdId":"Text1","vmdVersion":"text2","vmId":"text3"},{"vmId":"text4","vmVersion":"text5","vmId":"text6"}]'
temp = val1.split(',')
list_len = len(temp)
for i in range(0, list_len):
    var = temp[i]
    found = re.findall(r':"([^"]*)"', var)
    print ''.join(found)
I would like to replace the values (Text1, text2, text3, etc.) with new values provided by the user or read from another XML file. (Text1, text2, ... are totally random alphanumeric data.) Some details below:

Text1 = somename
text2 = alphanumeric value
text3 = somename
Text4 = somename
text5 = alphanumeric value
text6 = somename

The expected answer string is:
[{"vmdId":"newText1","vmdVersion":"newtext2","vmId":"newtext3"},{"vmId":"newtext4","vmVersion":"newtext5","vmId":"newtext6"}]

I decided to go with replace() but later realized the data is not constant, hence I'm seeking help again. Any help would be appreciated. Also, please let me know if I can improve the way I am grabbing the values right now, as I am new to regex.
You can do this by using backreferences in combination with re.sub:

import re

val1 = '[{"vmdId":"Text1","vmdVersion":"text2","vmId":"text3"},{"vmId":"text4","vmVersion":"text5","vmId":"text6"}]'
ansstring = re.sub(r'(?<=:")([^"]*)', r'new\g<1>', val1)
print ansstring

\g<1> is the text matched by the first capture group, i.e. the first ().
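A tiny standalone illustration of the backreference (the sample string is arbitrary):

```python
import re

# \g<1> reinserts whatever the first capture group matched.
print(re.sub(r'(\d+)', r'<\g<1>>', 'order 42'))  # order <42>
```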
EDIT
Maybe a better approach would be to decode the string, change the data, and encode it again. This should give you easier access to the values.
import sys

# python2 version
if sys.version_info[0] < 3:
    import HTMLParser
    html = HTMLParser.HTMLParser()
    html_escape_table = {
        "&": "&amp;",
        '"': "&quot;",
        "'": "&#39;",
        ">": "&gt;",
        "<": "&lt;",
    }

    def html_escape(text):
        """Produce entities within text."""
        return "".join(html_escape_table.get(c, c) for c in text)

    html.escape = html_escape
else:
    import html

import json

val1 = '[{"vmdId":"Text1","vmdVersion":"text2","vmId":"text3"},{"vmId":"text4","vmVersion":"text5","vmId":"text6"}]'
print(val1)

unescaped = html.unescape(val1)
json_data = json.loads(unescaped)
for d in json_data:
    d['vmId'] = 'new value'
new_unescaped = json.dumps(json_data)
new_val = html.escape(new_unescaped)
print(new_val)
I hope this helps.
I have a JavaScript file with an array of data.

info = [ {
    Date = "YR-MM-DDT00:00:10"
}, ....

What I'm trying to do is remove the T and everything after it in the Date field.
Here's what I've tried:

import re

with open("info.js", "r") as myFile:
    data = myFile.read()

data = re.sub('\0-9T', '', data)
Desired output for each Date field in the array:
Date = "YR-MM-DD"
You should match the T and the characters that come after it. This works for a single timestamp:
import re
print(re.sub('T.*$', '', 'YR-MM-DDT00:00:10'))
Or if you have text containing a bunch of timestamps, match the closing double quote as well, and replace with a double quote:
import re
text = """
info = [ {
Date = "YR-MM-DDT00:00:10",
Date = "YR-MM-DDT01:02:03",
Date = "YR-MM-DDT11:22:33"
}
"""
new_text = re.sub('T.*"', '"', text)
print(new_text)
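If the surrounding text might contain other T characters, a stricter pattern that spells out the time-of-day is safer (this assumes the times are always HH:MM:SS):

```python
import re

text = 'Date = "YR-MM-DDT00:00:10"'
# Match the T only when a HH:MM:SS time follows, so stray Ts survive.
print(re.sub(r'T\d{2}:\d{2}:\d{2}', '', text))  # Date = "YR-MM-DD"
```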
I'm trying to find and replace a multiline pattern in a JSON feed. Basically, I'm looking for a line ending in "}," followed by a line containing just "}".
Example input would be:
s = """
"essSurfaceFreezePoint": "1001",
"essSurfaceBlackIceSignal": "4"
},
}
}
"""
and I want to find:
"""
},
}
"""
and replace it with:
"""
}
}
"""
I've tried the following:
pattern = re.compile(r'^ *},\n^ *}$',flags=re.MULTILINE)
pattern.findall(feedStr)
This works in the Python shell. However, when I do the same search in my Python program, it finds nothing. I'm using the full JSON feed in the program. Perhaps the feed has different line terminations when read this way.
The feed is at:
http://hardhat.ahmct.ucdavis.edu/tmp/test.json
If anyone can point out why this works in the shell but not in the program, I'd greatly appreciate it. Is there a better way to formulate the regular expression so that it works in both? Thanks for any advice.
=====================================================================================
To make this clearer, I'm adding my test code here. Note that I'm now including the regular expression provided by Ahosan Karim Asik. This regex works in the live demo link below, but doesn't quite work for me in a Python shell. It also doesn't work against the real feed.
Thanks again for any assistance.
import urllib2
import json
import re

if __name__ == "__main__":
    # wget version of real feed:
    # url = "http://hardhat.ahmct.ucdavis.edu/tmp/test.json"
    # Short text, for milepost and brace substitution test:
    url = "http://hardhat.ahmct.ucdavis.edu/tmp/test.txt"
    request = urllib2.urlopen(url)
    rawResponse = request.read()
    # print("Raw response:")
    # print(rawResponse)

    # Find extra comma after end of records:
    p1 = re.compile('(}),(\r?\n *})')
    l1 = p1.findall(rawResponse)
    print("Brace matches found:")
    print(l1)

    # Check milepost:
    # p2 = re.compile('( *\"milepost\": *\")')
    p2 = re.compile('( *\"milepost\": *\")([0-9]*\.?[0-9]*)\r?\n')
    l2 = p2.findall(rawResponse)
    print("Milepost matches found:")
    print(l2)

    # Do brace substitutions (raw strings, so \1 is a backreference,
    # not an octal escape):
    subst = r"\1\2"
    response = re.sub(p1, subst, rawResponse)

    # Do milepost substitutions:
    subst = r"\1\2\""
    response = re.sub(p2, subst, response)
    print(response)
try this:
import re
p = re.compile(ur'(^ *}),(\n^ *})$', re.MULTILINE)
test_str = u" \"essSurfaceFreezePoint\": \"1001\",\n \"essSurfaceBlackIceSignal\": \"4\"\n },\n }\n }"
subst = u"\\1\\2"
result = re.sub(p, subst, test_str)
live demo
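One detail worth checking when porting from regex101: its code export uses $1/$2 replacement syntax, while Python's re module expects \1/\2 (or \g<1>). A quick check on a minimal version of the sample:

```python
import re

p = re.compile(r'(^ *}),(\n^ *})$', re.MULTILINE)
s = ' },\n }\n'
# Raw-string backreferences drop the stray comma but keep both braces.
print(p.sub(r'\1\2', s))
```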
I have modified a Python babelizer to help me translate English to Chinese.
## {{{ http://code.activestate.com/recipes/64937/ (r4)
# babelizer.py - API for simple access to babelfish.altavista.com.
# Requires python 2.0 or better.
#
# See it in use at http://babel.MrFeinberg.com/

"""API for simple access to babelfish.altavista.com.

Summary:

    import babelizer

    print ' '.join(babelizer.available_languages)

    print babelizer.translate( 'How much is that doggie in the window?',
                               'English', 'French' )

    def babel_callback(phrase):
        print phrase
        sys.stdout.flush()

    babelizer.babelize( 'I love a reigning knight.',
                        'English', 'German',
                        callback = babel_callback )

available_languages
    A list of languages available for use with babelfish.

translate( phrase, from_lang, to_lang )
    Uses babelfish to translate phrase from from_lang to to_lang.

babelize(phrase, from_lang, through_lang, limit = 12, callback = None)
    Uses babelfish to translate back and forth between from_lang and
    through_lang until either no more changes occur in translation or
    limit iterations have been reached, whichever comes first. Takes
    an optional callback function which should receive a single
    parameter, being the next translation. Without the callback
    returns a list of successive translations.

It's only guaranteed to work if 'english' is one of the two languages
given to either of the translation methods.

Both translation methods throw exceptions which are all subclasses of
BabelizerError. They include

    LanguageNotAvailableError
        Thrown on an attempt to use an unknown language.

    BabelfishChangedError
        Thrown when babelfish.altavista.com changes some detail of their
        layout, and babelizer can no longer parse the results or submit
        the correct form (a not infrequent occurrence).

    BabelizerIOError
        Thrown for various networking and IO errors.

Version: $Id: babelizer.py,v 1.4 2001/06/04 21:25:09 Administrator Exp $
Author: Jonathan Feinberg <jdf#pobox.com>
"""
import re, string, urllib
import httplib
import sys

"""
Various patterns I have encountered in looking for the babelfish result.
We try each of them in turn, based on the relative number of times I've
seen each of these patterns. $1.00 to anyone who can provide a heuristic
for knowing which one to use. This includes AltaVista employees.
"""
__where = [ re.compile(r'name=\"q\">([^<]*)'),
            re.compile(r'td bgcolor=white>([^<]*)'),
            re.compile(r'<\/strong><br>([^<]*)')
          ]

# The current result markup looks like:
# <div id="result"><div style="padding:0.6em;">??</div></div>
__where = [ re.compile(r'<div id=\"result\"><div style=\"padding\:0\.6em\;\">(.*)<\/div><\/div>', re.U) ]

__languages = { 'english'   : 'en',
                'french'    : 'fr',
                'spanish'   : 'es',
                'german'    : 'de',
                'italian'   : 'it',
                'portugese' : 'pt',
                'chinese'   : 'zh'
              }

"""
All of the available language names.
"""
available_languages = [ x.title() for x in __languages.keys() ]

"""
Calling translate() or babelize() can raise a BabelizerError
"""
class BabelizerError(Exception):
    pass

class LanguageNotAvailableError(BabelizerError):
    pass

class BabelfishChangedError(BabelizerError):
    pass

class BabelizerIOError(BabelizerError):
    pass

def saveHTML(txt):
    f = open('page.html', 'wb')
    f.write(txt)
    f.close()

def clean(text):
    return ' '.join(string.replace(text.strip(), "\n", ' ').split())

def translate(phrase, from_lang, to_lang):
    phrase = clean(phrase)
    try:
        from_code = __languages[from_lang.lower()]
        to_code = __languages[to_lang.lower()]
    except KeyError, lang:
        raise LanguageNotAvailableError(lang)

    html = ""
    try:
        params = urllib.urlencode({ 'ei': 'UTF-8', 'doit': 'done',
                                    'fr': 'bf-res', 'intl': '1',
                                    'tt': 'urltext', 'trtext': phrase,
                                    'lp': from_code + '_' + to_code,
                                    'btnTrTxt': 'Translate' })
        headers = { "Content-type": "application/x-www-form-urlencoded",
                    "Accept": "text/plain" }
        conn = httplib.HTTPConnection("babelfish.yahoo.com")
        conn.request("POST", "http://babelfish.yahoo.com/translate_txt", params, headers)
        response = conn.getresponse()
        html = response.read()
        saveHTML(html)
        conn.close()
        #response = urllib.urlopen('http://babelfish.yahoo.com/translate_txt', params)
    except IOError, what:
        raise BabelizerIOError("Couldn't talk to server: %s" % what)

    #print html
    for regex in __where:
        match = regex.search(html)
        if match:
            break
    if not match:
        raise BabelfishChangedError("Can't recognize translated string.")
    return match.group(1)
    #return clean(match.group(1))

def babelize(phrase, from_language, through_language, limit = 12, callback = None):
    phrase = clean(phrase)
    seen = { phrase: 1 }
    if callback:
        callback(phrase)
    else:
        results = [ phrase ]
    flip = { from_language: through_language, through_language: from_language }
    next = from_language
    for i in range(limit):
        phrase = translate(phrase, next, flip[next])
        if seen.has_key(phrase):
            break
        seen[phrase] = 1
        if callback:
            callback(phrase)
        else:
            results.append(phrase)
        next = flip[next]
    if not callback:
        return results

if __name__ == '__main__':
    import sys

    def printer(x):
        print x
        sys.stdout.flush()

    babelize("I won't take that sort of treatment from you, or from your doggie!",
             'english', 'french', callback = printer)
## end of http://code.activestate.com/recipes/64937/ }}}
and the test code is
import babelizer
print ' '.join(babelizer.available_languages)
result = babelizer.translate( 'How much is that dog in the window?', 'English', 'chinese' )
f = open('result.txt', 'wb')
f.write(result)
f.close()
print result
The result is expected to be inside a div block. I modified the script to save the HTML response. What I found is that all UTF-8 characters are turned into NUL. Do I need to take special care in treating the UTF-8 response?
I think you need to use:

import codecs
codecs.open

instead of plain open in your saveHTML method, to handle UTF-8 documents. See the Python Unicode HOWTO for a complete explanation.
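A sketch of the suggested fix (the file name mirrors the original saveHTML; that the server sends UTF-8 bytes is an assumption taken from the question):

```python
import codecs

def saveHTML(raw):
    # Decode the raw response bytes first (assuming the server sent UTF-8),
    # then let codecs.open write them back out as UTF-8 text.
    if isinstance(raw, bytes):
        raw = raw.decode('utf-8')
    f = codecs.open('page.html', 'w', encoding='utf-8')
    f.write(raw)
    f.close()
```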