Using Python to translate Japanese to English

I am using Python to write some scripts that integrate two systems. The system scans mailboxes, searches for a specific subject line, and then parses the information from the email. One of the elements I am looking for is an HTML link, which I then fetch with curl and write the HTML to a text file as plain text.
My question is: if the text in the email is in Japanese, are there any modules in Python that will automatically convert that text to English? Or do I have to convert the string to Unicode and then decode it?
Here is an example of what I am seeing. When I use curl to grab the text from the URL:
USB Host Stack 処理において解放されたメモリを不正に使用している
When I do a simple re.match to grab the string and write it to a file, I get this:
USB Host Stack æQtk0J0D0f0ã‰>eU0Œ0_0á0â0ê0’0Nckk0O(uW0f0D0‹0
I also get the following when I grab the email using the email module:
>>> emailMessage.get_payload()
USB Host Stack =E5=87=A6=E7=90=86=E3=81=AB=E3=81=8A=E3=81=84=E3=81=A6=E8=A7=
=A3=E6=94=BE=E3=81=95=E3=82=8C=E3=81=9F=E3=83=A1=E3=83=A2=E3=83=AA=E3=82=92=
=E4=B8=8D=E6=AD=A3=E3=81=AB=E4=BD=BF=E7=94=A8=E3=81=97=E3=81=A6=E3=81=84=E3=
=82=8B
So, I guess my real question is: what steps do I have to take to convert this to English correctly? I'd really like to take the first example, which is the Japanese text, and convert that to English.

Natural language translation is a very challenging problem, as others have written. So look into sending the strings to a translation service, e.g. Google Translate, which will translate them for you (poorly, but it's better than nothing) and send them back.
The following SO link shows one way: translate url with google translate from python script
Before you get that to work, you should sort out your encoding problems (Unicode, quoted-printable, etc.) so that you're reading and writing text without corrupting it.
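If you are parsing the message with the email module anyway, it can undo the quoted-printable transfer encoding for you. A minimal sketch, assuming raw_bytes holds a single-part message pulled from the mailbox:

import email

msg = email.message_from_bytes(raw_bytes)
payload = msg.get_payload(decode=True)          # undoes quoted-printable, gives bytes
charset = msg.get_content_charset() or 'utf-8'  # charset declared in the headers
text = payload.decode(charset)                  # a proper unicode string
print(text)  # USB Host Stack 処理において解放されたメモリを不正に使用している

Once you have clean Unicode text like this, you can hand it to whatever translation service you choose.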

Related

integrating a web server into a python script

I have written a program to generate sequences that pass certain filters (the exact sequences etc don't matter). Each sequence is generated by making a random string of 40 characters made up of C, G, T or A. When each string is generated, it is put through a set of filters, and if it passes the filters it is saved to a list.
I am trying to make one of those filters include an online tool, BPROM, which doesn't appear to have a Python library implementation. This means I will need to get my Python script to send the sequence string described above to the online tool and save the output as a Python variable.
My question is: if I have a URL to the tool (http://www.softberry.com/berry.phtml?topic=bprom&group=programs&subgroup=gfindb), how can I interface my script that generates the sequences with the online tool? Is there a way to send data to the web tool and save the tool's output as a variable? I've been looking into requests, but I'm not sure it's the right way to approach this (as a massive Python/coding noob).
Thanks for reading, I'm a bit brain dead so I hope this made sense :P
Of course, you can use requests or urllib.
Here is demo code:
import urllib.request

with urllib.request.urlopen('http://www.softberry.com/berry.phtml?topic=bprom&group=programs&subgroup=gfindb') as response:
    html = response.read()
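Note that urlopen on that URL only fetches the form page itself. BPROM is driven by an HTML form, so to actually submit a sequence you will likely need a POST request. A hedged sketch with requests - the endpoint URL and the form field name used here are assumptions, so inspect the form element on the BPROM page for the real action URL and input names:

import requests

sequence = 'ACGT' * 10  # your generated 40-character sequence

# HYPOTHETICAL: both the URL and the field name 'DATA' are guesses;
# check the page's <form action=...> and <input name=...> attributes.
response = requests.post(
    'http://www.softberry.com/berry.phtml?topic=bprom&group=programs&subgroup=gfindb',
    data={'DATA': sequence},
)
response.raise_for_status()
bprom_output = response.text  # the tool's output, now a Python variable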

Facebook/messenger archive contains emoji that I am unable to parse

I can't figure out how to decode Facebook's way of encoding emoji in the Messenger archive.
Hi everyone,
I'm trying to code a handy utility to explore Messenger's archive file with Python.
The messages file is "badly encoded" JSON, as stated in this other post: Facebook JSON badly encoded
Using .encode('latin1').decode('utf8') I've been able to deal with most characters such as "é" or "à" and display them correctly. But I'm having a hard time with emoji, as they seem to be encoded in a different way.
Example of a problematic emoji : \u00f3\u00be\u008c\u00ba
The encoding/decoding does not raise any errors, but Tkinter refuses to display what the function outputs and gives "_tkinter.TclError: character U+fe33a is above the range (U+0000-U+FFFF) allowed by Tcl". Tkinter is not the real issue yet, though, because trying to display the same emoji in the console yields "ó¾º", which clearly isn't what's supposed to be displayed (it's supposed to be a crying face).
I've tried using the emoji library but it doesn't seem to help any
>>> print(emoji.emojize("\u00f3\u00be\u008c\u00ba"))
'ó¾º'
How can I retrieve the proper emoji and display it?
If it's not possible, how can I detect problematic emojis to maybe sanitize and remove them from the JSON in the first place?
Thank you in advance
.encode('latin1').decode('utf8') is correct - it results in the codepoint U+FE33A ("󾌺"). This codepoint is in a Private Use Area (PUA) (specifically Supplemental Private Use Area-A), so everyone can assign their own meaning to that codepoint. (Maybe Facebook wanted a crying face before Unicode had one, so they used the PUA?)
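You can verify that round trip in the interpreter:
>>> s = "\u00f3\u00be\u008c\u00ba".encode('latin1').decode('utf8')
>>> hex(ord(s))
'0xfe33a'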
Googling for that char (https://www.google.com/search?q=󾌺) makes Google autocorrect it to U+1F62D ("😭") - sadly I have no idea how Google maps U+FE33A to U+1F62D.
Googling for U+fe33a site:unicode.org gives https://unicode.org/L2/L2010/10132-emojidata.pdf, which lists U+1F62D as proposed official codepoint.
As that document from Unicode lists U+FE33A as a codepoint used by Google, I searched for android old emoji codepoints pua. Among other stuff, two actually usable results:
How to get Android emoji code point - the question links to:
https://unicodey.com/emoji-data/table.htm - a html table, that seems to be acceptably parsable
and even better: https://github.com/google/mozc/blob/master/src/data/emoji/emoji_data.tsv - a tab-separated list that maps modern codepoints to legacy PUA codepoints and other information, like this:
1F62D 😭 FE33A E72D E411[...]
https://github.com/googlei18n/noto-emoji/issues/115 - this thread links to:
https://github.com/Crissov/noto-emoji/blob/legacy-pua/emoji_aliases.txt - a machine-readable document that translates legacy PUA codepoints to modern codepoints, like this:
FE33A;1F62D # Google
I included my search queries in the answer because none of the results I found are in any way authoritative - but it should be enough to get your tool working :-)
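As a starting point, here is a minimal sketch that loads the emoji_aliases.txt file linked above (lines like "FE33A;1F62D # Google") into a mapping and applies it to already-decoded text; it skips multi-codepoint legacy sequences for simplicity, so check the file's format before relying on it:

# build a legacy-PUA -> modern-emoji table from emoji_aliases.txt
pua_map = {}
with open('emoji_aliases.txt', encoding='utf-8') as f:
    for line in f:
        line = line.split('#')[0].strip()  # drop trailing comments
        if not line:
            continue
        legacy_hex, modern_hex = line.split(';')
        if ' ' in legacy_hex.strip():
            continue  # assumption: only handle single-codepoint legacy entries
        pua_map[int(legacy_hex, 16)] = ''.join(
            chr(int(cp, 16)) for cp in modern_hex.split())

text = '\u00f3\u00be\u008c\u00ba'.encode('latin1').decode('utf8')
print(text.translate(pua_map))  # should print the crying face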

RSS parser + unicode decoding ( python )

I have two questions :)
I am working on an extension for my IRC bot. It is supposed to check an RSS feed for new content and post it to a channel. I am using feedparser. The only way I have found is to store every new item in a file, then every few minutes download the RSS content and match it against the file's contents, which is in my opinion kinda weird. Is there some easy way to check if there is new content in an RSS feed? Thx
When I am saving content to a file, sometimes some parts are encoded as Unicode escapes (special characters in the Czech language) - u"xxx". But I want to save them to the file as UTF-8. How do I do it?
RSS items usually have a GUID or a link associated with them. Use the GUID if present, otherwise the link to uniquely identify each item. You'll still have to keep track of which ones you've seen before, as the RSS format doesn't tell you what changed since last time. There really is no other way, I'm afraid.
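For example, with feedparser you can keep a set of identifiers you have already posted (a minimal sketch; persist the set to disk between runs of the bot):

import feedparser

seen = set()  # load/save this from a file so it survives restarts

def new_entries(url):
    feed = feedparser.parse(url)
    fresh = []
    for entry in feed.entries:
        uid = entry.get('id') or entry.get('link')  # GUID first, link as fallback
        if uid and uid not in seen:
            seen.add(uid)
            fresh.append(entry)
    return fresh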
To save data (a unicode object) in UTF-8, simply encode it when writing to the file:
output.write(data.encode('utf8'))
Please do read the Joel Spolsky article on Unicode and the Python Unicode HOWTO, to fully understand what encoding and decoding means.

How do I access both binary and text data for email processing with Python 3?

I am converting a Python 2 program to Python 3 and I'm not sure about the approach to take.
The program reads in either a single email from STDIN, or file(s) are specified containing emails. The program then parses the emails and does some processing on them.
So we need to work with the raw data of the email input, to store it on disk and do an MD5 hash on it. We also need to work with the text of the email input in order to run it through the Python email parser and extract fields etc.
With Python 3 it is unclear to me how we should be reading in the data. I believe we need the raw binary data in order to do an md5 on it, and also to be able to write it to disk. I understand we also need it in text form to be able to parse it with the email library. Python 3 has made significant changes to the IO handling and text handling and I can't see the "correct" approach to read the email raw data and also use the same data in text form.
Can anyone offer general guidance on this?
The general guidance is convert everything to unicode ASAP and keep it that way until the last possible minute.
Remember that str is the old unicode and bytes is the old str.
See http://docs.python.org/dev/howto/unicode.html for a start.
With Python 3 it is unclear to me how we should be reading in the data.
Specify the encoding when you open the file and it will automatically give you unicode. If you're reading from stdin, you'll get unicode; you can read from stdin.buffer to get binary data.
I believe we need the raw binary data in order to do an md5 on it
Yes, you do. Encode it when you need to hash it.
and also to be able to write it to disk.
You specify the encoding when you open the file you're writing it to, and the file object encodes it for you.
I understand we also need it in text form to be able to parse it with the email library.
Yep, but since it'll get decoded when you open the file, that's what you'll have.
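Putting those pieces together, a minimal sketch for the email case, assuming the message arrives on stdin:

import email
import hashlib
import sys

raw = sys.stdin.buffer.read()          # bytes: hash them and store them untouched
digest = hashlib.md5(raw).hexdigest()  # MD5 over the raw data

with open(digest + '.eml', 'wb') as f:
    f.write(raw)                       # write the raw bytes to disk as-is

msg = email.message_from_bytes(raw)    # the email package decodes for parsing
print(digest, msg['Subject'])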
That said, this question is really too open ended for Stack Overflow. When you have a specific problem / question, come back and we'll help.

Handling unicode data in XMLRPC

I have to migrate data to OpenERP through XMLRPC by using TerminatOOOR.
I send a name with value "Rotule right Aurélia".
In Python the name will be encoded with the value 'Rotule right Aur\xc3\xa9lia'.
But in TerminatOOOR (the XML-RPC client) the data is encoded with the value 'Rotule middle Aur\357\277\275lia'.
So on the server side the data is not decoded correctly and I get bad data.
TerminatOOOR is a Ruby plugin for Kettle (a Java product) and I guess it should encode data as UTF-8.
I just don't know why it happens like this.
Any help?
This issue comes from Kettle.
My program uses Kettle to open an Excel file, get the active sheet and transfer the data in that sheet to TerminatOOOR for further handling.
At the phase of reading data from the Excel file, Kettle cannot recognize the encoding and therefore gives bad data to TerminatOOOR.
My workaround is to manually export the Excel file to CSV before giving the data to TerminatOOOR. By doing this, I give up the feature that maps an Excel column name to a variable name (provided by Kettle).
First off, whenever you deal with text (and all text is bound to contain some non-US-ASCII character sooner or later), you'll be much happier doing that in Python 3.x than in the 2.x series. If Py3 is not an option, try to always use from __future__ import unicode_literals (available in Python 2.6 and 2.7).
Basically, when you send text or any other data over the wire, it will only travel in the form of bytes (octets), so it has to be encoded at some point. Try to find out exactly where that encoding takes place in your tool chain; if necessary, use a debugging tool (or deploy print(repr(x)) statements) to look into the relevant variables. The other software you mention is a Ruby plugin running inside a Java tool, and each hop in that chain is a place where the encoding can be mangled. You say that it 'should encode the data by utf-8', but on the other hand, when the receiving end sees the data of an incoming RPC request, that data should already be in UTF-8; it would have to be decoded there to obtain unicode again.
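A minimal sketch of that repr() debugging, under Python 2 with unicode_literals (the value shown is the one from the question):

# -*- coding: utf-8 -*-
from __future__ import unicode_literals  # Python 2.6/2.7

name = "Rotule right Aurélia"
print(repr(name))                  # u'Rotule right Aur\xe9lia'   -> unicode text
print(repr(name.encode('utf-8')))  # 'Rotule right Aur\xc3\xa9lia' -> UTF-8 bytes
# If a repr ever shows '\xef\xbf\xbd' (U+FFFD, octal \357\277\275), the data
# was already corrupted upstream - fix the producer, not the consumer.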
