I can't figure out how to decode Facebook's way of encoding emoji in the Messenger archive.
Hi everyone,
I'm trying to code a handy utility to explore Messenger's archive file with Python.
The messages file is "badly encoded" JSON, as stated in this other post: Facebook JSON badly encoded
Using .encode('latin1').decode('utf8') I've been able to deal with most characters such as "é" or "à" and display them correctly, but I'm having a hard time with emoji, as they seem to be encoded in a different way.
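For reference, here's roughly how I'm applying that fix when loading the archive (the file name and message keys are just from my own archive):

import json

with open('message_1.json', encoding='utf8') as f:
    data = json.load(f)

def fix_mojibake(s):
    # Facebook stored UTF-8 bytes as if they were latin-1 characters;
    # reverse that: back to bytes via latin1, then decode as real UTF-8
    return s.encode('latin1').decode('utf8')

for message in data['messages']:
    if 'content' in message:
        print(fix_mojibake(message['content']))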
Example of a problematic emoji: \u00f3\u00be\u008c\u00ba
The encoding/decoding does not raise any errors, but Tkinter is not willing to display what the function outputs and gives "_tkinter.TclError: character U+fe33a is above the range (U+0000-U+FFFF) allowed by Tcl". Tkinter is not the real issue yet though, because trying to display the same emoji in the console yields "ó¾º", which clearly isn't what's supposed to be displayed (it's supposed to be a crying face).
I've tried using the emoji library, but it doesn't seem to help:
>>> print(emoji.emojize("\u00f3\u00be\u008c\u00ba"))
'ó¾º'
How can I retrieve the proper emoji and display it?
If it's not possible, how can I detect problematic emojis to maybe sanitize and remove them from the JSON in the first place?
Thank you in advance
.encode('latin1').decode('utf8') is correct - it results in the codepoint U+fe33a (""). This codepoint is in a Private Use Area (PUA) (specifically Supplemental Private Use Area-A), so anyone can assign their own meaning to that codepoint. (Maybe Facebook wanted to use a crying face before Unicode had one, so they used the PUA?)
Googling for that char (https://www.google.com/search?q=) makes Google autocorrect it to U+1f62d ("😭") - sadly I have no idea how Google maps U+fe33a to U+1f62d.
Googling for U+fe33a site:unicode.org gives https://unicode.org/L2/L2010/10132-emojidata.pdf, which lists U+1F62D as the proposed official codepoint.
As that Unicode document lists U+fe33a as a codepoint used by Google, I searched for android old emoji codepoints pua. Among other stuff, two actually usable results:
How to get Android emoji code point - the question links to :
https://unicodey.com/emoji-data/table.htm - an HTML table that seems acceptably parsable
and even better: https://github.com/google/mozc/blob/master/src/data/emoji/emoji_data.tsv - a tab-separated list that maps modern codepoints to legacy PUA codepoints and other information, like this:
1F62D 😭 FE33A E72D E411[...]
https://github.com/googlei18n/noto-emoji/issues/115 - this thread links to:
https://github.com/Crissov/noto-emoji/blob/legacy-pua/emoji_aliases.txt - a machine-readable document that translates legacy PUA codepoints to modern codepoints, like this:
FE33A;1F62D # Google
I included my search queries in the answer because none of the results I found are in any way authoritative - but it should be enough to get your tool working :-)
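If you go with the emoji_aliases.txt file, a minimal sketch of the remapping could look like this (the file path and the "old;new # vendor" line format are assumptions based on the snippet above):

# Build a legacy-PUA -> modern-codepoint table from emoji_aliases.txt
# (assumed line format: "FE33A;1F62D # Google")
aliases = {}
with open('emoji_aliases.txt', encoding='utf8') as f:
    for line in f:
        line = line.split('#')[0].strip()  # drop trailing comments
        if ';' not in line:
            continue
        old, new = line.split(';')
        # an entry may be a sequence of several codepoints
        old_chars = ''.join(chr(int(cp, 16)) for cp in old.split())
        new_chars = ''.join(chr(int(cp, 16)) for cp in new.split())
        aliases[old_chars] = new_chars

def fix_legacy_emoji(text):
    # swap every legacy PUA emoji for its modern equivalent
    for legacy, modern in aliases.items():
        text = text.replace(legacy, modern)
    return text

print(fix_legacy_emoji('\U000fe33a'))  # should print '😭' (U+1F62D)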
Related
I tried to use an escpos package, but it doesn't generate the unique characters of my language; even with the ISO-8859-2 and windows-1250 encodings, the same behavior appears with the Python version.
I have already printed a PDF, and the printer is able to print a unique character, so it seems to be a package problem. As I can not solve the problem with escpos, I decided to generate a PDF and then print it, but I am not sure what kind of tools to use. I saw a PDF kit, but some people advise using XML.
I am looking for some advice on this, or maybe you know how to deal with escpos.
Hello there,
even if I really tried... I'm stuck and somewhat desperate when it comes to Python, Windows, ANSI and character encoding. I need help, seriously... searching the web for the last few hours wasn't any help, it just drives me crazy.
I'm new to Python, so I have almost no clue what's going on. I'm about to learn the language, so my first program, which is almost done, should automatically generate music playlists from a given folder containing mp3s. That works just fine, besides one single problem...
...I can't write umlauts (äöü) to the playlist file.
After I found a solution for "wrongly encoded" data in sys.argv I was able to deal with that. When reading metadata from the MP3s, I'm using some sort of simple character substitution to get rid of all those international special chars, like French accents or that crazy Scandinavian "o" with a slash in it (I don't even know how to type it...). All fine.
But I'd like to write at least the mentioned umlauts to the playlist file; those characters are really common here in Germany. And unlike the metadata, where I don't care about some missing characters or misspelled words, this is relevant - because now I'm writing the paths to the files.
I've tried so many various encoding and decoding methods, I can't list them all here... heck, I'm not even able to tell which settings I tried half an hour ago. I found code online, here, and elsewhere, that seemed to work for some purposes. Not for mine.
I think the tricky part is this: it seems like the problem is the ANSI format of the files I need to write. Correct - I actually need this ANSI stuff. About two hours ago I actually managed to write whatever I'd like to a UTF-8 file. Works like a charm... until I realized that my player (Winamp, old version) somehow doesn't work with those UTF-8 playlist files. It couldn't resolve the path, even if it looks right in my editor.
If I change the file format back to ANSI, paths containing special chars get corrupted. I'm just guessing, but if Winamp reads these UTF-8 files as ANSI, that would cause the problem I'm experiencing right now.
So...
I DO have to write äöü in a path, or it will not work
It DOES have to be an ANSI-"encoded" file, or it will not work
Things like line.write(str.decode('utf-8')) break the function of the file
A magical comment at the beginning of the script like # -*- coding: iso-8859-1 -*- does nothing here (though it is helpful when it comes to the mentioned Metadata and allowed characters in it...)
Oh, and I'm using Python 2.7.3. Third-party module dependencies, you know...
Is there ANYONE who could guide me towards a way out of this encoding hell? Any help is welcome. If I need 500 lines of code for other functions or classes, I'll type them. If there's a module for handling such stuff, let me know! I'd buy it! Anything helpful will be tested.
Thank you for reading, thanks for any comment,
greets!
As mentioned in the comments, your question isn't very specific, so I'll try to give you some hints about character encodings, see if you can apply those to your specific case!
Unicode and Encoding
Here's a small primer about encoding. Basically, there are two ways to represent text in Python:
unicode. You can consider that unicode is the ultimate encoding, you should strive to use it everywhere. In Python 2.x source files, unicode strings look like u'some unicode'.
str. This is encoded text - to be able to read it, you need to know the encoding (or guess it). In Python 2.x, those strings look like 'some str'.
This changed in Python 3 (unicode is now str and str is now bytes).
How does that play out?
Usually, it's pretty straightforward to ensure that your code uses unicode for its execution, and uses str for I/O:
Everything you receive is encoded, so you do input_string.decode('encoding') to convert it to unicode.
Everything you need to output is unicode but needs to be encoded, so you do output_string.encode('encoding').
The most common encodings are cp1252 on Windows (on US or EU systems), and utf-8 on Linux.
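In Python 2.x that pattern looks roughly like this (a sketch; the sample string and the encodings are made up for illustration):

# -*- coding: utf-8 -*-

# Input boundary: decode the incoming bytes exactly once
raw_tag = 'Motörhead'              # a str: utf-8 bytes from some mp3 tag, say
title = raw_tag.decode('utf-8')    # now a unicode object

# ...work with title as unicode everywhere inside the program...
title = title.upper()

# Output boundary: encode exactly once, when writing
with open('out.txt', 'wb') as f:
    f.write(title.encode('cp1252'))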
Applying this to your case
I DO have to write äöü in a path, or it will not work
Windows natively uses unicode for file paths and names, so you should actually always use unicode for those.
It DOES have to be an ANSI-"encoded" file, or it will not work
When you write to the file, be sure to always run your output through output.encode('cp1252') (or whatever encoding ANSI would be on your system).
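For the playlist itself, a minimal sketch (the path and codepage are assumptions; io.open works on Python 2.7 and encodes every unicode string you write):

# -*- coding: utf-8 -*-
import io

track = u'C:\\Musik\\Motörhead\\Ace of Spades.mp3'  # made-up path

# attach the encoding to the file object once; all writes must be unicode
with io.open('playlist.m3u', 'w', encoding='cp1252') as playlist:
    playlist.write(track + u'\n')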
Things like line.write(str.decode('utf-8')) break the function of the file
By now you probably realized that:
If str is indeed a str instance, Python will try to convert it to unicode using the utf-8 encoding, but then try to encode it again (likely in ascii) to write it to the file.
If str is actually a unicode instance, Python will first encode it (likely in ascii, which will probably crash) to then be able to decode it.
Bottom line: you need to know what str holds. If it's unicode, you should encode it. If it's already encoded, don't touch it (or decode it and then re-encode it if the encoding is not the one you want!).
A magical comment at the beginning of the script like # -*- coding: iso-8859-1 -*- does nothing here (though it is helpful when it comes to the mentioned metadata and allowed characters in it...)
Not a surprise: this only tells Python what encoding should be used to read your source file so that non-ASCII characters are properly recognized.
Oh, and I'm using Python 2.7.3. Third-party module dependencies, you know...
Python 3 probably is a big update in terms of unicode and encoding, but that doesn't mean Python 2.x can't make it work!
Will that solve your issue?
Not necessarily: it's possible that the problem lies in the player you're using, not in your code.
Make sure that your script's output is readable using reference tools (such as Windows Explorer). If it is, but the player still can't open it, you should consider updating to a newer version.
On Windows there is a special encoding available called mbcs; it converts between the current default ANSI codepage and Unicode.
For example on a Spanish Language PC:
u'ñ'.encode('mbcs') -> '\xf1'
'\xf1'.decode('mbcs') -> u'ñ'
On Windows, ANSI means the current default multi-byte code page: for Western European languages that is windows-1252 (closely related to ISO-8859-1), for Eastern European languages windows-1250 (related to ISO-8859-2), and other encodings for other languages as appropriate.
More info available at:
https://docs.python.org/2.4/lib/standard-encodings.html
See also:
https://docs.python.org/2/library/sys.html#sys.getfilesystemencoding
# -*- coding comments declare the character encoding of the source code (and therefore of byte-string literals like 'abc').
Assuming that by "playlist" you mean m3u files, then based on this specification you may be at the mercy of the mp3 player software you are using. The spec says only that the files contain text, with no mention of character encoding.
I have personally observed that various mp3 encoding software will use different encodings for mp3 metadata. Some use UTF-8, others ISO-8859-1. So you may have to allow encoding to be specified in configuration and leave it at that.
I have two questions :)
I am working on an extension for my IRC bot. It is supposed to check an RSS feed for new content and post it to a channel. I am using feedparser. The only way I found is to store every piece of content to a file and, every few minutes, download the RSS content and match it against the file, which is in my opinion kinda weird. Is there an easy way to check whether there is new content in an RSS feed? Thx
When I am saving content to the file, some parts (special characters in the Czech language) are sometimes written as unicode escapes - u"xxx". But I want to save them to the file as UTF-8. How do I do it?
RSS items usually have a GUID or a link associated with them. Use the GUID if present, otherwise the link to uniquely identify each item. You'll still have to keep track of which ones you've seen before, as the RSS format doesn't tell you what changed since last time. There really is no other way, I'm afraid.
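A rough sketch of that bookkeeping with feedparser (the feed URL and the seen-IDs file are placeholders):

import feedparser

FEED_URL = 'http://example.com/rss.xml'   # placeholder
SEEN_FILE = 'seen_ids.txt'

try:
    with open(SEEN_FILE) as f:
        seen = set(line.strip() for line in f)
except IOError:
    seen = set()

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # feedparser exposes the item GUID as 'id'; fall back to the link
    uid = entry.get('id') or entry.get('link')
    if uid and uid not in seen:
        print(entry.get('title', uid))    # new item: post it to the channel here
        seen.add(uid)

with open(SEEN_FILE, 'w') as f:
    for uid in seen:
        f.write(uid + '\n')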
To save data (a unicode object) in UTF-8, simply encode it when writing to the file:
output.write(data.encode('utf8'))
Please do read the Joel Spolsky article on Unicode and the Python Unicode HOWTO, to fully understand what encoding and decoding means.
I need help with a scraper I'm writing. I'm trying to scrape a table of university rankings, and some of those schools are European universities with foreign characters in their names (e.g. ä, ü). I'm already scraping another table on another site with foreign universities in the exact same way, and everything works fine. But for some reason, the current scraper won't work with foreign characters (and as far as parsing foreign characters, the two scrapers are exactly the same).
Here's what I'm doing to try & make things work:
Declare encoding on the very first line of the file:
# -*- coding: utf-8 -*-
Importing & using smart_unicode from the Django framework: from django.utils.encoding import smart_unicode
school_name = smart_unicode(html_elements[2].text_content(), encoding='utf-8',
                            strings_only=False, errors='strict').encode('utf-8')
Using the encode function, as seen above chained with the smart_unicode function.
I can't think of what else I could be doing wrong. Before dealing with these scrapers, I really didn't understand much about different encodings, so it's been a bit of an eye-opening experience. I've tried reading the following, but still can't overcome this problem:
http://farmdev.com/talks/unicode/
http://www.joelonsoftware.com/articles/Unicode.html
I understand that in an encoding, every character is assigned a number, which can be expressed in hex, binary, etc. Different encodings have different capacities for how many languages they support (e.g. ASCII only supports English, while UTF-8 seems to support everything). However, I feel like I'm doing everything necessary to ensure the characters are printed correctly. I don't know where my mistake is, and it's driving me crazy.
Please help!!
When extracting information from a web page, you need to determine its character encoding, similarly to how browsers do such things (analyzing HTTP headers, parsing HTML to find meta tags, and possibly guesswork based on the actual data, e.g. the presence of something that looks like BOM in some encoding). Hopefully you can find a library routine that does this for you.
In any case, you should not expect all web sites to be UTF-8 encoded. ISO-8859-1 is still in widespread use, and in general reading ISO-8859-1 as if it were UTF-8 results in a big mess (for any non-ASCII characters).
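One such library routine is the chardet package, which guesses an encoding from raw bytes (a sketch; the utf-8 fallback is an assumption):

import chardet

raw = open('page.html', 'rb').read()     # the raw bytes fetched from the site
guess = chardet.detect(raw)              # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73}
encoding = guess['encoding'] or 'utf-8'  # fall back if detection fails
text = raw.decode(encoding, 'replace')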
If you are using the requests library, it will automatically decode the content based on the HTTP headers. Getting the HTML content of a page is really easy:
>>> import requests
>>> r = requests.get('https://github.com/timeline.json')
>>> r.text
'[{"repository":{"open_issues":0,"url":"https://github.com/...
First, you need to look at the <head> part of the document and see whether there's charset information:
<meta http-equiv="Content-Type" content="text/html; charset=xxxxx">
(Note that StackOverflow, this very page, doesn't have any charset info... I wonder how 中文字, which I typed assuming it's UTF-8 in here, will display on Chinese PeeCees that are most probably set up as GBK, or Japanese pasokon which are still firmly in Shift-JIS land).
So if you have a charset, you know what to expect, and can deal with it accordingly. If not, you'll have to do some educated guessing -- are there non-ASCII chars (>127) in the plain text version of the page? Are there HTML entities like &#x4e00; (一) or &eacute; (é)?
Once you have guessed/ascertained the encoding of the page, you can convert that to UTF-8, and be on your way.
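A crude sketch of that by-hand guessing (a regex hack; a real HTML parser is more robust):

import re

def sniff_charset(raw_bytes):
    # look for <meta ... charset=...> near the top of the document
    head = raw_bytes[:2048].decode('ascii', 'replace')
    m = re.search(r'charset=["\']?([\w-]+)', head, re.IGNORECASE)
    return m.group(1) if m else None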
I am using Python to write some scripts that integrate two systems. The system scans mailboxes and searches for a specific subject line, then parses the information from the email. One of the elements I am looking for is an HTML link, which I then fetch with curl, writing the HTML code to a text file.
My question is: if the text in the email is in Japanese, are there any modules in Python that will automatically translate that text to English? Or do I have to convert the string to Unicode and then decode that?
Here is an example of what I am seeing. When I use curl to grab the text from the URL:
USB Host Stack 処理において解放されたメモリを不正に使用している
When I do a simple re.match to grab the string and write it to a file, I get this:
USB Host Stack æQtk0J0D0f0ã‰>eU0Œ0_0á0â0ê0’0Nckk0O(uW0f0D0‹0
I also get the following when I grab the email using the email module
>>> emailMessage.get_payload()
USB Host Stack =E5=87=A6=E7=90=86=E3=81=AB=E3=81=8A=E3=81=84=E3=81=A6=E8=A7=
=A3=E6=94=BE=E3=81=95=E3=82=8C=E3=81=9F=E3=83=A1=E3=83=A2=E3=83=AA=E3=82=92=
=E4=B8=8D=E6=AD=A3=E3=81=AB=E4=BD=BF=E7=94=A8=E3=81=97=E3=81=A6=E3=81=84=E3=
=82=8B
So, I guess my real question is: what steps do I have to take to get this to convert to English correctly? I'd really like to take the first one, which is in Japanese characters, and convert that to English.
Natural language translation is a very challenging problem, as others wrote. So look into sending the strings to be translated to a service, e.g. Google Translate, which will translate them for you (poorly, but it's better than nothing) and send them back.
The following SO link shows one way: translate url with google translate from python script
Before you get that to work, you should sort out your encoding problems (Unicode, quoted-printable, etc.) so that you're reading and writing text without corrupting it.
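For the quoted-printable payload shown in the question, the email module can already undo the transfer encoding before any translation step (a sketch; emailMessage is the Message object from the question, and the utf-8 fallback is an assumption):

# undo the Content-Transfer-Encoding (the =E5=87=A6... quoted-printable)
raw = emailMessage.get_payload(decode=True)

# then decode the bytes using the charset the message declares
charset = emailMessage.get_content_charset() or 'utf-8'
text = raw.decode(charset)
print(text)   # USB Host Stack 処理において解放されたメモリを不正に使用している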