Add special characters in csv pandas python

While writing strings containing certain special characters, such as
Töölönlahdenkatu
using to_csv from pandas, the result in the csv looks like
T%C3%B6%C3%B6l%C3%B6nlahdenkatu
How do we get it to write the text of the string as-is? This is my to_csv command:
df.to_csv(csv_path,index=False,encoding='utf8')
I have even tried
df.to_csv(csv_path,index=False,encoding='utf-8')
df.to_csv(csv_path,index=False,encoding='utf-8-sig')
and still no success. There are other characters, too, that get replaced with seemingly random symbols, e.g.
'-' turns into –
Is there a workaround?

If what you're trying to do is strip the German umlauts and Spanish accents, there is an easy solution for that.
import unicodedata
data = u'Töölönlahdenkatu Adiós Pequeño'
english = unicodedata.normalize('NFKD', data).encode('ASCII', 'ignore')
print(english)
output : b'Toolonlahdenkatu Adios Pequeno'
Let me know if it works or if there are any edge cases.
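A follow-up worth noting: encode() returns bytes, so if the cleaned text is going back into a DataFrame before to_csv, decode it again. A minimal sketch (Python 3; the column name 'street' is made up for illustration):
import unicodedata
import pandas as pd

def strip_accents(text):
    # NFKD splits 'ö' into 'o' plus a combining diaeresis; encoding to ASCII
    # with 'ignore' drops the combining mark, and decode() returns a str.
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')

df = pd.DataFrame({'street': [u'Töölönlahdenkatu Adiós Pequeño']})
df['street'] = df['street'].map(strip_accents)
print(df['street'][0])  # Toolonlahdenkatu Adios Pequeno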

Special characters like ö are not stored in a csv the same way English letters are: they take more than one byte. A program like Excel has to use the file's encoding to interpret those bytes as the intended characters when you open the file, which is why the same csv can look different when you view it in another editor such as VS Code.
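For what it's worth, a quick sanity check suggests to_csv itself preserves these characters as long as the file is read back with the same encoding; the %C3%B6 form in the question is URL percent-encoding of the UTF-8 bytes, which to_csv does not produce on its own, so it has likely been applied by some other step in the pipeline. A minimal sketch (Python 3; the file name is made up):
import pandas as pd
import urllib.parse

df = pd.DataFrame({'street': ['Töölönlahdenkatu']})
df.to_csv('streets.csv', index=False, encoding='utf-8')
# Reading back with the same encoding restores the character intact.
print(pd.read_csv('streets.csv', encoding='utf-8')['street'][0])  # Töölönlahdenkatu
# URL percent-encoding is what yields the form seen in the question.
print(urllib.parse.quote('ö'))  # %C3%B6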

Related

How to remove special characters in json data python

I am reading a set of data from a json file. Content of the json file looks like:
"Address":"4820 ALCOA AVE� ",
"City":"VERNON� "
As you can see, it contains the special character � and whitespace at the end. While reading this json data, it comes out like:
'address': '4820 ALCOA AVE� '
'city': 'VERNON� '
I can remove the whitespace easily but I am not sure how I can remove the �. I do not have direct access to the json file, so I cannot edit it, and even if I did, it would take a couple of hours to edit the file. Is there any way in Python to remove these special characters? Please help. Thanks
You can use a regexp that strips everything outside the printable ASCII range:
import re
address = re.sub(r"[^\x20-\x7E]", "", "4820 ALCOA AVE� ")
print(address)
Looks like somewhere upstream the character encoding wasn't handled properly, and the text ended up with replacement characters. You may need to keep an eye out in case more important parts of the text were mangled too (e.g. accented characters, non-English letters, emoji).
For the immediate problem, you can load the JSON data with the utf-8 encoding, then strip the character '\ufffd'.
value = value.strip().strip('\ufffd')
If the replacement characters also appear in the middle (and you want to delete them), you might want to use replace() instead.
value = value.replace('\ufffd', '').strip()
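Putting the pieces together, a minimal sketch of the whole cleanup, assuming the data parses as JSON and the dirty values are top-level strings (the file name 'data.json' is made up):
import json

with open('data.json', encoding='utf-8') as f:
    record = json.load(f)

# Drop replacement characters and surrounding whitespace from every string value.
cleaned = {key: value.replace('\ufffd', '').strip() if isinstance(value, str) else value
           for key, value in record.items()}
print(cleaned['Address'])  # 4820 ALCOA AVE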

Special characters in xml file with Python 2.7

I have several strings like this:
"Programa Directrices de Gesti\xc3\xb3n Tur\xc3\xadstica"
that I should store in an xml file in this way:
<content><![CDATA[Programa Directrices de Gestión Turística]]></content>
I use this code:
from xml.dom import minidom
# (doc, root, tag and text are set up earlier; that part is not shown)
data_cdata = doc.createCDATASection(text)
cdv = doc.createElement(tag)
cdv.appendChild(data_cdata)
root.appendChild(cdv)
doc.appendChild(root)
but the output is:
<content><![CDATA["Programa Directrices de Gesti\xc3\xb3n Tur\xc3\xadstica]]></content>
How can I do it?
(Sorry for my English.)
Python 2 does not represent characters outside of the ASCII range the way you would like. The sequences \xc3\xb3 and \xc3\xad are the UTF-8 byte pairs for the characters ó and í.
It seems your code never translates those bytes back into the actual characters: instead of writing ó and í, it writes their byte representations \xc3\xb3 and \xc3\xad. Now I know nothing of the library you use, but I would look around the appendChild call for a quick fix regarding the translation (see the sketch below). If you cannot find one, you could perhaps iterate over the text with a loop, removing special characters and turning them into regular letters
("ó" into "o").
I wish I could be of more help :).
Good luck,
Jesper
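For what it's worth, the usual Python 2 fix here is to decode the UTF-8 byte string into a unicode object before building the CDATA node; minidom then serialises the real characters. A minimal sketch (Python 2.7; the element names are made up):
from xml.dom import minidom

text = "Programa Directrices de Gesti\xc3\xb3n Tur\xc3\xadstica"  # UTF-8 bytes
doc = minidom.Document()
root = doc.createElement('root')
cdv = doc.createElement('content')
cdv.appendChild(doc.createCDATASection(text.decode('utf-8')))  # bytes -> unicode
root.appendChild(cdv)
doc.appendChild(root)
print(doc.toxml(encoding='utf-8'))
# <?xml version="1.0" encoding="utf-8"?><root><content><![CDATA[Programa Directrices de Gestión Turística]]></content></root>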

How to find non-ascii characters in file using Regular Expression Python

I have a string of characters that includes [a-z] as well as á, ü, ó, ñ, å, and so on. Currently I am using regular expressions to get every line in a file that includes these characters.
Sample of spanishList.txt:
adan
celular
tomás
justo
tom
átomo
camara
rosa
avion
Python code (charactersToSearch comes from the Flask route @application.route('/<charactersToSearch>')):
print (charactersToSearch)
#'átdsmjfnueó'
...
#encode
charactersToSearch = charactersToSearch.encode('utf-8')
query = re.compile('[' + charactersToSearch + ']{2,}$', re.UNICODE).match
words = set(word.rstrip('\n') for word in open('spanishList.txt') if query(word))
...
When I do this, I am expecting to get the words in the text file that include the characters in charactersToSearch. It works perfectly for words without special characters:
...
#after doing further searching for other conditions, return list of found words.
return '<br />'.join(sorted(set(word for (word, path) in solve())))
>>> adan
>>> justo
>>> tom
The only problem is that it ignores all words in the file that aren't pure ASCII. I should also be getting tomás and átomo.
I've tried encode, UTF-8, and ur'[...]', but I haven't been able to get it to work for all characters. The file and the program (# -*- coding: utf-8 -*-) are in utf-8 as well.
A different tack
I'm not sure how to fix it in your current workflow, so I'll suggest a different route.
This regex will match any character that is neither a white-space character nor a letter in the extended ASCII range (such as A or é). In other words, if one of your words contains a weird character that is not part of this set, the regex will match.
(?i)(?!(?![×Þß÷þø])[a-zÀ-ÿ])\S
Of course this will also match punctuation, but I'm assuming that we're only looking at words in an unpunctuated list; otherwise, excluding punctuation is not too hard.
As I see it, your challenge is to define your set.
In Python, you can do something like:
if re.search(r"(?i)(?!(?![×Þß÷þø])[a-zÀ-ÿ])\S", subject):
    pass  # Successful match
else:
    pass  # Match attempt failed
I feel your pain. Dealing with Unicode in python2.x is a headache.
The problem with that input is that python 2 sees "á" as the raw byte string '\xc3\xa1' instead of the unicode character u'\xe1'. So you're going to need to sanitize the input before passing the string into your regex.
To change a raw byte string to a unicode string:
char = "á"
## print char yields the infamous, and in python unparsable, "\xc3\xa1",
## which is probably what the regex is not registering.
bytes_in_string = [byte for byte in char]
string = ''.join(hex(ord(byte))[2:] for byte in bytes_in_string)
new_unicode_string = unichr(int(string, 16))
## (note: this glues the hex digits together, giving u'\uc3a1' rather than
## the real character u'\xe1'; char.decode('utf-8') is the reliable route)
There's probably a better way, because this is a lot of operations to get something ready for regex, which I think is supposed to be faster in some way than iterating & 'if/else'ing.
Dunno though, not an expert.
I used something similar to this to isolate the special-char words when I parsed wiktionary, which was a wicked mess. As far as I can tell you're going to have to comb through it to clean it up anyway, so you may as well just:
for word in file:
    try:
        word.encode('UTF-8')
    except UnicodeDecodeError:
        your_list_of_special_char_words.append(word)
Hope this helped, and good luck!
On further research I found this post:
Bytes in a unicode Python string
I was able to figure out the issue. After getting the string from the flask app route, encode it (otherwise it gives you an error), and then decode both charactersToSearch and each word in the file.
charactersToSearch = charactersToSearch.encode('utf-8')
Then decode it in UTF-8 (if you leave the previous line out, it gives you an error):
UNIOnlyAlphabet = charactersToSearch.decode('UTF-8')
query = re.compile('[' + UNIOnlyAlphabet + ']{2,}$', re.U).match
Lastly, when reading the UTF-8 file and using query, don't forget to decode each word in the file.
words = set(word.decode('UTF-8').rstrip('\n') for word in open('spanishList.txt') if query(word.decode('UTF-8')))
That should do it. Now the results show both regular and special characters.
justo
tomás
átomo
adan
tom

Writing Escape Characters to a Csv File in Python

I'm using the csv module in python and escape characters keep messing up my csv's. For example, if I had the following:
import csv
rowWriter = csv.writer(open('bike.csv', 'w'), delimiter = ",")
text1 = "I like to \n ride my bike"
text2 = "pumpkin sauce"
rowWriter.writerow([text1, text2])
rowWriter.writerow(['chicken','wings'])
I would like my csv to look like:
I like to \n ride my bike,pumpkin sauce
chicken,wings
But instead it turns out as
I like to
ride my bike,pumpkin sauce
chicken,wings
I've tried combinations of quoting, doublequote, escapechar and other parameters of the csv module, but I can't seem to make it work. Does anyone know what's up with this?
*Note - I'm also using codecs encode("utf-8"), so text1 really looks like "I like to \n ride my bike".encode("utf-8")
The problem is not with writing them to the file. The problem is that \n is a line break when inside '' or "". What you really want is either 'I like to \\n ride my bike' or r'I like to \n ride my bike' (notice the r prefix).
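To make that concrete, a minimal sketch that escapes the newline before writing, which yields the single-line output the question asks for (Python 3; newline='' keeps the csv module from inserting extra blank lines on Windows):
import csv

text1 = "I like to \n ride my bike"
text2 = "pumpkin sauce"

with open('bike.csv', 'w', newline='') as f:
    rowWriter = csv.writer(f, delimiter=',')
    # Turn the real line break into the two characters backslash + n.
    rowWriter.writerow([text1.replace('\n', '\\n'), text2])
    rowWriter.writerow(['chicken', 'wings'])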
Firstly, it is not obvious why you want r"\n" (two bytes) to appear in your file instead of "\n" (one byte). What is the consumer of the output file meant to do? Use ast.literal_eval() on each input field? If your actual data contains any of (non-ASCII characters, apostrophes, quotes), then I'd be very wary of serialising it using repr().
Secondly, you have misreported either your code or your output (or both). The code that you show actually produces:
"I like to
ride my bike",pumpkin sauce
chicken,wings
Thirdly, about your "I like to \n ride my bike".encode("utf-8"): str_object.encode("utf-8") is absolutely pointless if str_object contains only ASCII bytes -- it does nothing. Otherwise it raises an exception.
Fourthly, this comment:
I don't need to call encode anymore, now that I'm using the raw
string. There are a lot of unicode characters in the text that I am
using, so before I started using the raw string I was using encode so
that csv could read the unicode text
doesn't make any sense -- as I've said, "ascii string".encode('utf8') is pointless.
Consider taking a step or two back, and explain what you are really trying to do: where does your data come from, what's in it, and most importantly, what is the process that reads the file going to do with it?
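To illustrate that last point: the quoted, multi-line output is already valid CSV, and any CSV-aware consumer restores the embedded newline when it reads the file back. A minimal sketch (Python 3):
import csv

with open('bike.csv', 'w', newline='') as f:
    csv.writer(f).writerow(["I like to \n ride my bike", "pumpkin sauce"])

with open('bike.csv', newline='') as f:
    row = next(csv.reader(f))
print(repr(row[0]))  # 'I like to \n ride my bike' -- the newline survives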

What's a good way to replace international characters with their base Latin counterparts using Python?

Say I have the string "blöt träbåt", which has a few a's and o's with umlauts and a ring above. I want it to become "blot trabat" as simply as possible. I've done some digging and found the following method:
import unicodedata
unicode_string = unicodedata.normalize('NFKD', unicode(string))
This will give me the string in unicode format with the international characters split into base letter and combining character (\u0308 for umlauts.) Now to get this back to an ASCII string I could do ascii_string = unicode_string.encode('ASCII', 'ignore') and it'll just ignore the combining characters, resulting in the string "blot trabat".
The question here is: is there a better way to do this? It feels like a roundabout way, and I was thinking there might be something I don't know about. I could of course wrap it up in a helper function, but I'd rather check if this doesn't exist in Python already.
It would be better if you created an explicit table, and then used the unicode.translate method. The advantage would be that transliteration is more precise, e.g. transliterating "ö" to "oe" and "ß" to "ss", as should be done in German.
There are several transliteration packages on PyPI: translitcodec, Unidecode, and trans.
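To make the explicit-table approach concrete, a minimal sketch (str.translate in Python 3 accepts a dict keyed by code points; in Python 2 the same method lives on unicode objects):
# Explicit transliteration table: code point -> replacement string.
table = {ord(u'ö'): u'o', ord(u'ä'): u'a', ord(u'å'): u'a', ord(u'ß'): u'ss'}
print(u'blöt träbåt'.translate(table))  # blot trabat

# Or with one of the PyPI packages mentioned above, e.g. Unidecode:
# from unidecode import unidecode
# print(unidecode(u'blöt träbåt'))      # blot trabat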
