I'm using Python 3.5 on Windows.
I have this little piece of code that downloads close to one hundred CSV files from different URLs stored in Links.txt:
from urllib import request

new_lines = 'None'

def download_data(csv_url):
    response = request.urlopen(csv_url)
    csv = response.read()
    csv_str = str(csv)
    global new_lines
    new_lines = csv_str.split("\\n")
with open('Links.txt') as file:
    for line in file:
        URL = line
        file_name = URL[54:].rsplit('.ST', 1)[0]
        download_data(URL)
        save_destination = 'C:\\Download data\\Data\\' + file_name + '.csv'
        fx = open(save_destination, "w")
        for lines in new_lines:
            fx.write(lines + "\n")
        fx.close()
The problem is that the generated CSV files always start with b' and, after the last line of the data, there is another ' followed by a couple of empty rows to wrap things up. I do not see these characters when I look at the files in the browser (before I download them).
This creates problems when I want to import and use the data in a database. Do you have any idea on why this happens and how I can get the code to write the CSV files correctly?
Tips that can make the code faster/better, or adjustments for other flaws in the code are obviously very welcome.
What's happening is that urllib treats its stream as bytes: anything that prints as b'...' is a bytes object, not a regular string.
Your immediate problem could be solved by decoding the stream with decode('utf-8') (as Chedy2149 shows), which converts the bytes into a proper string.
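A quick illustration in the interpreter (with made-up CSV data) of what str() does versus decode():

>>> data = b'a,b,c\n1,2,3\n'
>>> str(data)              # keeps the b'...' wrapper, which is what you're seeing
"b'a,b,c\\n1,2,3\\n'"
>>> data.decode('utf-8')   # actually decodes the bytes into text
'a,b,c\n1,2,3\n'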
However, you can avoid this problem entirely by downloading the file directly to disk. You currently go through the work of downloading it, splitting it, and writing it to disk, but all of that seems unnecessary because your code ultimately just writes the file's contents to disk without doing any additional work on them.
You can use urllib.request.urlretrieve and download to a file directly.
Here's an example, modified from your code.
import urllib.request

def download_data(url, file_to_save):
    filename, rsp = urllib.request.urlretrieve(url, file_to_save)
    # Assuming everything worked, the file has been downloaded to file_to_save
with open('Links.txt') as file:
    for line in file:
        url = line.rstrip()  # adding this here to remove the extraneous '\n' from the string
        file_name = url[54:].rsplit('.ST', 1)[0]
        save_destination = 'C:\\Download data\\Data\\' + file_name + '.csv'
        download_data(url, save_destination)
In the download_data function you need to convert the bytes object csv (the response body) to a plain string.
Try replacing csv_str = str(csv) with csv_str = csv.decode('utf-8').
This should properly decode the byte string returned by response.read().
The problem is that response.read() returns a bytes object; str() doesn't convert it to a string the way you expect. Use csv_str = csv.decode() instead.
I am trying to unzip some .json.gz files, but gzip adds some characters to them, which makes them unreadable for JSON.
What do you think is the problem, and how can I solve it?
If I use unzipping software such as 7zip to unzip the file, this problem disappears.
This is my code:
import gzip
import json

with gzip.open('filename', 'rb') as f:
    json_content = json.loads(f.read())
This is the error I get:
Exception has occurred: json.decoder.JSONDecodeError
Extra data: line 2 column 1 (char 1585)
I used this code:
with gzip.open('filename', mode='rb') as f:
    print(f.read())
and realized that the file starts with b' (as shown below):
b'{"id":"tag:search.twitter.com,2005:5667817","objectType":"activity"
I think the b' is what makes the file unworkable for the next stage. Do you have any solution to remove it? There are millions of these zipped files, and I cannot do that manually.
I uploaded a sample of these files in the following link
just a few json.gz files
The problem isn't with that b prefix you're seeing with print(f.read()), which just means the data is a bytes sequence (i.e. integer ASCII values) not a sequence of UTF-8 characters (i.e. a regular Python string) — json.loads() will accept either. The JSONDecodeError is because the data in the gzipped file isn't in valid JSON format, which is required. The format looks like something known as JSON Lines — which the Python standard library json module doesn't (directly) support.
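(A quick check of that first point, with a made-up one-liner: json.loads() accepts bytes directly on Python 3.6+.)

>>> import json
>>> json.loads(b'{"id": 1}')
{'id': 1}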
Dunes' answer to the question that Charles Duffy (at one point) marked this as a duplicate of wouldn't have worked as presented, because of this formatting issue. However, from the sample file you linked to in your question, it looks like there is a valid JSON object on each line of the file. If that's true of all of your files, then a simple workaround is to process each file line-by-line.
Here's what I mean:
import json
import gzip

filename = '00_activities.json.gz'  # Sample file.

json_content = []
with gzip.open(filename, 'rb') as gzip_file:
    for line in gzip_file:  # Read one line.
        line = line.rstrip()
        if line:  # Any JSON data on it?
            obj = json.loads(line)
            json_content.append(obj)

print(json.dumps(json_content, indent=4))  # Pretty-print data parsed.
Note that the printed output shows what the data would look like as valid JSON.
I am extracting data from an html file and outputting it to another html file template using .replace. I wrote it so that on double clicking my script, the page opens up in a browser, ready to be printed.
Everything worked fine until I ran into an extracted string that had a special character in it. On double click, nothing would happen (the web browser would not open). However, it seems to work when I run it straight from IDLE, with one issue: the special character comes out as a weird combination of characters.
I haven't tested this out with other special characters, but my problem right now is happening with Nyström, which comes up as NystrÃ¶m in my outputted file.
I figure this has something to do with encoding/decoding in 'utf-8', however I do not know enough about the subject to solve this issue myself post research.
When I open the read and write files, I make sure they have encoding='utf-8' as the third argument.
Finally, when I print the string I'm having trouble with in IDLE, it comes out fine. The issue only seems to pop up when I write it to my file.
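As far as I can tell, the garbled text matches what you get when UTF-8 bytes are decoded as cp1252; this interpreter snippet (with the name hard-coded) reproduces the exact output I'm seeing:

>>> 'Nyström'.encode('utf-8').decode('cp1252')
'NystrÃ¶m'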
Below are my file read and write calls, if that helps:
import os

path = os.path.dirname(os.path.realpath(__file__))

htmlFile = open(path + input_filename, "r", encoding="utf-8")
htmlString = htmlFile.read()

infile = open(template_path, 'r', encoding='utf-8')
contents = infile.read()
After this I .replace certain parts of content with my extracted strings put into a dictionary named data.
eg:
(please ignore inconsistent naming conventions)
data = dict()
data['name_email'] = email
contents = contents.replace('_name_email', data['name_email'])
then:
outfile = open(output_filename, 'w', encoding='utf-8')
outfile.write(contents)
I am running this on Python 3.6.
I'm new to Python; this is my second time coding in it. The main point of this script is to take a text file that contains thousands of lines of file names (the sNotUsed file) and match it against about 50 XML files. The XML files may contain up to thousands of lines each and are formatted as most XMLs are. I'm not sure what the problem with the code so far is. The code is not fully complete, as I have not added the part that writes the output back to an XML file, but the current last line should be printing at least once. It is not, though.
Examples of the two file formats are as follows:
TEXT FILE:
fileNameWithoutExtension1
fileNameWithoutExtension2
fileNameWithoutExtension3
etc.
XML FILE:
<blocks>
    <more stuff="name">
        <Tag2>
        <Tag3 name="Tag3">
        <!--COMMENT-->
        <fileType>../../dir/fileNameWithoutExtension1</fileType>
        <fileType>../../dir/fileNameWithoutExtension4</fileType>
</blocks>
MY CODE SO FAR:
import os
import re

sNotUsed = list()
sFile = open("C:\Users\xxx\Desktop\sNotUsed.txt", "r") # open snotused txt file
for lines in sFile:
    sNotUsed.append(lines)
#sNotUsed = sFile.readlines() # read all lines and assign to list
sFile.close() # close file

xmlFiles = list() # list of xmlFiles in directory
usedS = list() # list of S files that do not match against sFile txt
search = "\w/([\w\-]+)"

# getting the list of xmlFiles
filelist = os.listdir('C:\Users\xxx\Desktop\dir')
for files in filelist:
    if files.endswith('.xml'):
        xmlFile = open(files, "r+") # open first file with read + write access
        xmlComp = xmlFile.readlines() # read lines and assign to list
        for lines in xmlComp: # iterate by line in list of lines
            temp = re.findall(search, lines)
            #print temp
            if temp:
                if temp[0] in sNotUsed:
                    print "yes" # debugging. I know there is at least one match for sure, but this is not being printed.
TO HELP CLEAR THINGS UP:
Sorry, I guess my question wasn't very clear. I would like the script to go through each XML line by line and see if the FILENAME part of that line matches the exact line of the sNotUsed.txt file. If there is a match, I want to delete that line from the XML. If the line doesn't match any of the lines in sNotUsed.txt, then I would like it to be part of the output of the new modified XML file (which will overwrite the old one). Please let me know if this is still not clear.
EDITED, WORKING CODE
import os
import re
import codecs

sFile = open("C:\Users\xxx\Desktop\sNotUsed.txt", "r") # open sNotUsed txt file
sNotUsed = sFile.readlines() # read all lines and assign to list
sFile.close() # close file

search = re.compile(r"\w/([\w\-]+)")
sNotUsed = [x.strip().replace(',', '') for x in sNotUsed]

directory = r'C:\Users\xxx\Desktop\dir'
filelist = os.listdir(directory) # getting the list of xmlFiles

# for each file in the list
for files in filelist:
    if files.endswith('.xml'): # make sure it is an XML file
        xmlFile = codecs.open(os.path.join(directory, files), "r", encoding="UTF-8") # open first file with read
        xmlComp = xmlFile.readlines() # read lines and assign to list
        print xmlComp
        xmlFile.close() # closing the file since the lines have already been read and assigned to a variable
        xmlEdit = codecs.open(os.path.join(directory, files), "w", encoding="UTF-8") # opening the same file again and overwriting all existing lines
        for lines in xmlComp: # iterate by line in list of lines
            #headerInd = re.search(search, lines) # used to get the headers, comments, and ending blocks
            temp = re.findall(search, lines) # finds all strings that match the regular expression compiled above and makes a list for each
            if temp: # if the list is not empty
                if temp[0] not in sNotUsed: # if the first (and only) value in each list is not in the sNotUsed list
                    xmlEdit.write(lines) # write it in the file
            else: # if the list is empty
                xmlEdit.write(lines) # write it (used to preserve the beginning and ending blocks of the XML, as well as comments)
There are a lot of things to say, but I'll try to stay concise.
PEP8: Style Guide for Python Code
You should use lowercase with underscores for local variables; take a look at PEP8, the Style Guide for Python Code.
File objects and the with statement
Use the with statement to open a file; see File Objects: http://docs.python.org/2/library/stdtypes.html#bltin-file-objects
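For example, applied to your sNotUsed.txt (a minimal sketch):

with open("C:\\Users\\xxx\\Desktop\\sNotUsed.txt") as s_file:
    s_not_used = s_file.readlines()
# the file is closed here, even if an exception was raised inside the block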
Escape Windows filenames
Backslashes in Windows filenames can cause problems in Python programs. You must escape the string using double backslashes or use raw strings.
For example: if your Windows filename is "dir\notUsed.txt", you should escape it like this: "dir\\notUsed.txt" or use a raw string r"dir\notUsed.txt". If you don't do that, the "\n" will be interpreted as a newline!
Note: if you need to support Unicode filenames, you can use Unicode raw strings: ur"dir\notUsed.txt".
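A quick demonstration of the problem and the fix:

>>> print("dir\notUsed.txt")    # the "\n" is interpreted as a newline
dir
otUsed.txt
>>> print(r"dir\notUsed.txt")   # the raw string keeps the backslash
dir\notUsed.txt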
See also question 19065115 on StackOverflow.
Store the filenames in a set: it is an optimized collection without duplicates.
not_used_path = ur"dir\sNotUsed.txt"
with open(not_used_path) as not_used_file:
    not_used_set = set([line.strip() for line in not_used_file])
Compile your regex
It is more efficient to compile a regex that is used numerous times. Again, you should use raw strings to avoid backslash interpretation.
pattern = re.compile(r"\w/([\w\-]+)")
Warning: the os.listdir() function returns a list of filenames, not a list of full paths. See this function in the Python documentation.
In your example, you read a desktop directory 'C:\Users\xxx\Desktop\dir' with os.listdir(), and then you want to open each XML file in this directory with open(files, "r+"). But this is wrong unless your current working directory is that desktop directory. The classic usage is to use the os.path.join() function, like this:
desktop_dir = r'C:\Users\xxx\Desktop\dir'
for filename in os.listdir(desktop_dir):
    desktop_path = os.path.join(desktop_dir, filename)
If you want to extract the filename's extension, you can use the os.path.splitext() function.
desktop_dir = r'C:\Users\xxx\Desktop\dir'
for filename in os.listdir(desktop_dir):
    if os.path.splitext(filename)[1].lower() != '.xml':
        continue
    desktop_path = os.path.join(desktop_dir, filename)
You can simplify this with a list comprehension:
desktop_dir = r'C:\Users\xxx\Desktop\dir'
xml_list = [os.path.join(desktop_dir, filename)
            for filename in os.listdir(desktop_dir)
            if os.path.splitext(filename)[1].lower() == '.xml']
Parse an XML file
How do you parse an XML file? This is a great question!
There are several possibilities:
- use a regex: efficient, but dangerous;
- use a SAX parser: efficient too, but confusing and difficult to maintain;
- use a DOM parser: less efficient, but clearer...
Consider using the lxml package (see: http://lxml.de/).
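For example, a minimal sketch with lxml, assuming the package is installed (the file name is hypothetical, and this requires well-formed XML):

from lxml import etree

tree = etree.parse("example.xml")
for node in tree.iter("fileType"):   # the tag used in your sample XML
    print(node.text)                 # e.g. ../../dir/fileNameWithoutExtension1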
Using a regex is dangerous because, with the way you read the file, you don't take care of the XML encoding. And that is bad! Very bad indeed! XML files are usually encoded in UTF-8, so you should first decode the UTF-8 byte stream. A simple way to do that is to use codecs.open() to open the encoded file.
for xml_path in xml_list:
    with codecs.open(xml_path, "r", encoding="UTF-8") as xml_file:
        content = xml_file.read()
With this solution, the full XML content is stored in the content variable as a Unicode string. You can then use a Unicode regex to parse the content.
Finally, you can use a set intersection to find whether a given XML file contains names in common with the text file.
for xml_path in xml_list:
    with codecs.open(xml_path, "r", encoding="UTF-8") as xml_file:
        content = xml_file.read()
    actual_set = set(pattern.findall(content))
    print(not_used_set & actual_set)
How can I open a text file, read the contents of the file and create a hash table from this content? So far I have tried:
import json
json_data = open(/home/azoi/Downloads/yes/1.txt).read()
data = json.loads(json_data)
pprint(data)
I suggest this solution:
import json
from pprint import pprint

with open("/home/azoi/Downloads/yes/1.txt") as f:
    data = json.load(f)
pprint(data)
The with statement ensures that your file is automatically closed whatever happens, and that your program throws the correct exception if the open fails. The json.load function directly loads data from an open file handle.
Additionally, I strongly suggest reading and understanding the Python tutorial. It's essential reading and won't take too long.
To open a file you have to use the open statement correctly, something like:
json_data=open('/home/azoi/Downloads/yes/1.txt','r')
where the first string is the path to the file and the second is the mode: r = read, w = write, a = append
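For instance, a throwaway sketch (with a hypothetical file name) showing all three modes:

f = open('example.txt', 'w')   # 'w' creates/truncates the file for writing
f.write('hello\n')
f.close()

f = open('example.txt', 'a')   # 'a' appends to the end
f.write('world\n')
f.close()

f = open('example.txt', 'r')   # 'r' opens for reading
print(f.read())                # prints hello and world on separate lines
f.close()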
I have two binary input files, firstfile and secondfile. secondfile is firstfile + additional material. I want to isolate this additional material in a separate file, newfile. This is what I have so far:
import os
import struct
import os
import struct

origbytes = os.path.getsize(firstfile)
fullbytes = os.path.getsize(secondfile)
numbytes = fullbytes-origbytes

with open(secondfile,'rb') as f:
    first = f.read(origbytes)
    rest = f.read()
Naturally, my inclination is to do (which seems to work):
with open(newfile,'wb') as f:
    f.write(rest)
I can't find it now, but I thought I read on SO that I should pack this first using struct.pack before writing it to the file. The following gives me an error:
with open(newfile,'wb') as f:
    f.write(struct.pack('%%%ds' % numbytes,rest))

-----> error: bad char in struct format
This works however:
with open(newfile,'wb') as f:
    f.write(struct.pack('c'*numbytes,*rest))
And for the ones that work, this gives me the right answer
with open(newfile,'rb') as f:
    test = f.read()

len(test)==numbytes
-----> True
Is this the correct way to write a binary file? I just want to make sure I'm doing this part correctly, so I can diagnose whether the second part of the file is corrupted (as another reader program I am feeding newfile to is telling me) or whether I am doing this wrong. Thank you.
If you know that secondfile is the same as firstfile + appended data, why even read in the first part of secondfile?
with open(secondfile,'rb') as f:
    f.seek(origbytes)
    rest = f.read()
As for writing things out,
with open(newfile,'wb') as f:
    f.write(rest)
is just fine. The stuff with struct would just be a no-op anyway. The only thing you might consider is the size of rest. If it could be large, you may want to read and write the data in blocks.
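For example, something along these lines (the block size is an arbitrary choice):

CHUNK_SIZE = 64 * 1024  # 64 KiB per read; tune as needed

with open(secondfile, 'rb') as src, open(newfile, 'wb') as dst:
    src.seek(origbytes)
    while True:
        chunk = src.read(CHUNK_SIZE)
        if not chunk:  # end of file
            break
        dst.write(chunk)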
There is no reason to use the struct module, which is for converting between binary formats and Python objects. There's no conversion needed here.
Strings in Python 2.x are just an array of bytes and can be read and written to and from files. (In Python 3.x, the read function returns a bytes object, which is the same thing, if you open the file with open(filename, 'rb').)
So you can just read the file into a string, then write it again:
import os

origbytes = os.path.getsize(firstfile)
fullbytes = os.path.getsize(secondfile)
numbytes = fullbytes-origbytes

with open(secondfile,'rb') as f:
    f.seek(origbytes)  # skip the part that is identical to firstfile
    rest = f.read()

with open(newfile,'wb') as f:
    f.write(rest)
You don't need to read the first part at all; just move the file pointer to the right position with f.seek(origbytes).
You don't need struct packing; just write rest to newfile.
This is not C; you don't need to escape the % in the format string. What you want is:
f.write(struct.pack('%ds' % numbytes,rest))
It worked for me:
>>> struct.pack('%ds' % 5,'abcde')
'abcde'
Explanation: '%%%ds' % 15 is '%15s', while what you want is '%ds' % 15 which is '15s'
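You can check this in the interpreter:

>>> '%%%ds' % 15
'%15s'
>>> '%ds' % 15
'15s'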