Pandas Read_CSV Handle Empty Files - python

I am looping through a directory of CSV files and reading them with pandas' read_csv.
Some of these files are empty (headers only), so I get the following error:
CParserError: Passed header=1 but only 1 lines in file
Though I am sure there is a better way, my first instinct is to handle the error, as follows:
try:
    v = pd.read_csv(strPath, header=1)
except CParserError:
    return
But this causes the following error:
NameError: name 'CParserError' is not defined
Your advice is greatly appreciated.
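For anyone hitting this now: the exception has since moved. In pandas 0.20 and later the parser exceptions live in pandas.errors, where CParserError was renamed ParserError, and an empty file raises EmptyDataError. A minimal sketch under that assumption (read_or_skip is a name introduced here for illustration):

import pandas as pd

def read_or_skip(path):
    # Return a DataFrame, or None for files pandas cannot parse.
    try:
        return pd.read_csv(path, header=1)
    except (pd.errors.EmptyDataError, pd.errors.ParserError) as e:
        print('Skipping %s: %s' % (path, e))
        return None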

Related

I was trying to create a .csv file from my pandas DataFrame, but it gives me the error "function() argument 1 must be code, not str"

"function() argument 1 must be code, not str - Getting this error while creating csv file.
But if we create xlsx file there is no error.
Code tried : file_name.to_csv('text.csv')"
Make sure that file_name is actually a DataFrame, not a string.
Refer to:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
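For a quick check that the object really is a DataFrame before writing, a small sketch (the data here is made up):

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(isinstance(df, pd.DataFrame))  # True: safe to call df.to_csv(...)
df.to_csv('text.csv', index=False)   # to_csv is a DataFrame method; index=False drops the row index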

Error when reading csv with merged cells

I have a txt file that I open in Excel that has merged cells (see image).
These cause an error message when reading the file:
CParserError: Error tokenizing data. C error: Expected 1 fields in line 1883, saw 2
At the moment I'm manually taking them out in Excel. I'm sure there could be a way to take these out when reading a file, but I can't find anything on SO. I'm not sure if I'm using the right terminology though.
Using Excel may also be an option. I just wanted to see if there was a method using Python.
If you just want to skip the headers, you might look at this SO answer which suggests the following:
data = pd.read_csv('file1.csv', error_bad_lines=False)
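Note that error_bad_lines was deprecated in pandas 1.3 in favor of on_bad_lines, so on a recent pandas the equivalent would presumably be:

import pandas as pd

# on_bad_lines='skip' replaces error_bad_lines=False on pandas 1.3+
data = pd.read_csv('file1.csv', on_bad_lines='skip')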

how to NOT read_csv if csv is empty

Using Python 2.7 and Pandas
I have to parse through my directory and plot a bunch of CSVs. If the CSV is empty, the script breaks and produces the error message:
pandas.io.common.EmptyDataError: No columns to parse from file
If I have my file paths stored in
file_paths = []
how do I read through each one and plot only the non-empty CSVs? With an empty dataframe defined as df = [], I attempt the following code:
for i in range(0, len(file_paths)):
    if pd.read_csv(file_paths[i] == ""):
        print "empty"
    else:
        df.append(pd.read_csv(file_paths[i], header=None))
I would just catch the appropriate exception, as a catch-all is not recommended in Python:
import pandas

for i in range(0, len(file_paths)):
    try:
        pandas.read_csv(file_paths[i])
    except pandas.errors.EmptyDataError:
        print file_paths[i], "is empty"
Note: as of pandas 0.22.0 (the earliest version I can be sure of), the exception raised for an empty csv is pandas.errors.EmptyDataError. And if you're importing pandas like import pandas as pd, then use pd instead of pandas.
If your csv filenames are in an array manyfiles, then
import pandas as pd

for filename in manyfiles:
    try:
        df = pd.read_csv(filename)
    except pd.errors.EmptyDataError:
        print('Note: %s was empty. Skipping.' % filename)
        continue  # will skip the rest of the block and move to next file
    # operations on df
I'm not sure if pandas.io.common.EmptyDataError is still valid or not; I can't find it in the reference docs. I would also advise against the catch-all except:, as you won't be able to tell if something else is causing the issue.
You can use the built-in try/except syntax to skip over files that raise an error, as follows.
Described here: Try/Except in Python: How do you properly ignore Exceptions?
for i in range(0, len(file_paths)):
    try:
        pd.read_csv(file_paths[i])
        ### Do Some Stuff
    except:
        continue  # or pass
This will attempt to read each file, and if unsuccessful continue to the next file.
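Putting the pieces together for the original goal, here is a sketch that collects only the non-empty CSVs into the df list from the question (the paths in file_paths are illustrative):

import pandas as pd

file_paths = ['a.csv', 'b.csv', 'c.csv']  # illustrative paths
df = []  # list of successfully parsed DataFrames, as in the question
for path in file_paths:
    try:
        df.append(pd.read_csv(path, header=None))
    except pd.errors.EmptyDataError:
        print('%s is empty' % path)  # nothing to plot, skip it
# every entry in df is now a non-empty DataFrame ready to plot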

read_csv() & EOF character in string cause parsing issue

I am trying to read in 50 csv files from a zip file but keep getting
CParserError: Error tokenizing data. C error: EOF inside string starting at line 166
I know there is an error with reading a particular string within the data and can fix it manually, but I don't want to have to extract all the csv files manually to fix each one.
with zipfile.ZipFile(r'C:\Users\Austen\Anaconda\cs109_final\CA34.zip') as zf:
    for name in zf.namelist():
        container[name] = pd.read_csv(zf.open(name))
The problem I found is that there is a single ; in each csv file towards the end of the file. How would I ignore that?
With reference to:
https://github.com/pydata/pandas/issues/5500
Tried to add
container[name] = pd.read_csv(zf.open(name), skipfooter=4)
But I get 'unexpected end of data'
Would adding an option to read_csv fix the problem? I had a similar problem and it was fixed by adding the option quoting=csv.QUOTE_NONE
For example:
import csv
df = pd.read_csv(csvfile, header=None, delimiter="\t", quoting=csv.QUOTE_NONE, encoding='utf-8')
The second comment in this discussion talks about why:
https://github.com/pydata/pandas/issues/5500
Passing engine="python" solves the issue.
Reference: Most frequent errors
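A sketch combining both suggestions with the asker's zip loop (the path is the asker's; the particular parameter mix is an assumption, not verified against their data):

import csv
import zipfile
import pandas as pd

container = {}
with zipfile.ZipFile(r'C:\Users\Austen\Anaconda\cs109_final\CA34.zip') as zf:
    for name in zf.namelist():
        # QUOTE_NONE stops a stray quote character from swallowing the rest
        # of the file; engine='python' is the slower but more tolerant parser
        container[name] = pd.read_csv(zf.open(name), quoting=csv.QUOTE_NONE, engine='python')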

Validate and format JSON files

I have around 2000 JSON files which I'm trying to run through a Python program. A problem occurs when a JSON file is not in the correct format (Error: ValueError: No JSON object could be decoded); in turn, I can't read it into my program.
I am currently doing something like the below:
for files in folder:
    with open(files) as f:
        data = json.load(f)  # it causes an error at this part
I know there are offline methods to validate and format JSON files, but is there a programmatic way to check and format these files? If not, is there a free/cheap alternative to fixing all of these files offline, i.e. I just run the program on the folder containing all the JSON files and it formats them as required?
SOLVED using #reece's comment:
import os
import simplejson

invalid_json_files = []
read_json_files = []

def parse():
    for files in os.listdir(os.getcwd()):
        with open(files) as json_file:
            try:
                simplejson.load(json_file)
                read_json_files.append(files)
            except ValueError, e:
                print("JSON object issue: %s" % e)
                invalid_json_files.append(files)
    print invalid_json_files, len(read_json_files)
Turns out that I was saving a file which is not in JSON format in my working directory which was the same place I was reading data from. Thanks for the helpful suggestions.
The built-in JSON module can be used as a validator:
import json

def parse(text):
    try:
        return json.loads(text)
    except ValueError as e:
        print('invalid json: %s' % e)
        return None  # or: raise
You can make it work with files by using:
with open(filename) as f:
    return json.load(f)
instead of json.loads, and you can include the filename as well in the error message.
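Put together, a file-based variant might look like this (parse_file is a name introduced here, not from the answer above):

import json

def parse_file(filename):
    try:
        with open(filename) as f:
            return json.load(f)
    except ValueError as e:
        print('invalid json in %s: %s' % (filename, e))
        return None  # or: raise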
On Python 3.3.5, for {test: "foo"}, I get:
invalid json: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
and on 2.7.6:
invalid json: Expecting property name: line 1 column 2 (char 1)
This is because the correct json is {"test": "foo"}.
When handling the invalid files, it is best to not process them any further. You can build a skipped.txt file listing the files with the error, so they can be checked and fixed by hand.
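A tiny sketch of that bookkeeping (the skipped.txt name comes from the suggestion above; invalid_json_files is the list built in the earlier snippets):

# write one bad filename per line, for manual review later
with open('skipped.txt', 'w') as skipped:
    for name in invalid_json_files:
        skipped.write(name + '\n')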
If possible, you should check the site/program that generated the invalid json files, fix that and then re-generate the json file. Otherwise, you are going to keep having new files that are invalid JSON.
Failing that, you will need to write a custom json parser that fixes common errors. With that, you should put the originals under source control (or archive them), so you can see and check the differences that the automated tool fixes (as a sanity check). Ambiguous cases should be fixed by hand.
Yes, there are ways to validate that a JSON file is valid. One way is to use a JSON parsing library that will throw exceptions if the input you provide is not well-formatted.
try:
    load_json_file(filename)
except InvalidDataException:  # or something
    pass  # oops, guess it's not valid
Of course, if you want to fix it, you naturally cannot use a JSON loader since, well, it's not valid JSON in the first place. Unless the library you're using will automatically fix things for you, in which case you probably wouldn't even have this question.
One way is to load the file manually and tokenize it and attempt to detect errors and try to fix them as you go, but I'm sure there are cases where the error is just not possible to fix automatically and would be better off throwing an error and asking the user to fix their files.
I have not written a JSON fixer myself so I can't provide any details on how you might go about actually fixing errors.
However, I am not sure whether it would be a good idea to fix all errors, since then you'd have to assume your fixes are what the user actually wants. If it's a missing comma or an extra trailing comma, that might be OK, but there may be cases where it is ambiguous what the user wants.
Here is a full python3 example for the next novice python programmer that stumbles upon this answer. I was exporting 16000 records as json files. I had to restart the process several times so I needed to verify that all of the json files were indeed valid before I started importing into a new system.
I am no python programmer, so when I tried the answers above as written, nothing happened. It seems like a few lines of code were missing. The example below handles files in the current folder or a specific folder.
verify.py
import json
import os
import sys
from os.path import isfile, join

# check if a folder name was specified
if len(sys.argv) > 1:
    folder = sys.argv[1]
else:
    folder = os.getcwd()

# arrays to hold invalid and valid files
invalid_json_files = []
read_json_files = []

def parse():
    # loop through the folder
    for files in os.listdir(folder):
        # check if the combined path and filename is a file
        if isfile(join(folder, files)):
            # open the file
            with open(join(folder, files)) as json_file:
                # try reading the json file using the json interpreter
                try:
                    json.load(json_file)
                    read_json_files.append(files)
                except ValueError as e:
                    # if the file is not valid, print the error
                    # and add the file to the list of invalid files
                    print("JSON object issue: %s" % e)
                    invalid_json_files.append(files)
    print(invalid_json_files)
    print(len(read_json_files))

parse()
Example:
python3 verify.py
or
python3 verify.py somefolder
tested with python 3.7.3
It was not clear to me how to provide the path to the file folder, so I'd like to provide an answer with that option.
import glob
import pandas as pd

path = r'C:\Users\altz7\Desktop\your_folder_name'  # use your path
all_files = glob.glob(path + "/*.json")

data_list = []
invalid_json_files = []

for filename in all_files:
    try:
        df = pd.read_json(filename)
        data_list.append(df)
    except ValueError:
        invalid_json_files.append(filename)

print("Files in correct format: {}".format(len(data_list)))
print("Not readable files: {}".format(len(invalid_json_files)))
# df = pd.concat(data_list, axis=0, ignore_index=True)
# will create a pandas DataFrame from the readable files, if you like
