I initially tried to read a 4GB csv file with pandas pd.read_csv, but my system runs out of memory (I guess) and the kernel restarts or the system hangs.
So I tried using the vaex library to convert the csv to HDF5 and run operations (aggregations, group by) on that. For that I've used:
df = vaex.from_csv('Wager-Win_April-Jul.csv',column_names = None, convert=True, chunk_size=5000000)
and
df = vaex.from_csv('Wager-Win_April-Jul.csv',header = None, convert=True, chunk_size=5000000)
But I'm still getting the first record in the csv file as the header (column names, to be precise), and I'm unable to change the column names. I tried to find a function to change the names but didn't come across any. Please help me with that. Thanks :)
The column names 1559104, 10289, 991... are actually the first record in the csv; somehow vaex is taking the first row as my column names, which I want to avoid.
vaex.from_csv is a wrapper around pandas.read_csv with a few extra options for the conversion.
Reading the pandas documentation: header='infer' (which is the default) tells the csv reader to infer the column names, so if no names are passed the first row of the file is used as the header. If you pass the column names manually via the names kwarg, the first row is treated as data instead. The same holds true for both vaex and pandas.
I would read the pandas.read_csv documentation to better understand all the options. Then you can use those options with vaex and the convert and chunk_size arguments.
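As a sketch (not a definitive recipe): pass header=None together with names, and both get forwarded to pandas.read_csv under the hood. The column names below are placeholders; convert and chunk_size are the values from the question.

import vaex

# Sketch: header=None stops the first data row being used as the header,
# and names supplies your own column names (placeholders here).
df = vaex.from_csv('Wager-Win_April-Jul.csv',
                   header=None,
                   names=['col1', 'col2', 'col3'],
                   convert=True,
                   chunk_size=5000000)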
Related
While handling csv files we can say:
df = pd.read_csv("test.csv", names=header_list, dtype=dtype_dict)
The above creates a dataframe with column names taken from header_list and dtypes from dtype_dict.
Can we do something similar with pd.read_parquet() ?
In my case the headers are passed in separately and are thus not available in the "test.csv" itself.
Another way to get around this could be to shift all the data in the df down by one row (turning the existing header into the first data row) and then replace the header with header_list (if that's even possible?).
Is there an optimal solution to my issue?
I'm not too familiar with parquet so any guidance would be appreciated, thanks.
Can we do something similar with pd.read_parquet() ?
Parquet files contain metadata, including the column names and their types, so there is no need to pass this information when loading the data.
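If you still want to impose your own names and dtypes (as in the csv case), a sketch is to apply them after loading. header_list and dtype_dict below only mirror the ones from the question, and the file name is a placeholder:

import pandas as pd

header_list = ['col1', 'col2', 'col3']                            # example names
dtype_dict = {'col1': 'int64', 'col2': 'float64', 'col3': str}    # example dtypes

df = pd.read_parquet('test.parquet')   # names and dtypes come from the file's metadata
df.columns = header_list               # rename; assumes the file has exactly this many columns
df = df.astype(dtype_dict)             # cast to the desired dtypes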
I know similar questions to this have been asked, but I couldn't find any dealing with the error I'm getting (though I apologize if I'm missing something!). I am trying to remove a few columns from a CSV that wouldn't load in Excel, so I couldn't just delete them within the file. I have the following code:
import os
import pandas as pd
os.chdir(r"C:\Users\maria\Desktop\Project\North American Breeding Bird Survey")
data = pd.read_csv("NABBSStateData.csv")
data.drop(["CountryNum", "Route", "RPID"], axis = 1, inplace = True)
but when I run it I get this error message:
c:\program files (x86)\microsoft visual studio\2019\professional\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\_vendored\pydevd\pydevd.py:1664: DtypeWarning: Columns (0,1,2,3,4,5,6,7,8,9,10,11,12,13) have mixed types. Specify dtype option on import or set low_memory=False.
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
I am relatively new to python/visual studio, and I am having a hard time figuring out what this error message is saying and how to fix it. Thank you!!
Edit: The CSV in question is the state files from this site concatenated together, so you can open one of the state files to see the columns/data types.
Looks like you have mixed data types in some of your columns (e.g. columns 0,1,2,3,4,5,6,7,8,9,10,11,12,13).
Mixed data types means that, within one column (say column 'a'), most rows are numbers but some rows contain strings as well.
Try using the dtype option of pd.read_csv to specify the column types. If you are not sure about a type, use object or str.
This is an example:
df = pd.read_csv('D:\\foo.csv', header=0, dtype={'currency':str, 'v1':object, 'v2':object})
Here's a link to the read_csv documentation.
And here's a list of all the types you can specify.
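For this particular file, a sketch of both approaches (the dtype choices here are assumptions; check the survey's column documentation for the real types):

import pandas as pd

# Option 1: let pandas scan the whole file in one pass before deciding dtypes
data = pd.read_csv("NABBSStateData.csv", low_memory=False)

# Option 2: declare the types up front for the columns you know
data = pd.read_csv("NABBSStateData.csv",
                   dtype={"CountryNum": str, "Route": str, "RPID": str})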
I have a messy text file that I need to sort into columns in a dataframe so I
can do the data analysis I need to do. Here is the messy looking file:
(screenshot of the messy text file)
I can read it in as a csv file, that looks a bit nicer using:
import pandas as pd
data = pd.read_csv('phx_30kV_indepth_0_0_outfile.txt')
print(data)
And this prints out the data aligned, but the issue is that the output is [640 rows x 1 column]. And I need to separate it into multiple columns and manipulate it as a dataframe.
I have tried a number of solutions using StringIO that have worked here before, but nothing seems to be doing the trick.
However, when I do this, there is the issue that the
Try passing delim_whitespace=True (see the read_csv docs):
df = pd.read_csv('phx_30kV_indepth_0_0_outfile.txt', delim_whitespace=True)
Your input file is actually not in CSV format.
Since you provided only a .png picture, it is not even clear whether this file is divided into rows or not.
If it is not, you have to start by "cutting" the content into individual lines and then read the result of this cutting.
I think this is the first step before you can use either read_csv or read_table (with delim_whitespace=True, of course).
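A rough sketch of that cutting step, assuming the file splits on newlines and the data lines are whitespace-separated (skipping empty lines and using header=None are assumptions about the layout):

import io
import pandas as pd

with open('phx_30kV_indepth_0_0_outfile.txt') as f:
    lines = [ln for ln in f.read().splitlines() if ln.strip()]  # drop empty lines

df = pd.read_csv(io.StringIO('\n'.join(lines)),
                 delim_whitespace=True, header=None)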
I am trying to open a csv file using pandas.
This is a screenshot of the file opened in excel.
Some columns have names and some do not. When trying to read this in with pandas I get the "ValueError: Passed header names mismatches usecols" error.
When I open part of the file in excel, add column names, save, and then import with pandas it works.
The problem is the files are large and cannot fully open in excel (plus I'd prefer a more elegant solution anyway).
Is there a way to deal with this issue in pandas?
I have read answers to other questions regarding this error but none were relevant.
Thanks so much in advance!
You can provide the column names via the names parameter:
df = pd.read_csv('pandas_dataframe_importing_csv/example.csv', names=['col1', 'col2', 'col3'], engine='python')
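A sketch for the error in the question; the file name and column names below are placeholders. header=0 makes pandas discard the partially named header row instead of reading it as data, and if you also pass usecols, this error typically means the number of names does not match the number of selected columns.

import pandas as pd

df = pd.read_csv('example.csv',
                 header=0,                                # skip the partial header row
                 names=['col1', 'col2', 'col3', 'col4'])  # one name per column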
I have this simple code
data = pd.read_csv(file_path + 'PSI_TS_clean.csv', nrows=None,
names=None, usecols=None)
data.to_hdf(file_path + 'PSI_TS_clean.h5', 'table')
but my data is too big and I run into memory issues.
What is a clean way to do this chunk by chunk?
If the csv is really big, split the file using a method such as the one detailed here: chunking-data-from-a-large-file-for-multiprocessing
Then iterate through the files, use pd.read_csv on each, and then use the to_hdf method.
For to_hdf, check the parameters here: DataFrame.to_hdf. You need to ensure mode='a' and consider using append.
Without knowing further detail about the dataframe structure, it's difficult to comment further.
Also, for read_csv there is the parameter low_memory=False.
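An alternative to splitting the file on disk is pandas' own chunksize parameter. A minimal sketch reusing the paths from the question; the chunk size is an assumption (tune it to your memory), and note that append=True requires format='table':

import pandas as pd

file_path = ''  # your directory, as in the question

for chunk in pd.read_csv(file_path + 'PSI_TS_clean.csv', chunksize=500000):
    chunk.to_hdf(file_path + 'PSI_TS_clean.h5', key='table',
                 mode='a', format='table', append=True)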