Using this code to work with a dataset:
cities_orig = pd.read_csv("cities.csv")
I presume that now that cities_orig is assigned, I can alter that dataframe and the original csv file won't be changed. Am I correct? Are there exceptions?
Yes, you are correct, and for ordinary use there are no exceptions. pd.read_csv reads the file and loads its contents into an in-memory DataFrame. Changes to that in-memory object do not touch the csv file on disk; the file only changes if you explicitly write it back, e.g. with to_csv.
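For example, a minimal sketch (the population column here is hypothetical, just for illustration):

import pandas as pd

cities_orig = pd.read_csv("cities.csv")

# Mutating the in-memory DataFrame leaves cities.csv on disk untouched.
cities_orig["population"] = 0  # hypothetical new column

# The file only changes if you explicitly write it back, e.g.:
# cities_orig.to_csv("cities.csv", index=False)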
Related
I'm trying to open an Excel .csv file using pandas and store it in a variable. However, it truncates one of the strings.
Excel .csv file
That's the file information, but when I check the DataFrame this is what I get:
Case Owner; Resolved Date/Time;Case Origin;Case Number;Status;Subject
Reinaldo Franco;10/16/2021 3:54 PM;Chat;20546561;Resolved;General Support
Catalina Sanchez;10/16/2021 5:38 AM;Chat;5625033;Resolved;Support for pay...
As you can see, it truncates where it says Support for pay..., and when I try to use to_csv() it doesn't seem to save the entire column. So I think it is a problem when reading the file, but I'm not sure.
Since I needed to keep all the information in one cell rather than splitting it across columns, I was able to display everything by increasing the maximum cell width with pd.options.display.max_colwidth = 1000 (it is 50 by default). The truncation is display-only; the data itself is read in full.
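For reference, a minimal sketch of that setting (the file name is a stand-in; the semicolon separator matches the sample above):

import pandas as pd

# Display-only option: it controls how wide a printed column can be,
# not what is stored. The default is 50 characters.
pd.options.display.max_colwidth = 1000

df = pd.read_csv('cases.csv', sep=';')  # 'cases.csv' is a stand-in name
print(df['Subject'])  # now prints the full strings instead of truncating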
I tried to read a 4 GB csv file with pandas pd.read_csv, but my system runs out of memory (I guess) and the kernel restarts or the system hangs.
So I tried using the vaex library to convert the csv to HDF5 and run operations (aggregations, group by) on that. For that I've used:
df = vaex.from_csv('Wager-Win_April-Jul.csv', column_names=None, convert=True, chunk_size=5000000)
and
df = vaex.from_csv('Wager-Win_April-Jul.csv', header=None, convert=True, chunk_size=5000000)
But I'm still getting the first record of the csv file as the header (the column names, to be precise), and I'm unable to change the column names. I tried to find a function to change the names but didn't come across any. Please help me with that. Thanks :)
The column names 1559104, 10289, 991... are actually the first record in the csv; somehow vaex is taking the first row as my column names, which I want to avoid.
vaex.from_csv is a wrapper around pandas.read_csv with a few extra options for the conversion.
Per the pandas documentation: with the default header='infer', the csv reader infers the column names from the first row of the file, so the first row becomes the header. To treat the first row as data instead, pass header=None and supply the column names manually via the names kwarg. The same holds true for both vaex and pandas.
I would read the pandas.read_csv documentation to better understand all the options. You can then combine those options with vaex's convert and chunk_size arguments, as in the sketch below.
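A minimal sketch, assuming the file has no header row (the column names here are placeholders):

import vaex

# names= supplies the column names manually; header=None tells the reader
# that the first row is data, not a header. Both are passed through to
# pandas.read_csv.
df = vaex.from_csv('Wager-Win_April-Jul.csv',
                   header=None,
                   names=['wager', 'win', 'count'],  # placeholder names
                   convert=True,
                   chunk_size=5000000)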
I'm attempting to convert a JSON file to an SQLite or CSV file so that I can manipulate the data with python. Here is where the data is housed: JSON File.
I found a few converters online, but those couldn't handle the quite large JSON file I was working with. I tried using a python module called sqlbiter but again, like the others, was never really able to output or convert the file.
I'm not sure where to go now; if anyone has any recommendations or insights on how to get this data into a database, I'd really appreciate it.
Thanks in advance!
EDIT: I'm not looking for anyone to do it for me, I just need to be pointed in the right direction. Are there other methods I haven't tried that I could learn?
You can use the pandas module for this data-processing task as follows:
First, read the JSON file using with, open, and json.load.
Second, change the format of your file a bit: turn the large dictionary that has a main key for every airport into a list of dictionaries instead.
Third, convert your list of dictionaries into a DataFrame with pd.DataFrame(data=list_of_dicts).
Finally, use pandas's to_csv function to write your DataFrame to disk as a CSV file.
It would look something like this:
import pandas as pd
import json

# Read the JSON file into a dict keyed by airport.
with open('./airports.json.txt', 'r') as f:
    j = json.load(f)

# Turn the {airport: record} dict into a list of record dicts.
l = list(j.values())

# Build a DataFrame and write it to disk as CSV.
df = pd.DataFrame(data=l)
df.to_csv('./airports.csv', index=False)
You need to load your JSON file and parse it so all the fields are available, or load the contents into a dictionary. Then you could use pyodbc to write those fields to the database, or write them to a csv (after import csv).
But this is just the general idea; you'll need to study Python and work out every step.
For instance, for writing to the database you could do something like:
for i in range(0, max_len):
    # build the UPDATE statement for row i
    sql_order = "UPDATE MYTABLE SET MYTABLE.MYFIELD ...."
    cursor1.execute(sql_order)
    cursor1.commit()
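And a minimal sketch of the csv route, reusing the airports file from the answer above (it assumes each record is a flat dict sharing the same keys):

import csv
import json

# Load the JSON and flatten the {airport: record} dict into a list of rows.
with open('./airports.json.txt', 'r') as f:
    data = json.load(f)
rows = list(data.values())

# Write the rows out with csv.DictWriter.
with open('./airports.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)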
I have this simple code
data = pd.read_csv(file_path + 'PSI_TS_clean.csv', nrows=None,
names=None, usecols=None)
data.to_hdf(file_path + 'PSI_TS_clean.h5', 'table')
but my data is too big and I run into memory issues.
What is a clean way to do this chunk by chunk?
If the csv is really big, split the file using a method such as the one detailed here: chunking-data-from-a-large-file-for-multiprocessing,
then iterate through the files, use pd.read_csv on each, and write each piece out with the to_hdf method.
For to_hdf, check the parameters here: DataFrame.to_hdf. You need to ensure mode='a' and consider append=True.
Without knowing further detail about the dataframe structure, it's difficult to comment further.
Also, for read_csv there is the parameter low_memory=False.
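Putting that together, a sketch of the chunked conversion (the chunk size is arbitrary, and file_path is assumed to be defined as in the question):

import pandas as pd

# chunksize makes read_csv return an iterator of DataFrames instead of
# loading the whole file into memory at once.
chunks = pd.read_csv(file_path + 'PSI_TS_clean.csv', chunksize=500000)
for chunk in chunks:
    # mode='a' opens the store for appending, append=True adds rows to the
    # existing 'table' node, and format='table' is required for appendable
    # HDF5 storage.
    chunk.to_hdf(file_path + 'PSI_TS_clean.h5', key='table',
                 mode='a', append=True, format='table')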
I am creating a dataframe with a bunch of calculations and adding new columns based on those formulas. Then I am saving the dataframe to an Excel file.
I lose the formula after I save the file and open the file again.
For example, I am using something like:
total = 16
for s in range(total):
    df_summary['Slopes(avg)' + str(s)] = (
        df_summary[['Slope_S' + str(s)]].mean(axis=1)
        * df_summary['Correction1']
        / df_summary['Correction2'].mean(axis=1)
    )
How can I make sure this formula appears in the Excel file I write to, similar to having a live formula in an Excel worksheet?
You can write formulas to an Excel file using the XlsxWriter module; use .write_formula() (https://xlsxwriter.readthedocs.org/worksheet.html#worksheet-write-formula). If you're not attached to using an Excel file to store your dataframe, you might want to look into the pickle module:
import pickle

# to save
with open('saved_df.p', 'wb') as f:
    pickle.dump(df, f)

# to load
with open('saved_df.p', 'rb') as f:
    df = pickle.load(f)
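For the write_formula() route mentioned above, a minimal sketch (the sheet name, cell reference, and formula are illustrative, assuming a df_summary like the one in the question):

import pandas as pd

# Write the DataFrame with the xlsxwriter engine so formulas can be added.
writer = pd.ExcelWriter('summary.xlsx', engine='xlsxwriter')
df_summary.to_excel(writer, sheet_name='Sheet1', index=False)

# Grab the underlying xlsxwriter worksheet and write a live formula;
# Excel recalculates it when the file is opened.
worksheet = writer.sheets['Sheet1']
worksheet.write_formula('H2', '=AVERAGE(B2:G2)')  # illustrative formula

writer.close()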
I think my answer here may be responsive. The short of it is that you need openpyxl (or possibly xlrd, if it has added support for this) to extract the formula, and then xlsxwriter to write the formula back in. It can definitely be done.
This assumes, of course, as @jay s pointed out, that you first write Excel formulas into the DataFrame. (This solution is an alternative to pickling.)
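For the extraction step, a minimal sketch with openpyxl (the file name and cell are illustrative):

from openpyxl import load_workbook

# Without data_only=True, cell.value returns the formula string itself
# rather than its last computed result.
wb = load_workbook('summary.xlsx')
ws = wb.active
print(ws['H2'].value)  # e.g. '=AVERAGE(B2:G2)'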