How to process .dat file Dataframe with multiple columns in different rows? - python

I am trying to import data from .dat files.
The files have the following structure (and there are a few hundred for each measurement):
#-G8k5perc
#acf0
4e-07 1.67466
8e-07 1.57061
...
13.4217728 0.97419
&
#fit0
2.4e-06 1.5376
3.2e-06 1.5312
...
13.4 0.99578
&
...
#cnta0
#with g2
#cnta0
0 109.74
0.25 107.97
...
19.75 104.05
#rate0 107.2
I have tried:
1)
df = pd.read_csv("G8k5perc-1.dat")
which only gives one column.
Adding sep=' ', delimiter=' ', or delim_whitespace=True leads to
ParserError: Error tokenizing data. C error: Expected 1 fields in line 3, saw 2
2)
I have seen someone using:
from string import find, rfind, split, strip
which raises ImportError: cannot import name 'find' from 'string' for all four (these module-level functions were removed in Python 3; the equivalent str methods should be used instead).
3)
Creating slices and changing them afterwards won't work either:
acf=df[1:179]
acf["#-G8k5perc"]= acf["#-G8k5perc"].str.split(" ", n = 1, expand = True)
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
Any ideas on how to get two columns for each set of data (acf0, fit0, etc.) in the files?

You can't read a .dat file in this format directly with a CSV reader; the comment and separator lines get in the way. One approach is to convert it first.
Try the code below:
import csv
datContent = [i.strip().split() for i in open("./yourdata.dat").readlines()]
# csv.writer needs text mode with newline='' in Python 3 (not "wb")
with open("./yourdata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(datContent)
Then try to use pandas to make new columns:
import pandas as pd
def your_func(row):
    return row['x-momentum'] / row['mass']
columns_to_keep = ['#time', 'x-momentum', 'mass']
dataframe = pd.read_csv("./yourdata.csv", usecols=columns_to_keep)
dataframe['new_column'] = dataframe.apply(your_func, axis=1)
print(dataframe)
Replace yourdata.csv with your input file name.
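Alternatively, you can parse the sections directly without the CSV round trip. Below is a minimal sketch: it assumes each block starts with a #name header line, contains two whitespace-separated columns, and ends with & (the function name and the 'x'/'y' column names are my own choice, not from the original files):

```python
import pandas as pd

def read_dat_sections(path):
    """Parse a .dat file whose blocks start with '#name' and end with '&'.

    Returns a dict mapping section names (e.g. 'acf0', 'fit0') to
    two-column DataFrames.
    """
    sections = {}
    name, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith("#"):
                # a new header: flush the previous section, if any
                if name and rows:
                    sections[name] = pd.DataFrame(rows, columns=["x", "y"])
                name, rows = line.lstrip("#"), []
            elif line == "&":
                # '&' terminates the current section
                if name and rows:
                    sections[name] = pd.DataFrame(rows, columns=["x", "y"])
                name, rows = None, []
            else:
                parts = line.split()
                if len(parts) == 2:
                    rows.append([float(parts[0]), float(parts[1])])
    if name and rows:  # flush a trailing section without '&'
        sections[name] = pd.DataFrame(rows, columns=["x", "y"])
    return sections
```

Each section (acf0, fit0, ...) then comes out as its own two-column DataFrame, e.g. sections["acf0"].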

Related

Handle variable as file with pandas dataframe

I would like to create a pandas dataframe out of a list variable.
With pd.DataFrame() I am not able to declare a delimiter, which leads to just one column holding each whole list entry.
If I use pd.read_csv() instead, I of course receive the following error
ValueError: Invalid file path or buffer object type: <class 'list'>
Is there a way to use pd.read_csv() with my list, without first saving the list to a CSV and reading that file in a second step?
I also tried pd.read_table(), which also needs a file or buffer object.
Example data (separated by tab stops):
Col1 Col2 Col3
12 Info1 34.1
15 Info4 674.1
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
Current workaround:
with open(f'{filepath}tmp.csv', 'w', encoding='UTF8') as f:
    [f.write(line + "\n") for line in consolidated_file]
df = pd.read_csv(f'{filepath}tmp.csv', sep='\t', index_col=1)
import pandas as pd
df = pd.DataFrame([x.split('\t') for x in test])
print(df)
If you want the first row as the header, then:
df.columns = df.iloc[0]
df = df[1:]
It seems simpler to convert it to a nested list, as in the other answer:
import pandas as pd
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
data = [line.split('\t') for line in test]
df = pd.DataFrame(data[1:], columns=data[0])
but you can also join it back into a single string (or receive it directly from a socket or web API as one string) and then use io.StringIO (or io.BytesIO for bytes) to simulate a file in memory.
import pandas as pd
import io
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
single_string = "\n".join(test)
file_like_object = io.StringIO(single_string)
df = pd.read_csv(file_like_object, sep='\t')
or shorter
df = pd.read_csv(io.StringIO("\n".join(test)), sep='\t')
This method is useful when you receive data over the network (socket, web API) as a single string.

Python & Pandas: How to address NaN values in a loop?

With Python and Pandas I'm seeking to take values from CSV cells and write them as txt files via a loop. The structure of the CSV file is:
user_id, text, text_number
0, test text A, text_0
1,
2,
3,
4,
5, test text B, text_1
The script below successfully writes a txt file for the first row - it is named text_0.txt and contains test text A.
import pandas as pd
df = pd.read_csv("test.csv", sep=",")
for index in range(len(df)):
    with open(df["text_number"][index] + '.txt', 'w') as output:
        output.write(df["text"][index])
However, I receive an error when it proceeds to the next row:
TypeError: write() argument must be str, not float
I'm guessing the error is generated when it encounters values it reads as NaN. I attempted to add the dropna feature per the pandas documentation like so:
import pandas as pd
df = pd.read_csv("test.csv", sep=",")
df2 = df.dropna(axis=0, how='any')
for index in range(len(df)):
    with open(df2["text_number"][index] + '.txt', 'w') as output:
        output.write(df2["text"][index])
However, the same issue persists - a txt file is created for the first row, but a new error message is returned for the next row: KeyError: 1.
Any suggestions? All assistance greatly appreciated.
The issue here is that dropna keeps the original index labels, so after dropping rows the positional range 0..len(df)-1 no longer matches df2's index; that is what raises KeyError: 1. For your use case, just iterate over the rows of the DataFrame and skip the NaN entries:
for t in df.itertuples():
    if pd.notna(t.text_number):  # NaN is truthy in Python, so test explicitly
        with open(t.text_number + '.txt', 'w') as output:
            output.write(str(t.text))
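Alternatively, dropping the incomplete rows up front avoids the per-row check entirely. A small self-contained sketch, with sample data mirroring the question's CSV:

```python
import pandas as pd

# Sample frame standing in for pd.read_csv("test.csv")
df = pd.DataFrame({
    "user_id": [0, 1, 5],
    "text": ["test text A", None, "test text B"],
    "text_number": ["text_0", None, "text_1"],
})

# Drop rows where either column is NaN, then iterate only the valid rows
for row in df.dropna(subset=["text", "text_number"]).itertuples():
    with open(row.text_number + ".txt", "w") as output:
        output.write(row.text)
```

This writes text_0.txt and text_1.txt and silently skips the empty rows in between.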

How to split a column into multiple columns? [duplicate]

I am importing a .csv file in Python with pandas.
Here is the file format from the .csv :
a1;b1;c1;d1;e1;...
a2;b2;c2;d2;e2;...
.....
Here is how I read it:
from pandas import *
csv_path = "C:...."
data = read_csv(csv_path)
Now when I print the DataFrame I get this:
0 a1;b1;c1;d1;e1;...
1 a2;b2;c2;d2;e2;...
And so on... So I need help to read the file and split the values into columns on the semicolon character ;.
read_csv takes a sep param, in your case just pass sep=';' like so:
data = read_csv(csv_path, sep=';')
The reason it failed in your case is that the default value is ',' so it scrunched up all the columns as a single column entry.
In response to Morris' question above:
"Is there a way to programatically tell if a CSV is separated by , or ; ?"
This will tell you:
import pandas as pd
df_comma = pd.read_csv(your_csv_file_path, nrows=1,sep=",")
df_semi = pd.read_csv(your_csv_file_path, nrows=1, sep=";")
if df_comma.shape[1] > df_semi.shape[1]:
    print("comma delimited")
else:
    print("semicolon delimited")
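If you'd rather not read the file twice, the standard library's csv.Sniffer can guess the delimiter from a sample of the text. A sketch (the sample string here is made up for illustration):

```python
import csv

# First lines of a hypothetical semicolon-separated file
sample = "a1;b1;c1;d1\na2;b2;c2;d2\n"

# Restrict the candidates to the delimiters you expect
dialect = csv.Sniffer().sniff(sample, delimiters=",;")
print(dialect.delimiter)
```

You can then pass the result straight to pandas: pd.read_csv(path, sep=dialect.delimiter).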

Why is the cdc_list getting updated after calling the function read_csv() in total_list?

# Program to combine data from 2 csv files
The cdc_list gets updated after the second call of read_csv.
overall_list = []
def read_csv(filename):
    file_read = open(filename, "r").read()
    file_split = file_read.split("\n")
    string_list = file_split[1:len(file_split)]
    #final_list = []
    for item in string_list:
        int_fields = []
        string_fields = item.split(",")
        string_fields = [int(x) for x in string_fields]
        int_fields.append(string_fields)
        #final_list.append()
        overall_list.append(int_fields)
    return(overall_list)
cdc_list = read_csv("US_births_1994-2003_CDC_NCHS.csv")
print(len(cdc_list)) #3652
total_list = read_csv("US_births_2000-2014_SSA.csv")
print(len(total_list)) #9131
print(len(cdc_list)) #9131
The code you pasted does explain it: overall_list is defined at module level, and read_csv appends to it on every call, so the second call keeps extending the same list. Both cdc_list and total_list are references to that one shared list, which is why len(cdc_list) changes too.
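A minimal fix is to keep the accumulator local to the function, so each call starts from an empty list. A sketch, assuming the files are plain integer CSVs with one header row:

```python
def read_csv(filename):
    """Parse a CSV of integer fields, skipping the header row."""
    with open(filename) as f:
        lines = f.read().strip().split("\n")[1:]
    # A fresh list is built on every call, so no state is shared between calls
    return [[int(x) for x in line.split(",")] for line in lines]
```

With this version, calling read_csv a second time no longer grows the result of the first call.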
However, if all you want to do is merge two CSVs (assuming they both have the same columns), you can use pandas' read_csv, concat, and to_csv to achieve this in three lines (not including imports):
import pandas as pd
# Read the first CSV file into a pandas DataFrame object
df = pd.read_csv("first.csv")
# Read and concatenate the 2nd CSV file (DataFrame.append was removed in pandas 2.0)
df = pd.concat([df, pd.read_csv("second.csv")])
# Write the merged DataFrame object (with both CSVs' data) to file
df.to_csv("merged.csv")

pandas: Split a column on delimiter, and get unique values

I am translating some code from R to python to improve performance, but I am not very familiar with the pandas library.
I have a CSV file that looks like this:
O43657,GO:0005737
A0A087WYV6,GO:0005737
A0A087WZU5,GO:0005737
Q8IZE3,GO:0015630 GO:0005654 GO:0005794
X6RHX1,GO:0015630 GO:0005654 GO:0005794
Q9NSG2,GO:0005654 GO:0005739
I would like to split the second column on a delimiter (here, a space), and get the unique values in this column. In this case, the code should return [GO:0005737, GO:0015630, GO:0005654, GO:0005794, GO:0005739].
In R, I would do this using the following code:
df <- read.csv("data.csv")
unique <- unique(unlist(strsplit(df[,2], " ")))
In python, I have the following code using pandas:
df = pd.read_csv("data.csv")
split = df.iloc[:, 1].str.split(' ')
unique = pd.unique(split)
But this produces the following error:
TypeError: unhashable type: 'list'
How can I get the unique values in a column of a CSV file after splitting on a delimiter in python?
setup
from io import StringIO
import pandas as pd
txt = """O43657,GO:0005737
A0A087WYV6,GO:0005737
A0A087WZU5,GO:0005737
Q8IZE3,GO:0015630 GO:0005654 GO:0005794
X6RHX1,GO:0015630 GO:0005654 GO:0005794
Q9NSG2,GO:0005654 GO:0005739"""
s = pd.read_csv(StringIO(txt), header=None, index_col=0).squeeze("columns")  # the squeeze= kwarg was removed from read_csv in pandas 2.0
solution
pd.unique(s.str.split(expand=True).stack())
array(['GO:0005737', 'GO:0015630', 'GO:0005654', 'GO:0005794', 'GO:0005739'], dtype=object)
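The same result can also be had with a plain set built over the split values, without the stack step. A self-contained sketch using the question's sample data:

```python
from io import StringIO
import pandas as pd

txt = """O43657,GO:0005737
Q8IZE3,GO:0015630 GO:0005654 GO:0005794
Q9NSG2,GO:0005654 GO:0005739"""

df = pd.read_csv(StringIO(txt), header=None, names=["protein", "terms"])
# Split each cell on whitespace and collect the distinct terms
unique_terms = sorted({term for cell in df["terms"] for term in cell.split()})
print(unique_terms)
```

This mirrors the R unique(unlist(strsplit(...))) idiom one term at a time.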
