I have a text file:
sample value1 value2
A 0.1212 0.2354
B 0.23493 1.3442
I import it:
import numpy as np
import pandas as pd

with open('file.txt', 'r') as fo:
    notes = next(fo)
    headers, *raw_data = [row.strip('\r\n').split('\t') for row in fo]  # get column headers and data rows
names = [row[0] for row in raw_data]  # extract the first column (sample names)
data = np.array([row[1:] for row in raw_data], dtype=float)  # drop the first column, keep only the numbers
If I then convert it:
s = pd.DataFrame(data,index=names,columns=headers[1:])
the data is recognized as floats. I could get the sample names back as a column with s = s.reset_index().
If I do
s = pd.DataFrame(raw_data,columns=headers)
the floats are objects and I cannot perform standard calculations.
How would you make the DataFrame? Is it better to import the data as a dict?
BTW, I am using Python 3.3.
You can parse your data file directly into a DataFrame as follows:
df = pd.read_csv('file.txt', sep='\t', index_col='sample')
Which will give you:
value1 value2
sample
A 0.12120 0.2354
B 0.23493 1.3442
[2 rows x 2 columns]
Then, you can do your computations.
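For example, a couple of quick checks you might run on the resulting frame (just a sketch; the column names are the ones from the sample file above):
import pandas as pd

df = pd.read_csv('file.txt', sep='\t', index_col='sample')

print(df.dtypes)                    # value1 and value2 are parsed as float64
print(df['value1'] + df['value2'])  # element-wise arithmetic works directly
print(df.mean())                    # column means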
To parse such a file, use the pandas read_csv function.
Below is a minimal example showing the use of read_csv with the parameter delim_whitespace set to True.
import pandas as pd
from StringIO import StringIO  # Python 2
# or
from io import StringIO       # Python 3
data = \
"""sample value1 value2
A 0.1212 0.2354
B 0.23493 1.3442"""
# Creation of the dataframe
df = pd.read_csv(StringIO(data), delim_whitespace=True)
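If, as in the question, you want the sample names as the index (with the option to move them back to a column later), a short follow-up sketch:
df = df.set_index('sample')   # use the sample names as the row index
print(df.dtypes)              # value1 and value2 are floats, so calculations work
df = df.reset_index()         # move the sample names back into a regular column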
Related
I am working with the bioservices package in Python, and I want to take the output of this function and put it into a DataFrame using pandas:
from bioservices import UniProt
u = UniProt(verbose=False)
d = u.search("yourlist:M20211203F248CABF64506F29A91F8037F07B67D133A278O", frmt="tab", limit=5,
columns="id, entry name")
print(d)
This is the result I am getting, almost like a neat little table.
The problem, however, is that I cannot work with the data in this form, and I want to put it into a DataFrame using pandas.
Trying the code below does not work; it only returns the error "ValueError: DataFrame constructor not properly called".
import pandas as pd
df = pd.DataFrame(columns= ['Entry','Entry name'],
data=d)
print(df)
Use pd.read_csv, after encapsulating your output in a StringIO (to present a file-like interface):
import io
import pandas as pd
data = 'Entry\tEntry name\na\t1\nb\t2'
io_data = io.StringIO(data)
df = pd.read_csv(io_data, sep='\t')
print(df)
The output is a dataframe:
Entry Entry name
0 a 1
1 b 2
Applied to your sample data:
from bioservices import UniProt
import io
import pandas as pd
u = UniProt(verbose=False)
d = u.search("yourlist:M20211203F248CABF64506F29A91F8037F07B67D133A278O", frmt="tab", limit=5,
columns="id, entry name")
#print(d)
df = pd.read_csv(io.StringIO(d), sep='\t')
print(df)
Entry Entry name
0 Q8TAS1 UHMK1_HUMAN
1 P35916 VGFR3_HUMAN
2 Q96SB4 SRPK1_HUMAN
3 Q6P3W7 SCYL2_HUMAN
4 Q9UKI8 TLK1_HUMAN
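For context, the original attempt fails because u.search(..., frmt="tab") returns the whole table as one plain string, and the DataFrame constructor does not parse strings; wrapping the string in io.StringIO gives read_csv a file-like object it can parse. A minimal sketch of the difference (the short tab-separated string below is just a stand-in for the real UniProt output):
import io
import pandas as pd

d = 'Entry\tEntry name\nQ8TAS1\tUHMK1_HUMAN'  # stand-in for the string returned by u.search()

# pd.DataFrame(data=d) would raise "ValueError: DataFrame constructor not properly called"
# because the constructor expects an array, dict, or iterable of rows, not a single string.
df = pd.read_csv(io.StringIO(d), sep='\t')    # parse the string instead
print(df)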
I would like to create a pandas DataFrame out of a list variable.
With pd.DataFrame() I am not able to declare a delimiter, which leads to just one column per list entry.
If I use pd.read_csv() instead, I of course receive the following error:
ValueError: Invalid file path or buffer object type: <class 'list'>
Is there a way to use pd.read_csv() with my list, without first saving the list to a csv file and reading it back in a second step?
I also tried pd.read_table(), which also needs a file or buffer object.
Example data (separated by tabs):
Col1 Col2 Col3
12 Info1 34.1
15 Info4 674.1
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
Current workaround:
with open(f'{filepath}tmp.csv', 'w', encoding='UTF8') as f:
    [f.write(line + "\n") for line in consolidated_file]
df = pd.read_csv(f'{filepath}tmp.csv', sep='\t', index_col=1)
import pandas as pd
df = pd.DataFrame([x.split('\t') for x in test])
print(df)
If you want the first row to become the header, then:
df.columns = df.iloc[0]
df = df[1:]
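One caveat with this approach: every cell is still a string after the split, so numeric columns have to be converted explicitly. A small follow-up, assuming the column names from the sample data:
df['Col1'] = pd.to_numeric(df['Col1'])   # '12', '15' -> integers
df['Col3'] = pd.to_numeric(df['Col3'])   # '34.1', '674.1' -> floats
print(df.dtypes)                         # Col2 stays object (text)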
It seems simpler to convert it to a nested list, as in the other answer:
import pandas as pd
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
data = [line.split('\t') for line in test]
df = pd.DataFrame(data[1:], columns=data[0])
But you can also convert it back to a single string (or get it directly from a file, socket, or network as a single string) and then use io.BytesIO or io.StringIO to simulate a file in memory.
import pandas as pd
import io
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
single_string = "\n".join(test)
file_like_object = io.StringIO(single_string)
df = pd.read_csv(file_like_object, sep='\t')
or, shorter:
df = pd.read_csv(io.StringIO("\n".join(test)), sep='\t')
This method is useful when you get data from a network (socket, web API) as a single string.
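A nice side effect of going through read_csv is that numeric columns are parsed automatically, which the nested-list route does not do. A quick check with the same test list:
import io
import pandas as pd

test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1", "15\tInfo4\t674.1"]
df = pd.read_csv(io.StringIO("\n".join(test)), sep='\t')
print(df.dtypes)   # Col1 int64, Col2 object, Col3 float64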
I have a tsv file containing an array column, which I read using read_csv().
The dtype of the column is shown as object. How do I read it and access it as an array?
For example:
df=
id values
1 [0,1,0,3,5]
2 [0,0,2,3,4]
3 [1,1,0,2,3]
4 [2,4,0,3,5]
5 [3,5,0,3,5]
Currently I am unpacking it as below:
for index, row in df.iterrows():
    string = row['col2']
    string = string.replace('[', "")
    string = string.replace(']', "")
    v1, v2, v3, v4, v5 = string.split(",")
    v1 = int(v1)
    v2 = int(v2)
    v3 = int(v3)
    v4 = int(v4)
    v5 = int(v5)
Is there any alternative to this?
I want to do this because I want to create another column in the dataframe containing the average of the values.
Adding additional details:
My tsv file looks as below:
id values
1 [0,1,0,3,5]
2 [0,0,2,3,4]
3 [1,1,0,2,3]
4 [2,4,0,3,5]
5 [3,5,0,3,5]
I am reading the tsv file as follows:
df=pd.read_csv('tsv_file_name.tsv',sep='\t', header=0)
You can use json to simplify your parsing:
import json
df['col2'] = df.col2.apply(lambda t: json.loads(t))
Edit: following your comment, getting the average is easy:
import numpy as np

# using numpy
df['col2_mean'] = df.col2.apply(lambda t: np.array(t).mean())
# by hand
df['col2_mean'] = df.col2.apply(lambda t: sum(t) / len(t))
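As a side note, the parsing can also happen while reading the file by passing converters to read_csv (a sketch, assuming the column is named col2 as in the code above):
import json
import pandas as pd

# parse the bracketed lists into real Python lists during the read
df = pd.read_csv('tsv_file_name.tsv', sep='\t', header=0,
                 converters={'col2': json.loads})
df['col2_mean'] = df['col2'].apply(lambda t: sum(t) / len(t))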
import csv

with open('myfile.tsv') as tsvfile:
    reader = csv.reader(tsvfile, delimiter='\t')
    for line in reader:
        ...
OR
from pandas import DataFrame
df = DataFrame.from_csv("myfile.tsv", sep="\t")
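Note that DataFrame.from_csv has since been deprecated and removed from pandas; with current versions the rough equivalent is:
import pandas as pd

df = pd.read_csv("myfile.tsv", sep="\t", index_col=0)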
I am dealing with a csv file that contains three columns and three rows of numeric data. The csv file simply looks like the following:
Colum1,Colum2,Colum3
1,2,3
1,2,3
1,2,3
My question is how to write Python code that takes a single value from one of the columns and performs a specific operation. For example, let's say I want to take the first value in 'Colum1' and subtract it from the sum of all the values in the column.
Here is my attempt:
import csv

f = open('columns.csv')
rows = csv.DictReader(f)
value_of_single_row = 0.0
for i in rows:
    value_of_single_Row += float(i)  # trying to isolate a single value here!
print value_of_single_row - sum(float(r['Colum1']) for r in rows)
f.close()
Based on the code you provided, I suggest you take a look at the docs to see the preferred approach for reading through a csv file. Take a look here:
How to use CsvReader
With that being said, you can modify the beginning of your code slightly to this:
import csv

with open('data.csv', 'rb') as f:
    rows = csv.DictReader(f)
    for row in rows:
        # perform operation per row
From there you now have access to each row.
This should give you what you need to do proper row-by-row operations.
What I suggest you do is play around with printing out your rows to see what your data looks like. You will see that each row is output as a dictionary.
So as you go through each row, you can simply do something like this:
for row in rows:
    row['Colum1']  # or row.get('Colum1')

    # to do some math to add everything in Colum1
    s += float(row['Colum1'])
So all of that will look like this:
import csv

s = 0
with open('data.csv', 'rb') as f:
    rows = csv.DictReader(f)
    for row in rows:
        s += float(row['Colum1'])
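To get the exact number the question asks for (the first value of Colum1 subtracted from the column sum), a small extension of the same loop, written here for Python 3:
import csv

total = 0.0
first_value = None
with open('data.csv', 'r', newline='') as f:
    rows = csv.DictReader(f)
    for row in rows:
        value = float(row['Colum1'])
        if first_value is None:
            first_value = value   # remember the first value in Colum1
        total += value
print(total - first_value)        # column sum minus the first value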
You can do pretty much all of this with pandas:
import pandas as pd
Location = r'path/test.csv'
df = pd.read_csv(Location, names=['Colum1','Colum2','Colum3'])
df = df[1:]  # drop the header row, which was read in as data because names= was supplied
print df
df.loc[1, 'Colum1'] = int(df.loc[1, 'Colum1']) + 5
print df
You can write back to your csv using df.to_csv('File path', index=False, header=True). Passing header=True will add the headers back in.
To do this more along the lines of what you have, you can do it like this:
import csv

Location = r'C:/Users/tnabrelsfo/Documents/Programs/Stack/test.csv'
data = []
with open(Location, 'r') as f:
    for line in f:
        data.append(line.replace('\n', '').replace(' ', '').split(','))
data = data[1:]
print data
data[1][1] = 5
print data
It will read in each row and cut out the column names, and then you can modify the values by index.
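And for the specific calculation from the question, using the nested list built above (first value of the first column subtracted from its sum), a quick sketch:
col1 = [float(row[0]) for row in data]   # first column as floats
print(sum(col1) - col1[0])               # column sum minus the first value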
So here is my simple solution using the pandas library. Suppose we have a sample.csv file:
import pandas as pd
df = pd.read_csv('sample.csv') # df is now a DataFrame
df['Colum1'] = df['Colum1'] - df['Colum1'].sum() # subtract the column sum from every value in the column
print df
df.to_csv('sample.csv', index=False) # save dataframe back to csv file
You can also use the map function to apply an operation to one column, for example:
import pandas as pd
df = pd.read_csv('sample.csv')
col_sum = df['Colum1'].sum() # sum of the first column
df['Colum1'] = df['Colum1'].map(lambda x: x - col_sum)
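And if you only want the single number described in the question (the column sum minus its first value) rather than transforming the whole column, it is a one-liner:
result = df['Colum1'].sum() - df['Colum1'].iloc[0]
print(result)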
How do I prevent Python from writing objects to a csv file in a different format than the original? For example, I have a list object such as the following:
row = ['APR16', '100.00000']
I want to write this row as is; however, when I use the writerow function of the csv writer, it ends up in the csv file as 16-Apr and just 10. I want to keep the original formatting.
EDIT:
Here is the code:
import pandas as pd

dates = ['APR16', 'MAY16', 'JUN16']
numbers = [100.00000, 200.00000, 300.00000]

for i in range(3):
    row = []
    row.append(dates[i])
    row.append(numbers[i])
    prow = pd.DataFrame(row)
    prow.to_csv('test.csv', index=False, header=False)
And result:
Using pandas:
import pandas as pd
dates = ['APR16', 'MAY16', 'JUN16']
numbers = [100.00000, 200.00000, 300.00000]
data = list(zip(dates, numbers))
fd = pd.DataFrame(data)
fd.to_csv('test.csv', index=False, header=False) # csv-file
fd.to_excel("test.xls", header=False,index=False) # or xls-file
Result in my terminal:
➜ ~ cat test.csv
APR16
100.00000
Result in LibreOffice:
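If the goal is to keep the exact text 100.00000 in the file, a sketch of two options (not from the original answer): pass float_format to to_csv, or store the numbers as strings so they are written verbatim. Note that a spreadsheet program may still reinterpret APR16 as a date when it opens the csv; that happens on import in the spreadsheet, not when Python writes the file.
import pandas as pd

dates = ['APR16', 'MAY16', 'JUN16']
numbers = [100.00000, 200.00000, 300.00000]

fd = pd.DataFrame(list(zip(dates, numbers)))
fd.to_csv('test.csv', index=False, header=False, float_format='%.5f')  # keep five decimal places

# alternatively, format the numbers as strings so they are written exactly as shown
fd_str = pd.DataFrame(list(zip(dates, [f'{n:.5f}' for n in numbers])))
fd_str.to_csv('test_str.csv', index=False, header=False)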