How to convert a link list to a matrix in Python

My input data looks like this (input.txt):
AGAP2 TCGA-BL-A0C8-01A-11R-A10U-07 66.7328
AGAP2 TCGA-BL-A13I-01A-11R-A13Y-07 186.8366
AGAP3 TCGA-BL-A13J-01A-11R-A10U-07 183.3767
AGAP3 TCGA-BL-A3JM-01A-12R-A21D-07 33.2927
AGAP3 TCGA-BT-A0S7-01A-11R-A10U-07 57.9040
AGAP3 TCGA-BT-A0YX-01A-11R-A10U-07 99.8540
AGAP4 TCGA-BT-A20J-01A-11R-A14Y-07 88.8278
AGAP4 TCGA-BT-A20N-01A-11R-A14Y-07 129.7021
I want output.txt to look like:
TCGA-BL-A0C8-01A-11R-A10U-07 TCGA-BL-A13I-01A-11R-A13Y-07 ...
AGAP2 66.7328 186.8366
AGAP3 0 0

Using pandas: read the file, create a pivot and write it back out as CSV.
import pandas as pd
# only two names are given for three columns, so the first column (the gene) becomes the index
df = pd.read_table("input.txt", names=["x", "y"], sep=r'\s+')
# reset the index first - pivot needs a named column
new = df.reset_index().pivot(index="index", columns="x", values="y")
new.fillna(0, inplace=True)
new.to_csv("output.csv", sep='\t')  # tab separated
Reshaping and Pivot Tables
EDIT: filling empty values
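For reference, here is a minimal end-to-end sketch of the approach above, run on a subset of the sample input (fed in via io.StringIO instead of input.txt so it is self-contained):

```python
import io
import pandas as pd

# A subset of the sample input from the question: gene, sample ID, value
data = io.StringIO(
    "AGAP2 TCGA-BL-A0C8-01A-11R-A10U-07 66.7328\n"
    "AGAP2 TCGA-BL-A13I-01A-11R-A13Y-07 186.8366\n"
    "AGAP3 TCGA-BL-A13J-01A-11R-A10U-07 183.3767\n"
)

# only two names are given for three columns, so the first column
# (the gene) becomes the index
df = pd.read_csv(data, names=["x", "y"], sep=r"\s+")

# reset_index() turns the gene index into a named column to pivot on;
# missing gene/sample combinations become NaN, filled with 0
matrix = df.reset_index().pivot(index="index", columns="x", values="y").fillna(0)
print(matrix)
```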

Python Pandas Dataframe from API JSON Response

I am new to Python. Can I please seek some help from the experts here?
I wish to construct a dataframe from the https://api.cryptowat.ch/markets/summaries JSON response, based on the following filter criteria:
Kraken-listed currency pairs (please take note, there are kraken-futures, and I don't want those)
Currency paired with USD only, i.e. aaveusd, adausd, ...
The ideal dataframe I am looking for is (somehow Excel loads this JSON perfectly, screenshot below):
Dataframe_Excel_Screenshot
resp = requests.get("https://api.cryptowat.ch/markets/summaries")
kraken_assets = resp.json()
df = pd.json_normalize(kraken_assets)
print(df)
Output:
result.binance-us:aaveusd.price.last result.binance-us:aaveusd.price.high ...
0 264.48 267.32 ...
[1 rows x 62688 columns]
When I just paste the link into a browser, the JSON response has double quotes ("), but when I get it via Python code, all double quotes (") are changed to single quotes (') - any idea why? I tried to solve it with json_normalize, but then the response is changed to [1 rows x 62688 columns], and I am not sure how to even go about working with 1 row and 62k columns. I don't know how to extract the exact info in the dataframe format I need (please see the Excel screenshot).
Any help is much appreciated. thank you!
The result key of the JSON is a dict:
load this into a dataframe
decode the columns into products and measures
filter down to the required data
import requests
import pandas as pd
import numpy as np
# load results into a data frame
df = pd.json_normalize(requests.get("https://api.cryptowat.ch/markets/summaries").json()["result"])
# columns are encoded as product and measure. decode columns and transpose into rows that include product and measure
cols = np.array([c.split(".", 1) for c in df.columns]).T
df.columns = pd.MultiIndex.from_arrays(cols, names=["product","measure"])
df = df.T
# finally filter down to required data and structure measures as columns
df.loc[df.index.get_level_values("product").str[:7]=="kraken:"].unstack("measure").droplevel(0,1)
Sample output:
product          price.last  price.high  price.low   price.change.percentage  price.change.absolute  volume       volumeQuote
kraken:aaveaud   347.41      347.41      338.14      0.0274147                9.27                   1.77707      613.281
kraken:aavebtc   0.008154    0.008289    0.007874    0.0219326                0.000175               403.506      3.2797
kraken:aaveeth   0.1327      0.1346      0.1327      -0.00673653              -0.0009                287.113      38.3549
kraken:aaveeur   219.87      226.46      209.07      0.0331751                7.06                   1202.65      259205
kraken:aavegbp   191.55      191.55      179.43      0.030559                 5.68                   6.74476      1238.35
kraken:aaveusd   259.53      267.48      246.64      0.0339841                8.53                   3623.66      929624
kraken:adaaud    1.61792     1.64602     1.563       0.0211692                0.03354                5183.61      8366.21
kraken:adabtc    3.757e-05   3.776e-05   3.673e-05   0.0110334                4.1e-07                252403       9.41614
kraken:adaeth    0.0006108   0.00063     0.0006069   -0.0175326               -1.09e-05              590839       367.706
kraken:adaeur    1.01188     1.03087     0.977345    0.0209986                0.020811               1.99104e+06  1.98693e+06
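The question also asks to keep USD-quoted Kraken pairs only. Here is a hedged sketch of that last filtering step, using a small hand-built frame standing in for the unstacked result above (the values are made up for illustration):

```python
import pandas as pd

# Stand-in for the unstacked result: index is "exchange:pair",
# columns are the measures (values here are illustrative only)
df = pd.DataFrame(
    {"price.last": [259.53, 0.008154, 1.18]},
    index=["kraken:aaveusd", "kraken:aavebtc", "kraken:adausd"],
)

# keep only kraken pairs quoted in USD, as the question requires
usd_only = df[df.index.str.startswith("kraken:") & df.index.str.endswith("usd")]
print(usd_only)
```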
Hello, try the code below. I have understood the structure of the dataset and modified it to get the desired output.
import requests
import pandas as pd
resp = requests.get("https://api.cryptowat.ch/markets/summaries")
a = resp.json()
# creating a DataFrame from key='result'
da = pd.DataFrame(a['result'])
# using transpose to get the required columns and index
da = da.transpose()
# the 'price' column contains a dict which needs to be separate columns in the data frame
db = da['price'].to_dict()
da.drop('price', axis=1, inplace=True)
# initialising a separate data frame for price
z = pd.DataFrame({})
for pair in db.keys():
    row = pd.DataFrame(db[pair], index=[pair])
    z = pd.concat([z, row], axis=0)
da = pd.concat([z, da], axis=1)
da.to_excel('nex.xlsx')

How to read a file in pandas with unfixed whitespace separation?

I have a text file that contains 2 columns of data. They are separated by an unfixed number of whitespaces. I want to load it into a pandas DataFrame.
Example:
306.000000 1.125783
307.000000 0.008101
308.000000 -0.005917
309.000000 0.003784
310.000000 -0.516513
Please note that it also starts with whitespace.
My desired output would be like:
output = {'Wavelength': [306.000000, 307.000000, 308.000000, 309.000000, 310.000000],
          'Reflectance': [1.125783, 0.008101, -0.005917, 0.003784, -0.516513]}
df = pd.DataFrame(data=output)
Use read_csv:
df = pd.read_csv('file.txt', sep=r'\s+', names=['Wavelength', 'Reflectance'], header=None)
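To see why sep=r'\s+' handles both the irregular separators and the leading whitespace, here is a self-contained sketch using io.StringIO in place of file.txt:

```python
import io
import pandas as pd

# The raw text from the question, with leading and irregular whitespace
text = (
    "  306.000000   1.125783\n"
    " 307.000000    0.008101\n"
    "308.000000  -0.005917\n"
)

# sep=r'\s+' treats any run of whitespace as a single delimiter
# and skips the leading spaces on each line
df = pd.read_csv(io.StringIO(text), sep=r"\s+",
                 names=["Wavelength", "Reflectance"], header=None)
print(df)
```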

Pandas not able to merge the file

I am trying to merge two files. I am supplying headers because they are not picked up when I merge the files using concatenate, and I get an error when I try to drop a column:
ValueError: labels ['lh.aparc.a2009s.meancurv'] not contained in axis
Therefore I am trying the method below.
The headers are important because I want to compute the average, mean, etc. on the basis of these headers.
But currently the result file looks like this:
CSV 1 looks like this (CSV 2 looks the same, just with rh):
#!/bin/bash
ls -d */ | sed -e "s/\///g" | grep -v "Results" | grep -v "Output">> subjects.txt;
module unload freesurfer
module load freesurfer/5.3.0
module load python
export SUBJECTS_DIR=/N/u/shrechak/Karst/GENFL_FREESURFER53_KARST_RES
source $FREESURFER_HOME/FreeSurferEnv.sh
aparcstats2table --hemi lh --subjectsfile=subjects.txt --parc aparc.a2009s --meas meancurv --tablefile lh.a2009s.meancurv.txt
aparcstats2table --hemi rh --subjectsfile=subjects.txt --parc aparc.a2009s --meas meancurv --tablefile rh.a2009s.meancurv.txt
for f in *.txt; do
mv "$f" "${f%.txt}.csv"
done
python <<END_OF_PYTHON
import csv
import pandas as pd
names= ["meancurv",
"lh_G_and_S_frontomargin_meancurv",
"lh_G_and_S_occipital_inf_meancurv",
"lh_G_and_S_paracentral_meancurv",
"lh_G_and_S_subcentral_meancurv",
"lh_G_and_S_transv_frontopol_meancurv",
"lh_G_and_S_cingul-ant_meancurv",
"lh_G_and_S_cingul-Mid-Ant_meancurv",
"lh_G_and_S_cingul-Mid-Post_meancurv",
"lh_G_cingul-Post-dorsal_meancurv",
"lh_G_cingul-Post-ventral_meancurv",
"lh_G_cuneus_meancurv",
"lh_G_front_inf-Opercular_meancurv",
"lh_G_front_inf-orbital_meancurv",
"lh_G_front_inf-Triangul_meancurv",
"lh_G_front_middle_meancurv",
"lh_G_front_sup_meancurv",
"lh_G_Ins_lg_and_S_cent_ins_meancurv",
"lh_G_insular_short_meancurv",
"lh_G_occipital_middle_meancurv",
"lh_G_occipital_sup_meancurv",
"lh_G_oc-temp_lat-fusifor_meancurv",
"lh_G_oc-temp_med-Lingual_meancurv",
"lh_G_oc-temp_med-Parahip_meancurv",
"lh_G_orbital_meancurv",
"lh_G_pariet_infoangular_meancurv",
"lh_G_pariet_infSupramar_meancurv",
"lh_G_parietal_sup_meancurv",
"lh_G_postcentral_meancurv",
"lh_G_precentral_meancurv",
"lh_G_precuneus_meancurv",
"lh_G_rectus_meancurv",
"lh_G_subcallosal_meancurv",
"lh_G_temp_sup-G_T_transv_meancurv",
"lh_G_temp_sup-Lateral_meancurv",
"lh_G_temp_sup-Plan_polar_meancurv",
"lh_G_temp_supPlan_tempo_meancurv",
"lh_G_temporal_inf_meancurv",
"lh_G_temporal_middle_meancurv",
"lh_Lat_Fis-ant-Horizont_meancurv",
"lh_Lat_Fis-ant-Vertical_meancurv",
"lh_Lat_Fispost_meancurv",
"lh_Pole_occipital_meancurv",
"lh_Pole_temporal_meancurv",
"lh_S_calcarine_meancurv",
"lh_S_central_meancurv",
"lh_S_cingulMarginalis_meancurv",
"lh_S_circular_insula_ant_meancurv",
"lh_S_circular_insula_inf_meancurv",
"lh_S_circular_insula_sup_meancurv",
"lh_S_collat_transv_ant_meancurv",
"lh_S_collat_transv_post_meancurv",
"lh_S_front_inf_meancurv",
"lh_S_front_middle_meancurv",
"lh_S_front_sup_meancurv",
"lh_S_interm_prim-Jensen_meancurv",
"lh_S_intrapariet_and_P_trans_meancurv",
"lh_S_oc_middle_and_Lunatus_meancurv",
"lh_S_oc_sup_and_transversal_meancurv",
"lh_S_occipital_ant_meancurv",
"lh_S_oc-temp_lat_meancurv",
"lh_S_oc-temp_med_and_Lingual_meancurv",
"lh_S_orbital_lateral_meancurv",
"lh_S_orbital_med-olfact_meancurv",
"lh_S_orbital-H_Shaped_meancurv",
"lh_S_parieto_occipital_meancurv",
"lh_S_pericallosal_meancurv",
"lh_S_postcentral_meancurv",
"lh_S_precentral-inf-part_meancurv",
"lh_S_precentral-sup-part_meancurv",
"lh_S_suborbital_meancurv",
"lh_S_subparietal_meancurv",
"lh_S_temporal_inf_meancurv",
"lh_S_temporal_sup_meancurv",
"lh_S_temporal_transverse_meancurv"]
df1 = pd.read_csv('lh.a2009s.meancurv.csv', header = None, names = names)
names1 = ["meancurv",
"rh_G_and_S_frontomargin_meancurv",
"rh_G_and_S_occipital_inf_meancurv",
"rh_G_and_S_paracentral_meancurv",
"rh_G_and_S_subcentral_meancurv",
"rh_G_and_S_transv_frontopol_meancurv",
"rh_G_and_S_cingul-Ant_meancurv",
"rh_G_and_S_cingul-Mid-Ant_meancurv",
"rh_G_and_S_cingul-Mid-Post_meancurv",
"rh_G_cingul-Post-dorsal_meancurv",
"rh_G_cingul-Post-ventral_meancurv",
"rh_G_cuneus_meancurv",
"rh_G_front_inf-Opercular_meancurv",
"rh_G_front_inf-Orbital_meancurv",
"rh_G_front_inf-Triangul_meancurv",
"rh_G_front_middle_meancurv",
"rh_G_front_sup_meancurv",
"rh_G_Ins_lg_and_S_cent_ins_meancurv",
"rh_G_insular_short_meancurv",
"rh_G_occipital_middle_meancurv",
"rh_G_occipital_sup_meancurv",
"rh_G_oc-temp_lat-fusifor_meancurv",
"rh_G_oc-temp_med-Lingual_meancurv",
"rh_G_oc-temp_med-Parahip_meancurv",
"rh_G_orbital_meancurv",
"rh_G_pariet_inf-Angular_meancurv",
"rh_G_pariet_inf-Supramar_meancurv",
"rh_G_parietal_sup_meancurv",
"rh_G_postcentral_meancurv",
"rh_G_precentral_meancurv",
"rh_G_precuneus_meancurv",
"rh_G_rectus_meancurv",
"rh_G_subcallosal_meancurv",
"rh_G_temp_sup-G_T_transv_meancurv",
"rh_G_temp_sup-Lateral_meancurv",
"rh_G_temp_sup-Plan_polar_meancurv",
"rh_G_temp_sup-Plan_tempo_meancurv",
"rh_G_temporal_inf_meancurv",
"rh_G_temporal_middle_meancurv",
"rh_Lat_Fis-ant-Horizont_meancurv",
"rh_Lat_Fis-ant-Vertical_meancurv",
"rh_Lat_Fis-post_meancurv",
"rh_Pole_occipital_meancurv",
"rh_Pole_temporal_meancurv",
"rh_S_calcarine_meancurv",
"rh_S_central_meancurv",
"rh_S_cingulMarginalis_meancurv",
"rh_S_circular_insula_ant_meancurv",
"rh_S_circular_insula_inf_meancurv",
"rh_S_circular_insula_sup_meancurv",
"rh_S_collat_transv_ant_meancurv",
"rh_S_collat_transv_post_meancurv",
"rh_S_front_inf_meancurv",
"rh_S_front_middle_meancurv",
"rh_S_front_sup_meancurv",
"rh_S_interm_prim-Jensen_meancurv",
"rh_S_intrapariet_and_P_trans_meancurv",
"rh_S_oc_middle_and_Lunatus_meancurv",
"rh_S_oc_sup_and_transversal_meancurv",
"rh_S_occipital_ant_meancurv",
"rh_S_oc-temp_lat_meancurv",
"rh_S_oc-temp_med_and_Lingual_meancurv",
"rh_S_orbital_lateral_meancurv",
"rh_S_orbital_med-olfact_meancurv",
"rh_S_orbital-H_Shaped_meancurv",
"rh_S_parieto_occipital_meancurv",
"rh_S_pericallosal_meancurv",
"rh_S_postcentral_meancurv",
"rh_S_precentral-inf-part_meancurv",
"rh_S_precentral-sup-part_meancurv",
"rh_S_suborbital_meancurv",
"rh_S_subparietal_meancurv",
"rh_S_temporal_inf_meancurv",
"rh_S_temporal_sup_meancurv",
"rh_S_temporal_transverse_meancurv"
]
df2 = pd.read_csv('rh.a2009s.meancurv.csv', header = None, names = names1)
result = pd.merge(df1, df2, on='meancurv', how='outer')
result.to_csv('result.csv')
END_OF_PYTHON
echo "goodbye!";
So you want to skip the first row and only pull the data parts.
Here's an MCVE.
Code:
import io
import pandas as pd
csv1 = io.StringIO(u'''
a,b,c
1,4,7
2,5,8
3,6,9
''')
df = pd.read_csv(csv1, names=['d', 'e', 'f'], skiprows=[1])
print(df)
Output:
d e f
0 1 4 7
1 2 5 8
2 3 6 9
Here's a way you can merge two files together while keeping the headers from one of the files.
Say you're keeping the files in a list files:
files = ['file1.csv', 'file2.csv']  # keep files here
finalDF = pd.DataFrame()  # this is an empty dataframe
for file in files:
    thisDF = pd.read_csv(file)
    finalDF = pd.concat([finalDF, thisDF], ignore_index=True)
Now if you want, try these two lines:
say you want to check the header using a simple head()
print(finalDF.head())
and if you want to write this merged data frame to a csv file
finalDF.to_csv('merged-file.csv', encoding="utf-8", index=False)
Are you trying to skip rows before or after merging? Let me know and I can try helping with that too.
Example:
file1.csv:
,column1,column2,column3,column4,Date,Device,sample_site
2,14888,0.060011931,248084,13.40535464,3/15/2017,DESKTOP,http://www.example1.com
11,1358,0.033212679,40888,7.465099785,3/15/2017,MOBILE,http://www.example2.com
23,130,0.02998155,4336,8.337638376,3/15/2017,TABLET,http://www.example3.com
file2.csv:
,column1,column2,column3,column4,Date,Device,sample_site
35,2685,0.034564882,77680,10.97812822,3/15/2017,DESKTOP,https://www.example4.com
45,280,0.026197605,10688,7.801272455,3/15/2017,MOBILE,https://www.example5.com
54,24,0.022878932,1049,8.202097235,3/15/2017,TABLET,https://www.example6.com
merged-file.csv:
Unnamed: 0,column1,column2,column3,column4,Date,Device,sample_site
2,14888,0.060011931,248084,13.40535464,3/15/2017,DESKTOP,http://www.example1.com
11,1358,0.033212679,40888,7.465099785,3/15/2017,MOBILE,http://www.example2.com
23,130,0.02998155,4336,8.337638376,3/15/2017,TABLET,http://www.example3.com
35,2685,0.034564882,77680,10.97812822,3/15/2017,DESKTOP,https://www.example4.com
45,280,0.026197605,10688,7.801272455,3/15/2017,MOBILE,https://www.example5.com
54,24,0.022878932,1049,8.202097235,3/15/2017,TABLET,https://www.example6.com
Reply:
Are you trying to merge data based on a column? In that case you can concat, or merge with a join based on an axis.
Say for example:
pd.concat([df1, df2])  # add axis and join type if necessary
Here's the documentation to help you understand: merging and concat in pandas
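As a concrete sketch of the column-based merge (mirroring the lh/rh merge on 'meancurv' in the question; the frames and values below are made up for illustration):

```python
import pandas as pd

# Two small frames sharing a 'meancurv' key column, standing in for
# the lh/rh tables in the question (values are illustrative only)
df1 = pd.DataFrame({"meancurv": ["subj1", "subj2"], "lh_val": [0.12, 0.15]})
df2 = pd.DataFrame({"meancurv": ["subj1", "subj3"], "rh_val": [0.11, 0.14]})

# an outer join keeps subjects present in either file;
# the missing side is filled with NaN
result = pd.merge(df1, df2, on="meancurv", how="outer")
print(result)
```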

Parsing data in Excel using python

In Excel, I have to separate the following value from one cell into two:
2016-12-12 (r=0.1)
2016-12-13* (r=0.7)
How do I do that in Python so that in the Excel file, dates and "r=#" will be in different cells? And also, is there a way to automatically remove the "*" sign?
This task is pretty straightforward if you use pandas:
Build a test file:
import pandas as pd
df_out = pd.DataFrame(
    ['2016-12-12 (r=0.1)', '2016-12-13* (r=0.7)'], columns=['data'])
df_out.to_excel('test.xlsx')
Code to convert string:
def convert_date(row):
    return pd.Series([c.strip('*()') for c in row.split()])
Test code:
# read in test file
df_in = pd.read_excel('test.xlsx')
print(df_in)
# build a new dataframe
df_new = df_in['data'].apply(convert_date)
df_new.columns = ['date', 'r']
print(df_new)
# save the dataframe
df_new.to_excel('test2.xlsx')
Results:
data
0 2016-12-12 (r=0.1)
1 2016-12-13* (r=0.7)
date r
0 2016-12-12 r=0.1
1 2016-12-13 r=0.7
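An alternative sketch: the same split can be done in one pass with a vectorised regex via Series.str.extract (the pattern below assumes the dates always look like YYYY-MM-DD and the parenthesised part always has the form r=number):

```python
import pandas as pd

s = pd.Series(["2016-12-12 (r=0.1)", "2016-12-13* (r=0.7)"])

# capture the date (dropping an optional trailing '*') and the
# r=... value inside the parentheses
df_new = s.str.extract(r"(\d{4}-\d{2}-\d{2})\*?\s+\((r=[\d.]+)\)")
df_new.columns = ["date", "r"]
print(df_new)
```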

python numpy csv header in column not row

I have a script which produces a 15x1096 array of data using
np.savetxt("model_concentrations.csv", model_con, header=','.join(sources), delimiter=",")
Each of the 15 rows corresponds to a source of emissions, while each column is 1 day over 3 years. If at all possible I would like to have a 'header' in column 1 which states the emission source. When I use the option header='source1,source2,...', these labels get placed in the first row (as expected), i.e.:
2per 3rd_pvd 3rd_unpvd 4rai_rd 4rai_yd 5rmo 6hea
2.44E+00 2.12E+00 1.76E+00 1.33E+00 6.15E-01 3.26E-01 2.29E+00 ...
1.13E-01 4.21E-02 3.79E-02 2.05E-02 1.51E-02 2.29E-02 2.36E-01 ...
My question is, is there a way to inverse the header so the csv appears like this:
2per 7.77E+00 8.48E-01 ...
3rd_pvd 1.86E-01 3.62E-02 ...
3rd_unpvd 1.04E+00 2.65E-01 ...
4rai_rd 8.68E-02 2.88E-02 ...
4rai_yd 1.94E-01 8.58E-02 ...
5rmo 7.71E-01 1.17E-01 ...
6hea 1.07E+01 2.71E+00 ...
...
Labels for rows and columns are one of the main reasons for the existence of pandas.
import pandas as pd
# Assemble your source labels in a list
sources = ['2per', '3rd_pvd', '3rd_unpvd', '4rai_rd',
           '4rai_yd', '5rmo', '6hea', ...]
# Create a pandas DataFrame wrapping your numpy array
df = pd.DataFrame(model_con, index=sources)
# Saving it to a .csv file writes the index too
df.to_csv('model_concentrations.csv', header=False)
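If adding a pandas dependency is undesirable, the same layout can also be written with plain Python, looping over the rows and prefixing each with its label (the sources list and array below are small made-up stand-ins for the question's 15x1096 data):

```python
import numpy as np

# small illustrative stand-ins for the question's sources and array
sources = ["2per", "3rd_pvd", "3rd_unpvd"]
model_con = np.array([[2.44, 0.113], [2.12, 0.042], [1.76, 0.038]])

# write each row label in column 1, followed by that row's values
with open("model_concentrations.csv", "w") as f:
    for label, row in zip(sources, model_con):
        f.write(label + "," + ",".join(f"{v:.6E}" for v in row) + "\n")
```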
