How to join multiple tab files using Python - python

I have multiple tab files with the same name in different folders, like this:
F:/RNASEQ2019/ballgown/abundance_est/RBRN02.sorted.bam\t_data.ctab
F:/RNASEQ2019/ballgown/abundance_est/RBRN151.sorted.bam\t_data.ctab
Each file has 5-6 common columns, and I want to pick two of them: Gene and FPKM. The Gene column is the same in every file; only the FPKM values differ. I want to pick the Gene and FPKM columns from each file and build a master file like this:
Gene RBRN02 RBRN03 RBRN151
gene1 67 699 88
gene2 66 77 89
I did this:
import os
import pandas as pd

path = "F:/RNASEQ2019/ballgown/abundance_est/"
files = []
# r=root, d=directories, f=files
for r, d, f in os.walk(path):
    for file in f:
        if 't_data.ctab' in file:
            files.append(os.path.join(r, file))

df = []
for f in files:
    df.append(pd.read_csv(f, sep="\t"))
But this is not doing a side-by-side merge. How do I get the format above? Please help.

Using datatable, you can read multiple files at once by specifying the pattern:
import datatable as dt
dfs = dt.fread("F:/RNASEQ2019/ballgown/abundance_est/**/t_data.ctab",
               columns={"Gene", "FPKM"})
If there are multiple files, this will produce a dictionary where each key is the name of the file, and the corresponding value is the content of that file, parsed into a frame. The optional columns parameter limits which columns you want to read.
In your case it seems like you want to rename the columns based on the name of the file where it came from, so you may do something like this:
import re

frames = []
for filename, frame in dfs.items():
    mm = re.search(r"(\w+)\.sorted\.bam", filename)
    frame.names = {"FPKM": mm.group(1)}
    frames.append(frame)
In the end, you can cbind the list of frames:
df = dt.cbind(frames)
If you need to work with a pandas dataframe, you can convert easily: df.to_pandas().
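For instance, a small hedged follow-up, assuming the combined table should also end up on disk as a tab-separated master file (the output filename here is made up):
master = df.to_pandas()  # datatable Frame -> pandas DataFrame
master.to_csv("master_fpkm.tsv", sep="\t", index=False)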

IIUC, you can get your desired result with a simple list comprehension:
dfs = [pd.read_csv(f,sep='\t') for f in files]
df = pd.concat(dfs)
print(df)
Or as a one-liner:
df = pd.concat([pd.read_csv(f,sep='\t') for f in files])

How about reading each file in a separate data frame and then merging them?
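For example, a minimal sketch of that idea, assuming every t_data.ctab really has columns named Gene and FPKM and that the sample name (e.g. RBRN02) can be pulled out of the enclosing folder name:
import os
import re
from functools import reduce

import pandas as pd

path = "F:/RNASEQ2019/ballgown/abundance_est/"
frames = []
for r, d, f in os.walk(path):
    for file in f:
        if file == 't_data.ctab':
            full = os.path.join(r, file)
            # pull the sample id (e.g. RBRN02) out of the path
            m = re.search(r"(\w+)\.sorted\.bam", full)
            sample = m.group(1) if m else full
            # keep only the two columns of interest and rename FPKM to the sample id
            df = pd.read_csv(full, sep="\t", usecols=["Gene", "FPKM"])
            frames.append(df.rename(columns={"FPKM": sample}))

# merge the frames side by side on the shared Gene column
master = reduce(lambda left, right: left.merge(right, on="Gene", how="outer"), frames)
master.to_csv("master_fpkm.tsv", sep="\t", index=False)
Each per-file frame contributes one renamed FPKM column, so the final table ends up in the Gene / RBRN02 / RBRN03 / ... layout asked for above.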

Related

Merge csv files based on file names and suffix in Python

First time poster and fairly new to Python here. I have a collection of +1,7000 csv files with 2 columns each. The number and labels of the rows are the same in every file. The files are named with a specific format. For example:
Species_1_OrderA_1.csv
Species_1_OrderA_2.csv
Species_1_OrderA_3.csv
Species_10_OrderB_1.csv
Species_10_OrderB_2.csv
Each imported dataframe is formatted like so:
TreeID Species_1_OrderA_2
0 Bu2_1201_1992 0
1 Bu3_1201_1998 0
2 Bu4_1201_2000 0
3 Bu5_1201_2002 0
4 Bu6_1201_2004 0
.. ... ...
307 Fi141_16101_2004 0
308 Fi142_16101_2006 0
309 Fi143_16101_2008 0
310 Fi144_16101_2010 0
311 Fi147_16101_2015 0
I would like to join the files that correspond to the same species, based on the first column. So, in the end, I would get the files Species_1_OrderA.csv and Species_10_OrderB.csv. Please note that all the species do not necessarily have the same number of files.
This is what I have tried so far.
import os
import glob
import pandas as pd

# Importing csv files from directory
path = '.'
extension = 'csv'
os.chdir(path)
files = glob.glob('*.{}'.format(extension))

# Create a dictionary to loop through each file to read its contents and create a dataframe
file_dict = {}
for file in files:
    key = file
    df = pd.read_csv(file)
    file_dict[key] = df

# Extract the name of each dataframe, convert to a list and extract the relevant
# information (before the 3rd underscore). Compare each of these values to the next and
# if they are the same, append them to a list. This list (in my head, at least) will help
# me merge them using pandas.concat
keys_list = list(file_dict.keys())
group = ''
for line in keys_list:
    type = "_".join(line.split("_")[:3])
    for i in range(len(type) - 1):
        if type[i] == type[i+1]:
            group.append(line[keys_list])
print(group)
However, the last bit is not even working, and at this point, I am not sure this is the best way to deal with my problem. Any pointers on how to solve this will be really appreciated.
--- EDIT:
This is the expected output for the files per species. Ideally, I would remove the rows that have zeros in them, but that can easily be done with awk.
TreeID,Species_1_OrderA_0,Species_1_OrderA_1,Species_1_OrderA_2
Bu2_1201_1992,0,0,0
Bu3_1201_1998,0,0,0
Bu4_1201_2000,0,0,0
Bu5_1201_2002,0,0,0
Bu6_1201_2004,0,0,0
Bu7_1201_2006,0,0,0
Bu8_1201_2008,0,0,0
Bu9_1201_2010,0,0,0
Bu10_1201_2012,0,0,0
Bu11_1201_2014,0,0,0
Bu14_1201_2016,0,0,0
Bu16_1201_2018,0,0,0
Bu18_3103_1989,0,0,0
Bu22_3103_1999,0,0,0
Bu23_3103_2001,0,0,0
Bu24_3103_2003,0,0,0
...
Fi141_16101_2004,0,0,10
Fi142_16101_2006,0,4,0
Fi143_16101_2008,0,0,0
Fi144_16101_2010,2,0,0
Fi147_16101_2015,0,7,0
Try it like this:
import os
import pandas as pd

path = "C:/Users/username"
files = [file for file in os.listdir(path) if file.endswith(".csv")]

dfs = dict()
for file in files:
    # everything before the final _ is the species name
    species = file.rsplit("_", maxsplit=1)[0]
    # read the csv to a dataframe
    df = pd.read_csv(os.path.join(path, file))
    # if you don't have a df for a species, create a new key
    if species not in dfs:
        dfs[species] = df
    # else, merge current df to existing df on the TreeID
    else:
        dfs[species] = pd.merge(dfs[species], df, on="TreeID", how="outer")

# write all dfs to their own csv files
for key in dfs:
    dfs[key].to_csv(f"{key}.csv")
If your goal is to concatenate all the csv's for each species-order into a consolidated csv, this is one approach. I haven't tested it so there might be a few errors. The idea is to first use glob, as you're doing, to make a dict of file_paths so that all the file_paths of the same species-order are grouped together. Then for each species-order read in all the data into a single table in memory and then write out to a consolidated file.
import pandas as pd
import glob

# Create a dictionary keyed by species_order, valued by a list of files
# i.e. file_paths_by_species_order['Species_10_OrderB'] = ['Species_10_OrderB_1.csv', 'Species_10_OrderB_2.csv']
file_paths_by_species_order = {}
for file_path in glob.glob('*.csv'):
    # join the first three underscore-separated pieces so the key is a string, e.g. 'Species_10_OrderB'
    species_order = "_".join(file_path.split("_")[:3])
    if species_order not in file_paths_by_species_order:
        file_paths_by_species_order[species_order] = [file_path]
    else:
        file_paths_by_species_order[species_order].append(file_path)

# For each species_order, concat all files and save the info into a new csv
for species_order, file_paths in file_paths_by_species_order.items():
    df = pd.concat(pd.read_csv(file_path) for file_path in file_paths)
    df.to_csv('consolidated_{}.csv'.format(species_order))
There are definitely improvements that can be made, such as using collections.defaultdict and writing one file at a time out to the consolidated file instead of reading them all into memory; a sketch of that follows.
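A hedged sketch of that variant, grouping with collections.defaultdict and appending each file to the consolidated csv as it is read, so only one file is in memory at a time (the grouping logic is assumed to be the same as above):
import glob
from collections import defaultdict

import pandas as pd

# group file paths by the part of the name before the trailing _<n>.csv
file_paths_by_species_order = defaultdict(list)
for file_path in glob.glob('*.csv'):
    species_order = "_".join(file_path.split("_")[:3])
    file_paths_by_species_order[species_order].append(file_path)

for species_order, file_paths in file_paths_by_species_order.items():
    out_path = 'consolidated_{}.csv'.format(species_order)
    for i, file_path in enumerate(file_paths):
        df = pd.read_csv(file_path)
        # write the header only for the first file, then append the rest
        df.to_csv(out_path, mode='w' if i == 0 else 'a', header=(i == 0), index=False)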

Reading, calculating and grouping data from several files with pandas

I'm trying to make a small script to automate something at my work. I have a ton of text files that I need to group into a large dataframe to plot afterwards.
The files have this general structure:
5.013130280 4258.0
5.039390845 4198.0
... ...
49.944957015 858.0
49.971217580 833.0
What I want to do is:
Keep the first column as the first column of the final dataframe (these values are the same for all files)
Extract the second column of each file, normalize it, and group everything together as the rest of the dataframe
Use the file name as the header of each extracted column, to use later when plotting the data
Right now I was only able to do step 2; here is the code:
import os
import glob
import pandas as pd

path = "mypath"
extension = 'xy'
os.chdir(path)
dir = os.listdir(path)
files = glob.glob(path + "/*.xy")

li = []
for file in files:
    df = pd.read_csv(file, names=('angle', 'int'), delim_whitespace=True)
    df['int_n'] = df['int'] / df['int'].max()
    li.append(df['int_n'])
norm_files = pd.concat(li, axis=1)
So is there any way to solve this in an easy way?
Assuming that all of your files have exactly the same length (number of rows) and the same angle values, you don't really need to make a bunch of dataframes and concatenate them all together.
If I'm understanding correctly, you just want a final dataframe with a new column for each file (named with the filename) holding the 'int' data, normalized using only the values from that specific file.
On the first file, you can create a dataframe to use as your final output, then just add columns to it for each subsequent file:
for idx, file in enumerate(files):
    df = pd.read_csv(file, names=('angle', 'int'), delim_whitespace=True)
    # get the filename by splitting the full path and removing the last 3 characters (file extension)
    filename = file.split('\\')[-1][:-3]
    # use the filename itself as the new column name
    df[filename] = df['int'] / df['int'].max()
    if idx == 0:
        # create the norm_files output dataframe on the first file
        norm_files = df[['angle', filename]]
    else:
        # add a column to norm_files for all subsequent files
        norm_files[filename] = df[filename]
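An equivalent hedged sketch that avoids the idx bookkeeping: index every normalized column by angle and let pd.concat line them up (this assumes the angle values really are identical across files):
import os
import glob

import pandas as pd

columns = {}
for file in glob.glob("mypath/*.xy"):
    df = pd.read_csv(file, names=('angle', 'int'), delim_whitespace=True)
    name = os.path.splitext(os.path.basename(file))[0]  # filename without extension
    # normalized intensity, indexed by angle so concat aligns the rows
    columns[name] = (df['int'] / df['int'].max()).set_axis(df['angle'])

norm_files = pd.concat(columns, axis=1).rename_axis('angle').reset_index()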
You can add a calculated column quite simply, although I'm not sure if that's what you're asking.
li_norm = []
for file in files:
    df = pd.read_csv(file, names=('angle', 'int'), delim_whitespace=True)
    # name the normalized column after the file (without its extension)
    name = file.split('.')[0]
    df[name] = df['int'] / df['int'].max()
    li_norm.append(df[name])

Read multiple CSV files in pairs of two, then compare each pair

I have a folder with multiple CSV files named like this:
CINinfo_2019-08-08_rev1, CINinfo_2019-08-08_rev2, CINinfo_2019-08-08_rev3, CINinfo_2019-08-08_rev4. I have about 70 files in one folder. My intention is to automate this so that I can read the files in pairs of two, compare each pair for differences, and get the result as one combined table. Currently I am reading them manually and comparing the differences; this is the code:
import pandas as pd
import numpy as np

df1 = pd.read_csv("CINinfo_2019-08-08_rev1.csv")
df2 = pd.read_csv("CINinfo_2019-08-08_rev2.csv")

# cell-by-cell comparison of the two revisions
comparison_values = df1.values == df2.values
rows, cols = np.where(comparison_values == False)
for item in zip(rows, cols):
    df1.iloc[item[0], item[1]] = '{} --> {}'.format(df1.iloc[item[0], item[1]], df2.iloc[item[0], item[1]])
This process is very tedious, especially since I have other folders with CSV files that I also need to read. Note how the CSV files are named: they all share the same prefix (CINinfo_2019-08-08_), but the suffix (rev) carries an incremental number from 1 to 70. I need to read the files in pairs of consecutive revisions: 1 and 2, 2 and 3, 3 and 4, and so on, i.e. compare CINinfo_2019-08-08_rev1 with CINinfo_2019-08-08_rev2, then CINinfo_2019-08-08_rev2 with CINinfo_2019-08-08_rev3, and so on. How can I automate reading these files in pairs, compare each pair for differences, and end up with one joined table?
You could try something like this:
import os, re
import pandas as pd
import numpy as np

# your directory path here
path = r'path'

# get all csv files
file_, pat = [], re.compile(r'\.csv$')
for root, dirs, files in os.walk(path):
    file_.extend(os.path.join(root, f) for f in files if pat.search(f))

# you may want to filter here, this line is just an example
# filter for all csv files containing 'rev'
file_ = [f for f in file_ if 'rev' in f]

# loop through the files of interest, pairing each file with the previous one
for (idx, ff) in enumerate(file_[1:]):
    df1 = pd.read_csv(ff)
    df2 = pd.read_csv(file_[idx])
    comparison_values = df1.values == df2.values
    rows, cols = np.where(comparison_values == False)
    for item in zip(rows, cols):
        # do calculation
        ...
This answer is not all-inclusive, but hopefully it gives you a possible approach. You may need to adjust the filtering, or possibly sort the file list. I have not shown how to add the results to a final table, but the best thing to do is to create a temporary DataFrame, assign the values from each file pair to it, and then use pd.concat to add it to a final DataFrame that will contain all results; a sketch of that step follows.
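A hedged sketch of that consolidation step, assuming the revisions all have the same shape and that an 'old --> new' string per changed cell is the desired output (the output column names here are made up for illustration):
import os

import numpy as np
import pandas as pd

results = []
# file_ is the sorted list of csv paths built above
for idx, ff in enumerate(file_[1:]):
    df_new, df_old = pd.read_csv(ff), pd.read_csv(file_[idx])
    rows, cols = np.where(df_old.values != df_new.values)
    # one temporary frame per pair of consecutive revisions
    temp = pd.DataFrame({
        'pair': '{} vs {}'.format(os.path.basename(file_[idx]), os.path.basename(ff)),
        'row': rows,
        'column': df_old.columns[cols],
        'change': ['{} --> {}'.format(df_old.iat[r, c], df_new.iat[r, c]) for r, c in zip(rows, cols)],
    })
    results.append(temp)

# one joined table with all differences across all pairs
all_diffs = pd.concat(results, ignore_index=True)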

Read multiple csv files into separate dataframes using pandas

I'd like to read two csv files from a particular folder into two separate dataframes.
The two file names are: 23314621_MACI_NAV.CSV and 23314623_MACI_Holding.CSV
The second part of each file name is fixed (MACI_NAV.CSV and MACI_Holding.CSV); however, the first part, which is a number, changes every day.
I tried to read them into two different dataframes like this:
import pandas as pd
import glob

msci_folder = 'N:/Operation/Daily CDS E_Report/CDS/MACI/'
mscifile = glob.glob(msci_folder + "*.csv")
for file in mscifile:
    df_nav = pd.read_csv(file)
    df_holding = pd.read_csv(file)
It seems like both lines are reading the same file; how do I make them read different files (the second file)?
If you want to create a list of DataFrames:
dfs = []
for file in mscifile:
    df = pd.read_csv(file)
    dfs.append(df)
Or use a list comprehension:
dfs = [pd.read_csv(file) for file in mscifile]
print (dfs[0])
print (dfs[1])
Another solution is to create a dictionary of DataFrames, keyed by the last substring after _ in the filename:
from os.path import splitext, basename
dfs = {splitext(basename(fp))[0].split('_')[-1] : pd.read_csv(fp) for fp in mscifile}
print (dfs)
print (dfs['NAV'])
print (dfs['Holding'])

Merging csv columns while checking ID of the first column

I have 4 csv files exported from an e-shop database. I need to merge them by columns, which I could maybe manage on my own, but the problem is matching the right columns.
First file:
"ep_ID","ep_titleCS","ep_titlePL".....
"601","Kancelářská židle šedá",NULL.....
...
Second file:
"pe_photoID","pe_productID","pe_sort"
"459","603","1"
...
Third file:
"epc_productID","epc_categoryID","epc_root"
"2155","72","1"
...
Fourth file:
"ph_ID","ph_titleCS"...
"379","5391132275.jpg"
...
I need to match the rows so that rows with the same "ep_ID" and "epc_productID" are merged together, and likewise rows with the same "ph_ID" and "pe_photoID". I don't really know where to start; hopefully I wrote this understandably.
Update:
I am using:
files = ['produkty.csv', 'prirazenifotek.csv', 'pprirazenikategorii.csv', 'adresyfotek.csv']
dfs = []
for f in files:
    df = pd.read_csv(f, low_memory=False)
    dfs.append(df)

first_and_third = pd.merge(dfs[0], dfs[1], left_on="ep_ID", right_on="pe_photoID")
first_and_third.to_csv('new_filepath.csv', index=False)
OK, this code works, but it does two things differently from what I need:
When there is a row in file one with ID = 1, for example, and in file two there are 5 rows with bID = 1, it creates 5 rows in the final file. I would like one row that holds the values from every row with bID = 1 in file two. Is that possible?
And it seems to be deleting some rows... not sure until I get rid of the "duplicates"...
You can use pandas's merge method to merge the csvs together. In your question you only provide keys between the 1st and 3rd files, and the 2nd and 4th files. Not sure if you want one giant table that has them all together -- if so you will need to find another intermediary key, maybe one you haven't listed(?).
import pandas as pd

files = ['path_to_first_file.csv', 'second_file.csv', 'third_file.csv', 'fourth_file.csv']
dfs = []
for f in files:
    df = pd.read_csv(f)
    dfs.append(df)

first_and_third = dfs[0].merge(dfs[2], left_on='ep_ID', right_on='epc_productID', how='left')
second_and_fourth = dfs[1].merge(dfs[3], left_on='pe_photoID', right_on='ph_ID', how='left')
If you want to save the dataframe back down to a file you can do so:
first_and_third.to_csv('new_filepath.csv', index=False)
index=False assumes you have no index set on the dataframe and don't want the dataframe's row numbers to be included in the final csv.
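As for the update (collapsing the several matching rows into one): one hedged sketch is to aggregate the many-side table before merging, so every ID ends up on exactly one row. The frames and column names below are made-up placeholders for whichever columns your files actually have:
import pandas as pd

# hypothetical stand-ins for file one and file two
one = pd.DataFrame({'ID': [1, 2], 'title': ['chair', 'table']})
two = pd.DataFrame({'bID': [1, 1, 1, 2], 'photo': ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg']})

# collapse file two so every bID becomes exactly one row,
# joining the photo values into a single comma-separated string
two_grouped = two.groupby('bID', as_index=False).agg({'photo': ','.join})

# the merge now produces one row per ID instead of one row per match
merged = one.merge(two_grouped, left_on='ID', right_on='bID', how='left')
print(merged)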
