I'm trying to figure out how to implement an automatic backup file naming/recycling strategy that keeps older backup files, but with decreasing frequency over time. The basic idea is that at most one file is removed whenever a new one is added, but I was not successful implementing this from scratch.
That's why I started to try out the Grandfather-Father-Son pattern, though there is no requirement to stick to it. I started my experiments using a single pool, but I failed more than once, so I started again with this more descriptive approach using four pools, one for each frequency:[1]
import datetime

t = datetime.datetime(2001, 1, 1, 5, 0, 0)  # start at 1st of Jan 2001, at 5:00 am
d = datetime.timedelta(days=1)

days = []
weeks = []
months = []
years = []

def pool_it(t):
    days.append(t)
    if len(days) > 7:  # keep not more than seven daily backups
        del days[0]
    if t.weekday() == 6:
        weeks.append(t)
        if len(weeks) > 5:  # ...not more than 5 weekly backups
            del weeks[0]
    if t.day == 28:
        months.append(t)
        if len(months) > 12:  # ... limit monthly backups
            del months[0]
    if t.day == 28 and t.month == 12:
        years.append(t)
        if len(years) > 10:  # ... limit yearly backups...
            del years[0]

for i in range(4505):
    pool_it(t)
    t += d
no = 0

def print_pool(pool, rt):
    global no
    print("----")
    for i in pool:
        no += 1
        print("{:3} {} {}".format(no, i.strftime("%Y-%m-%d %a"), (i - rt).days))

print_pool(years, t)
print_pool(months, t)
print_pool(weeks, t)
print_pool(days, t)
The output shows that there are duplicates, marked with * and **
----
1 2003-12-28 Sun -3414
2 2004-12-28 Tue -3048
3 2005-12-28 Wed -2683
4 2006-12-28 Thu -2318
5 2007-12-28 Fri -1953
6 2008-12-28 Sun -1587
7 2009-12-28 Mon -1222
8 2010-12-28 Tue -857
9 2011-12-28 Wed -492
10 2012-12-28 Fri -126 *
----
11 2012-05-28 Mon -340
12 2012-06-28 Thu -309
13 2012-07-28 Sat -279
14 2012-08-28 Tue -248
15 2012-09-28 Fri -217
16 2012-10-28 Sun -187
17 2012-11-28 Wed -156
18 2012-12-28 Fri -126 *
19 2013-01-28 Mon -95
20 2013-02-28 Thu -64
21 2013-03-28 Thu -36
22 2013-04-28 Sun -5 **
----
23 2013-03-31 Sun -33
24 2013-04-07 Sun -26
25 2013-04-14 Sun -19
26 2013-04-21 Sun -12
27 2013-04-28 Sun -5 **
----
28 2013-04-26 Fri -7
29 2013-04-27 Sat -6
30 2013-04-28 Sun -5 **
31 2013-04-29 Mon -4
32 2013-04-30 Tue -3
33 2013-05-01 Wed -2
34 2013-05-02 Thu -1
...which is not a big problem. What I'm getting from it is daily backups for the last week, weekly backups for the last month, monthly backups for the last year, and yearly backups for ten years. The number of files is always limited to 10+12+5+7=34.
My ideal solution would
create files with human-readable names including timestamps (e.g. xyz-yyyy-mm-dd.bak)
use only one pool (store/remove files within one folder)
recycle in a targeted way, that is, never delete more than one file per day
(naturally) not contain any duplicates
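To make these requirements concrete, below is a rough, untested sketch of the kind of single-pool keep/expire check I have in mind; the filename pattern xyz-yyyy-mm-dd.bak and the tier limits (7/5/12/10, mirroring the four pools above) are just placeholders:
import datetime

def keep(backup_date: datetime.date, today: datetime.date) -> bool:
    """Decide whether a backup taken on backup_date should still be kept today.

    Mirrors the four pools above as a single predicate:
    any backup from the last 7 days, Sunday backups from the last 5 weeks,
    28th-of-month backups from roughly the last year, Dec-28 backups for ~10 years.
    """
    age = (today - backup_date).days
    if age < 7:
        return True
    if backup_date.weekday() == 6 and age < 5 * 7:
        return True
    if backup_date.day == 28 and age < 366:
        return True
    if backup_date.day == 28 and backup_date.month == 12 and age < 10 * 366:
        return True
    return False

# the date is recovered from the human-readable file name, so one folder is the only state:
# backup_date = datetime.datetime.strptime(name, "xyz-%Y-%m-%d.bak").date()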
Do you have a trivial solution at hand or a suggestion where to learn more about it?
[1] I used Python to better understand/communicate my question, but the question is about the algorithm.
As a committer of pyExpireBackups I can point you to the ExpirationRule implementation of my solution (source below and in the GitHub repo).
See https://wiki.bitplan.com/index.php/PyExpireBackups for the documentation.
An example run would lead to:
keeping 7 files for dayly backup
keeping 6 files for weekly backup
keeping 8 files for monthly backup
keeping 4 files for yearly backup
expiring 269 files dry run
# 1✅: 0.0 days( 5 GB/ 5 GB)→./sql_backup.2022-04-02.tgz
# 2✅: 3.0 days( 5 GB/ 9 GB)→./sql_backup.2022-03-30.tgz
# 3✅: 4.0 days( 5 GB/ 14 GB)→./sql_backup.2022-03-29.tgz
# 4✅: 5.0 days( 5 GB/ 18 GB)→./sql_backup.2022-03-28.tgz
# 5✅: 7.0 days( 5 GB/ 23 GB)→./sql_backup.2022-03-26.tgz
# 6✅: 9.0 days( 5 GB/ 27 GB)→./sql_backup.2022-03-24.tgz
# 7✅: 11.0 days( 5 GB/ 32 GB)→./sql_backup.2022-03-22.tgz
# 8❌: 15.0 days( 5 GB/ 37 GB)→./sql_backup.2022-03-18.tgz
# 9❌: 17.0 days( 5 GB/ 41 GB)→./sql_backup.2022-03-16.tgz
# 10✅: 18.0 days( 5 GB/ 46 GB)→./sql_backup.2022-03-15.tgz
# 11❌: 19.0 days( 5 GB/ 50 GB)→./sql_backup.2022-03-14.tgz
# 12❌: 20.0 days( 5 GB/ 55 GB)→./sql_backup.2022-03-13.tgz
# 13❌: 22.0 days( 5 GB/ 59 GB)→./sql_backup.2022-03-11.tgz
# 14❌: 23.0 days( 5 GB/ 64 GB)→./sql_backup.2022-03-10.tgz
# 15✅: 35.0 days( 4 GB/ 68 GB)→./sql_backup.2022-02-26.tgz
# 16❌: 37.0 days( 4 GB/ 73 GB)→./sql_backup.2022-02-24.tgz
# 17❌: 39.0 days( 4 GB/ 77 GB)→./sql_backup.2022-02-22.tgz
# 18❌: 40.0 days( 5 GB/ 82 GB)→./sql_backup.2022-02-21.tgz
# 19✅: 43.0 days( 4 GB/ 86 GB)→./sql_backup.2022-02-18.tgz
...
class ExpirationRule():
    '''
    an expiration rule keeps files at a certain frequency
    '''
    def __init__(self, name, freq: float, minAmount: int):
        '''
        constructor

        name(str): name of this rule
        freq(float): the frequency in days
        minAmount(int): the minimum number of files to keep around
        '''
        self.name = name
        self.ruleName = name  # will later be changed by a sideEffect in getNextRule e.g. from "week" to "weekly"
        self.freq = freq
        self.minAmount = minAmount
        if minAmount < 0:
            raise Exception(f"{self.minAmount} {self.name} is invalid - {self.name} must be >=0")

    def reset(self, prevFile: BackupFile):
        '''
        reset my state with the given previous file

        Args:
            prevFile: BackupFile - the file to anchor my startAge with
        '''
        self.kept = 0
        if prevFile is None:
            self.startAge = 0
        else:
            self.startAge = prevFile.ageInDays

    def apply(self, file: BackupFile, prevFile: BackupFile, debug: bool) -> bool:
        '''
        apply me to the given file taking the previously kept file prevFile (which might be None) into account

        Args:
            file(BackupFile): the file to apply this rule for
            prevFile(BackupFile): the previous file to potentially take into account
            debug(bool): if True show debug output
        '''
        if prevFile is not None:
            ageDiff = file.ageInDays - prevFile.ageInDays
            keep = ageDiff >= self.freq
        else:
            ageDiff = file.ageInDays - self.startAge
            keep = True
        if keep:
            self.kept += 1
        else:
            file.expire = True
        if debug:
            print(f"Δ {ageDiff}({ageDiff-self.freq}) days for {self.ruleName}({self.freq}) {self.kept}/{self.minAmount}{file}")
        return self.kept >= self.minAmount
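For context, here is a minimal driver sketch showing how rules like these could be chained over a list of backups from newest to oldest. This is not the actual pyExpireBackups API; BackupFile below is a hypothetical stand-in with just the ageInDays attribute and expire flag that ExpirationRule touches:
from dataclasses import dataclass

@dataclass
class BackupFile:
    # hypothetical stand-in for the real pyExpireBackups class; in a real script this
    # must exist before the ExpirationRule class above so its type hints resolve
    name: str
    ageInDays: float
    expire: bool = False

def apply_rules(files, rules, debug=False):
    # files: BackupFile list sorted by ascending ageInDays (newest first)
    # rules: ExpirationRule list ordered daily -> weekly -> monthly -> yearly
    ruleIndex = 0
    rule = rules[ruleIndex]
    rule.reset(None)
    prevFile = None
    for file in files:
        done = rule.apply(file, prevFile, debug)
        if not file.expire:
            prevFile = file  # only kept files anchor the next age difference
        if done and ruleIndex + 1 < len(rules):
            ruleIndex += 1
            rule = rules[ruleIndex]
            rule.reset(prevFile)
    return [f for f in files if f.expire]

rules = [ExpirationRule("daily", 1.0, 7),
         ExpirationRule("weekly", 7.0, 5),
         ExpirationRule("monthly", 30.5, 12),
         ExpirationRule("yearly", 365.25, 10)]
backups = [BackupFile(f"xyz-day{d}.bak", float(d)) for d in range(400)]  # one backup per day
expired = apply_rules(backups, rules)
print(f"{len(expired)} of {len(backups)} backups would expire")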
Related
I am trying to import a CSV with a time column, but when I import it, the time does not show in the correct format. How can I fix the format of the imported times?
import pandas as pd
# load the data from the CSV file
data = pd.read_csv('H_data - Copy.csv')
print (data.head(5))
Result :
0 0 24:39.5
1 1 25:20.4
2 2 25:56.1
3 3 26:36.1
4 4 27:21.0
The CSV looks like this when I copy and paste it (not sure how to upload it here):
time
0 24:39.5
1 25:20.4
2 25:56.1
3 26:36.1
4 27:21.0
5 27:57.1
6 28:34.2
7 29:11.0
8 29:47.6
9 30:27.4
10 31:06.6
11 31:46.9
12 32:22.9
13 32:58.4
14 33:30.3
15 34:13.2
16 34:51.8
17 35:32.8
18 36:04.5
19 36:46.4
20 37:27.0
21 37:58.2
22 38:43.1
23 39:23.5
24 39:54.6
25 40:39.5
26 41:15.1
27 41:55.6
28 42:27.8
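If the values are minutes and seconds (e.g. 24:39.5 meaning 24 min 39.5 s), one option is to parse the column explicitly instead of letting read_csv guess. A sketch, assuming the column is named time as in the sample:
import pandas as pd

data = pd.read_csv('H_data - Copy.csv')

# interpret "24:39.5" as minutes:seconds by prefixing the missing hours part
data['time'] = pd.to_timedelta('00:' + data['time'].astype(str))

# alternatively, keep them as datetimes on a dummy date:
# data['time'] = pd.to_datetime(data['time'], format='%M:%S.%f')

print(data.head(5))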
I worked on calculating churn using a mix of a pandas dataframe of git logs and the git show command for a particular commit, to see where exactly the changes have been made in terms of LOC. However, I was not able to calculate churn based on age, i.e. churn that occurs when an engineer rewrites or deletes their own code that is less than 3 weeks old.
This is how I have done it so far, per commit, for such a dataframe:
git logs dataframe
sha timestamp date author message body age insertion deletion filepath churn merges
1 1 cae635054 Sat Jun 26 14:51:23 2021 -0400 2021-06-26 18:51:23+00:00 Andrew Clark `act`: Resolve to return value of scope function (#21759) When migrating some internal tests I found it annoying that I couldn't -24 days +12:21:32.839997
2 21 cae635054 Sat Jun 26 14:51:23 2021 -0400 2021-06-26 18:51:23+00:00 Andrew Clark `act`: Resolve to return value of scope function (#21759) When migrating some internal tests I found it annoying that I couldn't -24 days +12:21:32.839997 31.0 0.0 packages/react-reconciler/src/__tests__/ReactIsomorphicAct-test.js 31.0
3 22 cae635054 Sat Jun 26 14:51:23 2021 -0400 2021-06-26 18:51:23+00:00 Andrew Clark `act`: Resolve to return value of scope function (#21759) When migrating some internal tests I found it annoying that I couldn't -24 days +12:21:32.839997 1.0 1.0 packages/react-test-renderer/src/ReactTestRenderer.js 0.0
4 23 cae635054 Sat Jun 26 14:51:23 2021 -0400 2021-06-26 18:51:23+00:00 Andrew Clark `act`: Resolve to return value of scope function (#21759) When migrating some internal tests I found it annoying that I couldn't -24 days +12:21:32.839997 24.0 14.0 packages/react/src/ReactAct.js 10.0
5 25 e2453e200 Fri Jun 25 15:39:46 2021 -0400 2021-06-25 19:39:46+00:00 Andrew Clark act: Add test for bypassing queueMicrotask (#21743) Test for fix added in #21740 -25 days +13:09:55.839997 50.0 0.0 packages/react-reconciler/src/__tests__/ReactIsomorphicAct-test.js 50.0
6 27 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 4.0 5.0 packages/react-devtools-shared/src/__tests__/FastRefreshDevToolsIntegration-test.js -1.0
7 28 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 4.0 4.0 packages/react-devtools-shared/src/__tests__/componentStacks-test.js 0.0
8 29 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 12.0 12.0 packages/react-devtools-shared/src/__tests__/console-test.js 0.0
9 30 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 7.0 6.0 packages/react-devtools-shared/src/__tests__/editing-test.js 1.0
10 31 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 47.0 42.0 packages/react-devtools-shared/src/__tests__/inspectedElement-test.js 5.0
11 32 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 7.0 6.0 packages/react-devtools-shared/src/__tests__/ownersListContext-test.js 1.0
12 33 73ffce1b6 Thu Jun 24 22:42:44 2021 -0400 2021-06-25 02:42:44+00:00 Brian Vaughn DevTools: Update tests to fix warnings/errors (#21748) Some new ones had slipped in (e.g. deprecated ReactDOM.render message from 18) -26 days +20:12:53.839997 22.0 21.0 packages/react-devtools-shared/src/__tests__/profilerContext-test.js 1.0
churn calculation
commits = df["sha"].unique().tolist()
for commit in commits:
    contribution, churn = await self.calculate_churn(commit)

async def calculate_churn(self, stream):
    PREVIOUS_BASE_DIR = os.path.abspath("")
    try:
        GIT_DIR = os.path.join(PREVIOUS_BASE_DIR, "app/git/react.git")
        os.chdir(GIT_DIR)
    except FileNotFoundError as e:
        raise ValueError(e)
    cmd = f"git show --format= --unified=0 --no-prefix {stream}"
    cmds = [f"{cmd}"]
    results = get_proc_out(cmds)
    [files, contribution, churn] = get_loc(results)
    # need to circle back to previous path
    os.chdir(PREVIOUS_BASE_DIR)
    return contribution, churn
def is_new_file(result, file):
    # search for destination file (+++ ) and update file variable
    if result.startswith("+++"):
        return result[result.rfind(" ") + 1:]
    else:
        return file

def is_loc_change(result, loc_changes):
    # search for loc changes (@@ ) and update loc_changes variable
    # @@ -1,5 +1,4 @@
    # @@ -l,s +l,s @@
    if result.startswith("@@"):
        # loc_change = result[2+1:] -> -1,5 +1,4 @@
        loc_change = result[result.find(" ") + 1:]
        # loc_change = loc_change[:9] -> -1,5 +1,4
        loc_change = loc_change[: loc_change.find(" @@")]
        return loc_change
    else:
        return loc_changes

def get_loc_change(loc_changes):
    # removals
    # -1,5 +1,4 = -1,5
    left = loc_changes[: loc_changes.find(" ")]
    left_dec = 0
    if left.find(",") > 0:
        # 2
        comma = left.find(",")
        # 5
        left_dec = int(left[comma + 1:])
        # 1
        left = int(left[1:comma])
    else:
        left = int(left[1:])
        left_dec = 1
    # additions
    # +1,4
    right = loc_changes[loc_changes.find(" ") + 1:]
    right_dec = 0
    if right.find(",") > 0:
        comma = right.find(",")
        right_dec = int(right[comma + 1:])
        right = int(right[1:comma])
    else:
        right = int(right[1:])
        right_dec = 1
    if left == right:
        return {left: (right_dec - left_dec)}
    else:
        return {left: left_dec, right: right_dec}

def get_loc(results):
    files = {}
    contribution = 0
    churn = 0
    file = ""
    loc_changes = ""
    for result in results:
        new_file = is_new_file(result, file)
        if file != new_file:
            file = new_file
            if file not in files:
                files[file] = {}
        else:
            new_loc_changes = is_loc_change(
                result, loc_changes
            )  # returns either empty or "-6 +6" or "-13,0 +14,2" format
            if loc_changes != new_loc_changes:
                loc_changes = new_loc_changes
                locc = get_loc_change(loc_changes)  # {2: 0} or {8: 0, 9: 1}
                for loc in locc:
                    # files[file] = {2: 0, 8: 0, 9: 1}
                    # print("loc", loc, files[file], locc[loc])
                    if loc in files[file]:
                        # change of lines triggered
                        files[file][loc] += locc[loc]
                        churn += abs(locc[loc])
                    else:
                        files[file][loc] = locc[loc]
                        contribution += abs(locc[loc])
            else:
                continue
    return [files, contribution, churn]
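For illustration only, the helpers can be exercised on a couple of made-up git show --unified=0 lines (hypothetical input, not real output from the react repo):
sample = [
    "+++ packages/react/src/ReactAct.js",
    "@@ -10,2 +10,3 @@",
    "@@ -40 +41 @@",
]
files, contribution, churn = get_loc(sample)
print(files)         # {'packages/react/src/ReactAct.js': {10: 1, 40: 1, 41: 1}}
print(contribution)  # 3
print(churn)         # 0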
How can I utilize this same code but count churn only for changes to code that is less than 3 weeks old?
The only practical way to do this is to iterate through the DataFrame, and because that sucks with pandas, it almost always means you have the wrong data structure. If you're not doing numerical analysis, and it looks like you aren't, then just keep a simple list of dicts. Pandas has its shining points, but it's not a universal database.
Here's the rough code you'd need, although I'm glossing over details:
# Go through the df row by row.
lastdate = {}
for index, row in df.iterrows():
    if row['filepath'] in lastdate:
        if lastdate[row['filepath']] - row['date'] < timedelta(days=21):
            print("Last change to", row['filepath'], "was within three weeks")
    lastdate[row['filepath']] = row['date']
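Building on that idea, here is a sketch of how the same bookkeeping could feed the churn number itself. The column names (date, filepath, churn) are taken from the dataframe above, and date is assumed to already be a parsed timestamp:
import pandas as pd
from datetime import timedelta

recent_churn = 0
lastdate = {}

# keep only file-level rows and walk them oldest to newest,
# so lastdate always holds the previous change to each file
rows = df.dropna(subset=['filepath', 'churn']).sort_values('date').to_dict('records')
for row in rows:
    path = row['filepath']
    if path in lastdate and row['date'] - lastdate[path] < timedelta(days=21):
        # the file was rewritten within three weeks of its last change
        recent_churn += abs(row['churn'])
    lastdate[path] = row['date']

print("churn on code younger than three weeks:", recent_churn)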
I have a sample dataset which contains an id and costs in different years, as the one below:
Id   2015-04  2015-05  2015-06  2015-07  2016-04  2016-05  2016-06  2016-07  2017-04  2017-05  2017-06  2017-07  2018-04  2018-05  2018-06  2018-07
10     58500    58500    58300    57800    57500    57700    57800    57800    57800    57900    58400    59000    59500    59500    59000    58500
11    104600   104600   105700   106100   106300   107300   108000   107600   107800   108300   109200   109600   109300   108700   109000   110700
12    104900   106700   107900   107500   106100   105200   105700   106400   106700   107100   107200   107100   107500   108300   109200   110500
13     50500    49600    48900    48400    48100    48000    47700    47500    47400    47600    47800    47800    47600    47600    48100    48400
14     49800    49900    50300    50800    51100    51200    51200    51400    51600    51900    52400    52600    52300    51800    51100    50900
How can I create a function in Python to find the median cost per year for each id? I want the function to be dynamic in terms of the start and end year, so that if new data comes in for different years, the code adjusts accordingly. For example, if new data comes in for 2019, the end year would automatically be considered 2019 instead of 2018 and its median calculated as well.
With the current data sample given above, the result should look something like one below:
Id    2015    2016    2017    2018
10   58400   57750   58150   59250
11  105150  107450  108750  109150
12  107100  105900  107100  108750
13   49250   47850   47700   47850
14   50100   51200   52150   51450
First we split the column names on - and get only the year. Then we groupby over axis=1 based on these years and take the median:
df = df.set_index("Id")
df = df.groupby(df.columns.str.split("-").str[0], axis=1).median().reset_index()
# or get first 4 characters
# df = df.groupby(df.columns.str[:4], axis=1).median().reset_index()
Id 2015 2016 2017 2018
0 10 58400 57750 58150 59250
1 11 105150 107450 108750 109150
2 12 107100 105900 107100 108750
3 13 49250 47850 47700 47850
4 14 50100 51200 52150 51450
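If your pandas version warns that groupby with axis=1 is deprecated, the same result can be obtained by grouping the transposed frame; a small sketch of that variant:
df = df.set_index("Id")
# group the transposed frame by the first 4 characters of each column (the year),
# then transpose back so the years become columns again
result = df.T.groupby(df.T.index.str[:4]).median().T.reset_index()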
I am trying to write a function that takes a df and a dictionary that maps columns to values. The function slices rows (indexes) such that it returns only rows whose values match the 'criteria' key/value pairs.
For example:
df_isr13 = filterby_criteria(df, {"Area":["USA"], "Year":[2013]})
Only rows with "Year"=2013 and "Area"="USA" are included in the output.
I tried:
def filterby_criteria(df, criteria):
    for key, values in criteria.items():
        return df[df[key].isin(values)]
but I get only the rows matching the first criterion.
How can I get a new dataframe that applies all the criteria using pd.DataFrame.isin()?
You can use a for loop and apply every criterion with the pandas merge function:
def filterby_criteria(df, criteria):
    for key, values in criteria.items():
        df = pd.merge(df[df[key].isin(values)], df, how='inner')
    return df
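For comparison, the same cumulative filtering can be written with isin alone, narrowing the frame once per criterion and only returning after the loop; a minimal sketch:
def filterby_criteria(df, criteria):
    for key, values in criteria.items():
        df = df[df[key].isin(values)]  # keep only rows matching this criterion
    return df                          # return after all criteria have been applied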
Consider a simple merge of two data frames since by default merge uses all matching names:
from itertools import product
import pandas as pd

def filterby_criteria(df, criteria):
    # EXTRACT DICT ITEMS
    k, v = criteria.keys(), criteria.values()
    # BUILD DF OF ALL POSSIBLE MATCHES
    all_matches = (pd.DataFrame(product(*v))
                     .set_axis(list(k), axis='columns', inplace=False))
    # RETURN MERGED DF
    return df.merge(all_matches)
To demonstrate with random, seeded data:
Data
import numpy as np
import pandas as pd
np.random.seed(61219)
tools = ['sas', 'stata', 'spss', 'python', 'r', 'julia']
years = list(range(2013, 2019))
random_df = pd.DataFrame({'Tool': np.random.choice(tools, 500),
                          'Int': np.random.randint(1, 10, 500),
                          'Num': np.random.uniform(1, 100, 500),
                          'Year': np.random.choice(years, 500)})
print(random_df.head(10))
# Tool Int Num Year
# 0 spss 4 96.465327 2016
# 1 sas 7 23.455771 2016
# 2 r 5 87.349825 2014
# 3 julia 4 18.214028 2017
# 4 julia 7 17.977237 2016
# 5 stata 3 41.196579 2013
# 6 stata 8 84.943676 2014
# 7 python 4 60.576030 2017
# 8 spss 4 47.024075 2018
# 9 stata 3 87.271072 2017
Function call
criteria = {"Tool":["python", "r"], "Year":[2013, 2015]}
def filterby_criteria(df, criteria):
    k, v = criteria.keys(), criteria.values()
    all_matches = (pd.DataFrame(product(*v))
                     .set_axis(list(k), axis='columns', inplace=False))
    return df.merge(all_matches)
final_df = filterby_criteria(random_df, criteria)
Output
print(final_df)
# Tool Int Num Year
# 0 python 8 96.611384 2015
# 1 python 7 66.782828 2015
# 2 python 9 73.638629 2015
# 3 python 4 70.763264 2015
# 4 python 2 28.311917 2015
# 5 python 3 69.888967 2015
# 6 python 8 97.609694 2015
# 7 python 3 59.198276 2015
# 8 python 3 64.497017 2015
# 9 python 8 87.672138 2015
# 10 python 9 33.605467 2015
# 11 python 8 25.225665 2015
# 12 r 3 72.202364 2013
# 13 r 1 62.192478 2013
# 14 r 7 39.264766 2013
# 15 r 3 14.599786 2013
# 16 r 4 22.963723 2013
# 17 r 1 97.647922 2013
# 18 r 5 60.457344 2013
# 19 r 5 15.711207 2013
# 20 r 7 80.273330 2013
# 21 r 7 74.190107 2013
# 22 r 7 37.923396 2013
# 23 r 2 91.970678 2013
# 24 r 4 31.489810 2013
# 25 r 1 37.580665 2013
# 26 r 2 9.686955 2013
# 27 r 6 56.238919 2013
# 28 r 6 72.820625 2015
# 29 r 3 61.255351 2015
# 30 r 4 45.690621 2015
# 31 r 5 71.143601 2015
# 32 r 6 54.744846 2015
# 33 r 1 68.171978 2015
# 34 r 5 8.521637 2015
# 35 r 7 87.027681 2015
# 36 r 3 93.614377 2015
# 37 r 7 37.918881 2015
# 38 r 3 7.715963 2015
# 39 python 1 42.681928 2013
# 40 python 6 57.354726 2013
# 41 python 1 48.189897 2013
# 42 python 4 12.201131 2013
# 43 python 9 1.078999 2013
# 44 python 9 75.615457 2013
# 45 python 8 12.631277 2013
# 46 python 9 82.227578 2013
# 47 python 7 97.802213 2013
# 48 python 1 57.103964 2013
# 49 python 1 1.941839 2013
# 50 python 3 81.981437 2013
# 51 python 1 56.869551 2013
It is the first time I'm using pandas and I do not really know how to deal with my problem.
In fact I have 2 data frames:
import pandas
blast=pandas.read_table("blast")
cluster=pandas.read_table("cluster")
Here is an example of their contents:
>>> cluster
cluster_name seq_names
0 1 g1.t1_0035
1 1 g1.t1_0035_0042
2 119365 g1.t1_0042
3 90273 g1.t1_0042_0035
4 71567 g10.t1_0035
5 37976 g10.t1_0035_0042
6 22560 g10.t1_0042
7 90280 g10.t1_0042_0035
8 82698 g100.t1_0035
9 47392 g100.t1_0035_0042
10 28484 g100.t1_0042
11 22580 g100.t1_0042_0035
12 19474 g1000.t1_0035
13 5770 g1000.t1_0035_0042
14 29708 g1000.t1_0042
15 99776 g1000.t1_0042_0035
16 6283 g10000.t1_0035
17 39828 g10000.t1_0035_0042
18 25383 g10000.t1_0042
19 106614 g10000.t1_0042_0035
20 6285 g10001.t1_0035
21 13866 g10001.t1_0035_0042
22 121157 g10001.t1_0042
23 106615 g10001.t1_0042_0035
24 6286 g10002.t1_0035
25 113 g10002.t1_0035_0042
26 25397 g10002.t1_0042
27 106616 g10002.t1_0042_0035
28 4643 g10003.t1_0035
29 13868 g10003.t1_0035_0042
... ... ...
and
[78793 rows x 2 columns]
>>> blast
qseqid sseqid pident length mismatch \
0 g1.t1_0035_0042 g1.t1_0035_0042 100.0 286 0
1 g1.t1_0035_0042 g1.t1_0035 100.0 257 0
2 g1.t1_0035_0042 g9307.t1_0035 26.9 134 65
3 g2.t1_0035_0042 g2.t1_0035_0042 100.0 445 0
4 g2.t1_0035_0042 g2.t1_0035 95.8 451 3
5 g2.t1_0035_0042 g24520.t1_0042_0035 61.1 429 137
6 g2.t1_0035_0042 g9924.t1_0042 61.1 429 137
7 g2.t1_0035_0042 g1838.t1_0035 86.2 29 4
8 g3.t1_0035_0042 g3.t1_0035_0042 100.0 719 0
9 g3.t1_0035_0042 g3.t1_0035 84.7 753 62
10 g4.t1_0035_0042 g4.t1_0035_0042 100.0 242 0
11 g4.t1_0035_0042 g3.t1_0035 98.8 161 2
12 g5.t1_0035_0042 g5.t1_0035_0042 100.0 291 0
13 g5.t1_0035_0042 g3.t1_0035 93.1 291 0
14 g6.t1_0035_0042 g6.t1_0035_0042 100.0 152 0
15 g6.t1_0035_0042 g4.t1_0035 100.0 152 0
16 g7.t1_0035_0042 g7.t1_0035_0042 100.0 216 0
17 g7.t1_0035_0042 g5.t1_0035 98.1 160 3
18 g7.t1_0035_0042 g11143.t1_0042 46.5 230 99
19 g7.t1_0035_0042 g27537.t1_0042_0035 40.8 233 111
20 g3778.t1_0035_0042 g3778.t1_0035_0042 100.0 86 0
21 g3778.t1_0035_0042 g6174.t1_0035 98.0 51 1
22 g3778.t1_0035_0042 g20037.t1_0035_0042 100.0 50 0
23 g3778.t1_0035_0042 g37190.t1_0035 100.0 50 0
24 g3778.t1_0035_0042 g15112.t1_0042_0035 66.0 53 18
25 g3778.t1_0035_0042 g6061.t1_0042 66.0 53 18
26 g18109.t1_0035_0042 g18109.t1_0035_0042 100.0 86 0
27 g18109.t1_0035_0042 g33071.t1_0035 100.0 81 0
28 g18109.t1_0035_0042 g32810.t1_0035 96.4 83 3
29 g18109.t1_0035_0042 g17982.t1_0035_0042 98.6 72 1
... ... ... ... ... ...
If you focus on the cluster dataframe, the first column corresponds to the cluster ID, and inside those clusters there are several sequence IDs.
What I need to do is first split all my clusters (in R it would be like: liste = split(x = data$V2, f = data$V1)).
And then create a function which displays the most similar pair of sequences within each cluster.
Here is an example:
Let's say I have two clusters (dataframe cluster):
cluster 1:
seq1
seq2
seq3
seq4
cluster 2:
seq5
seq6
seq7
...
In the blast dataframe, the 3rd column gives the similarity between all sequences (all against all), so something like:
seq1 vs seq1 100
seq1 vs seq2 90
seq1 vs seq3 56
seq1 vs seq4 49
seq1 vs seq5 40
....
seq2 vs seq3 70
seq2 vs seq4 98
...
seq5 vs seq5 100
seq5 vs seq6 89
seq5 vs seq7 60
seq7 vs seq7 46
seq7 vs seq7 100
seq6 vs seq6 100
and what I need to get is :
cluster 1 (best paired sequences):
seq 1 vs seq 2
cluster2 (best paired sequences):
seq 5 vs seq6
...
So as you can see, I do not want to take into account the sequences paired with themselves.
If someone could give me some clues it would be fantastic.
Thank you all.
Firstly I assume that there are no pairings in 'blast' with sequences from two different clusters. In other words: in this solution the cluster ID of a pairing will be determined from only one of the two sequence IDs.
Including cluster information and pairing information into one dataframe:
data = cluster.merge(blast, left_on='seq_names', right_on='qseqid')
Then the data should only contain pairings of different sequences:
data = data[data['qseqid']!=data['sseqid']]
To ignore pairings which have the same substrings in their seqid, the most readable way would be to add data columns with these data:
data['qspec'] = [seqid.split('_')[1] for seqid in data['qseqid'].values]
data['sspec'] = [seqid.split('_')[1] for seqid in data['sseqid'].values]
Now equal spec-values can be filtered the same way like it was done with equal seqids above:
data = data[data['qspec']!=data['sspec']]
In the end the data should be grouped by cluster-ID and within each group, the maximum of pident is of interest:
data_grpd = data.groupby('cluster_name')
result = data.loc[data_grpd['pident'].idxmax()]
The only drawback here, apart from the above-mentioned assumption, is that if there are several exactly equal max values, only one of them will be taken into account.
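If ties matter, one way around that limitation (a sketch using the same data and data_grpd as above) is to compare each row against its group-wise maximum instead of using idxmax:
# keep every row whose pident equals the maximum of its cluster
max_per_cluster = data_grpd['pident'].transform('max')
result_with_ties = data[data['pident'] == max_per_cluster]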
Note: if you don't want the spec columns to be of type string, you could easily turn them into integers on the fly:
data['qspec'] = [int(seqid.split('_')[1]) for seqid in data['qseqid'].values]
This merges the dataframes first on sseqid, then on qseqid, and then returns results_df. Any rows with a 100% match are filtered out. Let me know if this works. You can then order by cluster name.
blast = blast.loc[blast['pident'] != 100]
results_df = cluster.merge(blast, left_on='seq_names', right_on='sseqid')
results_df = pandas.concat([results_df, cluster.merge(blast, left_on='seq_names', right_on='qseqid')])
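To then reduce this to the best pair per cluster, a sketch (it assumes the merged frame has the cluster_name, qseqid, sseqid and pident columns shown above):
# drop self-hits, sort best matches first, and keep one row per cluster
best_pairs = (results_df[results_df['qseqid'] != results_df['sseqid']]
              .sort_values('pident', ascending=False)
              .drop_duplicates('cluster_name')
              [['cluster_name', 'qseqid', 'sseqid', 'pident']])
print(best_pairs)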