Issues in converting a SAS macro to pandas (Python)

I am new to pandas, and I'm learning it through its web documentation. I am facing issues in converting the following SAS code to pandas.
My SAS code:
data tmp2;
set tmp1;
retain group 0;
if _n_=1 and group_v1 = -1 then group = group_v1;
else if _n_=1 and group_v1 ne -1 then group=0;
else group=group+1;
run;
Note: In the above code group_v1 is a column from tmp1
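(For example, with group_v1 values [-1, 0, 1], the data step yields group values [-1, 0, 1]: row 1 seeds group with -1 because group_v1 is -1, and every later row increments the retained value by 1.)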

There may be a more succinct and efficient way to do this in pandas, but this approach quite closely matches what SAS does internally when your code is run:
import pandas as pd

tmp1 = pd.DataFrame({"group_v1": [-1, 0, 1]})

def build_tmp2(tmp1):
    # Contains the new rows for tmp2
    _tmp2 = []
    # Loop over the rows of tmp1 - like a data step does
    for i, row in tmp1.iterrows():
        # Equivalent to the data statement - copy the current row to memory
        tmp2 = row.copy()
        # _N_ is equivalent to i, except i starts at zero in Pandas/Python
        if i == 0:
            # Create a new dict called pdv to contain values across loops.
            # This is equivalent to the Program Data Vector in SAS.
            pdv = {}
            if row['group_v1'] == -1:
                pdv['group'] = row['group_v1']
            else:
                pdv['group'] = 0
        else:
            # Equivalent to both retain group and group=group+1
            pdv['group'] += 1
        # Copy the accumulating group variable to the target row
        tmp2['group'] = pdv['group']
        # Append the updated row to the list
        _tmp2.append(tmp2.copy())
    # After the loop has finished, build the new DataFrame from the list
    return pd.DataFrame(_tmp2)

build_tmp2(tmp1)
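For completeness, here is a more succinct vectorized sketch of the same logic (my own shorthand, not part of the original answer): since only the first row can seed the counter and every later row adds 1, the retained value is just the first-row seed plus the row position.

import numpy as np
import pandas as pd

tmp1 = pd.DataFrame({"group_v1": [-1, 0, 1]})
# Seed from the first row, exactly as the data step's _n_=1 branches do
start = tmp1['group_v1'].iat[0] if tmp1['group_v1'].iat[0] == -1 else 0
tmp2 = tmp1.copy()
# Each subsequent row increments the retained counter by 1
tmp2['group'] = start + np.arange(len(tmp1))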

Related

Python while loop not updating the DataFrame column calculations

I am writing Python code in which, while a condition holds, I want calculations to happen and update the DataFrame columns. However, I am noticing that the DataFrame is not getting updated, and all the values are from the 1st iteration only. Can an expert guide me on where I am going wrong? Below is my sample code:
mbd_out_ub2 = mbd_out_ub1
mbd_out_ub2_len = len(mbd_out_ub2)
plt_mbd_c1_all = pd.DataFrame()
brd2c2_all = pd.DataFrame()
iterc = 1
### plt_mbd_c >> this is the data frame with data before the loop starts
plt_mbd_c0 = plt_mbd_c.copy()
plt_mbd_c0 = plt_mbd_c0[plt_mbd_c0['UB_OUT'] == 1]
while (iterc < 10):
    plt_mbd_c1 = plt_mbd_c0.copy()
    brd2c2 = plt_mbd_c1.groupby('KEY1')['NEST_VAL_PER'].agg([('KEY1_CNT','count'),('PER1c', lambda x: x.quantile(0.75))]).reset_index()
    brd2c2_all = brd2c2_all.append(brd2c2).reset_index(drop=True)
    plt_mbd_c1 = pd.merge(plt_mbd_c1, brd2c2[['KEY1','PER1c']], on='KEY1', how='left')
    del brd2c2, plt_mbd_c0
    plt_mbd_c1['NEST_VAL_PER1'] = plt_mbd_c1['PER1c'] * (plt_mbd_c1['EVAL_LP_%'] / 100)
    plt_mbd_c1['NEST_VAL_PER1'] = np.where((plt_mbd_c1['BRD_OUT_FLAG'] == 0), plt_mbd_c1['NEST_VAL'], plt_mbd_c1['NEST_VAL_PER1'])
    plt_mbd_c1['SALESC'] = plt_mbd_c1['NEST_VAL_PER1'] / plt_mbd_c1['PROJR'] / plt_mbd_c1['NEWPRICE']
    plt_mbd_c1['C_SALES_C'] = np.where(plt_mbd_c1['OUT_FLAG'] == 1, plt_mbd_c1['SALESC'], plt_mbd_c1['SALESUNIT'])
    plt_mbd_c1['NEST_VAL_PER'] = plt_mbd_c1['C_SALES_C'] * plt_mbd_c1['PROJR'] * plt_mbd_c1['NEWPRICE']
    plt_mbd_c1['ITER'] = iterc
    plt_mbd_c1_all = plt_mbd_c1_all.append(plt_mbd_c1).reset_index(drop=True)
    plt_mbd_c1.drop(['PER1c'], axis=1, inplace=True)
    plt_mbd_c0 = plt_mbd_c1.copy()
    del plt_mbd_c1
    print("iter = ", iterc)
    iterc = iterc + 1
So, above, I want to take the 75th percentile of a column by KEY1 and do a few calculations. The idea is that after every iteration my 75th percentile keeps reducing, since I am updating the same column with a calculated value that is lower than the current value (because it is based on the 75th percentile). However, when I check, I find that for all the iterations the values are the same as in the 1st iteration. I have tried deleting the data frames, saving to a temp data frame, and copying the data frame, but none of it seems to be working.
Please help!
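For reference, here is a minimal self-contained sketch of the intended feed-back pattern (toy data and a made-up capping rule, not the asker's real columns): each pass recomputes the per-group 75th percentile from the previous pass's output, so the percentile shrinks across iterations.

import pandas as pd

df = pd.DataFrame({"KEY1": ["a", "a", "a", "b", "b", "b"],
                   "NEST_VAL_PER": [10.0, 20.0, 30.0, 5.0, 15.0, 25.0]})
for iterc in range(1, 4):
    # 75th percentile per group, computed from the previous iteration's values
    df["PER1c"] = df.groupby("KEY1")["NEST_VAL_PER"].transform(lambda s: s.quantile(0.75))
    # cap each value at 90% of its group percentile, then feed it back
    df["NEST_VAL_PER"] = df[["NEST_VAL_PER", "PER1c"]].min(axis=1) * 0.9
    df = df.drop(columns="PER1c")
    print("iter =", iterc, df.groupby("KEY1")["NEST_VAL_PER"].quantile(0.75).round(2).to_dict())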

Efficient solution to create multiple columns with a formula (pandas/python)

I'm trying to create multiple columns (a couple of hundred) using values within the same df. Is there a more efficient way for me to create multiple columns in batches? Below is an example where I have to manually input the new column names jwrl2_rank.r1, jwrl2_rank.1r1, jwrl2_rank.2r1, etc., attached to the formula.
i0, i1, i2, ... are the original column names, and rn is the value within the column.
i0='jwrl2_rank'
i1='jwrl2_rank.1'
i2='jwrl2_rank.2'
i3='jwrl2_rank.3'
i4='jwrl2_rank.4'
i5='jwrl2_rank.5'
i6='jwrl2_rank.6'
i7='jwrl2_rank.7'
rn=1
df['jwrl2_rank.r1']=((df.loc[(df[i0]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i0]==rn),i0].count()))-1
df['jwrl2_rank.1r1']=((df.loc[(df[i1]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i1]==rn),i1].count()))-1
df['jwrl2_rank.2r1']=((df.loc[(df[i2]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i2]==rn),i2].count()))-1
df['jwrl2_rank.3r1']=((df.loc[(df[i3]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i3]==rn),i3].count()))-1
df['jwrl2_rank.4r1']=((df.loc[(df[i4]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i4]==rn),i4].count()))-1
df['jwrl2_rank.5r1']=((df.loc[(df[i5]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i5]==rn),i5].count()))-1
df['jwrl2_rank.6r1']=((df.loc[(df[i6]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i6]==rn),i6].count()))-1
df['jwrl2_rank.7r1']=((df.loc[(df[i7]==rn)&(df['result']==1),'timing'].sum())/(df.loc[(df[i7]==rn),i7].count()))-1
many thanks. regards
Using a for loop should work.
Incrementing string value
By using string interpolation you could solve your problem. See here for a quick introduction. I am using f-strings in the example below.
base_name = 'jwrl2_rank'
MAX_NUMBER = 3

for i in range(1, MAX_NUMBER + 1):
    new_name = f"{base_name}.{i}"
    print(new_name)
>>>
jwrl2_rank.1
jwrl2_rank.2
jwrl2_rank.3
Example of for loop
base_name = 'jwrl2_rank'
MAX_NUMBER = 3

for i in range(MAX_NUMBER + 1):
    current_iN = f"{base_name}.{i}"
    new_col_name = f"{base_name}.{i}r1"
    if i == 0:  # compensate for the missing zero in the first column name
        current_iN = base_name
        new_col_name = f"{base_name}.r1"
    df[new_col_name] = ((df.loc[(df[current_iN] == rn) & (df['result'] == 1), 'timing'].sum()) / (df.loc[(df[current_iN] == rn), current_iN].count())) - 1
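If hundreds of such columns are created, assigning them one at a time can be slow and fragments the DataFrame. A variant of the loop above (same assumed names: df, rn, base_name, MAX_NUMBER) collects the scalar results first and attaches them in a single pass:

import pandas as pd

new_cols = {}
for i in range(MAX_NUMBER + 1):
    current_iN = base_name if i == 0 else f"{base_name}.{i}"
    new_col_name = f"{base_name}.r1" if i == 0 else f"{base_name}.{i}r1"
    hit = df[current_iN] == rn
    new_cols[new_col_name] = (df.loc[hit & (df['result'] == 1), 'timing'].sum()
                              / df.loc[hit, current_iN].count()) - 1
# one concat instead of hundreds of single-column insertions
df = pd.concat([df, pd.DataFrame(new_cols, index=df.index)], axis=1)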

How to insert a value in Google Sheets through Python only when the cell is empty/null

I'm trying to automate Google Sheets through Python, and every time my DF query runs, it inserts the data with the current day.
To put it simply, when a date column is empty, it has to be filled with the date on which the program runs. The image is:
[example image]
I was trying to do something like this:
ws = client.open("automation").worksheet('sheet2')
ws.update(df_h.fillna('0').columns.values.tolist())
I'm not able to fill just the empty cells; it seems that either the whole column gets replaced, or all the rows, etc.
Solved it through another account:
ws_date_pipe = client.open("automation").worksheet('sheet2')
# Range of date column (targeted one, which is the min range)
next_row_min = str(len(list(filter(None, ws_date_pipe.col_values(8)))) + 1)
# Range of first column (which is the max range)
next_row_max = str(len(list(filter(None, ws_date_pipe.col_values(1)))))
cell_list = ws_date_pipe.range(f"H{next_row_min}:H{next_row_max}")
cell_values = []
# Difference between max-min ranges, the space that needs to be filled
for x in range(0, ((int(next_row_max) + 1) - int(next_row_min)), 1):
    iterator = datetime.datetime.now().strftime("%Y-%m-%d")
    iterator = str(iterator)
    cell_values.append(iterator)
for i, val in enumerate(cell_values):
    cell_list[i].value = val
# If date range len "next_row_min" is lower than the first column, then fill.
if int(next_row_min) < int(next_row_max) + 1:
    ws_date_pipe.update_cells(cell_list)
print(f'Saved to csv file. {datetime.datetime.now().strftime("%Y-%m-%d")}')
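A slightly more direct variant of the same idea (a sketch reusing the ws_date_pipe handle and next_row_max value above): fetch the column-H cells once and only fill the ones that are actually empty.

import datetime

today = datetime.datetime.now().strftime("%Y-%m-%d")
cells = ws_date_pipe.range(f"H2:H{next_row_max}")  # skip the header row
for cell in cells:
    if not cell.value:  # only touch genuinely empty cells
        cell.value = today
ws_date_pipe.update_cells(cells)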

Randomization of a list with conditions using Pandas

I'm new to any kind of programming, as you can tell by this 'beautiful' piece of hard coding. With sweat and tears (not so bad, just a little), I've created very sequential code, and that's actually my problem. My goal is to create a somewhat-automated script, probably including a for loop (which I've tried unsuccessfully).
The main aim is to create a randomization loop which takes an original dataset looking like this:
[dataset image]
From this dataset, pick rows randomly one by one and save them one by one to another Excel list. The point is that the selected row's values in the columns position01 and position02 should never match the previous pick in either of those two columns. That should eventually create an Excel sheet with randomized rows in which each row shares no values with the previous one: row 2 should not include any of the position01/position02 values of row 1, row 3 should not contain values of row 2, etc. It should also iterate over the range of the list length, which is 0-11. The Excel output is also important, since I need the rest of the columns; I just need to shuffle the order.
I hope my aim and description are clear enough; if not, I'm happy to answer any questions. I would appreciate any hint or help that gets me 'unstuck'. Thank you. Code below. (PS: I'm aware there is probably a much neater solution than this.)
import pandas as pd
import random
dataset = pd.read_excel("C:\\Users\\ibm\\Documents\\Psychopy\\DataInput_Training01.xlsx")
# original data set use for comparisons
imageDataset = dataset.loc[0:11, :]
# creating empty df for storing rows from imageDataset
emptyExcel = pd.DataFrame()
randomPick = imageDataset.sample() # select randomly one row from imageDataset
emptyExcel = emptyExcel.append(randomPick) # append a row to empty df
randomPickIndex = randomPick.index.tolist() # get index of the row
imageDataset2 = imageDataset.drop(index=randomPickIndex) # delete the row with index selected before
# getting raw values from the row 'position01'/02 are columns headers
randomPickTemp1 = randomPick['position01'].values[0]
randomPickTemp2 = randomPick
randomPickTemp2 = randomPickTemp2['position02'].values[0]
# getting a dataset which not including row values from position01 and position02
isit = imageDataset2[(imageDataset2.position01 != randomPickTemp1) & (imageDataset2.position02 != randomPickTemp1) & (imageDataset2.position01 != randomPickTemp2) & (imageDataset2.position02 != randomPickTemp2)]
# pick another row from dataset not including row selected at the beginning - randomPick
randomPick2 = isit.sample()
# save it in empty df
emptyExcel = emptyExcel.append(randomPick2, sort=False)
# get index of this second row to delete it in next step
randomPick2Index = randomPick2.index.tolist()
# delete the another row
imageDataset3 = imageDataset2.drop(index=randomPick2Index)
# AND REPEAT the procedure of comparison of the raw values with dataset already not including the original row:
randomPickTemp1 = randomPick2['position01'].values[0]
randomPickTemp2 = randomPick2
randomPickTemp2 = randomPickTemp2['position02'].values[0]
isit2 = imageDataset3[(imageDataset3.position01 != randomPickTemp1) & (imageDataset3.position02 != randomPickTemp1) & (imageDataset3.position01 != randomPickTemp2) & (imageDataset3.position02 != randomPickTemp2)]
# AND REPEAT with another pick - save - matching - picking again.. until end of the length of the dataset (which is 0-11)
In the end I used a solution provided by David Bridges (post from Sep 19, 2019) on the PsychoPy forums. In case anyone is interested, here is the link: https://discourse.psychopy.org/t/how-do-i-make-selective-no-consecutive-trials/9186
I've just adjusted the condition in the for loop to my case, like this:
remaining = [choices[x] for x in choices if last['position01'] != choices[x]['position01'] and last['position01'] != choices[x]['position02'] and last['position02'] != choices[x]['position01'] and last['position02'] != choices[x]['position02']]
Thank you very much for the helpful answer, and hopefully I did not spam it over here too much.
import itertools as it
import random
import pandas as pd

# list of pairs of numbers
tmp1 = [x for x in it.permutations(list(range(6)), 2)]
df = pd.DataFrame(tmp1, columns=["position01", "position02"])
df1 = pd.DataFrame()
i = random.choice(df.index)
df1 = df1.append(df.loc[i], ignore_index=True)
df = df.drop(index=i)
while not df.empty:
    val = list(df1.iloc[-1])
    tmp = df[(df["position01"] != val[0]) & (df["position01"] != val[1]) & (df["position02"] != val[0]) & (df["position02"] != val[1])]
    if tmp.empty:  # looped for 10000 times, was never empty
        print("here")
        break
    i = random.choice(tmp.index)
    df1 = df1.append(df.loc[i], ignore_index=True)
    df = df.drop(index=i)
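A quick sanity check on the result (a small sketch using the df1 built above): confirm that no two consecutive rows share a value in either position column.

vals = df1[["position01", "position02"]].to_numpy()
# each adjacent pair of rows must have disjoint value sets
ok = all(set(vals[k]).isdisjoint(vals[k + 1]) for k in range(len(vals) - 1))
print("no consecutive overlap:", ok)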

Empty Pandas dataframe when attempting to threshold

I am attempting to threshold a pandas DataFrame which contains gene IDs and statistical information. The input to my Python program is a config.yaml file that holds the initial threshold values and a path to a CSV file (the eventual DataFrame). The problem I seem to be running into stems from passing my threshold variables into a "cut-down" DataFrame. I am able to threshold successfully when using hard-coded values (in a deprecated method), but I receive an empty DataFrame when thresholding using variables pointing to values in the config file.
Below is my current implementation:
import pandas as pd
import yaml

# (file handle assumed; the question does not show how it is opened)
with open("config.yaml") as file:
    config = yaml.full_load(file)

# for item, doc in config.items():
#     print(item, ":", doc)
input_path = config['DESeq_input']['path']
# print(input_path)
baseMean = config['baseMean']
log2FoldChange = config['log2FoldChange']
lfcSE = config['lfcSE']
pvalue = config['pvalue']
padj = config['padj']
df = pd.read_csv(input_path)
# print if 0 < than padj for test
# convert to #, most likely being read as string
# now use threshold value to cut down CSV
# only columns defined in config.yaml file
df_select = df[['genes', 'baseMean', 'log2FoldChange', 'lfcSE', 'pvalue', 'padj']]
# print(df_select)
# print(df_select['genes'])
df_threshold = df_select.loc[(df_select['baseMean'] < baseMean)
                             & (df_select['log2FoldChange'] < log2FoldChange)
                             & (df_select['lfcSE'] < lfcSE)
                             & (df_select['pvalue'] < pvalue)
                             & (df_select['padj'] < padj)]
print(df_threshold)
And below is my (deprecated) implementation (that works):
df = pd.read_csv('/Users/nmaki/Documents/GitHub/IDEA/tests/eDESeq2.csv')
df_select = df[['genes', 'pvalue', 'padj', 'log2FoldChange']]
df_threshold = df_select.loc[(df_select['pvalue'] < 0.05)
                             & (df_select['padj'] < 0.1)
                             & (df_select['log2FoldChange'] < 0.5)]
print(df_threshold)
Upon execution of my current code, I get:
Empty DataFrame
Columns: [genes, baseMean, log2FoldChange, lfcSE, pvalue, padj]
Index: []
Example contents of the csv file I am loading in as a dataframe:
"genes","baseMean","log2FoldChange","lfcSE","stat","pvalue","padj"
"ENSDARG00000000001",98.1095154977918,-0.134947665995593,0.306793322887575,-0.439865068527078,0.660034837008121,0.93904992415549
"ENSDARG00000000002",731.125841719954,0.666095249996351,0.161764851506172,4.11767602043598,3.82712199388831e-05,0.00235539468663284
"ENSDARG00000000018",367.699187187462,-0.170546910862128,0.147128047078344,-1.1591733476304,0.246385533026112,0.756573630543937
"ENSDARG00000000019",1133.08821430092,-0.131148919306121,0.104742185100469,-1.25211173683576,0.210529151546469,0.718240791187956
"ENSDARG00000000068",397.13408030651,-0.111332941901299,0.161417383863387,-0.689720891496564,0.49036972534723,0.8864754582597
"ENSDARG00000000069",1886.21783387126,-0.107901197025113,0.113522109960702,-0.950486183374019,0.341865271089735,0.82295928359482
"ENSDARG00000000086",246.197553048504,0.390421091410488,0.215725761369183,1.80980282063921,0.0703263703690051,0.466064880589034
"ENSDARG00000000103",797.782152145232,0.236382332789599,0.145111727277908,1.62896781138092,0.103319833277229,0.550658656731341
"ENSDARG00000000142",26.1411622212853,0.248419645848534,0.495298350652519,0.501555568519983,0.615980180267141,0.927327861190167
"ENSDARG00000000151",121.397701922367,0.276123125224845,0.244276041791451,1.13037333993066,0.25831894300396,0.766841249972654
"ENSDARG00000000161",22.2863001989718,0.837640942615127,0.542200061816621,1.54489274643135,0.122372208261173,0.587106227452529
"ENSDARG00000000183",215.47910609869,0.567221763062732,0.188807351259458,3.00423558340829,0.00266249076445763,0.0615311290935424
"ENSDARG00000000189",620.819069705942,0.0525797819665496,0.142171888686286,0.369832478504743,0.711507313969775,0.950479626809728
"ENSDARG00000000212",54472.1417532637,0.344813324409911,0.130070467015575,2.65097321722249,0.00802602056136946,0.132041563800088
"ENSDARG00000000229",172.985864037855,-0.0814838221355631,0.22200915791162,-0.367029103222856,0.713597309421024,0.95157821096128
"ENSDARG00000000241",511.449190233542,-0.431854805500191,0.157764756166574,-2.73733383801019,0.0061939401710654,0.114238610824236
"ENSDARG00000000324",179.189751392247,0.0141623609187069,0.206197755704643,0.0686833902256096,0.945241639658214,0.992706066946251
"ENSDARG00000000349",13.6578995386995,0.86981405362392,0.716688718472183,1.21365668414338,0.224878851627296,0.731932542953245
"ENSDARG00000000369",9.43959070533812,-0.042383076946964,0.868977019485631,-0.0487735302506061,0.961099776861288,NA
"ENSDARG00000000370",129.006520833067,0.619490133053518,0.250960632807829,2.46847533863165,0.0135690001510168,0.184768676917612
"ENSDARG00000000380",17.695581482726,-0.638493654324115,0.597289695632778,-1.06898488119351,0.285076482019819,0.786103920659844
"ENSDARG00000000394",2200.41651475378,-0.00605761754099435,0.0915611724486909,-0.0661592395443486,0.947251047773153,0.992978480118812
"ENSDARG00000000423",195.477813443242,-0.18634265895713,0.188820984694016,-0.986874733542448,0.323704052061987,0.810439992736898
"ENSDARG00000000442",1102.47980192551,0.0589654622770368,0.112333519273845,0.524914225586502,0.599642819781172,0.920807266898811
"ENSDARG00000000460",8.52822266110357,0.229130838495461,0.957763036484278,0.239235416034165,0.810923041830713,NA
"ENSDARG00000000472",0.840917787550721,-0.4234502342491,3.1634759582284,-0.133855998857105,0.893516444899853,NA
"ENSDARG00000000474",5.12612778660879,0.394871266508097,1.07671345623418,0.366737560696199,0.713814786364707,NA
"ENSDARG00000000476",75.8417047936895,0.242006157627571,0.349451220882324,0.692532013528336,0.488603288756242,0.885874315527816
"ENSDARG00000000489",1233.33364888202,0.0676458807753533,0.131846296650645,0.513066217965876,0.607905001380741,0.924392802283811
As it turns out, my thresholds were too restrictive (I had added two additional conditions that did not exist in my original implementation). I am receiving a populated DataFrame now.
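When this happens, it can help to count how many rows survive each condition on its own; a small diagnostic sketch, reusing the variables defined above:

conditions = {
    'baseMean': df_select['baseMean'] < baseMean,
    'log2FoldChange': df_select['log2FoldChange'] < log2FoldChange,
    'lfcSE': df_select['lfcSE'] < lfcSE,
    'pvalue': df_select['pvalue'] < pvalue,
    'padj': df_select['padj'] < padj,
}
for name, mask in conditions.items():
    # a condition that passes zero rows is the one emptying the result
    print(f"{name}: {mask.sum()} of {len(mask)} rows pass")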
