How to create a more compact group dictionary in Python

Hi, this is part of my code for a biology project:
# choosing and loading the file:
df = pd.read_csv('Dafniyot_Data.csv',delimiter=',')
#grouping data by C/I groups:
CII = df[df['group'].str.contains('CII')]
CCI = df[df['group'].str.contains('CCI')]
CCC = df[df['group'].str.contains('CCC')]
III = df[df['group'].str.contains('III')]
CIC = df[df['group'].str.contains('CIC')]
ICC = df[df['group'].str.contains('ICC')]
IIC = df[df['group'].str.contains('IIC')]
ICI = df[df['group'].str.contains('ICI')]
#creating a dictionary of the groups:
dict = {'CII':CII, 'CCI':CCI, 'CCC':CCC,'III':III,'CIC':CIC,'ICC':ICC,'IIC':IIC,'ICI':ICI}
#T test
#FERTUNITY
#using ttest for checking FERTUNITY - grandmaternal(F0)
t_F0a = stats.ttest_ind(CCC['N_offspring'],ICC['N_offspring'],nan_policy='omit')
t_F0b = stats.ttest_ind(CCI['N_offspring'],ICI['N_offspring'],nan_policy='omit')
t_F0c = stats.ttest_ind(IIC['N_offspring'],CIC['N_offspring'],nan_policy='omit')
t_F0d = stats.ttest_ind(CCI['N_offspring'],III['N_offspring'],nan_policy='omit')
t_F0 = {'FERTUNITY - grandmaternal(F0)':[t_F0a,t_F0b,t_F0c,t_F0d]}
I need to repeat the t-test part 6 more times, either changing the groups (CCC, etc.) or the column from the df ('N_offspring', 'survival'), which takes up a lot of lines in the project.
I'm trying to find a way to still get the dictionary of each group in the end:
t_F0 = {'FERTUNITY - grandmaternal(F0)':[t_F0a,t_F0b,t_F0c,t_F0d]}
because it's very useful for me later, but in a less repetitive way with fewer lines.

Use itertools.product to generate all the keys, and a dict comprehension to generate the values:
from itertools import product
keys = [''.join(items) for items in product("CI", repeat=3)]
the_dict = { key: df[df['group'].str.contains(key)] for key in keys }
Similarly, you can generate the latter part of your test keys:
half_keys = [''.join(items) for items in product("CI", repeat=2)]
t_F0 = {
    'FERTUNITY - grandmaternal(F0)': [
        stats.ttest_ind(
            the_dict[f"C{half_key}"]['N_offspring'],
            the_dict[f"I{half_key}"]['N_offspring'],
            nan_policy='omit'
        ) for half_key in half_keys
    ],
}
As an aside, you should not use dict as a variable name: it already has a meaning (the type of dict objects).
As a second aside, this deals with the literal question of how to DRY up creating a dictionary. However, do consider what Chris said in comments; this may be an XY problem.
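If you also need to run the same battery of tests for other measurement columns (the question mentions 'survival'), the comprehension can be nested one level deeper. A minimal sketch, assuming the column names and result labels below; adjust them to your data (the_dict and half_keys are from the snippets above):
from scipy import stats  # as in the question
measures = ['N_offspring', 'survival']  # assumed column names
t_F0_all = {
    f'FERTUNITY - grandmaternal(F0) - {measure}': [
        stats.ttest_ind(
            the_dict[f"C{half_key}"][measure],
            the_dict[f"I{half_key}"][measure],
            nan_policy='omit'
        ) for half_key in half_keys
    ]
    for measure in measures
}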

Related

Unpack a python dictionary and save as variables

I have the following string extracted from a pandas column (it's a sports example):
unpack ="{'TB': [['Brady', 'Godwin'], ['2023-RD1', '2023-RD4']], 'KC': [['Mahomes'], ['2023-RD2']]}"
To unpack the string I use the following:
from ast import literal_eval
t_dict = literal_eval(unpack)
print(t_dict)
which gives me:
{'TB': [['Brady', 'Godwin'], ['2023-RD1', '2023-RD4']], 'KC': [['Mahomes'], ['2023-RD2']]}
I am now trying to extract all of these keys / values to variables/lists. My expected output is:
team1 = 'TB'
team2 = 'KC'
team1_trades_players = ['Brady', 'Godwin']
team1_trades_picks = ['2023-RD1', '2023-RD4']
team2_trades_players = ['Mahomes']
team2_trades_picks = ['2023-RD2']
I have tried the following, but I am unsure how to send the first iteration to team1 and the second iteration to team2:
#extracting team for each pick
for t in t_dict:
    print(t)
Gives me:
TB
KC
And then for the values I can correctly print them, but I'm unsure how to send them back to the lists:
#extracting lists for each key:
for traded in t_dict.values():
    #extracting the players traded for each team
    for players in traded[0]:
        print(players)
    #extracting picks for each team
    for picks in traded[1]:
        print(picks)
Produces:
Brady
Godwin
2023-RD1
2023-RD4
Mahomes
2023-RD2
I think I am close but am missing the final step of sending them back to their variables/lists. Any help would be greatly appreciated! Thanks!
If the number of teams is known beforehand it is pretty simple:
team1, team2 = t_dict.keys()
team1_trades_players, team1_trades_picks = t_dict[team1]
team2_trades_players, team2_trades_picks = t_dict[team2]
If the number of teams is not known beforehand, I would recommend just using t_dict directly.
I would recommend putting everything in a nested dict which you can then access easily:
t_dict = {'TB': [['Brady', 'Godwin'], ['2023-RD1', '2023-RD4']], 'KC': [['Mahomes'], ['2023-RD2']]}
t_nested = {k:{"players": v[0], "picks": v[1]} for k,v in t_dict.items()}
team1 = list(t_nested.keys())[0]
team2 = list(t_nested.keys())[1]
team1_trades_players = t_nested[team1]['players']
team1_trades_picks = t_nested[team1]['picks']
team2_trades_players = t_nested[team2]['players']
team2_trades_picks = t_nested[team2]['picks']
But for most use cases it would probably be better to keep everything in that nested dict structure and use it directly, instead of creating all these variables, which makes everything less dynamic.
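For example, a minimal sketch of working with the nested dict directly instead of unpacking it into variables:
for team, info in t_nested.items():
    print(team, info['players'], info['picks'])
# TB ['Brady', 'Godwin'] ['2023-RD1', '2023-RD4']
# KC ['Mahomes'] ['2023-RD2']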

Finding all possible permutations of a hash when given a list of grouped elements

Best way to show what I'm trying to do:
I have a list of different hashes that consist of ordered elements, separated by an underscore. Each element may or may not have other possible replacement values. I'm trying to generate a list of all possible combinations of this hash, after taking into account replacement values.
Example:
grouped_elements = [["1", "1a", "1b"], ["3", "3a"]]
original_hash = "1_2_3_4_5"
I want to be able to generate a list of the following hashes:
[
    "1_2_3_4_5",
    "1a_2_3_4_5",
    "1b_2_3_4_5",
    "1_2_3a_4_5",
    "1a_2_3a_4_5",
    "1b_2_3a_4_5",
]
The challenge is that this'll be needed on large dataframes.
So far here's what I have:
def return_all_possible_hashes(df, grouped_elements):
    rows_to_append = []
    for grouped_element in grouped_elements:
        for index, row in df[
            df["hash"].str.contains("|".join(grouped_element))
        ].iterrows():
            (element_used_in_hash,) = set(grouped_element) & set(row["hash"].split("_"))
            hash_used = row["hash"]
            replacement_elements = set(grouped_element) - set([element_used_in_hash])
            for replacement_element in replacement_elements:
                row["hash"] = hash_used.replace(
                    element_used_in_hash, replacement_element
                )
                rows_to_append.append(row)
    return df.append(rows_to_append)
But the problem is that this will only append hashes with all combinations of a given grouped_element, and not all combinations of all grouped_elements at the same time. So using the example above, my function would return:
[
    "1_2_3_4_5",
    "1a_2_3_4_5",
    "1b_2_3_4_5",
    "1_2_3a_4_5",
]
I feel like I'm not far from the solution, but I also feel stuck, so any help is much appreciated!
If you make a list of the original hash value's elements and replace each element with a list of all its possible variations, you can use itertools.product to get the Cartesian product across these sublists. Transforming each element of the result back to a string with '_'.join() will get you the list of possible hashes:
from itertools import product
def possible_hashes(original_hash, grouped_elements):
    hash_list = original_hash.split('_')
    variations = list(set().union(*grouped_elements))
    var_list = hash_list.copy()
    for i, h in enumerate(hash_list):
        if h in variations:
            for g in grouped_elements:
                if h in g:
                    var_list[i] = g
                    break
        else:
            var_list[i] = [h]
    return ['_'.join(h) for h in product(*var_list)]
possible_hashes("1_2_3_4_5", [["1", "1a", "1b"], ["3", "3a"]])
['1_2_3_4_5',
'1_2_3a_4_5',
'1a_2_3_4_5',
'1a_2_3a_4_5',
'1b_2_3_4_5',
'1b_2_3a_4_5']
To use this function on various original hash values stored in a dataframe column, you can do something like this:
df['hash'].apply(lambda x: possible_hashes(x, grouped_elements))
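If the end goal is one row per generated hash rather than a list-valued column, a possible follow-up sketch (assuming pandas >= 0.25 for DataFrame.explode and a column named 'hash') would be:
expanded = (
    df.assign(hash=df['hash'].apply(lambda x: possible_hashes(x, grouped_elements)))
      .explode('hash')              # one row per generated hash
      .reset_index(drop=True)
)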

Create dictionaries from data frames stored in a dictionary in Python

I have a for loop that cycles through and creates 3 data frames and stores them in a dictionary. From each of these data frames, I would like to be able to create another dictionary, but I can't figure out how to do this.
Here is the repetitive code without the loop:
Trad = allreports2[allreports2['Trad'].notna()]
Alti = allreports2[allreports2['Alti'].notna()]
Alto = allreports2[allreports2['Alto'].notna()]
Trad_dict = dict(zip(Trad.State, Trad.Position))
Alti_dict = dict(zip(Alti.State, Alti.Position))
Alto_dict = dict(zip(Alto.State, Alto.Position))
As stated earlier, I understand how to make the 3 dataframes by storing them in a dictionary and I understand what needs to go on the right side of the equal sign in the second statement in the for loop, but not what goes on the left side (denoted below as XXXXXXXXX).
Routes = ['Trad', 'Alti', 'Alto']
dfd = {}
for route in Routes:
    dfd[route] = allreports2[allreports2[route].notna()]
    XXXXXXXXX = dict(zip(dfd[route].State, dfd[route].Position))
(Please note: I am very new to Python and teaching myself so apologies in advance!)
This compromises readability, but this should work.
Routes = ['Trad', 'Alti', 'Alto']
dfd, output = [{}, {}]  # Unpack List
for route in Routes:
    dfd[route] = allreports2[allreports2[route].notna()]
    output[route] = dict(zip(dfd[route].State, dfd[route].Position))
Trad_dict, Alti_dict, Alto_dict = list(output.values())  # Unpack List
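An equivalent comprehension-only variant (a sketch under the same assumptions about allreports2) that skips the explicit loop:
dfd = {route: allreports2[allreports2[route].notna()] for route in Routes}
output = {route: dict(zip(sub.State, sub.Position)) for route, sub in dfd.items()}
# e.g. output['Trad'] holds the same mapping as Trad_dict above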
Reference
How can I get list of values from dict?

Algorithmic / coding help for a PySpark markov model

I need some help getting my brain around designing an (efficient) Markov chain in Spark (via Python). I've written it as best I could, but the code I came up with doesn't scale. Basically, for the various map stages I wrote custom functions, and they work fine for sequences of a couple thousand, but when we get into the 20,000+ range (and I've got some up to 800k) things slow to a crawl.
For those of you not familiar with Markov models, this is the gist of it.
This is my data. I've got the actual data (no header) in an RDD at this point.
ID, SEQ
500, HNL, LNH, MLH, HML
We look at sequences in tuples, so
(HNL, LNH), (LNH,MLH), etc..
And I need to get to this point, where I return a dictionary (for each row of data) that I then serialize and store in an in-memory database.
{500:
    {'HNLLNH': 0.333,
     'LNHMLH': 0.333,
     'MLHHML': 0.333,
     'LNHHNL': 0.000,
     etc..
    }
}
So in essence, each sequence is combined with the next (HNL, LNH become 'HNLLNH'); then, for all possible transitions (combinations of sequences), we count their occurrences and divide by the total number of transitions (3 in this case) to get their frequency of occurrence.
There were 3 transitions above, and one of those was HNLLNH, so for HNLLNH, 1/3 = 0.333.
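For concreteness, here is a minimal pure-Python sketch of the per-row computation I mean (the names are just illustrative):
from collections import Counter

seq = ['HNL', 'LNH', 'MLH', 'HML']
transitions = [a + b for a, b in zip(seq, seq[1:])]   # ['HNLLNH', 'LNHMLH', 'MLHHML']
counts = Counter(transitions)
freqs = {k: v / float(len(transitions)) for k, v in counts.items()}
# {'HNLLNH': 0.333..., 'LNHMLH': 0.333..., 'MLHHML': 0.333...}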
As a side note, and I'm not sure if it's relevant, but the values for each position in a sequence are limited: 1st position (H/M/L), 2nd position (M/L), 3rd position (H/M/L).
What my code had previously done was to collect() the rdd, and map it a couple times using functions I wrote. Those functions first turned the string into a list, then merged list[1] with list[2], then list[2] with list[3], then list[3] with list[4], etc.. so I ended up with something like this..
[HNLLNH],[LNHMLH],[MHLHML], etc..
Then the next function created a dictionary out of that list, using each list item as a key, counted the total occurrences of that key in the full list, and divided by len(list) to get the frequency. I then wrapped that dictionary in another dictionary, along with its ID number (resulting in the 2nd code block up above).
Like I said, this worked well for small-ish sequences, but not so well for lists with a length of 100k+.
Also, keep in mind, this is just one row of data. I have to perform this operation on anywhere from 10-20k rows of data, with rows of data varying between lengths of 500-800,000 sequences per row.
Any suggestions on how I can write pyspark code (using the API map/reduce/agg/etc.. functions) to do this efficiently?
EDIT
Code as follows. It probably makes sense to start at the bottom. Please keep in mind I'm learning this (Python and Spark) as I go, and I don't do this for a living, so my coding standards are not great.
def f(x):
    # Custom RDD map function
    # Combines two separate transactions
    # into a single transition state
    cust_id = x[0]
    trans = ','.join(x[1])
    y = trans.split(",")
    s = ''
    for i in range(len(y)-1):
        s = s + str(y[i] + str(y[i+1])) + ","
    return str(cust_id + ',' + s[:-1])

def g(x):
    # Custom RDD map function
    # Calculates the transition state probabilities
    # by adding up state-transition occurrences
    # and dividing by total transitions
    cust_id = str(x.split(",")[0])
    trans = x.split(",")[1:]
    temp_list = []
    middle = int((len(trans[0])+1)/2)
    for i in trans:
        temp_list.append((''.join(i)[:middle], ''.join(i)[middle:]))
    state_trans = {}
    for i in temp_list:
        state_trans[i] = temp_list.count(i)/(len(temp_list))
    my_dict = {}
    my_dict[cust_id] = state_trans
    return my_dict

def gen_tsm_dict_spark(lines):
    # Takes RDD/string input with format CUST_ID(or)PROFILE_ID,SEQ,SEQ,SEQ....
    # Returns RDD of dict with CUST_ID and tsm per customer
    # i.e. {cust_id : { ('NLN', 'LNN') : 0.33, ('HPN', 'NPN') : 0.66}
    # creates a tuple ([cust/profile_id], [SEQ,SEQ,SEQ])
    cust_trans = lines.map(lambda s: (s.split(",")[0], s.split(",")[1:]))
    with_seq = cust_trans.map(f)
    full_tsm_dict = with_seq.map(g)
    return full_tsm_dict

def main():
    result = gen_tsm_dict_spark(my_rdd)
    # Insert into DB
    for x in result.collect():
        for k, v in x.iteritems():
            db_insert(k, v)
You can try something like below. It depends heavily on toolz, but if you prefer to avoid external dependencies you can easily replace it with some standard Python libraries.
from __future__ import division
from collections import Counter
from itertools import product
from toolz.curried import sliding_window, map, pipe, concat
from toolz.dicttoolz import merge

# Generate all possible transitions
defaults = sc.broadcast(dict(map(
    lambda x: ("".join(concat(x)), 0.0),
    product(product("HNL", "NL", "HNL"), repeat=2))))

rdd = sc.parallelize(["500, HNL, LNH, NLH, HNL", "600, HNN, NNN, NNN, HNN, LNH"])

def process(line):
    """
    >>> process("000, HHH, LLL, NNN")
    ('000', {'LLLNNN': 0.5, 'HHHLLL': 0.5})
    """
    bits = line.split(", ")
    transactions = bits[1:]
    n = len(transactions) - 1
    frequencies = pipe(
        sliding_window(2, transactions),  # Get all transitions
        map(lambda p: "".join(p)),        # Joins strings
        Counter,                          # Count
        lambda cnt: {k: v / n for (k, v) in cnt.items()}  # Get frequencies
    )
    return bits[0], frequencies

def store_partition(iter):
    for (k, v) in iter:
        db_insert(k, merge([defaults.value, v]))

rdd.map(process).foreachPartition(store_partition)
Since you know all possible transitions I would recommend using a sparse representation and ignore zeros. Moreover you can replace dictionaries with sparse vectors to reduce memory footprint.
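A rough sketch of that sparse idea (the index mapping and the use of pyspark.mllib.linalg.SparseVector are my assumptions, building on the defaults broadcast above): give every possible transition a fixed index and store only the nonzero frequencies:
from pyspark.mllib.linalg import SparseVector

# Fixed position for every possible transition string
transition_index = {t: i for i, t in enumerate(sorted(defaults.value))}
n_transitions = len(transition_index)

def to_sparse(frequencies):
    # Only nonzero entries are stored; missing transitions are implicitly 0.0
    return SparseVector(n_transitions,
                        {transition_index[k]: v for k, v in frequencies.items()})

sparse_rdd = rdd.map(process).mapValues(to_sparse)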
You can achieve this result using pure PySpark; I did it with PySpark.
To create the frequencies, let's say you have already got this far and these are your input RDDs:
ID, SEQ
500, [HNL, LNH, MLH, HML ...]
and to get frequencies like (HNL, LNH), (LNH, MLH), ...:
inputRDD.map(lambda (k, states): get_frequencies(states)).flatMap(lambda x: x) \
    .reduceByKey(lambda v1, v2: v1 + v2)

def get_frequencies(states_list):
    """
    :param states_list: A list of customer states.
    :return: State frequencies list.
    """
    rest = []
    tuples_list = []
    for idx in range(0, len(states_list)):
        if idx + 1 < len(states_list):
            tuples_list.append((states_list[idx], states_list[idx+1]))
    unique = set(tuples_list)
    for value in unique:
        rest.append((value, tuples_list.count(value)))
    return rest
and you will get results like
((HNL, LNH), 98), ((LNH, MLH), 458), ...
After this you may convert the result RDDs into DataFrames, or you can insert directly into the DB using the RDD's mapPartitions.

Matching strings for multiple data set in Python

I am working in Python and I need to match strings from several data files. First I use pickle to unpack my files and then I place them into a list. I only want to match strings that have the same conditions. These conditions are indicated at the end of the string.
My working script looks approximately like this:
import pickle
f = open("data_a.dat")
list_a = pickle.load( f )
f.close()
f = open("data_b.dat")
list_b = pickle.load( f )
f.close()
f = open("data_c.dat")
list_c = pickle.load( f )
f.close()
f = open("data_d.dat")
list_d = pickle.load( f )
f.close()
for a in list_a:
    for b in list_b:
        for c in list_c:
            for d in list_d:
                if a.GetName()[12:] in b.GetName():
                    if a.GetName()[12:] in c.GetName():
                        if a.GetName()[12:] in d.GetName():
                            "do whatever"
This seems to work fine for these lists. The problems begin when I try to add 8 or 9 more data files for which I also need to match the same conditions. The script simply won't finish and gets stuck. I appreciate your help.
Edit: Each of the lists contains histograms named after the parameters that were used to create them. The names of the histograms contain these parameters and their values at the end of the string. In the example I did it for 2 data sets; now I would like to do it for 9 data sets without using multiple loops.
Edit 2: I just expanded the code to reflect more accurately what I want to do. Now if I try to do that for 9 lists, it not only looks horrible, but it also doesn't work.
Off the top of my head:
files = ["file_a", "file_b", "file_c"]
sets = []
for f in files:
    fh = open(f)
    sets.append(set(pickle.load(fh)))
    fh.close()
intersection = sets[0].intersection(*sets[1:])
EDIT: Well I overlooked your mapping to x.GetName()[12:], but you should be able to reduce your problem to set logic.
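A sketch of that set/dict idea, assuming the condition suffix always starts at the same position (index 12) in every file's histogram names; if it doesn't, you would need the substring test from the question instead:
import pickle

data_files = ["data_a.dat", "data_b.dat", "data_c.dat", "data_d.dat"]
lists = []
for name in data_files:
    fh = open(name)
    lists.append(pickle.load(fh))
    fh.close()

# Index every list by its condition suffix, then intersect the keys
by_condition = [{obj.GetName()[12:]: obj for obj in lst} for lst in lists]
common = set(by_condition[0]).intersection(*by_condition[1:])

for condition in common:
    matched = [d[condition] for d in by_condition]
    # "do whatever" with the matched histograms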
Here is a small piece of code you can take inspiration from. The main idea is the use of a recursive function.
For simplicity's sake, I assume the data is already loaded in lists, but you can load it from files beforehand:
data_files = [
    'data_a.dat',
    'data_b.dat',
    'data_c.dat',
    'data_d.dat',
    'data_e.dat',
]
lists = [pickle.load(open(f)) for f in data_files]
And because I don't really get the details of what you need to do, my goal here is to find the matches on the first four characters:
def do_wathever(string):
    print "I have matched the string '%s'" % string

lists = [
    ["hello", "world", "how", "grown", "you", "today", "?"],
    ["growl", "is", "a", "now", "on", "appstore", "too bad"],
    ["I", "wish", "I", "grow", "Magnum", "mustache", "don't you?"],
]

positions = [0 for i in range(len(lists))]

def recursive_match(positions, lists):
    strings = map(lambda p, l: l[p], positions, lists)
    match = True
    searched_string = strings.pop(0)[:4]
    for string in strings:
        if searched_string not in string:
            match = False
            break
    if match:
        do_wathever(searched_string)
    # increment positions:
    new_positions = positions[:]
    lists_len = len(lists)
    for i, l in enumerate(reversed(lists)):
        max_position = len(l) - 1
        list_index = lists_len - i - 1
        current_position = positions[list_index]
        if max_position > current_position:
            new_positions[list_index] += 1
            break
        else:
            new_positions[list_index] = 0
            continue
    return new_positions, not any(new_positions)

search_is_finished = False
while not search_is_finished:
    positions, search_is_finished = recursive_match(positions, lists)
Of course you can optimize a lot of things here; this is draft code. But take a look at the recursive function, which is the main concept.
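A more compact way to express the same brute-force search (a sketch, assuming the lists fit in memory and that the first four characters are the match key, as above):
from itertools import product

def find_matches(lists, prefix_len=4):
    # Try every combination of one item per list and test whether the
    # prefix of the first item appears in all of the other items.
    for combo in product(*lists):
        prefix = combo[0][:prefix_len]
        if all(prefix in other for other in combo[1:]):
            do_wathever(prefix)

find_matches(lists)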
In the end I used the built-in map function. I realize now I should have been even more explicit than I was (which I will do in the future).
My data files are histograms with 5 parameters, some with 3 or 4. Something like this,
par1=["list with some values"]
par2=["list with some values"]
par3=["list with some values"]
par4=["list with some values"]
par5=["list with some values"]
I need to examine the behavior of the plotted quantity for each possible combination of the parameter values. In the end, I get a data file with ~300 histograms, each identified in its name by the corresponding parameter values and the sample name. It looks something like,
datasample1-par1=val1-par2=val2-par3=val3-par4=val4-par5=val5
datasample1-"permutation of the above values"
...
datasample9-par1=val1-par2=val2-par3=val3-par4=val4-par5=val5
datasample9-"permutation of the above values"
So I get 300 histograms for each of the 9 data files, but luckily all of these histograms are created in the same order. Hence I can pair all of them just using the built-in map function. I unpack the data files, put each into a list, and then use the map function to pair each histogram with its corresponding configuration in the other data samples.
for lst in map(None, data1_histosli, data2_histosli, ...data9_histosli):
    do_something(lst)
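For reference, map(None, ...) only exists in Python 2; on Python 3 the closest equivalent I'm aware of is itertools.zip_longest, which also pads the shorter lists with None:
from itertools import zip_longest

# data1_histosli, data2_histosli, ..., data9_histosli as above
for lst in zip_longest(data1_histosli, data2_histosli, data9_histosli):
    do_something(lst)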
This solves my problem. Thank you to all for your help!
