Comparing keys from the first dictionary to values from the second dictionary - Python

I need some help again.
I have a big database file (let's call it db.csv) containing a lot of information. Simplified data is used below to illustrate.
I ran usearch61 -cluster_fast on my gene sequences in order to cluster them and obtained a file named 'clusters.uc'. I opened it as CSV, then wrote some code to create a dictionary (let's say dict_1) with the cluster numbers as keys and the gene_ids (VFG...) as values.
Here is an example of what I made and then stored in a file (dict_1):
0 ['VFG003386', 'VFG034084', 'VFG003381']
1 ['VFG000838', 'VFG000630', 'VFG035932', 'VFG000636']
2 ['VFG018349', 'VFG018485', 'VFG043567']
...
14471 ['VFG015743', 'VFG002143']
So far so good. Then, using db.csv, I made another dictionary (dict_2) where gene_ids (VFG...) are keys and VF_Accessions (IA..., CVF... or VF...) are values. Illustration (dict_2):
VFG044259 IA027
VFG044258 IA027
VFG011941 CVF397
VFG012016 CVF399
...
What I want in the end is to have, for each VF_Accession, the numbers of the cluster groups it appears in. Illustration:
IA027 [0,5,6,8]
CVF399 [15, 1025, 1562, 1712]
...
So, since I'm still a beginner at coding, I guess I need code that compares the values from dict_1 (VFG...) to the keys from dict_2 (VFG...). If they match, put the VF_Accession as a key with all cluster numbers as values. Since VF_Accessions are keys they can't have duplicates, so I need a dictionary of lists; I guess I can do that because I did it for dict_1. But my problem is that I can't figure out a way to compare the values from dict_1 to the keys from dict_2 and assign cluster numbers to each VF_Accession. Please help me.

First, let's give your dictionaries better names than dict_1, dict_2, ...; that makes it easier to work with them and to remember what they contain.
You first created a dictionary that has cluster numbers as keys and gene_ids (VFG...) as values:
cluster_nr_to_gene_ids = {0: ['VFG003386', 'VFG034084', 'VFG003381', 'VFG044259'],
                          1: ['VFG000838', 'VFG000630', 'VFG035932', 'VFG000636'],
                          2: ['VFG018349', 'VFG018485', 'VFG043567', 'VFG012016'],
                          5: ['VFG011941'],
                          7949: ['VFG003386'],
                          14471: ['VFG015743', 'VFG002143', 'VFG012016']}
And you also have another dictionary where gene_ids are keys and VF_Accessions (IA... or CVF.. or VF...) are values:
gene_id_to_vf_accession = {'VFG044259': 'IA027',
                           'VFG044258': 'IA027',
                           'VFG011941': 'CVF397',
                           'VFG012016': 'CVF399',
                           'VFG000676': 'VF0142',
                           'VFG002231': 'VF0369',
                           'VFG003386': 'CVF051'}
And we want to create a dictionary, vf_accession_to_cluster_groups, where each VF_Accession key has as its value the numbers of the cluster groups it appears in.
We also note that a VF Accession can correspond to multiple gene IDs (for example, the VF Accession IA027 has both the VFG044259 and the VFG044258 gene IDs).
So we use defaultdict to build a dictionary with VF Accessions as keys and lists of gene IDs as values:
from collections import defaultdict
vf_accession_to_gene_ids = defaultdict(list)
for gene_id, vf_accession in gene_id_to_vf_accession.items():
    vf_accession_to_gene_ids[vf_accession].append(gene_id)
For the sample data I posted above, vf_accession_to_gene_ids now looks like:
defaultdict(<class 'list'>, {'VF0142': ['VFG000676'],
                             'CVF051': ['VFG003386'],
                             'IA027': ['VFG044258', 'VFG044259'],
                             'CVF399': ['VFG012016'],
                             'CVF397': ['VFG011941'],
                             'VF0369': ['VFG002231']})
Now we can loop over each VF Accession and look up its list of gene IDs. Then, for each gene ID, we loop over every cluster and see if the gene ID is present there:
vf_accession_to_cluster_groups = {}
for vf_accession in vf_accession_to_gene_ids:
    gene_ids = vf_accession_to_gene_ids[vf_accession]
    cluster_group = []
    for gene_id in gene_ids:
        for cluster_nr in cluster_nr_to_gene_ids:
            if gene_id in cluster_nr_to_gene_ids[cluster_nr]:
                cluster_group.append(cluster_nr)
    vf_accession_to_cluster_groups[vf_accession] = cluster_group
The end result for the above sample data now is:
{'VF0142': [],
 'CVF051': [0, 7949],
 'IA027': [0],
 'CVF399': [2, 14471],
 'CVF397': [5],
 'VF0369': []}

Caveat: I don't do much Python development, so there's likely a better way to do this. You can first map your VFG... gene_ids to their cluster numbers, and then use that to process the second dictionary:
from collections import defaultdict
import sys
import ast
# see https://stackoverflow.com/questions/960733/python-creating-a-dictionary-of-lists
vfg_cluster_map = defaultdict(list)
# map all of the vfg... keys to their cluster numbers first
with open(sys.argv[1], 'r') as dict_1:
    for line in dict_1:
        # split the line at the first space to separate the cluster number and gene ID list
        # e.g. after splitting the line "0 ['VFG003386', 'VFG034084', 'VFG003381']",
        # cluster_group_num holds "0", and vfg_list holds "['VFG003386', 'VFG034084', 'VFG003381']"
        cluster_group_num, vfg_list = line.strip().split(' ', 1)
        cluster_group_num = int(cluster_group_num)
        # convert "['VFG...', 'VFG...']" from a string to an actual list
        vfg_list = ast.literal_eval(vfg_list)
        for vfg in vfg_list:
            vfg_cluster_map[vfg].append(cluster_group_num)
# you now have a dictionary mapping gene IDs to the clusters they
# appear in, e.g
# {'VFG003386': [0],
# 'VFG034084': [0],
# ...}
# you can look in that dictionary to find the cluster numbers corresponding
# to your vfg... keys in dict_2 and add them to the list for that vf_accession
vf_accession_cluster_map = defaultdict(list)
with open(sys.argv[2], 'r') as dict_2:
    for line in dict_2:
        vfg, vf_accession = line.strip().split(' ')
        # add the list of cluster numbers corresponding to this vfg... to
        # the list of cluster numbers corresponding to this vf_accession
        vf_accession_cluster_map[vf_accession].extend(vfg_cluster_map[vfg])

for vf_accession, cluster_list in vf_accession_cluster_map.items():
    print(vf_accession + ' ' + str(cluster_list))
Then save the above script and invoke it like python <script name> dict1_file dict2_file > output (or you could write the strings to a file instead of printing them and redirecting).
EDIT: After looking at @BioGeek's answer, I should note that it would make more sense to process this all in one shot than to create dict_1 and dict_2 files, read them in, parse the lines back into numbers and lists, etc. If you don't need to write the dictionaries to a file first, then you can just add your other code to the script and use the dictionaries directly.
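For illustration, here is a minimal sketch of that in-memory, one-shot version, assuming the two dictionaries from the answer above (cluster_nr_to_gene_ids and gene_id_to_vf_accession) already exist as Python objects rather than files:
from collections import defaultdict

# Invert the cluster dictionary once: gene ID -> list of cluster numbers
gene_id_to_clusters = defaultdict(list)
for cluster_nr, gene_ids in cluster_nr_to_gene_ids.items():
    for gene_id in gene_ids:
        gene_id_to_clusters[gene_id].append(cluster_nr)

# Each VF_Accession then just collects the cluster numbers of its gene IDs
vf_accession_to_cluster_groups = defaultdict(list)
for gene_id, vf_accession in gene_id_to_vf_accession.items():
    vf_accession_to_cluster_groups[vf_accession].extend(gene_id_to_clusters[gene_id])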

Related

Count occurrences of a specific string within multi-valued elements in a set

I have generated a list of genes
genes = ['geneName1', 'geneName2', ...]
and a set of their interactions:
geneInt = {('geneName1', 'geneName2'), ('geneName1', 'geneName3'),...}
I want to find out how many interactions each gene has and put that in a vector (or dictionary) but I struggle to count them. I tried the usual approach:
interactionList = []
for gene in genes:
    interactions = geneInt.count(gene)
    interactionList.append(interactions)
but of course the code fails because my set contains elements that are made out of two values while I need to iterate over the single values within.
I would argue that you are using the wrong data structure to hold interactions. You can represent interactions as a dictionary keyed by gene name, whose values are a set of all the genes it interacts with.
Let's say you currently have a process that does something like this at some point:
geneInt = set()
...
geneInt.add((gene1, gene2))
Change it to
geneInt = collections.defaultdict(set)
...
geneInt[gene1].add(gene2)
If the interactions are symmetrical, add a line
geneInt[gene2].add(gene1)
Now, to count the number of interactions, you can do something like
intCounts = {gene: len(ints) for gene, ints in geneInt.items()}
Counting your original list is simple if the interactions are one-way as well:
intCounts = dict.fromkeys(genes, 0)
for gene, _ in geneInt:
    intCounts[gene] += 1
If each interaction is two-way, there are three possibilities:
Both interactions are represented in the set: the above loop will work.
Only one interaction of a pair is represented: change the loop to
for gene1, gene2 in geneInt:
    intCounts[gene1] += 1
    if gene1 != gene2:
        intCounts[gene2] += 1
Some reverse interactions are represented, some are not. In this case, transform geneInt into a dictionary of sets as shown in the beginning.
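As a rough sketch of that last transformation (geneInt is the existing set of pairs from the question; the name interactions is just illustrative, and interactions are assumed to be symmetrical):
import collections

# Rebuild the pair set as a dictionary of sets; adding both directions
# makes the result independent of which direction was stored originally.
interactions = collections.defaultdict(set)
for gene1, gene2 in geneInt:
    interactions[gene1].add(gene2)
    interactions[gene2].add(gene1)

intCounts = {gene: len(ints) for gene, ints in interactions.items()}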
Try something like this:
interactions = {}
for gene in genes:
    interactions_count = 0
    for tup in geneInt:
        interactions_count += tup.count(gene)
    interactions[gene] = interactions_count
Use a dictionary, and keep incrementing the value for every gene you see in each tuple in the set geneInt.
interactions_counter = dict()
for interaction in geneInt:
    for gene in interaction:
        interactions_counter[gene] = interactions_counter.get(gene, 0) + 1
The dict.get(key, default) method returns the value at the given key, or the specified default if the key doesn't exist.
For the set geneInt={('geneName1', 'geneName2'), ('geneName1', 'geneName3')}, we get:
interactions_counter = {'geneName1': 2, 'geneName2': 1, 'geneName3': 1}

Convert unstructured blocks of data in a columnwise manner (DataFrame)

Description of the problem:
I have an external *.xls file that I have converted to a *.csv file containing blocks of data such as:
"Legend number one";;;;Number of items;6
X;-358.6806792;-358.6716338;;;
Y;0.8767189;0.8966855;Avg;;50.1206378
Z;-0.7694626;-0.7520983;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
;;;;;
There are many, many blocks.
Each block may contain some additional lines of data:
"Legend number six";;;;Number of items;19
X;-358.6806792;-358.6716338;;;
Y;0.8767189;0.8966855;Avg;;50.1206378
Z;-0.7654644;-0.75283;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
A;0;1;Value;;0
B;1;0;;;
;;;;;
The structure is such that an empty line separates each block, which is the ';;;;;' line in my samples.
The first line after this begins with a unique identifier of the block.
It appears that each line contains 6 elements such as key1;elem1;elem2;key2;elem3;elem4, which would be nice to represent as two 3-element vectors, key1;elem1;elem2 and key2;elem3;elem4, on two separate lines. Example for the second sample:
"Legend number six";;
;;Number of items;19
X;-358.6806792;-358.6716338;
;;
Y;0.8767189;0.8966855;
Avg;;50.1206378
Z;-0.7654644;-0.75283;
Std;;-0.0010354
D;8.0153902;8;
Err;;1.010385
A;0;1;
Value;;0
B;1;0;
;;
;;;;;
Some are empty but I do not want to discard them for the moment.
But I would like to end up with a DataFrame containing columnwise elements for each block of data.
The cleanest "pre solution" I have so far:
With this Python code I ended up with a more organized "list of dictionaries":
import os, sys, re, glob
import pandas as pd
csvFile = os.path.join(workingDir,'file.csv')
h = 0 # Number of lines to skip in head
s = 2 # number of values per key
s += 1
str1 = 'Number of items'
# Reading file in a global list and storing each line in a sublist:
A = [line.split(';') for line in open(csvFile).read().split('\n')]
# This code splits each 6-elements sublist in one new sublist
# containing two-elements; each element with 3 values:
B = [(';'.join(el[:s])+'\n'+';'.join(el[s:])).split('\n') for el in A]
# Init empty structures:
names = [] # to store block unique identifier (the name in the legend)
L = [] # future list of dictionnaries
for el in (B):
    for idx, elj in enumerate(el):
        vi = elj.split(';')[1:]
        # Here we grep the name only when the 2nd element of
        # the first line contains the string "Number of items",
        # which is constant all over the file:
        if len(vi) > 1 and vi[0] == str1:
            name = el[idx-1].split(';')[0]
            names.append(name)
            #print(name)
#print(name)
# We loop again over B to append in a new list one dictionary
# per vector of 3 elements because each vector of 3 elements
# is structured like ; key;elem1;elem2
for el in (B):
    for elj in (el):
        k = elj.split(';')[0]
        v = elj.split(';')[1:]
        # Little tweak because the key2;elem3;elem4 of the
        # first line (the one containing the name) have the
        # key in the second place like "elem3;key2;elem4" :
        if len(v) > 1 and v[0] == str1:
            kp = v[0]
            v = [v[1], k]
            k = kp
        if k != '':
            dct = {k: v}
            L.append(dct)
So far I have been unsuccessful at extracting the name as a global identifier and all the values of the blocks as variables. I can't use some modulo-based technique because of the variable number of entries in each block of data, even though all blocks contain at least some common keys.
I also tried a while condition within a for loop over each dictionary, but it's a mess now.
zip could potentially be a nice option but I don't really know how to use it properly.
Target DataFrame:
What I'd like to end up with should ideally look something like a DataFrame containing:
index 'Number of items' 'X' '' 'Y' 'Avg' 'Z' 'Std' ...
"Legend number one" 6 ...
"Legend number six" 19 ...
"Legend number 11" 6 ...
"Legend number 15" 18 ...
The column names are the keys, and the table contains the values for each block of data on a separate line.
If there is a numbered index and a new column with "Legend name", that's OK as well.
CSV sample to play with:
"Legend number one";;;;Number of items;6
X;8.6806792;8.6716338;;;
Y;0.1557;0.1556;Avg;;50.1206378
Z;-0.7859;-0.7860;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
;;;;;
"Legend number six";;;;Number of items;19
X;56.6806792;56.6716338;;;
Y;0.1324;0.1322;Avg;;50.1206378
Z;-0.7654644;-0.75283;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
A;0;1;Value;;0
B;1;0;;;
;;;;;
"Legend number 11";;;;Number of items;6
X;358.6806792;358.6716338;;;
Y;0.1324;0.1322;Avg;;50.1206378
Z;-0.7777;-0.7778;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
;;;;;
"Legend number 15";;;;Number of items;18
X;58.6806792;58.6716338;;;
Y;0.1324;0.1322;Avg;;50.1206378
Z;0.5555;0.5554;Std;;-0.0010354
D;8.0153902;8;Err;;1.010385
A;0;1;Value;;0
B;1;0;;;
C;0;0;k;1;0
;;;;;
I'm using Ubuntu and Python 3.6 but the script must work on a Windows computer as well.
Appending this to the previous code should work pretty well:
Dict1 = {}  # outer dictionary: one entry per block
for elem in L:
    for key, val in elem.items():
        if key in names:
            name = key
            Dict2 = {}
        else:
            Dict2[key] = val
        Dict1[name] = Dict2
df1 = pd.DataFrame.from_dict(Dict1, orient='index')
df2 = pd.DataFrame(index=df1.index)
for col in df1.columns:
    colS = df1[col].apply(pd.Series)
    colS = colS.rename(columns=lambda x: col + '_' + str(x))
    df2 = pd.concat([df2[:], colS[:]], axis=1)

df2.to_csv('output.csv', sep=',', index=True, header=True)
There are probably many other ways to go...
This link was helpful:
https://chrisalbon.com/python/data_wrangling/pandas_expand_cells_containing_lists/
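As yet another rough sketch (my own variation, not part of the code above), one could also build one record per legend directly while reading the file and let pandas assemble the table. It assumes every data line holds two key;elem1;elem2 triplets; the file name 'file.csv' and the key_0/key_1 column names are illustrative:
import pandas as pd

records = {}
current = None
with open('file.csv') as fh:
    for raw in fh:
        cells = raw.rstrip('\n').split(';')
        if all(c == '' for c in cells):
            # ';;;;;' separator line: close the current block
            current = None
            continue
        if current is None:
            # First line of a block: legend name, then the
            # "...;Number of items;6" pair with its key in the 5th column
            current = cells[0]
            records[current] = {}
            if len(cells) > 5 and cells[4]:
                records[current][cells[4]] = cells[5]
            continue
        # Ordinary line: two key;elem1;elem2 triplets side by side
        for key, *vals in (cells[:3], cells[3:6]):
            if key:
                for i, v in enumerate(vals):
                    if v != '':
                        records[current]['{}_{}'.format(key, i)] = v

df = pd.DataFrame.from_dict(records, orient='index')
That gives one row per legend with columns such as 'Number of items', 'X_0' or 'Avg_1', similar to the columnwise layout described above.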

Indicating a population structure in EggLib Python

In Python, I am using EggLib. I am trying to calculate Jost's D value per SNP found in a VCF file.
Data
Data is here in VCF format. The data set is small, there are 2 populations, 100 individuals per population and 6 SNPs (all on chromosome 1).
Each individual is named Pp.Ii, where p is the population index it belongs to and i is the individual index.
Code
My difficulties concern the specification of the population structure. Here is my attempt:
### Read the vcf file ###
vcf = egglib.io.VcfParser("MyData.vcf")
### Create the `Structure` object ###
# Dictionary for a given cluster. There is only one cluster.
dcluster = {}
# Loop through each population
for popIndex in [0, 1]:
    # Dictionary for a given population. There are two populations
    dpop = {}
    # Loop through each individual
    for IndIndex in range(popIndex * 100, (popIndex + 1) * 100):
        # A single list to define an individual
        dpop[IndIndex] = [IndIndex*2, IndIndex*2 + 1]
    dcluster[popIndex] = dpop
struct = {0: dcluster}
### Define the population structure ###
Structure = egglib.stats.make_structure(struct, None)
### Configurate the 'ComputeStats' object ###
cs = egglib.stats.ComputeStats()
cs.configure(only_diallelic=False)
cs.add_stats('Dj') # Jost's D
### Isolate a SNP ###
vcf.next()
site = egglib.stats.site_from_vcf(vcf)
### Calculate Jost's D ###
cs.process_site(site, struct=Structure)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/egglib/stats/_cstats.py", line 431, in process_site
    self._frq.process_site(site, struct=struct)
  File "/Library/Python/2.7/site-packages/egglib/stats/_freq.py", line 159, in process_site
    if sum(struct) != site._obj.get_ning(): raise ValueError, 'invalid structure (sample size is required to match)'
ValueError: invalid structure (sample size is required to match)
The documentation here indicates:
[The Structure object] is a tuple containing two items, each being a dict. The first one represents the ingroup and the second represents the outgroup.
The ingroup dictionary is itself a dictionary holding more dictionaries, one for each cluster of populations. Each cluster dictionary is a dictionary of populations, populations being themselves represented by a dictionary. A population dictionary is, again, a dictionary of individuals. Fortunately, individuals are represented by lists.
An individual list contains the index of all samples belonging to this individual. For haploid data, individuals will be one-item lists. In other cases, all individual lists are required to have the same number of items (consistent ploidy). Note that, if the ploidy is more than one, nothing enforces that samples of a given individual are grouped within the original data.
The keys of the ingroup dictionary are the labels identifying each cluster. Within a cluster dictionary, the keys are population labels. Finally, within a population dictionary, the keys are individual labels.
The second dictionary represents the outgroup. Its structure is simpler: it has individual labels as keys, and lists of corresponding sample indexes as values. The outgroup dictionary is similar to any ingroup population dictionary. The ploidy is required to match over all ingroup and outgroup individuals.
but I fail to make sense of it. The example provided is for FASTA format, and I don't understand how to extend the logic to the VCF format.
There are two errors.
First error
The function make_structure returns the Structure object but does not save it within stats. You therefore have to save this output and use it in the function process_site.
Structure = egglib.stats.make_structure(struct, None)
Second error
The Structure object must designate haploids. Therefore, create the dictionary as
dcluster = {}
for popIndex in [0, 1]:
    dpop = {}
    for IndIndex in range(popIndex * 100, (popIndex + 1) * 100):
        dpop[IndIndex] = [IndIndex]
    dcluster[popIndex] = dpop
struct = {0: dcluster}
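As a sketch of how the two fixes fit together, reusing the calls already present in the question (nothing new beyond the haploid structure above):
Structure = egglib.stats.make_structure(struct, None)  # keep the returned object

cs = egglib.stats.ComputeStats()
cs.configure(only_diallelic=False)
cs.add_stats('Dj')  # Jost's D

vcf.next()
site = egglib.stats.site_from_vcf(vcf)
cs.process_site(site, struct=Structure)  # pass the saved Structure, not struct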

Algorithmic / coding help for a PySpark Markov model

I need some help getting my brain around designing an (efficient) Markov chain in Spark (via Python). I've written it as best I could, but the code I came up with doesn't scale. Basically, for the various map stages I wrote custom functions, and they work fine for sequences of a couple thousand, but when we get into the 20,000+ range (and I've got some up to 800k) things slow to a crawl.
For those of you not familiar with Markov models, this is the gist of it.
This is my data. I've got the actual data (no header) in an RDD at this point.
ID, SEQ
500, HNL, LNH, MLH, HML
We look at sequences in tuples, so
(HNL, LNH), (LNH,MLH), etc..
And I need to get to this point, where I return a dictionary (for each row of data) that I then serialize and store in an in-memory database.
{500:
{HNLLNH : 0.333},
{LNHMLH : 0.333},
{MLHHML : 0.333},
{LNHHNL : 0.000},
etc..
}
So in essence, each sequence is combined with the next (HNL,LNH become 'HNLLNH'), then for all possible transitions (combinations of sequences) we count their occurrence and then divide by the total number of transitions (3 in this case) and get their frequency of occurrence.
There were 3 transitions above, and one of those was HNLLNH. So for HNLLNH, 1/3 = 0.333.
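As a tiny plain-Python illustration of that calculation on the sample row (the variable names here are only for the example):
from collections import Counter

seq = ['HNL', 'LNH', 'MLH', 'HML']
transitions = [a + b for a, b in zip(seq, seq[1:])]      # ['HNLLNH', 'LNHMLH', 'MLHHML']
counts = Counter(transitions)
freqs = {t: n / float(len(transitions)) for t, n in counts.items()}
# {'HNLLNH': 0.333..., 'LNHMLH': 0.333..., 'MLHHML': 0.333...}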
As a side note, and I'm not sure if it's relevant, but the values for each position in a sequence are limited: 1st position (H/M/L), 2nd position (M/L), 3rd position (H/M/L).
What my code had previously done was to collect() the RDD and map it a couple of times using functions I wrote. Those functions first turned the string into a list, then merged list[1] with list[2], then list[2] with list[3], then list[3] with list[4], etc., so I ended up with something like this:
[HNLLNH],[LNHMLH],[MHLHML], etc..
Then the next function created a dictionary out of that list, using each list item as a key, counting the total occurrence of that key in the full list, and dividing by len(list) to get the frequency. I then wrapped that dictionary in another dictionary, along with its ID number (resulting in the 2nd code block above).
Like I said, this worked well for small-ish sequences, but not so well for lists with a length of 100k+.
Also, keep in mind, this is just one row of data. I have to perform this operation on anywhere from 10-20k rows of data, with rows of data varying between lengths of 500-800,000 sequences per row.
Any suggestions on how I can write pyspark code (using the API map/reduce/agg/etc.. functions) to do this efficiently?
EDIT
Code as follows. It probably makes sense to start at the bottom. Please keep in mind I'm learning this (Python and Spark) as I go, and I don't do this for a living, so my coding standards are not great.
def f(x):
    # Custom RDD map function
    # Combines two separate transactions
    # into a single transition state
    cust_id = x[0]
    trans = ','.join(x[1])
    y = trans.split(",")
    s = ''
    for i in range(len(y)-1):
        s = s + str(y[i] + str(y[i+1])) + ","
    return str(cust_id + ',' + s[:-1])

def g(x):
    # Custom RDD map function
    # Calculates the transition state probabilities
    # by adding up state-transition occurrences
    # and dividing by total transitions
    cust_id = str(x.split(",")[0])
    trans = x.split(",")[1:]
    temp_list = []
    middle = int((len(trans[0])+1)/2)
    for i in trans:
        temp_list.append((''.join(i)[:middle], ''.join(i)[middle:]))
    state_trans = {}
    for i in temp_list:
        state_trans[i] = temp_list.count(i)/(len(temp_list))
    my_dict = {}
    my_dict[cust_id] = state_trans
    return my_dict

def gen_tsm_dict_spark(lines):
    # Takes RDD/string input with format CUST_ID(or)PROFILE_ID,SEQ,SEQ,SEQ....
    # Returns RDD of dict with CUST_ID and tsm per customer
    # i.e. {cust_id : {('NLN', 'LNN') : 0.33, ('HPN', 'NPN') : 0.66}}
    # creates a tuple ([cust/profile_id], [SEQ,SEQ,SEQ])
    cust_trans = lines.map(lambda s: (s.split(",")[0], s.split(",")[1:]))
    with_seq = cust_trans.map(f)
    full_tsm_dict = with_seq.map(g)
    return full_tsm_dict

def main():
    result = gen_tsm_dict_spark(my_rdd)
    # Insert into DB
    for x in result.collect():
        for k, v in x.iteritems():
            db_insert(k, v)
You can try something like below. It depends heavily on toolz, but if you prefer to avoid external dependencies you can easily replace it with some standard Python libraries.
from __future__ import division
from collections import Counter
from itertools import product
from toolz.curried import sliding_window, map, pipe, concat
from toolz.dicttoolz import merge

# Generate all possible transitions
defaults = sc.broadcast(dict(map(
    lambda x: ("".join(concat(x)), 0.0),
    product(product("HNL", "NL", "HNL"), repeat=2))))

rdd = sc.parallelize(["500, HNL, LNH, NLH, HNL", "600, HNN, NNN, NNN, HNN, LNH"])

def process(line):
    """
    >>> process("000, HHH, LLL, NNN")
    ('000', {'LLLNNN': 0.5, 'HHHLLL': 0.5})
    """
    bits = line.split(", ")
    transactions = bits[1:]
    n = len(transactions) - 1

    frequencies = pipe(
        sliding_window(2, transactions),                  # Get all transitions
        map(lambda p: "".join(p)),                        # Join strings
        Counter,                                          # Count
        lambda cnt: {k: v / n for (k, v) in cnt.items()}  # Get frequencies
    )

    return bits[0], frequencies

def store_partition(iter):
    for (k, v) in iter:
        db_insert(k, merge([defaults.value, v]))

rdd.map(process).foreachPartition(store_partition)
Since you know all possible transitions, I would recommend using a sparse representation and ignoring zeros. Moreover, you can replace dictionaries with sparse vectors to reduce the memory footprint.
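For the dictionary output, "ignore zeros" can be as simple as skipping the merge with the broadcast defaults, so only transitions that actually occur are stored (a sketch; db_insert is the same placeholder DB call as in the question):
def store_partition_sparse(iter):
    for (k, v) in iter:
        # v already contains only the observed transitions and their frequencies
        db_insert(k, v)

rdd.map(process).foreachPartition(store_partition_sparse)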
You can achieve this result using pure PySpark; I did it that way.
To create the frequencies, let's say you have already got to this point, and these are the input RDDs:
ID, SEQ
500, [HNL, LNH, MLH, HML ...]
and to get frequencies like (HNL, LNH), (LNH, MLH), ...:
inputRDD.map(lambda (k, list): get_frequencies(list)).flatMap(lambda x: x) \
    .reduceByKey(lambda v1, v2: v1 + v2)
def get_frequencies(states_list):
    """
    :param states_list: It's a list of customer states.
    :return: State frequencies list.
    """
    rest = []
    tuples_list = []
    for idx in range(0, len(states_list)):
        if idx + 1 < len(states_list):
            tuples_list.append((states_list[idx], states_list[idx+1]))
    unique = set(tuples_list)
    for value in unique:
        rest.append((value, tuples_list.count(value)))
    return rest
and you will get results like:
((HNL, LNH), 98),((LNH, MLH), 458),() ......
After this you may convert the result RDDs into DataFrames, or you can directly insert into the DB using the RDD's mapPartitions.

Optimizing searches in very large csv files

I have a csv file with a single column, but 6.2 million rows, all containing strings between 6 and 20ish letters. Some strings will be found in duplicate (or more) entries, and I want to write these to a new csv file - a guess is that there should be around 1 million non-unique strings. That's it, really. Continuously searching through a dictionary of 6 million entries does take its time, however, and I'd appreciate any tips on how to do it. Any script I've written so far takes at least a week (!) to run, according to some timings I did.
First try:
import csv

in_file_1 = open('UniProt Trypsinome (full).csv', 'r')
in_list_1 = list(csv.reader(in_file_1))
out_file_1 = open('UniProt Non-Unique Reference Trypsinome.csv', 'w+')
out_file_2 = open('UniProt Unique Trypsin Peptides.csv', 'w+')
writer_1 = csv.writer(out_file_1)
writer_2 = csv.writer(out_file_2)

# Create trypsinome dictionary construct
ref_dict = {}
for row in range(len(in_list_1)):
    ref_dict[row] = in_list_1[row]

# Find unique/non-unique peptides from trypsinome
Peptide_list = []
Uniques = []
for n in range(len(in_list_1)):
    Peptide = ref_dict.pop(n)
    if Peptide in ref_dict.values():  # Non-unique peptides
        Peptide_list.append(Peptide)
    else:
        Uniques.append(Peptide)  # Unique peptides

for m in range(len(Peptide_list)):
    Write_list = (str(Peptide_list[m]).replace("'","").replace("[",'').replace("]",''),'')
    writer_1.writerow(Write_list)
Second try:
in_file_1 = open('UniProt Trypsinome (full).csv', 'r')
in_list_1 = list(csv.reader(in_file_1))
out_file_1 = open('UniProt Non-Unique Reference Trypsinome.csv', 'w+')
writer_1 = csv.writer(out_file_1)

ref_dict = {}
for row in range(len(in_list_1)):
    Peptide = in_list_1[row]
    if Peptide in ref_dict.values():
        write = (in_list_1[row], '')
        writer_1.writerow(write)
    else:
        ref_dict[row] = in_list_1[row]
EDIT: here are a few lines from the csv file:
SELVQK
AKLAEQAER
AKLAEQAERR
LAEQAER
LAEQAERYDDMAAAMK
LAEQAERYDDMAAAMKK
MTMDKSELVQK
YDDMAAAMKAVTEQGHELSNEER
YDDMAAAMKAVTEQGHELSNEERR
Do it with Numpy. Roughly:
import numpy as np

column = 42
mat = np.loadtxt("thefile", dtype=[TODO])
# Find the values that occur more than once in that column
values, counts = np.unique(mat[:, column], return_counts=True)
dupes = set(values[counts > 1])
for row in mat:
    if row[column] in dupes:
        print row
You could even vectorize the output stage using numpy.savetxt and the char-array operators, but it probably won't make very much difference.
First hint: Python has support for lazy evaluation; better to use it when dealing with huge datasets. So:
iterate over your csv.reader instead of building a huge in-memory list,
don't build huge in-memory lists with ranges - use enumerate(seq) instead if you need both the item and index, and just iterate over your sequence's items if you don't need the index.
Second hint: the main point of using a dict (hashtable) is to look up keys, not values... So don't build a huge dict that's used as a list.
Third hint: if you just want a way to store "already seen" values, use a set.
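Putting those hints together, a minimal sketch (same file names as in the question) could look like:
import csv

seen = set()
duplicates = set()
with open('UniProt Trypsinome (full).csv') as in_f:
    for row in csv.reader(in_f):      # iterate lazily, no huge in-memory list
        if not row:
            continue
        peptide = row[0]
        if peptide in seen:
            duplicates.add(peptide)   # second (or later) occurrence
        else:
            seen.add(peptide)

with open('UniProt Non-Unique Reference Trypsinome.csv', 'w') as out_f:
    out_f.write('\n'.join(duplicates))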
I'm not so good at Python, so I don't know how 'in' works, but your algorithm seems to run in O(n²).
Try sorting your list after reading it, with an O(n log n) algorithm like quicksort; it should work better.
Once the list is ordered, you just have to check whether two consecutive elements of the list are the same.
So you get the reading in O(n), the sorting in O(n log n) (at best), and the comparison in O(n).
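A rough sketch of that sort-then-compare idea (file name as in the question):
with open('UniProt Trypsinome (full).csv') as in_f:
    peptides = sorted(line.strip() for line in in_f)   # O(n log n)

# After sorting, duplicates sit next to each other
duplicates = {a for a, b in zip(peptides, peptides[1:]) if a == b}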
Although I think that the numpy solution is the best, I'm curious whether we can speed up the given example. My suggestions are:
skip csv.reader costs and just read the line
rb to skip the extra scan needed to fix newlines
use bigger file buffer sizes (read 1Meg, write 64K is probably good)
use the dict keys as an index - key lookup is much faster than value lookup
I'm not a numpy guy, so I'd do something like
in_file_1 = open('UniProt Trypsinome (full).csv', 'rb', 1048576)
out_file_1 = open('UniProt Non-Unique Reference Trypsinome.csv', 'w+', 65536)

ref_dict = {}
for line in in_file_1:
    peptide = line.rstrip()
    if peptide in ref_dict:
        out_file_1.write(peptide + '\n')
    else:
        ref_dict[peptide] = None
