How to count elements at each position in lists - python

I have a lot of lists like:
SI821lzc1n4
MCap1kr01lv
All of them have the same length. I need to count how many times each symbol appears at each position. Example:
abcd
a5c1
b51d
Here it'll be a5cd

One way is to use zip to associate characters in the same position. We can then feed all of the characters from each position to a Counter, then use Counter.most_common to get the most common character:
from collections import Counter
l = ['abcd', 'a5c1', 'b51d']
print(''.join([Counter(z).most_common(1)[0][0] for z in zip(*l)]))
# a5cd

from statistics import mode

y = ['abcd', 'a5c1', 'b51d']  # your list
[mode([x[i] for x in y]) for i in range(len(y[0]))]
# ['a', '5', 'c', 'd']
Requires Python 3.4 and up (the statistics module was added in 3.4; use range, not the Python 2 xrange). Note that before Python 3.8, mode() raises StatisticsError when there is a tie.

You could use a combination of zip and Counter:
from collections import Counter

a = "abcd"
b = "a5c1"
c = "b51d"

zippedList = list(zip(a, b, c))
print("zipped: {}".format(zippedList))

final = ""
for x in zippedList:
    countLetters = Counter(x)
    print(countLetters)
    final += countLetters.most_common(3)[0][0]
print("output: {}".format(final))
output:
zipped: [('a', 'a', 'b'), ('b', '5', '5'), ('c', 'c', '1'), ('d', '1', 'd')]
Counter({'a': 2, 'b': 1})
Counter({'5': 2, 'b': 1})
Counter({'c': 2, '1': 1})
Counter({'d': 2, '1': 1})
output: a5cd

This all depends on where your list is. Is your list coming from another file or is it an actual array? At the end of the day, the simplest way to do this is going to be a dictionary and a for loop.
new_dict = {}
for line in lines:  # assuming `lines` is your list of strings
    for i in range(len(line)):
        if i in new_dict:
            new_dict[i].append(line[i])
        else:
            new_dict[i] = [line[i]]
Then after that I'm assuming that you'd like to output the most common element at each position. For that I'd recommend importing statistics and using the mode method...
from statistics import mode

new_line = ""
for key in new_dict:
    x = mode(new_dict[key])
    new_line = new_line + x
However, your question is quite vague; please elaborate more next time.
P.s. I'm a newbie so all you experienced programmers plz don't hate :)

I would use a combination of defaultdict, enumerate, and Counter:
>>> from collections import Counter, defaultdict
>>> data = '''abcd
a5c1
b51d
'''
>>> poscount = defaultdict(Counter)
>>> for line in data.split():
...     for i, character in enumerate(line):
...         poscount[i][character] += 1
...
>>> ''.join([poscount[i].most_common(1)[0][0] for i in sorted(poscount)])
'a5cd'
Here's how it works:
The defaultdict() creates new entries when it sees a new key.
The enumerate() function returns both the character and its position in the line.
The Counter counts the occurrences of individual characters.
Combining the three makes a defaultdict whose keys are the column positions and whose values are character counters. That gives you one character counter per column.
The most_common() method returns the highest frequency (character, count) pair for that counter.
The [0][0] extracts the character from the list of (character, count) tuples.
The str.join() method combines the results back together.

Related

Python: Sorting a Python list to show which string is most common to least common and the number of times it appears

I have a winners list which will receive different entries each time the rest of my code is run:
eg the list could look like:
winners = ['Tortoise','Tortoise','Hare']
I am able to find the most common entry by using:
mostWins = [word for word, word_count in Counter(winners).most_common(Animalnum)]
which would output:
['Tortoise']
My problem is displaying the entire list from most common to least common and the how many times each string is found in the list.
Just iterate over that .most_common:
>>> winners = ['Tortoise','Tortoise','Hare','Tortoise','Hare','Bob']
>>> import collections
>>> for name, wins in collections.Counter(winners).most_common():
... print(name, wins)
...
Tortoise 3
Hare 2
Bob 1
>>>
Counter is just a dictionary internally.
from collections import Counter
winners = ['Tortoise','Tortoise','Hare','Tortoise','Hare','Bob', 'Bob', 'John']
counts = Counter(winners)
print(counts)
# Counter({'Tortoise': 3, 'Hare': 2, 'Bob': 2, 'John': 1})
print(counts['Hare'])
# 2
Furthermore, the .most_common(n) method returns the same (item, count) pairs, sorted from most to least common, limited to the top n entries.
So you should only use it if you'd like to show the top n, e.g. the top 3:
counts.most_common(3)
# [('Tortoise', 3), ('Hare', 2), ('Bob', 2)]
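To illustrate the point above: calling most_common() with no argument returns every (item, count) pair ordered from most to least common, so you can use the same method for the full list. A minimal sketch with the same winners list:

```python
from collections import Counter

winners = ['Tortoise', 'Tortoise', 'Hare', 'Tortoise', 'Hare', 'Bob', 'Bob', 'John']
counts = Counter(winners)

# no argument: all pairs, sorted from most common to least common
print(counts.most_common())
# [('Tortoise', 3), ('Hare', 2), ('Bob', 2), ('John', 1)]
```

Items with equal counts keep their first-seen order (documented behavior since Python 3.7).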

Counting by partial string in Python

So, I have a list of strings (upper case letters).
list = ['DOG01', 'CAT02', 'HORSE04', 'DOG02', 'HORSE01', 'CAT01', 'CAT03', 'HORSE03', 'HORSE02']
How can I group and count occurrence in the list?
Expected output:
{'DOG': 2, 'CAT': 3, 'HORSE': 4}
You may try using the Counter class from the collections module here:
from collections import Counter
import re

animals = ['DOG01', 'CAT02', 'HORSE04', 'DOG02', 'HORSE01', 'CAT01', 'CAT03', 'HORSE03', 'HORSE02']
names = [re.sub(r'\d+$', '', x) for x in animals]  # renamed to avoid shadowing the built-in `list`
print(Counter(names))
This prints:
Counter({'HORSE': 4, 'CAT': 3, 'DOG': 2})
Note that the above approach simply strips off the number endings of each list element, then does an aggregation on the alpha names only.
You can also use a plain dictionary:
animals = ['DOG01', 'CAT02', 'HORSE04', 'DOG02', 'HORSE01',
           'CAT01', 'CAT03', 'HORSE03', 'HORSE02']
dic = {}
for i in animals:
    i = i[:-2]  # assumes the numeric suffix is always two characters
    if i in dic:
        dic[i] = dic[i] + 1
    else:
        dic[i] = 1
print(dic)
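If the numeric suffix is not always exactly two characters, a small variant (with a hypothetical `animals` list) strips any run of trailing digits instead of slicing a fixed length:

```python
# strips any number of trailing digits, rather than assuming
# a fixed two-character suffix; HORSE001 is a hypothetical example
animals = ['DOG01', 'CAT02', 'HORSE04', 'DOG02', 'HORSE001']
counts = {}
for name in animals:
    key = name.rstrip('0123456789')
    counts[key] = counts.get(key, 0) + 1
print(counts)
# {'DOG': 2, 'CAT': 1, 'HORSE': 2}
```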

How does one find the number of subsets for a particular row, in a 2D list with python? Can collections' Counter function be used?

Please excuse the title, it is hard to express the problem correctly without showing an example.
I have a very large 2D array with rows of varying sizes, for example:
big2DArray =
[["a","g","r"],
["a","r"],
["p","q"],
["a", "r"]]
I need to return a dictionary, it has to look something like this:
{('a','g','r'): 1, ('a', 'r'): 3, ('p', 'q'):1}
The ('a', 'r') tuple is found to have a value of 3, since it occurs twice as itself and once as a subset (less than or equal) to the tuple ('a', 'g', 'r').
Normally I would use something like this:
dictCounts = Counter(map(tuple, big2DArray))
Which, for big2Darray, would give:
{('a','g','r'): 1, ('a', 'r'): 2, ('p', 'q'):1}
My question is this, can Collections' Counter function be used so that it gives the counts for the subsets as well, like explained above? If not, is there any comparably efficient method to return my desired dictionary output for subsets?
Thanks so much!
Edit 1: Just for further clarity! I do not want to return all subsets, such as {('a','g'): 1, ('a','r'):3}, and so on. I only want to return the counts for the unique rows in the 2D array. So in this case the counts for: ('a','g','r'), ('a','r'), ('p','q').
Edit 2: The row ["a","r"] should be treated as equivalent to ["r", "a"], and so should the tuples ('a','r') and ('r','a')
You can use set.issubset with collections.Counter here.
Demo:
from collections import Counter

big2DArray = [["a","g","r"],
              ["a","r"],
              ["p","q"],
              ["a", "r"],
              ["r", "a"]]

counts = Counter(map(lambda x: tuple(sorted(x)), big2DArray))
count_lst = list(counts)
for i, k1 in enumerate(count_lst):
    rest = count_lst[:i] + count_lst[i+1:]
    for k2 in rest:
        if set(k1).issubset(k2):
            counts[k1] += 1
print(counts)
Output:
Counter({('a', 'r'): 4, ('a', 'g', 'r'): 1, ('p', 'q'): 1})
In the above code, in order to make sure ["r", "a"] and ["a","r"] are equivalent, you can sort them beforehand, and add them as tuples to Counter().
The other more efficient way would be to use frozenset, as shown in the other answer.
Here is one solution. It uses defaultdict instead of Counter. The dictionary keys are frozensets. If you need ordered tuple dictionary keys, see @RoadRunner's solution.
from itertools import combinations
from collections import defaultdict

big2DArray = [["a","g","r"],
              ["a","r"],
              ["p","q"],
              ["a", "r"]]

# all subsets of size >= 2 of each row
arr_new = [[set(i) for k in range(2, len(j)+1)
            for i in combinations(j, k)] for j in big2DArray]
full_list = set(map(frozenset, big2DArray))

counter = defaultdict(int)
for i in range(len(big2DArray)):
    for j in full_list:
        if j in arr_new[i]:
            counter[frozenset(j)] += 1
# defaultdict(int,
# {frozenset({'a', 'r'}): 3,
# frozenset({'a', 'g', 'r'}): 1,
# frozenset({'p', 'q'}): 1})

put for loop in dict comprehension [duplicate]

I am using Python 3.3
I need to create two lists, one for the unique words and the other for the frequencies of the word.
I have to sort the unique word list based on the frequencies list so that the word with the highest frequency is first in the list.
I have the design in text but am uncertain how to implement it in Python.
The methods I have found so far use either Counter or dictionaries which we have not learned. I have already created the list from the file containing all the words but do not know how to find the frequency of each word in the list. I know I will need a loop to do this but cannot figure it out.
Here's the basic design:
original list = ["the", "car", ...]
newlst = []
frequency = []
for word in the original list:
    if word not in newlst:
        newlst.append(word)
        set frequency = 1
    else:
        increase the frequency
sort newlst based on frequency list
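The pseudocode above can be turned into runnable Python without dictionaries or Counter. A sketch with a made-up word list (the sorting step pairs counts with words so both lists end up ordered together):

```python
original_list = ["the", "car", "is", "red", "red", "the", "red"]

newlst = []
frequency = []
for word in original_list:
    if word not in newlst:
        newlst.append(word)
        frequency.append(1)
    else:
        frequency[newlst.index(word)] += 1

# sort the words by descending frequency using the paired counts
pairs = sorted(zip(frequency, newlst), reverse=True)
newlst = [word for _, word in pairs]
frequency = [count for count, _ in pairs]
print(newlst)     # ['red', 'the', 'is', 'car']
print(frequency)  # [3, 2, 1, 1]
```

Note that ties are broken by reverse alphabetical order here, since sorted() compares the (count, word) tuples.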
Use this:
from collections import Counter
list1=['apple','egg','apple','banana','egg','apple']
counts = Counter(list1)
print(counts)
# Counter({'apple': 3, 'egg': 2, 'banana': 1})
You can use
from collections import Counter
It supports Python 2.7,read more information here
1.
>>>c = Counter('abracadabra')
>>>c.most_common(3)
[('a', 5), ('r', 2), ('b', 2)]
use dict
>>>d={1:'one', 2:'one', 3:'two'}
>>>c = Counter(d.values())
[('one', 2), ('two', 1)]
But, You have to read the file first, and converted to dict.
2.
it's the python docs example,use re and Counter
# Find the ten most common words in Hamlet
>>> import re
>>> words = re.findall(r'\w+', open('hamlet.txt').read().lower())
>>> Counter(words).most_common(10)
[('the', 1143), ('and', 966), ('to', 762), ('of', 669), ('i', 631),
('you', 554), ('a', 546), ('my', 514), ('hamlet', 471), ('in', 451)]
words = open("test.txt", "r").read().split()  # read the words into a list
uniqWords = sorted(set(words))  # remove duplicate words and sort
for word in uniqWords:
    print(words.count(word), word)
Pandas answer:
import pandas as pd
original_list = ["the", "car", "is", "red", "red", "red", "yes", "it", "is", "is", "is"]
pd.Series(original_list).value_counts()
If you wanted it in ascending order instead, it is as simple as:
pd.Series(original_list).value_counts().sort_values(ascending=True)
Yet another solution, with a different algorithm and without using collections:
def countWords(A):
    dic = {}
    for x in A:
        if x not in dic:  # Python 2.7: if not dic.has_key(x):
            dic[x] = A.count(x)
    return dic

dic = countWords(['apple','egg','apple','banana','egg','apple'])
sorted_items = sorted(dic.items())  # if you want it sorted
One way would be to make a list of lists, with each sub-list in the new list containing a word and a count:
list1 = []  # this is your original list of words
list2 = []  # this is a new list of [word, count] pairs
for word in list1:
    words = [pair[0] for pair in list2]
    if word in words:
        list2[words.index(word)][1] += 1
    else:
        list2.append([word, 1])
Or, using try/except instead of a membership test:
for word in list1:
    words = [pair[0] for pair in list2]
    try:
        list2[words.index(word)][1] += 1
    except ValueError:
        list2.append([word, 1])
This would be less efficient than using a dictionary, but it uses more basic concepts.
You can use reduce() - a functional way.
from functools import reduce  # required in Python 3

words = "apple banana apple strawberry banana lemon"
reduce(lambda d, c: d.update([(c, d.get(c, 0) + 1)]) or d, words.split(), {})
returns:
{'strawberry': 1, 'lemon': 1, 'apple': 2, 'banana': 2}
Using Counter would be the best way, but if you don't want to do that, you can implement it yourself this way.
# The list you already have
word_list = ['words', ..., 'other', 'words']
# Get a set of unique words from the list
word_set = set(word_list)
# create your frequency dictionary
freq = {}
# iterate through them, once per unique word.
for word in word_set:
freq[word] = word_list.count(word) / float(len(word_list))
freq will end up with the frequency of each word in the list you already have.
You need float in there to convert one of the integers to a float, so the resulting value will be a float.
Edit:
If you can't use a dict or set, here is another less efficient way:
# The list you already have
word_list = ['words', ..., 'other', 'words']
unique_words = []
for word in word_list:
    if word not in unique_words:
        unique_words += [word]
word_frequencies = []
for word in unique_words:
    word_frequencies += [float(word_list.count(word)) / len(word_list)]
for i in range(len(unique_words)):
    print(unique_words[i] + ": " + str(word_frequencies[i]))
The indices of unique_words and word_frequencies will match.
The ideal way is to use a dictionary that maps a word to its count. But if you can't use that, you might want to use 2 lists - one storing the words, and the other storing counts of words. Note that the order of words and counts matters here. Implementing this would be hard and not very efficient.
Try this:
words = []
freqs = []
for line in sorted(original_list):  # takes all the lines in a text and sorts them
    line = line.rstrip()  # strips them of their spaces
    if line not in words:  # checks to see if line is in words
        words.append(line)  # if not, adds it to the end of words
        freqs.append(1)  # and adds 1 to the end of freqs
    else:
        index = words.index(line)  # if it is, finds where it sits in words
        freqs[index] += 1  # and adds 1 to the matching index in freqs
Here is code to support your question: is_word() validates a string so that only those strings are counted, and the dictionary serves as a hashmap in Python.
def is_word(word):
    cnt = 0
    for c in word:
        if 'a' <= c <= 'z' or 'A' <= c <= 'Z' or '0' <= c <= '9' or c == '$':
            cnt += 1
    if cnt == len(word):
        return True
    return False

def words_freq(s):
    d = {}
    for i in s.split():
        if is_word(i):
            if i in d:
                d[i] += 1
            else:
                d[i] = 1
    return d

print(words_freq('the the sky$ is blue not green'))
words_dict = {}
for word in original_list:
    words_dict[word] = words_dict.get(word, 0) + 1
sorted_dt = {key: value for key, value in sorted(words_dict.items(), key=lambda item: item[1], reverse=True)}
keys = list(sorted_dt.keys())
values = list(sorted_dt.values())
print(keys)
print(values)
Simple way:
d = {}
l = ['Hi', 'Hello', 'Hey', 'Hello']
for a in l:
    d[a] = l.count(a)
print(d)
Output : {'Hi': 1, 'Hello': 2, 'Hey': 1}
Word and frequency, if you need both:
def counter_(input_list_):
    lu = []
    for v in input_list_:
        # drop the `/ len(input_list_)` if you want raw counts instead of ratios
        ele = (v, input_list_.count(v) / len(input_list_))
        if ele not in lu:
            lu.append(ele)
    return lu

counter_(['a', 'n', 'f', 'a'])
output:
[('a', 0.5), ('n', 0.25), ('f', 0.25)]
The best thing to do is:
def wordListToFreqDict(wordlist):
    wordfreq = [wordlist.count(p) for p in wordlist]
    return dict(zip(wordlist, wordfreq))
then try:
wordListToFreqDict(originallist)

Splitting a string into consecutive counts?

For example, if the given string is this:
"aaabbbbccdaeeee"
I want to say something like:
3 a, 4 b, 2 c, 1 d, 1 a, 4 e
It is easy enough to do in Python with a brute force loop, but I am wondering if there is a more Pythonic / cleaner one-liner type of approach.
My brute force:
while source != "":
    leading = source[0]
    c = 0
    while source != "" and source[0] == leading:
        c += 1
        source = source[1:]
    print(c, leading)
Use a Counter for a count of each distinct letter in the string regardless of position:
>>> s="aaabbbbccdaeeee"
>>> from collections import Counter
>>> Counter(s)
Counter({'a': 4, 'b': 4, 'e': 4, 'c': 2, 'd': 1})
You can use groupby if the position in the string has meaning:
from itertools import groupby

li = []
for k, l in groupby(s):
    li.append((k, len(list(l))))
print(li)
Prints:
[('a', 3), ('b', 4), ('c', 2), ('d', 1), ('a', 1), ('e', 4)]
Which can be reduced to a list comprehension:
[(k, len(list(l))) for k, l in groupby(s)]
You can even use a regex:
>>> [(m.group(0)[0], len(m.group(0))) for m in re.finditer(r'((\w)\2*)', s)]
[('a', 3), ('b', 4), ('c', 2), ('d', 1), ('a', 1), ('e', 4)]
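To get output in exactly the form the question asks for ("3 a, 4 b, ..."), the groupby pairs can be joined into one string. A minimal sketch:

```python
from itertools import groupby

s = "aaabbbbccdaeeee"
# format each run as "<count> <char>" and join with commas
print(", ".join("{} {}".format(len(list(g)), k) for k, g in groupby(s)))
# 3 a, 4 b, 2 c, 1 d, 1 a, 4 e
```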
There are a number of different ways to solve the problem. @dawg has already posted the best solution, but if for some reason you aren't allowed to use Counter() (maybe a job interview or school assignment) then you can actually solve the problem in a few ways.
from collections import Counter, defaultdict

def counter_counts(s):
    """ Preferred method using Counter()
    Arguments:
        s {string} -- [string to have each character counted]
    Returns:
        [dict] -- [dictionary of counts of each char]
    """
    return Counter(s)

def default_counts(s):
    """ Alternative solution using defaultdict
    Arguments:
        s {string} -- [string to have each character counted]
    Returns:
        [dict] -- [dictionary of counts of each char]
    """
    counts = defaultdict(int)  # each key is initialized to 0
    for char in s:
        counts[char] += 1  # increment the count of each character by 1
    return counts

def vanilla_counts_1(s):
    """ Alternative solution using a vanilla dictionary
    Arguments:
        s {string} -- [string to have each character counted]
    Returns:
        [dict] -- [dictionary of counts of each char]
    """
    counts = {}
    for char in s:
        # we have to manually check that each value is in the dictionary before attempting to increment it
        if char in counts:
            counts[char] += 1
        else:
            counts[char] = 1
    return counts

def vanilla_counts_2(s):
    """ Alternative solution using a vanilla dictionary
    This version uses the .get() method to increment instead of checking if a key already exists
    Arguments:
        s {string} -- [string to have each character counted]
    Returns:
        [dict] -- [dictionary of counts of each char]
    """
    counts = {}
    for char in s:
        # the second argument in .get() is the default value if we don't find the key
        counts[char] = counts.get(char, 0) + 1
    return counts
And just for fun let's take a look at how each method performs.
For s = "aaabbbbccdaeeee" and 10,000 runs:
Counter: 0.0330204963684082s
defaultdict: 0.01565241813659668s
vanilla 1: 0.01562952995300293s
vanilla 2: 0.015581130981445312s
(actually rather surprising results)
Now let's test what happens if we set our string to the entire plaintext version of the book of Genesis and 1,000 runs:
Counter: 8.500739336013794s
defaultdict: 14.721554040908813s
vanilla 1: 18.089043855667114s
vanilla 2: 27.01840090751648s
Looks like the overhead of creating the Counter() object becomes much less important!
(These weren't very scientific tests, but it was a bit of fun).
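The timings above can be reproduced with the timeit module; absolute numbers will vary by machine and Python version. A sketch of the measurement, assuming the same short test string:

```python
import timeit
from collections import Counter

s = "aaabbbbccdaeeee"

def vanilla(s):
    # .get()-based dictionary counting, as in vanilla_counts_2 above
    counts = {}
    for char in s:
        counts[char] = counts.get(char, 0) + 1
    return counts

# time 10,000 calls of each approach
t_counter = timeit.timeit(lambda: Counter(s), number=10_000)
t_vanilla = timeit.timeit(lambda: vanilla(s), number=10_000)
print("Counter:", t_counter)
print("vanilla:", t_vanilla)
```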
