Extracting lists from tuple with a condition - python

I've been trying to extract sublists from this tuple:
import random
E = tuple([random.randint(0, 10) for x in range(10)])
Let's say the result is (3, 4, 5, 0, 0, 3, 4, 2, 2, 4).
I want to extract from this tuple lists of numbers in ascending order, without sorting the tuple or anything.
Example: [[3, 4, 5], [0, 0, 3, 4], [2, 2, 4]]

You can create a custom function (generator in my example) to group ascending elements:
def get_ascending(itr):
    lst = []
    for v in itr:
        if not lst:
            lst = [v]
        elif v < lst[-1]:
            yield lst
            lst = [v]
        else:
            lst.append(v)
    yield lst
E = 3, 4, 5, 0, 0, 3, 4, 2, 2, 4
print(list(get_ascending(E)))
Prints:
[[3, 4, 5], [0, 0, 3, 4], [2, 2, 4]]
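One detail worth noting about the trailing yield lst: it is what emits the final ascending run, but it also means an empty input yields the still-empty lst, so list(get_ascending(())) comes out as [[]] rather than []. If that matters for your data, change the last line to if lst: yield lst.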

Related

Drop duplicates if they are in a row in Python

I got a list like:
[1,1,5,4,6,6,5]
and I want to drop an element of the list if it gets repeated in a row.
Output:
[1,5,4,6,5]
I can only find solutions for "normal" duplicate problems.
You can use itertools.groupby and just pull the key for each group.
from itertools import groupby
[k for k, _ in groupby([1,1,5,4,6,6,5])]
# returns:
[1, 5, 4, 6, 5]
You can iterate over pairs with zip and build a new list:
lst = [1, 1, 5, 4, 6, 5, 5]
lst = [x for x, y in zip(lst, lst[1:]) if x != y] + [lst[-1]]
print(lst) # [1, 5, 4, 6, 5]
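One caveat: lst[-1] raises an IndexError on an empty list; replacing [lst[-1]] with the slice lst[-1:] gives the same result for non-empty input while returning [] for an empty one.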

Python sum in list consists of integers and other lists and nested lists

I have a problem with a function that returns the sum of the values in a list given as an argument. The list can consist of integers and other lists, and those lists can contain further lists, and so on,
like in this example:
[10, 5, [1, [5, 4, 3]], 9, [1, 1, [2, 3]]] => 44
You can flatten the list and then sum it:
lst = [10, 5, [1, [5, 4, 3]], 9, [1, 1, [2, 3]]]
def flatten(l):
    if isinstance(l, list):
        for v in l:
            yield from flatten(v)
    else:
        yield l
out = sum(flatten(lst))
print(out)
Prints:
44
You could also write a recursive function that does the summation:
def my_sum(x):
    value = 0
    for i in x:
        if not isinstance(i, list):
            value += i
        else:
            value += my_sum(i)
    return value
my_sum(lst)
44
Using a recursive function should work:
def your_function(embeded_list):
    n = len(embeded_list)
    total = 0
    for i in range(n):
        if not isinstance(embeded_list[i], list):
            total += embeded_list[i]
        else:
            total += your_function(embeded_list[i])
    return total
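Called on the example from the question, this should give the expected total:
print(your_function([10, 5, [1, [5, 4, 3]], 9, [1, 1, [2, 3]]]))  # 44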

Sort a list by frequency and value

I am trying to solve the following problem: a function takes a list A. The result must be an ordered list of lists, where each sublist contains the elements that have the same frequency in the original list A.
Example:
Input: [3, 1, 2, 2, 4]
Output: [[1, 3, 4], [2, 2]]
I managed to sort the initial list A and determine the frequency of each element.
However, I do not know how to split the original list A based on the frequencies.
My code:
from collections import Counter

def customSort(arr):
    counter = Counter(arr)
    y = sorted(arr, key=lambda x: (counter[x], x))
    print(y)
    x = Counter(arr)
    a = sorted(x.values())
    print(a)

customSort([3,1,2,2,4])
My current output:
[1, 3, 4, 2, 2]
[1, 1, 1, 2]
You can use a defaultdict of lists and iterate your Counter:
from collections import defaultdict, Counter
def customSort(arr):
    counter = Counter(arr)
    dd = defaultdict(list)
    for value, count in counter.items():
        dd[count].extend([value]*count)
    return dd
res = customSort([3,1,2,2,4])
# defaultdict(list, {1: [3, 1, 4], 2: [2, 2]})
This gives additional information, i.e. the key represents how many times the values in the lists are seen. If you require a list of lists, you can simply access values:
res = list(res.values())
# [[3, 1, 4], [2, 2]]
Doing the grunt work suggested by Scott Hunter (Python 3):
#!/usr/bin/env python3
from collections import Counter
def custom_sort(arr):
    v = {}
    for key, value in sorted(Counter(arr).items()):
        v.setdefault(value, []).append(key)
    return [v * k for k, v in v.items()]

if __name__ == '__main__':
    print(custom_sort([3, 1, 2, 2, 4]))  # [[1, 3, 4], [2, 2]]
For Python 2.7 or lower use iteritems() instead of items()
Partially taken from this answer
Having sorted the list as you do:
counter = Counter(x)
y = sorted(x, key=lambda x: (counter[x], x))
#[1, 3, 4, 2, 2]
You could then use itertools.groupby, using the result from Counter(x) in the key argument to create groups according to the counts:
from itertools import groupby
[list(v) for k, v in groupby(y, key=lambda x: counter[x])]
#[[1, 3, 4], [2, 2]]
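This works because groupby only groups consecutive items with the same key, so the preceding sort by (counter[x], x) is what guarantees that items with equal counts sit next to each other.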
Find your maximum frequency, and create a list of that many empty lists.
Loop over your values, and add each to the element of the above corresponding to its frequency.
There might be something in Collections that does at least part of the above.
Another variation of the same theme, using a Counter to get the counts and then inserting the elements into the respective position in the result list-of-lists. This retains the original order of the elements (does not group same elements together) and keeps empty lists for absent counts.
>>> lst = [1,4,2,3,4,3,2,5,4,4]
>>> import collections
>>> counts = collections.Counter(lst)
>>> res = [[] for _ in range(max(counts.values()))]
>>> for x in lst:
...     res[counts[x]-1].append(x)
...
>>> res
[[1, 5], [2, 3, 3, 2], [], [4, 4, 4, 4]]
A bit late to the party, but with plain Python:
test = [3, 1, 2, 2, 4]
def my_sort(arr):
    count = {}
    for x in arr:
        if x in count:
            count[x] += 1
        else:
            count[x] = 0
    max_frequency = max(count.values()) + 1
    res = [[] for i in range(max_frequency)]
    for k, v in count.items():
        for j in range(v + 1):
            res[v].append(k)
    return res
print(my_sort(test))
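Note the off-by-one convention here: count[x] starts at 0 on the first occurrence, so the stored value is the number of occurrences minus one, which is why each element is appended range(v + 1) times and ends up in res[v].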
Using only Python's built-in functions, no imports, and a single for loop:
def customSort(mylist):
    l1 = []
    l2 = []
    sl = sorted(mylist)
    for i in sl:
        n = sl.count(i)
        if n > 1:
            l1.append(i)
        if i not in l1:
            l2.append(i)
    return [l2, l1]
print(customSort([3, 1, 2, 2, 4]))
Output:
[[1, 3, 4], [2, 2]]

Last index of duplicate items in a python list

Does anyone know how I can get the last index position of duplicate items in a python list containing duplicate and non-duplicate items?
I have a list sorted in ascending order: [1, 1, 1, 2, 2, 3, 3, 4, 5]
I want it to print the last index of duplicate items and the index of non-duplicate items, like this:
2
4
6
7
8
I tried doing it this way, but could only print the starting index of duplicate elements and missed the non-duplicate items.
id_list = [1, 1, 1, 2, 2, 3, 3, 4, 5]
for i in range(len(id_list)):
    for j in range(i+1, len(id_list)):
        if id_list[i] == id_list[j]:
            print(i)
Loop over the list using enumerate to get indexes and values, and store each value's index in a dictionary so the last index "wins" when there are duplicates. Since plain dictionaries weren't guaranteed to keep insertion order before Python 3.7, use an OrderedDict:
import collections
lst = [1, 1, 1, 2, 2, 3, 3, 4, 5]
d = collections.OrderedDict()
for i, v in enumerate(lst):
    d[v] = i
print(list(d.values()))
prints:
[2, 4, 6, 7, 8]
The advantage of this solution is that it works even if the duplicates aren't consecutive.
Python 3.7+ guarantees insertion order for plain dictionaries, so a simple dict comprehension solves it:
{v:i for i,v in enumerate(lst)}.values()
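Keep in mind that .values() returns a view object; wrap it in list() if you need a plain list:
list({v: i for i, v in enumerate(lst)}.values())  # [2, 4, 6, 7, 8]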
You can use enumerate and check the next index in the list. If an element is not equal to the element in the next index, it is the last duplicate:
lst = [1, 1, 1, 2, 2, 3, 3, 4, 5]
result = [i for i, x in enumerate(lst) if i == len(lst) - 1 or x != lst[i + 1]]
print(result)
# [2, 4, 6, 7, 8]
You can use a list comprehension with enumerate and zip. The last element of the list is always the last occurrence of its value, so its index is appended explicitly at the end.
L = [1, 1, 1, 2, 2, 3, 3, 4, 5]
res = [idx for idx, (i, j) in enumerate(zip(L, L[1:])) if i != j] + [len(L) - 1]
print(res)
# [2, 4, 6, 7, 8]

Getting sublist with repeated elements in Python [duplicate]

I'm having trouble solving the following problem:
We have a list of elements (integers), and we should return a list consisting of only the non-unique elements in this list, without changing the order of the list.
I think the best way is to delete or remove all unique elements.
Note that I have just started to learn Python and would like only the simplest solutions.
Here is my code:
def checkio(data):
    for i in data:
        if data.count(i) == 1:  # if the element is seen in the list just once, we delete it
            ind = data.index(i)
            del data[ind]
    return data
Your function can be made to work by iterating over the list in reverse:
def checkio(data):
    for index in range(len(data) - 1, -1, -1):
        if data.count(data[index]) == 1:
            del data[index]
    return data
print(checkio([3, 3, 5, 8, 1, 4, 5, 2, 4, 4, 3, 0]))
[3, 3, 5, 4, 5, 4, 4, 3]
print(checkio([1, 2, 3, 4]))
[]
This works, because it only deletes numbers in the section of the list that has already been iterated over.
I just used a list comprehension:
def checkio(data):
    a = [i for i in data if data.count(i) > 1]
    return a

print(checkio([1,1,2,2,1,1,1,3,4,5,6,7,8]))
You can implement an OrderedCounter, e.g.:
from collections import OrderedDict, Counter
class OrderedCounter(Counter, OrderedDict):
    pass
data = [1, 3, 1, 2, 3, 5, 8, 1, 5, 2]
duplicates = [k for k, v in OrderedCounter(data).items() if v > 1]
# [1, 3, 2, 5]
So you count the occurrences of each value, then keep only those with a frequency of more than one. Inheriting from OrderedDict means the order of the original elements is preserved.
Going by the comments, you want all duplicated elements preserved, so you can pre-build a set of the duplicate entries, then re-iterate your original list, e.g.:
from collections import Counter
data = [1, 3, 1, 2, 3, 5, 8, 1, 5, 2]
duplicates = {k for k, v in Counter(data).items() if v > 1}
result = [el for el in data if el in duplicates]
# [1, 3, 1, 2, 3, 5, 1, 5, 2]
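The advantage of building the set of duplicates first is that the membership test in the second pass is O(1), so the whole thing stays roughly linear, whereas the data.count(i) approaches rescan the entire list for every element.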
Try this:
>>> a=[1,2,3,3,4,5,6,6,7,8,9,2,0,0]
>>> a=[i for i in a if a.count(i)>1]
>>> a
[2, 3, 3, 6, 6, 2, 0, 0]
>>> a=[1, 2, 3, 1, 3]
>>> a=[i for i in a if a.count(i)>1]
>>> a
[1, 3, 1, 3]
>>> a=[1, 2, 3, 4, 5]
>>> a=[i for i in a if a.count(i)>1]
>>> a
[]
def checkio(data):
    lis = []
    for i in data:
        if data.count(i) > 1:
            lis.append(i)
    print(lis)

checkio([1,2,3,3,2,1])
Yeah, it's a bit late to contribute to this thread, but I just wanted to put it out there for anyone else to use.
Following what you have started, iterate over the list of integers, but instead of counting or deleting elements, just test whether each element has already been seen and, if so, append it to a list of duplicated elements:
def checkio(data):
    elements = []
    duplicates = []
    for i in data:
        if i not in elements:
            elements.append(i)
        else:
            if i not in duplicates:
                duplicates.append(i)
    return duplicates

d = [1, 3, 1, 2, 3, 5, 8, 1, 5, 2]
print(checkio(d))
#[1, 3, 5, 2]
numbers = [1, 1, 1, 1, 3, 4, 9, 0, 1, 1, 1]
x=set(numbers)
print(x)
You can also use the built-in set() as above, although on its own it only removes duplicates rather than keeping the repeated elements.
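If the goal is the non-unique elements in their original order (one occurrence per value, as in the answer above), a set works well as a record of what has been seen so far; a minimal sketch, with seen and duplicates as purely illustrative names:
data = [1, 3, 1, 2, 3, 5, 8, 1, 5, 2]
seen = set()
duplicates = []
for n in data:
    if n in seen and n not in duplicates:
        duplicates.append(n)
    seen.add(n)
print(duplicates)  # [1, 3, 5, 2]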
I used an integer and bool to check every time the list was modified within a while loop.
def checkio(data):
    rechecks = 1
    runscan = True
    while runscan == True:
        for i in data:
            if data.count(i) < 2:
                data.remove(i)
                rechecks += 1
        # need to double check now
        if rechecks > 0:
            runscan = True
            rechecks -= 1
        else:
            runscan = False
    return data
Would it not be easier to generate a new list?
def unique_list(lst):
    new_list = []
    for value in lst:
        if value not in new_list:
            new_list.append(value)
    return new_list
lst = [1,2,3,1,4,5,1,6,2,3,7,8,9]
print(unique_list(lst))
Prints [1,2,3,4,5,6,7,8,9]
