Element-wise product of two 2-D lists - python

I can't use NumPy or any other library function, as this is an exercise where I have to define my own way.
I am writing a function that takes two 2-dimensional lists as arguments. The function should calculate the element-wise product of both lists, store the results in a third list, and return this resultant list.
An example of the input lists:
list1:
[[2,3,5,6,7],[5,2,9,3,7]]
list2:
[[5,2,9,3,7],[1,3,5,2,2]]
The function prints the following list:
[[10, 6, 45, 18, 49], [5, 6, 45, 6, 14]]
That is 2*5=10, 3*2=6, 5*9=45 ... and so on.
This is my code below, but it only works for a 2-D list that contains exactly two sublists, like the example above (and it works fine for that case). What I want is to edit my code so that, no matter how many sublists there are in the 2-D list, it prints their element-wise product in a new 2-D list, e.g. it should also work for
[[5,2,9,3,7],[1,3,5,2,2],[1,3,5,2,2]]
or
[[5,2,9,3,7],[1,3,5,2,2],[1,3,5,2,2],[5,2,9,3,7]]
or any number of lists within the whole list.
def ElementwiseProduct(l,l2):
    i=0
    newlist=[]  #create empty list to put product of elements in later
    newlist2=[]
    newlist3=[] #empty list to put both new lists which will have products in them
    while i==0:
        a=0
        while a<len(l[i]):
            prod=l[i][a]*l2[i][a] #corresponding product of list elements
            newlist.append(prod)  #adding the products to new list
            a+=1
        i+=1
    while i==1:
        a=0
        while a<len(l[i]):
            prod=l[i][a]*l2[i][a] #corresponding product of list elements
            newlist2.append(prod) #adding the products to new list
            a+=1
        i+=1
    newlist3.append(newlist)
    newlist3.append(newlist2)
    print newlist3

#2 dimensional list example
list1=[[2,3,5,6,7],[5,2,9,3,7]]
list2=[[5,2,9,3,7],[1,3,5,2,2]]
ElementwiseProduct(list1,list2)
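For reference, a minimal sketch of how the loop-based attempt above could be generalized: replace the two duplicated while blocks with a single loop over the sublist index (this is an illustration only, not the original code or the answers below; the name ElementwiseProductAny is made up):
def ElementwiseProductAny(l, l2):
    newlist3 = []             # will hold one product list per pair of sublists
    i = 0
    while i < len(l):         # works for any number of sublists
        newlist = []
        a = 0
        while a < len(l[i]):
            newlist.append(l[i][a] * l2[i][a])   # corresponding product
            a += 1
        newlist3.append(newlist)
        i += 1
    return newlist3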

You can zip the two lists in a list comprehension, further zip the resulting sublists, and finally multiply the items:
list2 = [[5,2,9,3,7],[1,3,5,2,2]]
list1 = [[2,3,5,6,7],[5,2,9,3,7]]
result = [[a*b for a, b in zip(i, j)] for i, j in zip(list1, list2)]
print(result)
# [[10, 6, 45, 18, 49], [5, 6, 45, 6, 14]]
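The same comprehension works unchanged for any number of sublists; for example (the second operand here is made up purely for illustration):
list1 = [[5,2,9,3,7],[1,3,5,2,2],[1,3,5,2,2]]
list2 = [[1,1,1,1,1],[2,2,2,2,2],[3,3,3,3,3]]
print([[a*b for a, b in zip(i, j)] for i, j in zip(list1, list2)])
# [[5, 2, 9, 3, 7], [2, 6, 10, 4, 4], [3, 9, 15, 6, 6]]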
In case the lists/sublists do not have the same number of elements, itertools.izip_longest (itertools.zip_longest in Python 3) can be used to generate fill values, such as an empty sublist for the shorter outer list, or 0 for the shorter sublist:
from itertools import izip_longest
list1 = [[2,3,5,6]]
list2 = [[5,2,9,3,7],[1,3,5,2,2]]
result = [[a*b for a, b in izip_longest(i, j, fillvalue=0)]
          for i, j in izip_longest(list1, list2, fillvalue=[])]
print(result)
# [[10, 6, 45, 18, 0], [0, 0, 0, 0, 0]]
You may change the inner fillvalue from 0 to 1 to return the elements in the longer sublists as is, instead of a homogeneous 0.
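A quick check of that, using the same mismatched lists as above with the inner fill value changed to 1 (my addition):
result = [[a*b for a, b in izip_longest(i, j, fillvalue=1)]
          for i, j in izip_longest(list1, list2, fillvalue=[])]
print(result)
# [[10, 6, 45, 18, 7], [1, 3, 5, 2, 2]]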
Reference:
List comprehensions

Here is a function that can handle any type of iterable, nested to any level (any number of dimensions, not just 2):
def elementwiseProd(iterA, iterB):
    def multiply(a, b):
        try:
            iter(a)
        except TypeError:
            # You have a number
            return a * b
        return elementwiseProd(a, b)
    return [multiply(*pair) for pair in zip(iterA, iterB)]
This function works recursively. For each element in a list, it checks if the element is iterable. If it is, the output element is a list containing the elementwise multiplication of the iterables. If not, the product of the numbers is returned.
This solution will work on mixed nested types. A couple of assumptions made here are that all the levels of nesting are the same size, and that an element that is a number in one iterable (vs. a nested iterable) is always a number in the other.
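A quick usage sketch on a 3-level nested input (the values are made up, just to illustrate the arbitrary-depth behaviour):
listA = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
listB = [[[2, 2], [2, 2]], [[3, 3], [3, 3]]]
print(elementwiseProd(listA, listB))
# [[[2, 4], [6, 8]], [[15, 18], [21, 24]]]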
In fact, this snippet can be extended to apply any n-ary function to any n iterables:
def elementwiseApply(op, *iters):
    def apply(op, *items):
        try:
            iter(items[0])
        except TypeError:
            return op(*items)
        return elementwiseApply(op, *items)
    return [apply(op, *items) for items in zip(*iters)]
To do multiplication, you would use operator.mul:
from operator import mul
list1=[[2,3,5,6,7], [5,2,9,3,7]]
list2=[[5,2,9,3,7], [1,3,5,2,2]]
elementwiseApply(mul, list1, list2)
produces
[[10, 6, 45, 18, 49], [5, 6, 45, 6, 14]]
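A small made-up check that the n-ary claim holds, summing three lists at once with a ternary lambda (listC is hypothetical):
listC = [[1,1,1,1,1], [2,2,2,2,2]]
print(elementwiseApply(lambda x, y, z: x + y + z, list1, list2, listC))
# [[8, 6, 15, 10, 15], [8, 7, 16, 7, 11]]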

In Python, it's generally better to loop directly over the items in a list, rather than looping indirectly using indices. It makes the code easier to read as well as more efficient since it avoids the tedious index arithmetic.
Here's how to solve your problem using traditional for loops. We use the built-in zip function to iterate over two (or more) lists simultaneously.
def elementwise_product(list1, list2):
    result = []
    for seq1, seq2 in zip(list1, list2):
        prods = []
        for u, v in zip(seq1, seq2):
            prods.append(u * v)
        result.append(prods)
    return result
list1=[[2,3,5,6,7], [5,2,9,3,7]]
list2=[[5,2,9,3,7], [1,3,5,2,2]]
print(elementwise_product(list1,list2))
output
[[10, 6, 45, 18, 49], [5, 6, 45, 6, 14]]
We can use list comprehensions to make that code a lot more compact. It may seem harder to read at first, but you'll get used to list comprehensions with practice.
def elementwise_product(list1, list2):
    return [[u*v for u, v in zip(seq1, seq2)]
            for seq1, seq2 in zip(list1, list2)]

You could use NumPy arrays. They are your best option, as they are backed by C code and hence much faster computationally.
First, install NumPy. Open your terminal (CMD if you're on Windows) and type
pip install numpy
or, on Linux, sudo pip install numpy
Then, go on to write your code
import numpy as np
list1=np.array([[2,3,5,6,7],[5,2,9,3,7]]) #2 dimensional list example
list2=np.array([[5,2,9,3,7],[1,3,5,2,2]])
prod = np.multiply(list1,list2)
# or simply, as suggested by Mad Physicist,
prod = list1*list2
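Since the question asks for a plain list as the result, note that a NumPy array can be converted back with ndarray.tolist() (this step is my addition, not part of the original answer):
prod_list = prod.tolist()   # plain nested Python lists again
print(prod_list)
# [[10, 6, 45, 18, 49], [5, 6, 45, 6, 14]]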

Related

Pythonic way to group values from a list based on values from another list

I have 2 lists:
List_A = [1, 25, 40]
List_B = [2, 19, 23, 26, 30, 32, 34, 36]
I want to generate a list of lists such that I group values in list B by determining if they are in between values in list A. So in this example, list B would be grouped into:
[[2,19,23], [26,30,32,34,36]]
Is there any clean way in Python to achieve this without multiple nested for loops?
I tried a messy double-nested loop structure and was not pleased with how clunky (and unreadable) it was.
Group the values of List_B according to the index each one would have if inserted into List_A. The standard library provides functionality in the bisect module to figure out (by using a standard bisection algorithm) where the value would go; it provides functionality in the itertools module to group adjacent values in an input sequence, according to some predicate ("key" function).
This looks like:
from bisect import bisect
from itertools import groupby
List_A = [1, 25, 40]
List_B = [2, 19, 23, 26, 30, 32, 34, 36]
groups = groupby(List_B, key=lambda x: bisect(List_A, x))
print([list(group) for key, group in groups])
which gives [[2, 19, 23], [26, 30, 32, 34, 36]] as requested.
bisect.bisect is an alias for bisect.bisect_right; that is, a value in List_B that is equal to a value from List_A will be put at the beginning of a later list. To have it as the end of the previous list instead, use bisect.bisect_left.
bisect.bisect also relies on List_A being sorted, naturally.
itertools.groupby will group adjacent values; it will make separate groups for values that belong in the same "bin" but are separated by values that belong in a different "bin". If this is an issue, sort the input first.
This will be O(N * lg M) where N is the length of List_B and M is the length of List_A. That is: finding a bin takes logarithmic time in the number of bins, and this work is repeated for each value to be binned.
This will not generate empty lists if there is a bin that should be empty; the actual indices into List_A are ignored by the list comprehension in this example.
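If empty bins matter, here is a small sketch (my addition, keyed on the bisect index instead of using groupby) that keeps them, including the bins before List_A[0] and after List_A[-1]:
from bisect import bisect
binned = {k: [] for k in range(len(List_A) + 1)}   # one bin per possible insertion index
for x in List_B:
    binned[bisect(List_A, x)].append(x)
print([binned[k] for k in sorted(binned)])
# [[], [2, 19, 23], [26, 30, 32, 34, 36], []]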
This is the simplest way I can think of to code it.
result = []
for start, end in zip(List_A, List_A[1:]):
    result.append([i for i in List_B if start <= i < end])
It's O(NxM), so not very efficient for large lists.
You could make it more efficient by sorting List_B (I assume List_A is already sorted) and stepping through both of them together, but it will be more complicated.
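A rough sketch of that two-pointer idea (my addition; it assumes List_A is sorted, sorts List_B up front, and keeps the same half-open bins as above):
result = [[] for _ in range(len(List_A) - 1)]   # one bin per consecutive pair in List_A
bin_idx = 0
for x in sorted(List_B):
    # advance until x falls below the current bin's upper bound
    while bin_idx < len(List_A) - 1 and x >= List_A[bin_idx + 1]:
        bin_idx += 1
    if bin_idx < len(List_A) - 1 and List_A[bin_idx] <= x:
        result[bin_idx].append(x)
print(result)
# [[2, 19, 23], [26, 30, 32, 34, 36]]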

What is the pythonic method to produce a list of the differences between the neighbouring values of an original list

I have a list and want to know if the following can be done in Python without adding extra libraries to improve my code.
I want to get a list of the differences between elements of a list.
orig_list = [12, 27, 31, 55, 95]
# desired output
spacings = [15, 4, 24, 40]
I know I can do it by making a second list and subtracting it, I just wondered if there was another/better way.
You can use a list comprehension and zip:
[j-i for i,j in zip(orig_list[:-1], orig_list[1:])]
# [15, 4, 24, 40]
Though if NumPy is an option you have np.diff:
np.diff(orig_list)
# array([15, 4, 24, 40])
This is also possible with list comprehension, without using zip. Just iterate over the list elements from index 1 to n:
[orig_list[i] - orig_list[i-1] for i in range(1, len(orig_list))]
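On Python 3.10+, itertools.pairwise offers another standard-library route (my addition, not part of the original answers):
from itertools import pairwise   # Python 3.10+
[b - a for a, b in pairwise(orig_list)]
# [15, 4, 24, 40]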

Python 3: Efficient way to loop through and compare integer lists?

I'm trying to compare two huge lists, each containing 10,000+ lists of integers. Each sub-list contains 20 integers, random between 1 and 99. Within a sub-list all integers are unique.
list1 = [[1, 25, 23, 44, ...], [3, 85, 9, 24, 34, ...], ...]
list2 = [[3, 83, 45, 24, ...], [9, 82, 3, 47, 36, ...], ...]
result = compare_lists(list1, list2)
The compare_lists() function would compare the integers that are in the same position in two sub-lists, and return the two sub-lists if the integers differ at every position.
It is obviously very inefficient to loop through each sub-list, as there are 100 million+ possible combinations (each of the 10,000+ sub-lists in list1 gets compared to 10,000+ in list2).
import itertools

def compare_lists(list1, list2):
    for (a, b) in itertools.product(list1, list2):
        count = 0
        for z in range(20):
            if a[z] != b[z]:
                count += 1
        if count == 20:
            yield [a, b]
For example (I'll use 4 integers per list):
a = [1, 2, 3, 4] # True
b = [5, 6, 7, 8] # (integers are different)
a = [1, 2, 3, 4] # True
b = [2, 3, 4, 1] # (same integers but not in same position, still true)
a = [1, 2, 3, 4] # False
b = [1, 6, 7, 8] # (position [0] is identical)
itertools.product appears to be very inefficient in situations like this. Is there a faster or more efficient way to do this?
Sorry if this is unclear, I've only recently started using Python.
I don't know how to reduce the number of list-list comparisons based on some precomputed data in general.
Maybe you can get some advantage if the dataset has some property. For example, if you know that the vast majority of the possible 100M+ pairs will be in your output, I would focus on finding the small minority of rejected pairs. If value V appears at position P in a sub-list, you can categorize the data so that every sub-list belongs to 20 categories (P, V), out of roughly 2K possibilities (20 positions * 99 values). Two sub-lists compare False if they share a category. This way you could build, in a few steps, a set of (i, j) pairs such that list1[i] compares False with list2[j]. The output is then everything else from the Cartesian product of possible indices i, j.
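Here is a sketch of that bucketing idea (the function name and structure are my own, not from the original answer):
from collections import defaultdict

def compare_lists_bucketed(list1, list2):
    # map each (position, value) category to the indices of the list2 sub-lists that contain it
    buckets = defaultdict(set)
    for j, b in enumerate(list2):
        for pos, val in enumerate(b):
            buckets[(pos, val)].add(j)
    for a in list1:
        # indices of list2 sub-lists that share at least one (position, value) with a
        rejected = set()
        for pos, val in enumerate(a):
            rejected |= buckets.get((pos, val), set())
        for j, b in enumerate(list2):
            if j not in rejected:
                yield [a, b]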
BTW, you can make the comparison a little bit more efficient than it currently is.
One matching pair a[z] == b[z] is enough to know the result is False.
for z in range(20):
    if a[z] == b[z]:
        break
else:
    yield [a, b]
or equivalent:
if all(i != j for i, j in zip(a, b)):
    yield [a, b]
I did not run timing tests to see which one is faster. Anyway, the speedup is probably marginal.

Calling functions on lists

I have a spectrum of wavelengths as a list and some number of other lists I use in a formula (using tmm.tmm_core). Is there something more efficient than iterating through the wavelengths if I'm basically doing the same thing for all of them?
Example
def go(n, thk, theta):
    # do stuff
    return(something)

wv = [1, 2, 3, 4]
a_vec = [3, 7, 3, 9]
b_vec = [6, 5, 9, 3]
c_vec = [0, 1, 8, 9]
theta = 0
th = [10, 1, 10]

final = []
for i in range(len(wv)):
    n = [a[i], b[i], c[i]]
    answer = go(n, th, theta)
    final.append(answer)
In reality there are maybe 5000-10000 rows. It just seems to lag a bit when I press go, and I assume it's because of the iteration. I'm pretty new to optimizing, so I haven't used any benchmarking tools or anything.
I think you're looking for the map function in Python!
>>> list1 = [1,2,3,4]
>>> list2 = [5,6,7,8]
>>> map(lambda x,y: x+y, list1, list2)
[6, 8, 10, 12]
It takes in a function (in the above case, an anonymous lambda function) and one or more lists, and returns another list. At each step, the corresponding items of the input lists are passed to the function and the result is added to the new list. (Note: in Python 3, map returns an iterator, so wrap the call in list() to get a list like the one shown.) You don't need to limit yourself to the expressive power of a lambda statement; you can also use globally defined functions, as in the case below:
>>> def go(a,b,c):
... return a+b+c
...
>>> map(go, list1,list2, range(9,13))
[15, 18, 21, 24]
You can put all of your lists in a container list like C_list and use map to create a new list all_len containing the length of each list, then use a list comprehension to create the list final:
all_len = map(len, C_list)
final = [[go([a[i], b[i], c[i]], th, theta) for i in range(li)] for li in all_len]
Also, if a, b and c have equal lengths, you can use the zip function to zip them and avoid the multiple indexing:
all_len = map(len, C_list)
z = zip(a, b, c)
final = [[go(z[i], th, theta) for i in range(li)] for li in all_len]
If you have to perform an operation on every item in the list, then you're going to have to go through every item in the list. However, you could gain speed through the use of list comprehensions: List Comprehensions
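For the example in the question, that might look like this (a sketch, assuming a_vec, b_vec and c_vec are the lists referred to as a, b and c in the loop):
final = [go([a, b, c], th, theta) for a, b, c in zip(a_vec, b_vec, c_vec)]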

"[float(n)-50 for n in range(100)]" - what does this do?

I stumbled upon this one-liner:
[float(n)-50 for n in range(100)]
Could somebody tell me what it does? It's supposed to return a float value for a vector.
Best,
Marius
That's a list comprehension that reads "create a list of 100 elements such that for each element at index n, set that element equal to float(n) - 50".
It's a list comprehension:
List comprehensions provide a concise way to create lists. Common
applications are to make new lists where each element is the result of
some operations applied to each member of another sequence or
iterable, or to create a subsequence of those elements that satisfy a
certain condition.
For example, assume we want to create a list of squares, like:
>>> squares = []
>>> for x in range(10):
... squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
We can obtain the same result with:
squares = [x**2 for x in range(10)]
This is also equivalent to squares = map(lambda x: x**2, range(10)),
but it’s more concise and readable.
It means the same as:
[float(x) for x in range(-50, 50)]
Or (at least in Python 2):
map(float, range(-50, 50))
which are self-explanatory if you know how list comprehensions or the map function work: They transform the integer range -50...50 into a list of floats (the upper 50 is exclusive). The result is the list:
[-50.0, -49.0 ... 49.0]
