Calling functions on lists - python

I have a spectrum of wavelengths as a list and some number of other lists I use in a formula (using tmm.tmm_core). Is there something more efficient than iterating through the wavelengths if I'm basically doing the same thing for all of them?
Example
def go(n, thk, theta):
    # do stuff
    return something

wv = [1, 2, 3, 4]
a_vec = [3, 7, 3, 9]
b_vec = [6, 5, 9, 3]
c_vec = [0, 1, 8, 9]
theta = 0
th = [10, 1, 10]
final = []
for i in range(len(wv)):
    n = [a_vec[i], b_vec[i], c_vec[i]]
    answer = go(n, th, theta)
    final.append(answer)
In reality there are maybe 5000-10000 rows. It just seems to lag a bit when I press go, and I assume it's because of the iteration. I'm pretty new to optimizing, so I haven't used any benchmarking tools or anything.

I think you're looking for the map function in Python!
>>> list1 = [1, 2, 3, 4]
>>> list2 = [5, 6, 7, 8]
>>> list(map(lambda x, y: x + y, list1, list2))  # list() needed on Python 3
[6, 8, 10, 12]
it takes in a function (in the above case, an anonymous lambda function) and one or more lists, and returns the results of applying the function across the lists (a list in Python 2; wrap it in list() on Python 3, as above). At each step, an item is taken from each list, the function is applied to them, and the result is added to the output. You don't need to limit yourself to the expressive power of a lambda; you can also use globally defined functions, as in the case below:
>>> def go(a, b, c):
...     return a + b + c
...
>>> list(map(go, list1, list2, range(9, 13)))
[15, 18, 21, 24]
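Applied to the setup in the question, this might look like the following (a sketch; go, th, theta, and the three vectors are as defined in the question):

final = list(map(lambda a, b, c: go([a, b, c], th, theta), a_vec, b_vec, c_vec))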

You can put all of your lists in a container list like C_list and use map to build a list all_len containing the length of each list, then use a list comprehension to create the list final:
all_len = list(map(len, C_list))
final = [[go([a[i], b[i], c[i]], th, theta) for i in range(li)] for li in all_len]
Also, if a, b, and c have equal lengths, you can use the zip function to combine them and avoid the repeated indexing:
z = list(zip(a, b, c))
final = [[go(z[i], th, theta) for i in range(li)] for li in all_len]

If you have to perform an operation on every item in the list, then you're going to have to visit every item in the list. However, you may gain some speed through the use of list comprehensions: List Comprehensions
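For the loop in the question, that might look like this (a sketch using zip, assuming go, th, and theta as defined above):

final = [go([a, b, c], th, theta) for a, b, c in zip(a_vec, b_vec, c_vec)]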

Related

return highest value of lists

Hello, I have a few lists and I'm trying to create a new list of the highest values, respectively. For example, these are the lists:
list1 = [5, 1, 4, 3]
list2 = [3, 4, 2, 1]
list3 = [10, 2, 5, 4]
this is what I would like it to return:
[10, 4, 5, 4]
I thought that I could do something like this:
largest = list(map(max(list1, list2, list3)))
but I get an error that map requires more than 1 argument.
I also thought I could write if/elif statements with greater-than comparisons, but it seems to compare only the first values and return that whole list as the "greater value".
Thanks for any help.
This is the "zip splat" trick:
>>> lists = [list1, list2, list3]
>>> [max(col) for col in zip(*lists)]
[10, 4, 5, 4]
You could also use numpy arrays:
>>> import numpy as np
>>> np.array(lists).max(axis=0)
array([10, 4, 5, 4])
You have used map incorrectly. Replace that last line with this:
largest = list(map(max, zip(list1, list2, list3)))
In map, the first argument is the function to apply, and the second argument is an iterable whose elements the function is applied to. The zip function lets you iterate over multiple iterables at once, returning tuples of corresponding elements. So that's how this code works!
Passing multiple iterables to map has an implicit zip-like effect on them:
largest = list(map(max, *(list1, list2, list3)))
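For example, with the lists from the question (wrapping in list() so Python 3 shows the values):
>>> list(map(max, *(list1, list2, list3)))
[10, 4, 5, 4]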

I have a problem on python list comprehension code [duplicate]

Is it possible to define a recursive list comprehension in Python?
Possibly a simplistic example, but something along the lines of:
nums = [1, 1, 2, 2, 3, 3, 4, 4]
willThisWork = [x for x in nums if x not in self] # self being the current comprehension
Is anything like this possible?
No, there's no (documented, solid, stable, ...;-) way to refer to "the current comprehension". You could just use a loop:
res = []
for x in nums:
    if x not in res:
        res.append(x)
of course this is very costly (O(N squared)), so you can optimize it with an auxiliary set (I'm assuming you want to keep the order of items in res congruent to the order of items in nums; otherwise, set(nums) would do;-)...:
res = []
aux = set()
for x in nums:
    if x not in aux:
        res.append(x)
        aux.add(x)
this is enormously faster for very long lists (O(N) instead of O(N squared)).
Edit: in Python 2.5 or 2.6, vars()['_[1]'] might actually work in the role you want for self (for a non-nested listcomp)... which is why I qualified my statement by clarifying there's no documented, solid, stable way to access "the list being built up" -- that peculiar, undocumented "name" '_[1]' (deliberately chosen not to be a valid identifier;-) is the apex of "implementation artifacts" and any code relying on it deserves to be put out of its misery;-).
Starting with Python 3.8 and the introduction of assignment expressions (PEP 572, the := operator), which give us the possibility to name the result of an expression, we can reference items already seen by updating a variable within the list comprehension:
# items = [1, 1, 2, 2, 3, 3, 4, 4]
acc = []; [acc := acc + [x] for x in items if x not in acc]
# acc = [1, 2, 3, 4]
This:
Initializes a list acc which represents the running list of elements already seen
For each item, checks whether it's already part of the acc list; and if not:
appends the item to acc (acc := acc + [x]) via an assignment expression
and at the same time uses the new value of acc as the mapped value for this item
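If the input is long, a variation on the same idea (a sketch, not from the original answers) keeps an auxiliary set for O(1) membership tests, relying on set.add returning None:

items = [1, 1, 2, 2, 3, 3, 4, 4]
seen = set()
# "x in seen or seen.add(x)" is falsy only the first time x appears
unique = [x for x in items if not (x in seen or seen.add(x))]
# unique == [1, 2, 3, 4]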
Actually you can! This example, with an explanation, will hopefully illustrate how.
Define a recursive example that returns a number only when it is 5 or more; otherwise, increment it and call the 'check' function again, repeating until the value reaches 5, at which point it is returned:
print([(lambda f,v: v >= 5 and v or f(f,v+1))(lambda g,i: i >= 5 and i or g(g,i+1), i) for i in [1,2,3,4,5,6]])
result:
[5, 5, 5, 5, 5, 6]
essentially the two anonymous functions interact in this way:
let f(g,x) = {
expression, terminal condition
g(g,x), non-terminal condition
}
let g(f,x) = {
expression, terminal condition
f(f,x), non-terminal condition
}
make g and f the 'same' function, except that in one or both you add a clause where the parameter is modified so as to cause the terminal condition to be reached, and then call
f(g,x). In this way g becomes a copy of f, making it like:
f(g,x) = {
expression, terminal condition
{
expression, terminal condition,
g(g,x), non-terminal condition
}, non-terminal condition
}
You need to do this because you can't access the anonymous function itself while it is being executed.
i.e.
(lambda f,v: somehow call the function again inside itself )(_,_)
So in this example, let A = the first function and B = the second. We call A, passing B as f and i as v. Now, since B is essentially a copy of A and it is a parameter that has been passed in, you can call B, which is like calling A.
This generates the factorials in a list:
print([(lambda f,v: v == 0 and 1 or v*f(f,v-1))(lambda g,i: i == 0 and 1 or i*g(g,i-1), i) for i in [1,2,3,5,6,7]])
[1, 2, 6, 120, 720, 5040]
Not sure if this is what you want, but you can write nested list comprehensions:
xs = [[i for i in range(1,10) if i % j == 0] for j in range(2,5)]
assert xs == [[2, 4, 6, 8], [3, 6, 9], [4, 8]]
From your code example, you seem to want to simply eliminate duplicates, which you can do with sets:
xs = sorted(set([1, 1, 2, 2, 3, 3, 4, 4]))
assert xs == [1, 2, 3, 4]
No, it won't work: there is no self to refer to while the list comprehension is being executed. And the main reason, of course, is that list comprehensions were not designed for this use.
No.
But it looks like you are trying to make a list of the unique elements in nums.
You could use a set:
unique_items = set(nums)
Note that items in nums need to be hashable.
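For example, a list of lists raises a TypeError, since lists are not hashable:
>>> set([[1], [2]])
TypeError: unhashable type: 'list'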
You can also do the following, which is as close as I can get to your original idea. But it is not as efficient as creating a set.
unique_items = []
for i in nums:
    if i not in unique_items:
        unique_items.append(i)
Do this:
nums = [1, 1, 2, 2, 3, 3, 4, 4]
set_of_nums = set(nums)
unique_num_list = list(set_of_nums)
or even this:
unique_num_list = sorted(set_of_nums)

Flattening list in python

I have seen many posts regarding how to flatten a list in Python, but I was never able to understand how this works: reduce(lambda x,y:x+y,*myList)
Could someone please explain how this works:
>>> myList = [[[1,2,3],[4,5],[6,7,8,9]]]
>>> reduce(lambda x,y:x+y,*myList)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
Links already posted:
How to print list of list into one single list in python without using any for or while loop?
Flattening a shallow list in Python
Flatten (an irregular) list of lists
If anybody thinks this is a duplicate of another post, I'll remove it once I understand how it works.
Thanks.
What reduce does, in plain English, is that it takes two things:
A function f that:
Accepts exactly 2 arguments
Returns a value computed using those two values
An iterable iter (e.g. a list or str)
reduce computes the result of f(iter[0],iter[1]) (the first two items of the iterable), and keeps track of this value that was just computed (call it temp). reduce then computes f(temp,iter[2]) and now keeps track of this new value. This process continues until every item in iter has been passed into f, and returns the final value computed.
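A concrete trace (reduce is a built-in in Python 2; in Python 3 it lives in functools):

>>> from functools import reduce  # not needed on Python 2
>>> reduce(lambda x, y: x + y, [1, 2, 3, 4])  # ((1 + 2) + 3) + 4
10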
The use of * in passing *myList into the reduce function is that it takes an iterable and turns it into multiple arguments. These two lines do the same thing:
myFunc(10,12)
myFunc(*[10,12])
In the case of myList, you're using a list that contains exactly one list. For that reason, putting the * in front replaces myList with myList[0].
Regarding compatibility, note that the reduce function works totally fine as a built-in in Python 2, but in Python 3 you'll have to do this:
import functools
functools.reduce(function, iterable)
It is equivalent to:
def my_reduce(func, seq, default=None):
    it = iter(seq)
    # start from the first item of the iterable, or from the default
    # value passed to my_reduce
    x = next(it) if default is None else default
    # for each remaining item, update x by applying the function to x and y
    for y in it:
        x = func(x, y)
    return x
>>> my_reduce(lambda a, b: a+b, *myList, default=[])
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> my_reduce(lambda a, b: a+b, *myList)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> from operator import add
>>> my_reduce(add, *myList)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> my_reduce(lambda a, b: a+b, ['a', 'b', 'c', 'd'])
'abcd'
The docstring of reduce has a very good explanation:
reduce(...)
reduce(function, sequence[, initial]) -> value
Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5). If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.
First of all, this is a very bad method. Just so you know.
reduce(f, [a, b, c, d]) runs
f(f(f(a, b), c), d)
Since f is lambda x,y:x+y, this is equivalent to
((a + b) + c) + d
For lists, a + b is the concatenation of the lists, so this joins each list.
This is slow because each step has to make a new list from scratch.
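A faster way to flatten, as a sketch using the standard library, is itertools.chain.from_iterable, which avoids building the intermediate lists:

>>> import itertools
>>> list(itertools.chain.from_iterable([[1, 2, 3], [4, 5], [6, 7, 8, 9]]))
[1, 2, 3, 4, 5, 6, 7, 8, 9]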
First, I don't know why the list is wrapped in another list and then splatted (*). This will work the same way:
>>> myList = [[1,2,3],[4,5],[6,7,8,9]]
>>> reduce(lambda x,y:x+y,myList)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
Explanation: reduce takes a function with two parameters - the accumulator and the element. It calls the function with each element and then sets the accumulator to the result. Therefore, you're basically concatenating all the inner lists together.
Here's a step-by-step explanation:
accumulator is initialized to myList[0] which is [1,2,3]
lambda is called with [1,2,3] and [4,5], it returns [1,2,3,4,5], which is assigned to the accumulator
lambda is called with [1,2,3,4,5] and [6,7,8,9], it returns [1,2,3,4,5,6,7,8,9]
no more elements left, so reduce returns that

Concatenate List Object Name with a Number and Retain the List Python

I'm using Python 2.7. I'm trying to figure out a way to change the names of my lists automatically.
Let me explain: I have multiple lists
list1 = [1, 2, 3, 4, 5]
list2 = [4, 5, 9, 3]
list3 = [8, 4, 3, 2, 1]
I would like to call the lists in a loop to determine which lists contain or do not contain a particular number.
My first thought was
x = "list" + str(i) # (where i iterates in the loop)
print x
However, the above code only gave me the string "list1" (when i=1).
What I want is to be able to call the list that is named list1 and use the .count() method to determine whether or not the number exists; if it doesn't, I want to call the next list, until I'm out of lists (there will eventually be up to 30 lists).
Thanks,
Ryan
You shouldn't approach it like this. Put your lists in a container to iterate over them instead:
In [5]: for l in (list1, list2, list3):
   ...:     print l.count(2)
   ...:
1
0
1
What you could do in a real-life use case is create a list of lists and fill it dynamically.
Then to get the first list that contains a given number, you could do:
In [6]: lists = [list1, list2, list3]
In [7]: next(l for l in lists if 9 in l)
Out[7]: [4, 5, 9, 3]
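Note that next raises StopIteration when no list matches; passing a default avoids that (a small addition, not in the original answer):
In [8]: next((l for l in lists if 99 in l), None)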
Put the lists in a dict:
list1 = [1, 2, 4]
list2 = [2, 5, 6]
dlist = {1: list1, 2: list2}
for k in dlist:
    print dlist[k]
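To get at the original goal with that dict, a sketch (Python 2 style, matching the question) that reports which lists contain a given number:

target = 5
for k in dlist:
    if target in dlist[k]:
        print k  # the key of each list containing target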

How to find elements existing in two lists but with different indexes

I have two lists of the same length which contain a variety of different elements. I'm trying to compare them to find the number of elements which exist in both lists, but have different indexes.
Here are some example inputs/outputs to demonstrate what I mean:
>>> compare([1, 2, 3, 4], [4, 3, 2, 1])
4
>>> compare([1, 2, 3], [1, 2, 3])
0
# Each item in the first list has the same index in the other
>>> compare([1, 2, 4, 4], [1, 4, 4, 2])
2
# The 3rd '4' in both lists don't count, since they have the same indexes
>>> compare([1, 2, 3, 3], [5, 3, 5, 5])
1
# Duplicates don't count
The lists are always the same size.
This is the algorithm I have so far:
def compare(list1, list2):
    # Eliminate any direct matches first, keeping the pairs aligned
    pairs = [(a, b) for (a, b) in zip(list1, list2) if a != b]
    list1 = [a for (a, b) in pairs]
    list2 = [b for (a, b) in pairs]
    out = 0
    for possible in list1:
        if possible in list2:
            index = list2.index(possible)
            del list2[index]
            out += 1
    return out
Is there a more concise and eloquent way to do the same thing?
This Python function does hold for the examples you provided:
def compare(list1, list2):
    # map each element of list1 to its (last) index
    D = {e: i for i, e in enumerate(list1)}
    # count the distinct elements of list2 that occur in list1 at a different index
    return len(set(e for i, e in enumerate(list2) if D.get(e) not in (None, i)))
Since duplicates don't count, you can use sets to find only the unique elements in each list (a set only holds unique elements), then select the elements shared between both and compare their positions using list.index:
def compare(l1, l2):
    s1, s2 = set(l1), set(l2)
    shared = s1 & s2  # intersection, only the elements in both
    return len([e for e in shared if l1.index(e) != l2.index(e)])
You can actually bring this down to a one-liner if you want:
def compare(l1, l2):
    return len([e for e in set(l1) & set(l2) if l1.index(e) != l2.index(e)])
Alternative:
Functionally, you can use the reduce builtin (in Python 3, you have to do from functools import reduce first). This avoids constructing the intermediate list, which saves memory. It uses a lambda function to do the work.
def compare(l1, l2):
    return reduce(lambda acc, e: acc + int(l1.index(e) != l2.index(e)),
                  set(l1) & set(l2), 0)
A brief explanation:
reduce is a functional programming construct that traditionally reduces an iterable to a single item. Here we use reduce to reduce the set intersection to a single value.
lambda functions are anonymous functions. Saying lambda x, y: x + y is like saying def func(x, y): return x + y, except that the function has no name. reduce takes a function as its first argument, and the first argument the lambda receives when used with reduce is the result of the previous call, the accumulator.
set(l1) & set(l2) is a set consisting of the unique elements that are in both l1 and l2. It is iterated over, and each element is taken out one at a time and used as the second argument to the lambda function.
0 is the initial value for the accumulator. We use this since we assume there are 0 shared elements with different indices to start.
I don't claim it is the simplest answer, but it is a one-liner.
import numpy as np
import itertools
l1 = [1, 2, 3, 4]
l2 = [1, 3, 2, 4]
print(len(np.unique(list(itertools.chain.from_iterable([[a, b] for a, b in zip(l1, l2) if a != b])))))
Let me explain:
[[a,b] for a,b in zip(l1,l2) if a != b]
is the list of pairs from zip(l1,l2) with differing items. The number of elements in this list is the number of positions where the items at the same position differ between the two lists.
Then, list(itertools.chain.from_iterable(...)) merges the component lists of a list. For instance:
>>> list(itertools.chain.from_iterable([[3,2,5],[5,6],[7,5,3,1]]))
[3, 2, 5, 5, 6, 7, 5, 3, 1]
Then, discard duplicates with np.unique(), and take len().
