I have two arrays of the same length as shown below.
import numpy as np
y1 = [12.1, 6.2, 1.4, 0.8, 5.6, 6.8, 8.5]
y2 = [8.2, 5.6, 2.8, 1.4, 2.5, 4.2, 6.4]
y1_a = np.array(y1)
y2_a = np.array(y2)
print(y1_a)
print(y2_a)
for i in range(len(y2_a)):
    y3_a[i] = abs(y2_a[i] - y2_a[i])
I am computing the absolute difference between the two arrays at each index. Wherever the absolute difference exceeds 2.0, I have to replace the value of y1_a with the value of y2_a and write the result to a new array variable y3_a. The starter code is shown above.
First of all, let NumPy do the heavy lifting for you. You can calculate the absolute differences without a manual for loop:
abs_diff = np.abs(y2_a - y1_a) # I assume your original code has a typo
Now you can get all the values where the absolute difference is more than 2.0:
y3_a = y1_a.copy()  # copy, so that the original y1_a is not modified by the masked assignment
y3_a[abs_diff > 2.0] = y2_a[abs_diff > 2.0]
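If you prefer a single expression, np.where does the same per-element selection (keep y2_a where the difference exceeds 2.0, otherwise keep y1_a):
y3_a = np.where(np.abs(y2_a - y1_a) > 2.0, y2_a, y1_a)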
I'm trying to obtain a value from a sorted list of α-values (e.g. 0.01, 0.2, 0.5, 1.1, 1.5, 2.4, 3.1, 4.0, 5.7, 6.3) with the confidence level set at 0.8: I want the value at the position reached after traversing 80% of the array. I want this alpha score to build prediction intervals.
alpha_scores = np.array([0.01, 0.2, 0.5, 1.1, 1.5, 2.4, 3.1, 4.0, 5.7, 6.3])
confidence_level = 0.80
confidence_percentile = int(np.floor(confidence_level * (alpha_scores.size + 1))) - 1  # calculate the confidence percentile
alpha_index = min(max(confidence_percentile, 0), alpha_scores.size - 1)  # clamp to a valid index
err_dist = alpha_scores[alpha_index]
Would this be the correct way to obtain this? I get a score, but it does not always correspond to the value I expect.
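For reference, a minimal sketch of one way to do this lookup, assuming the intent is the ceil((n + 1) * confidence_level)-th smallest score as used in split conformal prediction (that convention is my assumption, not something stated above):
n = alpha_scores.size
alpha_index = int(np.ceil((n + 1) * confidence_level)) - 1  # 0-based index, assumed convention
alpha_index = min(max(alpha_index, 0), n - 1)               # clamp to a valid index
err_dist = alpha_scores[alpha_index]                        # -> 5.7 for this example
Whether the 8th or the 9th element counts as "80% of the way through" depends on the convention you pick, which is likely why the result does not always match your expectation.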
The relevant excerpt of my code is as follows:
import numpy as np
def create_function(duration, start, stop):
    rates = np.linspace(start, stop, duration*1000)
    return rates

def generate_spikes(duration, start, stop):
    rates = [create_function(duration, start, stop)]
    array = [np.arange(0, (duration*1000), 1)]
    start_value = [np.repeat(start, duration*1000)]
    double_array = [np.add(array, array)]
    times = np.arange(np.add(start_value, array), np.add(start_value, double_array), rates)
    return times/1000.
I know this is really inefficient coding (especially the start_value and double_array stuff), but it's all a product of trying to somehow use arange with lists as my inputs.
I keep getting this error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
Essentially, an example of what I'm trying to do is this:
If I had two arrays a = [1, 2, 3, 4] and b = [0.1, 0.2, 0.3, 0.4], I'd want to use np.arange to generate [1.1, 1.2, 1.3, 2.2, 2.4, 2.6, 3.3, 3.6, 3.9, 4.4, 4.8, 5.2]. (I'd be using a different step size for every element in the array.)
Is this even possible? And if so, would I have to flatten my list?
You can use broadcasting there for efficiency purposes -
(a + (b[:,None] * a)).ravel('F')
Sample run -
In [52]: a
Out[52]: array([1, 2, 3, 4])
In [53]: b
Out[53]: array([ 0.1, 0.2, 0.3, 0.4])
In [54]: (a + (b[:,None] * a)).ravel('F')
Out[54]:
array([ 1.1, 1.2, 1.3, 1.4, 2.2, 2.4, 2.6, 2.8, 3.3, 3.6, 3.9,
4.2, 4.4, 4.8, 5.2, 5.6])
Looking at the expected output, it seems you are using just the first three elements of b for the computation. So, to achieve that target, we just slice the first three elements and do the same computation, like so -
In [55]: (a + (b[:3,None] * a)).ravel('F')
Out[55]:
array([ 1.1, 1.2, 1.3, 2.2, 2.4, 2.6, 3.3, 3.6, 3.9, 4.4, 4.8,
5.2])
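To spell out what the broadcasting does, here is the same computation with the intermediate steps named (the variable names are mine, purely for illustration):
scaled = b[:, None] * a    # shape (4, 4): scaled[i, j] = b[i] * a[j]
summed = a + scaled        # shape (4, 4): summed[i, j] = a[j] * (1 + b[i])
out = summed.ravel('F')    # column-major flatten: all of column 0, then column 1, ...
The 'F' (Fortran) order walks down each column first, which is what groups all the multiples of a[0] before the multiples of a[1], and so on.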
I was wondering whether it is possible to optimise the following using Numpy or mathematical trickery.
def f1(g, b, dt, t1, t2):
    p = np.copy(g)
    for i in range(dt):
        p += t1*np.tanh(np.dot(p, b)) + t2*p
    return p
where g is a vector of length n, b is an n×n matrix, dt is the number of iterations, and t1 and t2 are scalars.
I have quickly run out of ideas on how to optimise this further, because p is used within the loop in all three terms of the expression: it is added to itself, it appears in the dot product, and it is multiplied by a scalar.
But maybe there is a different way to represent this function or there are other tricks to improve its efficiency. If possible, I would prefer not to use Cython etc., but I'd be willing to use it if the speed improvements are significant. Thanks in advance, and apologies if the question is out of scope somehow.
Update:
The answers provided so far are more focused on what the values of the input/output could be, to avoid unnecessary operations. I have now updated the MWE with proper initialisation values for the variables (I didn't expect the optimisation ideas to come from that side -- apologies). g will be in the range [-1, 1] and b will be in the range [-infinity, infinity]. Approximating the output is not an option, because the returned vectors are later passed to an evaluation function, and an approximation may return the same vector for fairly similar inputs.
MWE:
import numpy as np
import timeit
iterations = 10000
setup = """
import numpy as np
n = 100
g = np.random.uniform(-1, 1, (n,)) # Updated.
b = np.random.uniform(-1, 1, (n,n)) # Updated.
dt = 10
t1 = 1
t2 = 1/2
def f1(g, b, dt, t1, t2):
    p = np.copy(g)
    for i in range(dt):
        p += t1*np.tanh(np.dot(p, b)) + t2*p
    return p
"""
functions = [
"""
p = f1(g, b, dt, t1, t2)
"""
]
if __name__ == '__main__':
    for function in functions:
        print(function)
        print('Time = {}'.format(timeit.timeit(function, setup=setup,
                                               number=iterations)))
Getting this code to run much faster without Cython or a JIT will be very hard; some mathematical trickery may be the easier approach. It appears to me that if we define k(g, b) = f1(g, b, n+1, t1, t2)/f1(g, b, n, t1, t2) for positive integer n, then k should have a limit of t1+t2 (I don't have a solid proof yet, just a gut feeling; it may also be a special case for E(g)=0 and E(p)=0). For t1=1 and t2=0.5, k() appears to approach the limit fairly quickly: for n>100 it is almost constant at 1.5.
So I think a numerical approximation approach should be the easiest one.
In [81]:
t2=0.5
data=[f1(g, b, i+2, t1, t2)/f1(g, b, i+1, t1, t2) for i in range(1000)]
In [82]:
import matplotlib.pyplot as plt  # needed for the plotting below
plt.figure(figsize=(10,5))
plt.plot(data[0], '.-', label='1')
plt.plot(data[4], '.-', label='5')
plt.plot(data[9], '.-', label='10')
plt.plot(data[49], '.-', label='50')
plt.plot(data[99], '.-', label='100')
plt.plot(data[999], '.-', label='1000')
plt.xlim(xmax=120)
plt.legend()
plt.savefig('limit.png')
In [83]:
data[999]
Out[83]:
array([ 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5])
I hesitate to give this as an answer, as I think it may be an artifact of the input data you gave us. Nevertheless, note that tanh(x) ~ 1 for x >> 1. Every time I've run it, your input data has x = np.dot(p, b) >> 1, hence we can replace f1 with f2.
def f1(g, b, dt, t1, t2):
    p = np.copy(g)
    for i in range(dt):
        p += t1*np.tanh(np.dot(p, b)) + t2*p
    return p

def f2(g, b, dt, t1, t2):
    p = np.copy(g)
    for i in range(dt):
        p += t1 + t2*p
    return p

print(np.allclose(f1(g, b, dt, t1, t2), f2(g, b, dt, t1, t2)))
This indeed shows that the two functions are numerically equivalent for this input. Note that f2 is a non-homogeneous linear recurrence relation, and can be solved in one step if you choose to do so.
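For completeness, a sketch of that closed form (my own derivation, assuming t2 != 0): unrolling p <- (1 + t2)*p + t1 for dt steps gives p = (1 + t2)**dt * g + t1*((1 + t2)**dt - 1)/t2, so f2 can be evaluated without the loop:
def f3(g, b, dt, t1, t2):
    # closed form of f2's recurrence; b is unused, kept only for signature parity
    r = (1 + t2)**dt
    return r*g + t1*(r - 1)/t2
Under the same input assumptions, np.allclose(f2(g, b, dt, t1, t2), f3(g, b, dt, t1, t2)) should return True.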
I have a one dimensional NumPy array:
a = numpy.array([2,3,3])
I would like to have the product of all elements, 18 in this case.
The only way I could find to do this would be:
b = reduce(lambda x,y: x*y, a)
Which looks pretty, but is not very fast (I need to do this a lot).
Is there a numpy method that does this? If not, what is the most efficient way of doing this? My real world arrays have 39 float elements.
In NumPy you can try:
numpy.prod(a)
For a larger array numpy.arange(1,40) / 10.:
array([ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1,
1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. , 2.1, 2.2,
2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. , 3.1, 3.2, 3.3,
3.4, 3.5, 3.6, 3.7, 3.8, 3.9])
your reduce(lambda x,y: x*y, a) needs 24.2µs,
numpy.prod(a) needs 3.9µs.
EDIT: a.prod() needs 2.67µs. Thanks to J.F. Sebastian!
Or if the loss of numerical accuracy is not a problem, we can do
>>> numpy.exp(numpy.sum(numpy.log(a)))
17.999999999999996
>>> numpy.prod(a)
18
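As a side note on that trade-off (my example, not part of the original answer): the log-sum form also avoids underflow and overflow when the array is long, at the cost of a little precision:
a = numpy.full(400, 0.1)
numpy.prod(a)             # 0.0 -- the product underflows double precision
numpy.sum(numpy.log(a))   # about -921.03, the log of the product is still representable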