I am using math.erf to find the error function of each element in an array.
# Import the erf function from the math library
from math import erf
# Import the numpy library to represent the input data
import numpy as np
# Create a dummy np.array
dummy_array = np.arange(20)
# Apply the erf function to the array to calculate the error function of each element
erf_array = erf(dummy_array)
I am getting an error because this function cannot be applied to a whole array. Is there a way to apply the error function to the whole array (a vectorised approach) without looping through each element? (The loop will take a lot of time, as the arrays will be large.)
from scipy import special
import numpy as np
dummy_array = np.arange(20)
erf_array = special.erf(dummy_array)
It is essential that you import the special subpackage as from scipy import special. Importing just scipy as import scipy and then calling scipy.special.erf() won't work; the special subpackage has to be imported explicitly.
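As a quick sanity check (a minimal sketch, not part of the original answer), you can confirm that the vectorised result matches math.erf applied element by element:
from math import erf
import numpy as np
from scipy import special
dummy_array = np.arange(20)
vectorised = special.erf(dummy_array)
# Apply math.erf element by element for comparison
elementwise = np.array([erf(x) for x in dummy_array])
print(np.allclose(vectorised, elementwise))  # expected: True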
Alternatively, you can use a list comprehension to apply the function to each element:
erf_array = [erf(element) for element in dummy_array]
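If you need the result back as a NumPy array rather than a Python list (a small sketch, assuming that is what you want downstream), you can wrap the comprehension in np.array:
from math import erf
import numpy as np
dummy_array = np.arange(20)
# Wrap the comprehension so the result is an ndarray again
erf_array = np.array([erf(element) for element in dummy_array])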
Related
Is it okay to call any numpy function without using the library name before the function (example: numpy.linspace())? Can we call it simply
linspace()
instead of calling
numpy.linspace()
You can import it like this
from numpy import linspace
and then use it like this
a = linspace(1, 10)
Yes, it's completely fine when you are importing the function separately from numpy, such as
from numpy import linspace
# You can call the function by just writing its name
result = linspace(3, 50)
but the convention is to alias the package as np:
import numpy as np
# Then call the function with the short name
result = np.linspace(3, 50)
Aliasing can be helpful when working with a large number of libraries, and it also improves code readability.
If you import the function from the library directly, there is nothing wrong with calling it by its bare name.
i.e.
from numpy import linspace
# Then call linspace by itself
a = linspace(1, 10)
That being said, many find that having numpy (often shortened to np) in front of function names helps improve code readability. Since almost everyone does this with certain libraries (TensorFlow as tf, NumPy as np, Pandas as pd), some may view it in a poor light if you simply import and use the function directly.
I would recommend importing the library as the shortened name and then using it appropriately.
i.e.
import numpy as np
# Then call np.linspace
a = np.linspace(1, 10)
import math
import matplotlib.pyplot as plt
import numpy as np
hc=1.23984186E3
k=1.3806503E-23
T=(np.linspace(5000,10000,50))
lamb = np.linspace(0.00001,.0000001,50)
print(len(lamb))
print (len(T))
planck_top=8*math.pi*hc
planck_bottom1=lamb*1000000
planck_bottom2=math.exp(hc//lamb*k*T)
planck=planck_top//planck_bottom
I keep getting this error here:
planck_bottom2=math.exp(hc//lamb*k*T)
TypeError: only size-1 arrays can be converted to Python scalars
I am not sure how to correct this, as we are dealing with a large array here.
hc//lamb*k*T returns an array, and math.exp() only works with a single number (a scalar). So the following would work:
planck_bottom2=[math.exp(i) for i in hc//lamb*k*T]
This returns a list in which each element of the array hc//lamb*k*T has been exponentiated.
Another option is to use numpy instead of math.
Using NumPy for Array Calculations
Just replace math with np, since you already import numpy as np.
planck_top=8*np.pi*hc
planck_bottom2 = np.exp(hc//lamb*k*T)
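Putting it together with the arrays from the question (a sketch; the formula is taken verbatim from the question, including the // operator, and is not checked physically):
import numpy as np
hc = 1.23984186E3
k = 1.3806503E-23
T = np.linspace(5000, 10000, 50)
lamb = np.linspace(0.00001, 0.0000001, 50)
planck_top = 8 * np.pi * hc                   # scalar
planck_bottom1 = lamb * 1000000               # element-wise on the array
planck_bottom2 = np.exp(hc // lamb * k * T)   # np.exp accepts whole arrays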
About using math and/or numpy:
As an additional piece of information, I encourage you to look at the following references for evaluating when/if you should choose math and/or numpy.
What is the difference between import numpy and import math [duplicate]
Are NumPy's math functions faster than Python's?
"Math module functions cannot be used directly on ndarrays because they only accept scalar, not array arguments."
I have a module that heavily makes use of numpy:
from numpy import array, median, nan, percentile, roll, sqrt, sum, transpose, unique, where
Is it better practice to keep the namespace clean by using
import numpy as np
and then, when I need array, just use np.array, for example?
This module also gets called repeatedly, say a few million times, and keeping the namespace clean appears to add a bit of overhead.
setup = '''import numpy as np'''
function = 'x = np.sum(np.array([1,2,3,4,5,6,7,8,9,10]))'
print(min(timeit.Timer(function, setup=setup).repeat(10, 300000)))
1.66832
setup = '''from numpy import arange, array, sum'''
function = 'x = sum(array([1,2,3,4,5,6,7,8,9,10]))'
print(min(timeit.Timer(function, setup=setup).repeat(10, 300000)))
1.65137
Why does using np.sum take more time than using sum?
You are right, it is better to keep the namespace clean. So I would use
import numpy as np
It keeps your code more readable: when you see a call like np.sum(array), you are reminded that you are working with a numpy array. The second reason is that many numpy functions have identical names to functions in other modules like scipy... If you use both, it's always clear which one you are using.
As you can see in the test you made, the performance difference is there, and if you really need the performance you could do it the other way.
The difference in performance is that with a specific function import, the reference to the function in the numpy module is resolved once, at the beginning of the script.
With the general module import, you import only a reference to the module, and Python needs to resolve/find the function inside that module at every call.
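If the call really sits in a hot path, a common micro-optimisation (a sketch, not from the original answers) is to bind the attribute to a local name once, so the lookup is not repeated on every call:
import numpy as np
def total_of_squares(arrays):
    # Bind the attribute lookups to local names once, outside the loop
    np_sum = np.sum
    np_square = np.square
    return [np_sum(np_square(a)) for a in arrays]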
You could have the best of both worlds (faster name resolution, and non-shadowing), if you're ok with defining your own aliasing (subject to your team conventions, of course):
import numpy as np
(np_sum, np_min, np_arange) = (np.sum, np.min, np.arange)
x = np_arange(24)
print (np_sum(x))
Alternative syntax to define your aliases:
from numpy import \
arange as np_arange, \
sum as np_sum, \
min as np_min
I am using Python to solve an equation. I used the Bessel function from scipy.special, and it was working. Now I want to find the variable for which the Bessel function takes a given value. For example, I supplied the order (1) and the value (0.44005058574) in Python, but it is not working. (In order to find the variable, I also used a solver.)
How can I solve the problem?
import numpy as np
import scipy.special as sc
import math
from sympy import Symbol
from sympy.solvers import solve
x=Symbol('x')
y=sc.jn(1,x)-0.44005058574
print(solve(x))
As the error output hints, the function scipy.special.jn does not know how to handle the symbolic object x from sympy. Instead, you should use a numerical approach:
>>> from scipy import optimize
>>> f = lambda x: sc.jn(1, x) - 0.44005058574
>>> root = optimize.newton(f, 1.0)
>>> print(root)
0.9999999999848267
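If you would rather stay entirely inside sympy, it ships its own Bessel function and a numerical solver (a sketch using sympy.besselj and nsolve; the starting guess 1.0 is an assumption based on where the root was found above):
from sympy import Symbol, besselj, nsolve
x = Symbol('x')
# nsolve needs a numeric starting point; 1.0 is chosen because
# the root is expected to be near 1
root = nsolve(besselj(1, x) - 0.44005058574, x, 1.0)
print(root)  # should agree with the scipy/newton result above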
I have the following question concerning universal functions in numpy. How can I define a universal function which returns the same type of numpy array as the numpy built-in functions?
The following sample code:
import numpy as np
def mysimplefunc(a):
    return np.sin(a)
mysimpleufunc = np.frompyfunc(mysimplefunc,1,1)
a = np.linspace(0.0,1.0,3)
print(np.sin(a).dtype)
print(mysimpleufunc(a).dtype)
results in the output:
float64
object
Any help is very much appreciated :)
PS.: I am using python 3.4
I found the solution
(see also discussion on stackoverflow):
use vectorize instead of frompyfunc
import numpy as np
def mysimplefunc(a):
    return np.sin(a)
mysimpleufunc = np.vectorize(mysimplefunc)
a = np.linspace(0.0,1.0,3)
print(np.sin(a).dtype)
print(mysimpleufunc(a).dtype)
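If you want to be explicit about the output dtype, np.vectorize also accepts an otypes argument (a small sketch); this also helps when the input array can be empty, since vectorize then cannot infer the type from a test call:
import numpy as np
def mysimplefunc(a):
    return np.sin(a)
# Declare the output dtype explicitly instead of letting vectorize infer it
mysimpleufunc = np.vectorize(mysimplefunc, otypes=[np.float64])
a = np.linspace(0.0, 1.0, 3)
print(mysimpleufunc(a).dtype)  # float64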