I am using Python 3.5. I would like to use the mpmath.invertlaplace function used in this question. Unfortunately I am having some trouble:
>>> import mpmath as mp
>>> import numpy as np
>>> def f(s):
        return 1/s
>>> t = np.linspace(0.01,0.5,10)
>>> G = []
>>> for i in range(0,10):
G.append(mp.invertlaplace(f, t[i], method = 'dehoog', degree = 18))
Traceback (most recent call last):
File "<pyshell#254>", line 2, in <module>
G.append(mp.invertlaplace(f, t[i], method = 'dehoog', degree = 18))
AttributeError: module 'mpmath' has no attribute 'invertlaplace'
Has this function been added too recently for my Python 3.5 install to pick it up? Am I missing something here? I feel like this should work...
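One quick, hedged first step is to check whether the installed mpmath release is simply too old to contain invertlaplace (the function is only present in sufficiently recent mpmath releases); if so, pip install --upgrade mpmath would be the fix:

import mpmath as mp

print(mp.__version__)                  # version string of the installed mpmath
print(hasattr(mp, 'invertlaplace'))    # False here would confirm the install is too old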
Related
from sympy import symbols, lambdify

x = symbols('x')
ch = 'exp(cos(cos(exp((sin(-0.06792841536110628))**(-6.045461643745118)))))'
f = lambdify(x, ch, "numpy")
print(float(f(2)))
It does not work: the program just keeps running and never finishes (no error is issued).
My goal is to catch this kind of case (among several others) with a try/except, but I can't, since no error is raised.
Why is no error raised?
How can I avoid these cases?
Thanks for helping me!
In general, I'm not sure you can. SymPy or NumPy will keep trying to compute the number until precision is exhausted. But you can create a function that will raise an error if numbers are outside the range you care about:
>>> from sympy import cos as _cos, sin, I, exp
>>> def cos(x):
...     if abs(x) > 10**20: raise ValueError
...     return _cos(x)
>>> exp(cos(cos(exp(5*(1+I)))))
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 2, in cos
ValueError
>>> f = lambda x: exp(cos(cos(exp(x))))
>>> f(sin(-0.06792841536110628)**-6.045461643745118)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <lambda>
File "<string>", line 2, in cos
ValueError
But you have to think carefully about when you want to raise such an error. For example, SymPy has no trouble computing f(100) or f(100*I) if the non-error-catching cos is used, yet the guarded version would reject f(100), since exp(100) already exceeds the 10**20 bound. So think about when you actually want the error to be raised.
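To tie this back to the try/except the question asked for, here is a minimal sketch built on the same guarded cos (safe_eval is just an illustrative name, and the 10**20 bound is arbitrary):

from sympy import cos as _cos, sin, exp, I

def cos(x):
    # guarded version from above: reject absurdly large arguments
    if abs(x) > 10**20:
        raise ValueError("argument out of bounds")
    return _cos(x)

def safe_eval(x):
    try:
        return exp(cos(cos(exp(x))))
    except ValueError:
        return None   # skip this case instead of letting the evaluation blow up

print(safe_eval(5*(1 + I)))                                      # None
print(safe_eval(sin(-0.06792841536110628)**-6.045461643745118))  # None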
lambdify is a lexical translator, converting a SymPy expression into a Python/NumPy function.
Make a string with a symbol:
In [27]: ch = 'exp(cos(cos(exp((sin(x))**(-6.045461643745118)))))'
sympify(ch) has no problem, because it doesn't need to do any numeric calculation. So lambdify also works:
In [28]: f=lambdify(x,ch)
In [29]: f?
Signature: f(x)
Docstring:
Created with lambdify. Signature:
func(x)
Expression:
exp(cos(cos(exp((sin(x))**(-6.045461643745118)))))
Source code:
def _lambdifygenerated(x):
    return (exp(cos(cos(exp(sin(x)**(-6.045461643745118))))))
The equivalent mpmath:
def _lambdifygenerated(x):
    return (exp(cos(cos(exp(sin(x)**(mpf((1, 54452677612106279, -53, 56))))))))
And a working numeric evaluation:
In [33]: f(0j)
Out[33]: mpc(real='nan', imag='0.0')
I am getting an overflow error
import numpy as np
pi = np.pi
from scipy.integrate import quad
from math import exp
hbar = 1.055e-34
boltz = 1.381e-23
c = 2.998e8
def z(x):
    return (x**3)/(exp(x)-1)
B=quad(z,0,np.inf)
A= ((boltz**4)*B)/(4*(pi**2)*(c**2)*(hbar**3))
print (A)
It is giving me an overflow error at line 11, i.e., the line return (x**3)/(exp(x)-1).
You're exceeding the range of a double-precision float (exp overflows just past x = 709), and Python is freaking out.
>>> def z(x):
...     return (x**3)/(exp(x)-1)
...
>>> z(709)
4.336616682334302e-300
>>> z(710)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in z
OverflowError: math range error
Just integrate up to ~700 and you'll be fine.
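As a rough sketch of that suggestion (the exact value of the infinite integral is pi**4/15 ≈ 6.4939, so the truncated result can be checked against it):

import numpy as np
from math import exp
from scipy.integrate import quad

def z(x):
    if x == 0:
        return 0.0                    # removable singularity: x**3/(exp(x)-1) -> 0 as x -> 0
    return (x**3) / (exp(x) - 1)

B, err = quad(z, 0, 700)              # truncate at ~700 to stay below the exp overflow
print(B, np.pi**4 / 15)               # both ~6.4939394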
You can use np.exp instead of math.exp: it will issue a RuntimeWarning for large numbers and return np.inf (and 1/np.inf = 0), instead of raising an OverflowError.
def z(x):
    return (x**3)/(np.exp(x)-1)  # replace math.exp with np.exp
B, err = quad(z, 0, np.inf)  # quad returns (value, error estimate), so unpack both or use B = quad(...)[0]
A= ((boltz**4)*B)/(4*(pi**2)*(c**2)*(hbar**3))
print(A)
>> 5.668949306250541e-08
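If the RuntimeWarning from the overflowing np.exp is itself a nuisance, it can be silenced locally; a small sketch using numpy's errstate context manager:

import numpy as np
from scipy.integrate import quad

def z(x):
    return (x**3) / (np.exp(x) - 1)

with np.errstate(over='ignore'):      # suppress the overflow RuntimeWarning from np.exp
    B, err = quad(z, 0, np.inf)
print(B)                              # ~6.4939, i.e. pi**4 / 15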
This is the code I am running:
import Qubit
from Z import Z
q = Qubit(Z.V)
Qubit code looks like this:
from Z import Z

class Qubit:
    def __init__(self, spin):
        if isinstance(spin, Z):
            print('success')
Z code looks like this:
from enum import Enum

class Z(Enum):
    H = 0
    V = 1
When I run the code, I get this error:
Traceback (most recent call last):
File "main.py", line 4, in <module>
q = Qubit(Z.V)
TypeError: 'module' object is not callable
Am I doing something wrong?
Yes: with import Qubit, the name Qubit refers to the module, not the class defined inside it, so Qubit(Z.V) tries to call the module and fails with that TypeError. Refer to the class as Qubit.Qubit(Z.V), or replace import Qubit with from Qubit import Qubit in main.py. (Z.V already works there because you use from Z import Z.)
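A minimal sketch of the corrected main.py under that reading (keeping the other two files as posted):

from Qubit import Qubit   # import the class, not just the module
from Z import Z

q = Qubit(Z.V)             # prints 'success'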
I have a function returned by theano.function(), and I want to use it within multiprocessing for speedup. The following is a simplified demo script to show where I run into problem:
import numpy as np
from multiprocessing import Pool
from functools import partial
import theano
from theano import tensor
def get_theano_func():
    x = tensor.dscalar()
    y = x + 0.1
    f = theano.function([x], [y])
    return f

def func1(func, x):
    return func(x)

def MPjob(xlist):
    f = get_theano_func()
    fp = partial(func1, func=f)
    pool = Pool(processes=5)
    Results = pool.imap(fp, xlist)
    Y = []
    for y in Results:
        Y.append(y[0])
    pool.close()
    return Y

if __name__ == '__main__':
    xlist = np.arange(0, 5, 1)
    Y = MPjob(xlist)
    print(Y)
In the code above, the theano function 'f' is fed to 'func1()' as an input argument. If MPjob() ran correctly, it would return [0.1, 1.1, 2.1, 3.1, 4.1]. However, an exception "TypeError: func1() got multiple values for argument 'func'" is raised.
The full traceback log is as follows:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Python35\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
TypeError: func1() got multiple values for argument 'func'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "F:/DaweiLeng/Code/workspace/Python/General/theano_multiprocess_debug.py", line 36, in <module>
Y = MPjob(xlist)
File "F:/DaweiLeng/Code/workspace/Python/General/theano_multiprocess_debug.py", line 29, in MPjob
for y in Results:
File "C:\Python35\lib\multiprocessing\pool.py", line 695, in next
raise value
TypeError: func1() got multiple values for argument 'func'
Anyone got a hint?
Turns out it's related to how partial() binds the arguments. The full explanation is here: https://github.com/Theano/Theano/issues/4720#issuecomment-232029702
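A minimal illustration of the clash, independent of Theano and multiprocessing (reusing the names from the question):

from functools import partial

def func1(func, x):
    return func(x)

f = lambda v: v + 0.1
fp = partial(func1, func=f)

# pool.imap(fp, xlist) ends up calling fp(x) with x as a positional argument,
# which binds to func1's first parameter 'func' -- but 'func' was already
# supplied by partial as a keyword, hence "got multiple values for argument 'func'".
try:
    fp(3.0)
except TypeError as e:
    print(e)

# One way out: bind the function positionally so the mapped value lands on x.
fp_ok = partial(func1, f)
print(fp_ok(3.0))   # 3.1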
I just installed the newest development version of scipy, 0.17.0.dev0+7dd2b91, and tested it with scipy.test().
I get a single failure:
======================================================================
FAIL: test_nanmedian_all_axis (test_stats.TestNanFunc)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_stats.py", line 242, in test_nanmedian_all_axis
assert_equal(len(w), 4)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 334, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
ACTUAL: 1
DESIRED: 4
This failure corresponds to the following test function:
class TestNanFunc(TestCase):
    def __init__(self, *args, **kw):
        TestCase.__init__(self, *args, **kw)
        self.X = X.copy()
        self.Xall = X.copy()
        self.Xall[:] = np.nan
        self.Xsome = X.copy()
        self.Xsomet = X.copy()
        self.Xsome[0] = np.nan
        self.Xsomet = self.Xsomet[1:]

    def test_nanmedian_all_axis(self):
        # Check nanmedian when all values are nan.
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter('always')
            m = stats.nanmedian(self.Xall.reshape(3,3), axis=1)
            assert_(np.isnan(m).all())
            assert_equal(len(w), 4)
            assert_(issubclass(w[-1].category, RuntimeWarning))
I was curious about why I was getting this failure, so I carried out the test in the IPython interpreter:
In [1]: import scipy.stats as stats
In [2]: from numpy import array
In [3]: X = array([1,2,3,4,5,6,7,8,9], float)
In [4]: Xall = X.copy()
In [5]: import numpy as np
In [6]: Xall[:] = np.nan
In [7]: import warnings
In [8]: with warnings.catch_warnings(record=True) as w:
   ...:     warnings.simplefilter('always')
   ...:     m = stats.nanmedian(Xall.reshape(3, 3), axis=1)
   ...:     print len(w)
   ...:
4
As you can see, here I get the length for w expected by the test -- i.e., if I had gotten this value when running the test, it would have passed.
What's going on here? What might I do to uncover the cause of the test failure?
Update
I just re-ran the test, and it passed. Strange! I'm wondering whether the difference is that earlier I did this:
>>> import numpy as np
>>> import scipy
>>> np.test()
>>> scipy.test()
And just now I did this:
>>> import scipy
>>> scipy.test()
Maybe the numpy test suite messes with something that is then used by the scipy test suite?