I have the following integral, which I have computed with other tools and matched against a known solution. However, SymPy returns something that appears to be garbage. Why is this happening? Have I failed to state explicitly something that Mathematica and Integral Calculator assume?
from sympy import symbols, integrate, oo
x, z = symbols('x,z')
expr = 1/((x**2 + z**2)**(3/2))
integrate(expr, (x,-oo,oo))
This gives a result that does not match the expected answer.
I know the result to be 2/(z^2). Since I don't know how (or whether it's even possible) to enter LaTeX here, in plain notation the operation attempted and the desired result are:
∫_{-∞}^{∞} dx / (x^2 + z^2)^(3/2) = 2/z^2
You have **(3 / 2), which is a float: in Python, 3/2 evaluates to 1.5 before SymPy ever sees it, so the exponent loses its exact rational form. This is something SymPy struggles with, and it is one of the issues mentioned under Gotchas and Pitfalls in the documentation. I found this via the GitHub issue "integrate((x-t)**(-1/2)*t,(t,0,x)) raises ValueError".
You need to make sure that your exponent is a rational number. There are a few ways to do this. Below, we use S (sympify):
from sympy import symbols, integrate, oo, S
x, z = symbols('x,z')
expr = 1/((x**2 + z**2)**(S(3)/2))
integrate(expr, (x,-oo,oo))
Which gives the desired output.
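For reference, two other equivalent ways to keep the exponent exact (both produce the same rational exponent as S(3)/2):

from sympy import symbols, integrate, oo, Rational, sqrt

x, z = symbols('x,z')
expr = 1/((x**2 + z**2)**Rational(3, 2))   # an explicit Rational exponent
# or, equivalently, build the same power from sqrt:
# expr = 1/sqrt(x**2 + z**2)**3
integrate(expr, (x, -oo, oo))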
I'm trying to use Z3 to determine whether an expression is satisfiable. I have created all of the equations that I want to use as constraints with SymPy, and the variables are SymPy Symbols.
The equations are, for example:
"f1 == (x - y >= 0)"
"f2 == (x_y >= 1)"
"f3 == f1 - f2 >= 0"
I want the solver to return values for x, y, f1, f2, f3 that make the equations satisfiable.
s = z3.Solver()
for m, n in zip(allvariables[3:len(allvariables)-1], allequations):
    print('variable', m)
    print('equations', n)
    s.add(m == n)
print('solver check', s.check())
while s.check() == sat:
    print(s)
In general, you cannot mix and match z3 and SymPy symbols. They live in different worlds and cannot be readily interchanged.
In essence, you'll have to parse the SymPy expression and reconstruct it in terms of z3. This can be achieved in a variety of ways, but it's not a cheap/easy thing to do in general, since you'd need to handle large swaths of what SymPy can express. However, if your expressions are "simple" enough, you can perhaps get away with a simple translator; a minimal sketch is given below. Start by looking at how SymPy expressions can be turned into a parse tree. This is a good place to start: https://docs.sympy.org/latest/tutorials/intro-tutorial/manipulation.html
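Purely as an illustration, here is a minimal sketch of such a translator. It assumes the constraints use nothing beyond symbols, integer/rational constants, +, *, and comparisons; the helper name sympy_to_z3 is made up for this example:

from functools import reduce
import operator
import sympy
from sympy.core.relational import Relational
import z3

def sympy_to_z3(expr, z3_vars):
    # Leaves: symbols and numeric constants
    if isinstance(expr, sympy.Symbol):
        return z3_vars[expr.name]
    if isinstance(expr, sympy.Integer):
        return z3.IntVal(int(expr))
    if isinstance(expr, sympy.Rational):
        return z3.RealVal(str(expr))
    # Arithmetic nodes: translate the children, then fold them back together
    if isinstance(expr, sympy.Add):
        return reduce(operator.add, (sympy_to_z3(a, z3_vars) for a in expr.args))
    if isinstance(expr, sympy.Mul):
        return reduce(operator.mul, (sympy_to_z3(a, z3_vars) for a in expr.args))
    # Relational nodes (==, >=, >, <=, <)
    if isinstance(expr, Relational):
        ops = {'==': operator.eq, '>=': operator.ge, '>': operator.gt,
               '<=': operator.le, '<': operator.lt}
        return ops[expr.rel_op](sympy_to_z3(expr.lhs, z3_vars),
                                sympy_to_z3(expr.rhs, z3_vars))
    raise NotImplementedError("cannot translate %s" % type(expr))

# usage: rebuild the SymPy constraint x - y >= 0 on z3 integer variables
x, y = sympy.symbols('x y')
zx, zy = z3.Ints('x y')
s = z3.Solver()
s.add(sympy_to_z3(x - y >= 0, {'x': zx, 'y': zy}))
print(s.check())   # sat; s.model() then gives a satisfying assignment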
I am using the following code (actually, something similar) to create a table in Markdown using IPython.display.Markdown:
from IPython.display import Markdown
from tabulate import tabulate
table = [["Sun",696000,1989100000],
["Earth",6371,5973.6],
["Moon",1737,73.5],
["Mars",3390,641.85]]
Markdown(tabulate(
table,
headers=["Planet","R (km)", "mass (x 10^29 kg)"]
))
I was wondering if there is a way to control the number of digits / precision of the floating-point numbers in this class, something like :0.2f in f-strings.
I tried looking at the documentation at https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.Markdown yet I couldn't find any way to control it.
I know there is a magic command to control it globally: How to set number of digits for float point output in Ipython. Yet I'm looking for something per output.
You could define a function that returns a formatted value and apply it to every numeric element, like so:
def format_distance(value):
    # round is a builtin, so no import is needed
    return round(value, 2)

formatted_table = [[row[0], format_distance(row[1]), format_distance(row[2])]
                   for row in table]
One way I found is to configure, in this specific case, the table generation itself. According to the Number formatting section at https://github.com/astanin/python-tabulate, tabulate accepts a floatfmt argument, so I can use floatfmt.
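For example (the ".2f" format string here is just an illustration):

Markdown(tabulate(
    table,
    headers=["Planet", "R (km)", "mass (x 10^29 kg)"],
    floatfmt=".2f"
))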
This is not a general answer for Markdown output, but it works in the case above.
I need to use CubicSpline to interpolate between points. This is my function:
cs = CubicSpline(aTime, aControl)
u = cs(t) # u is a ndarray of one element.
I cannot convert u to a float inside the function: neither uu = float(u) nor uu = float(u[0]) works.
In the shell, however, I can convert u to a float with float(u). That shouldn't work, because I haven't provided an index, and yet I get an error if I use u[0].
I have read something about np.squeeze. I tried it but it didn't help.
I added a print("u=", u) statement after u = cs(t). The result was:
u= [ 1.88006889e+09 5.39398193e-01 5.39398193e-01]
How can this be? I expect 1 value. The second and third numbers look about right.
I found the problem. It was a programming error, of course, but the error messages I got were very misleading. I was calling the interpolation function with 3 values, so it returned three values. Why I couldn't pick out just one of them afterwards is still a mystery, but now that I call the interpolation with just one value, I get one float as expected. Overall this still didn't help, as the interpolate1d function is too slow; I wrote my own cubic interpolation function that is MUCH faster.
Again, a programming error and poor error messages were the problem.
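For reference, a minimal sketch of the shape behaviour described above, with made-up data points:

import numpy as np
from scipy.interpolate import CubicSpline

aTime = np.array([0.0, 1.0, 2.0, 3.0])
aControl = np.array([0.0, 2.0, 1.0, 3.0])
cs = CubicSpline(aTime, aControl)

u = cs([0.5, 1.5, 2.5])   # three query points -> array of 3 values
print(u.shape)            # (3,)

u = cs(1.5)               # a single scalar query point -> 0-d array
uu = float(u)             # converts cleanly to a Python float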
I wrote a C++ wrapper class around some functions in LAPACK. In order to test the class, I use the Python C extension API, where I call NumPy, do the same operations, and compare the results by taking the difference.
For example, for the inverse of a matrix, I generate a random matrix in C++, then pass it as a string (with many, many digits, like 30 digits) to Python's terminal using PyRun_SimpleString, and assign the matrix as numpy.matrix(...,dtype=numpy.double) (or numpy.complex128). Then I use numpy.linalg.inv() to calculate the inverse of the same matrix. Finally, I take the difference between numpy's result and my result, and use numpy.isclose with a specific relative tolerance to see whether the results are close enough.
The problem: when I use C++ floats, the relative tolerance I need for the comparisons to pass is about 1e-2 (!), and even then I get occasional statistical failures (with low probability).
Doubles are fine... I can use 1e-10 and it's statistically safe.
While I know that floats have an intrinsic precision of about 1e-6, I'm wondering why I have to go as low as 1e-2 to be able to compare the results, and why it still fails sometimes.
Going all the way down to 1e-2 got me wondering whether I'm thinking about this whole thing the wrong way. Is there something wrong with my approach?
Please ask for more details if you need it.
Update 1: Eric requested an example of the Python calls. Here is an example:
//create my matrices
Matrix<T> mat_d = RandomMatrix<T>(...);
auto mat_d_i = mat_d.getInverse();
//I store everything in the dict 'data'
PyRun_SimpleString(std::string("data={}").c_str());
//original matrix
//mat_d.asString(...) will return in the format [[1,2],[3,4]], where 32 is 32 digits per number
PyRun_SimpleString(std::string("data['a']=np.matrix(" + mat_d.asString(32,'[',']',',') + ",dtype=np.complex128)").c_str());
//pass the inverted matrix to Python
PyRun_SimpleString(std::string("data['b_c']=np.matrix(" + mat_d_i.asString(32,'[',']',',') + ",dtype=np.complex128)").c_str());
//inverse in numpy
PyRun_SimpleString(std::string("data['b_p']=np.linalg.inv(data['a'])").c_str());
//flatten the matrices to make comparing them easier (make them 1-dimensional)
PyRun_SimpleString("data['fb_p']=((data['b_p']).flatten().tolist())[0]");
PyRun_SimpleString("data['fb_c']=((data['b_c']).flatten().tolist())[0]");
//make the comparison. The function compare_floats(f1,f2,t) calls numpy.isclose(f1,f2,rtol=t)
//prec is an integer that takes its value from a template function, where I choose the precision I want based on type
PyRun_SimpleString(std::string("res=list(set([compare_floats(data['fb_p'][i],data['fb_c'][i],1e-"+ std::to_string(prec) +") for i in range(len(data['fb_p']))]))[0]").c_str());
//the set above eliminates repeated True and False. If all results are True, we expect that res=[True], otherwise, the test failed somewhere
PyRun_SimpleString(std::string("res = ((len(res) == 1) and res[0])").c_str());
//Now if res is True, then success
Comments in the code describe the procedure step-by-step.
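If it helps, here is a NumPy-only sketch of the same comparison procedure (a random test matrix, single precision standing in for the C++ float results, element-wise np.isclose with a relative tolerance). The matrix size, seed, and tolerance are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
a64 = rng.standard_normal((50, 50))            # reference matrix in double precision
a32 = a64.astype(np.float32)                   # the same matrix in single precision

inv64 = np.linalg.inv(a64)                     # reference inverse in double
inv32 = np.linalg.inv(a32).astype(np.float64)  # inverse computed in single

# flatten and compare element-wise with a relative tolerance, as in the post
ok = np.isclose(inv32.flatten(), inv64.flatten(), rtol=1e-2)
print(ok.all())

# the condition number of the matrix bounds how much the working precision
# can be amplified in the inverse
print(np.linalg.cond(a64))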
Please provide a working example of solving an equation over a finite field in at least two variables; I can't seem to figure out how to do this from the documentation.
I have tried:
solve(Eq(poly(x + y,domain=FF(7)),0),x,y)
but this outputs
[]
which is incorrect and appears to be a type issue. Is there a way to get around this?
>>> solve(Poly(x + y,domain=FF(7)),[x,y])
[{x: -y}]
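Spelled out with the imports, the same call looks like this (the printed result is the one shown above):

from sympy import symbols, solve, Poly, FF

x, y = symbols('x y')
print(solve(Poly(x + y, domain=FF(7)), [x, y]))   # [{x: -y}]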