I recently reinstalled my Python environment, and code that used to run very quickly now crawls at best (and usually just hangs, taking up more and more memory).
The point at which the code hangs is:
solve(exp(-alpha * x**2) - 0.01, alpha)
I've been able to reproduce this problem with a fresh IPython 0.13.1 session:
In [1]: from sympy import solve, Symbol, exp
In [2]: x = 14.7296138519
In [3]: alpha = Symbol('alpha', real=True)
In [4]: solve(exp(-alpha * x**2) - 0.01, alpha)
This works for integer values of x, but it is also quite slow. In the original code I looped over this call, solving for hundreds of different alphas at different values of x (other than 14.7296138519), and it never took more than a second.
Any thoughts?
The rational=False flag was introduced for such cases as this.
>>> q=14.7296138519
>>> solve(exp(-alpha * q**2) - 0.01, alpha, rational=False)
[0.0212257459123917]
(The explanation is given in the issue cited above.)
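If you only need the numeric value, another option would be nsolve, which skips the symbolic solve entirely and just needs a starting guess (a rough sketch; the starting point 0.02 is an arbitrary guess near the expected root):
>>> from sympy import Symbol, exp, nsolve
>>> alpha = Symbol('alpha', real=True)
>>> q = 14.7296138519
>>> nsolve(exp(-alpha * q**2) - 0.01, alpha, 0.02)  # numeric root finding from an initial guess
0.0212257459123917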
Rolling back from version 0.7.2 to 0.7.1 solved this problem.
easy_install sympy==0.7.1
I've reported this as a bug on SymPy's Google Code issue tracker.
I am currently running SymPy 1.11.1, which is, as far as I know, the latest version.
However, the hanging described above persists when solving a system of 3 differential equations for 3 angular second derivatives, e.g.:
sols = solve([LE1, LE2, LE3], (the_dd, phi_dd, psi_dd),
simplify=False, rational=False)
While searching for some numpy stuff, I came across a question discussing the rounding accuracy of numpy.dot():
Numpy: Difference between dot(a,b) and (a*b).sum()
Since I happen to have two (different) computers with Haswell CPUs sitting on my desk, which should provide FMA and everything, I thought I'd test the example given by Ophion in the first answer to that question, and I got a result that somewhat surprised me:
After updating/installing/fixing lapack/blas/atlas/numpy, I get the following on both machines:
>>> a = np.ones(1000, dtype=np.float128)+1e-14
>>> (a*a).sum()
1000.0000000000199999
>>> np.dot(a,a)
1000.0000000000199948
>>> a = np.ones(1000, dtype=np.float64)+1e-14
>>> (a*a).sum()
1000.0000000000198
>>> np.dot(a,a)
1000.0000000000176
So elementwise multiplication followed by sum() is more precise than np.dot(). timeit, however, confirmed that the .dot() version is faster (though not by much) for both float64 and float128.
Can anyone provide an explanation for this?
Edit: I accidentally deleted the info on NumPy versions: I get the same results for 1.9.0 and 1.9.3, with Python 3.4.0 and 3.4.1.
It looks like they recently added pairwise summation to ndarray.sum for improved numerical stability.
From PR 3685, this affects:
all add.reduce calls that go over float_add with IS_BINARY_REDUCE true
so this also improves mean/std/var and anything else that uses sum.
See here for code changes.
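For intuition, here is a rough pure-Python sketch of the idea (this is not NumPy's actual implementation, which works on unrolled blocks of roughly 128 elements with SIMD, but the principle is the same): the array is split in half recursively, so rounding error grows roughly like O(log n) instead of the O(n) of a naive left-to-right loop.
def pairwise_sum(values):
    # Plain accumulation for small chunks; NumPy uses a larger block size here.
    if len(values) <= 8:
        total = 0.0
        for v in values:
            total += v
        return total
    mid = len(values) // 2
    # Sum the two halves independently, then combine the partial sums,
    # so errors accumulate over ~log2(n) additions instead of n.
    return pairwise_sum(values[:mid]) + pairwise_sum(values[mid:])

print(pairwise_sum([1.0 + 1e-14] * 1000))  # close to 1000.00000000001
np.dot, on the other hand, hands the accumulation to the underlying BLAS, which typically uses a straightforward (possibly unrolled) loop, which is consistent with dot coming out slightly less precise in the example above.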
I have a piece of code which computes the Helmholtz-Hodge Decomposition.
I'd been running it on OS X Yosemite and it was working just fine. A month ago, however, my Mac got pretty slow (it was really old), so I opted to buy a new notebook (Windows 8.1, Dell).
After installing all Python libs and so on, I continued my work running this same code (versioned in Git). And then the result was pretty weird, completely different from the one obtained in the old notebook.
For instance, what I do is construct two matrices a and b (a really long calculation) and then call the solver:
s = numpy.linalg.solve(a, b)
This was returning a wrong result, different from the one obtained on my Mac (which was right).
Then, I tried to use:
s = scipy.linalg.solve(a, b)
And the program exits with code 0, but in the middle of execution.
Then, I just made a simple test of:
print 'here1'
s = scipy.linalg.solve(a, b)
print 'here2'
And here2 is never printed.
I tried:
print 'here1'
x, info = numpy.linalg.cg(a, b)
print 'here2'
And the same happens.
I also tried to check the solution after using numpy.linalg.solve:
print numpy.allclose(numpy.dot(a, s), b)
And I got a False (?!).
I don't know what is happening or how to find a solution; I just know that the same code runs on my Mac, but it would be very good if I could run it on other platforms. Now I'm stuck on this problem (I don't have a Mac anymore) with no clue about the cause.
The weirdest thing is that I don't get any error or runtime warning, no feedback at all.
Thank you for any help.
EDIT:
NumPy test suite results: (screenshot not included here)
SciPy test suite results: (screenshot not included here)
Download the Anaconda Python distribution:
http://continuum.io/downloads
When you download this, it will already have all the dependencies for NumPy worked out for you. It installs locally and will work on most platforms.
This is not really an answer, but this blog discusses at length the problems of a NumPy ecosystem that evolves fast, at the expense of reproducibility.
By the way, which version of NumPy are you using? The documentation for the latest release (1.9) does not list any function called cg like the one you use...
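(For what it's worth, a conjugate-gradient routine does exist, but it lives in scipy.sparse.linalg rather than numpy.linalg, and it is only intended for symmetric positive-definite systems. A rough sketch, not based on the matrices from the question:)
import numpy as np
from scipy.sparse.linalg import cg

np.random.seed(0)
n = 100
m = np.random.random((n, n))
a = np.dot(m, m.T) + n * np.eye(n)   # symmetric positive definite by construction
b = np.random.random(n)

x, info = cg(a, b)                   # info == 0 means the iteration converged
print(info, np.linalg.norm(np.dot(a, x) - b))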
I suggest using this example so that you (and others) can check the results.
>>> import numpy as np
>>> import scipy.linalg
>>> np.random.seed(123)
>>> a = np.random.random(size=(10000, 10000))
>>> b = np.random.random(size=(10000,))
>>> s_np = np.linalg.solve(a, b)
>>> s_sc = scipy.linalg.solve(a, b)
>>> np.allclose(s_np,s_sc)
>>> s_np
array([-15.59186559, 7.08345804, 4.48174646, ..., -16.43310046,
-8.81301553, -10.77509242])
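One further check that may be worth doing (this is a suggestion, not part of the run above): look at the condition number of a and at the actual residual of the candidate solution, since an ill-conditioned matrix can legitimately produce very different answers with different BLAS/LAPACK builds (Mac vs. Windows).
>>> def diagnose(a, b, s):
...     # how ill-conditioned is a, and how large is the residual of s?
...     print(np.linalg.cond(a))
...     print(np.max(np.abs(np.dot(a, s) - b)))
...
>>> diagnose(a, b, s_np)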
I hope you can find the answer - one option in the future is to create a virtual machine for each of your projects, using Docker. This allows easy portability.
See a great article here discussing Docker for research.
If the price charged for a crayon is p cents, then x thousand crayons
will be sold in a certain school store, where p(x) = 122 - x/34.
Using Python, calculate how many crayons must be sold to maximize
revenue.
I can solve this by hand easily enough; the only problem is how to do it using plain Python. I am using IDLE (the Python GUI). I am new to Python and haven't downloaded any external libraries. Any help would be greatly appreciated.
What I've done up to this point is
import math

def f(x):
    # price per crayon, in cents, when x thousand crayons are sold
    return 122 - (x / 34.0)

def g(x):
    # revenue: number sold times price
    return x * f(x)

def h(x):
    # derivative of the revenue g(x)
    return 122 - (2 * x / 34.0)
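(For completeness, one way to finish from these definitions in plain Python, with no external libraries; a sketch, not necessarily the intended solution: revenue is maximized where the derivative h(x) is zero, or a brute-force scan over x works too.)
# Revenue g(x) is maximized where its derivative h(x) is zero:
#   122 - 2*x/34 = 0  =>  x = 122 * 34 / 2 = 2074 (thousand crayons)
x_max = 122 * 34 / 2.0
print(x_max)     # 2074.0  (i.e. 2,074,000 crayons)
print(g(x_max))  # 126514.0  (the maximum value of g)

# Or, without calculus, a brute-force scan over whole thousands of crayons:
best = max(range(0, 5000), key=g)
print(best)      # 2074
print(g(best))   # 126514.0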
Use SymPy. It's simple, beautiful and powerful.
You can write down your equations with sympify(), like this:
p = sympify('122 - x/34')
And define symbols for symbolic evaluation with Symbol() and symbols().
With that you can simply use the solve() function for any given equation, e.g. x + 4 = 2*x:
res = solve('x + 4 - 2*x')
It's pretty much the tool I use for any math work with Python.
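Put together for this specific problem (a sketch using symbols directly rather than the string form above): define the price, build the revenue, differentiate, and solve for the critical point.
from sympy import symbols, diff, solve

x = symbols('x', positive=True)   # thousands of crayons sold
p = 122 - x / 34                  # price per crayon, in cents
revenue = x * p

critical = solve(diff(revenue, x), x)
print(critical)                      # [2074]
print(revenue.subs(x, critical[0]))  # 126514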
So, you should go and download an external library for this, as it's not functionality that Python makes easy to implement natively. Also, if you're serious about doing mathematical computation in Python, I would suggest switching operating systems to something like OS X or Linux, simply because compiling old Fortran libraries (required for much of the performant mathematical computing stack) is a huge pain on Windows.
You want the SciPy library here, which has an optimize module. Specifically, I would suggest using the optimize.minimize_scalar function. Docs can be found here.
>>> from scipy.optimize import minimize_scalar
>>> def g(x):
...     return -(x * (122 - (x / 34.0)))  # negated, because minimize_scalar minimizes
...
>>> minimize_scalar(g, bounds=(1, 10000), method='bounded')
  status: 0
    nfev: 6
 success: True
     fun: -126514.0
       x: 2074.0
 message: 'Solution found.'
We have a complex optimization problem which includes several quadratic terms with integer and continuous variables (using Anaconda Python / PyCharm with Gurobi 6.0.2). We applied the setPWLObj function to approximate the quadratic objective components. The code for this is as follows:
m.addConstr(l1[t] == 1/2.0 * (hsqrt[t] + hQ[t]))
m.addConstr(l2[t] == 1/2.0 * (hsqrt[t] - hQ[t]))

hlx1 = linspace(-10, 10, 50)
hlx2 = linspace(-10, 10, 50)
hly1 = [0] * 50
hly2 = [0] * 50
for i in range(len(hlx1)):
    hly1[i] = hlx1[i] * hlx1[i] * 7.348 / 1000.0
    hly2[i] = -hlx2[i] * hlx2[i] * 7.348 / 1000.0

m.setPWLObj(l1[t], hlx1, hly1)
m.setPWLObj(l2[t], hlx2, hly2)
With l1 and l2 being continuous variables.
The problem behaves inconsistently. Running it on a Mac mostly delivers exit codes 138 and 139 (corresponding to Bus Error 10); sometimes a solution for the same problem can be calculated, particularly when starting the optimization several times in a row. This appears to be random.
On Windows machines either Python crashes or the exit code -1073741819 is returned. Further searches for this exit code did not really help us.
Sorry for taking so long, but we fixed the issue.
We found out that the Python crash was due to a bug in Gurobi. Following a support request we filed with them, the bug was fixed.
If Gurobi 6.0.3 or above is used, the error is no longer there.
When I run this program, I get no solution at the end, but there should be one (I believe). Any idea what I am doing wrong? If you take the Q out of the e2 equation it seems to work correctly.
#!/usr/bin/python
from sympy import *
a,b,w,r = symbols('a b w r',real=True,positive=True)
L,K,Q = symbols('L K Q',real=True,positive=True)
e1=K
e2=(K*Q/2)**(a)
print solve(e1-e2,K)
It works if we do either of the following:
Set Q = 1, or
Change e2 to e2 = (K**a)*(Q/2)**(a)
I would still like it to work in the original form, though, as my actual equations are more complicated than this.
This is just a deficiency of solve. solve is based mostly on heuristics, so sometimes it isn't able to figure out how to solve an equation when it's given in a particular form. The workaround here is to just call expand_power_base on the expression, since SymPy is able to solve K - K**a*(Q/2)**a:
In [8]: print(solve(expand_power_base(e1-e2),K))
[(2/Q)**(a/(a - 1))]
It's also worth pointing out that the result of [] from solve does not in any way mean that there are no solutions, only that solve was unable to find any. See the first note at http://docs.sympy.org/latest/tutorial/solvers.html.
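For completeness, here is a self-contained version of the workaround using the same symbols and assumptions as the question (the exact printed form of the expanded expression may differ slightly between SymPy versions):
from sympy import symbols, solve, expand_power_base

a, Q, K = symbols('a Q K', real=True, positive=True)
e1 = K
e2 = (K * Q / 2)**a

# expand_power_base rewrites the power of a product as a product of powers
# (roughly K**a * (Q/2)**a), a form that solve's heuristics can handle.
print(solve(expand_power_base(e1 - e2), K))  # [(2/Q)**(a/(a - 1))]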