The title says it all... why does sympy have the following behavior?
import sympy.physics.units as u
print(0*u.meter)
# >>> 0
And Pint has this behavior:
import pint
u = pint.UnitRegistry()
print(0*u.meter)
# >>> 0 meter
I think I prefer Pint's behavior, because it allows for dimensional consistency: 0 is a proper magnitude of some unit. For instance, 0 kelvin has a definite meaning... it's not just the absence of anything...
So I realize the contributors of sympy probably chose this implementation for some reason. Can you help me see the light?
The discussion of reasons for implementation belongs on GitHub (where I raised this issue), not here. A short answer is that units are a bolted-on structure; the core of SymPy is not really unit-aware.
You can create a 0*meter expression by passing the evaluate=False parameter to Mul:
>>> Mul(0, u.meter, evaluate=False)
0*meter
However, it will become 0 if combined with something else.
>>> 3*Mul(0, u.meter, evaluate=False)
0
Wrapping in UnevaluatedExpr prevents the above, but causes more problems than it solves.
>>> 3*UnevaluatedExpr(Mul(0, u.meter, evaluate=False))
3*(0*meter)
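Putting the pieces above together, a runnable sketch of this behavior (assuming a recent SymPy where meter lives in sympy.physics.units):

```python
from sympy import Mul, UnevaluatedExpr
import sympy.physics.units as u

# Unevaluated product keeps the zero attached to the unit...
expr = Mul(0, u.meter, evaluate=False)
print(expr)      # 0*meter

# ...but any further arithmetic re-evaluates and collapses it to 0.
print(3 * expr)  # 0

# UnevaluatedExpr blocks the collapse, at the cost of an awkward expression.
wrapped = 3 * UnevaluatedExpr(Mul(0, u.meter, evaluate=False))
print(wrapped)   # 3*(0*meter)
```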
First I want to point out that I'm a total sympy noob.
I'm trying to create a Custom Formula-based Measurement class with this sympy expression:
from sympy import Symbol, S, floor, sympify, Float
SU = Symbol('millimeter')
exp = S(20.0) + floor(((SU - S(212.5)) / S(10.0)) / S(0.5)) * S(0.5)
The problem I face is that for the same SU I get different result based on the way the expression is evaluated. Here is what I mean:
>>> exp.subs(SU, 215)
20.0000000000000
>>> exp.evalf(subs={SU: 215})
0.e+1  # this is actually 16.0: float(exp.evalf(subs={SU: 215})) gives 16.0
More interestingly, the problem only occurs when SU is in the range [213, 217] (where I expect the result to be 20.0).
For the rest of the values it's fine (AFAIK):
>>> exp.subs(SU, 212)
19.5000000000000
>>> exp.evalf(subs={SU: 212})
19.50
>>> exp.subs(SU, 218)
20.5000000000000
>>> exp.evalf(subs={SU: 218})
20.50
Any ideas about this strange behavior?
This was due to incorrect precision values. The bug was reported and has already been corrected in the development version of SymPy on GitHub; versions of SymPy later than 1.1.1 will not have this bug.
Using srepr on the output of subs provides some explanation:
from sympy import Symbol, floor, srepr
x = Symbol('x')
srepr((floor(x) + 20).evalf(subs={x: 0.5}))
The output is Float('16.0', precision=1). This is binary precision: SymPy thinks that the output of floor, when it happens to be zero, has only one bit of precision. So it subsequently truncates the added 20 accordingly, to the nearest power of 2.
Of course, this is a bug. There are several open issues related to Float class and rounding, such as this one; they may be related.
The workaround is to avoid the evalf(subs=dict) construction (is it even documented?). Using the methods in the natural order, substitute and then evaluate, gives correct results:
srepr((floor(x)+20).subs({x:0.5}).evalf())
"Float('20.0', precision=53)"
I've run into a strange situation where z3py produces two separate answers for what would logically be the same problem.
Version 1:
>>> import z3
>>> r, r2, q = z3.Reals('r r2 q')
>>> s = z3.Solver()
>>> s.add(r > 2, r2 == r, q == r2 ** z3.RealVal(0.5))
>>> s.check()
unknown
Version 2
>>> import z3
>>> r, r2, q = z3.Reals('r r2 q')
>>> s = z3.Solver()
>>> s.add(r > 2, r2 == r, q * q == r2)
>>> s.check()
sat
How do I change what I'm doing in version 1 so that it will produce an accurate result? These constraints are being generated on the fly, and rewriting them on the fly would possibly add significant complexity to the application. Also, in the case where the root is truly symbolic, it would simply not be possible for Python itself to solve that problem.
Edit: I've discovered that if I use the following setup for my Solver, it will solve successfully (although a bit slower):
z3.Then("simplify","solve-eqs","smt").solver()
It's not entirely clear to me what the implications are of specifying that rather than just the default solver, however.
The out-of-the-box performance of Z3 on non-linear real problems is not great; it will definitely take some fiddling to make it find all the solutions you need. My first attempt is always to switch to the NLSAT solver (apply the qfnra-nlsat tactic, or build a solver from it). That solver is often much better on QF_NRA problems, but it doesn't support any theory combination, i.e., if you have other types of variables it will bail out.
Also, search Stack Overflow for "Z3" and "non-linear"; there have been a multitude of questions and answers on various aspects of this.
I have the following code in sympy
from sympy import *
x,y,G=symbols('x y G')
G=x**(3./2.) - y
g_inv=solve(G, x)
if len(g_inv)>1: g_inv=g_inv[-1]
dginvdy=diff(g_inv, y)
The problem is that this gives me
     ____
  3 ╱  2
2⋅╲╱  y
─────────
   3⋅y
and not 2*y**(-1./3)/3 as I expected. I have tried simplify() and even cancel() but no luck. Also, if I define the variables with real=True I can't invert it with solve for some reason. If I define only y as being real I get
2⋅sign(y)
─────────
  3 _____
3⋅╲╱ │y│
which is closer (?) but still not what I want. Defining y as positive also didn't do the trick.
This may seem like something silly but it tremendously complicates the calculations I do from then on.
Any ideas?
I think you need to use sympy.factor here rather than simplify:
In [2]: dginvdy
Out[2]: 2*(y**2)**(1/3)/(3*y)
In [3]: factor(dginvdy)
Out[3]: 2/(3*y**(1/3))
The sympy docs go into some detail about this.
I have found that my root simplification headaches are often alleviated by defining my variables with the assumption positive=True, and indeed this method gets you to your desired answer here too. You'll need to get rid of your if statement and use g_inv=solve(G, x)[0] because solve(...) will now return only a single solution. This method can lead to some loss of generality so you just need to know your problem.
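Putting that together, a sketch of the full workflow (I've also swapped the float exponent 3./2. for an exact Rational(3, 2), which avoids stray floats in the result):

```python
from sympy import symbols, solve, diff, Rational, simplify

# positive=True is the key assumption: it lets SymPy simplify the roots.
x, y = symbols('x y', positive=True)
G = x**Rational(3, 2) - y  # exact 3/2 rather than the float 3./2.

g_inv = solve(G, x)[0]     # with positive=True, solve returns one solution
dginvdy = diff(g_inv, y)
print(dginvdy)             # 2/(3*y**(1/3))
```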
I am currently working with very small numbers in my python program, e.g.
x = 200 + 2e-26
One solution is to work with logarithmic values, which would increase the range of my float values. The problem is that I have to do an FFT with these values too, so the logarithmic approach is not usable (and neither is the Decimal module). Is there another way to solve this problem?
Edit: My problem with the decimal module is: How can I handle imaginary values? I tried a = Decimal(1e-26)+Decimal(1e-26*1j) and a = Decimal(1e-26)+Decimal(1e-26)*1j, and both ways failed (error on request).
Consider trying out the mpmath package.
>>> from mpmath import mpf, mpc, mp
>>> mp.dps = 40
>>> mpf(200) + mpf(2e-26) + mpc(1j)
mpc(real='200.0000000000000000000000000200000000000007', imag='1.0')
It is mostly accurate and can handle complex numbers; more details are in the documentation.
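For comparison, this is exactly where a plain 64-bit float fails, which is why the extra precision is needed (standard library only):

```python
# A 64-bit double carries roughly 15-16 significant decimal digits, so
# adding 2e-26 to 200 (a spread of about 28 digits) changes nothing.
x = 200 + 2e-26
print(x == 200)  # True: the small term is lost entirely
```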
While numpy supports wider float types (and also complex versions), they don't help:
>>> import numpy
>>> numpy.longfloat
<type 'numpy.float128'>
>>> a = numpy.array([200, 2e-26], dtype=numpy.longfloat)
>>> a
array([ 200.0, 2e-26], dtype=float128)
>>> a.sum()
200.0
>>> a = numpy.array([200, 2e-26], dtype=numpy.longdouble)
>>> a.sum()
200.0
The reason is explained here: internally, numpy's longdouble is typically the 80-bit x87 extended format, whose 64-bit mantissa supports only about 19 significant decimal digits, far fewer than the roughly 29 needed to represent 200 + 2e-26.
What you need is a true 128-bit float type, like the one from Boost.
Try the Boost.Python module, which might give you access to this type. If that doesn't work, then you'll have to write your own wrapper class in C++ as explained here.
SymPy is a great tool for doing units conversions in Python:
>>> from sympy.physics import units
>>> 12. * units.inch / units.m
0.304800000000000
You can easily roll your own:
>>> units.BTU = 1055.05585 * units.J
>>> units.BTU
1055.05585*m**2*kg/s**2
However, I cannot use this in my application unless I can convert degrees C (absolute) to K, to degrees F, to degrees R, or any combo thereof.
I thought maybe something like this would work:
units.degC = <<somefunc of units.K>>
But clearly that is the wrong path to go down. Any suggestions for cleanly implementing "offset"-type units conversions in SymPy?
Note: I'm open to trying other units conversion modules, but don't know of any besides Unum, and found it to be cumbersome.
Edit: OK, it is now clear that what I want to do is first determine whether the two quantities to be compared are in the same coordinate system (like time units referenced to different epochs or time zones, or dB versus straight amplitude), make the appropriate transformation, then make the conversion. Are there any general coordinate system management tools? That would be great.
I would make the assumption that °F and °C always refer to Δ°F and Δ°C within an expression, but refer to absolute temperatures when standing alone. I was just wondering if there was a way to make units.degF a function and slap a property() decorator on it to deal with those two conditions.
But for now, I'll set units.C == units.K and try to make it very clear in the documentation to use functions convertCtoK(...) and convertFtoR(...) when dealing with absolute units. (Just kidding. No I won't.)
The Unum documentation has a pretty good writeup on why this is hard:
Unum is unable to handle reliably conversions between °Celsius and Kelvin. The issue is referred as the 'false origin problem' : the 0°Celsius is defined as 273.15 K. This is really a special and annoying case, since in general the value 0 is unaffected by unit conversion, e.g. 0 [m] = 0 [miles] = ... . Here, the conversion Kelvin/°Celsius is characterized by a factor 1 and an offset of 273.15 K. The offset is not feasible in the current version of Unum.
Moreover it will presumably never be integrated in a future version because there is also a conceptual problem : the offset should be applied if the quantity represents an absolute temperature, but it shouldn't if the quantity represents a difference of temperatures. For instance, a raise of temperature of 1° Celsius is equivalent to a raise of 1 K. It is impossible to guess what is in the user mind, whether it's an absolute or a relative temperature. The question of absolute vs relative quantities is unimportant for other units since the answer does not impact the conversion rule. Unum is unable to make the distinction between the two cases.
It's pretty easy to conceptually see the problems with trying to represent absolute temperature conversion symbolically. With any normal relative unit, (x unit) * 2 == (x * 2) unit; unit math is commutative. With absolute temperatures, that breaks down: it's difficult to do anything more complex than straight temperature conversions with no other unit dimensions. You're probably best off keeping all calculations in Kelvin, and converting to and from other temperature units only at the entry and exit points of your code.
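A minimal sketch of that boundary-conversion approach (the helper names are my own; only the standard offset and scale factors are used):

```python
# Hypothetical helpers: convert absolute temperatures to/from kelvin only
# at the boundaries of the code, and do all arithmetic in kelvin between.

def celsius_to_kelvin(t_c):
    return t_c + 273.15

def fahrenheit_to_kelvin(t_f):
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

# 32 degF and 0 degC are both the freezing point of water:
print(fahrenheit_to_kelvin(32.0))  # 273.15
print(celsius_to_kelvin(0.0))      # 273.15
```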
I personally like Quantities, thanks to its NumPy integration; however, it only does relative temperatures, not absolute.
An example of how it could work:
>>> T(0*F) + 10*C
T(265.37222222222221*K) # or T(47767/180*K)
>>> T(0*F + 10*C)
T(283.15*K)
>>> 0*F + T(10*C)
T(283.15*K)
>>> 0*F + 10*C
10*K
>>> T(0*F) + T(10*C)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'absolute_temperature' and \
'absolute_temperature'
>>> T(0*F) - T(10*C)
T(245.37222222222223*K) # or T(44167/180*K)
>>> 0*F - 10*C
-10*K
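One way to implement the absolute/relative distinction sketched above is a pair of classes for "point" and "difference" temperatures, both stored in kelvin. This is a hypothetical sketch with names of my own; note it follows the usual affine rule that absolute minus absolute yields a difference, which differs from the T(0*F) - T(10*C) line in the example above.

```python
class RelTemp:
    """A temperature difference, stored in kelvin."""
    def __init__(self, dk):
        self.dk = dk
    def __add__(self, other):
        if isinstance(other, AbsTemp):
            return AbsTemp(other.k + self.dk)
        return RelTemp(self.dk + other.dk)

class AbsTemp:
    """An absolute temperature, stored in kelvin."""
    def __init__(self, k):
        self.k = k
    def __add__(self, other):
        if isinstance(other, AbsTemp):
            raise TypeError("cannot add two absolute temperatures")
        return AbsTemp(self.k + other.dk)
    def __sub__(self, other):
        if isinstance(other, AbsTemp):
            return RelTemp(self.k - other.k)  # point - point = difference
        return AbsTemp(self.k - other.dk)     # point - difference = point

# T(0*F) + 10*C from the example: 0 degF is about 255.372 K absolute.
print(round((AbsTemp(255.372) + RelTemp(10.0)).k, 3))  # 265.372
```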
The natu package handles units of temperature. For instance, you can do this:
>>> from natu.units import K, degC, degF
>>> T = 25*degC
>>> T/K
298.1500
>>> T/degF
77.0000
>>> 0*degC + 100*K
100.0 degC
Prefixes are supported too:
>>> from natu.units import mdegC
>>> 100*mdegC/K
273.2500
natu also handles nonlinear units such as the decibel, not just those with offsets like degree Celsius and degree Fahrenheit.
Relating to the first example you gave, you can do this:
>>> from natu import units
>>> 12*units.inch/units.m
0.3048
BTU is already built in. You can change its display unit to m**2*kg/s**2, but by default natu simplifies the unit to J:
>>> from natu.units import BTU
>>> BTU.display = 'm2*kg/s2'
>>> 1*BTU
1055.05585262 J