I wrote some simple code in the Python interpreter and ran it.
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> x=np.array([0,1])
>>> w=np.array([0.5,0.5])
>>> b=-0.7
>>> np.sum(w*x)+b
-0.19999999999999996
The result -0.19999999999999996 looks weird. I think it is caused by IEEE 754 floating-point representation. But when I run almost the same code from a file, the result is quite different.
import numpy as np
x = np.array([0,1])
w = np.array([0.5,0.5])
b = -0.7
print(np.sum(w * x) + b)
The result is "-0.2", as if the IEEE 754 representation did not affect it.
What is the difference between running the code from a file and running it in the interpreter?
The difference is due to how the interpreter displays output.
The print function uses an object's __str__ method, while the interactive prompt displays the result of an expression using its __repr__. Here the result is a numpy.float64 scalar, whose __str__ (in the NumPy build shown) rounds to fewer digits than its __repr__; for plain Python 3 floats the two are identical.
If, in the interpreter you wrote:
...
z = np.sum(w*x)+b
print(z)
(which is what you're doing in your code) you'd see -0.2.
Similarly, if in your code you wrote:
print(repr(np.sum(w * x) + b))
(which is what you're doing in the interpreter) you'd see -0.19999999999999996
In short, the difference is that the file-based code goes through print(), which formats the number with str(), while in the interpreter you let the prompt echo the result, which formats it with repr().
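You can see both representations side by side; the output below is what the question's environment reports (the exact formatting of numpy.float64 scalars has changed across NumPy releases):
>>> import numpy as np
>>> z = np.sum(np.array([0.5, 0.5]) * np.array([0, 1])) - 0.7
>>> str(z)     # what print() uses
'-0.2'
>>> repr(z)    # what the interactive prompt uses
'-0.19999999999999996'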
Why do semicolons not suppress output in doctests? A workaround is to assign the result, but I am curious as to why this does not work.
"""
>>> 1+1; # Semicolons typically suppress output, but this fails
>>> x = 1+1 # Workaround: assign result to suppress output.
"""
Failed example:
1+1;
Expected nothing
Got:
2
Unlike in languages like C/C++, semicolons are optional statement terminators in Python, as you can see in the REPL below:
Python 3.6.5 |Anaconda custom (64-bit)| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 + 1;
2
>>> 1 + 1
2
However, you may observe a different behavior in say IPython:
In [120]: 1 + 1;
In [121]: 1 + 1
Out[121]: 2
The IPython docs suggest using semicolons to suppress output, but this behaviour is specific to IPython and does not extend to Python itself or its standard library (like doctest).
You're thinking of MATLAB or IPython or something. Python semicolons don't normally suppress anything. doctest simulates a normal interactive Python session, not an IPython session, so the semicolon does nothing.
The semicolon has no effect at all.
Doctest reads the expected result from the line following the Python statement (i.e. the part after >>>). In your example no output is given after the statement, so doctest expects none; that's why it reports "Expected nothing". However, 1+1 produces 2.
The second statement, x = 1+1, is an assignment and produces no output, so the test passes (although nothing is really tested).
Try this for example:
"""
>>> 1+1 # without semicolon
2
>>> 1+1; # with semicolon
2
>>> x = 1+1 # not so useful test
"""
Bug entered at https://github.com/sympy/sympy/issues/14877
Is this a known issue or a new bug? I will report it if it is new.
What could cause it?
>which python
/opt/anaconda/bin/python
>pip list | grep sympy
sympy 1.1.1
>python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from sympy import *
>>> x = symbols('x')
>>> integrate(exp(1-exp(x**2)*x+2*x**2)*(2*x**3+x)/(1-exp(x**2)*x)**2, x)
gives
.....
File "/opt/anaconda/lib/python3.6/site-packages/sympy/core/mul.py", line 1067, in <genexpr>
a.is_commutative for a in self.args)
RecursionError: maximum recursion depth exceeded
>>>
By the way, the antiderivative should be
-exp(1-exp(x^2)*x)/(-1+exp(x^2)*x)
It is a known issue that SymPy fails to integrate many functions. This particular function probably wasn't reported yet, so by all means, add it to the ever-growing list.
SymPy tries several integration approaches. One of them, called "manual integration", is highly recursive: a substitution or integration by parts is attempted, and then the process is repeated for the resulting integral.
In this specific case, the expression contains several subexpressions that look like candidates for substitution: x**2, the denominator, the argument of the other exponential. SymPy goes into an endless chain of substitutions that leads not to a solution but to the recursion error. There is no pattern implemented in integrate that would tell SymPy to make the crucial substitution u = 1 - x*exp(x**2).
There is a separate, experimental integrator called RUBI, which could be used with
from sympy.integrals.rubi.rubi import rubi_integrate
rubi_integrate(exp(1-exp(x**2)*x+2*x**2)*(2*x**3+x)/(1-exp(x**2)*x)**2, x)
but it relies on MatchPy, which I don't have installed, so I can't tell whether it would help here.
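Independently of RUBI, the antiderivative quoted in the question can be checked by differentiating it and comparing with the integrand; a small verification sketch (this only confirms the result, it does not make integrate succeed):
from sympy import symbols, exp, diff, simplify

x = symbols('x')
integrand = exp(1 - exp(x**2)*x + 2*x**2)*(2*x**3 + x)/(1 - exp(x**2)*x)**2
F = -exp(1 - exp(x**2)*x)/(-1 + exp(x**2)*x)   # candidate antiderivative from the question

# If F is correct, the derivative minus the integrand should simplify to 0.
print(simplify(diff(F, x) - integrand))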
I have an example that shows different results in the terminal and in the Sublime Text build console.
Terminal example:
Python 2.7.10 (default, Jul 30 2016, 19:40:32)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1000
>>> b = 1000
>>>
>>> print a == b
True
>>> print a is b
False
Sublime Text console with the Python build:
a = 1000
b = 1000
print a == b
print a is b
------
RESULT
------
True
True
[Finished in 0.1s]
The first case is correct, but the problem is that Sublime gives me the wrong result.
Why does it show a different result?
I use Python 2.7 in both cases.
I tried this in my terminal:
a=1000
b=1000
a==b
True
a is b
True
The Python is operator compares object identity, not value, and for integers the outcome is an implementation detail. CPython caches small integers (-5 to 256) and may also reuse equal constants that are compiled together in one code block; I suspect that is what happens in the Sublime case (and in my terminal), so both names end up bound to the same int object, whereas in your interactive session each 1000 is created as a separate object.
You should NOT use the is operator for integer comparison; use == instead.
Another good reason to use == for comparison (although this one is no longer an integer example) is the following case:
a=1000
b=1000.0
a==b
True
a is b
False
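The underlying mechanism is CPython's small-integer cache: ints from -5 to 256 are pre-allocated and shared, so is happens to return True for them, while larger ints created by separate statements are distinct objects. A quick CPython-specific sketch, entered one statement at a time in a plain interpreter:
>>> a = 256
>>> b = 256
>>> a is b      # both names refer to the cached 256 object
True
>>> a = 257
>>> b = 257
>>> a is b      # 257 is outside the cache; each statement builds a new object
False
>>> a == b      # value comparison is what you actually want
True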
I have encountered quite a weird case in Python.
In Spyder:
>>> 274/365
0.7506849315068493
>>> sys.version
'2.7.6 (default, Dec 20 2013, 14:08:04) [MSC v.1700 64 bit (AMD64)]'
>>>
However, on the command line it returns 0.
>>> 274/365
0
>>> 274/365 * 1.0
0.0
>>> 274/365.0
0.7506849315068493
Same version of Python.
Could anyone tell me what is wrong here? Do I need to pass some extra options when running the program? This is really frustrating, since my code gives weird results when I call it from the command line.
Spyder executes from __future__ import division in its console.
This is discussed at https://code.google.com/p/spyderlib/issues/detail?id=1646 - it looks like this will be deactivated by default to avoid confusion.
Either you are using different versions of Python (3.x in Spyder and 2.x on the command line), or your Spyder console automatically runs imports such as
from __future__ import division
On the command line with Python 2.7:
>>> 4/3
1
>>> from __future__ import division
>>> 4/3
1.3333333333333333
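If you want the same result everywhere without relying on that import, be explicit about which division you mean; a small sketch for Python 2.7:
>>> 274 // 365           # floor division: integer result in both Python 2 and 3
0
>>> 274 / float(365)     # force true division under Python 2 semantics
0.7506849315068493
>>> from __future__ import division
>>> 274 / 365            # after the import, / is true division, as in Python 3
0.7506849315068493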
Can anyone explain why importing cv and numpy would change the behaviour of python's struct.unpack? Here's what I observe:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
This is correct
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
Still ok, after importing cv
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
And OK after importing cv and then numpy
Now I restart python:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
So far so good, but now I import cv AFTER importing numpy:
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
0.0
I've repeated this a number of times, including on multiple servers, and it always goes the same way. I've also tried it with struct.unpack and struct.pack, which also makes no difference.
I can't understand how importing numpy and cv could have any impact at all on the output of struct.unpack (pack remains the same, btw).
The "libdc1394" thing is, I believe, a red-herring: ctypes error: libdc1394 error: Failed to initialize libdc1394
Any ideas?
tl;dr: importing numpy and then opencv changes the behaviour of struct.unpack.
UPDATE: Paulo's answer below shows that this is reproducible. Seborg's comment suggests that it's something to do with the way python handles subnormals, which sounds plausible. I looked into Contexts but that didn't seem to be the problem, as the context was the same after the imports as it had been before them.
This isn't an answer, but it's too big for a comment. I played with the values a bit to find the limits.
Without loading numpy and cv:
>>> unpack("f", pack("i", 8388608))
(1.1754943508222875e-38,)
>>> unpack("f", pack("i", 8388607))
(1.1754942106924411e-38,)
After loading numpy and cv, the first line is the same, but the second:
>>> unpack("f", pack("i", 8388607))
(0.0,)
You'll notice that the first result is the smallest normal 32-bit float. I then tried the same with d.
Without loading the libraries:
>>> unpack("d", pack("xi", 1048576))
(2.2250738585072014e-308,)
>>> unpack("d", pack("xi", 1048575))
(2.2250717365114104e-308,)
And after loading the libraries:
>>> unpack("d",pack("xi", 1048575))
(0.0,)
Now the first result is the smallest normal 64-bit float.
It seems that loading the numpy and cv libraries, in that order, causes values below those limits, i.e. subnormal (denormal) numbers, to come back from unpack as 0.
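One way to check that these cut-offs are the normal/subnormal boundaries, rather than a change in unpack itself, is to compare them against the documented limits; a small sketch, assuming NumPy is importable:
import sys
import numpy as np

print(np.finfo(np.float32).tiny)   # 1.1754944e-38: smallest normal 32-bit float
print(sys.float_info.min)          # 2.2250738585072014e-308: smallest normal 64-bit float

# Anything smaller is a subnormal (denormal) number. The zeros above are
# consistent with a native library loaded via the imports switching the FPU
# into flush-to-zero / denormals-are-zero mode.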