This question already has answers here:
What does "list comprehension" and similar mean? How does it work and how can I use it?
I was looking for something and came across this:
subfolders = [ f.path for f in os.scandir(x) if f.is_dir() ]
I've never seen this type of conditional statement before.
What would you call this type of conditional statement to the right of the = sign? Just asking so I know what to Google.
Also, if I wanted to write the equivalent in another format, would it look something like this?
subfolders = []
x = "c:\\Users\\my_account\\AppData\\Program\\Cache"
for f in os.scandir(x):
    if f.isdir():
        print(f.x)
Would both of those statements be equivalent?
Can't put code in a comment, but here are the small changes to your code that produce equivalent output:
subfolders = []
for f in os.scandir(path):
    if f.is_dir():
        subfolders.append(f.path)
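For the record, the one-liner is called a list comprehension (here, one with an if filter). A minimal self-contained check that the comprehension and the loop build the same list (the temporary directory and subfolder names are just for this demo):

```python
import os
import tempfile

# Build a throwaway directory tree so the example is self-contained.
path = tempfile.mkdtemp()
os.mkdir(os.path.join(path, "alpha"))
os.mkdir(os.path.join(path, "beta"))

# List comprehension with an if filter ...
comp = [f.path for f in os.scandir(path) if f.is_dir()]

# ... and the equivalent for-loop.
loop = []
for f in os.scandir(path):
    if f.is_dir():
        loop.append(f.path)

print(sorted(comp) == sorted(loop))  # True
```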
Here's exactly what worked for me in the interactive Python interpreter:
Python 3.7.10 (default, Mar 24 2021, 16:34:34)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import os
>>> subfolders = []
>>> for f in os.scandir('.'):
...     if f.is_dir():
...         subfolders.append(f.path)
...
>>> subfolders
['./the', './.directories', './were', './here']
Checking in a newer python:
Python 3.9.2 (default, Mar 26 2021, 15:28:17)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> subfolders = []
>>> for f in os.scandir('.'):
...     if f.is_dir():
...         subfolders.append(f.path)
...
>>> subfolders
['./directory']
I wrote some simple code in the Python interpreter and ran it.
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> x=np.array([0,1])
>>> w=np.array([0.5,0.5])
>>> b=-0.7
>>> np.sum(w*x)+b
-0.19999999999999996
The result -0.19999999999999996 looks weird; I think it is caused by the IEEE 754 floating-point representation. But when I run almost the same code from a file, the result is quite different.
import numpy as np
x = np.array([0,1])
w = np.array([0.5,0.5])
b = -0.7
print(np.sum(w * x) + b)
the result is "-0.2", as if the IEEE 754 representation did not affect the result.
What is the difference between running code from a file and running it in the interpreter?
The difference is due to how the interpreter displays output.
The print function uses an object's __str__ method, but the interpreter's automatic echo uses its __repr__. (The value here is a numpy float64, and in numpy versions of that era its __str__ rounded to a shorter decimal form while its __repr__ showed the full precision.)
If, in the interpreter you wrote:
...
z = np.sum(w*x)+b
print(z)
(which is what you're doing in your code) you'd see -0.2.
Similarly, if in your code you wrote:
print(repr(np.sum(w * x) + b))
(which is what you're doing in the interpreter) you'd see -0.19999999999999996
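The str/repr split can be demonstrated without numpy, using a toy class (the class name and formatting here are made up for the demo) whose two methods format the same value differently:

```python
class Rounded:
    """Toy class whose str() and repr() differ, mimicking how a
    numpy float64 of that era was displayed."""
    def __init__(self, value):
        self.value = value

    def __str__(self):              # what print() uses
        return "%.1f" % self.value

    def __repr__(self):             # what the interactive echo uses
        return repr(self.value)

z = Rounded(-0.19999999999999996)
print(z)         # -0.2
print(repr(z))   # -0.19999999999999996
```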
I think the difference lies in the fact that you use print() in your file-based code, which converts the number via str(), while in the interpreter you don't use print() but instead ask the interpreter to echo the result, which uses repr().
I have an example that shows different results in the terminal and in the Sublime Text build console.
Terminal example:
Python 2.7.10 (default, Jul 30 2016, 19:40:32)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1000
>>> b = 1000
>>>
>>> print a == b
True
>>> print a is b
False
Sublime Text console with a Python build:
a = 1000
b = 1000
print a == b
print a is b
------
RESULT
------
True
True
[Finished in 0.1s]
The first case is correct, but the problem is that Sublime gives me the wrong result.
Why does it show a different result?
I use Python 2.7 in both cases.
I tried this in my terminal:
>>> a = 1000
>>> b = 1000
>>> a == b
True
>>> a is b
True
Python's is operator compares object identity, and its behaviour with integers is implementation-defined. CPython caches small integers (those in the range [-5, 256]), so those always come out identical. Beyond that range, when a whole script is compiled at once (as in the Sublime case, and apparently in my terminal session above), equal integer constants within the same code object can be shared, so a is b comes out True; the interactive interpreter compiles each line separately and creates two distinct objects, so it prints False.
You should NOT use the is operator for integer comparison; use == instead.
Another good reason to prefer == for comparison (no longer just integer comparison) is the following case:
>>> a = 1000
>>> b = 1000.0
>>> a == b
True
>>> a is b
False
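A short sketch of both effects, relying only on the documented semantics of == and is plus CPython's small-int cache (the caching range is an implementation detail, not a language guarantee):

```python
# '==' compares values; 'is' compares object identity.
# CPython caches the small integers in [-5, 256] (implementation detail),
# so names bound to equal small ints share one object.
a = 100
b = 100
print(a is b)   # True: both names refer to the single cached int object

c = 1000
d = 1000.0
print(c == d)   # True: equal value
print(c is d)   # False: an int and a float are never the same object
```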
My understanding is that in PySide, QString has been dropped. One can write a Python string into a QLineEdit, and when the QLineEdit is read, it is returned as a unicode string (16 bits per character).
Trying to write such a string from my GUI process to a sub-process started with QProcess does not seem to work: write() just returns 0L (see below). If the unicode string is first converted back to a byte string with str(), then self.myprocess.write(str(u'test')) returns 4L. This behaviour does not seem correct to me.
Would it be possible for someone to explain why QProcess.write() does not seem to work on unicode strings?
(Pdb) PySide.QtCore.QString()
*** AttributeError: 'module' object has no attribute 'QString'
(Pdb) self.myprocess.write(u'test')
0L
(Pdb) self.myprocess.write(str(u'test'))
4L
(Pdb)
PySide has never provided classes like QString, QStringList, QVariant, etc. It has always done implicit conversion to and from the equivalent python types - that is, in PyQt terminology, it only implements the v2 API (see PSEP 101 for more details).
However, the behaviour of QProcess when attempting to write unicode strings seems somewhat broken in PySide compared with PyQt4. Here's a simple test in PyQt4:
Python 2.7.8 (default, Sep 24 2014, 18:26:21)
[GCC 4.9.1 20140903 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from PyQt4 import QtCore
>>> QtCore.PYQT_VERSION_STR
'4.11.2'
>>> p = QtCore.QProcess()
>>> p.start('cat'); p.waitForStarted()
True
>>> p.write(u'fóó'); p.waitForReadyRead()
3L
True
>>> p.readAll()
PyQt4.QtCore.QByteArray('f\xf3\xf3')
So it seems that PyQt will implicitly encode unicode strings as 'latin-1' before passing them to QProcess.write() (which of course expects either const char * or a QByteArray). If you want a different encoding, it must be done explicitly:
>>> p.write(u'fóó'.encode('utf-8')); p.waitForReadyRead()
5L
True
>>> p.readAll()
PyQt4.QtCore.QByteArray('f\xc3\xb3\xc3\xb3')
Now let's see what happens with PySide:
Python 2.7.8 (default, Sep 24 2014, 18:26:21)
[GCC 4.9.1 20140903 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from PySide import QtCore, __version__
>>> __version__
'1.2.2'
>>> p = QtCore.QProcess()
>>> p.start('cat'); p.waitForStarted()
True
>>> p.write(u'fóó'); p.waitForReadyRead()
0L
^C
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt
So: no implicit encoding, and the process just blocks instead of raising an error (which would seem to be a bug). However, re-trying with explicit encoding works as expected:
>>> p.start('cat'); p.waitForStarted()
True
>>> p.write(u'fóó'.encode('utf-8')); p.waitForReadyRead()
5L
True
>>> p.readAll()
PySide.QtCore.QByteArray('fóó')
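As a cross-check on the transcripts above: write() returns the number of bytes written, and that count depends only on the encoding, which plain Python can show without Qt:

```python
# -*- coding: utf-8 -*-
text = u'fóó'

# PyQt4's implicit conversion behaves like latin-1: one byte per character.
print(len(text.encode('latin-1')))  # 3 -- matches the 3L from PyQt4

# An explicit utf-8 encode uses two bytes for each 'ó'.
print(len(text.encode('utf-8')))    # 5 -- matches the 5L after .encode('utf-8')
```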
Here's my experiment:
$ python
Python 2.7.5 (default, Feb 19 2014, 13:47:28)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 3
>>> while True:
...     a = a * a
...
^CTraceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyboardInterrupt
>>> a
(seems to go on forever)
I understand that the interpreter looped forever at the "while True: " part, but why did it get stuck evaluating a?
a is now a really large number, and converting it to decimal for display takes a long time. After n passes through the loop, a equals 3^(2^n), so its number of digits doubles on every iteration; by the time you interrupt, a has an enormous number of digits, and CPython's int-to-decimal-string conversion is quadratic in that digit count. Put a print a inside the loop and you'll watch it blow up (much more slowly, since printing itself takes time). Note also that if you start with a = 1, the loop stays at 1 and a displays instantly.
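The doubling is easy to see with a few iterations and int.bit_length() (using bit_length rather than printing, since the decimal conversion is exactly the slow part):

```python
a = 3
for n in range(5):
    a = a * a
    # After n+1 squarings, a == 3 ** (2 ** (n + 1)),
    # so its size in bits doubles on every pass.
    print(a.bit_length())

# Five squarings give a == 3 ** 32, already a 51-bit number; a few dozen
# more passes and the decimal conversion alone takes a very long time.
```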
Can anyone explain why importing cv and numpy would change the behaviour of python's struct.unpack? Here's what I observe:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
This is correct
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
Still ok, after importing cv
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
And OK after importing cv and then numpy
Now I restart python:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
So far so good, but now I import cv AFTER importing numpy:
>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
0.0
I've repeated this a number of times, including on multiple servers, and it always goes the same way. I've also tried it with struct.unpack and struct.pack, which also makes no difference.
I can't understand how importing numpy and cv could have any impact at all on the output of struct.unpack (pack remains the same, btw).
The "libdc1394" thing is, I believe, a red herring: ctypes error: libdc1394 error: Failed to initialize libdc1394
Any ideas?
tl;dr: importing numpy and then opencv changes the behaviour of struct.unpack.
UPDATE: Paulo's answer below shows that this is reproducible. Seborg's comment suggests that it's something to do with the way python handles subnormals, which sounds plausible. I looked into Contexts but that didn't seem to be the problem, as the context was the same after the imports as it had been before them.
This isn't an answer, but it's too big for a comment. I played with the values a bit to find the limits.
Without loading numpy and cv:
>>> unpack("f", pack("i", 8388608))
(1.1754943508222875e-38,)
>>> unpack("f", pack("i", 8388607))
(1.1754942106924411e-38,)
After loading numpy and cv, the first line is the same, but the second:
>>> unpack("f", pack("i", 8388607))
(0.0,)
You'll notice that the first result is the smallest normal (non-subnormal) 32-bit float. I then tried the same with d.
Without loading the libraries:
>>> unpack("d", pack("xi", 1048576))
(2.2250738585072014e-308,)
>>> unpack("d", pack("xi", 1048575))
(2.2250717365114104e-308,)
And after loading the libraries:
>>> unpack("d",pack("xi", 1048575))
(0.0,)
Now the first result is the smallest normal 64-bit double.
It seems that loading the numpy and cv libraries, in that order, causes values below the smallest normal 32- or 64-bit float (the subnormals) to come back from unpack as 0. That is consistent with one of the libraries switching the CPU into flush-to-zero/denormals-are-zero mode, in which subnormal results are silently replaced by zero.
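For reference, the boundary values in the transcripts can be reproduced portably; Python itself does not enable flush-to-zero, so a plain interpreter shows the subnormals, and the bit patterns below are the standard IEEE 754 ones:

```python
import struct

# Bit pattern 0x00800000 is the smallest normal float32 (2**-126).
smallest_normal = struct.unpack("<f", struct.pack("<I", 0x00800000))[0]
print(smallest_normal)  # 1.1754943508222875e-38

# One step below it, 0x007FFFFF, is the largest float32 subnormal.
largest_subnormal = struct.unpack("<f", struct.pack("<I", 0x007FFFFF))[0]
print(largest_subnormal)  # 1.1754942106924411e-38

# Under flush-to-zero, anything below smallest_normal would unpack as 0.0.
print(0.0 < largest_subnormal < smallest_normal)  # True
print(smallest_normal == 2.0 ** -126)             # True
```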