I am trying to create an animated GIF from a series of heat maps with HoloViews.
I need to do this in a Python script, i.e. specifically not in a Jupyter notebook.
When saving the image, Python throws an error because it cannot create a temporary file in the current user's temp folder (this is under Windows). This happens regardless of the user, even when I run Python as admin.
When I stop in the debugger and change the temp-file path to some other place, e.g. the Desktop, that works, but the resulting holo.gif in the working directory is empty (0 bytes). The temporary GIF, though, is correctly animated, so I guess the code is basically OK.
[Edit: Not so sure anymore. I ran this overnight on 26,531 heat maps, each of which consisted of a 5x5 grid. The process did not finish (i.e. did not hit the breakpoint at Image.py line 1966). Is there a way to do what I want that is less painfully slow?]
Answers to similar problems on Stack Overflow point to permission problems (but what kind of permission problem could it be if it doesn't even work for an admin?) and suggest saving to another location, which is impossible here, as I have no control over where matplotlib tries to create its temporary files.
The problem is specific to GIFs; I can create *.png or *.html output without error. (AFAIK, the difference is that GIF creation uses ImageMagick.)
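For what it's worth, Python's tempfile module does honor the TMP/TEMP environment variables, so as a stopgap the temp location can be redirected before rendering. A minimal sketch, assuming some writable scratch directory (the path below is hypothetical):
import os
import tempfile

# Workaround sketch: point Python's temp machinery at a known-writable
# directory before rendering. Resetting tempfile.tempdir forces
# gettempdir() to re-evaluate the environment variables.
os.environ['TMP'] = r'C:\Users\y2046\Desktop\tmp'  # hypothetical path
tempfile.tempdir = None
renderer.save(holo, 'holo', fmt='gif')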
Here's the code (construction of underlying heat map data left out):
import holoviews as hv
hv.extension('matplotlib')
renderer = hv.renderer('matplotlib')
renderer.fps = 3
heatMapDict = {
    k: hv.HeatMap(measurements[k].sensors) for k in range(len(measurements))
}
holo = hv.HoloMap(heatMapDict, kdims='index')
renderer.save(holo, 'holo', fmt='gif')
And the traceback:
INFO:matplotlib.animation:Animation.save using <class 'matplotlib.animation.PillowWriter'>
Traceback (most recent call last):
  File "cm3.py", line 69, in <module>
    renderer.save(holo, 'holo', fmt='gif')
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\renderer.py", line 554, in save
    rendered = self_or_cls(plot, fmt)
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 108, in __call__
    data = self._figure_data(plot, fmt, **({'dpi':self.dpi} if self.dpi else {}))
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 196, in _figure_data
    data = self._anim_data(anim, fmt)
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 246, in _anim_data
    anim.save(f.name, writer=writer, **anim_kwargs)
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 1174, in save
    writer.grab_frame(**savefig_kwargs)
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 232, in saving
    self.finish()
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 583, in finish
    duration=int(1000 / self.fps))
  File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\PIL\Image.py", line 1966, in save
    fp = builtins.open(filename, "w+b")
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\y2046\\AppData\\Local\\Temp\\tmp4im5ozo8.gif'
Addendum:
I'm coming to think that this is not a permission problem after all. Perhaps it has to do with reentrancy and file locking under Windows? The Python process can in fact create files in the temp directory, as proved by inserting the following test code before the call to renderer.save():
import os
import builtins

filename = 'C:\\Users\\y2046\\AppData\\Local\\Temp\\test.txt'
fp = builtins.open(filename, "w+b")
try:
    fp.write("first".encode('utf-8'))
finally:
    fp.close()
os.remove(filename)
I should test this under Linux. If it works there, there must be a bug in the Pillow writer.
It looks like there is something broken with HoloViews. I have opened issue #3151 with them.
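In the meantime, one way to sidestep the temporary file entirely might be to pull the matplotlib animation out of HoloViews and save it straight to the target path. This is a sketch, not a tested fix; it assumes the mpl backend's get_plot() and anim() hooks behave as documented:
# Sketch: bypass the renderer's temp-file round trip by saving the
# matplotlib animation directly to the final GIF path.
plot = renderer.get_plot(holo)          # build the matplotlib plot
anim = plot.anim(fps=3)                 # a matplotlib.animation object
anim.save('holo.gif', writer='pillow')  # write in place, no temp file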
Related
I'm calling the Paramiko sftp_client.put(localpath, remotepath) method.
This is throwing the [Errno 2] File not found error below.
01/07/2020 01:12:03 PM - ERROR - [Errno 2] File not found
Traceback (most recent call last):
  File "file_transfer\TransferFiles.py", line 123, in main
  File "paramiko\sftp_client.py", line 727, in put
  File "paramiko\sftp_client.py", line 689, in putfo
  File "paramiko\sftp_client.py", line 460, in stat
  File "paramiko\sftp_client.py", line 780, in _request
  File "paramiko\sftp_client.py", line 832, in _read_response
  File "paramiko\sftp_client.py", line 861, in _convert_status
Having tried many of the other recommended fixes, I found that the error is due to the server having an automatic trigger that moves the file to another location as soon as it has been uploaded.
I've not seen another post about this issue and wanted to know if anyone else has fixed it, as the SFTP server is owned by a third party and I don't want to change the trigger attributes.
The file actually uploads correctly, so I could catch the Exception and ignore the error. But I'd prefer to handle it, if possible.
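If catching it turns out to be the only option, the failure can at least be caught narrowly. A sketch; the errno check is an assumption based on the traceback above:
try:
    sftp_client.put(localpath, remotepath)
except IOError as e:
    # Paramiko raises an IOError from the post-upload stat; errno 2 is
    # the "File not found" seen above. Anything else is re-raised.
    if e.errno != 2:
        raise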
Paramiko by default verifies the size of the uploaded file after the upload.
If the file is moved away immediately after upload, the check fails.
To avoid the check, set the confirm parameter of SFTPClient.put to False.
sftp_client.put(localpath, remotepath, confirm=False)
I believe the check is redundant anyway; see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about pysftp (which is a wrapper around Paramiko), see:
Python pysftp.put raises "No such file" exception although file is uploaded
I also had this issue of the file automatically getting moved before Paramiko could stat the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error: pass in confirm=False when using sftp.put or sftp.putfo.
If you still want this check to run, as you mention in the post, to see whether the file has been uploaded fully, you can run something along these lines. For this to work you will need to know the location the file is moved to and have the ability to read it.
import os

sftp.putfo(source_file_object, destination_file, confirm=False)
# stat the file at the location the server trigger moves it to
upload_size = sftp.stat(moved_path).st_size
# os.stat takes a path or descriptor, not a file object, so use fstat
local_size = os.fstat(source_file_object.fileno()).st_size
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
Both checks use a stat call: sftp.stat on the remote side and os.fstat on the local file object.
Saving my IPython notebook fails.
I've set up a folder on a network drive which I'd like to save my notebooks to, but it isn't quite cooperating yet.
In ipython_notebook_config.py I've edited the following lines:
c.NotebookManager.notebook_dir = u'Z:\\Analytics\\Work\\MyFolder'
c.FileNotebookManager.notebook_dir = u'Z:\\Analytics\\Work\\MyFolder'
c.NotebookApp.notebook_dir = u'Z:\\Analytics\\Work\\MyFolder'
but still no joy.
This is IPython 2.1.0
I'm a little new to Python and IPython Notebook, so this may be obvious, not sure.
The following is the traceback:
Traceback (most recent call last):
  File "C:\Anaconda\lib\site-packages\IPython\html\base\handlers.py", line 286, in wrapper
    result = method(self, *args, **kwargs)
  File "C:\Anaconda\lib\site-packages\IPython\html\services\notebooks\handlers.py", line 209, in put
    self._save_notebook(model, path, name)
  File "C:\Anaconda\lib\site-packages\IPython\html\services\notebooks\handlers.py", line 145, in _save_notebook
    model = self.notebook_manager.save_notebook(model, name, path)
  File "C:\Anaconda\lib\site-packages\IPython\html\services\notebooks\filenbmanager.py", line 289, in save_notebook
    self.create_checkpoint(name, path)
  File "C:\Anaconda\lib\site-packages\IPython\html\services\notebooks\filenbmanager.py", line 433, in create_checkpoint
    os.mkdir(self.checkpoint_dir)
WindowsError: [Error 5] Access is denied: u'.ipynb_checkpoints'
EDIT:
Thanks to Simon Smith below I tracked it down.
Checkpoints were still saving to the wrong place. I changed this line in the config:
c.FileNotebookManager.checkpoint_dir = r'Z:\Analytics\Work\MyFolder\.ipynb_checkpoints'
and now I'm sailing along. I've also changed the other paths to raw strings (r'...') as well. Thanks again.
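For reference, the relevant lines of ipython_notebook_config.py end up looking something like this; the raw strings keep the Windows backslashes intact:
c.NotebookApp.notebook_dir = r'Z:\Analytics\Work\MyFolder'
c.FileNotebookManager.notebook_dir = r'Z:\Analytics\Work\MyFolder'
c.FileNotebookManager.checkpoint_dir = r'Z:\Analytics\Work\MyFolder\.ipynb_checkpoints'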
The final line looks suspicious here:
os.mkdir(self.checkpoint_dir)
WindowsError: [Error 5] Access is denied: u'.ipynb_checkpoints'
This looks like a permissions issue on that directory (i.e. IPython can't write any data to that location). There are instructions on how to change the permissions here.
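A quick way to check from Python whether the account running the notebook server may write to the target directory (a sketch; IPython 2 runs on Python 2, hence the print statement):
import os

# True if the current account may create files in the notebook directory
print os.access(r'Z:\Analytics\Work\MyFolder', os.W_OK)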
I have a Python CGI script that runs an application via subprocess over and over again (several thousand times). I keep getting the same error...
Traceback (most recent call last):
  File "/home/linuser/Webpages/cgi/SnpEdit.py", line 413, in <module>
    webpage()
  File "/home/linuser/Webpages/cgi/SnpEdit.py", line 406, in main
    displayOmpResult(form['odfFile'].value)
  File "/home/linuser/Webpages/cgi/SnpEdit.py", line 342, in displayContainerDiv
    makeSection(position,sAoiInput)
  File "/home/linuser/Webpages/cgi/SnpEdit.py", line 360, in displayData
    displayTable(i,j,lAmpAndVars,dOligoSet[key],position)
  File "/home/linuser/Webpages/cgi/SnpEdit.py", line 247, in displayTable
    p = subprocess.Popen(['/usr/bin/pDat',sInputFileLoc,sOutputFileLoc],stdout=fh, stderr=fh)
  File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/subprocess.py", line 1039, in _execute_child
    errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
The function causing it is below.
def displayTable(sData):
    # convert the data to the proper format
    sFormattedData = convertToFormat(sData)
    # write the formatted data to file
    sInputFile = tempfile.mkstemp(prefix='In_')[1]
    fOpen = open(sInputFile,'w')
    fOpen.write(sFormattedData)
    fOpen.close()
    sOutputFileLoc = sInputFile.replace('In_','Out_')
    # run app, requires two files; an input and an output
    # temp file to hold the stdout/stderr of the subprocess
    fh = tempfile.TemporaryFile(mode='w',dir=tempfile.gettempdir())
    p = subprocess.Popen(['/usr/bin/pDat',sInputFile,sOutputFileLoc],stdout=fh, stderr=fh)
    p.communicate()
    fh.close()
    # open the output file and parse the data into a list of dictionaries
    sOutput = open(sOutputFileLoc).read()
    lOutputData = parseOutput(sOutput)
    displayTableHeader(lOutputData)
    displaySimpleTable(lOutputData)
As far as I can tell, I'm closing the files properly. When I run...
import resource
print resource.getrlimit(resource.RLIMIT_NOFILE)
I get...
(1024, 1024)
Do I have to increase this value? I read that subprocess opens several file descriptors. I tried adding close_fds=True, and I tried using the with statement when creating my file, but the result was the same. I suspect the problem may be with the application I'm subprocessing, pDat, but it was written by someone else. It requires two inputs: an input file and the location where you want the output file written. I suspect it may not be closing the output file that it creates. Aside from that, I can't see what I might be doing wrong. Any suggestions? Thanks.
EDIT:
I'm on Ubuntu 10.04 running Python 2.6.5 and Apache 2.2.14.
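(For completeness: the soft limit can be raised at runtime with resource.setrlimit, though that only buys headroom while the underlying leak remains. A sketch:)
import resource

# Raise the soft file-descriptor limit up to the current hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))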
Instead of this...
sInputFile = tempfile.mkstemp(prefix='In_')[1]
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
I should have done this...
iFileHandle,sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
os.close(iFileHandle)
The mkstemp function creates an OS-level handle to the file, and I wasn't closing it. The solution is described in more detail here...
http://www.logilab.org/blogentry/17873
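A slightly tidier variant, as a sketch: wrap the descriptor that mkstemp already returns with os.fdopen, so there is only one handle to manage:
import os
import tempfile

# mkstemp returns (fd, path); fdopen wraps the existing descriptor,
# so closing the file object also closes the OS-level handle.
iFileHandle, sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = os.fdopen(iFileHandle, 'w')
fOpen.write(sFormattedData)
fOpen.close()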
You want to add close_fds=True to the popen call (just in case).
Then, here:
# open output file and print parsed data into a list of dictionaries
sOutput = open(sOutputFileLoc).read()
lOutputData = parseOutput(sOutput)
...I might be remembering wrong, but unless you use the with syntax, I do not think that the output file descriptor gets closed.
UPDATE: the main problem is that you need to know which files are open. On Windows this would require something like Process Explorer. On Linux it's a bit simpler: you just have to invoke the CGI from the command line, or make sure that only one instance of the CGI is running, and fetch its PID with the ps command.
Once you have the PID, run ls -la on the contents of the /proc/<PID>/fd directory. All open file descriptors will be there, with the names of the files they point to. Knowing that file so-and-so is open 377 times goes a long way towards finding out where exactly that file is opened (but not closed).
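The same check can be done from inside the script itself; a sketch, Linux-only, in the question's Python 2:
import os

# List this process's open descriptors and the files they point to.
fd_dir = '/proc/%d/fd' % os.getpid()
for fd in sorted(os.listdir(fd_dir), key=int):
    try:
        print fd, '->', os.readlink(os.path.join(fd_dir, fd))
    except OSError:
        pass  # descriptor vanished between listing and readlink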
I have the following code fragment:
def database(self):
    databasename=""
    host=""
    user=""
    password=""
    try:
        self.fp=file("detailing.dat","rb")
    except IOError:
        self.fp=file("detailing.dat","wb")
        pickle.dump([databasename,host,user,password],self.fp,-1)
        self.fp.close()
        selffp=file("detailing.dat","rb")
    [databasename,host,user,password]=pickle.load(self.fp)
    return
It has the error:
Traceback (most recent call last):
  File "detailing.py", line 91, in ?
    app=myApp()
  File "detailing.py", line 20, in __init__
    wx.App.__init__(self,redirect,filename,useBestVisual,clearSigInt)
  File "/usr/lib64/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", line 7473, in __init__
    self._BootstrapApp()
  File "/usr/lib64/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", line 7125, in _BootstrapApp
    return _core_.PyApp__BootstrapApp(*args, **kwargs)
  File "detailing.py", line 33, in OnInit
    self.database()
  File "detailing.py", line 87, in database
    [databasename,host,user,password]=pickle.load(self.fp)
  File "/usr/lib64/python2.4/pickle.py", line 1390, in load
    return Unpickler(file).load()
  File "/usr/lib64/python2.4/pickle.py", line 872, in load
    dispatch[key](self)
  File "/usr/lib64/python2.4/pickle.py", line 894, in load_eof
    raise EOFError
EOFError
What am I doing wrong?
Unless you've got a typo, the issue may be in this line, where you assign the file handle to selffp, not self.fp:
selffp=file("detailing.dat","rb")
If that is a typo, and your code actually opens the file to self.fp, then you may wish to verify that the file actually has contents (i.e. that the previous pickle worked); the error suggests that the file is empty.
Edit: In the comments to this answer, S. Lott has a nice summary of why the typo generated the error you saw that I'm pasting here for completeness of the answer: "selffp will be the unused opened file, and self.fp (the old closed file) will be used for the load".
Here's the version that I would recommend using:
def database(self):
    databasename=""
    host=""
    user=""
    password=""
    try:
        self.fp=open("detailing.dat","rb")
    except IOError:
        with open("detailing.dat", "wb") as fp:
            pickle.dump([databasename,host,user,password],fp,-1)
        self.fp=open("detailing.dat","rb")
    [databasename,host,user,password]=pickle.load(self.fp)
    return
As has been pointed out, there was a typo on self.fp. But here are a few other things that I notice that can cause problems.
First of all, you shouldn't be using the file constructor directly. You should instead use the built-in open function.
Secondly, you should avoid calling a file's close method outside a finally block. In this case, I've used Python 2.6's with block. You can use this in Python 2.5 with the following import:
from __future__ import with_statement
This will prevent the file from being stuck open if an exception is thrown anywhere, as it will close the file when the with block is exited. Although this isn't the cause of your problem, it is an important thing to remember, because if one of the file object's methods throws an exception, the file will get held open in sys.traceback indefinitely.
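If the handle doesn't need to be kept on self, the final read can use a with block too; a sketch based on the code above:
# read the saved settings back; the with block closes the file even
# if pickle.load raises
with open("detailing.dat", "rb") as fp:
    [databasename, host, user, password] = pickle.load(fp)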
(Note that you should probably accept Jarret Hardie's answer, though; he caught the bug. :-) )
I got this error when I didn't choose the correct mode to read the file (wb instead of rb). Changing back to rb was not sufficient to solve the issue, but regenerating a new, clean pickle file did. It seems that opening the binary file in the wrong mode somehow "damages" it so that it cannot be opened at all afterwards. (Opening an existing file in 'wb' mode truncates it, which would explain the damage.)
But I am quite a beginner with Python, so I may have missed something too.
While this is not a direct answer to the OP's question: I happened upon this answer while searching for the cause of an EOFError when trying to unpickle a binary file with pickle.load(open(filename, "r")).
import cPickle as pickle

A = dict((v, i) for i, v in enumerate(words))
with open("words.pkl", "wb") as f:
    pickle.dump(A, f)

# ...later, open the file -- mistake: reading a binary file in text mode
with open("words.pkl", "r") as f:
    A = pickle.load(f)  # EOFError

# change that to
with open("words.pkl", "rb") as f:  # notice the "rb" instead of "r"
    A = pickle.load(f)