print "Deleting ", temp_dir
try:
os.chmod(temp_dir, stat.S_IRWXU)
shutil.rmtree(temp_dir)
except Exception, e:
print(e)
raise
This throws the error "[Error 5] Access is denied: 'c:\temp\metabuild\common\build\build.cmd'".
The owner has only read permission on the file build.cmd. I added the chmod line shown in the code to change that, but it didn't do anything; the permissions are still the same. Any help? Thanks.
On Windows, there are a wide range of reasons you may not be able to delete a file or directory even though os.stat says you have write access to it:
You may not have write access to the parent directory.
The filesystem may be read-only.
The file may be currently open in some other process which didn't use the FILE_SHARE_DELETE sharing mode (which most programs don't—in particular, any program using the POSIX-style APIs that Python uses for files will never use this sharing mode).
The actual ACLs on the file may be too complex to be represented as POSIX-style permissions, so you don't actually have permission to delete the file even though both it and its parent directory show up as +w in Unix-style functions like stat.
Unless it's the first one, there's really no way to figure out exactly what's going on via the POSIX-y APIs that Python uses; it doesn't understand Windows sharing modes, ACLs, or anything else that could be a problem. You will need to use tools that work via the native Win32 APIs. You can do this from Python via pywin32 (or ctypes or similar to the DLLs, if you really prefer), but you probably want to figure out the problem with Explorer or the various Sysinternals tools first.
If you don't know all the crazy ways in which Win32 is not like POSIX, read Creating and Opening Files at MSDN.
Also, if you're running this code under a Cygwin Python rather than a native Python, you will have to switch; inside Cygwin, you can only access the Cygwin layer, which emulates a real POSIX environment on top of Windows, and doesn't give you access to the Win32 stuff underneath.
Anyway, once you know what the problem is, most likely the only way to work around it in your code will be to again drop to the Win32 APIs via pywin32. Or, if you're trying to do something like override sharing modes, you have to drop even lower, to the NT layer, which you'll have to access via ctypes. And since most of the stuff you need isn't really documented (for the parts that are documented, search for "driver development") you'll probably have to dig through the C source to the Sysinternals tools to figure out how to force another process to close a file, or delete the file even though the other process has it closed.
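That said, if the root cause turns out to be the simple case from the question (a read-only attribute on a file inside the tree), there is a pure-Python workaround worth trying before dropping to Win32: shutil.rmtree accepts an onerror callback, and the chmod has to be applied to each offending file rather than just the top-level directory. A minimal sketch, assuming that is the actual cause:

import os
import shutil
import stat

def handle_remove_readonly(func, path, exc_info):
    # clear the read-only flag on the file that failed, then retry the
    # operation (func is os.remove, os.rmdir, etc.)
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree(temp_dir, onerror=handle_remove_readonly)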
I have encountered a rather funny situation: I work in a big scientific collaboration whose major software package is based on C++ and Python (still 2.7.15). This collaboration also has multiple servers (SL6) to run the framework on. Since I joined the collaboration recently, I received instructions on how to set up the software and run it. All works perfectly on the server. Now, there are reasons not to connect to the server for simple tasks or code development; instead it is preferable to do these kinds of things on your local laptop.
Thus, I set up a virtual machine (docker) according to a recipe I received, installed a couple of things (fuse, cvmfs, docker images, etc.) and in this way managed to connect my MacBook (OSX 10.14.2) to the server where some of the libraries need to be sourced in order for the software to be compiled and run. And after 2h it does compile! So far so good.
Now comes the fun part: you run the software by executing a specific python script which is fed another python script as an argument. Not funny yet. But somewhere in this big list of python scripts sourcing one another, there is a very simple task:
import logging
variable = logging.DEBUG
This is written inside a script called Logging.py. So the script and the library differ only by the first letter: l or L. On the server, this runs perfectly smoothly. On my local VM setup, I get the error
AttributeError: 'module' object has no attribute 'DEBUG'
I checked the python versions (which python) and the location of the logging library (print logging.__file__), and in both setups I get the same result for both commands. So the same python version is run and the same logging library is sourced, but in one case there is a mix-up with the name of the file that sources the library.
So I am wondering whether there is some "convention file" (like a .vimrc for vi) sourced somewhere, in which this issue could be resolved by setting some tolerance parameter to some other value...?
Thanks a lot for the help!
conni
as others have said, OSX treats filenames as case-insensitive by default, so the Python bundled logging module collides with your Logging.py file. I'd suggest the better fix would be to rename the Logging.py file, as this would improve the portability of the code base. otherwise, you could create a "Case-sensitive" APFS file system using "Disk Utility"
if you go with creating a file system, I'd suggest not changing the root/system partition to case-sensitive as this will break various programs in subtle ways. you could either repartition your disk and create a case-sensitive filesystem, or create an "Image" (this might be slower, not sure how much) and work in there. Just make sure you pick the "APFS (Case-sensitive)" format when creating the filesystem!
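For reference, here is a quick diagnostic you could run in the failing environment to confirm the shadowing; this is a sketch, assuming the script is started from the directory containing Logging.py:

import sys
print sys.path[0]                # the script's own directory is searched first
import logging
print logging.__file__           # if this points at Logging.py, the local file won
print hasattr(logging, "DEBUG")  # False when the stdlib module was shadowed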
I have a custom python script that depends on MinimalModbus and the pySerial library. I am trying to deploy it to a router which runs a python interpreter.
MinimalModbus is just a single .py file, which is trivial to deploy. However, the pySerial library appears to be considerably more involved. It looks like several python files that work together to "automatically select the appropriate backend".
Does one have to "install" pySerial in order to use it? Or is there some way to extract just the pertinent files/dependencies for a given OS?
I don't know everything that happens when you run pySerial's setup.py (e.g. which files get copied where), or whether it will work for this particular type of deployment. I was hoping to just include specific files.
Any help will be appreciated.
We are using Python version 2.6.
Update:
I basically took the "installed" files from the /site-packages/serial folder on my development box and uploaded them to the device. This got me a bit further; however, I am now getting the following error:
At around line 273 of serialposix.py, it's calling:
self.fd = os.open(self.portstr, os.O_RDWR|os.O_NOCTTY|os.O_NONBLOCK)
Why would it not be able to find the os.open routine?
Update 2:
Further simplifying the problem, my script now consists of something as simple as the following, and it still fails with the same error:
import os
serfd = os.open("/com/0", os.O_RDWR | os.O_NONBLOCK)
Under Python Standard Modules with Digi-Specific Behavior, they make the following comment about the os module:
Use of the os module in Digi devices is currently very limited. The primary purpose in exposing it is to allow access to the serial ports, which are presented as nodes in the file system. Serial ports are available as files with the path in the form /com/0 with the zero replaced by the zero-based index of the serial port to control.
In addition, both of their sample applications use the os.open routine for serial communication.
I would have expected to maybe see an error such as: OSError: [Errno 2] No such file or directory: '/com/0', but this is not the case. Python can't even locate the os.open routine.
Would you expect the os.py file to have a def open(...) routine defined?
After opening a support case with the manufacturer of this router, it turns out that the os.open function is not supported on this device. However, the device does support io.open which I believe is similar. More importantly, I learned that the manufacturer provides their own "pyserial" implementation specifically designed to work on the device's operating system. Internally, it looks like they've switched out the calls to use io.open equivalent.
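For illustration, a hedged sketch of what the io.open-based replacement might look like; /com/0 is the device path from the Digi documentation quoted above, and the request bytes are purely hypothetical:

import io

# buffering=0 requests an unbuffered raw file object, the closest match to
# the file-descriptor semantics that os.open would have provided
ser = io.open("/com/0", "r+b", buffering=0)
ser.write("\x01\x03\x00\x00\x00\x01")  # hypothetical raw request frame
response = ser.read(16)
ser.close()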
having had a look through the source files, i believe that it is entirely python based, and when it installs it copies the files across to the site-packages folder.
having said that it appears that you would need several of the source files in order for it to work, and if you were simply copying them across you may need to modify their imports to ensure they work properly.
e.g. for linux you would need the serialposix.py and serialutil.py files
you may need more than just this, but i have only had a quick look through.
but at the top of serialposix.py there is a line:
from serial.serialutil import *
this would need to be changed to:
from serialutil import *
and there may be other such changes to make.
but ultimately this is a thin layer over the OS's own facilities (serialposix.py uses fcntl and termios) to do the hard work of talking to the underlying OS, so you should be able to make it work.
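to make that concrete, here is a rough sketch of what the trimmed-down deployment might look like (pyserial 2.x file names; the port and settings are just an example), once serialposix.py's import has been changed as described above:

# files uploaded next to the script: serialutil.py and serialposix.py
import serialposix

ser = serialposix.Serial("/dev/ttyS0", baudrate=9600, timeout=1)
ser.write("hello")
print ser.read(16)
ser.close()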
UPDATE:
to explain why it is calling os.open: on most platforms serial ports are treated almost like files, in that they return a "file like" handle. the idea behind pyserial is to abstract operating-system-level differences away to create a single easy interface to these, but ultimately it is still treated as a file-like handle by the OS.
just for clarification, could you let us know what version of pyserial you are using? the line number you quoted and what i'm looking at don't match.
i suspect the main reason you are having difficulty is the zip nature of the python deployment you are using, but i do find it hard to believe that the os module is not included in it. have you checked whether os.py in the ZIP is a plain python file or a compiled python file?
UPDATE 2:
having looked at the documentation for the distro you are using i would suggest you have a read through the following:
Python Standard Modules with Digi-Specific Behavior
where it states that some functionality is limited on these devices. it also gives a method by which you can test whether something is supported, by telnetting/SSHing into the device and trying it on the python command line.
also, while it is not as neat and easy to use as the pyserial module, i suggest you give this a read too:
Digi Serial Port Access
I'm working on an Inno Setup installer for a Python application for Windows 7, and I have these requirements:
The app shouldn't write anything to the installation directory
It should be able to use .pyc files
The app shouldn't require a specific Python version, so I can't just add a set of .pyc files to the installer
Is there a recommended way of handling this? Like give the user a way to (re)generate the .pyc files? Or is the shorter startup time benefit from the .pyc files usually not worth worrying about?
PYC files aren't guaranteed to be compatible across different Python versions. If you don't know that all your customers are running the same Python version, you really don't want to distribute pyc's directly. So you have to choose between distributing PYCs and supporting multiple Python versions.
You could create a build process that compiles all your files using py_compile and zips them up into a version-specific package. You can do this with setuptools; however, it will be awkward because you'll have to run py_compile under every version you need to support.
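A minimal sketch of such a build step (package and archive names hypothetical), using the stdlib compileall module, which drives py_compile over a whole tree; it would have to be run once under each Python version you support:

import compileall
import os
import sys
import zipfile

# byte-compile the source tree with the interpreter that will run it
compileall.compile_dir("myapp", force=True)

# bundle the .pyc files into a version-tagged archive
tag = "py%d%d" % sys.version_info[:2]
with zipfile.ZipFile("myapp-%s.zip" % tag, "w") as zf:
    for root, dirs, files in os.walk("myapp"):
        for name in files:
            if name.endswith(".pyc"):
                zf.write(os.path.join(root, name))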
If you are basically distributing a closed application and don't want people to have trivial access to your source code, then py2exe is probably a simpler alternative. If your program is supposed to be integrated into the user's Python install, then it's probably simpler to just create a zip of your .py files and add a one-line .py stub that imports the zipped package(s) via Python's built-in support for importing from zip archives (zipimport).
if it makes you feel better, PYC doesn't provide much extra security and it doesn't really boost perf much either :)
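For what it's worth, here is a hedged sketch of the zip-plus-stub idea (the archive, package, and entry-point names are all hypothetical); Python's built-in zipimport support kicks in as soon as the archive is on sys.path:

import os
import sys

# put the bundled archive on the import path
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, "app.zip"))

import myapp     # resolved from inside app.zip via zipimport
myapp.main()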
If you haven't read PEP 3147, that will probably answer your questions.
I don't mean the solution described in that PEP and implemented as of Python 3.2. That's great if your "multiple Python versions" just means "3.2, 3.3, and probably future 3.x". Or even if it means "2.6+ and 3.1+, but I only really care about 3.2 and 3.3, so if I don't get the pyc speedups for other ones that's OK".
But when I asked about your supported versions, you said "2.7", which means you can't rely on PEP 3147 to solve your problems.
Fortunately, the PEP is full of discussion of earlier attempts to solve the problem, and the pitfalls of each, and there should be more than enough there to figure out what the options are and how to implement them.
The one problem is that the PEP is very linux-centric—mainly because it's primarily linux distros that tried to solve the problem in the past. (Apple also did so, but their solution was (a) pretty much working, and (b) tightly coupled with the whole Mac-specific "framework" thing, so they were mostly ignored…)
So, it largely leaves open the question of "Where should I put the .pyc files on Windows?"
The best choice is probably an app-specific directory under the user's local application data directory. See Known Folders if you can require Vista or later, CSIDL if you can't. Either way, you're looking for the FOLDERID_LocalAppData or CSIDL_LOCAL_APPDATA, which is:
The file system directory that serves as a data repository for local (nonroaming) applications. A typical path is C:\Documents and Settings\username\Local Settings\Application Data.
The point is that it's a place for applications to store data that's separate for each user (and inside that user's profile directory), and also separate for each machine the user's roaming profile might end up on. That means you can safely put stuff there and know that the user has permission to write there without UAC getting involved, and also know (as well as you ever can) that no other user or machine will interfere with what's there.
Within that directory, you create a directory for your program, and put whatever you want there, and as long as you picked a unique name (e.g., My Unique App Name or My Company Name\My App Name or a UUID), you're safe from accidental collision with other programs. (There used to be specific guidelines on this in MSDN, but I can no longer find them.)
So, how do you get to that directory?
The easiest way is to just use the env variable %LOCALAPPDATA%. If you need to deal with older Windows, you can use %USERPROFILE% and tack \Local Settings\Application Data onto the end, which is guaranteed to either be the same, or end up in the same place via junctions.
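As a sketch (the fallback path and the app name come from the discussion above; the helper itself is hypothetical):

import os

def local_appdata_dir():
    # prefer the modern environment variable, fall back to the older path
    return os.environ.get(
        "LOCALAPPDATA",
        os.path.join(os.environ["USERPROFILE"],
                     "Local Settings", "Application Data"))

cache_dir = os.path.join(local_appdata_dir(), "My Unique App Name")
if not os.path.isdir(cache_dir):
    os.makedirs(cache_dir)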
You can also use pywin32 or ctypes to access the native Windows APIs (since there are at least 3 different APIs for this and at least two ways to access those APIs, I don't want to give all possible ways to write this… but a quick google or SO search for "pywin32 SHGetFolderPath" or "ctypes SHGetKnownFolderPath" or whatever should give you what you need).
Or, there are multiple third-party modules to handle this. The first one both Google and PyPI turned up was winshell.
Re-reading the original question, there's a much simpler answer that probably fits your requirements.
I don't know much about Inno, but most installers give you a way to run an arbitrary command as a post-copy step.
So, you can just use python -m compileall to create the .pyc files for you at install time—while you've still got elevated privileges, so there's no problem with UAC.
In fact, if you look at pywin32, and various other Python packages that come as installer packages, they do exactly this. This is an idiomatic thing to do for installing libraries into the user's Python installation, so I don't see why it wouldn't be considered reasonable for installing an executable that uses the user's Python installation.
Of course if the user later decides to uninstall Python 2.6 and install 2.7, your .pyc files will be hosed… but from your description, it sounds like your entire program will be hosed anyway, and the recommended solution for the user would probably be to uninstall and reinstall anyway, right?
Here is quite a cool problem.
I have a python script (main) that calls a python module (foo.py), which in turn calls another python module (barwrapper.py) that uses LoadLibrary to dynamically open and access a libbar.so library.
libbar and the whole rest of the chain open and create files to perform their task. The problem arises when we issue an rmtree in the main python script to get rid of the temporary directory created by the imported modules. rmtree is invoked at the end of the script, just before exiting. The call fails because the directory contains .nfs-whatever hidden files, which I guess are the removed files. These files are apparently still held open somewhere in the code, forcing NFS to keep them around as these .nfs-whatever files until the file descriptors are released. This situation does not arise on other filesystems, because files associated with held descriptors are effectively removed but kept accessible by the kernel until the descriptor is closed.
We strongly suspect that the .so library is leaking file descriptors, and these non-closed files ruin the rmtree party at cleanup time. I thought about unloading the .so file in barwrapper, but apparently there's no way to do that, and I am not sure if the dynloader will actually remove the lib from the process space and close the descriptors, or if it will just mark it unloaded and that's it, waiting to be replaced by other stuff, but with the descriptors leaked.
I can't really think of other workarounds to the problem (apart from fixing the leaks, something we would rather not do, as it's a 3rd party library). Clearly, it happens only on NFS. Do you have any ideas we can try out to fix it?
The kernel keeps track of file descriptors, so even if you got python to unload the .so and release the memory, it would not know to close the leaked file descriptors. The only thing that comes to mind is importing the .so after forking, and only cleaning up after the forked child process has exited (and the file handles implicitly closed on exit by the kernel).
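A rough sketch of that fork-based approach, assuming barwrapper exposes some entry point (the names below are hypothetical):

import os
import shutil

pid = os.fork()
if pid == 0:
    # child: only the child ever loads the leaky library
    import barwrapper            # this dlopens libbar.so
    barwrapper.do_work()         # hypothetical entry point
    os._exit(0)                  # kernel closes any leaked descriptors here
else:
    os.waitpid(pid, 0)           # wait until the child (and its fds) are gone
    shutil.rmtree(temp_dir)      # hypothetical path to the temp directory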
The right solution is to fix the handle leak, but if you're not sure who is leaking, maybe an strace call would help you localize the leak and submit the bug to the maintainers of the 3rd party library (or better, if it is an open source library, try to submit a patch ;) ).
On the other hand, maybe a umount/mount of the nfs partition could help force the handles closed.
My application recently crashed on a client's computer. I suspect it's because of PyQt's own memory management, which can lead to invalid memory accesses when not handled properly.
When Python crashes like this, no backtrace is printed, only a data dump is written to the disk.
Is there a possibility to find out where in the Python code the crash occurred?
Here's the dump: http://pastie.org/768550
Is this a linux core dump? If so, you can examine it with gdb. You will need to run it on a system with an identical OS and version of Python, including 3rd party libraries. Run gdb -c /path/to/core/file. Once gdb has loaded, the command bt will list the stack trace for the main thread, and thread apply all bt will list the stack traces for all threads.
How useful this will be depends on whether the version of Python includes the full symbol table (i.e. is a debug build of Python) - if it is not, then you will only see addresses as offsets to the main C entry points. However this can still be of some use in diagnosing what went wrong.
If it is some other OS that does not support gdb then you are on your own - presumably the OS will have its own debugging tools.
Edit:
There is a page on the Python wiki describing how to get a python stack trace with gdb.
However a quick look at the link in the question shows that the OS is Windows, so gdb is of no use. The information in the Windows dump is minimal, so I think you are out of luck.
My only suggestions are:
try to reproduce the crash in-house.
get the user to reproduce the bug while running a tool that will catch the crash and do a proper memory dump. It has been about a decade since I did serious windows debugging, so I don't know what tools are available now - there used to be one called Dr. Watson, but it may be obsolete.
If the user can't reproduce the crash then you are out of luck, on the other hand if it never happens again it is not really that big a problem. ;-)
Update:
Google tells me that Dr Watson is still the default crash handler on Windows XP (and presumably other versions of Windows) - the stack dump that was linked in the question probably came from it. However the default data saved by Dr Watson is fairly minimal, but you can configure it to save more - see this article. In short, if you run drwtsn32 -i it will pop up a dialog to let you set the options.
There's a file named gdbinit in the Python source tree (in Misc/gdbinit) which provides a set of macros for gdb so as to display the current interpreter context. Just type source gdbinit in gdb and then you can execute macros such as pystack. The list of available macros can be obtained simply by reading the file's source code.
(you can find it directly here: http://svn.python.org/view/python/trunk/Misc/gdbinit?view=log).
Of course, if the crash is so severe that it has corrupted the interpreter's internal structures, the macros may fail or crash. Also, it is better to compile the interpreter in debug mode, otherwise gdb may fail to locate the required symbols and variables.
Not sure if it helps, but if you can catch the exception, you could use http://github.com/gooli/pydump to store a dump and load it later in a Python debugger.
Does your application produce a log? If so, you can have logging produce an in-memory log which you might be able to find within the core dump. Also, you can have them send you the log file itself instead of the core dump.
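For instance, here is a hedged sketch using the stdlib MemoryHandler, which keeps recent records in RAM (where they may survive into a dump) while flushing them to a file the client can send you; the logger name and capacity are arbitrary:

import logging
import logging.handlers

logger = logging.getLogger("myapp")   # hypothetical application logger
logger.setLevel(logging.DEBUG)

# records are buffered in memory and flushed to the file on ERROR or when full
file_handler = logging.FileHandler("myapp.log")
memory_handler = logging.handlers.MemoryHandler(capacity=1000, target=file_handler)
logger.addHandler(memory_handler)

logger.debug("application started")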