Related to this thread...
I am trying to track down a bug in which the results from processing on an IPython cluster do not match what happens when the same process is run locally, even when the IPython cluster is entirely local and the CPU is simply running multiple engines.
I cannot figure out how to log data as it is being processed on the engines. Print statements don't work, and even when I try to have each engine write to a separate file, the file is created but nothing is written to it.
There must be a way to debug code running on the IPython parallel engines.
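For reference, here is a minimal per-engine logging sketch, assuming ipyparallel (IPython.parallel on older versions) and a locally running cluster; the work function and file names are illustrative. Opening the log file inside the engine-side function and flushing explicitly avoids losing buffered output:

import ipyparallel as ipp

rc = ipp.Client()
dview = rc[:]

def work(x):
    import os
    # One log file per engine process; flush so output survives a crash.
    with open('engine_%d.log' % os.getpid(), 'a') as f:
        f.write('processing %r\n' % (x,))
        f.flush()
    return x * 2

results = dview.map_sync(work, range(8))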
Not sure why, but I narrowed the problem and a workaround down to the fact that I am using Cython and compiling the .pyx files before running the program.
For some reason the Cython cdef initialization of my float variables was not being done properly on the engines, although it was done correctly when I ran outside of the Client() queue.
Changing these variables to normal Python variables solved the problem, though it does not seem like this should be necessary. Can anyone shed more light on this?
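A hypothetical reduction of what I mean, as it would appear in a .pyx file (the names are illustrative, not my actual code):

def accumulate(values):
    cdef float total = 0.0   # typed init: gave wrong results on the engines
    # total = 0.0            # plain Python variable: worked everywhere
    for v in values:
        total += v
    return total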
Related
I have a PyQt application that will occasionally crash due to memory issues, with the error shown below:
The instruction at 0x00007FFC450FA07 referenced memory at 0x0000000000000048. The memory could not be read.
I personally have not been able to reproduce this error in any way. It has happened on the computers of a few people I work with while running my application. I have seen a lot of similar questions for languages like C# or C++, where it seems to be a pointer issue. Since I am using Python, those questions and solutions are not very relevant, as Python does not expose raw pointers.
Seen here: C# example question. The problem there was a long-running thread. I do use threading in my application, so I have been looking there. I have run my application many times with the task manager open to monitor threads and have not seen any problems; all threads seem to close. If there is a better way to monitor this, please let me know.
I am using PyCharm and my Python version is 3.9. I am also using PyInstaller to create an executable version of the application. I have run the program from the command line, by launching the executable, and directly through PyCharm, and have not been able to reproduce the error or debug it.
Is it possible this is happening due to bitness issues? That is, I am building my executable with a 32-bit version of Python; could some computers have a problem with that, and would building with a 64-bit version possibly solve this?
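One way to at least capture more information on the affected machines, assuming the crash originates in native code: enable the standard-library faulthandler early at startup, so a crash dumps the Python tracebacks of all threads to a file (the log path here is illustrative):

import faulthandler

# Keep a reference to the file object so it stays open for the app's lifetime.
crash_log = open('crash_traceback.log', 'w')
faulthandler.enable(file=crash_log, all_threads=True)

# ... create the QApplication and start the event loop as usual ...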
So I'm using Wand for a project and it's been working fine, except whenever I use a function that is 'OpenCL-accelerated' (list of functions here), it stops working with no error or anything, but I'm pretty sure OpenCL is what's causing it.
So my question is: how can I disable OpenCL in Wand so that I can use those functions? Again, I've looked for solutions, but I couldn't find anything Python-specific, and neither the api module nor the Python module itself seems to mention anything about it.
Any help would be greatly appreciated, thanks!
Just set the environment variable MAGICK_OCL_DEVICE=OFF to disable OpenCL.
Either before running the script:
export MAGICK_OCL_DEVICE=OFF
python myScript.py
Or within the Python script, before invoking any Wand code:
import os
os.environ['MAGICK_OCL_DEVICE'] = 'OFF'
# ... wand stuff ...
And the reason it looks like your application stops running is that ImageMagick needs to run an OpenCL benchmark the first time, and that can take a while. The result of the benchmark is cached and used the next time an "OpenCL-accelerated" method is executed.
So I'm working on a Python 3 package, and some functionality is tested with scripts that can run for several hours or days on a remote machine.
Before I started using multiprocessing (on Windows), there was no problem editing the source files, starting the script, and continuing to edit, because all the imports are resolved immediately and all the code remains in memory unchanged.
With multiprocessing I'm getting syntax errors etc. when I'm in the middle of editing a file and the running script hits an import statement (presumably because, on Windows, each spawned worker process re-imports the modules from disk).
What is a safe way to edit my Python sources that doesn't interrupt my programming workflow too much?
I like the fact that I can try out ideas quickly and I don't want to add too many steps between making changes to the code and running it on the remote machine.
My ideas:
Use version control and work on a copy on my local machine. Problem here: I have to do some of the debugging on the remote machine, so I would have to push even minor changes while looking for small mistakes.
Use version control and work on a copy on the remote machine. Problem here: if I understand this correctly, I would have to switch between a "debug" and a "test" conda environment, which also seems a bit tedious.
Find a solution to the multiprocessing issue itself, e.g. by running each job from a frozen copy of the sources (see the sketch below) .....
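To make the third idea concrete, here is a minimal sketch (package name and paths are illustrative) of snapshotting the sources into a run-specific directory before launching, so later edits cannot race with the workers' re-imports:

import shutil
import subprocess
import sys
import time

# Copy the package into a unique per-run directory.
snapshot = 'runs/run_%d' % int(time.time())
shutil.copytree('mypackage', snapshot + '/mypackage')

# Launch the long-running script against the frozen snapshot; editing the
# working copy afterwards no longer affects this run.
subprocess.Popen([sys.executable, '-m', 'mypackage.experiment'], cwd=snapshot)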
I'm sure there is a way to do this right, can you help me out?
Background: I'm working on a web application that allows users to upload Python scripts to a server (Twisted web server). The UI provides full CRUD functionality on these Python scripts. After uploading a script, the user can select it, run it on the server, and get results back in the UI. Everything works fine...
Problem: ...except when the user edits the Python code inline (via the UI) or updates a script by uploading a new one that overwrites an existing one. It seems that Twisted caches the code (both old and new) and sometimes runs the new code and sometimes the old.
Example: I upload a script hello.py to the server which has a function called run() which does: print 'hello world'. Someone else comes along and uploads another script named hello.py which does: print 'goodbye world'. Then I go back and execute the run() function 10 times. Half of the time it will say 'hello world' and half of the time 'goodbye world'.
Tried so far: Several different ways to reload the script into memory before executing it, including:
Python's built-in reload():
module = __import__('hello')
reload(module)
module.run()
imp module reload():
import imp
module = __import__('hello')
imp.reload(module)
module.run()
twisted.python.rebuild()
from twisted.python.rebuild import rebuild
module = __import__('hello')
rebuild(module)
module.run()
figured that perhaps if we force Python not to write bytecode, that would solve the issue: sys.dont_write_bytecode = True
restarting the Twisted server
a number of other things which I can't remember
And the only way to make sure that the most up-to-date Python code executes is to restart the Twisted server manually. I have been researching this for quite some time and have not found any better way that works 100% of the time, which leads me to believe that bouncing Twisted is the only way.
Question: Is there a better way to accomplish this (i.e. always execute the most recent code) without having to bounce Twisted? Perhaps by preventing Twisted from caching scripts in memory, or by clearing the cache before importing/reloading modules.
I'm fairly new to the Twisted web server, so it's possible that I have overlooked an obvious way to resolve this issue, or that my whole approach is wrong. Some insight into solving this issue would be greatly appreciated.
Thanks
T
Twisted doesn't cache Python code in memory. Python's module system works by evaluating source files once and then placing a module object into sys.modules. Future imports of the module do not re-evaluate the source files - they just pull the module object from sys.modules.
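You can see this for yourself (assuming a local hello.py):

import sys
import hello                 # evaluates hello.py exactly once

again = __import__('hello')  # does not re-evaluate the source
assert again is sys.modules['hello']
assert again is hello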
What parts of Twisted will do is keep references to objects that they are using. This is just how you write Python programs: if you don't have references to objects, you can't use them. The Twisted Web server can't call the run function unless it has a reference to the module that defines that function.
The trouble with reload is that it re-evaluates the source file defining the module, but it can't track down and replace all of the references to the old versions of the objects that module defined - for example, your run function. The imp.reload function is essentially the same.
twisted.python.rebuild tries to address this problem but using it correctly takes some care (and more likely than not there are edge cases that it still doesn't handle properly).
Whether any of these code reloading tools will work in your application or not is extremely sensitive to the minute, seemingly irrelevant details of how your application is written.
For example,
import somemodule
reload(somemodule)
somemodule.foo()
can be expected to run the newest version of somemodule.foo. But...
from somemodule import foo
import somemodule
reload(somemodule)
foo()
can be expected not to run the newest version of somemodule.foo. There are even more subtle rules for using twisted.python.rebuild successfully.
Since your question doesn't include any of the actual code from your application, there's no way to know which of these cases you've run into (resulting in the inability to reliably update your objects to reflect the latest version of their source code).
There aren't any great solutions here. The solution that works most reliably is to restart the process. This certainly clears out any old code/objects and lets things run with the newest version (though not 100% of the time - for example, timestamp problems on .py and .pyc files can result in an older .pyc file being used instead of a newer .py file - but this is pretty rare).
Another approach is to use execfile (or exec) instead of import. This bypasses the entire module system (and therefore its layer of "caching"). It puts the entire burden of managing the lifetime of the objects defined by the source you're loading onto you. It's more work but it also means there are few surprises coming from other levels of the runtime.
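For example, a minimal sketch of that approach (the path and the run() convention are illustrative): read the uploaded source and evaluate it into a fresh namespace on every execution, so sys.modules is never involved:

def run_user_script(path):
    # Fresh namespace per run; nothing is cached between executions.
    namespace = {'__name__': 'user_script'}
    with open(path) as f:
        exec(f.read(), namespace)
    # Assumes the uploaded script defines a run() function.
    return namespace['run']()

run_user_script('scripts/hello.py')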
And of course it is possible to do this with reload or twisted.python.rebuild if you're willing to go through all of your code for interacting with user modules and carefully audit it for left-over references to old objects. Oh, and any library code you're using that might have been able to get a reference to those objects, too.
I seem to be getting a runtime error whilst running my Python script in Blackmagic Fusion:
"The application has requested the Runtime to terminate it in an unusual way."
This does not happen every time I run the script. It only seems to pop up when I feed the script a heavy workload, or when I run it multiple times inside the Blackmagic Fusion compositing software without restarting the application. I thought this might be a memory leak, but when I check the memory usage, it does not seem to flinch at all.
Does anyone have any idea what might be causing this, or at least a solution of how I might start to debug the script?
Many thanks.
If you know how to trigger the runtime error, run your script under pdb.
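For example, you could drop a breakpoint just before the suspect section of the script (standard-library pdb; the placement is up to you):

import pdb

pdb.set_trace()  # execution pauses here and drops into the interactive debugger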
Perhaps this will help. It's apparently a common error with Microsoft Visual C++:
http://support.microsoft.com/kb/884538