To introduce my question, let's take an example.
I have a file code.py. Let's say I run it with python code.py, which takes some time. Python starts a process, call it p0.
Obviously, changing code.py while p0 is running won't change its execution, because code.py is loaded at the beginning.
My question is then: knowing p0's pid, is there a way to see/print the Python code currently used by p0, i.e. code.py's "version" at the time p0 started?
I'm quite curious about how difficult this could be, and, since Python isn't a compiled language, I suspect the original Python source may still be around somewhere. It's quite hacky, but it could help when I'm not 100% sure what the code looked like when I started a looooong process (and then modified the file; yes, that's not really good practice, I must say).
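The only workaround I can think of is planning ahead: a rough sketch (POSIX-only; the SIGUSR1 handler is just my own illustration, not an existing tool) where the script caches its own source at startup so the loaded version can be dumped later:

import signal
import sys

# Read code.py exactly as it is at launch, before anyone edits it.
with open(sys.argv[0]) as f:
    ORIGINAL_SOURCE = f.read()

def dump_source(signum, frame):
    # Print the cached (launch-time) source, not whatever is on disk now.
    sys.stderr.write(ORIGINAL_SOURCE)

signal.signal(signal.SIGUSR1, dump_source)

# ... long-running work here; `kill -USR1 <pid>` prints the launched version.

But that only helps if I set it up before starting the process.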
Any clue?
thx folks
pltrdy
Being in the process of switching from Matlab to Python/IPython/Spyder, I ran into the following issue.
In a Matlab script (and I actually did not notice this until I switched to Python) there is no warning if you are using a variable which you did not define in that script.
What I kept doing all the time in Matlab was to have, say, 2 (or more) scripts which I would then execute one after the other. The second script would typically use variables defined in the first, but would not give me any warning.
In Spyder I noticed that the situation is different. In my hypothetical second script, all the variables which are not defined in the second script itself give me a warning (undefined name '...'), which is not nice to see...
Another typical example would be the following: I have a main script, but there is something in it I want to look at more closely. So, without touching the script, I create another file into which I copy-paste a few lines from the script to play with them a little.
In this new file I would be using variables which are already in the console but are technically unknown to the file itself, so they give me a warning.
So I guess I am simply asking if there is a way to suppress this kind of warning.
Or there might be a deeper question of whether there's a more Pythonic way of working, but I'm really not sure about that...
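For example, I suppose the "Python way" would be something like turning the first script into a module and importing from it explicitly (the file and function names below are just made up):

# step_one.py -- the first "script", rewritten as a module
def load_data():
    data = [1, 2, 3]        # stand-in for whatever the first script computes
    threshold = 2
    return data, threshold

# step_two.py -- the second "script" imports what it needs explicitly,
# so every name it uses is defined and the linter warnings disappear
from step_one import load_data

data, threshold = load_data()
print([x for x in data if x > threshold])

But I'm not sure whether that is what people actually do.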
I'm working on a python project that spans across multiple files and directories. Here's my workflow:
Run main python script
Main script calls some functions in other files
Functions in other files/directories execute
In the middle of execution, there is a bug in one of the functions, but I find the bug only after the main script finishes. Sometimes, there may not be a bug, but rather some parameter that needs tweaking.
I go back and fix the bug/make the necessary tweaks and re-run the main program and this time it executes fine.
Obviously, this workflow is terribly inefficient, as a considerable amount of code (everything that runs before the buggy function) gets re-executed. What would be ideal is to run the program in ipython and, after discovering the issue and making the necessary changes, restart from the point where the buggy function's execution starts, not from the beginning. I'm not sure how to achieve this, and any help would be much appreciated.
I know how to rerun lines from ipython history (%rerun) and how to ensure autoreload of changed files in ipython, but in this case, I can't really type out the lines of code into ipython. Writing unit tests may not always be feasible, so I need an alternate solution. My use case is something similar to setting a "breakpoint" and then re-executing code past the breakpoint multiple times so as to avoid re-executing the code prior to the breakpoint more than once, while ensuring that all the necessary variables (until that stage) are correctly populated. One final condition is that I may not be able to use an IDE and vim is the only editor available across all the environments I work with.
You could start writing test cases for every function and try to debug each function separately instead of the whole program/script.
There is a Python unittest module: https://docs.python.org/3.4/library/unittest.html and there are a lot of tutorials, for example: http://docs.python-guide.org/en/latest/writing/tests/
Writing tests may seem annoying, but thinking about test cases gives a deeper understanding of "How should the function behave if...".
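For instance, a minimal sketch (the module and function under test are hypothetical):

import unittest
from mymodule import normalize   # hypothetical function from one of your files

class TestNormalize(unittest.TestCase):
    def test_scales_to_unit_range(self):
        self.assertEqual(normalize([0, 5, 10]), [0.0, 0.5, 1.0])

if __name__ == "__main__":
    unittest.main()

Then you can re-run just that test while tweaking the function, instead of the whole program.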
You could use the code module to include "breakpoints" in your code:
import code
# ... later in your program add
code.interact(local=locals()) # enter python interpreter at this point (ctrl+D to continue execution)
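For example, a minimal sketch of how this might look in a longer script (the function name is just a placeholder for your own expensive step):

import code

def slow_preprocessing():
    # stand-in for the part you don't want to re-run
    return list(range(10))

data = slow_preprocessing()

# Drop into an interpreter with the current variables; experiment with the
# buggy step interactively, then press Ctrl+D to let the script continue.
code.interact(local=locals())

print(sum(data))   # the rest of the script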
I'm using python scripts to execute simple but long measurements. I was wondering if (and how) it's possible to edit a running script.
An example:
Let's assume I made an error in the last lines of a running script. These lines have not yet been executed. Now I'd like to fix the error without restarting the script. What should I do?
Edit:
One idea I had was to load each line of the script into a list, then pop the first one, feed it to an interpreter instance, wait for it to complete, and pop the next one. This way I could modify the remaining lines while the earlier ones run.
I guess I can't be the first one to think about this. Someone must have implemented something like it before, and I don't want to reinvent the wheel. If one of you knows about such a project, please let me know.
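To make it concrete, here is a rough sketch of what I mean (the script name is made up; it executes one top-level statement at a time and re-reads the file in between, so edits to not-yet-executed statements take effect, assuming the earlier statements are left untouched; needs Python 3.8+ for ast.Module with type_ignores):

import ast

def run_editable(path):
    namespace = {"__name__": "__main__"}
    index = 0
    while True:
        # Re-read the file before every statement so later edits are picked up.
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        if index >= len(tree.body):
            break
        stmt = ast.Module(body=[tree.body[index]], type_ignores=[])
        exec(compile(stmt, filename=path, mode="exec"), namespace)
        index += 1

run_editable("measurement_script.py")   # hypothetical script name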
I am afraid there's no easy way to arbitrarily modify a running Python script.
One approach is to test the script on a small amount of data first. This way you'll reduce the likelihood of discovering bugs when running on the actual, large, dataset.
Another possibility is to make the script periodically save its state to disk, so that it can be restarted from where it left off, rather than from the beginning.
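A minimal sketch of the save-state idea (the checkpoint file name and the measure() step are made up):

import os
import pickle

CHECKPOINT = "measurement_state.pkl"

def measure(i):
    return i * i                 # stand-in for the actual long measurement

def run_measurements(steps):
    done, results = 0, []
    if os.path.exists(CHECKPOINT):
        # Resume from the last completed step instead of starting over.
        with open(CHECKPOINT, "rb") as f:
            done, results = pickle.load(f)
    for i in range(done, steps):
        results.append(measure(i))
        with open(CHECKPOINT, "wb") as f:
            pickle.dump((i + 1, results), f)
    return results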
Let's say you have a large file containing comma-delimited LETTER,NUMBER tokens. You want to write a program that reads from standard input and prints out NUMBER+1 for each line. A very trivial program, I understand. However, here is the constraint -- you can only read from this standard-input pipe one time, AND you have to start out with an empty program file.
So for example:
cat FILE.csv | python empty_program.py
This should pop up an interactive session which allows you to write whatever code you want. Since empty_program.py has not called stdin.readline(), the stdin buffer is still intact.
Is something like this possible?
One example of something that can sort of do this is the Excel VBA debugger/IDE. It allows you to pause execution, add new lines to the program's source code, and continue execution.
cat FILE.csv | python empty_program.py
Well, Python will try to read "empty_program.py", find nothing in it (assuming the file exists), and then exit. If the file doesn't exist, you get an error. I tested it [you should have been able to do that as well; it doesn't take much effort - probably a lot less than it took to go to SO and write the question].
So, my next thought was to use an interactive Python process, but since you are feeding things through stdin, it won't work - I didn't have a good CSV file, so I did "cat somefile.c|python", and that falls over at "int main()" with "invalid syntax". I'm surprised it got as far as that, but I guess that's because the #include lines are seen as comments.
Most interactive programming languages read from stdin, so you can't really do what you are describing with any of them.
I'm far from sure why you'd want to, either. If your first program can produce the relevant program code, why not just put it in a file and let Python read that file, rather than jump through hoops? Note that an IDE is not the same as a command-line program. I'm pretty sure that if you work hard enough at it, you can write a C program that accesses the Eclipse IDE with Python plugins. But that's really doing things the hard way. Why anyone would WANT to spend that much effort to achieve so little, I don't see.
Sorry, but I don't really see the point of what you are trying to do - I'm sure you have some good idea in there, but I'm sure the implementation details need to be worked on.
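For what it's worth, the straightforward version of that suggestion is to keep the NUMBER+1 logic in its own file and point the pipe at it (step.py is a made-up name):

# step.py
import sys

for line in sys.stdin:
    letter, number = line.strip().split(",")
    print(int(number) + 1)

and then:

cat FILE.csv | python step.py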
Python is pretty clean, and I can code neat apps quickly.
But I notice I have some minor error someplace, and I don't find the error at compile time but at run time. Then I need to change the script and run it again. Is there a way to have it break, let me modify the code, and resume?
Also, I dislike how python has no enums. If I were to write code that needs a lot of enums and types, should I be doing it in C++? It feels like I can do it quicker in C++.
"I don't find the error at compile but at run time"
Correct. True for all non-compiled interpreted languages.
"I need to change and run the script again"
Also correct. True for all non-compiled interpreted languages.
"Is there a way to have it break and let me modify and run?"
What?
If it's a run-time error, the script breaks, you fix it and run again.
If it's not a proper error, but a logic problem of some kind, then the program finishes, but doesn't work correctly. No language can anticipate what you hoped for and break for you.
Or perhaps you mean something else.
"...code that needs a lot of enums"
You'll need to provide examples of code that needs a lot of enums. I've been writing Python for years and have had no use for enums. Indeed, I've been writing C++ with no use for enums either.
Perhaps post such an example in another question, along the lines of "What's a Pythonic replacement for all these enums?"
It's usually polymorphic class definitions, but without an example, it's hard to be sure.
With interpreted languages you have a lot of freedom. Freedom isn't free here either. While the interpreter won't torture you into dotting every i and crossing every T before it deems your code worthy of a run, it also won't try to statically analyze your code for all those problems. So you have a few choices.
1) {Pyflakes, pychecker, pylint} will do static analysis on your code. That settles the syntax issue mostly.
2) Test-driven development with nosetests or the like will help you. If you make a code change that breaks your existing code, the tests will fail and you will know about it. This is actually better than static analysis and can be as fast. If you test-first, then you will have all your code checked at test runtime instead of program runtime.
Note that with 1 & 2 in place you are a bit better off than if you had just a static-typing compiler on your side. Even so, it will not create a proof of correctness.
It is possible that your tests may miss some plumbing you need for the app to actually run. If that happens, you fix it by writing more tests usually. But you still need to fire up the app and bang on it to see what tests you should have written and didn't.
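As a small illustration of option 1, static analysis catches the kind of typo that would otherwise only surface at run time (the file name and function are made up):

# report.py
def total(values):
    return sum(vaules)    # typo: 'vaules' is never defined

Running, for example, pyflakes report.py (or pylint report.py) flags the undefined name before the script is ever executed.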
You might want to look into something like nosey, which runs your unit tests periodically when you've saved changes to a file. You could also set up a save-event trigger to run your unit tests in the background whenever you save a file (possible e.g. with Komodo Edit).
That said, what I do is bind the F7 key to run unit tests in the current directory and subdirectories, and the F6 key to run pylint on the current file. Frequent use of these allows me to spot errors pretty quickly.
Python is an interpreted language; there is no compile stage, at least not one that is visible to the user. If you get an error, go back, modify the script, and try again. If your script has a long execution time and you don't want to stop and restart, you can try a debugger like pdb, with which you can fix some of your errors at runtime.
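For example, a minimal sketch of dropping into pdb at a chosen point (process() is just a placeholder):

import pdb

def process(batch):
    pdb.set_trace()    # execution pauses here; inspect or reassign variables, then type 'c' to continue
    return [x * 2 for x in batch]

print(process([1, 2, 3]))

You can also run the whole script under the debugger with python -m pdb script.py.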
There are a large number of ways in which you can implement enums; a quick Google search for "python enums" gives everything you're likely to need. However, you should look into whether or not you really need them, and whether there's a better, more 'Pythonic' way of doing the same thing.
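One common lightweight pattern looks like this (newer Python versions also ship an enum module in the standard library):

class Color:
    RED, GREEN, BLUE = range(3)    # integer constants namespaced in a class

def traffic_action(color):
    return "stop" if color == Color.RED else "go"

print(traffic_action(Color.RED))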