I'm more comfortable using .ipynb files than .py files. However, most of the code I see that makes use of multiprocessing is written in .py files.
Basically, I'd prefer to do everything within VS Code without having to deal with the terminal.
Is there any downside to using Jupyter notebooks for multiprocessing?
You can use a Jupyter notebook for multiprocessing with the same code that's written in a Python script, i.e. a .py file.
Performance-wise, I don't find any issue here.
People generally use .py files for multiprocessing work, mainly because it runs as a script that may take a long time, like testing some payload on a server, so for distributing the load they use a Python script. A notebook, by contrast, is more of a temporary thing, where you want to see the result immediately.
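The code itself is the same in both places. Here's a minimal sketch of the usual pattern (the `square` worker is just an illustration); the one real caveat with notebooks is that on Windows and macOS, where workers are spawned rather than forked, a function defined in a notebook cell may fail to pickle and often has to live in an importable .py module:

```python
import multiprocessing as mp

def square(x):
    # Worker function. On spawn-based platforms (Windows, macOS) this
    # usually needs to live in an importable .py file rather than a
    # notebook cell, because child processes re-import it by name.
    return x * x

if __name__ == "__main__":  # guard required when run as a script on spawn platforms
    with mp.Pool(processes=4) as pool:
        print(pool.map(square, range(10)))  # [0, 1, 4, 9, 16, ...]
```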
Also, you can run .py files simply with python name_of_script.py; there's no extra terminal ceremony beyond that.
To exit, press Ctrl-C and your script will stop.
That's all.
Related
What happens when you turn a Python file into an executable? Does it get encrypted? What happens to the imported files? Can you revert it back into a normal .py file?
So I have this Python file, let's call it main.py. I also have another file, let's call it scrambler.py. scrambler.py is an encryptor/decryptor file, so I imported it into main.py. Then I will turn main.py into an executable file. Now, we don't want people to see the encryptor/decryptor file. So, can people who don't have the source code get the source code of the imported file? From searching, I saw that some people can get the source code of the main program using pyinstxtractor.py. I haven't tried it yet, but can you also get the source code of the imported file? (Also, do comments get included? They're useless to the program, after all.) So that's why, the ultimate question: what happens when you turn a Python file into an executable?
The tool I use to turn a Python file into an .exe is PyInstaller; is the answer different for every converter?
I hope this is a valid question. Thanks in advance.
PyInstaller essentially bundles a Python interpreter along with your Python code in a folder. This folder can be put into an installer (using something like Inno Setup) to be distributed, so end users can use it like a normal .exe program. It doesn't compile to another language or anything. So no, your code is not private, and while you can make it difficult to find certain bits, it is not impossible.
As described here, it is possible to convert to C and then to machine code, but PyInstaller won't do that for you by default. Also note that the Python bytecode files, while not legible, are not completely uncrackable.
See: https://pyinstaller.readthedocs.io/en/stable/operating-mode.html
See here for more about the encryption option: PyInstaller Encryption --key
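If you want to convince yourself that bytecode is obfuscation rather than encryption, a small experiment with the standard dis module makes the point (the SECRET_KEY/decrypt source string below is just a stand-in for your scrambler.py):

```python
import dis

# Compiling Python source yields a code object -- the same thing a .pyc
# file stores (plus a small header). Nothing here is encrypted.
source = "SECRET_KEY = 'hunter2'\ndef decrypt(data):\n    return data[::-1]\n"
code = compile(source, "scrambler.py", "exec")
dis.dis(code)  # constants like 'hunter2' and names like 'decrypt' are plainly visible
```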
After three intensive hours, I was testing my script in the terminal. However, my editor messed up and overwrote my script while it was still executing. I didn't terminate the running script, so I was wondering: does the Python interpreter keep the currently running file in a temporary folder or somewhere else, so that I can recover my script?
Python tries to cache your .pyc files. How that's done has changed over time (see PEP 3147 -- PYC Repository Directories). Top-level scripts are not cached, but imported modules are. So you may not have one.
.pyc files are compiled bytecode, so it's not just a question of renaming one to .py, and you can't figure them out just by looking at them. There are decompilers out there, like the one referenced here:
Decompiling .pyc files.
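If you want to check whether a cached file even exists for a given module, the standard library can tell you where the interpreter would put it (mymodule.py below is a hypothetical imported module; remember the top-level script itself is never cached):

```python
import importlib.util

# Map a source path to its PEP 3147 __pycache__ location. If the module
# was ever imported, the compiled bytecode should be at this path.
print(importlib.util.cache_from_source("mymodule.py"))
# e.g. __pycache__/mymodule.cpython-311.pyc
```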
Personally, I create Mercurial repos for my scripts and check them in frequently... because I've made a similar mistake a time or two. Git, SVN, etc. are other popular tools for maintaining repos.
Depending on your operating system and editor, you may have a copy in Trash or even saved by the editor. You may also be able to "roll back" the file system.
If you're running Linux, you may still be able to find a handle to the open file in the /proc/ directory if the process is still running. That handle keeps the file from being deleted. For details, see: https://superuser.com/questions/283102/how-to-recover-deleted-file-if-it-is-still-opened-by-some-process
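As a rough sketch of that approach in Python (Linux only, and assuming the process that had the file open is still alive; myscript.py is the hypothetical lost file):

```python
import os

TARGET = "myscript.py"  # hypothetical name of the lost file

# Scan every process's open file descriptors for one still pointing at
# the file; deleted-but-open files show up with a ' (deleted)' suffix.
for pid in filter(str.isdigit, os.listdir("/proc")):
    fd_dir = f"/proc/{pid}/fd"
    try:
        for fd in os.listdir(fd_dir):
            link = os.readlink(os.path.join(fd_dir, fd))
            if TARGET in link:
                print(f"pid {pid} fd {fd} -> {link}")
                # the contents can then be copied back out of /proc/<pid>/fd/<fd>
    except (PermissionError, FileNotFoundError):
        continue  # processes we can't inspect, or that exited mid-scan
```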
I am working on a project in PyCharm that involves extensive computations, with long runtimes.
I would like the following behavior: I come up with a version of my code and run it; then I edit the code some more; however, the run I started earlier should keep using only the old version of the code (i.e. the snapshot taken at the point of running).
Is this possible in PyCharm?
I run my project by selecting the Run 'projectname' option from the Run menu.
I understand the run works by pre-compiling the .py files to .pyc files stored in the __pycache__ folder. However, I don't know the following.
Will saving the file in PyCharm cause the .pyc files to be replaced by new versions? This is something I want to avoid since I want one run to only use one snapshot of the source tree, not multiple versions at different points of execution.
What if some python class is only needed, say, 20 minutes after the run has started. Will the .pyc file be created at the beginning of the run, or on-demand (where the corresponding .py file might already have changed)?
I use PyCharm in my classes. My experience is that all the required code, including the imported modules, is compiled when the run starts. If you change anything in that suite, you need to restart the run from scratch for the change to take effect.
I'm not a professional programmer, so my experience is with small apps. I'd love to hear from an expert.
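One nuance that bears on the second question: an import statement is executed when the interpreter reaches it, so a module that is first imported late in a run is compiled from whatever source exists at that moment, not from a snapshot taken at startup. A sketch (late_module and do_work are hypothetical):

```python
import time

def main():
    time.sleep(20 * 60)      # stand-in for 20 minutes of computation
    import late_module       # compiled/loaded *now*, not at startup --
    late_module.do_work()    # so edits made in the meantime are picked up

if __name__ == "__main__":
    main()
```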
In this guide to not being a total mess when doing research, the authors talk about using a .py file to execute a directory in order -- that is, you delete all the output files (.pdf, .txt, etc.), run just the .py, and everything gets recreated from the raw data, Stata files, maybe other .py files, and so on.
What is the best way to do this in Python? I know one option is the subprocess module, but is that the only option? Basically, how can I best mimic a .bat file using Python on a Mac?
You can certainly use Python for shell-script type stuff - with the bonus that it will be relatively portable.
Another option you could consider is Bash (the Bourne Again SHell). That will do everything you can do with .BAT files (and much more). Search for "Bash shell scripting".
Whether Python or Bash is the right tool for the job depends on whether you're mostly just writing glue (to call a bunch of other programs) or actually writing complex logic yourself. If it's the former, I'd go with Bash.
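If you do stay in Python, a sketch of the delete-and-rebuild runner might look like this (the directory layout and script names are made up; subprocess and pathlib are standard library):

```python
import subprocess
import sys
from pathlib import Path

OUTPUT = Path("output")  # hypothetical folder holding generated files

# Delete generated outputs so everything is rebuilt from the raw data.
for pattern in ("*.pdf", "*.txt"):
    for f in OUTPUT.glob(pattern):
        f.unlink()

# Run the pipeline steps in order, stopping at the first failure.
for script in ("clean_data.py", "run_models.py", "make_figures.py"):
    subprocess.run([sys.executable, script], check=True)
```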
Say I run a Python (2.7, though I'm not sure that makes a difference here) script. Instead of terminating the script, I tab out, or somehow switch back to my editing environment. I can then modify the script and save it, but this changes nothing in the still-running script.
Does Python load all source files into memory completely at launch? I am under the impression that this is how the Python interpreter works, but this contradicts my other views of the Python interpreter: I have heard that .pyc files serve as byte-code for Python's virtual machine, like .class files in Java. At the same time however, some (very few in my understanding) implementations of Python also use just-in-time compilation techniques.
So am I correct in thinking that if I make a change to a .py file while my script is running, I don't see that change until I re-run the script, because at launch all necessary .py files are compiled into .pyc files, and simply modifying the .py files does not remake the .pyc files?
If that is correct, then why don't huge programs, like the one I'm working on with ~6,550 kilobytes of source code distributed over 20+ .py files, take forever to compile at startup? How is the program itself so fast?
Additional Info:
I am not using third-party modules. All of the files have been written locally. The main source file is relatively small (10 kB), but the source file I primarily work on is 65 kB. It was also written locally and changes every time before launch.
Python loads the main script into memory, compiles it into bytecode and runs that. If you modify the source file in the meantime, you're not affecting the bytecode.
If you're running the script as the main script (i.e. by calling it like python myfile.py), then the bytecode will be discarded when the script exits.
If you're importing the script, however, then the bytecode will be written to disk as a .pyc file which won't be recompiled when imported again, unless you modify the corresponding .py file.
Your big 6.5 MB program consists of many modules which are imported by the (probably small) main script, so only that will have to be compiled at each run. All the other files will have their .pyc file ready to run.
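If the first-run compile cost ever matters, you can also warm the cache ahead of time with the standard compileall module (src is a hypothetical package directory):

```python
import compileall

# Byte-compile every .py under the tree so even a cold start skips
# the source-to-bytecode step.
compileall.compile_dir("src", quiet=1)
```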
First of all, you are indeed correct in your understanding that changes to a Python source file aren't seen by the interpreter until the next run. There are some debugging systems, usually built for proprietary purposes, that allow you to reload modules, but this brings attendant complexities, such as existing objects retaining references to code from the old module. It can get really ugly.
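For what it's worth, the standard library does expose reloading via importlib.reload, with exactly the caveat above: objects created before the reload keep references to the old code. A minimal sketch, assuming a hypothetical module mymod with a class Thing:

```python
import importlib
import mymod  # hypothetical module defining class Thing

old_obj = mymod.Thing()    # instance built from the original code

importlib.reload(mymod)    # recompiles mymod.py and rebinds its names

new_obj = mymod.Thing()    # built from the new code
print(type(old_obj) is type(new_obj))  # False: the old instance still
                                       # references the pre-reload class
```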
The reason huge programs start up so quickly is that the interpreter tries to create a .pyc file for every .py file it imports if either no corresponding .pyc file exists or the .py is newer. The .pyc is indeed the program compiled into bytecode, so it's relatively quick to load.
As far as JIT compilation goes, you may be thinking of the PyPy implementation, which is written in a restricted subset of Python (RPython) and has backends in several different languages. It's increasingly being used in Python 2 shops where execution speed is important, but it's a long way from the CPython that we all know and love.