Can I read code from a previously run python script

I ran a python file using the command python file.py and it executed successfully. Right after, I managed to delete the file. Can I recover the code that was run in the previous command? I still have the terminal open and have not typed anything else into it. Running Ubuntu.

If you had imported the file instead of running it directly, a .pyc file would have been created containing the compiled bytecode, which you could easily transform back into regular Python code (minus the comments). However, you ran the script directly, so there is no .pyc file.
If you deleted the file in a GUI file browser, it might have been moved to a "trash bin" from which you can recover it.
Assuming you deleted the file using the "rm" command in the terminal, however, the data might still be on the disk provided it hasn't been overwritten ("deleting" a file normally just marks its blocks as free to be overwritten).
If you're lucky, you might be able to recover it. The process isn't exactly simple, though, and there's no guarantee that the file hasn't already been overwritten since the disk is pretty much in constant use when you're using the system. More info on that here:
https://help.ubuntu.com/community/DataRecovery
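One low-tech variant of that approach (not from the linked page; purely illustrative, and the device name and search string below are placeholders): since the deleted bytes may still sit in unallocated blocks, you can, as root, grep the raw block device for a phrase you remember from the code, writing the output to a different filesystem so you don't overwrite the very blocks you are trying to recover.
# Search the raw device for a phrase you know was in the script,
# keeping 10 lines of context around each hit (/dev/sda1 and the output path are placeholders)
sudo grep -a -C 10 'def my_function' /dev/sda1 > /mnt/other-disk/recovered_fragments.txt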
There's also a handy utility called lsof which you can use to recover a 'deleted' file if there's still an open file handle for it:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c00833030
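A minimal sketch of that lsof-based recovery, assuming some process still holds the deleted file open (the PID 1234 and file descriptor 3 below are placeholders you would read off the lsof output):
# List open handles to files that have been deleted
lsof | grep deleted
# If, say, PID 1234 still holds the file on descriptor 3, its contents are
# reachable through /proc and can be copied back out
cp /proc/1234/fd/3 recovered_file.py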
Also, in the future, I recommend using "rm -i" instead of plain rm with no options, as that will at least prompt you to confirm before deleting something. You can also make an alias for this in your shell so that regular rm just points to 'rm -i'.
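For instance, one line along these lines in ~/.bashrc (or your shell's equivalent) does that:
# Make plain rm ask for confirmation before every deletion
alias rm='rm -i'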

Related

Issue Opening Py Files in Spyder Using Custom Automator Script

So I installed Miniconda using Homebrew and then installed Spyder using Conda. Then I wanted to make the process more "Mac" friendly by creating an application which opens Spyder so I used the solution by topoman which is below the accepted answer in this link:
Ways to invoke python and Spyder on OSX
Everything works more or less fine except I have run into two issues (the second one isn't really an issue and more of an aesthetic-related thing):
I downloaded py files from this GitHub (https://github.com/realpython/python-scripts) just to test that it will open py files. It works for them and I am also able to set Spyder as the default application by using the "SpyderOpener" solution provided by topoman in the above link.
The issue is that when I create a new file in Spyder and save it, then try to click-open it, it won't open. I'm not sure what the difference is and why the "SpyderOpener" does not work on this py file originating from Spyder, but works fine for the ones I downloaded.
I was curious if it is possible to change the display icon for py files that have "SpyderOpener" as the default. I did change the icon for the SpyderOpener application but it doesn't work. The icons for the files are just a blank sheet of printer paper.
UPDATE:
I believe I found the issue. It depends on where the file is. When I put it on my desktop, no issue. When I put it in certain other places, no issue. Based on experimenting, it seems that the opener does not work when the file is within a folder that has a space in the name (e.g. "folder name"). The minute I change it to folder_name or foldername it works fine.
Therefore, is someone able to explain why the opener script breaks down when there is a space at any place in the file path? Can the script be edited to handle this?
Does it boil down to the following stack threads (i.e. I need to apply double quotes somewhere in the script):
Shell Script and spaces in path
https://askubuntu.com/questions/1081512/how-to-pass-a-pathname-with-a-space-in-it-to-cd-inside-of-a-script
This thread also suggests the script shouldn't use the $@ argument unquoted because it will break as soon as you have "spaces or wildcards":
https://unix.stackexchange.com/questions/41571/what-is-the-difference-between-and
Therefore, looking at the script and the previous steps, you have:
#!/bin/bash
/usr/local/bin/spyder $@
Then the opener script has:
for f in "$@"
do
open /Applications/spyder.app --args $f
done
if [ -z "$f" ]; then
open /Applications/spyder.app --args ~
fi
As for the rest of the script, I assume $f isn’t going to cause problems? Anyways, it seems the issue comes from the initial setup?
Based on this, it should be handled by the above line:
Handle whitespaces in arguments to a bash script
So is it breaking down because of the args?
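If it helps, a minimal sketch of the quoting fix for the two scripts quoted above (same commands, just with the expansions double-quoted so a path such as "folder name/script.py" stays a single argument):
# In the wrapper, quote the whole argument list
/usr/local/bin/spyder "$@"
# And in the SpyderOpener loop, quote each path as well
for f in "$@"
do
open /Applications/spyder.app --args "$f"
done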

Safe to change python file during run time?

I have a Python 3 file that runs an SVN deployment. Basically I run "python3 deploy.py update" and the following things happen:
Close site
Back up ignored-but-secure files
SVN revert -R .
SVN update
Trigger tasks
Open site
That all sounds simple and logical, except for one thought going around my head: "SVN is writing files, including the Python files and submodule helpers that trigger the SVN subprocess."
I understand that Python files are read and processed once, and that only through some tricky reload will Python re-read them. And I understand that if SVN changes the Python source, the update would only take effect on the next run.
But the question is: should I keep this structure, or move the file to the root and run SVN from there, to be on the safe side?
The same applies to Git or any Python changes.
From what I know, it is safe to change a Python (i.e. .py) file while Python is running, once the .pyc file has been created by Python (i.e. your situation). You can even remove the .py file and run the .pyc just fine.
On the other hand, SVN revert -R . is dangerous here, as it would attempt to remove the .pyc files, so it would either screw up your Python run or fail by itself.
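A minimal sketch of the "move it out of the working copy" idea from the question (purely illustrative; the /tmp path is a placeholder): run the deployer from a copy that SVN cannot touch, so revert/update never rewrite the running script.
# Snapshot the deployer outside the SVN working copy, then run the copy
cp deploy.py /tmp/deploy_snapshot.py
python3 /tmp/deploy_snapshot.py update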

Running Python program with dot slash: No such file or directory [duplicate]

I have several python scripts which work just fine but one script has (as of this morning) started giving me this error if I try to run it from the bash:
: No such file or directory
I am able to run the 'broken' script by doing python script_name.py. After looking around a bit, the general idea I picked up was that maybe the line ending of my hashbang got changed (silently), so I looked at the line endings of a working script and the broken script via the :set list option in vi, as indicated in this question -> View line-endings in a text file
Both files appear to end using the same character (a $) so I am kind of at a loss on how to proceed from here. Specifically, how to actually 'see' the line ending in case the set list was not the right method.
PS: The script is executable and the shebang is in there. As I stated, it's just this one script that was working fine before the weekend but started giving me this error as of this morning.
-- edit: --
Running the script through dos2unix does get it working again but I would like to know of any way to visualize the line ending somehow in VI(M) or why Geany somehow converted the line endings in the first place (as I never work on a dos/windows system anyhow).
From the comments above it looks like you have dos line endings, and so the hashbang line is not properly processed.
Line ending style is not shown with :set list in Vim because that is only used when reading/writing the file; in memory, line endings are always just line endings. The line ending style used for a file is kept in a per-file Vim option, somewhat confusingly called fileformat.
To see/change the line ending style from Vim, you can use the following commands:
:set fileformat
:set ff
It will show dos or unix. You want unix, of course ;-).
To change it quickly you can save the file with:
:w ++ff=unix
Or if you prefer:
:set ff=unix
And then save the file normally.
To see all the gory details just do :help fileformat, :help file-formats and :help fileformats
You can also use the dos2unix command to convert the file format:
dos2unix script_name.py
This helped me to run the Python scripts. This normally happens when the files are opened, changed and saved on Windows. If you open such a file, you will find ^M characters at the end of every line.
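As a quick way to actually 'see' the DOS endings from the shell (reusing the script_name.py name from the question), something like this works on most Linux systems:
# "file" reports CRLF line terminators for DOS-style text files
file script_name.py
# GNU cat -A shows each carriage return as ^M and each line end as $
cat -A script_name.py | head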
Personally, I find it kind of wrong to use a direct path to the Python interpreter. As you don't use the Windows platform, you should have the env program, usually in /usr/bin (/usr/bin/env). Try using the following shebang:
#!/usr/bin/env python
Different distros store the python binary in /bin or /usr/bin (or some weird locations), and this makes your script configuration-independent (as far as possible; there is still the possibility that env is stored elsewhere, but it is less likely that env is not in /usr/bin than that python is mislocated).
I had a similar problem (if not exactly the same) and that worked for me.
Also, I have both Python interpreters (2.7.x and 3.x) installed, so I need to use "python3" as the argument for env. As far as I remember, distros usually link different names to different binaries, so "env python" will run Python 2.7 on my system, "env python3" (also python33, or something like that) will run Python 3, and "env python2" (also python27, etc.) will run Python 2.7.x. Declaring which version of the interpreter should be used seems like a good idea too.
I came across this problem editing my code on Windows, checking it in with git, and checking out and running it on Linux.
My solution was: tell git to Do The Right Thing. I issued this command on the Windows box:
git config --global core.autocrlf true
Modified the files and checked them in; voila, no such problem any more.
As discussed in the Git documentation.

compiling python file with Watchman

What's the best way to capture file/path info from Watchman to pass to 'make' or another app?
Here's what I'm trying to achieve:
when I save a .py file on the dev server, I'd like to retrieve the filename and path, compile the .py to .pyc, then transfer the .pyc file to a staging server.
Should I be using watchman-make, 'heredoc' methods, Ansible, etc.?
Because the docs are not very helpful, are there any examples available?
And what's the use case for pywatchman?
Thanks in advance
Hopefully this will help clarify some things:
Watchman runs as a per-user service to monitor your filesystem. It can:
Provide live subscriptions to file changes as they occur
Trigger a command to be run in the background as file changes occur
Answer queries about how files have changed since a given point in time
pywatchman is a python client implementation that allows you to build applications that consume information from watchman. The watchman-make and watchman-wait tools are implemented using pywatchman.
watchman-make is a tool that helps you invoke make (or a similar program) when files change. It is most appropriate in cases where the program you want to run doesn't need the specific list of files that have just changed. make is in this category; make will analyze the dependencies in your Makefile and then build only the pieces that are changed. You could alternatively execute a python distutils or setuptools setup.py script.
Native watchman triggers are a bit harder to use than watchman-make, as they are spawned in the background by the watchman service and are passed the list of changed files. These are most appropriate for completely unattended processes where you don't need to see the output and need the precise list of changed files.
From what you've described, it sounds like the simplest solution is a script that performs the compilation step and then performs the sync, something along the lines of the following; let's call it build-and-sync.sh
#!/bin/sh
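# Byte-compile every .py under the current tree, then mirror it to the
# staging host (host:/path/ below is the answer's placeholder destination)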
python -m compileall .
rsync -avz . host:/path/
(If you don't really need a .pyc file and just need to sync, then you can simply remove the python line from the above script and just let it run rsync)
You can then use watchman-make to execute this when things change:
watchman-make --make='build-and-sync.sh' -p '**/*.py' -t dummy
Then, after any .py file (or set of .py files) are changed, watchman-make will execute build-and-sync.sh dummy. This should be sufficient unless you have a large enough number of python files that the compilation step takes too long each time you make a change. watchman-make will keep running until you hit CTRL-C or otherwise kill the process; it runs in the foreground in your terminal window unless you use something like nohup, tmux or screen to keep it around for longer.
If that is the case, then you can try using make with a pattern rule to compile only the changed python files, or if that is awkward to express using make then perhaps it is worth using pywatchman to establish a subscription and compile the changed files. This is a more advanced use-case and I'd suggest looking at the code for watchman-wait to see how that might be achieved. It may not be worth the additional effort for this unless you have a large number of files or very tight time constraints for syncing.
I'd recommend trying out the simplest solution first and see if that meets your needs before trying one of the more complex options.
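Incidentally, if you want the watchman-make command above to keep running after you close the terminal, a nohup wrapper (purely illustrative, reusing the exact command from before) is enough:
# Keep watchman-make running detached from the terminal, logging to a file
nohup watchman-make --make='build-and-sync.sh' -p '**/*.py' -t dummy > watchman-make.log 2>&1 &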
Using native triggers
As an alternative, you can use triggers. These run in the background with their output going to the watchman log file. They are a bit harder to work with than using watchman-make.
You need to write a small program, typically a script, to receive the list of changed files from the trigger; the best way to do this is via stdin of the script. You can receive a list of files one-per-line or a JSON object with more structured information. Let's call this script trigger-build-and-sync; it is up to you to implement the contents of the script. Let's assume you just want a list of files on stdin.
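For instance, a minimal sketch of trigger-build-and-sync under those assumptions (one changed path per line on stdin, matching the "NAME_PER_LINE" setting below; the rsync destination is the same placeholder as before):
#!/bin/sh
# Read one changed path per line from stdin and byte-compile just that file
while IFS= read -r f
do
python -m compileall "$f"
done
# Then mirror the tree to the staging host
rsync -avz . host:/path/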
This command will set up the trigger; you invoke it once and it will persist until the watch is removed:
watchman -j <<-EOT
["trigger", "/path/to/root", {
"name": "build-and-sync",
"expression": ["suffix", "py"],
"command": "/path/to/trigger-build-and-sync",
"append_files": false,
"stdin": "NAME_PER_LINE"
}]
EOT
The full docs for this can be found at https://facebook.github.io/watchman/docs/cmd/trigger.html#extended-syntax
