I'm trying to make my code modular because it's too long, but I don't know whether I'm doing it safely. I split my code into different files, so one Python file runs the others; sometimes one file has to call another file that in turn calls yet another, so the calls are chained.
The issue is that some of the files process sensitive information like passwords, so I don't know whether I'm doing this safely. Ideally, after a file is executed, it should close itself, delete all variables from its memory, and free that space, just as it normally would if I executed only that one file. The problem is that I don't know whether this still applies when I call multiple files nested into one another. Obviously only the file that has finished executing should clear itself, not the one that is still active, but I don't know if this is the case.
I have been calling my modules like this
os.system('python3 ' + filename)
And each file subsequently contains the same kind of os.system call to run another file, forming a nested or chained call system.
For example if I call the first file from shell:
python3 file1.py
and then file1 calls:
os.system('python3 file2.py')
and then file2 calls:
os.system('python3 file3.py')
I would want file3 cleaned from memory and closed entirely after it runs, whereas file2 and file1 might still be active. I don't want file3 to still be in memory after it has executed. So if file3 works with passwords, it should obviously clean them from memory after it runs.
How to do this?
I have read about multiple options:
from subprocess import call
call(["python3", "file2.py"])
import subprocess
subprocess.call("file2.py", shell=True)
execfile('file2.py')
import subprocess
subprocess.Popen("file2.py", shell=True)
Which one is safer?
Python relies heavily on the notion of importing. You should not try to reinvent the wheel here. Just import your scripts from the main script and use functions to trigger them. If you want to be sure variables are discarded, include a del statement at the end of the functions, or as soon as a variable is no longer in use.
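For example, here is a minimal sketch of that structure; step2 and its process() function are hypothetical names, not taken from your code:
# main.py
import step2

def run():
    secret = 'hunter2'     # placeholder only; never hardcode real passwords
    step2.process(secret)  # a plain function call instead of a new process
    del secret             # drop the reference once it is no longer needed

if __name__ == '__main__':
    run()

# step2.py -- the hypothetical imported module
def process(secret):
    ...  # use the value; the function's locals are discarded when it returns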
On the other hand, your password problem is flawed from the start. If a .py file contains a password in plain text, it is not, and will never be, in any scenario, secure. You should use a proper secret store; see this topic: I need to securely store a username and password in Python, what are my options?
Related
I have written multiple Python scripts that are to be run sequentially to achieve a goal, i.e.:
my-directory/
a1.py,
xyz.py,
abc.py,
....,
an.py
All these scripts are in the same directory, and now I want to write a single script that can run all these .py scripts in sequence. To achieve this, I want to write a single Python (.py) script, but I don't know how to write it. I have Windows 10, so the bash script method isn't applicable.
What's the best possible way to write an efficient migration script on Windows?
Using a master Python script is a possibility (and it's cross-platform, as opposed to batch or shell scripts): scan the directory, open each file, and execute its contents.
import glob, os

os.chdir(directory)  # locate ourselves in the directory (set `directory` beforehand)
for script in sorted(glob.glob("*.py")):
    with open(script) as f:
        contents = f.read()
    exec(contents)
(There was an execfile function in Python 2 but it's gone; in Python 3 we have to read the file contents and pass them to exec, which also works in Python 2.)
In that example, order is determined by the script names. To enforce a different order, use an explicit list of Python scripts instead:
for script in ["a.py","z.py"]:
That method doesn't create subprocesses. It just runs the scripts as if they were concatenated together (which can be an issue if some files aren't closed and are used by following scripts). Also, if an exception occurs, it stops the whole list of scripts, which is probably not so bad, since it avoids having the following scripts work on bad data.
You can wrap the code of the second script in a function, like this:
script2.py
def main():
    print('Hello World!')
And import script2's function like this:
script1.py
from script2 import *

main()
I hope this is helpful (tell me if I didn't answer your question; I'm Italian..)
For the life of me I can't figure this one out.
I have 2 applications built in Python, so 2 projects in different folders. Is there a command to say, in the first application, something like run file2 from documents/project2/test2.py?
I tried something like os.system('') and exec(), but that only seems to work if it's in the same folder. How can I give a command a path like documents/project2, and then for example:
exec(documents/project2 python test2.py)?
short version:
Is there a command that runs python test2.py while test2 is in a completely different folder/project?
Thanks for all feedback!
There are a number of approaches you could take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript  # relative import; only works from inside a package
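If the other project cannot be laid out as a package relative to yours, a common variant (a sketch only; the path below is hypothetical) is to put its folder on sys.path before importing:
import sys

sys.path.append('/path/to/documents/project2')  # hypothetical location

import test2  # runs test2.py's top-level code once, on first import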
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute them. This is considered more of a security risk but, given the interpreted nature of Python, in normal use it is not that much worse than an import.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values to code inside the script, or out of it, you probably want to use files in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script, you can call any other executable; that includes Python itself running another script.
Here's how:
from subprocess import run

run(['python', 'path/to/myscript.py'])  # pass the command as a list of arguments
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard in and prints it with some extra text to standard out, like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example (use a raw string so the backslashes aren't treated as escape sequences):
exec(open(r"C:\Projects\Python\Test.py").read())
I'm writing a program which, inter alia, works with a temporary file created using the tempfile library.
The temporary file is created and filled inside a function:
def func():
    mod_script = tempfile.NamedTemporaryFile(dir='special')
    dest = open(mod_script.name, 'w')  # open the temp file by name for writing
    # filling dest
    return mod_script
(I use open() and not with open() because I execute the temporary file after calling func())
After some operations with mod_script outside func(), I call mod_script.close(). And all works fine.
But I have one problem: if my program fails (or if I interrupt it), the temporary file isn't removed.
How do I fix it ?
I really don't want to write try...except...finally clauses because I'd have to write them so many times (there are many points where my program can fail).
First, use a with statement, and pass delete=False to the constructor.
Then you need to put the necessary error handling in your program. Catch exceptions (see try..finally) and clean up during program exit whether it is successful or crashes.
Alternatively, keep the file open while executing it, to prevent the automatic delete-on-close from removing it before you have executed it. This may have issues on Windows, which tends to have conflicts when using files that are already open.
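Putting those suggestions together, a minimal sketch (the written script body is a placeholder, and atexit will not fire on a hard kill):
import atexit
import os
import subprocess
import tempfile

# delete=False keeps the file on disk after the with-block closes it,
# so it can still be executed by name afterwards.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as mod_script:
    mod_script.write("print('hello from the temp script')\n")  # placeholder body
    path = mod_script.name

# remove the file on any exit, successful or not
atexit.register(lambda: os.path.exists(path) and os.remove(path))

subprocess.run(['python3', path], check=True)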
I want to create an empty file using a Python script in a Unix environment. I've seen different ways of achieving this mentioned. What are the benefits/pitfalls of one over the others?
os.system('touch abc')
open('abc','a').close()
open('abc','a')
subprocess.call(['touch','abc'])
Well, for a start, the ones that rely on touch are not portable. They won't work under standard Windows, for example, without the installation of CygWin, GNUWin32, or some other package providing a touch utility.
They also involve the creation of a separate process for doing the work, something that's totally unnecessary in this case.
Of the four, I would probably use open('abc','a').close() if the intent is to just create the file if it doesn't exist. In my opinion, that makes the intent clear.
But, if you're trying to create an empty file, I'd probably be using the w write mode rather than the a append mode.
In addition, you probably also want to catch the exception if, for example, you cannot actually create the file.
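A minimal sketch of that suggestion, using 'w' to create (and truncate) the file and catching the failure case:
try:
    open('abc', 'w').close()  # create the file, truncating it if it already exists
except OSError as e:
    print('could not create file:', e)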
TLDR: use
open('abc','a').close()
(or 'w' instead of 'a' if the intent is to truncate the file if it already exists).
Invoking a separate process to do something Python can do itself is wasteful, and non-portable to platforms where the external command is not available. (Additionally, os.system uses two processes -- one more for a shell to parse the command line -- and is being deprecated in favor of subprocess.)
Not closing an open filehandle when you're done with it is bad practice, and could cause resource depletion in a larger program (you run out of filehandles if you open more and more files and never close them).
To create an empty file on Unix in Python:
import os

try:
    os.close(os.open('abc', os.O_WRONLY | os.O_CREAT | os.O_EXCL |
                     getattr(os, "O_CLOEXEC", 0) |
                     os.O_NONBLOCK | os.O_NOCTTY))
except OSError:
    pass  # decide what to consider an error in your case and reraise
          # 1. is it an error if the 'abc' entry already exists?
          # 2. is it an error if 'abc' is a directory or a symlink to a directory?
          # 3. is it an error if 'abc' is a named pipe?
          # 4. it is probably an error if the parent directory is not writable
          #    or the filesystem is read-only (can't create a file)
Or a more portable variant:
try:
    open('abc', 'ab', 0).close()
except OSError:
    pass  # see the comments above
Without the explicit .close() call, non-reference-counting Python implementations such as PyPy or Jython may delay closing the file until garbage collection runs (which may exhaust the available file descriptors for your process).
The latter example may get stuck on a FIFO and follows symlinks. On my system, it is equivalent to:
from os import *
open("abc", O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0o666)
In addition, touch command updates the access and modification times of existing files to the current time.
In more recent Python 3 versions, we have Path.touch() from pathlib. This will create an empty file if it doesn't exist, and update the mtime if it does, in the same way as your example os.system('touch abc'), but it's much more portable:
from pathlib import Path
abc = Path('abc')
abc.touch()
I need to create a folder that I use only once, but it needs to exist until the next run. It seems like I should be using the tempfile module from the standard library, but I'm not sure how to get the behavior that I want.
Currently, I'm doing the following to create the directory:
randName = "temp" + str(random.randint(1000, 9999))
os.makedirs(randName)
And when I want to delete the directory, I just look for a directory with "temp" in it.
This seems like a dirty hack, but I'm not sure of a better way at the moment.
Incidentally, the reason that I need the folder around is that I start a process that uses the folder with the following:
subprocess.Popen([command], shell=True).pid
and then quit my script to let the other process finish the work.
Creating the folder with a 4-digit random number is insecure, and you also need to worry about collisions with other instances of your program.
A much better way is to create the folder using tempfile.mkdtemp, which does exactly what you want (i.e. the folder is not deleted when your script exits). You would then pass the folder name to the second Popen'ed script as an argument, and it would be responsible for deleting it.
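A minimal sketch of that hand-off; worker.py is a hypothetical name for the second script:
# parent.py -- creates the directory, hands it off, then exits
import subprocess
import tempfile

d = tempfile.mkdtemp(prefix='myapp-')
subprocess.Popen(['python3', 'worker.py', d])  # the worker now owns the directory

# worker.py -- does the real work, then deletes the directory
import shutil
import sys

workdir = sys.argv[1]
try:
    ...  # do the long-running work inside workdir
finally:
    shutil.rmtree(workdir)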
What you've suggested is dangerous. You may have race conditions if anyone else is trying to create those directories -- including other instances of your application. Also, deleting anything containing "temp" may result in deleting more than you intended. As others have mentioned, tempfile.mkdtemp is probably the safest way to go. Here is an example of what you've described, including launching a subprocess to use the new directory.
import tempfile
import shutil
import subprocess
d = tempfile.mkdtemp(prefix='tmp')
try:
    subprocess.check_call(['/bin/echo', 'Directory:', d])
finally:
    shutil.rmtree(d)
"I need to create a folder that I use only once, but need to have it exist until the next run."
"Incidentally, the reason that I need the folder around is that I start a process ..."
Not incidental, at all. Crucial.
It appears you have the following design pattern.
mkdir someDirectory
proc1 -o someDirectory # Write to the directory
proc2 -i someDirectory # Read from the directory
if [ $? -eq 0 ]
then
    rm -rf someDirectory
fi
Is that the kind of thing you'd write at the shell level?
If so, consider breaking your Python application into several parts:
The parts that do the real work ("proc1" and "proc2")
A shell which manages the resources and processes; essentially a Python replacement for a bash script (see the sketch below).
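A minimal sketch of such a driver; proc1 and proc2 stand in for the real worker commands:
import shutil
import subprocess
import tempfile

workdir = tempfile.mkdtemp()
try:
    subprocess.check_call(['proc1', '-o', workdir])  # write to the directory
    subprocess.check_call(['proc2', '-i', workdir])  # read from the directory
except subprocess.CalledProcessError:
    raise  # a step failed; keep the directory around for inspection
else:
    shutil.rmtree(workdir)  # both steps succeeded, remove the directory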
A temporary file is something that lasts for a single program run.
What you need is not, therefore, a temporary file.
Also, beware of multiple users on a single machine - just deleting anything with the 'temp' pattern could be anti-social, doubly so if the directory is not located securely out of the way.
Also, remember that on some machines, the /tmp file system is rebuilt when the machine reboots.
You can also automatically register a function to completely remove the temporary directory on any exit (with or without error) by doing:
import atexit
import shutil
import subprocess
import tempfile

# create your temporary directory
d = tempfile.mkdtemp()

# delete it when Python exits
atexit.register(lambda: shutil.rmtree(d))

# do your stuff...
subprocess.Popen([command], shell=True).pid  # `command` as in your original code
tempfile is just fine, but to be on the safe side you'd need to save the directory name somewhere until the next run, for example pickle it. Then read it in the next run and delete the directory. And you are not required to have /tmp as the root; tempfile.mkdtemp has an optional dir parameter for that. By and large, though, it won't be different from what you're doing at the moment.
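A minimal sketch of that idea; state.pkl is a hypothetical file used to remember the directory between runs:
import os
import pickle
import shutil
import tempfile

STATE = 'state.pkl'

# on startup, remove the directory left over from the previous run
if os.path.exists(STATE):
    with open(STATE, 'rb') as f:
        old_dir = pickle.load(f)
    shutil.rmtree(old_dir, ignore_errors=True)

# create this run's directory and remember it for the next run
d = tempfile.mkdtemp()
with open(STATE, 'wb') as f:
    pickle.dump(d, f)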
The best way of creating a temporary file is to use tempfile.TemporaryFile(mode='w+b', suffix='.tmp', prefix='someRandomNumber', dir=None),
or you can use the mktemp() function.
The mktemp() function will not actually create any file, but will provide a unique filename (which does not actually contain the PID).