I have a number of test cases in separate .py files that I want to run against a module I've created. All of these files use that module, and each prints a pre-determined output (some of them thousands of lines long).
Is there a way to write a .py script that runs these other test .py scripts and checks their outputs? I've looked into doctest and unittest, but those seem geared toward particular functions rather than whole scripts.
EDIT: These .py files print their output rather than return values. Some of them also use multi-threading.
Try this:

import glob

for each_file in glob.glob("/home/test/*.py"):
    variables = {}  # whatever globals the script needs to run
    with open(each_file) as f:
        exec(f.read(), variables)  # execfile(each_file, variables) on Python 2
What you need to do is invoke these scripts with subprocess:

subprocess.Popen(['python', file_name], universal_newlines=True, stdout=stdout, stderr=stderr)

Here stdout and stderr are file objects; each script will write its results into those files. Rather than guessing a sleep() duration, you can call wait() on the returned process object to block until the script finishes.
After that you can open these files and check the results.
Read more about subprocess here.
I have written multiple Python scripts that are to be run sequentially to achieve a goal, i.e.:
my-directory/
a1.py,
xyz.py,
abc.py,
....,
an.py
All these scripts are in the same directory, and now I want to write a single script that can run them in sequence. To achieve this I want to write a single Python (.py) script, but I don't know how to write it. I'm on Windows 10, so the bash script method isn't applicable.
What's the best possible way to write an efficient migration script on Windows?
Using a master Python script is a possibility (and it's cross-platform, as opposed to batch or shell scripts): scan the directory, open each file, and execute it.
import glob
import os

os.chdir(directory)  # locate ourselves in the target directory
for script in sorted(glob.glob("*.py")):
    with open(script) as f:
        contents = f.read()
    exec(contents)
(There was an execfile function in Python 2 but it's gone; in Python 3 we have to read the file contents and pass them to exec, which also works in Python 2.)
In that example, order is determined by the script names. To enforce a different order, use an explicit list of Python scripts instead:
for script in ["a.py", "z.py"]:
That method doesn't create subprocesses; it runs the scripts as if they were concatenated together (which can be an issue if some files aren't closed and are then used by the following scripts). Also, if an exception occurs, it stops the whole list of scripts. That's probably not so bad, since it keeps the later scripts from working on bad data.
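If the leakage between scripts becomes a problem, one mitigation (a sketch, not part of the original answer) is to give each script a fresh globals dict:

for script in sorted(glob.glob("*.py")):
    with open(script) as f:
        contents = f.read()
    # a fresh namespace per script limits accidental sharing between them
    exec(contents, {'__name__': '__main__', '__file__': script})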
You can wrap script2's code in a function, like this:
script2.py

def main():
    print('Hello World!')

And import that function from script2 like this:
script1.py

from script2 import *

main()

I hope this is helpful (tell me if I didn't answer your question; I'm Italian...).
For the life of me I can't figure this one out.
I have 2 applications built in Python, i.e. 2 projects in different folders. Is there a command to say, in the first application, something like "run file2 from documents/project2/test2.py"?
I tried things like os.system('') and exec(), but those only seem to work if the file is in the same folder. How can I give a command a path like documents/project2 and then, for example:
exec(documents/project2 python test2.py) ?
Short version:
Is there a command that runs python test2.py while test2.py lives in a completely different folder/project?
Thanks for all the feedback!
There are a number of approaches you can take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This executes all the code at the 'root' level of the script and makes its functions, types and variables available to the importing script.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript  # relative imports only work from inside a package
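If the two projects don't share a parent package, a common workaround (a sketch; the layout below is assumed) is to extend sys.path before importing:

import os
import sys

# assumed layout:  parent/project1/script1.py  (this file)
#                  parent/project2/test2.py
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'project2'))

import test2  # runs test2's top-level code once, on first import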
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute them. This is considered more of a security risk but, given the interpreted nature of Python, under normal circumstances it's not that much worse than an import.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values to code inside the script, or out of it, you probably want to use files in this case.
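Alternatively, you can pass values in and out through the namespace dict you hand to exec (a sketch; the variable names are assumptions):

ns = {'input_value': 42}  # visible inside the script as a global
with open('/path/to/myscript.py') as f:
    exec(f.read(), ns)
print(ns.get('result'))   # assumes the script assigned a global named result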
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script you can call any other executable, and that includes Python itself running another script.
Here's how:
from subprocess import run
run(['python', 'path/to/myscript.py'])
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard input and prints it, with some extra text, to standard output, like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example (note the raw string, so the backslashes aren't treated as escape sequences):
exec(open(r"C:\Projects\Python\Test.py").read())
I try to keep my code modular because it's too long; the problem is that I don't know whether I'm doing it safely. I've segmented my code into different files, so one Python file runs the others. Sometimes one file has to call another file that runs yet another file: multiple chained commands.
The issue is that some of the files process sensitive information like passwords, so I don't know whether I'm doing it safely. Ideally, after a file is executed, it should close itself, delete all variables from its memory and free that space, just as it normally would if I executed that one file on its own. The problem is that I don't know whether this still applies when I call multiple files nested into one another. Obviously only the file that has finished executing should clear itself, not the ones that are still active, but I don't know whether that is the case.
I have been calling my modules like this
os.system('python3 ' + filename)
And each file subsequently contains the same kind of os.system call to run the next file, forming a nested or chained call system.
For example if I call the first file from shell:
python3 file1.py
and then file1 calls:
os.system('python3 file2.py')
and then file2 calls:
os.system('python3 file3.py')
I would want file3 cleaned from the memory and closed entirely after it runs, whereas file2 and file1 might still be active. I don't want file3 to be still inside the memory after it executed itself. So if file3 works with passwords, it should obviously clean them from the memory after it runs.
How to do this?
I have read about multiple options:
from subprocess import call
call(["python3", "file2.py"])
import subprocess
subprocess.call("file2.py", shell=True)
execfile('file2.py')  # Python 2 only; removed in Python 3
import subprocess
subprocess.Popen("file2.py", shell=True)
Which one is safer?
Python relies heavily on the notion of importing. You should not try to reinvent the wheel here: just import your scripts from the main script and use functions to trigger them. If you want to be sure variables are discarded, include a del statement at the end of the function, or as soon as the variable is no longer in use.
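A minimal sketch of that structure (the function and variable names are hypothetical):

# file2.py
def do_work(password):
    # ... use the password ...
    result = 'done'
    del password  # drops the local name; note that CPython does not
                  # guarantee the string's bytes are wiped from memory
    return result

# file1.py
import file2  # top-level code in file2 runs once, on first import

print(file2.do_work('hunter2'))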
On another note, your password problem is flawed from the start. If a .py file contains a password in plain text, it is not, and never will be, under any scenario, secure. You should use a proper secret store; see this topic: I need to securely store a username and password in Python, what are my options?
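For example, a minimal alternative to hard-coding the password (a sketch; the variable name is an assumption):

import os

# read the secret from the environment instead of the source file
password = os.environ['APP_PASSWORD']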
To start off, I am a beginner in Python, so I am not even sure whether my question makes sense or is even possible.
I have 2 Python files, app.py and compare.py. compare.py takes two arguments (file paths) to run, so for example, when I want to run it, I do python compare.py ./image1.jpg ./image2.jpg. The output I get is some text printed to the terminal, such as Comparison Done, The distance is 0.544.
Now I want to run compare.py from inside app.py and get back a string with whatever compare.py would usually print to the terminal. So, for example:
result = function('compare.py ./image1.jpg ./image2.jpg') where result would hold the required string. Is this possible?
You can use os.popen:
In app.py:
import os
output = os.popen('python compare.py ./image1.jpg ./image2.jpg').readlines()
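Alternatively (a sketch, not part of the original answer), subprocess.run on Python 3.7+ captures the output more robustly:

import subprocess

result = subprocess.run(
    ['python', 'compare.py', './image1.jpg', './image2.jpg'],
    capture_output=True, text=True,
)
output = result.stdout  # e.g. "Comparison Done, The distance is 0.544"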
I need to create a folder that I use only once, but it needs to exist until the next run.
Currently, I'm doing the following to create the directory:
randName = "temp" + str(random.randint(1000, 9999))
os.makedirs(randName)
And when I want to delete the directory, I just look for a directory with "temp" in it.
This seems like a dirty hack, but I'm not sure of a better way at the moment.
Incidentally, the reason that I need the folder around is that I start a process that uses the folder with the following:
subprocess.Popen([command], shell=True).pid
and then quit my script to let the other process finish the work.
Creating the folder with a 4-digit random number is insecure, and you also need to worry about collisions with other instances of your program.
A much better way is to create the folder using tempfile.mkdtemp, which does exactly what you want (i.e. the folder is not deleted when your script exits). You would then pass the folder name to the second Popen'ed script as an argument, and it would be responsible for deleting it.
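A sketch of that hand-off (worker.py is a hypothetical second script):

import subprocess
import tempfile

d = tempfile.mkdtemp()  # created securely; NOT removed automatically on exit
subprocess.Popen(['python', 'worker.py', d])
# this script can now exit; worker.py reads sys.argv[1] and is
# responsible for shutil.rmtree(d) when it is done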
What you've suggested is dangerous. You may have race conditions if anyone else is trying to create those directories -- including other instances of your application. Also, deleting anything containing "temp" may result in deleting more than you intended. As others have mentioned, tempfile.mkdtemp is probably the safest way to go. Here is an example of what you've described, including launching a subprocess to use the new directory.
import tempfile
import shutil
import subprocess

d = tempfile.mkdtemp(prefix='tmp')
try:
    subprocess.check_call(['/bin/echo', 'Directory:', d])
finally:
    shutil.rmtree(d)
"I need to create a folder that I use only once, but need to have it exist until the next run."
"Incidentally, the reason that I need the folder around is that I start a process ..."
Not incidental, at all. Crucial.
It appears you have the following design pattern.
mkdir someDirectory
proc1 -o someDirectory  # Write to the directory
proc2 -i someDirectory  # Read from the directory
if [ $? -eq 0 ]
then
    rm -rf someDirectory
fi
Is that the kind of thing you'd write at the shell level?
If so, consider breaking your Python application into several parts (a sketch follows the list below).
The parts that do the real work ("proc1" and "proc2")
A Shell which manages the resources and processes; essentially a Python replacement for a bash script.
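A sketch of the "Shell" part in Python, mirroring the shell pattern above (proc1 and proc2 stand in for your real workers):

import shutil
import subprocess
import tempfile

work_dir = tempfile.mkdtemp()
subprocess.check_call(['proc1', '-o', work_dir])  # write to the directory
rc = subprocess.call(['proc2', '-i', work_dir])   # read from the directory
if rc == 0:
    shutil.rmtree(work_dir)  # clean up only on success, as in the shell version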
A temporary file is something that lasts for a single program run.
What you need is not, therefore, a temporary file.
Also, beware of multiple users on a single machine - just deleting anything with the 'temp' pattern could be anti-social, doubly so if the directory is not located securely out of the way.
Also, remember that on some machines, the /tmp file system is rebuilt when the machine reboots.
You can also automatically register a function to completely remove the temporary directory on any exit (with or without an error) by doing:
import atexit
import shutil
import subprocess
import tempfile

# create your temporary directory
d = tempfile.mkdtemp()

# remove it automatically when Python exits
atexit.register(lambda: shutil.rmtree(d))

# do your stuff...
subprocess.Popen([command], shell=True).pid
tempfile is just fine, but to be on the safe side you'd need to save the directory name somewhere until the next run, for example by pickling it; then read it in the next run and delete the directory. And you are not required to put it under /tmp: tempfile.mkdtemp has an optional dir parameter for that. By and large, though, it won't be much different from what you're doing at the moment.
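A sketch of that bookkeeping (the state-file path is an assumption):

import os
import pickle
import shutil
import tempfile

STATE = os.path.expanduser('~/.myapp_tmpdir.pickle')  # hypothetical state file

# delete the directory left over from the previous run, if any
if os.path.exists(STATE):
    with open(STATE, 'rb') as f:
        shutil.rmtree(pickle.load(f), ignore_errors=True)

# create this run's directory and remember it for the next run
d = tempfile.mkdtemp()
with open(STATE, 'wb') as f:
    pickle.dump(d, f)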
The best way of creating the temporary file name is to use tempfile.TemporaryFile(mode='w+b', suffix='.tmp', prefix='someRandomNumber', dir=None),
or you can use the mktemp() function.
The mktemp() function will not actually create any file, but will provide a unique filename (which does not contain the PID). Be aware that the documentation flags mktemp() as unsafe: another process may create a file with that name between the call and your first use of it.