How to execute a Linux command via a Python loop - python

I have a list of files which will be extracted by another Linux script like this:
gzip -dc input/test_log.gz | bin/recreate state/object_mappings.sort > output/recreate.out
I need to replace the name test_log.gz with each file from my list in a Python loop. How can I execute this script dynamically? Here is my code:
import os
from subprocess import call
files = os.listdir("/my directory/")
files.sort()
for i in files:
    call('gzip -dc input/test_log.gz | bin/recreate state/object_mappings.sort > output/recreate.out')
I don't know how to make this script use the dynamic variable i.

You can use an f-string literal:
import os
from subprocess import call
files = os.listdir("/my directory/")
files.sort()
for file in files:
    path = os.path.join("/my directory/", file)
    # shell=True is required because the command string uses a pipe and a redirect
    call(f'gzip -dc {path} | bin/recreate state/object_mappings.sort > output/recreate.out', shell=True)
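If you would rather not go through the shell, the same pipeline can be built explicitly with subprocess.Popen. This is a minimal sketch assuming the same input paths, bin/recreate, and state/object_mappings.sort as above:
import os
import subprocess

src_dir = "/my directory/"
for name in sorted(os.listdir(src_dir)):
    path = os.path.join(src_dir, name)
    with open("output/recreate.out", "wb") as out:
        # gzip decompresses to a pipe; recreate reads the pipe and writes to the file
        gzip_proc = subprocess.Popen(["gzip", "-dc", path], stdout=subprocess.PIPE)
        recreate = subprocess.Popen(["bin/recreate", "state/object_mappings.sort"],
                                    stdin=gzip_proc.stdout, stdout=out)
        gzip_proc.stdout.close()  # so gzip gets SIGPIPE if recreate exits early
        recreate.wait()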

In my case, the method below works.
from subprocess import call
files = ['input.txt', 'input1.txt', 'input2.txt', 'input3.txt']
for file in files:
    call(['sed', '-i', 's+NameToBeReplaced+NewName+', file])
sed: the stream editor
-i: edits the source file in place
+: the delimiter used in the s command
call: takes the command as a list of arguments
For example, if you want to run ls -l input.txt:
call(['ls', '-l', 'input.txt'])
I hope the above information works for you 😃.
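If you already have the command as a single string, the standard library's shlex.split can produce that list form for you. A small sketch, reusing the ls example above:
import shlex
from subprocess import call

cmd = shlex.split("ls -l input.txt")  # -> ['ls', '-l', 'input.txt']
call(cmd)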

Related

Running scripts from a text file

I'm new and I'm trying to write a script that runs a sequence of scripts based on what I put in a text file.
For example, I have Script1.py with print("Script1"), Script2.py with print("Script2"), and Script3.py with print("Script3"). One time the text file might contain:
Script1.py
Script3.py
and another time:
Script2.py
Script1.py
Script3.py
Script2.py
The main script should execute the sequence of scripts listed in the text file.
PS: Apologies for my English.
Thank you in advance.
import subprocess

# The original left the function bodies as pseudocode ("Script1.py");
# subprocess is one way to actually run each script.
def Script1():
    subprocess.run(['python', 'Script1.py'])
def Script2():
    subprocess.run(['python', 'Script2.py'])
def Script3():
    subprocess.run(['python', 'Script3.py'])

# Keys must match the lines in test.txt exactly (assumed here to omit the .py extension).
commands_table = {'Script1': Script1, 'Script2': Script2, 'Script3': Script3}
with open('test.txt', 'r') as command_file:
    for cmd in command_file:
        cmd = cmd.strip()
        commands_table[cmd]()
Use import to grab functions from other files.
scripts.py:
def script1():
    print('1')
def script2():
    print('2')
app.py:
import scripts

commands_table = {'Script1': scripts.script1, 'Script2': scripts.script2}
with open('test.txt') as command_file:
    for cmd in command_file:
        commands_table[cmd.strip()]()
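One caveat with the lookup above: a blank line or an unknown name in test.txt raises a KeyError. A hedged variant that skips such lines:
import scripts

commands_table = {'Script1': scripts.script1, 'Script2': scripts.script2}
with open('test.txt') as command_file:
    for cmd in command_file:
        func = commands_table.get(cmd.strip())  # None for blank or unknown lines
        if func is not None:
            func()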
To run a sequence of scripts based on what is listed in a text file, you can use the following steps:
Read the contents of the text file using Python's built-in open function and the readlines method. This will give you a list of strings, where each string corresponds to a script name.
Loop through the list of script names and use Python's subprocess module to run each script in turn. The subprocess module provides a way to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
Example:
import subprocess

# Step 1: Read the contents of the text file
with open('script_list.txt', 'r') as f:
    script_names = [line.strip() for line in f.readlines()]

# Step 2: Loop through the list of script names and run each script
for script_name in script_names:
    subprocess.run(['python', script_name])
Assuming that your text file is named "script_list.txt" and contains the lines "Script1.py", "Script2.py", and "Script3.py", this script will run each of those scripts in order, printing "Script1", "Script2", and "Script3" to the console.
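One refinement worth knowing: 'python' may not resolve to the interpreter that is running the main script, but sys.executable always does. A hedged variant of the loop above:
import subprocess
import sys

with open('script_list.txt') as f:
    for line in f:
        name = line.strip()
        if name:  # skip blank lines
            # run each script with the same interpreter as this one
            subprocess.run([sys.executable, name], check=True)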
import os

with open('test.txt', 'r') as command_file:
    commands = command_file.read()
commandsList = commands.splitlines()
for cmd in commandsList:
    os.system(cmd)
You have to chmod +x all your scripts so they are executable, and the first line of each script should be:
#! /usr/local/bin/python3
or whatever your Python interpreter is.
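If you want to do the chmod +x step from Python as well, os.chmod can add the execute bit. A small sketch, using the script names from the question:
import os
import stat

for name in ['Script1.py', 'Script2.py', 'Script3.py']:
    mode = os.stat(name).st_mode
    os.chmod(name, mode | stat.S_IXUSR)  # equivalent to chmod u+x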

Proper way of reading in files from a directory using Python 2.6 in bash shell

I am trying to read in files for text processing.
The idea is to run them through the Hadoop pseudo-distributed file system on my virtual machine, using map-reduce code that I am writing. The interface is Ubuntu Linux; I am running Python 2.6 with the installation. I need to use sys.stdin to read in the files, and sys.stdout to pass from mapper to reducer.
Here is my test code for the mapper:
#!/usr/bin/env python
import sys
import string
import glob
import os

files = glob.glob(sys.stdin)
for file in files:
    with open(file) as infile:
        txt = infile.read()
        txt = txt.split()
        print(txt)
I'm not sure how glob works with sys.stdin and I get the following errors:
After testing with piping:
[training#localhost data]$ cat test | ./mapper.py
I get this:
cat: test: Is a directory
Traceback (most recent call last):
  File "./mapper.py", line 8, in <module>
    files = glob.glob(sys.stdin)
  File "/usr/lib64/python2.6/glob.py", line 16, in glob
    return list(iglob(pathname))
  File "/usr/lib64/python2.6/glob.py", line 24, in iglob
    if not has_magic(pathname):
  File "/usr/lib64/python2.6/glob.py", line 78, in has_magic
    return magic_check.search(s) is not None
TypeError: expected string or buffer
For the moment, I am just trying to read in three small .txt files in one directory.
Thanks!
I still do not fully understand what your expected output is (a list or plain text), but the following would work. The TypeError occurs because glob.glob() expects a pattern string, while sys.stdin is a file object:
#!/usr/bin/env python
import sys, glob

dir = sys.stdin.read().rstrip('\r\n')
files = glob.glob(dir + '/*')
for file in files:
    with open(file) as infile:
        txt = infile.read()
        txt = txt.split()
        print(txt)
Then execute with:
echo "test" | ./mapper.py
My recommendation is to feed the directory name via a command-line argument rather than via stdin as above.
If you want to tweak the format of the output, please let me know.
Hope this helps.
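For completeness, here is a minimal sketch of that command-line-argument approach, invoked as ./mapper.py test rather than echo "test" | ./mapper.py:
#!/usr/bin/env python
import sys, glob

dir = sys.argv[1]  # directory name from the first command-line argument
for file in glob.glob(dir + '/*'):
    with open(file) as infile:
        txt = infile.read().split()
        print(txt)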
files = os.listdir(path)
Use this to list all the files, then apply the for loop.

Run scrapy using subprocess

I need to run several Python scripts; some of them are scrapy projects.
To run a spider I try this:
from subprocess import call
import subprocess
call(["scrapy",'crawl','my_spider','-o output_file.csv'],cwd='/home/luis/Schreibtisch/kukun/bbb_new_pro/scripts/2_Get_links)
I wonder whether it is possible to specify the output file's directory. I tried this:
call(["scrapy",'crawl','my_spider','-o folder_1/folder_2/output_file.csv'],cwd='project_folder')
But that only creates new folders under the project directory; I want the file outside that folder.
The other thing is: can I specify the name of the output file in a variable? Something like:
file_name = 'output file.csv'
call(["scrapy",'crawl','my_spider','-o + file_name '],cwd='project_folder')
This worked for me:
from subprocess import call

name = "spider_name"
# pass -o and the filename as separate list items,
# otherwise the space becomes part of the filename
call(["scrapy", "crawl", name, "-o", "{0}.json".format(name)])

How to get the path to the opened file from a script inside a OS X .app bundle?

So I have Foo.app, which is configured to run Foo.app/Contents/MacOS/foo.py when it is launched. In the app's Info.plist file I also have it set as the application that handles opening ".bar" files. When I double-click a .bar file, Foo.app opens as expected.
The problem is that when a file is opened this way, I can't figure out how to get the path to the opened file in my foo.py script. I assumed it would be in sys.argv[1], but it's not. Instead, sys.argv[1] contains a strange string like "-psn_0_2895682".
Does anyone know how I can get the absolute path to the opened file? I've also checked os.environ, but it's not there either.
You can get your app's process ID then ask the OS for all open files using lsof, looking for your process's ID:
from os import getpid
from subprocess import check_output, STDOUT

pid = getpid()
lsof = check_output(['/usr/sbin/lsof', '-p', str(pid)], stderr=STDOUT).split("\n")
for line in lsof[1:]:
    print line
The regular files will be of type 'REG' in the fifth column ([4] if you're indexing in).
Files open within the running code can be displayed in a similar way:
from os import getpid
from subprocess import check_output, STDOUT
import re

pid = getpid()
f = open('./trashme.txt', 'w')
f.write('This is a test\n')
lsof = check_output(['/usr/sbin/lsof', '-p', str(pid)], stderr=STDOUT).split("\n")
print lsof[0]
for line in lsof[1:]:
    if re.search('trashme', line): print line
f.close()
Which results in:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 6995 greg 3w REG 14,2 0 2273252 /Users/greg/Desktop/trashme.txt

Using Python to execute a command on every file in a folder

I'm trying to create a Python script that would:
Look into the folder "/input".
For each video in that folder, run a mencoder command (to transcode it into something playable on my phone).
Once mencoder has finished its run, delete the original video.
That doesn't seem too hard, but I suck at Python :)
Any ideas on what the script should look like?
Bonus question: should I use os.system or subprocess.call?
subprocess.call seems to allow for a more readable script, since I can write the command like this:
cmdLine = ['mencoder',
           sourceVideo,
           '-ovc',
           'copy',
           '-oac',
           'copy',
           '-ss',
           '00:02:54',
           '-endpos',
           '00:00:54',
           '-o',
           destinationVideo]
EDIT: OK, that works:
import os, subprocess

bitrate = '100'
mencoder = 'C:\\Program Files\\_utilitaires\\MPlayer-1.0rc2\\mencoder.exe'
inputdir = 'C:\\Documents and Settings\\Administrator\\Desktop\\input'
outputdir = 'C:\\Documents and Settings\\Administrator\\Desktop\\output'
for fichier in os.listdir(inputdir):
    print 'fichier :' + fichier
    sourceVideo = inputdir + '\\' + fichier
    destinationVideo = outputdir + '\\' + fichier[:-4] + ".mp4"
    commande = [mencoder,
                '-of',
                'lavf',
                [...]
                '-mc',
                '0',
                sourceVideo,
                '-o',
                destinationVideo]
    subprocess.call(commande)
    os.remove(sourceVideo)
raw_input('Press Enter to exit')
I've removed the mencoder command, for clarity and because I'm still working on it.
Thanks to everyone for your input.
To find all the filenames, use os.listdir().
Then loop over the filenames, like so:
import os

for filename in os.listdir('dirname'):
    callthecommandhere(blablahbla, filename, foo)
If you prefer subprocess, use subprocess. :-)
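To make that concrete for the mencoder case, here is a hedged sketch: the options are illustrative only, and the original is removed only when mencoder reports success:
import os
import subprocess

input_dir = '/input'
for filename in os.listdir(input_dir):
    source = os.path.join(input_dir, filename)
    destination = os.path.splitext(source)[0] + '.mp4'
    # illustrative options; substitute your real transcoding flags
    ret = subprocess.call(['mencoder', source, '-ovc', 'copy', '-oac', 'copy',
                           '-o', destination])
    if ret == 0:
        os.remove(source)  # delete the original only after a clean exit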
Use os.walk to iterate recursively over directory contents:
import os

root_dir = '.'
for directory, subdirectories, files in os.walk(root_dir):
    for file in files:
        print os.path.join(directory, file)
There is no real difference between os.system and subprocess.call here, unless you have to deal with strangely named files (filenames including spaces, quotation marks, and so on). In that case subprocess.call is definitely better, because you don't need to do any shell quoting on file names. os.system is better when you need to accept any valid shell command, e.g. one received from the user in a configuration file.
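A quick illustration of that quoting difference (the filename is hypothetical):
import os
import subprocess

name = 'my file.avi'  # filename containing a space
# os.system passes one string to the shell, so the space splits the argument:
os.system('ls -l ' + name)            # the shell sees two arguments
# subprocess.call with a list passes the name through unchanged:
subprocess.call(['ls', '-l', name])   # one argument, no quoting needed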
The new recommended way in Python 3 is to use pathlib:
from pathlib import Path

mydir = Path("path/to/my/dir")
for file in mydir.glob('*.mp4'):
    print(file.name)
    # do your stuff
Instead of *.mp4 you can use any filter, even a recursive one like **/*.mp4. If you want to use more than one extension, you can simply iterate all with * or **/* (recursive) and check every file's extension with file.name.endswith(('.mp4', '.webp', '.avi', '.wmv', '.mov'))
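A small sketch of that multi-extension filter, using Path.suffix instead of string methods (the extension list is the one mentioned above):
from pathlib import Path

mydir = Path("path/to/my/dir")
video_exts = {'.mp4', '.webp', '.avi', '.wmv', '.mov'}
for file in mydir.glob('**/*'):       # recursive
    if file.suffix.lower() in video_exts:
        print(file.name)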
Python might be overkill for this.
for file in *; do mencoder -some options "$file"; rm -f "$file" ; done
The rm -f "$file" deletes the files.
AVI to MPG (pick your extensions):
import os
from subprocess import Popen, PIPE

files = os.listdir('/input')
for sourceVideo in files:
    if sourceVideo[-4:] != ".avi":
        continue
    destinationVideo = sourceVideo[:-4] + ".mpg"
    cmdLine = ['mencoder', sourceVideo, '-ovc', 'copy', '-oac', 'copy', '-ss',
               '00:02:54', '-endpos', '00:00:54', '-o', destinationVideo]
    output1 = Popen(cmdLine, stdout=PIPE).communicate()[0]
    print output1
    # 'del' is a Windows shell builtin, not an executable; os.remove works everywhere
    os.remove(sourceVideo)
Or you could use the os.path.walk function, which does more work for you than just os.walk (note that os.path.walk was removed in Python 3, so this applies to Python 2 only):
A stupid example:
def walk_func(blah_args, dirname, names):
    print ' '.join(('In ', dirname, ', called with ', blah_args))
    for name in names:
        print 'Walked on ' + name

if __name__ == '__main__':
    import os.path
    directory = './'
    arguments = '[args go here]'
    os.path.walk(directory, walk_func, arguments)
I had a similar problem; with a lot of help from the web and this post I made a small application. My targets are VCD and SVCD, and I don't delete the source, but I reckon it will be fairly easy to adapt to your own needs.
It can convert one video and cut it, or convert all videos in a folder, rename them, and put them in a subfolder /VCD.
I also added a small interface; I hope someone else finds it useful!
I put the code and files here, btw: http://tequilaphp.wordpress.com/2010/08/27/learning-python-making-a-svcd-gui/
