I have a question about how to get the PID of a process right when it starts, not once the process has finished.
This is because I want to be able to kill the FFmpeg process (not the Python script) if necessary, so knowing its PID only at the end is of no use.
FYI: this script gets the PIDs of FFmpeg processes.
Below you will see how I coded this script, which is working fine, except that I get the PID at the end, as I mentioned before.
Any idea how to do it?
import json, base64, sys, subprocess

# argv[1]: base64-encoded JSON list holding the FFmpeg command line
# argv[2]: base64-encoded JSON string holding the log file path
thisList = json.loads(base64.b64decode(sys.argv[1]))
logFileName = json.loads(base64.b64decode(sys.argv[2]))

p = subprocess.Popen(thisList, stderr=open(logFileName, 'w'))
print(p.pid)
As you can see, I am decoding a base64-encoded string (the FFmpeg command line) to protect it, because it comes from a URL.
Also, I need to write the FFmpeg output to a file, so I am redirecting stderr to write it externally. I encoded its path in base64 too.
Finally, there will be many FFmpeg processes running concurrently, so something like searching for the PID by the FFmpeg process name is too generic.
It would be possible to launch a second script that searches for the full FFmpeg command line and gets its PID, which works well on Unix (I already have a PHP script that does that), but not on Windows. I would like to be compatible with both operating systems.
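For reference, subprocess.Popen returns as soon as the child process is spawned, so its pid attribute is available immediately, while FFmpeg is still running; a minimal sketch (the FFmpeg arguments here are hypothetical placeholders):

import subprocess

# Launch FFmpeg without waiting for it to finish
p = subprocess.Popen(['ffmpeg', '-i', 'in.mp4', 'out.mkv'])
print(p.pid)   # usable right away, while the child is still running

# Later, if necessary, stop just the FFmpeg process (works on Unix and Windows)
p.terminate()
p.wait()  # reap the child to avoid a zombie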
The PHP script that works on Unix is:
function getPidByCommand($myString)
{
    $pid = 0;
    $myString = str_replace('"', '', $myString);
    exec("ps aux | grep \"${myString}\" | grep -v grep | awk '{ print $2 }' | head -1", $out);
    if (isset($out[0]))
    {
        $pid = intval($out[0]);
    }
    return $pid;
}
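For completeness, a cross-platform equivalent of that lookup is possible with the third-party psutil package; a minimal sketch (not the original script, and assuming psutil is installed):

import psutil

def get_pid_by_command(needle):
    # Walk all processes and match against each one's full command line
    for proc in psutil.process_iter(['pid', 'cmdline']):
        cmdline = ' '.join(proc.info['cmdline'] or [])
        if needle in cmdline:
            return proc.info['pid']
    return 0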
Thank you very much in advance.
Mapg
I've got a Docker service log that takes in NiFi actions, and I want to capture only the log entries that include "Successfully sent" or "Failed to process session" (and nothing more). They should be captured in a directory called "nifi_logs" in the present working directory. I need to do all of this using Python.
This is what I got so far:
import subprocess

docker_log = 'docker service logs nifi | grep -e "Successfully sent" -e "Failed to process session" >> $PWD/nifi_logs/nifi1.log'
subprocess.Popen(docker_log, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
I believe subprocess.Popen() is having difficulty with the double quotes used in the grep, as nifi1.log ends up completely empty. If the first command instead looks like the following:
docker_log = 'docker service logs nifi | grep session >> $PWD/nifi_logs/nifi1.log'
The Python code works just fine and captures all log entries with "session" in nifi1.log. As I explained above, though, I need to grep for two kinds of log entries, and both include multiple words, meaning I need to use quotes.
If I were to just run this command on the Terminal without Python:
docker service logs nifi | grep -e "Successfully sent" -e "Failed to process session" >> $PWD/nifi_logs/nifi1.log
The log generates the entries just fine, so I know the Docker Service command is written correctly.
I've tried switching the single and double quotes around, I've tried using \" instead of " within the single quotes ... nifi1.log continues to be empty.
I also tried using os.system() instead of subprocess.Popen(), but I run into the same problem (and I believe os.system() is somewhat deprecated).
Any ideas what I'd need to change docker_log to so that it will properly grep for the two search criteria? So you're aware: this question is not asking HOW I generate the log entries (I know which Docker services I'm looking for; they generate properly), just what I need to do to get Python's subprocess.Popen to accept a command with quotes in it.
Thank you for your assistance, @David. Looking at your example, I found a solution: I removed stdout=subprocess.PIPE from subprocess.Popen, and now it accepts double quotes just fine!
docker_log = 'docker service logs nifi | grep -e "Successfully sent" -e "Failed to process session" >> $PWD/nifi_logs/nifi1.log'
subprocess.Popen(docker_log, shell=True, stderr=subprocess.STDOUT)
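An alternative that sidesteps shell quoting entirely is to do the filtering in Python rather than in grep; a sketch, assuming Python 3.7+ and the same service name and output path:

import subprocess

# Stream the service log and keep only the two kinds of entries
with open('nifi_logs/nifi1.log', 'a') as log:
    proc = subprocess.Popen(['docker', 'service', 'logs', 'nifi'],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            text=True)
    for line in proc.stdout:
        if 'Successfully sent' in line or 'Failed to process session' in line:
            log.write(line)
    proc.wait()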
After some searching and checking previous answers such as Passing objects from python to powershell, it appears the best way to send objects from a Python script to a PowerShell script or command is as JSON.
However, with something like this (dir_json.py):
from json import dumps
from pathlib import Path

for fn in Path('.').glob('**/*'):
    print(dumps({'name': str(fn)}))
You can do this:
python .\dir_json.py | ConvertFrom-JSON
And the result is OK, but the problem I'm hoping to solve is that ConvertFrom-JSON seems to wait until the script has completed before reading any of the JSON, even though each individual JSON object ends at the end of its own line. This can easily be verified by adding a line like time.sleep(1) after the print.
Is there a better way to send objects from Python to PowerShell than using JSON objects? And is there a way to actually stream them as they are written, instead of passing the entire output of the Python script after the script completes?
I ran into jq, which was recommended by "people on the internet" as a solution to my type of problem, stating that ConvertFrom-JSON doesn't allow streaming, but jq does. However, this did nothing to improve my situation:
python .\dir_json_slow.py | jq -cn --stream 'fromstream(1|truncate_stream(inputs))' | ConvertFrom-JSON
To make jq play nice, I did change the script to write a list of objects instead of separate objects:
from sys import stdout
from time import sleep
from json import dumps
from pathlib import Path

first = True
stdout.write('[\n')
for fn in Path('.').glob('**/*'):
    if first:
        stdout.write(dumps({'name': str(fn)}))
        first = False
    else:
        stdout.write(',\n' + dumps({'name': str(fn)}))
    stdout.flush()
    sleep(.1)
stdout.write('\n]')
(Note that the problem isn't ConvertFrom-JSON holding things up at the end; jq itself only starts writing output once the Python script completes.)
As long as each line[1] that your Python script outputs is a complete JSON object by itself, you can use a ForEach-Object call to process each output line as it is received by PowerShell, calling ConvertFrom-Json for each:
python .\dir_json.py | ForEach-Object { ConvertFrom-Json $_ }
A simplified example that demonstrates that streaming occurs, pausing between lines processed (waiting for a keypress):
# Prompts for a keystroke after each line emitted by the Python command.
python -c 'from json import dumps; print(dumps({''name'': ''foo''})); print(dumps({''name'': ''bar''}))' |
ForEach-Object { ConvertFrom-Json $_ | Out-Host; pause }
Note: The Out-Host call is only used to work around a display bug in PowerShell, still present as of PowerShell 7.2: Out-Host forces synchronous printing of the implicit table-formatting that is applied - see this answer.
ConvertFrom-Json - atypically for PowerShell cmdlets - collects all input up front before emitting the object(s) that the JSON input has been parsed into, which can be demonstrated as follows:
# Prompts for a keystroke first, and only after *both*
# strings have been emitted does ConvertFrom-Json produce output.
& { '{ "name": "foo" }'; pause; '{ "name": "bar" }' } |
ConvertFrom-Json | Out-Host
[1] PowerShell invariably relays output from external programs such as Python line by line. By contrast, a PowerShell-native command is free to emit any object to the pipeline, including multiline strings.
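On the Python side, pairing this with an explicit flush keeps the stream moving even when stdout is a pipe; a minimal sketch of such an NDJSON emitter (one complete JSON object per line):

from json import dumps
from pathlib import Path
from sys import stdout

for fn in Path('.').glob('**/*'):
    # One self-contained JSON object per line, flushed immediately so the
    # PowerShell side receives it as soon as it is written
    stdout.write(dumps({'name': str(fn)}) + '\n')
    stdout.flush()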
I have a bash script to detect HDDs connected via USB. I would like to convert it to a Python script to use over an SSH connection.
#!/bin/bash
for disk in /dev/sd?
do
    detect=$(udevadm info $disk | grep USB)
    if [[ "$detect" == *USB ]]; then
        echo "$disk is a USBDISK.."
    fi
done
I tried the following, but it doesn't work; I have a problem with the for loop and with the condition in the if statement:
import paramiko

for disk in "/dev/sd*":
    CMD = 'udevadm info %s | grep USB' % disk
    stdin, stdout, stderr = client.exec_command(CMD, get_pty=True)
    detect = stdout.read()
    if detect == '*USB':
        print "disk is a USBDISK.."
Thanks.
As Selcuk mentioned, you are looping through a string instead of through the contents of the directory. When you do:
for disk in "/dev/sd*":
you are looping through the string character by character, as if over the list ['/', 'd', 'e', 'v', '/', 's', 'd', '*'].
Python works differently from bash when it comes to strings and paths. Take a look at this answer from ghostdog74 to see how to loop through a directory's contents; see the sketch below.
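For a local filesystem, that loop would look something like this sketch with the glob module:

from glob import glob

# Expand the pattern into actual device paths before looping
for disk in glob('/dev/sd?'):
    print(disk)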
Fixing the loop could also resolve the problem with the if condition. As a recommendation, consider whether you want the condition to check for an exact string:
if detect == '*USB':
or for string inclusion (note that the * was a shell glob, not part of the text, so it will never appear in the udevadm output):
if 'USB' in detect:
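Putting it together for the remote case, the glob has to be expanded by the remote shell, not by local Python. A minimal sketch, assuming Python 3 and that client is an already-connected paramiko.SSHClient:

# Let the remote shell expand /dev/sd? into the actual device list
stdin, stdout, stderr = client.exec_command('ls /dev/sd? 2>/dev/null')
disks = stdout.read().decode().split()

for disk in disks:
    _, out, _ = client.exec_command('udevadm info %s' % disk)
    if 'USB' in out.read().decode():
        print('%s is a USBDISK..' % disk)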
I am using Inkscape to take a single-page PDF file as input and to output an SVG file. The following works from the command line:
c:\progra~1\Inkscape\inkscape -z -f "N:\pdf_skunkworks\inflation-report-may-2018-page0.pdf" -l "N:\pdf_skunkworks\inflation-report-may-2018-page0.svg"
where -z is short for --without-gui, -f specifies the input file, and -l is short for --export-plain-svg.
I could not get the equivalent to work from Python, passing the command line either as one long string or as separate arguments. stderr and stdout show no error, as they both print None:
import subprocess #import call,subprocess
#completed = subprocess.run(["c:\Progra~1\Inkscape\Inkscape.exe",r"-z -f \"N:\pdf_skunkworks\inflation-report-may-2018-page0.pdf\" -l \"N:\pdf_skunkworks\inflation-report-may-2018-page0.svg\""])
completed = subprocess.run(["c:\Progra~1\Inkscape\Inkscape.exe","-z", r"-f \"N:\pdf_skunkworks\inflation-report-may-2018-page0.pdf\"" , r"-l \"N:\pdf_skunkworks\inflation-report-may-2018-page0.svg\""])
print ("stderr:" + str(completed.stderr))
print ("stdout:" + str(completed.stdout))
Just to test the OS plumbing, I wrote some VBA code (my usual language), and it works:
Sub TestShellToInkscape()
    '* Tools->References->Windows Script Host Object Model (IWshRuntimeLibrary)
    Dim sCmd As String
    sCmd = "c:\progra~1\Inkscape\inkscape -z -f ""N:\pdf_skunkworks\inflation-report-may-2018-page0.pdf"" -l ""N:\pdf_skunkworks\inflation-report-may-2018-page0.svg"""
    Debug.Print sCmd
    Dim oWshShell As IWshRuntimeLibrary.WshShell
    Set oWshShell = New IWshRuntimeLibrary.WshShell
    Dim lProc As Long
    lProc = oWshShell.Run(sCmd, 0, True)
End Sub
So I'm obviously doing something silly in the Python code. I'm sure an experienced Python programmer could solve this easily.
Swap your slashes:
import subprocess

completed = subprocess.run(['c:/Progra~1/Inkscape/Inkscape.exe',
                            '-z',
                            '-f', r'N:/pdf_skunkworks/inflation-report-may-2018-page0.pdf',
                            '-l', r'N:/pdf_skunkworks/inflation-report-may-2018-page0.svg'])
print("stderr:" + str(completed.stderr))
print("stdout:" + str(completed.stdout))
Windows accepts forward slashes in paths, so you can use them in Python strings and avoid backslashes, which act as escape prefixes in ordinary (non-raw) string literals. Also note that each option and its value are separate list elements, with no manually escaped quotes; subprocess adds any quoting needed.
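As an aside, completed.stderr and completed.stdout print None simply because subprocess.run does not capture the streams unless asked to; a sketch, assuming Python 3.7+:

import subprocess

completed = subprocess.run(['c:/Progra~1/Inkscape/Inkscape.exe', '-z',
                            '-f', r'N:/pdf_skunkworks/inflation-report-may-2018-page0.pdf',
                            '-l', r'N:/pdf_skunkworks/inflation-report-may-2018-page0.svg'],
                           capture_output=True, text=True)
# Now the streams are real strings instead of None
print("stderr:" + completed.stderr)
print("stdout:" + completed.stdout)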
I'm working on some code that performs a ping from Python and extracts only the latency using awk. This is currently what I have:
from os import system
l = system("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'")
print l
The system() call works fine, but the output goes to the terminal rather than being stored in l. For example, the output I'd get from this particular block of code would be
90.3
0
Why does this happen, and how would I go about actually storing that value in l? This is part of a larger project, so preferably I'd like to keep it in native Python.
Use subprocess.check_output if you want to store the output in a variable:
from subprocess import check_output
l = check_output("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'", shell=True)
print l
Related: Extra zero after executing a python script
os.system() returns the return code of the called command, not the output to stdout.
For detail on how to properly get the command's output (including pre-Python 2.7), see this: Running shell command from Python and capturing the output
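For instance, on Python 3 the same pipeline can be captured with subprocess.run; a sketch reusing the question's awk command:

import subprocess

result = subprocess.run(
    "ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'",
    shell=True, stdout=subprocess.PIPE)
latency = result.stdout.decode().strip()  # e.g. '90.3'
print(latency)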
BTW, I would use the ping package (https://pypi.python.org/pypi/ping); it looks promising.
Here is how I store the output in a variable, in bash:
test=$(ping -c 1 google.com | awk -F"=| " 'NR==2 {print $11}')
echo "$test"
34.9