How can we sftp a file from a source host to a destination server in Python by invoking unix shell commands in a Python script using os.system? Please help.
I have tried the following code
dstfilename="hi.txt"
host="abc.com"
user="sa"
os.system("echo cd /tmp > sample.txt")
os.system("echo put %(dstfilename)s" % locals())  # line 2
os.system("echo bye >> sample.txt")
os.system("sftp -B /var/tmp/sample.txt %(user)s@%(host)s" % locals())
How do I append the result of line 2 to sample.txt?
os.system("echo put %(dstfilename)s %locals()) >>sample.txt" # Seems this is syntactically not correct.
sample.txt should look like this:
cd /tmp
put /var/tmp/hi.txt
bye
Any help?
Thank you.
You should pipe your commands into sftp. Try something like this:
import subprocess
dstfilename="/var/tmp/hi.txt"
samplefilename="/var/tmp/sample.txt"
target="sa@abc.com"
sp = subprocess.Popen(['sftp', target], shell=False, stdin=subprocess.PIPE,
                      universal_newlines=True)  # text mode, so we can write str
sp.stdin.write("cd /tmp\n")
sp.stdin.write("put %s\n" % dstfilename)
# ... do other stuff ...
sp.stdin.write("put %s\n" % otherfilename)
# ... and finally ...
sp.stdin.write("bye\n")
sp.stdin.close()
But, in order to answer your question:
os.system("echo put %(dstfilename)s %locals()) >>sample.txt" # Seems this is syntactically not correct.
Of course it isn't. You want to pass a string to os.system, so it has to look like
os.system(<string expression>)
with a ) at the end.
The string expression consists of a string literal with an applied % formatting:
"string literal" % locals()
And the string literal contains the redirection for the shell:
"echo put %(dstfilename)s >>sample.txt"
And together:
os.system("echo put %(dstfilename)s >>sample.txt" % locals())
But as said, this is the worst solution I can imagine - better write directly to a temp file, or even better, pipe directly into the subprocess.
Well, I think the literal solution to your question would look something like this:
import os
dstfilename="/var/tmp/hi.txt"
samplefilename="/var/tmp/sample.txt"
host="abc.com"
user="sa"
with open(samplefilename, "w") as fd:
    fd.write("cd /tmp\n")
    fd.write("put %s\n" % dstfilename)
    fd.write("bye\n")
os.system("sftp -B %s %s@%s" % (samplefilename, user, host))
As @larsks says, use a proper file handler to create the temp file for you, and my personal preference is not to do string formatting using locals().
However, depending on the use case, I don't think this is a particularly suitable approach - how does the password for the sftp site get entered, for example?
I think you'd get a more robust solution if you took a look at the SFTPClient in Paramiko, or failing that, you might need something like pexpect to help with ongoing automation.
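If Paramiko is available (a third-party package, assumed installed via pip install paramiko), an upload might look roughly like the sketch below. The hostname, credentials and paths are all placeholder arguments, not values from the question:

```python
def sftp_put(host, user, password, local_path, remote_path):
    """Hedged sketch: upload one file with Paramiko's SFTPClient.
    All arguments are caller-supplied placeholders."""
    import paramiko  # third-party; imported here so the sketch stands alone
    transport = paramiko.Transport((host, 22))
    try:
        transport.connect(username=user, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put(local_path, remote_path)
        sftp.close()
    finally:
        transport.close()
```

Unlike the batch-file approach, this gives you real exceptions on failure instead of a shell exit code.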
If you want a non-zero return code if any of the sftp commands fail, you should write the commands to a file, then run an sftp batch on them. In this fashion, you can then retrieve the return code to check if the sftp commands had any failure.
Here's a quick example:
import subprocess

host = "abc.com"
user = "sa"
user_host = "%s@%s" % (user, host)

def execute_sftp_commands(sftp_command_list):
    with open('batch.txt', 'w') as sftp_file:
        for sftp_command in sftp_command_list:
            sftp_file.write("%s\n" % sftp_command)
        sftp_file.write('quit\n')
    sftp_process = subprocess.Popen(['sftp', '-b', 'batch.txt', user_host], shell=False)
    sftp_process.communicate()
    if sftp_process.returncode != 0:
        print("sftp failed on one or more commands: {0}".format(sftp_command_list))

execute_sftp_commands(['put hi.txt', 'put myfile.txt'])
Quick disclaimer: I did not run this in a shell, so a typo might be present. If so, send me a comment and I will correct it.
Related
I have a binary executable named "abc" and an input file called "input.txt". I can run them with the following bash command:
./abc < input.txt
How can I run this bash command in Python? I tried some ways but got errors.
Edit:
I also need the store the output of the command.
Edit2:
I solved it this way, thanks for the help.
# input_path = path of the input.txt file
out = subprocess.Popen(["./abc"],stdin=open(input_path),stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout,stderr = out.communicate()
print(stdout)
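The same "./abc < input.txt" pattern can also be written with subprocess.run (Python 3.5+). In this sketch "cat" stands in for the "./abc" executable so the example runs anywhere, and the input file is created first to keep it self-contained:

```python
import os
import subprocess
import tempfile

# Create a stand-in input.txt so the example is self-contained.
path = os.path.join(tempfile.gettempdir(), "input.txt")
with open(path, "w") as fh:
    fh.write("hello\n")

# Open the input file and hand it to the child process as stdin,
# just like the shell's "< input.txt" redirection.
with open(path) as infile:
    result = subprocess.run(["cat"], stdin=infile,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(result.stdout.decode())
```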
Use os.system:
import os
os.system("echo test from shell")
Using subprocess is the best way to invoke system commands and executables. It provides better control than os.system() and is intended to replace it. The python documentation link below provides additional information.
https://docs.python.org/3/library/subprocess.html
Here is a bit of code that uses subprocess to read output from head to return the first 100 rows from a txt file and process it row by row. It gives you the output (out) and any errors (err).
import subprocess

mycmd = 'head -100 myfile.txt'
(out, err) = subprocess.Popen(mycmd, stdout=subprocess.PIPE, shell=True).communicate()
myrows = out.decode("utf-8").split("\n")
for myrow in myrows:
    # do something with myrow
    pass
This can be done with os module. The following code works perfectly fine.
import os
path = "path of the executable 'abc' and 'input.txt' file"
os.chdir(path)
os.system("./abc < input.txt")
Hope this works :)
I am storing the number of files in a directory in a variable and storing their names in an array. I'm unable to store file names in the array.
Here is the piece of code I have written.
import os
temp = os.system('ls -l /home/demo/ | wc -l')
no_of_files = temp - 1
command = "ls -l /home/demo/ | awk 'NR>1 {print $9}'"
file_list=[os.system(command)]
for i in range(len(file_list))
os.system('tail -1 file_list[i]')
Your shell scripting is orders of magnitude too complex.
output = subprocess.check_output('tail -qn1 *', shell=True)
or if you really prefer,
os.system('tail -qn1 *')
which however does not capture the output in a Python variable.
If you have a recent-enough Python, you'll want to use subprocess.run() instead. You can also easily let Python do the enumeration of the files to avoid the pesky shell=True:
output = subprocess.check_output(['tail', '-qn1'] + os.listdir('.'))
As noted above, if you genuinely just want the output to be printed to the screen and not be available to Python, you can of course use os.system() instead, though subprocess is recommended even in the os.system() documentation because it is much more versatile and more efficient to boot (if used correctly). If you really insist on running one tail process per file (perhaps because your tail doesn't support the -q option?) you can do that too, of course:
for filename in os.listdir('.'):
    os.system("tail -n 1 '%s'" % filename)
This will still work incorrectly if you have a file name which contains a single quote. There are workarounds, but avoiding a shell is vastly preferred (so back to subprocess without shell=True and the problem of correctly coping with escaping shell metacharacters disappears because there is no shell to escape metacharacters from).
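For completeness, the workaround alluded to above is shlex.quote (Python 3.3+), which escapes a single argument for a POSIX shell so even a name containing a single quote survives intact. The file name here is an illustrative placeholder:

```python
import shlex

# shlex.quote wraps the argument so the shell treats it as one word,
# even when it contains spaces or single quotes.
filename = "file with 'quote'.txt"
cmd = "tail -n 1 %s" % shlex.quote(filename)
print(cmd)
```

Still, skipping the shell entirely (as below) remains the cleaner fix.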
for filename in os.listdir('.'):
    print(subprocess.check_output(['tail', '-n1', filename]))
Finally, tail doesn't particularly do anything which cannot easily be done by Python itself.
for filename in os.listdir('.'):
    with open(filename, 'r') as handle:
        for line in handle:
            pass
        # print the last one only
        print(line.rstrip('\r\n'))
If you have knowledge of the expected line lengths and the files are big, maybe seek to somewhere near the end of the file, though obviously you need to know how far from the end to seek in order to be able to read all of the last line in each of the files.
os.system returns the exitcode of the command and not the output. Try using subprocess.check_output with shell=True
Example:
>>> a = subprocess.check_output("ls -l /home/demo/ | awk 'NR>1 {print $9}'", shell=True)
>>> a.decode("utf-8").split("\n")
Edit (as suggested by @tripleee): you probably don't want to do this, as it will get crazy. Python has great functions for things like this. For example:
>>> import glob
>>> names = glob.glob("/home/demo/*")
will directly give you a list of files and folders inside that folder. Once you have this, you can just do len(names) to get the first command.
Another option is:
>>> import os
>>> os.listdir("/home/demo")
Here, glob will give you the whole filepath /home/demo/file.txt and os.listdir will just give you the filename file.txt
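A quick self-contained demonstration of that difference, using a throwaway directory instead of /home/demo:

```python
import glob
import os
import tempfile

# Make a throwaway directory containing one file.
d = tempfile.mkdtemp()
open(os.path.join(d, "file.txt"), "w").close()

print(glob.glob(os.path.join(d, "*")))  # full paths ending in file.txt
print(os.listdir(d))                    # bare names: ['file.txt']
```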
The ls -l /home/demo/ | wc -l command also doesn't give the correct value, as ls -l prints a "total X" line at the top saying how many blocks it found, along with other info.
You could likely use a loop without much issue:
files = [f for f in os.listdir('.') if os.path.isfile(f)]
for f in files:
    with open(f, 'rb') as fh:
        last = fh.readlines()[-1].decode()
    print('file: {0}\n{1}\n'.format(f, last))
Output:
file.txt
Hello, World!
...
If your files are large then readlines() probably isn't the best option. Maybe go with tail instead:
for f in files:
    print('file: {0}'.format(f))
    subprocess.check_call(['tail', '-n', '1', f])
    print('\n')
The decode is optional, although for text "utf-8" usually works, and if it's a combination of binary/text then something such as "iso-8859-1" should usually work.
You are not able to store file names because os.system does not return the output as you expect it to. For more information, see this.
From the docs
On Unix, the return value is the exit status of the process encoded in the format specified for wait(). Note that POSIX does not specify the meaning of the return value of the C system() function, so the return value of the Python function is system-dependent.
On Windows, the return value is that returned by the system shell after running command, given by the Windows environment variable COMSPEC: on command.com systems (Windows 95, 98 and ME) this is always 0; on cmd.exe systems (Windows NT, 2000 and XP) this is the exit status of the command run; on systems using a non-native shell, consult your shell documentation.
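A small sketch of what "encoded in the format specified for wait()" means in practice on Unix: os.system returns a status word, and os.WEXITSTATUS extracts the actual exit code from it (this example is Unix-only):

```python
import os

# /bin/sh runs "exit 3"; os.system returns a wait()-style status word,
# not the raw exit code.
status = os.system("exit 3")
print(os.WEXITSTATUS(status))  # 3
```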
os.system executes shell commands as-is. To capture the output of these shell commands, you have to use the Python subprocess module.
Note : In your case you can get file names using either glob module or os.listdir(): see How to list all files of a directory
I am trying to copy the contents of a variable to the clipboard automatically within a python script. So, a variable is created that holds a string, and I'd like to copy that string to the clipboard.
Is there a way to do this with Pyclips or
os.system("echo '' | pbcopy")
I've tried passing the variable where the string should go, but that doesn't work which makes sense to me.
Have you tried this?
import os

def addToClipBoard(text):
    command = 'echo ' + text.strip() + ' | clip'
    os.system(command)
Read more solutions here.
Edit:
You may call it as:
addToClipBoard(your_variable)
Since you mentioned PyCLIPS, it sounds like 3rd-party packages are on the table. Let me thrown a recommendation for pyperclip. Full documentation can be found on GitHub, but here's an example:
import pyperclip
variable = 'Some really "complex" string with\na bunch of stuff in it.'
pyperclip.copy(variable)
While the os.system(...'| pbcopy') examples are also good, they could give you trouble with complex strings and pyperclip provides the same API cross-platform.
The accepted answer was not working for me, as the output had quotes, apostrophes and $ signs which were interpreted and replaced by the shell.
I've improved the function based on that answer. This solution uses a temporary file instead of echoing the string through the shell.
import os
import tempfile

def add_to_clipboard(text):
    with tempfile.NamedTemporaryFile("w") as fp:
        fp.write(text)
        fp.flush()
        command = "pbcopy < {}".format(fp.name)
        os.system(command)
Replace the pbcopy with clip for Windows or xclip for Linux.
For X11 (Unix/Linux):
os.system('echo "%s" | xsel -i' % variable)
xsel also gives you a choice of writing to:
the primary selection (default)
the secondary selection (-s option), or
the clipboard (-b option).
If xsel doesn't work as you expect, it is probably because you are using the wrong selection/clipboard.
In addition, with the -a option, you can append to the clipboard instead of overwrite. With -c, the clipboard is cleared.
Improvement
The module subprocess provides a more secure way to do the same thing:
from subprocess import Popen, PIPE
Popen(('xsel', '-i'), stdin=PIPE).communicate(variable.encode())  # bytes on Python 3
I am trying to run the grep command from my Python module using the subprocess library. Since I am doing this operation on a doc file, I am using the catdoc third-party tool to get the content as plain text. I want to store the content in a file. I don't know where I am going wrong, but the program fails to generate the plain text file and eventually to get the grep result. I have gone through the error log, but it's empty. Thanks for all the help.
def search_file(name, keyword):
    # Extract and save the text from doc file
    catdoc_cmd = ['catdoc', '-w', name, '>', 'testing.txt']
    catdoc_process = subprocess.Popen(catdoc_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    output = catdoc_process.communicate()[0]
    grep_cmd = []
    # Search the keyword through the text file
    grep_cmd.extend(['grep', '%s' % keyword, 'testing.txt'])
    print grep_cmd
    p = subprocess.Popen(grep_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    stdoutdata = p.communicate()[0]
    print stdoutdata
On UNIX, specifying shell=True will cause the first argument to be treated as the command to execute, with all subsequent arguments treated as arguments to the shell itself. Thus, the > won't have any effect (since with /bin/sh -c, all arguments after the command are ignored).
Therefore, you should actually use
catdoc_cmd = ['catdoc -w "%s" > testing.txt' % name]
A better solution, though, would probably be to just read the text out of the subprocess' stdout, and process it using re or Python string operations:
catdoc_cmd = ['catdoc', '-w', name]
catdoc_process = subprocess.Popen(catdoc_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for line in catdoc_process.stdout:
    if keyword in line:
        print line.strip()
I think you're trying to pass the > to the shell, but that's not going to work the way you've done it. If you want to spawn a process with its standard output redirected, that's really easy to do: open the file you want the output to go to for writing, and pass it to Popen using the stdout keyword argument instead of PIPE (which attaches a pipe that you read with communicate()).
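A sketch of that redirection approach: open the destination file and pass it as stdout, instead of putting ">" into the argument list. Here "echo" stands in for catdoc so the example runs anywhere, and the output path is a temp-directory placeholder:

```python
import os
import subprocess
import tempfile

# Open the destination file ourselves and hand it to the child process
# as its stdout - no shell, no ">" needed.
outpath = os.path.join(tempfile.gettempdir(), "testing.txt")
with open(outpath, "w") as outfile:
    subprocess.call(["echo", "document text"], stdout=outfile)

with open(outpath) as fh:
    print(fh.read())
```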
I have an input file test.txt as:
host:dc2000
host:192.168.178.2
I want to get all of addresses of those machines by using:
grep "host:" /root/test.txt
And then I get the command output via Python:
import subprocess
file_input='/root/test.txt'
hosts=subprocess.Popen(['grep','"host:"',file_input], stdout= subprocess.PIPE)
print hosts.stdout.read()
But the result is an empty string.
I don't know what the problem is. Can you suggest how to solve it?
Try this:
import subprocess
hosts = subprocess.check_output("grep 'host:' /root/test.txt", shell=True)
print hosts
Your code should work; are you sure that the user has access rights to read the file?
Also, are you certain there is a literal "host:" (including the quotes) in the file? You probably mean this instead:
hosts_process = subprocess.Popen(['grep','host:',file_input], stdout= subprocess.PIPE)
hosts_out, hosts_err = hosts_process.communicate()
Another solution: try the Plumbum package (https://plumbum.readthedocs.io/):
from plumbum.cmd import grep
print(grep("host:", "/root/test.txt"))
print(grep("-n", "host:", "/root/test.txt")) #'-n' option