os.system outputs too much? - python

I'm trying to do this to get the content of the clipboard:
clipboard_data = os.system("pbpaste")
But this doesn't work! Not only does it fail to store whatever was in the clipboard (some text) in the variable (it returns 0), it also prints the command's output to the screen.
How can I make it work how I want?

You should look into what's offered in the subprocess module. In Python 2.7 and later you can achieve what you want with the following, for example:
from subprocess import check_output
clipboard_data = check_output(["pbpaste"])
... or in earlier versions do:
from subprocess import Popen, PIPE
clipboard_data = Popen(["pbpaste"], stdout=PIPE).communicate()[0]
That's missing some error checking, but you get the idea...

Regardless of what you're trying to achieve (I'm not familiar with pbpaste and IMHO there are better ways to access the clipboard), os.system returns the exit status of the process it's invoking, not its standard output.
You should use subprocess.Popen (with its communicate method) to get the standard output.
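For example, a minimal sketch of the difference (using the pbpaste command from the question):
import os
from subprocess import Popen, PIPE

status = os.system("pbpaste")    # the clipboard text goes straight to the terminal
print(status)                    # 0 -- the exit status, not the clipboard contents

clipboard_data = Popen(["pbpaste"], stdout=PIPE).communicate()[0]   # the actual text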

A simple method is to use
f = os.popen(command)
which returns a file-like object; you can then call f.readline() (or f.read()) to get the output as a string. This approach is quite simple, though, and shouldn't be used to process large amounts of data because it uses a lot of the computer's CPU power.
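For example, a minimal sketch of that approach (again assuming the pbpaste command from the question):
import os

f = os.popen('pbpaste')       # run the command; returns a file-like object
clipboard_data = f.read()     # read everything the command printed
f.close()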

Related

Import results of a C++ program to Python

I'm currently dealing with some Python-based Squish GUI tests. Some of these tests call another tool, written in C++ and built as an executable. I have full access to that tool and I'm able to modify it. The tests call it via the command line and currently evaluate the error code, producing a passed or failed result depending on its value.
I think there must be a better way to do this. One problem is that the error code is limited to uint8 on Unix systems, and I would like to be able to share more than just an error code with my Python script.
My first idea was to print everything to a file in JSON or XML and read that file, but this somehow feels wrong to me. Does anybody have a better idea?
When I first read the question, I immediately thought piping the output would work. Check this link out to get a better idea:
Linux Questions Piping
If this doesn't work, I do think writing your output to a file and reading it with your Python script would get the job done.
You can capture the output of the external process via Python and process it as you see fit.
Here is a very simple variant:
import os
import subprocess

def main():
    s = os_capture(["ls"])
    if "ERROR" in s:
        test.fail("Executing 'ls' failed.")

def os_capture(args, cwd=None):
    if cwd is None:
        cwd = os.getcwd()
    stdout = subprocess.Popen(
        args=args,
        cwd=cwd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT).communicate()[0]
    return stdout
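If you want to share more than an exit code, one variation on the idea you already had is to have the C++ tool print JSON to stdout and parse that instead of writing a file. A minimal sketch (the executable name and JSON fields here are made up for illustration):
import json
import subprocess

# Capture whatever the (hypothetical) tool prints to stdout.
output = subprocess.Popen(
    ["./my_cpp_tool"],
    stdout=subprocess.PIPE).communicate()[0]

result = json.loads(output)              # the tool is assumed to print a JSON object
if not result.get("passed", False):      # hypothetical field set by the tool
    test.fail(result.get("message", "Tool reported a failure."))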

Popen file descriptor difference between Python 2.6 and 2.7+ on Linux

I have a really odd problem. I have some code that requires a Linux file descriptor (/dev/fd/N) where N is some number. The following example code demonstrates effectively what is being done.
In order to run the code create a file called "/tmp/test" with some text in it.
The code works in Python 2.6, but in Python 2.7 or later, I get:
cat: /dev/fd/4: No such file or directory
Can anyone figure out why, and what needs to change for the following code to work on 2.7 and later? Assume that I cannot change the need for file descriptors (/dev/fd/4, for example):
Here is the code:
from subprocess import Popen, PIPE

data = Popen(
    ["cat"],
    stdin=open('/tmp/test', 'rb'),
    stdout=PIPE,
    universal_newlines=True).stdout

fd_name = '/dev/fd/%d' % data.fileno()

fddata = Popen(
    ["cat", fd_name],
    stdout=PIPE,
    universal_newlines=True).stdout

print(fddata.read())
I'm not sure why this code would appear to work on Python 2.6; the problem is fundamentally that "cat" and your Python code are in different processes, which will have different associations between fd numbers and the underlying files.
To achieve the specific effect of gifting a subprocess a file descriptor, you'll need to duplicate that fd into the subprocess. This requires interacting with the OS at a somewhat lower level than Popen, and I don't know off the top of my head how best to do that from Python.
Perhaps there's an easier way to achieve the end goal you're looking for? Can you give us more information about what problem you're trying to solve and what constraints your code is under?
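One possible way to do that from Python (a sketch, assuming Python 3.2+, where Popen gained a pass_fds argument that keeps the listed descriptors open in the child):
from subprocess import Popen, PIPE

producer = Popen(["cat"], stdin=open('/tmp/test', 'rb'), stdout=PIPE)
fd = producer.stdout.fileno()

# pass_fds keeps fd open (and inheritable) in the child, so /dev/fd/N resolves there.
consumer = Popen(["cat", "/dev/fd/%d" % fd],
                 stdout=PIPE,
                 pass_fds=(fd,),
                 universal_newlines=True)
print(consumer.stdout.read())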

Call a Ruby script from Python, and return the response

I am using Python, and I want to call a Ruby function (i.e. in a similar manner to how I would call a Python function), specifying function arguments, and then get back a returned value.
I have been searching around and can see how to run Ruby code, e.g. using os.system, but can't seem to either:
call a specific function within the Ruby code, passing variables, or
return the result of the function to Python.
e.g. I am looking to be able to do something similar to:
desired_response = ruby_function(variables)
where ruby_function is within a file called ruby_file, and the response is the result of running that function.
I don't know what the output of your Ruby program is, but you can run your Ruby script with os.startfile() (note that os.startfile() is Windows-only).
Here's a more recent solution that I found, which seems a lot smoother than pulling in a third-party tool for this.
You can simply use Python's subprocess module.
I ended up using it this way:
import os
import subprocess

response = subprocess.run([os.environ['YOUR_SCRIPT_PATH'], first_variable_to_pass_to_script, second_variable])
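If you also need a value back from the Ruby script, one option (a sketch, assuming Python 3.7+ for capture_output and that the script prints its result to stdout; the script name and argument below are placeholders) is to capture its output:
import subprocess

# ruby_file.rb is assumed to call the desired function and print the result.
completed = subprocess.run(
    ["ruby", "ruby_file.rb", "some_argument"],
    capture_output=True,     # collect stdout/stderr instead of printing them
    text=True)               # decode bytes to str
desired_response = completed.stdout.strip()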
Hope this helps!
OLD ANSWER:
I found this article to be helpful!
http://sweetme.at/2014/03/14/a-simple-approach-to-execute-ruby-scripts-with-python/
You might need to use:
from Naked.toolshed.shell import execute_rb
success = execute_rb('testscript.rb')
to be able to handle your standard I/O error in Python rather than Ruby.
Original documentation: https://naked.readthedocs.io/toolshed_shell.html

how to copy os.system('cat /var/log/dmesg') to a variable

import sys
import os
log = os.system('cat /var/log/dmesg')
This code prints the file by running the shell command cat /var/log/dmesg. However, the output is not copied into log. I want to use this data somewhere else, or just print it with print log.
How can I implement this?
Simply read from the file yourself:
with open('/var/log/dmesg') as logf:
    log = logf.read()
print(log)
As an option, take a look at IPython. Interactive Python brings a lot of ease-of-use tools to the table.
ipy$ log = !dmesg
ipy$ type(log)
<3> IPython.utils.text.SList
ipy$ len(log)
<4> 314
The ! syntax calls the system command and captures its stdout into the variable as a string list (SList).
Made for collaborative science, IPython is handy for general-purpose Python coding too. The web-based collaborative Notebook (with interactive graphing, akin to Sage notebooks) is a sweet bonus feature as well, along with the built-in support for parallel computing.
http://ipython.org
To read the output from a child process you can either use fork(), pipe() and exec() from the os module, or use the subprocess module.
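For example, a minimal subprocess-based sketch for this particular case (assuming dmesg is on your PATH and readable by your user):
from subprocess import check_output

# Capture dmesg's stdout as a string instead of letting it print to the terminal.
log = check_output(['dmesg'], universal_newlines=True)
print(log)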

when is 'commands' preferable to 'popen' subprocess?

I'm apprenticing into system administration without schooling, so sometimes I'm missing what is elementary information to many others.
I'm attempting to give my stdout line another argument before printing, but I'm not sure which process I should use, and I'm a bit fuzzy on the commands for subprocess if that's what I should be using.
My current code is:
f = open('filelist', 'r')
searchterm = f.readline()
f.close()
#takes line from a separate file and gives it definition so that it may be callable.
import commands
commands.getoutput('print man searchterm')
This is running, but not giving me any output in the shell. My more important question, though, is: am I using the right command for what I want? Should I be using one of the subprocess functions instead? I tried playing around with Popen, but I don't understand it fully enough to use it correctly.
Ie, I was running
subprocess.Popen('print man searchterm')
but I know without a doubt that's not how you're supposed to run it. Popen requires more arguments than I have given it, like the file location and where to send the output (stdout or stderr). But I was having trouble making these commands work. Would it be something like:
subprocess.Popen(pipe=stdout 'man' 'searchterm')
#am unsure how to give the program my arguments here.
I've been researching everywhere, but it is such a widely used process I seem to be suffering from a surplus of information rather than not enough. Any help would be appreciated, I'm quite new.
Preemptive thanks for any help.
The canonical way to get data from a separate process is to use subprocess (commands is deprecated).
import subprocess
p = subprocess.Popen(['print','man','searchterm'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdoutdata, stderrdata = p.communicate()
Note that some convenience functions exist for splitting strings into lists of arguments. Most notable is shlex.split, which will take a string and split it into a list the same way a shell does. (If nothing is quoted in the string, str.split() works just as well.)
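For example (a small illustrative sketch):
import shlex

shlex.split('man "search term"')   # ['man', 'search term']  -- respects quoting
'man searchterm'.split()           # ['man', 'searchterm']   -- fine when nothing is quoted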
commands is deprecated in Python 2.6 and later, and has been removed in Python 3. There's probably no situation where it's preferable in new code, even if you are stuck with Python 2.5 or earlier.
From the docs:
Deprecated since version 2.6: The commands module has been removed in
Python 3. Use the subprocess module instead.
To run man searchterm in a separate process and display the result in the terminal, you could do this:
import subprocess
proc = subprocess.Popen('man searchterm'.split())
proc.communicate()
