How exactly does Python receive
echo input | python script
and
python script input
differently? I know that one comes through stdin and the other is passed as an argument, but what happens differently in the back-end?
I'm not exactly sure what is confusing you here. stdin and command line arguments are treated as two different things.
Since you're most likely using CPython (the C implementation of Python), the command line args are passed automatically in the argv parameter, as with any other C program. The main function for CPython (located in python.c) receives them:
int main(int argc, char **argv) // **argv <-- Your command line args
{
    wchar_t **argv_copy;
    /* We need a second copy, as Python might modify the first one. */
    wchar_t **argv_copy2;
    /* ..rest of main omitted.. */
The contents of the pipe, on the other hand, arrive on stdin, which you can tap into via sys.stdin.
Using a sample test.py script:
import sys

print("Argv params:\n ", sys.argv)
if not sys.stdin.isatty():
    print("Stdin: \n", sys.stdin.readlines())
Running this with no piping performed yields:
(Python3)jim@jim: python test.py "hello world"
Argv params:
['test.py', 'hello world']
While using echo "Stdin up in here" | python test.py "hello world", we'll get:
(Python3)jim@jim: echo "Stdin up in here" | python test.py "hello world"
Argv params:
['test.py', 'hello world']
Stdin:
['Stdin up in here\n']
Not strictly related, but an interesting note: you can also execute content that arrives on stdin by using the - argument for Python:
(Python3)jim@jim: echo "print('<stdin> input')" | python -
<stdin> input
Kewl!
Related
Is there a way to read data from the command prompt? I have a Java program that relies on 4 input variables from an outside source. These variables are returned to the command prompt after I run a JavaScript program, but I need a way to pass these variables from the command prompt into my Java program. Any help would be greatly appreciated!
When executing the Java program, pass the parameters on the command line; the parameters should be separated by spaces.
java programName parameter1 parameter2 parameter3 parameter4
These parameters will be available in your main method's args argument:
public static void main(String[] args) {
    // This args array contains all four values (its length is 4), and you can easily iterate over them.
    for (int i = 0; i < args.length; i++) {
        System.out.println("Argument " + i + " is " + args[i]);
    }
}
Follow the link shared by @BackSlash:
Command-Line Arguments - The Java™ Tutorials: https://docs.oracle.com/javase/tutorial/essential/environment/cmdLineArgs.html
It has everything you need to clear up your doubts.
The content from the link is quoted below:
Displaying Command-Line Arguments passed by user from command-line to a Java program
The following example displays each of its command-line arguments on a
line by itself:
public class DisplayCommandLineParameters {
    public static void main(String[] args) {
        for (String s : args) {
            System.out.println(s);
        }
    }
}
To compile the program: From the Command Prompt, navigate to the directory containing your .java file, say C:\test, by typing the cd
command below.
C:\Users\username>cd c:\test
C:\test>
Assuming the file, say DisplayCommandLineParameters.java, is in the
current working directory, type the javac command below to compile it.
C:\test>javac DisplayCommandLineParameters.java
C:\test>
If everything went well, you should see no error messages.
To run the program: The following example shows how a user might run the class.
C:\test>java DisplayCommandLineParameters Hello Java World
Output:
Hello
Java
World
Note that the application displays each word — Hello, Java and World —
on a line by itself. This is because the space character separates
command-line arguments.
To have Hello, Java and World interpreted as a single argument, the
user would join them by enclosing them within quotation marks.
C:\test>java DisplayCommandLineParameters "Hello Java World"
Output: Hello Java World
(This question was asked here, but the answer was Linux-specific; I'm running on FreeBSD and NetBSD systems which (EDIT: ordinarily) do not have /proc.)
Python seems to dumb down argv[0], so you don't get what was passed in to the process, as a C program would. To be fair, sh and bash and Perl are no better. Is there any way I can work around this, so my Python programs can get that original value?

I have administrative privileges on this FreeBSD system, and can do things like changing everyone's default PATH environment variable to point to some other directory before the one that contains python2 and python3, but I don't have control over creating /proc.

I have a script which illustrates the problem. First, the script's output:
the C child program gets it right: arbitrary-arg0 arbitrary-arg1
the python2 program dumbs it down: ['./something2.py', 'arbitrary-arg1']
the python3 program dumbs it down: ['./something3.py', 'arbitrary-arg1']
the sh script dumbs it down: ./shscript.sh arbitrary-arg1
the bash script dumbs it down: ./bashscript.sh arbitrary-arg1
the perl script drops arg0: ./something.pl arbitrary-arg1
... and now the script:
#!/bin/sh
set -e
rm -rf work
mkdir work
cd work
cat > childc.c << EOD; cc childc.c -o childc
#include <stdio.h>
int main(int argc,
         char **argv
        )
{
    printf("the C child program gets it right: ");
    printf("%s %s\n", argv[0], argv[1]);
}
EOD
cat > something2.py <<EOD; chmod 700 something2.py
#!/usr/bin/env python2
import sys
print "the python2 program dumbs it down:", sys.argv
EOD
cat > something3.py <<EOD; chmod 700 something3.py
#!/usr/bin/env python3
import sys
print("the python3 program dumbs it down:", sys.argv)
EOD
cat > shscript.sh <<EOD; chmod 700 shscript.sh
#!/bin/sh
echo "the sh script dumbs it down:" \$0 \$1
EOD
cat > bashscript.sh <<EOD; chmod 700 bashscript.sh
#!/bin/bash
echo "the bash script dumbs it down:" \$0 \$1
EOD
cat > something.pl <<EOD; chmod 700 something.pl
#!/usr/bin/env perl
print("the perl script drops arg0: \$0 \$ARGV[0]\n")
EOD
cat > launch.c << EOD; cc launch.c -o launch; ./launch
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc,
         char **argv,
         char **arge)
{
    int child_status;
    size_t program_index;
    pid_t child_pid;
    char *program_list[] = {"./childc",
                            "./something2.py",
                            "./something3.py",
                            "./shscript.sh",
                            "./bashscript.sh",
                            "./something.pl",
                            NULL
                           };
    char *some_args[] = {"arbitrary-arg0", "arbitrary-arg1", NULL};
    for(program_index = 0;
        program_list[program_index];
        program_index++
       )
    {
        child_pid = fork();
        if(child_pid < 0)
        {
            perror("fork()");
            exit(1);
        }
        if(child_pid == 0)
        {
            execve(program_list[program_index], some_args, arge);
            perror("execve");
            exit(1);
        }
        wait(&child_status);
    }
    return 0;
}
EOD
What follows is a generally useful answer to what I meant to ask.
The answer that kabanus gave is excellent, given the way I phrased the problem, so of course he gets the up-arrow and the checkmark. The transparency is a beautiful plus, in my opinion.
But it turns out that I didn't specify the situation completely. Each python script starts with a shebang, and the shebang feature makes it more complicated to launch a python script with an artificial argv[0].
Also, transparency isn't my goal; backward compatibility is. I would like the normal situation to be that sys.argv works as shipped, right out of the box, without my modifications. Also, I would like any program which launches a python script with an artificial argv[0] not to have to worry about any additional argument manipulation.
Part of the challenge is to overcome the "shebang changing argv" problem.
The answer is to write a wrapper in C for each script, and the launching program launches that program instead of the actual script. The actual script looks at the arguments to the parent process (the wrapper).
The cool thing is that this can work for script types other than python. You can download a proof of concept here which demonstrates the solution for python2, python3, sh, bash, and perl. You'll have to change each CRLF to LF, using dos2unix or fromdos. This is how the python3 script handles it:
import os
import subprocess

def get_arg0():
    return subprocess.run("ps -p %s -o 'args='" % os.getppid(),
                          shell=True,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE
                          ).stdout.decode(encoding='latin1').split(sep=" ")[0]
The solution does not rely on /proc, so it works on FreeBSD as well as Linux.
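For reference, here is a minimal sketch of what such a C wrapper might look like; the names and paths are hypothetical, and the actual proof of concept mentioned above may differ. The wrapper keeps whatever argv[0] it was launched with, forks, runs the real script in the child, and stays alive as the parent so the script can read the wrapper's command line via ps:

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical wrapper: launched in place of the script, with any argv[0].
 * The child execs the real script; because this wrapper remains the script's
 * parent, "ps -p <ppid> -o args=" inside the script reports the wrapper's
 * argv, including the artificial argv[0]. */
int main(int argc, char **argv)
{
    pid_t child = fork();
    if (child < 0) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        /* Replace argv[0] with the real script path (hypothetical name)
         * and keep the remaining arguments unchanged. */
        argv[0] = "./something3.py";
        execv(argv[0], argv);
        perror("execv");
        _exit(1);
    }
    int status;
    wait(&status);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}

The design point is that the wrapper, not the launcher, is the script's direct parent, so the ps lookup in get_arg0() above lands on a process whose argv is fully under the launcher's control.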
What I think is the path of least resistance here is a bit hacky, but would probably work on any OS. Basically you double wrap your Python calls. First (using Python 3 as an example), the Python3 in your path is replaced by a small C program, which you know you can trust:
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    // The "python3" below should be replaced by the path to the original interpreter.
    // In my tests I named this program wrap_python, so there was no clash, but if you
    // are changing this system-wide (and calling the wrapper python3) you can't leave it like this.
    const char *const program = "python3 wrap_python.py";
    size_t size = strlen(program) + 1; // + 1 for the terminating null character
    for(int count = 0; count < argc; ++count)
        size += strlen(argv[count]) + 1; // + 1 for space
    char *cmd = malloc(size);
    if(!cmd) exit(-1);
    cmd[0] = '\0';
    strcat(cmd, program);
    for(int count = 1; count < argc; ++count) {
        strcat(cmd, " ");
        strcat(cmd, argv[count]);
    }
    strcat(cmd, " ");
    strcat(cmd, argv[0]);
    return system(cmd);
}
You can make this faster, but hey, premature optimization?
Note we are calling a script called wrap_python.py (you would probably need a full path here). We want to pass the "true" argv, but we need to do some work in the Python context to make it transparent. The true argv[0] is passed as the last argument, and wrap_python.py is:
from sys import argv
argv[0] = argv.pop(-1)
print("Passing:", argv) # Delete me
exit(exec(open(argv[1]).read())) # Different in Python 2. Close the file handle if you're pedantic.
Our small wrapper replaces argv[0] with the one provided by our C wrapper (removing it from the end), and then manually executes the target script in the same context. Specifically, __name__ == "__main__" is true.
This would be run as
python3 my_python_script arg1 arg2 etc...
where your path now will point to that original C program. Testing this on
import sys
print(__name__)
print("Got", sys.argv)
yields
__main__
Got ['./wrap_python', 'test.py', 'hello', 'world', 'this', '1', '2', 'sad']
Note I called my program wrap_python - you want to name it python3.
Use Python's ctypes module to get the "program name" which by default is set to argv[0]. See Python source code here. For example:
import ctypes

GetProgramName = ctypes.pythonapi.Py_GetProgramName
GetProgramName.restype = ctypes.c_wchar_p

def main():
    print(GetProgramName())

if __name__ == '__main__':
    main()
Running the command prints:
$ exec -a hello python3 name.py
hello
I want to call a Python script from C, passing some arguments that are needed in the script.
The script I want to use is mrsync, or multicast remote sync. I got this working from command line, by calling:
python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata
-m is the list containing the target ip-addresses.
-s is the directory that contains the files to be synced.
-t is the directory on the target machines where the files will be put.
So far I managed to run a Python script without parameters, by using the following C program:
Py_Initialize();
FILE* file = fopen("/tmp/myfile.py", "r");
PyRun_SimpleFile(file, "/tmp/myfile.py");
Py_Finalize();
This works fine. However, I can't find how to pass these arguments to the PyRun_SimpleFile(..) method.
Seems like you're looking for an answer using the python development APIs from Python.h. Here's an example for you that should work:
# My Python script, called mypy.py
import sys

if len(sys.argv) != 3:
    sys.exit("Not enough args")

ca_one = str(sys.argv[1])
ca_two = str(sys.argv[2])
print "My command line args are " + ca_one + " and " + ca_two
And then the C code to pass these args:
// My code file
#include <stdio.h>
#include <python2.7/Python.h>

void main()
{
    FILE* file;
    int argc;
    char * argv[3];

    argc = 3;
    argv[0] = "mypy.py";
    argv[1] = "-m";
    argv[2] = "/tmp/targets.list";

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();

    return;
}
If you can pass the arguments into your C function this task becomes even easier:
void main(int argc, char *argv[])
{
    FILE* file;

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();

    return;
}
You can just pass those straight through. Now my solutions only used 2 command line args for the sake of time, but you can use the same concept for all 6 that you need to pass... and of course there are cleaner ways to capture the args on the Python side too, but that's just the basic idea.
Hope it helps!
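As for the "cleaner ways to capture the args on the Python side" mentioned above, one option is argparse. Here is an illustrative sketch whose flag names simply mirror the mrsync invocation from the question; it is not part of mrsync itself:

# mypy.py -- illustrative only; works under Python 2.7 and 3.x
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-m", dest="targets", help="file listing target IP addresses")
parser.add_argument("-s", dest="source", help="directory containing the files to sync")
parser.add_argument("-t", dest="dest", help="directory on the target machines")
args = parser.parse_args()

print("targets=%s source=%s dest=%s" % (args.targets, args.source, args.dest))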
You have two options.
Call
system("python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata")
in your C code.
Actually use the API that mrsync (hopefully) defines. This is more flexible, but much more complicated. The first step would be to work out how you would perform the above operation as a Python function call. If mrsync has been written nicely, there will be a function mrsync.sync (say) that you call as
mrsync.sync("/tmp/targets.list", "/tmp/sourcedata", "/tmp/targetdata")
Once you've worked out how to do that, you can call the function directly from the C code using the Python API.
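As a rough sketch of that last step, assuming (as above) that mrsync actually exposes a sync function; the name and signature are hypothetical:

#include <python2.7/Python.h>

int main(void)
{
    Py_Initialize();

    /* Hypothetical: assumes mrsync is importable and provides sync(targets, source, dest). */
    PyObject *module = PyImport_ImportModule("mrsync");
    if (!module) {
        PyErr_Print();
        Py_Finalize();
        return 1;
    }

    PyObject *result = PyObject_CallMethod(module, "sync", "(sss)",
                                           "/tmp/targets.list",
                                           "/tmp/sourcedata",
                                           "/tmp/targetdata");
    if (!result)
        PyErr_Print();

    Py_XDECREF(result);
    Py_DECREF(module);
    Py_Finalize();
    return 0;
}

You would compile this against the Python headers and library, e.g. something like cc call_mrsync.c -I/usr/include/python2.7 -lpython2.7 (paths and names vary by system).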
This question already has an answer here:
C - Function with variable number of arguments and command line arguments
(1 answer)
Closed 5 years ago.
I want to call/execute a bash script from a C program, including any number of arguments passed on the command line for the script.
I found a related post, How to pass command line arguments from C program to the bash script?, but in my case the number of arguments passed on the command line may vary; it is not a fixed number. So the C program has to collect any number of command line arguments and pass them on to the bash script to execute.
Is this possible?
To give you a clear idea, when I run my test bash script, I get the output as expected.
# ./bashex.sh
No arguments passed
# ./bashex.sh hello world
Arguments passed are #1 = hello
Arguments passed are #2 = world
# ./bashex.sh hello world Hi
Arguments passed are #1 = hello
Arguments passed are #2 = world
Arguments passed are #3 = Hi
What I do not know is how to execute this script, including the command line arguments, from a C program.
Pretty much bare minimum, nothing checked, ./foo segfaults if no argument, use at own risk:
$ cat foo.c
#include <stdlib.h>
#include <string.h>

int main (int argc, char *argv[])
{
    char bar[100] = "./bar.sh ";
    strcat(bar, argv[1]);
    system(bar);
}
The script:
$ cat bar.sh
#!/bin/sh
echo $1
The execution:
$ ./foo baz
baz
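Since the question is about a variable number of arguments, here is a sketch that forwards every argument it receives to the script from the question (bashex.sh), still using system() like the answer above. Arguments containing spaces or shell metacharacters are deliberately not handled:

#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    const char *script = "./bashex.sh";

    /* Work out how much room the command line needs: script plus " arg" per argument. */
    size_t size = strlen(script) + 1;
    for (int i = 1; i < argc; i++)
        size += strlen(argv[i]) + 1;

    char *cmd = malloc(size);
    if (!cmd)
        return 1;

    strcpy(cmd, script);
    for (int i = 1; i < argc; i++) {
        strcat(cmd, " ");
        strcat(cmd, argv[i]);
    }

    int status = system(cmd);
    free(cmd);
    return status ? 1 : 0;
}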
When I'm working on a bash script and need to write a particularly complex logic I usually fall back on using python, like this:
#!/bin/bash

function foo() {
python << END
if 1:
    print "hello"
END
}

foo
How can I do the same thing from within a Makefile?
You may write a bash script containing your functions, say myscript.sh:
#!/bin/bash

foo() {
python << END
if 1:
    print "hello $1"
END
}
Now here is a Makefile:
SHELL = /bin/bash
mytarget ::
#source myscript.sh ;\
foo world
Finally type in your terminal:
$ make mytarget
hello world
Some explanations on the Makefile: defining SHELL lets make know which shell to run. The :: makes this a double-colon rule, which here behaves much like a phony target (and a little more); you can replace it with : for an actual target.
The key point is to run source and call the function in the same shell, that is, on the same logical line (since make runs a different shell for each line); this is achieved by the ;\ at the end of each line.
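As a variation on the same idea, here is a sketch (the target name is made up) that keeps a simple snippet in the recipe itself instead of sourcing a helper script. The $$ is needed so that the shell, not make, expands the variable, the ;\ continuation again keeps both lines in one shell, and recipe lines must start with a tab:

SHELL = /bin/bash

hello ::
	@name=world ;\
	python -c "print('hello ' + '$$name')"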