How would I do the following in Node.js? I realize there's probably no builtin feature or written module for this, so how might I implement this?
>>> import shlex
>>> shlex.split("-a arga -b \"argument b\" arg1 arg2")
['-a', 'arga', '-b', 'argument b', 'arg1', 'arg2']
I assume you've already searched http://npmjs.org (either by searching, or by browsing the shell keyword) instead of just assuming no such thing exists. At a quick glance, for example, various packages like shell-quote seem likely to do what you want, and others like nshell seem likely to either depend on a shlex-like library or to have equivalent code internally, but I haven't actually looked at any of them in detail, so I'm willing to accept that there's nothing out there you can use.
Getting all of the details right is complicated. But fortunately, the source code for Python's shlex.split is written in pure Python, and is reasonably readable. So, you should be able to port it.
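For a sense of what a port involves, here is a deliberately naive Python sketch that only handles double quotes and whitespace; the real shlex also deals with escapes, single quotes, comments, and POSIX vs. non-POSIX modes, which is where most of the complexity lives.

# Naive illustration only -- not a substitute for a real shlex port.
def naive_split(line):
    tokens, current, in_quotes = [], "", False
    for ch in line:
        if ch == '"':
            in_quotes = not in_quotes        # toggle quoted state
        elif ch.isspace() and not in_quotes:
            if current:                      # end of a token
                tokens.append(current)
                current = ""
        else:
            current += ch
    if current:
        tokens.append(current)
    return tokens

print(naive_split('-a arga -b "argument b" arg1 arg2'))
# ['-a', 'arga', '-b', 'argument b', 'arg1', 'arg2']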
If you do this, you should ideally also build a good test suite and publish it as an npm package so that the next time someone else looks, it will exist at http://npmjs.org.
I've done a basic port of shlex to Node.js: https://www.npmjs.com/package/shlex
shell-quote appears to be abandoned, with several open issues and pull requests, but the author has not responded to them in a while.
When a script is invoked explicitly with python, the argv is mucked with so that argv[0] is the path to the script being run. This is the case if invoked as python foo/bar.py or even as python -m foo.bar.
I need a way to recover the original argv (i.e. the one received by python). Unfortunately, it's not as easy as prepending sys.executable to sys.argv, because python foo/bar.py is different from python -m foo.bar (the implicit PYTHONPATH differs, which can be crucial depending on your module structure).
More specifically in the cases of python foo/bar.py some other args and python -m foo.bar some other args, I'm looking to recover ['python', 'foo/bar.py', 'some', 'other', 'args'] and ['python', '-m', 'foo.bar', 'some', 'other', 'args'], respectively.
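To make the difference concrete, this is what the script itself sees in each case (the -m output path is illustrative):

# foo/bar.py
import sys
print(sys.argv)

# $ python foo/bar.py some other args
# ['foo/bar.py', 'some', 'other', 'args']
# $ python -m foo.bar some other args
# ['/abs/path/to/foo/bar.py', 'some', 'other', 'args']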
I am aware of prior questions about this:
how to get the ORIGINAL command line in python? with spaces, tabs, etc
Full command line as it was typed
But these seem to have a misunderstanding of how shells work, and the answers reflect this. I am not interested in undoing the work of the shell (e.g. evaluated shell vars and functions are fine), I just want to get at the original argv given to python.
The only solution I've found is to use /proc/<PID>/cmdline:
import os

with open("/proc/{}/cmdline".format(os.getpid()), 'rb') as f:
    original_argv = f.read().split(b'\0')[:-1]
This does work, but it is Linux-only (no OSX, and Windows support seems to require installing the wmi package). Fortunately for my current use case this restriction is fine. But, it would be nice to have a cleaner, cross platform approach.
The fact that the /proc/<PID>/cmdline approach works gives me hope that python isn't exec-ing before it runs the script (at least not the exec syscall; maybe the exec builtin). I remember reading somewhere that all of this argument handling (e.g. -m) is done in pure Python, not C (this is confirmed by the fact that python -m this.does.not.exist produces an exception that looks like it came from the runtime). So, I'd venture a guess that somewhere in pure Python the original argv is available (perhaps this requires some spelunking through the runtime initialization?).
tl;dr Is there a cross-platform (builtin, preferably) way to get at the original argv passed to python (before it removes the python executable and transforms -m blah into blah.py)?
edit From spelunking, I discovered Py_GetArgcArgv, which can be accessed via ctypes (found it here, links to several SO posts that mention this approach):
import ctypes

_argv = ctypes.POINTER(ctypes.c_wchar_p)()
_argc = ctypes.c_int()
ctypes.pythonapi.Py_GetArgcArgv(ctypes.byref(_argc), ctypes.byref(_argv))
argv = _argv[:_argc.value]
print(argv)
Now this is OS-portable, but not Python-implementation portable (it only works on CPython, and ctypes is yucky if you don't need it). Also, peculiarly, I don't get the right output on Ubuntu 16.04 (python -m foo.bar gives me ['python', '-m', '-m']), but I may just be making a silly mistake (I get the same behavior on OSX). It would be great to have a fully portable solution (that doesn't dig into ctypes).
Python 3.10 adds sys.orig_argv, which the docs describe as the arguments originally passed to the Python executable. If this isn't exactly what you're looking for, it may be helpful in this or similar cases.
There were a bunch of possibilities considered, including changing sys.argv, but this was, I think, wisely chosen as the most effective and non-disruptive option.
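A quick illustration of the difference (the exact values shown are illustrative, for a python -m foo.bar invocation):

# Requires Python 3.10+
import sys

print(sys.orig_argv)  # e.g. ['python', '-m', 'foo.bar', 'some', 'other', 'args']
print(sys.argv)       # e.g. ['/abs/path/to/foo/bar.py', 'some', 'other', 'args']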
This seems like an XY problem, and you are getting into the weeds in order to accommodate some existing complicated test setup (I've found the question behind the question in your comment). Further efforts would be better spent writing a sane test setup.
Use a better test runner, not unittest.
Create any initial state within the test setup, not in the external environment before entering the Python runtime.
Use a plugin for the randomization and seed stuff, personally I use this one but there are others.
For example if you decide to go with pytest runner, all the test setup can be configured within a [tool.pytest.ini_options] section of the pyproject.toml file and/or with a fixture defined in conftest.py. Overriding the default test configuration can be done with environment variables and/or command line arguments, and neither of these approaches will get mucked around by the shell or during Python interpreter startup.
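For instance, a conftest.py fixture along these lines can pick up a seed from an environment variable (SEED is an assumed name here) without any argv gymnastics:

# conftest.py -- sketch only; the SEED variable name is an assumption
import os
import random

import pytest

@pytest.fixture(autouse=True)
def seeded_rng():
    random.seed(int(os.environ.get("SEED", "0")))
    yield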
The manner in which to execute the test suite can and should be as simple as executing a single command:
pytest
And then your perceived problem of needing to recover the original sys.argv will go away.
Your stated problem is:
User called my app with environment variables and arguments.
I want to display a "run like this" diagnostic that will exactly reproduce the results of the current run.
There are at least two solutions:
Abandon the "reproduction" aspect, since the original bash calling command is lost to the portable python app, and instead go for "same effect".
Use a wrapper to capture the original calling command, as suggested by Jean-François Fabre.
With (1) you would be willing to accept ['-m', 'foo'] becoming ['foo.py'], or even turning it into ['/some/dir/foo.py'] in case PYTHONPATH could cause trouble. Displaying ['a', 'b c'] as "a" "b c", or more concisely as a "b c", is straightforward. If environment variables like SEED are an important part of the command line interface then you'll need to iterate over envp and output them, as well. For true reproducibility, you might choose to convert input args to canonical form, compare with observed input args, and exec using the canonical form if they're not identical, so there's no way to execute the bulk of your code using "odd" syntax.
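A sketch of option (1), assuming SEED is the only environment variable you care about (adjust the names to your app):

import os
import shlex
import sys

def run_like_this():
    # Rebuild an equivalent (not identical) command line from what we can observe.
    parts = []
    if "SEED" in os.environ:
        parts.append("SEED=" + shlex.quote(os.environ["SEED"]))
    parts.append(shlex.quote(sys.executable))
    parts.append(shlex.quote(os.path.abspath(sys.argv[0])))
    parts.extend(shlex.quote(arg) for arg in sys.argv[1:])
    return " ".join(parts)

print("Reproduce this run with:", run_like_this())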
With (2) you would bury the app in some inconveniently named file, advertise the wrapper program far and wide, and enjoy the benefits of seeing args before they're munged.
I want a function that programmatically returns completion options from either bash or zsh. There are lots of examples of related questions on Stack Overflow, but no proper, generic answers anywhere. I do NOT want to know how to write a specific completer function for bash.
I've already tried implementing this by reading the Debian /etc/completion shell code, by echoing control codes for tab into "bash -i", and even by using automated subprocess interaction with python-pexpect. Every time I thought I was successful, I found some small problem that invalidated the whole solution. I'd accept a solution in any language, but ideally it would be Python. Obviously the exact input/output would vary depending on systems, but take a look at the example I/O below:
function("git lo") returns ["log","lol","lola"]
function("apt-get inst") returns ["apt-get install"]
function("apt-get") returns []
function("apt-get ") returns ["apt-get autoclean","apt-get autoremove", ...]
function("./setup") returns ["./setup.py"]
If you are thinking of a solution written in shell, it would ideally be something I can execute without "source"ing. For instance, the bash "compgen" command looks interesting (try "compgen -F _git"), but note that "bash -c 'compgen -F _git'" does not work because the completion helper "_git" is not in scope.
This gist is my best solution so far. It meets all the requirements and works well for multiple versions of bash on multiple OSes, but it requires a subprocess call and it's so complicated it's absurd. The comments include full documentation of all the outrageous slings and arrows. I'm still hoping for something more reasonable to come along, but unless it does... this is it!
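For reference, the general shape of the subprocess approach is roughly the sketch below. It is deliberately simplified (the bash-completion path, word splitting, and completion-loader behavior all vary by distro and bash version), which is exactly why the gist ended up so involved.

import subprocess

BASH_SNIPPET = r"""
source /usr/share/bash-completion/bash_completion 2>/dev/null || source /etc/bash_completion
line=$1
COMP_LINE=$line
COMP_POINT=${#line}
COMP_WORDS=($line)
# completing a new, empty word if the line ends with a space
[[ ${line: -1} == " " ]] && COMP_WORDS+=("")
COMP_CWORD=$((${#COMP_WORDS[@]} - 1))
cmd=${COMP_WORDS[0]}
_completion_loader "$cmd" 2>/dev/null
func=$(complete -p "$cmd" 2>/dev/null | sed -E 's/.* -F ([^ ]+) .*/\1/')
[[ -n $func ]] && "$func" "$cmd" "${COMP_WORDS[COMP_CWORD]}" "${COMP_WORDS[COMP_CWORD-1]}"
printf '%s\n' "${COMPREPLY[@]}"
"""

def get_completions(partial_line):
    out = subprocess.run(
        ["bash", "-c", BASH_SNIPPET, "bash", partial_line],
        capture_output=True, text=True,
    )
    return [token for token in out.stdout.splitlines() if token]

print(get_completions("git lo"))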
I have a bunch of code that uses the old deprecated popen from the platform package. Since this is deprecated, I will be moving this to the subprocess package.
What is the equivalent statement to popen("some_command")? Is there a reason that popen was deprecated?
platform.popen has not been deprecated as best I can tell. However, it is a low-level function that you should avoid anyway, for flexibility and portability reasons.
Lots of other process-launching things were deprecated and some removed in Python 3. Many, many attempts at doing this well were made in the history of Python, and subprocess.Popen and its convenience functions are by far the best. Once it existed, the others became cruft, and most of the ones that were retained are just there to support legacy code.
If you're going to port your code to use the subprocess module, don't look for an exact equivalent to what you have been doing, or you will miss out on the ways in which it is better. Read and understand the subprocess documentation and understand the ideas it is using to solve the problem of process-launching better than the older alternatives.
How is subprocess.Popen better than the older alternatives?
It is secure. Instead of something('shell command here'), we do Popen(['shell', 'command', 'here']). This doesn't launch an unnecessary shell process, which makes it less error-prone and less dangerous.
Consider if I asked the user for their name to be input. I might write something('foo %s' % name) in the old style. It should work--if the user gives you the name "Mike", then it becomes a command like foo Mike. But what if the user's name is "Mike Graham"? Then I want foo 'Mike Graham'. So now I always put in the apostrophes, but now what if the user's name is "Mike O'Reilley"? Worse yet, what if his name is "Mike; rm -rf /"? The solution here isn't to try to escape these yourself (which is hard to do right, let alone to do cross-platform), but to pass the arguments directly without bothering with the shell--Popen(['foo', name]) (see the sketch after this list).
It is flexible. You can control the input and output fully.
It is nonblocking. Popen can run a process concurrently with yours.
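A minimal sketch of the first point, on a POSIX system (echo stands in for your real command):

import subprocess

name = "Mike; rm -rf /"          # hostile input stays an ordinary argument
subprocess.run(["echo", name])   # echo prints it literally; no shell ever parses it
# The dangerous equivalent would be: subprocess.run("echo " + name, shell=True)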
Spinning off from another thread, when is it appropriate to use os.system() to issue commands like rm -rf, cd, make, xterm, ls?
Considering there are Python analogues of the above commands (except make and xterm), I'm assuming it's safer to use these built-in Python functions instead of using os.system().
Any thoughts? I'd love to hear them.
Rule of thumb: if there's a built-in Python function to achieve this functionality use this function. Why? It makes your code portable across different systems, more secure and probably faster as there will be no need to spawn an additional process.
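A quick illustration of those built-in equivalents (paths are placeholders):

import os
import shutil

shutil.rmtree("build", ignore_errors=True)  # rm -rf build
os.chdir("/tmp")                            # cd /tmp (affects this process only)
print(os.listdir("."))                      # ls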
One of the problems with system() is that it implies knowledge of the shell's syntax and language for parsing and executing your command line. This creates potential for a bug where you didn't validate input properly, and the shell might interpret something like variable substitution, or decide where an argument begins or ends, in a way you don't expect. Also, another OS's shell might have divergent syntax from your own, including very subtle divergence that you won't notice right away. For reasons like these I prefer to use execve() instead of system() -- you can pass argv tokens directly and not have to worry about something in the middle (mis-)parsing your input.
Another problem with system() (this also applies to using execve()) is that when you code that, you are saying, "look for this program, and pass it these args". This makes a couple of assumptions which may lead to bugs. First is that the program exists and can be found in $PATH. Maybe on some system it won't. Second, maybe on some system, or even a future version of your own OS, it will support a different set of options. In this sense, I would avoid doing this unless you are absolutely certain the system you will run on will have the program. (Like maybe you put the callee program on the system to begin with, or the way you invoke it is mandated by something like POSIX.)
Lastly... There's also a performance hit associated with looking for the right program, creating a new process, loading the program, etc. If you are doing something simple like a mv, it's much more efficient to use the system call directly.
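For example, renaming a file with the library call instead of spawning mv (the filenames are placeholders):

import os

os.rename("old_name.txt", "new_name.txt")   # one rename() call, no extra process
# instead of: os.system("mv old_name.txt new_name.txt")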
These are just a few of the reasons to avoid system(). Surely there are more.
Darin's answer is a good start.
Beyond that, it's a matter of how portable you plan to be. If your program is only ever going to run on a reasonably "standard" and "modern" Linux then there's no reason for you to re-invent the wheel; if you tried to re-write make or xterm they'd be sending the men in the white coats for you. If it works and you don't have platform concerns, knock yourself out and simply use Python as glue!
If compatibility across unknown systems was a big deal you could try looking for libraries to do what you need done in a platform independent way. Or you need to look into a way to call on-board utilities with different names, paths and mechanisms depending on which kind of system you're on.
The only time that os.system might be appropriate is for a quick-and-dirty solution for a non-production script or some kind of testing. Otherwise, it is best to use built-in functions.
Your question seems to have two parts. You mention calling commands like "xterm", "rm -rf", and "cd".
Side Note: calling 'cd' in a sub-shell won't change the working directory of your Python process (use os.chdir for that). I bet that was a trick question ...
As far as other command-level things you might want to do, like "rm -rf SOMETHING", there is already a Python equivalent (shutil.rmtree in this case). This answers the first part of your question. But I suspect you are really asking about the second part.
The second part of your question can be rephrased as "should I use system() or something like the subprocess module?".
I have a simple answer for you: just say NO to using "system()", except for prototyping.
It's fine for verifying that something works, or for that "quick and dirty" script, but there are just too many problems with os.system():
It forks a shell for you -- fine if you need one
It expands wild cards for you -- fine if you want that, but a problem when an argument merely happens to contain wild card characters
It handles redirect -- fine if you want that
It dumps output to stderr/stdout and reads from stdin by default
It tries to understand quoting, but it doesn't do very well (try 'Cmd" > "Ofile')
Related to #5, it doesn't always grok argument boundaries (i.e. arguments with spaces in them might get screwed up)
Just say no to "system()"!
I would suggest that you only use os.system for things that there are not already equivalents for within the os module. Why make your life harder?
The os.system call is starting to be 'frowned upon' in Python. The 'new' replacement would be subprocess.call or subprocess.Popen in the subprocess module. Check the docs for subprocess.
The other nice thing about subprocess is you can read the stdout and stderr into variables, and process that without having to redirect to other file(s).
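For example (Python 3.7+, POSIX command shown):

import subprocess

result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
print(result.stdout)   # captured standard output
print(result.stderr)   # captured standard error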
Like others have said above, there are modules for most things. Unless you're trying to glue together many other commands, I'd stick with the things included in the library. If you're copying files, use shutil, working with archives you've got modules like tarfile/zipfile and so on.
Good luck.
I need to be able to list the command line arguments (if any) passed to other running processes. I have the PIDs already of the running processes on the system, so basically I need to determine the arguments passed to process with given PID XXX.
I'm working on a core piece of a Python module for managing processes. The code is written as a Python extension in C and will be wrapped by a higher level Python library. The goal of this project is to avoid dependency on third party libs such as the pywin32 extensions, or on ugly hacks like calling 'ps' or taskkill on the command line, so I'm looking for a way to do this in C code.
I've Googled this around and found some brief suggestions of using CreateRemoteThread() to inject myself into the other process, then run GetCommandLine() but I was hoping someone might have some working code samples and/or better suggestions.
UPDATE: I've found full working demo code and a solution using NtQueryProcessInformation on CodeProject: http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx - It's not ideal since it's "unsupported" to cull the information directly from the NTDLL structures but I'll live with it. Thanks to all for the suggestions.
UPDATE 2: I managed through more Googling to dig up a C version that does not use C++ code, and is a little more direct/concisely pointed toward this problem. See http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/ for details.
Thanks!
To answer my own question, I finally found a CodeProject solution that does exactly what I'm looking for:
http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx
As #Reuben already pointed out, you can use NtQueryProcessInformation to retrieve this information. Unfortuantely it's not a recommended approach, but given the only other solution seems to be to incur the overhead of a WMI query, I think we'll take this approach for now.
Note that this seems not to work when using code compiled for 32-bit Windows on a 64-bit Windows OS, but since our modules are compiled from source on the target, that should be OK for our purposes. I'd rather use this existing code, and should it break in Windows 7 or at a later date, we can look again at using WMI. Thanks for the responses!
UPDATE: A more concise and C only (as opposed to C++) version of the same technique is illustrated here:
http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/
The cached solution:
http://74.125.45.132/search?q=cache:-wPkE2PbsGwJ:windowsxp.mvps.org/listproc.htm+running+process+command+line&hl=es&ct=clnk&cd=1&gl=ar&client=firefox-a
In CMD:
WMIC /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid
or
WMIC /OUTPUT:C:\ProcessList.txt path win32_process get Caption,Processid,Commandline
Also:
http://mail.python.org/pipermail/python-win32/2007-December/006498.html
http://tgolden.sc.sabren.com/python/wmi_cookbook.html#running_processes
seems to do the trick:
import wmi

c = wmi.WMI()
for process in c.Win32_Process():
    print(process.CommandLine)
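To target a specific PID rather than listing everything (12345 is a placeholder), the same module accepts keyword filters:

import wmi

c = wmi.WMI()
for process in c.Win32_Process(ProcessId=12345):
    print(process.CommandLine)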
By using psutil (https://github.com/giampaolo/psutil):
>>> import psutil, os
>>> psutil.Process(os.getpid()).cmdline()
['C:\\Python26\\python.exe', '-O']
>>>
The WMI approach mentioned in another response is probably the most reliable way of doing this. Looking through MSDN, I spotted what looks like another possible approach; it's documented, but it's not clear whether it's fully supported. In MSDN's language, it--
may be altered or unavailable in future versions of Windows...
In any case, provided that your process has the right permissions, you should be able to call NtQueryProcessInformation with a ProcessInformationClass of ProcessBasicInformation. In the returned PROCESS_BASIC_INFORMATION structure, you should get back a pointer to the target process's Process Environment Block (as field PebBaseAddress). The ProcessParameters field of the PEB will give you a pointer to an RTL_USER_PROCESS_PARAMETERS structure. The CommandLine field of that structure will be a UNICODE_STRING structure. (Be careful not to make too many assumptions about the string; there are no guarantees that it will be NULL-terminated, and it's not clear whether or not you'll need to strip off the name of the executed application from the beginning of the command line.)
I haven't tried this approach--and as I mentioned above, it seems a bit... iffy (read: non-portable)--but it might be worth a try. Best of luck...
If you aren't the parent of these processes, then this is not possible using documented functions :( Now, if you're the parent, you can do your CreateRemoteThread trick, but otherwise you will almost certainly get Access Denied unless your app has admin rights.