I decided to rewrite all our Bash scripts in Python (there are not many of them) as my first Python project. The reason is that, although I'm quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the early stages of its development, I think switching to Python now is the right thing to do.
Are there scripts that should always be written in Bash? For example, we have an init.d daemon script - is it OK to use Python for it?
We run CentOS.
Thanks.
It is OK in the sense that you can do it. But the scripts in /etc/init.d usually need to load config data and some helper functions (for example, to print the nice green OK on the console), which would be hard to emulate in Python.
So try to convert those which make sense (i.e. those which contain complex logic). If you need job control (starting/stopping processes), then bash is better suited than Python.
Generally, scripts in /etc/init.d are written in the "native shell" of the OS (e.g. bash, sh, POSIX sh, etc.). This is especially true of scripts that will be run at the lower init levels (e.g. not every directory will be mounted in single-user mode, including wherever Python or its site libraries might be installed).
Most OS's provide some "helper functions" that make writing scripts in some native shell easier. These scripts define certain return codes and messages that are required/desired when writing service scripts. On RedHat based systems, see:
/etc/init.d/functions
Beyond that, the service scripts in /etc/init.d can be written in any language (including compiled languages). The general calling syntax will need to be supported. Typically there are three arguments that should be supported: start, stop, and status. Some additional arguments might be appropriate, depending on the purpose of the scripts.
% /etc/init.d/foo (start|stop|status)
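For illustration, here is a minimal, hedged sketch of what such a script could look like in Python 3, handling the three standard arguments. The daemon name "food", its pid file path, and the omitted daemonization logic are all assumptions; a real script would also want to use (or at least mimic) the distribution's helper functions and LSB exit codes.
#!/usr/bin/env python3
# Minimal sketch of an init-style script in Python. "food" and its pid file
# are hypothetical; actually starting/daemonizing the process is left out.
import os, signal, sys

PIDFILE = "/var/run/food.pid"

def read_pid():
    try:
        with open(PIDFILE) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

def start():
    return 0  # real daemonization / process launch would go here

def stop():
    pid = read_pid()
    if pid:
        os.kill(pid, signal.SIGTERM)
    return 0

def status():
    pid = read_pid()
    if pid and os.path.exists("/proc/%d" % pid):
        print("food is running (pid %d)" % pid)
        return 0
    print("food is stopped")
    return 3  # LSB code for "program is not running"

actions = {"start": start, "stop": stop, "status": status}
if len(sys.argv) != 2 or sys.argv[1] not in actions:
    sys.exit("Usage: %s {start|stop|status}" % sys.argv[0])
sys.exit(actions[sys.argv[1]]())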
Every task has languages that are better suited to it and languages that are less so. Replacing sh's backtick command substitution is pretty ponderous in Python, as are the myriad quoting details, just to name a couple. There are likely better projects to cut your teeth on.
And there is everything said above about Python being relatively heavyweight and not necessarily available when needed.
Certain scripts that I write simply involve looping over a glob in some directories and then executing a piped series of commands on the results. This kind of thing is much more tedious in Python.
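For comparison, here is a rough sketch (in Python 3, with an invented directory and pipeline) of that glob-and-pipe pattern; in bash the same thing is a one-line for loop.
# Sketch: loop over a glob and run a small pipeline on each file, roughly:
#   for f in /var/log/myapp/*.log; do grep ERROR "$f" | wc -l; done
import glob
import subprocess

for path in glob.glob("/var/log/myapp/*.log"):
    grep = subprocess.Popen(["grep", "ERROR", path], stdout=subprocess.PIPE)
    wc = subprocess.Popen(["wc", "-l"], stdin=grep.stdout, stdout=subprocess.PIPE)
    grep.stdout.close()  # let grep get SIGPIPE if wc exits early
    print(path, wc.communicate()[0].decode().strip())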
Can Jython help here? Should I run Grails on top of Jython, and if so, how? Somehow I need to be able to run Grails and a Python script on the same JVM. There are other possibilities, like exposing the Python script as a REST service or using some form of interprocess communication, but let's not deal with those for now.
Jython is a JSR-223 scripting language, so you should be able to follow the usual methods (http://www.jython.org/archive/22/userguide.html#embedding-jython):
import javax.script.*

ScriptEngine engine = new ScriptEngineManager().getEngineByName("python")
engine.eval("x = 2 + 2")
It might be a non-trivial thing to get your Jython and any libraries you want to use organized on your server, but if all you need is the language and the standard libraries, you should be able to just add it as a dependency in your buildfile - it is on Maven Central (compile 'org.python:jython:2.7.1b3').
But keep in mind that many python libraries (i.e. ones that use compiled C-code) are not going to work with Jython.
So, you might need to instead use a native python installation and call it as a process (using ProcessBuilder, for example). Groovy has some nice sugar for doing this sort of thing with strings.
Process process = "python mypython.py".execute()
An internet search for things like "groovy execute shell command" will yield lots of examples. Depending on your deployment scenario, this could be tricky to set up and maintain.
I'm interested in what pitfalls there can be (apart from Python not being installed on the target system) when using Python for deb package flow-control scripts (preinst, postinst, etc.). Would it be practical to implement those scripts in Python rather than sh?
As I understand it, it's at least possible.
The only reason this isn't commonly done, afaik, is that it's not convention, and Python isn't usually more useful or straightforward than plain shell script for the sorts of things that maintainer scripts do. When it is more useful, you can often break out the Python-needing functionality into a separate Python script which is called by the maintainer scripts.
It can help to follow convention in this sort of situation, since there are a lot of helpful tools and scripts (e.g., Lintian, Debhelper) which generally assume that maintainer scripts use bash. If they don't, it's ok, but those tools may not be as useful as they would be otherwise. The only other issue I think you need to be aware of is that if your preinst or postrm scripts need Python, then Python needs to be a pre-dependency (Pre-Depends) of your package instead of just a Depends.
That said, I've found it useful to use Python in a maintainer script before.
I created a module in Python which provides about a dozen functionalities. While it will mostly be used from within Python, there is a good fraction of legacy users who will be calling it from Perl.
What is the best way to make a plug in to this module? My thoughts are:
Provide the functionalities as command line utilities and make system calls
Create some sort of server and handle RPC calls (say, via JSON RPC)
Any advice?
One other choice is to inline Python directly in your Perl script, using Inline::Python.
This may be simpler than other solutions, and only requires one additional module.
In the short run the easiest solution is to use Inline::Python. Closely followed by calling a command-line script.
In the long run, using a server to provide RPC functionality or simply calling a command-line script will give you the most future-proof solution.
Why?
Because that way you aren't tied to Perl or Python as the language used to build the systems that consume the services provided by your library. Either method creates a clear, language-independent interface that you can use with whatever development environment you adopt.
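As a rough sketch of the server route (using the standard library's XML-RPC module purely as a stand-in for the JSON-RPC idea mentioned in the question; the function name and port are invented):
# Sketch: expose one hypothetical function from the module over RPC so Perl,
# or any other language, can call it without embedding Python.
from xmlrpc.server import SimpleXMLRPCServer

def frobnicate(text):
    # stand-in for one of the module's real functions
    return text.upper()

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(frobnicate, "frobnicate")
server.serve_forever()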
Depending on your needs any of the presented options may be the "best choice". Depending on how your needs evolve over time, a different choice may be revealed as "best".
My approach to this would be to ask a couple of questions:
How often do you change development tools? You've switched to Python from Perl. Did you start with Tcl and move to Perl? Are you going to switch to the exciting new language X in 1, 5, or 10 years? If you change tools 'often' (whatever that means), emphasize cross-tool compatibility.
How fast is fast enough? Is the start up time for command line solutions ok? Does Inline::Python slow things down too much (you are still initializing a Python interpreter, it's just embedded in your Perl interpreter)?
Based on the answers to these questions, I would do the simplest thing that is likely to work.
My guess is that means in order:
Inline::Python
Command line scripts
Build an RPC server
Provide the functionalities as command line utilities and make system calls
Works really nicely. This is the way programs like Python (and Perl) are meant to be used.
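For instance, a hedged sketch of such a wrapper, with "mymodule" and "frobnicate" standing in for the real module and one of its functions:
#!/usr/bin/env python3
# Sketch: thin command-line front end so Perl (or anything else) can reach
# the module with a plain system call. Names are placeholders.
import argparse
import mymodule  # hypothetical: the module providing the dozen functionalities

parser = argparse.ArgumentParser(description="CLI wrapper around mymodule")
parser.add_argument("command", choices=["frobnicate"])
parser.add_argument("value")
args = parser.parse_args()

if args.command == "frobnicate":
    print(mymodule.frobnicate(args.value))
Perl can then invoke it with system() or backticks and read its output like any other command.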
I need to write a script to do the following:
Monitor a queuing system, which is accessible by shell commands.
Create directories from templates using a mix of inline text editing, cp/mv, command-line scripts, and compiled C++ programs.
Check for error conditions.
Write files on error conditions.
Note: 2D arrays would be mildly useful to my program, but I'm currently making do with several 1D arrays (due to the limitations of Bash's arrays).
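(For comparison, a tiny, invented sketch of how 2D data looks in Python, where that limitation goes away:)
# Sketch: nested lists (or dicts keyed by tuples) give the 2D structures
# that bash arrays make awkward. Job names and fields are invented.
jobs = [["job1", "queued", 0], ["job2", "running", 42]]
attempts = {("job1", "retries"): 3}
print(jobs[1][1], attempts[("job1", "retries")])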
Those tasks all seem somewhat "shell heavy", in that they can easily be implemented with a bunch of shell commands, so I thought Bash scripting would be a natural way to go. My results thus far have been good, but before I begin refactoring and finalizing my code, I was wondering whether it'd be better to port it to Python. I've read in numerous places that Python is "superior" to bash. I've done a bit of Python scripting, and as far as I can tell that's because it has more built-in libraries and supports object-oriented programming. However, all the scripts I've seen using shell commands, such as this one:
http://magazine.redhat.com/2008/02/07/python-for-bash-scripters-a-well-kept-secret/
implement obnoxious syntax, such as having to define commands as string variables:
#You could add another bash command here
#HOLDING_SPOT="""fake_command"""
#Determines Home Directory Usage in Gigs
HOMEDIR_USAGE = """
du -sh $HOME | cut -f1
"""
#Determines IP Address
IPADDR = """
/sbin/ifconfig -a | awk '/(cast)/ { print $2 }' | cut -d':' -f2 | head -1
"""
...and requiring wrapper functions and other fun.
Am I just being silly, or does that seem less intuitive? Is there a speed advantage to using Python that would outweigh the simplicity advantage of Bash when it comes to shell commands? Or is the syntax of bash (no 2D arrays, brace/parenthesis intricacies) reason enough to jump to Python?
If you can't find a reason to switch it's probably because there's no reason.
I started to switch to Python because some of my scripts required templating that was easier to do in something other than shell script.
There were also other scripts that required configuration file parsing or argument parsing, which made me learn a bit more Python, and finally others dealing with Trac (which is written in Python) that pushed me to switch.
But if you can write your scripts in bash quickly and cleanly without requiring another tool, stay with bash; it is a great tool.
Bash is good right at the actual interface with the system, because it's so easy to run external programs, and for processing data that comes in streams. Python is good at work with less surface area, involving more looping, logic, data structures, and so on. Horses for courses.
It's hard to advise you which to use, or how to use both, without knowing the details of your problem. But from the sound of it, this is something I'd do entirely in shell script. I'd find a way of doing it without arrays, though; the key is to stop thinking in C++ and start thinking in bash.
It really depends on what you are trying to do, but here are some general differences.
Shells, including bash, want to treat all variables as strings and to access variable values through simple textual substitution. Sometimes this is convenient; sometimes not so much.
In shell scripts you tend to manipulate data by piping it through various utilities. Sometimes this is convenient; sometimes not so much.
Python has very good string manipulation, so if you are doing a lot of text file munging it can be faster (both to write and to run) to do it in Python rather than figuring out what sed or awk incantation you want.
That's another thing -- many of the tools you will use in shell scripts have so many options that they are essentially mini-languages of their own, some of which are terse to the point of being obtuse. Python can be much more readable than shell scripts that invoke a lot of these mini-languages since it is consistent syntactically.
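A small, hedged example of that trade-off, with an invented file: summing the second column of a whitespace-separated file, which in shell would typically be an awk one-liner such as awk '{ s += $2 } END { print s }' data.txt.
# Sketch: sum the second column of a whitespace-separated file ("data.txt"
# is an invented example); the shell equivalent is a short awk incantation.
total = 0.0
with open("data.txt") as f:
    for line in f:
        fields = line.split()
        if len(fields) >= 2:
            total += float(fields[1])
print(total)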
Some projects lend themselves to an object-oriented approach, which Python supports handily, while bash... not so much. Python also supports some functional techniques, which are handy for some projects, while bash... you get the picture.
In short, a shell's scripting language is intended to be glue for tying together various single-function utilities, with some conveniences to make that easier, while Python is a robust programming language in which you can build complete GUI or Web applications if you like.
I personally also find the overall syntax of Python much nicer than a shell scripting language; my programs tend to do what I want without a lot of fiddling, and when they don't it's usually because of a logic error rather than an error translating what I want into the language. To me, Python is nearly unique in this regard, although I hear Ruby programmers say similar things about their favorite language.
If you are familiar with and like shell scripting, Perl might be an easier step up since it was originally intended to replace shell scripts (and sed/awk).
I've got some experience with Bash, which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc.
Python is well suited for these tasks, and I would guess much easier to develop in and debug than Windows batch files.
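To give a feel for it, here is a hedged sketch of the kind of task described, assuming Python 3; the paths and the sqlcmd restore step are invented placeholders.
# Sketch: unzip a backup and hand it to a (hypothetical) SQL restore command.
# All paths and the sqlcmd invocation are placeholders.
import shutil, subprocess, zipfile

shutil.copy2(r"C:\backups\db_backup.zip", r"C:\restore\db_backup.zip")
with zipfile.ZipFile(r"C:\restore\db_backup.zip") as zf:
    zf.extractall(r"C:\restore")
subprocess.call(["sqlcmd", "-S", "localhost", "-i", r"C:\restore\restore.sql"])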
The question is, I think, how easy and painless it is to ensure that all the computers you have to run these scripts on have Python installed.
Summary
Windows: no need to think, use Python.
Unix: quick or run-it-once scripts are for Bash, serious and/or long life time scripts are for Python.
The big talk
In a Windows environment, Python is definitely the best choice since cmd is crappy and PowerShell has not really settled yet. What's more, Python can run on several platforms, so it's a better investment. Finally, Python has a huge set of libraries, so you will almost never hit the "god-I-can't-do-that" wall. This is not true for cmd and PowerShell.
In a Linux environment, this is a bit different. A lot of one-liners are shorter, faster, more efficient and often more readable in pure Bash. But if you know your quick and dirty script is going to stay around for a while or will need to be improved, go for Python, since it's far easier to maintain and extend, and you will be able to do most of the tasks you can do with GNU tools using the standard library. And if you can't, you can still call the command line from a Python script.
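For example (a hedged sketch with an invented rsync invocation, assuming a recent Python 3):
# Sketch: fall back to a GNU tool from Python when the standard library
# doesn't cover something. The rsync call is just an example.
import subprocess

result = subprocess.run(["rsync", "-a", "/srv/data/", "/backup/data/"],
                        capture_output=True, text=True, check=True)
print(result.stdout)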
And of course you can call Python from the shell using the -c option:
python -c "for line in open('/etc/fstab'): print(line, end='')"
Some more literature about Python used for system administration tasks:
The IBM lab point of view.
A nice example to compare bash and python to script report.
The basics.
The must-have book.
Sure, python is a pretty good choice for those tasks (I'm sure many will recommend PowerShell instead).
Here is a fine introduction from that point of view:
http://www.redhatmagazine.com/2008/02/07/python-for-bash-scripters-a-well-kept-secret/
EDIT: About gnud's concern: http://www.portablepython.com/
Are you aware of PowerShell?
Anything is a good replacement for the Batch file system in windows. Perl, Python, Powershell are all good choices.
#BKB definitely has a valid concern. Here's a couple links you'll want to check if you run into any issues that can't be solved with the standard library:
Pywin32 is a package for working with low-level win32 APIs (advanced file system modifications, COM interfaces, etc.)
Tim Golden's Python page: he maintains a WMI wrapper package that builds off of Pywin32, but be sure to also check out his "Win32 How Do I" page for details on how to accomplish typical Windows tasks in Python.
Python is certainly well suited to that. If you're going down that road, you might also want to investigate SCons which is a build system itself built with Python. The cool thing is the build scripts are actually full-blown Python scripts themselves, so you can do anything in the build script that you could otherwise do in Python. It makes make look pretty anemic in comparison.
Upon rereading your question, I should note that SCons is more suited to building software projects than to writing system maintenance scripts. But I wouldn't hesitate to recommend Python to you in any case.
As a follow up, after some experimentation the thing I've found Python most useful for is any situation involving text manipulation (yourStringHere.replace(), regexes for more complex stuff) or testing some basic concept really quickly, which it is excellent for.
For stuff like SQL DB restore scripts I find I still usually just resort to batch files, as it's usually either something short enough that it would actually take more Python code to make the appropriate system calls, or I can reuse snippets of other people's code, reducing the writing time to just the tweaking needed to fit my needs.
As an addendum I would highly recommend IPython as a great interactive shell complete with tab completion and easy docstring access.
I've done a decent amount of scripting in both Linux/Unix and Windows environments, in Python, Perl, batch files, Bash, etc. My advice is that if it's possible, install Cygwin and use Bash (it sounds from your description like installing a scripting language or env isn't a problem?). You'll be more comfortable with that since the transition is minimal.
If that's not an option, then here's my take. Batch files are very kludgy and limited, but make a lot of sense for simple tasks like 'copy some files' or 'restart this service'. Python will be cleaner, easier to maintain, and much more powerful. However, the downside is that you either end up calling external applications from Python with subprocess, popen or similar, or you end up writing a bunch more code to do things that are comparatively simple in batch files, like copying a folder full of files. A lot of this depends on what your scripts are doing. Text/string processing is going to be much cleaner in Python, for example.
Lastly, it's probably not an attractive option, but you might also consider VBScript. I don't enjoy working with it as a language personally, but if portability is any kind of concern then it wins out by virtue of being available out of the box in any copy of Windows. Because of this I've found myself writing scripts that were unwieldy as batch files in VBScript instead, since I can't usually depend on Python or Perl or Bash being available on Windows.
Python, along with Pywin32, would be fine for Windows automation. However, VBScript or JScript used with the Windows Scripting Host works just as well, and requires nothing additional to install.
I've been using a lot of Windows Script Files lately. More powerful than batch scripts, and since it uses Windows scripting, there's nothing to install.
As much as I love Python, I don't think it's a good choice to replace basic Windows batch scripts.
I can't see someone having to import modules like sys, os or getopt to do the basic things you can do with the shell, like calling a program or checking an environment variable or an argument.
Also, in my experience, goto is much easier for most sysadmins to understand than a function call.