To create the hard disk partitions, I used fdisk. fdisk asks for command-line inputs interactively, like below:
Command (m for help): p
I need to run this on 16 servers, so I am using a Fabric script to run it on all of them. But every time, it asks for the input commands interactively. Is there any option in Fabric to supply those commands non-interactively?
So, this is just a matter of figuring out the fdisk commands you need and then creating a bash script out of them. There are several options here. Pick one of them and then fabricify it, e.g.:
sudo('apt-get update')
sudo('apt-get install parted')
sudo('parted -a optimal /dev/usb mkpart primary 0% 4096MB')
Replace /dev/usb with your disk. You'll also have to mount the partition and add the corresponding entry to /etc/fstab.
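Putting it all together, here is a minimal fabfile sketch (Fabric 1.x API); the host names, the target disk /dev/sdb, and the mount point are placeholder assumptions for your environment:

from fabric.api import env, sudo, task

env.hosts = ['server%d' % i for i in range(1, 17)]  # your 16 servers

@task
def make_partition():
    # Assumes the disk already has a partition table; run mklabel first otherwise.
    sudo('apt-get update')
    sudo('apt-get install -y parted')
    sudo('parted -a optimal /dev/sdb mkpart primary 0% 4096MB')
    sudo('mkfs.ext4 /dev/sdb1')   # format the new partition
    sudo('mkdir -p /mnt/data')
    sudo('mount /dev/sdb1 /mnt/data')
    sudo('echo "/dev/sdb1 /mnt/data ext4 defaults 0 2" >> /etc/fstab')

Run it with fab make_partition and Fabric executes the same non-interactive commands on every host in env.hosts.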
I was working with a sensor called Rplidar. To connect the Rplidar to my operating system (Ubuntu), sometimes I have to use this command in the terminal:
sudo chmod 666 /dev/ttyUSB0
After running this command, Ubuntu can detect the Rplidar. Later on, I run a Python script to work with the Rplidar. Now I want to include this command inside my Python script so that I do not need to run it in the terminal before working with the Rplidar. Is there any way I can do this from the Python script?
The simple answer is that chmod is provided in the os module in Python:
https://docs.python.org/3/library/os.html#os.chmod
so all you need to do is run:
import os
import stat

filename = 'example.dat'
os.chmod(filename,
         stat.S_IRUSR |
         stat.S_IWUSR |
         stat.S_IRGRP |
         stat.S_IWGRP |
         stat.S_IROTH)
so there's no need to shell out to perform this operation normally. Also have a look at the os.path and shutil modules for much more support in this area.
Things get a little complicated if you need to perform this operation with elevated privileges, but that's actually not the solution here.
Also, it is not a good idea to give 666 permissions to system devices. This can open up security problems when any user on the system has read/write access to system devices. As a matter of principle, use the least permissions required for correct operation.
As @Klaus D. comments, you shouldn't be forcing permission changes on these device nodes anyway. The correct approach is the one-time operation of adding the relevant user to the dialout group on your system. Then, by being in the correct group, the user running your application will have access to the device. This is already answered here:
https://askubuntu.com/questions/112568/how-do-i-allow-a-non-default-user-to-use-serial-device-ttyusb0
Just run this once:
sudo adduser kazi dialout
then log out and back in for it to take effect. Your Rplidar app will run fine.
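If you also want your script to fail with a helpful message when the device is not accessible, a small stdlib-only check works; the device path below is an assumption:

import os
import sys
import getpass

device = '/dev/ttyUSB0'  # assumed Rplidar device node
if not os.access(device, os.R_OK | os.W_OK):
    sys.exit('%s is not readable/writable; run "sudo adduser %s dialout" '
             'and log out and back in.' % (device, getpass.getuser()))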
You can use the subprocess library to run shell commands from Python:
import subprocess
subprocess.Popen(["sudo", "chmod", "666", "/dev/ttyUSB0"], stdout=subprocess.PIPE, shell=True)
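On Python 3.5+ an alternative is subprocess.run, which waits for the command to finish and can raise on failure; a minimal sketch:

import subprocess

# check=True raises CalledProcessError on a non-zero exit status.
subprocess.run(["sudo", "chmod", "666", "/dev/ttyUSB0"], check=True)

Note that sudo will still prompt for a password unless the script runs in a context where none is needed.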
I am trying to use Fabric to send commands which will run many physics simulations (executables) on many different computers that all share the same storage. I would like my script to:
1. ssh into a machine
2. begin the simulation, for example by running run('nohup nice -n 5 ./interp 1 2 7') (the executable is called interp and run is a function from the fabric.api library)
3. detach from the shell and run another simulation on another (or the same) computer.
However, I cannot get Fabric to accomplish step 3. It hangs on the first simulation and doesn't detach until the simulation stops, which defeats the whole point. My problem, according to the documentation, is that:
Because Fabric executes a shell on the remote end for each invocation of run or sudo (see also), backgrounding a process via the shell will not work as expected. Backgrounded processes may still prevent the calling shell from exiting until they stop running, and this in turn prevents Fabric from continuing on with its own execution.
The key to fixing this is to ensure that your process’ standard pipes are all disassociated from the calling shell
The documentation provides three suggestions, but it is not possible for me to "use a pre-existing daemonization technique" (the computers I have access to do not have screen, tmux, or dtach installed, nor can I install them), and the second suggestion, including >& /dev/null < /dev/null in my command, has not worked either (as far as I can tell it changed nothing).
Is there another way I can disassociate the process pipes from the calling shell?
The documentation you linked to gives an example of nohup use which you haven't followed all that closely. Merging that example with what you've tried so far gives something that I cannot test, since I don't have Fabric installed, but which might be worth trying:
run('nohup nice -n 5 ./interp 1 2 7 < /dev/null &> /tmp/interp127.out &')
Redirect output to /dev/null rather than my contrived output file (/tmp/interp127.out) if you don't care what the interp command emits to its stdout/stderr.
Assuming the above works, I'm unsure how you would detect that a simulation has completed, but your question doesn't seem to concern itself with that detail.
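For what it's worth, here is a sketch of a complete task built around that line; the host list and output path are made up, and pty=False is the Fabric flag that avoids tying the process to a pseudo-terminal:

from fabric.api import env, run, task

env.hosts = ['node1', 'node2', 'node3']  # hypothetical machines

@task
def launch(a='1', b='2', c='7'):
    # All three standard pipes are disassociated from the calling shell,
    # so the remote shell can exit while interp keeps running.
    run('nohup nice -n 5 ./interp %s %s %s < /dev/null &> /tmp/interp%s%s%s.out &'
        % (a, b, c, a, b, c), pty=False)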
I'm facing a problem in Python:
my script, at a certain point, has to run some test scripts written in bash, in parallel, and wait until they end.
I've already tried:
os.system("./script.sh &")
inside a for loop, but it did not work.
Any suggestions?
Thank you!
edit
I have not explained my situation correctly:
My Python script resides in the home dir;
my sh scripts reside in other dirs, for instance /tests/folder1 and /tests/folder2;
Trying to use os.system requires calling os.chdir before os.system (to avoid "no such file or directory" errors, since my .sh scripts contain some relative references), and this method also blocks my terminal output.
Trying to use Popen and passing the whole path from the home folder to my .sh leads to zombie processes that give no response at all.
Hope to find a solution,
Thank you guys!
Have you looked at subprocess? The convenience functions call and check_output block, but the default Popen object doesn't:
processes = []
processes.append(subprocess.Popen(['script.sh']))
processes.append(subprocess.Popen(['script2.sh']))
...
return_codes = [p.wait() for p in processes]
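Since your scripts rely on relative references, note that Popen also accepts a cwd argument, which saves you from juggling os.chdir; a sketch using your example folders:

import subprocess

# Each script runs in its own directory, so relative paths inside it resolve.
processes = [
    subprocess.Popen(['./script.sh'], cwd='/tests/folder1'),
    subprocess.Popen(['./script.sh'], cwd='/tests/folder2'),
]
return_codes = [p.wait() for p in processes]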
Can you use GNU Parallel?
ls test_scripts*.sh | parallel
Or:
parallel ::: script1.sh script2.sh ... script100.sh
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
I'm just getting started using imposm to help get OpenStreetMap data into a PostGIS database. All the docs point to issuing every command via the terminal. This is fine for one-off imports, but I plan to have many, many imports with varying bounding boxes and would like to script the loading of the data into the database.
Currently I use:
imposm --overwrite-cache --read --write -d postgis_test --user postgres -p "" /Users/Me/MapnikTest/osmXML.osm
This works fine from the command line, but since osmXML.osm is being created many times, I would like to somehow import it at the point of creation.
Putting the same thing in a python script as:
os.system("imposm --overwrite-cache --read --write -d postgis_test --user postgres -p "" /Users/Ali\ Mac\ Pro/Desktop/MapnikTest/osmXML.osm")
just returns:
/bin/sh: imposm: command not found
Solving this would be the final step to automate the acquisition of data to render small maps on demand but I'm falling at the final hurdle!
**Edit: the full path to imposm solved the first problem, but the password for the postgres user still has to be entered when prompted. Is there a way to send the password in the same single-line command? (Maybe this needs to be a new post? Happy if someone points me in the right direction.)**
This is probably because os.system() is calling /bin/sh which uses a different shell environment from the one you use when working on the command line.
To work around this, get the full path to the imposm executable in your script and then use that in your command. You can use a helper such as distutils.spawn.find_executable() to locate it.
Or you can fix your shell definitions so that /bin/sh has the proper PATH defined, but that depends greatly on your setup...
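For example, a minimal sketch using distutils.spawn.find_executable (Python 3.3+ also offers shutil.which):

from distutils.spawn import find_executable

imposm_path = find_executable('imposm')
print imposm_path  # e.g. /usr/local/bin/imposm, or None if it is not on the PATH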
Solved with the help of further research and the comments from @Eli Rose (many thanks): find out the path to imposm (or whichever command you are trying to run) with
which <command>
Then include that path in the Python shell command. Using Popen from the subprocess module you can even see the full terminal output.
from subprocess import Popen, PIPE

print Popen("/usr/local/bin/imposm --overwrite-cache --read --write --connection postgis://<database user>:<password>@<host>/<database> /path/to/data.osm", stdout=PIPE, shell=True).stdout.read()
The
--connection postgis://<database user>:<password>@<host>/<database>
means you can make the command in a single line and not have to worry about entering the database user password in a following command.
When I tried to use cron to execute my Python script at a future time, I found there is a command called at. AFAIK, cron is for periodic execution, but my scenario is to execute only once, at a specified time.
My question is how to add a Python script to the at command.
Also, is there some Python package for controlling the at command?
My dev OS is Ubuntu 10.04 Lucid, and my production server is Ubuntu Server 10.04 Lucid.
In fact, I want to add Python-script tasks to at from within a Python script; which file's change can make at add or remove jobs?
This works on my linux box:
echo python myscript.py | at 10:15
Edit: stupid quoting...
As the man page says, at (as opposed to cron, for example) doesn't respect the shebang (the #!/usr/bin/env python line); it always uses /bin/sh to run the file.
So in order to run a python script you have to use either
echo python myscript.py | at 10:15
as suggested by @bstpierre, or create an additional file
myscript.sh:
python myscript.py
and then
at -f myscript.sh 10:15
Shebangs are not necessary this way (but wouldn't hurt either).
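As for driving at from Python (the second part of your question), at reads the job to schedule from stdin, so a sketch using subprocess would be (the script path is a placeholder):

import subprocess

# Schedule the script for 10:15; `at` reads the command from stdin.
p = subprocess.Popen(['at', '10:15'], stdin=subprocess.PIPE)
p.communicate('python /path/to/myscript.py\n')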
Type man at; it will explain how to use it. Usage differs slightly from system to system, so there's no point in spelling it out exactly here.
Just do
echo "python FILE > app.log" | at TIME
replace:
FILE - Your .py file (include the shebang)
TIME - Your time