I'm trying to be clever and use a script to set specific configuration values in system files like /etc/ssh/sshd_config, /etc/audit/audit.rules, etc. when I deploy a new Fedora Server 36. The script may run as root.
The tools at my disposal are bash and python.
In my head it would look something like:
#!/pseudo code
for line in /path/to/firefox/policies.json
do
set.PopupBlocking({"Allow":"","Default":true,"Locked":true})
done
OR
sed -i 's/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
next setting
I've also looked at using jq, e.g. jq '.container[].InstallAddonsPermission = {"Default": false}' /path/to/firefox/policies.json (writing the output to a temporary file and moving it back, since jq does not edit in place).
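To make the idea concrete, here's roughly what I have in mind in Python (an untested sketch; the paths, the regex approach, and the InstallAddonsPermission example are just placeholders for whatever settings I need):

#!/usr/bin/env python3
# Untested sketch: idempotently set values in sshd_config and policies.json.
import json
import re

def set_sshd_option(path, key, value):
    # Replace any existing (possibly commented-out) "Key value" line, or append one.
    with open(path) as f:
        lines = f.readlines()
    pattern = re.compile(r'^\s*#?\s*' + re.escape(key) + r'\b')
    new_line = "%s %s\n" % (key, value)
    if any(pattern.match(line) for line in lines):
        lines = [new_line if pattern.match(line) else line for line in lines]
    else:
        lines.append(new_line)
    with open(path, "w") as f:
        f.writelines(lines)

def set_firefox_policy(path, key, value):
    # Load policies.json, set policies[key], and write it back.
    with open(path) as f:
        data = json.load(f)
    data.setdefault("policies", {})[key] = value
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

set_sshd_option("/etc/ssh/sshd_config", "PermitRootLogin", "no")
set_firefox_policy("/path/to/firefox/policies.json",
                   "InstallAddonsPermission", {"Default": False})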
What is the best practice for this type of task? Is there a python method that would work better than bash?
TIA!
I have been stuck for two days trying to set up a small automatic deployment script.
The thing is: I have been using Git for some months now, but only locally and by myself, just to easily save versions of my code. All good so far.
Now I have to find a way to "publish" the code as soon as new functionalities are implemented and I think the code is stable enough.
Searching around, I've discovered these 'hooks', which are scripts that Git executes in certain situations. Basically, the idea is to keep my master branch in sync with my published code, so that every time I merge a branch into master and push, the files are automatically copied into '/my/published/folder'.
That said, I've found this tutorial, which explains how to do exactly what I want using a post-receive hook script written in Ruby. Since at my studio we don't have Ruby and don't want to use it at this time, I've found a Python version of the same script.
I tested and tested, but I couldn't make it work. I keep getting the same error:
remote: GIT_WORK_TREE is not recognized as an internal or external command,
Note that this is based on the tutorial I've shared above: same project name, same structure, etc.
I even installed Ruby on my personal laptop and tried the original script, but it still doesn't work...
I'm using Windows, and the Git environment variable is set and accessible. Nevertheless, it seems GIT_WORK_TREE is not recognized as a command. If I run it from Git Bash it works just fine, but if I use the Windows shell I get the same error message.
I suppose that when my .py script uses the call() function, it runs the command through the Windows shell. That's my guess, but I don't really know how to solve it. Google didn't help, as if no one had ever had this problem before.
Maybe I'm just not seeing something obvious here, but I spent the whole day on this and I cannot get out of this bog!
Does anyone know how to solve it, or at least have an idea for a workaround?
Hope someone can help...
Thanks a lot!
The Ruby script you are talking about generates a bash-style command:
GIT_WORK_TREE=/deploy/path git checkout -f ...
It means: define environment variable "GIT_WORK_TREE" with value "/deploy/path" and execute "git checkout -f ...".
As I understand it, that syntax doesn't work on the Windows command line.
Try to use something like:
set GIT_WORK_TREE=c:\temp\deploy && git checkout -f ...
I've had this problem as well; the best solution I've found is to pass the working tree across as one of the parameters:
git --work-tree="/deploy/path" checkout -f ...
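Either way, since the hook in question is a Python script, you can do this directly with subprocess instead of going through the Windows shell at all. A rough sketch (the deploy path and branch name are placeholders):

import os
import subprocess

DEPLOY_PATH = r"C:\temp\deploy"   # placeholder deploy path

# Option 1: pass the work tree as a command-line option, no environment tricks needed.
subprocess.check_call(["git", "--work-tree", DEPLOY_PATH, "checkout", "-f", "master"])

# Option 2: set GIT_WORK_TREE only in the child process's environment.
env = dict(os.environ, GIT_WORK_TREE=DEPLOY_PATH)
subprocess.check_call(["git", "checkout", "-f", "master"], env=env)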
A common question is "how do I make my python script executable without explicitly calling python on the command line?", and the answer is to chmod +x it and then add #!/usr/bin/env python at the start of the script.
That is not the question I'm asking.
What I would like to do is tell bash, or Python, or whatever is responsible for executing files, to treat all .py files that have the execute bit set as if they had a shebang at the beginning, whether or not they actually do.
I understand that this can be done in Windows, and apparently in GNOME for the use case where you double-click a .py script from the GUI. I could have sworn I remembered hearing about an equivalent way of specifying a handler from the shell.
Why I want to know how to do this (if it's actually possible):
Not every system uses shebangs, and I don't want to clutter up the files in a cross-platform project with them.
If I'm submitting a patch to a project I don't own, it's slightly obnoxious for me to put stuff unrelated to the patch into it for my own convenience.
Thanks.
Do you mean binfmt_misc?
binfmt_misc is a capability of the Linux kernel which allows arbitrary executable file formats to be recognized and passed to certain user space applications, such as emulators and virtual machines.
So you want to register an entry with it, so that every time you execute a .py file, the kernel passes it to /usr/bin/python.
You can do that with something like this:
# load the binfmt_misc module if it isn't loaded yet
if [ ! -d /proc/sys/fs/binfmt_misc ]; then
    /sbin/modprobe binfmt_misc
fi
# mount the binfmt_misc filesystem if it isn't mounted yet
if [ ! -f /proc/sys/fs/binfmt_misc/register ]; then
    mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
fi
# register .py files so the kernel runs them with /usr/bin/python
echo ':Python:E::py::/usr/bin/python:' > /proc/sys/fs/binfmt_misc/register
If you're using a Debian-based distribution, you have to install binfmt-support.
You can add :Python:E::py::/usr/bin/python: to /etc/binfmt.d/python.conf so the registration is permanent across reboots.
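Once the entry is registered, an executable .py file runs without any shebang at all; for example, with a hypothetical test file:

# hello.py -- note: there is no shebang line anywhere in this file
# make it executable and run it directly: chmod +x hello.py && ./hello.py
import sys
print("running under", sys.executable)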
Rio6's answer is correct; the catch is that it is supported out of the box on practically no operating systems. You will need binfmt, which you can compile yourself from source at This git address
I have a set of Python scripts which I run as daemon services. These all work great, but when all the scripts are running and I use top -u <USER>, I see all of them listed simply as python.
I would really like to know which script is running under which process ID. So is there any way to run a Python script under a different process name?
I'm stuck here, and I'm not ever sure what terms to Google. :-)
Note: I'm using Ubuntu Linux. Not sure if the OS matters or not.
Try using setproctitle. It should work fine on Linux.
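A minimal sketch of how that looks (assuming the setproctitle package from PyPI is installed, e.g. pip install setproctitle; the title string is just an example):

import time
import setproctitle

# change the name shown by top/ps for this process
setproctitle.setproctitle("my-daemon-worker")

while True:
    # stand-in for the daemon's real work loop
    time.sleep(60)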
I don't have a Linux system here to test this on appropriately, but if the above doesn't work, you should be able to use the same trick that's used for things like gzip, etc.
The script has to say what runs it at the top, like this:
#!/usr/local/bin/python
Then create a soft link to the interpreter like this:
ln -s /usr/local/bin/python ~/bin/myutil
Then just change your script's shebang to point at the link, using an absolute path (the kernel does not expand ~ in shebang lines):
#!/home/youruser/bin/myutil
and it should show up that way instead. You may need to use a hard link instead of a soft link.
Launching a Python script via the script file itself (relying on file associations and/or shell magic) is not very portable, but you can use similar methods on nearly any OS.
The easiest way to get this is by using a shebang. The first line of your Python script should be:
#!/usr/bin/python
or
#!/usr/bin/python3
depending upon whether you use python or python3
and then assign executable permissions to the script as follows:
chmod +x <scriptname>
and then run the script as
./scriptname
this will show up as scriptname in top.
Basically, what I want to do is this (in pseudo bash-ish code):
#create ramdisk raid
diskutil erasevolume HFS+ "r1" `hdiutil attach -nomount ram://4661720`;
diskutil erasevolume HFS+ "r2" `hdiutil attach -nomount ram://4661720`;
diskutil createRAID stripe SpeedDisk HFS+ /Volumes/r1 /Volumes/r2;
#copy minecraft server files to ramdisk
cp -R minecraft_server /Volumes/SpeedDisk
#start minecraft_server
cd /Volumes/SpeedDisk/minecraft_server
java -Xms2G -Xmx2G -jar minecraft_server.jar nogui
#once I stop the server, copy the files to my harddrive
cd ~
cp -R /Volumes/SpeedDisk/minecraft_server minecraft_server/
I'm not sure how to do this ^ in real life. :p I was considering using Python, but it seems like there are problems with using os.system for copying files.
Also, I would like to know if there is a way to eject the ramdisks when I am done. This is all going to be done on Mac OS X Leopard. The reason I'm doing all of this is to speed up my Minecraft server a bit without buying an SSD.
I was considering using Python, but it seems like there are problems with using os.system for copying files.
...then use the right tool for the job:
shutil.copytree()
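For instance, a rough sketch of the copy-in, run, copy-back steps in Python (untested; the volume and directory names follow the question and may differ on your system):

import shutil
import subprocess

SRC = "minecraft_server"                          # server files on the hard drive
RAMDISK = "/Volumes/SpeedDisk/minecraft_server"   # destination on the striped ramdisk

# copy the server onto the ramdisk
shutil.copytree(SRC, RAMDISK)

# run the server from the ramdisk and wait for it to exit
subprocess.call(["java", "-Xms2G", "-Xmx2G", "-jar", "minecraft_server.jar", "nogui"],
                cwd=RAMDISK)

# copytree needs a destination that doesn't exist yet, so move the old
# copy aside before copying the updated files back
shutil.move(SRC, SRC + ".bak")
shutil.copytree(RAMDISK, SRC)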
Shell scripting seems to be the best solution for this kind of problem (assuming you only need it to work on a single platform, Mac OS X). Write a shell script with these commands and use that script every time you want to run them.
Suppose I have a script in
/home/myuser/go.py
How do I run that script when a new instance is booted? (I'm used to using the point-and-click control panel Amazon has...)
I'm going to try my nonexistent Linux skills here: create a shell script that runs your go.py and add a symlink to that shell script in /etc/init.d/.
/home/myuser/go.sh
#!/bin/bash
python /home/myuser/go.py
symlink
ln -s /home/myuser/go.sh /etc/init.d/go.sh
After reading up a bit myself, /etc/rc.local is probably a better place for this. Just edit it and add /home/myuser/go.sh there (again, make sure your go.sh is executable).
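If it helps while testing either approach, a trivial go.py that just records that it ran at boot (hypothetical; the log path is only an example):

#!/usr/bin/env python
import datetime

# append a timestamp so you can confirm the script really ran at boot
with open("/var/log/go-boot.log", "a") as log:
    log.write("go.py started at %s\n" % datetime.datetime.now())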