Saving output of Emacs for multiple files - python

I am using an Emacs package (vax) to calculate some tilings of various regions. At the moment I have a large number of text files; I open each one in Emacs, execute a single command (<Shift> 3), record the output from the minibuffer manually, and eventually sum these outputs. This is incredibly time consuming, and I keep making errors.
I would really like to automate this process. Could anyone give me some advice on how I might write a script that will open each file in a specified directory, record the minibuffer output after entering the command <Shift> 3, and sum the successive outputs as it proceeds through the directory?
I am not familiar with Lisp or Emacs, though I've read through the tutorial for the latter. I have a rough working knowledge of Python, and if there is a way I could do all of this within a Python script, that would be really helpful.

Step-by-step instructions for running Emacs in batch mode:
First you need to get the list of files that you want to operate on.
find-name-dired should be enough for most needs. Open dired
in the base directory of your project and M-x find-name-dired.
Accept the default for the base directory and enter, for instance, *.py as the wildcard.
You now have a buffer with all file names that you're interested in.
Select them all with t. You can refine your selection with
m and DEL.
Start a shell command with !. Use this command template as a start:
emacs --batch ? --eval "(message \"%s %s\" (buffer-name) (buffer-size))"
Now you have all your output neatly in the shell output buffer.
The code above shows buffer size in characters for each file.
You should replace (buffer-size) with your own code, e.g.
(if (equal vax-region (buffer-substring-no-properties (point-min) (point-max))) nil (vax-quit) (setq vax-region (buffer-substring-no-properties (point-min) (point-max)))) (catch (quote chromatic) (catch (quote singular) (vax-number)))
The whole thing should be on one line.
Alternatively, you can wrap the call to emacs --batch with a bash script
and call that instead from dired.
UPD: try to run this code
Instead of (buffer-size), put (progn (vax-mode) (call-interactively (vax-col vax-number))).
UPD: load vax.el
Use emacs --batch ? -l ~/path/to/vax.el --eval "(progn (vax-mode) (message \"%s %s\" (buffer-name) (call-interactively (vax-col vax-number))))".
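Since the question asks for a Python route, here is a minimal sketch of driving the same batch invocation from a Python script. The --eval form, the path to vax.el, and the function name vax-number are assumptions carried over from the answer above; adjust them to whatever the package actually provides. The parsing helper simply sums the trailing number of each "buffer-name count" line that message prints.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical --eval form, following the answer above; adjust to your package.
VAX_EVAL = '(progn (vax-mode) (message "%s %s" (buffer-name) (vax-number)))'

def vax_output(path, vax_el="~/path/to/vax.el"):
    """Run Emacs in batch mode on one file and return what `message` printed.

    In batch mode, `message` writes to stderr.
    """
    result = subprocess.run(
        ["emacs", "--batch", str(path), "-l", vax_el, "--eval", VAX_EVAL],
        capture_output=True, text=True)
    return result.stderr

def sum_counts(lines):
    """Sum the trailing integer of each 'buffer-name count' line."""
    total = 0
    for line in lines:
        m = re.search(r"(-?\d+)\s*$", line.strip())
        if m:
            total += int(m.group(1))
    return total

# Usage sketch (not run here):
# total = sum_counts(vax_output(p) for p in Path("regions").glob("*.txt"))
```

This avoids dired entirely: Python does the directory walk and the summing, and Emacs is only started once per file to produce the count.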

Related

Getting all pods for a container, storing them in text files and then using those files as args in single command

The picture above shows the list of all kubernetes pods I need to save to a text file (or multiple text files).
I need a command which:
stores multiple pod logs in text files (or one single text file). So far I have this command, which stores one pod's logs in one text file, but this is not enough, since I would have to spell out each pod name individually:
$ kubectl logs ipt-prodcat-db-kp-kkng2 -n ho-it-sst4-i-ie-enf > latest.txt
I then need the command to send these files to a Python script that checks for various strings. So far this works, but if it could be combined with the above command, that would be extremely useful:
python CheckLogs.py latest.txt latest2.txt
Is it possible to do either (1) or both (1) and (2) in a single command?
The simplest solution is to create a shell script that does exactly what you are looking for:
#!/bin/sh
FILE="text1.txt"
NAMESPACE="ho-it-sst4-i-ie-enf"
for p in $(kubectl get pods -n "$NAMESPACE" -o jsonpath="{.items[*].metadata.name}"); do
    kubectl logs "$p" -n "$NAMESPACE" >> "$FILE"
done
With this script you will get the logs of all the pods in your namespace in FILE.
You can even add a final line python CheckLogs.py "$FILE" to run the checker on the result.
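If you would rather do both steps from Python, a sketch along these lines works: it shells out to kubectl with subprocess and then hands the resulting files to CheckLogs.py. The namespace value is taken from the question; the one-file-per-pod naming is an assumption.

```python
import subprocess

NAMESPACE = "ho-it-sst4-i-ie-enf"  # assumption: taken from the question

def parse_pod_names(jsonpath_output):
    """kubectl's jsonpath output is one space-separated line of pod names."""
    return jsonpath_output.split()

def dump_logs():
    """Write each pod's log to <pod>.txt and return the list of file names."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", NAMESPACE,
         "-o", "jsonpath={.items[*].metadata.name}"],
        capture_output=True, text=True, check=True)
    files = []
    for pod in parse_pod_names(out.stdout):
        fname = pod + ".txt"
        with open(fname, "w") as f:
            subprocess.run(["kubectl", "logs", pod, "-n", NAMESPACE],
                           stdout=f, check=True)
        files.append(fname)
    return files

# Then feed everything to the checker in one go:
# subprocess.run(["python", "CheckLogs.py", *dump_logs()], check=True)
```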
There are various tools that could help here. Some of these are commonly available, and some of these are shortcuts that I create my own scripts for.
xargs: This is used to run multiple command lines in various combinations, based on the input. For instance, if you piped in text output containing three lines, you could potentially execute three commands using the content of those three lines. There are many possible variations.
arg1: This is a shortcut that I wrote that simply takes stdin and produces the first argument. The simplest form of this would just be "awk '{print $1}'", but I designed mine to take optional parameters, for instance, to override the argument number, separator, and to take a filename instead. I often use "-i{}" to specify a substitution marker for the value.
skipfirstline: Another shortcut I wrote, that simply takes some multiline text input and omits the first line. It is just "sed -n '1!p'".
head/tail: These print some of the first or last lines of stdin. Interesting forms of this take negative numbers. Read the man page and experiment.
sed: Often a part of my pipelines, for making inline replacements of text.
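The arg1 and skipfirstline shortcuts described above are trivial to reproduce. Here is a rough Python equivalent of each (the names mirror the author's private scripts, which are not published, so the signatures are guesses):

```python
def arg1(line, sep=None, n=1):
    """Return the n-th whitespace- (or sep-) separated field, like awk '{print $1}'."""
    return line.split(sep)[n - 1]

def skip_first_line(text):
    """Drop the first line of a multiline string, like sed -n '1!p'."""
    return "\n".join(text.splitlines()[1:])
```

For instance, arg1("ipt-prodcat-db-kp-kkng2 1/1 Running") picks the pod name off a kubectl get pods output line.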

Testing 7-Zip archives from a python script

So I've got a Python script that, at its core, makes .7z archives of selected directories for the purpose of backing up data. For simplicity's sake I've simply invoked 7-Zip through the Windows command line, like so:
def runcompressor(target, contents):
    print("Compressing {}...".format(contents))
    archive = currentmodule
    archive += "{}\\{}.7z".format(target, target)
    os.system('7z u "{}" "{}" -mx=9 -mmt=on -ssw -up1q0r2x2y2z1w2'.format(archive, contents))
    print("Done!")
This creates a new archive if one doesn't exist and updates the old one if it does. But if something goes wrong, the archive will be corrupted, and if this command hits an existing, corrupted archive, it just gives up. Now 7-Zip has a command for testing the integrity of an archive, but the documentation says nothing about its output, and then comes the trouble of capturing that output in Python.
Is there a way I can test the archives first, to determine if they've been corrupted?
The 7z executable returns a value of two or greater if it encounters a problem. In a batch script, you would generally use errorlevel to detect this. Unfortunately, os.system() under Windows gives the return value of the command interpreter used to run your program, not the exit value of your program itself.
If you want the latter, you're probably going to have to get your hands a little dirtier with the subprocess module, rather than using the os.system() call.
If you have Python 3.5 (or later), this is as simple as:
import subprocess as sp
x = sp.run(['7z', 'a', 'junk.7z', 'junk.txt'], stdout=sp.PIPE, stderr=sp.STDOUT)
print(x.returncode)
The junk.txt in my case is a real file, but junk.7z is just a copy of one of my text files, hence an invalid archive. The return code is 2, so it's easy to detect that something went wrong.
If you print out x rather than just x.returncode, you'll see something like (reformatted and with \r\n sequences removed for readability):
CompletedProcess(
args=['7z', 'a', 'junk.7z', 'junk.txt'],
returncode=2,
stdout=b'
7-Zip [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
Error: junk.7z is not supported archive
System error:
Incorrect function.
'
)
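Putting that together for the original question: 7z's t (test) command verifies an archive, and the same return-code convention applies (0 = ok, 1 = warning, 2 or more = error). A small wrapper, assuming 7z is on the PATH:

```python
import subprocess as sp

def describe_returncode(rc):
    """Map a 7z exit code to a rough meaning (0 ok, 1 warning, >=2 error)."""
    if rc == 0:
        return "ok"
    if rc == 1:
        return "warning"
    return "error"

def archive_ok(archive):
    """Run '7z t' on an archive and report whether it passed the integrity test."""
    x = sp.run(["7z", "t", archive], stdout=sp.PIPE, stderr=sp.STDOUT)
    return describe_returncode(x.returncode) == "ok"
```

You could call archive_ok(archive) before the 7z u update and rebuild the archive from scratch whenever it returns False.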

Open .bat file with Python - not working

I have 3 different files, one Python file and two .bat files. They communicate between each other (hopefully).
When I execute Process_Videos.bat by itself (double-clicking it in Windows Explorer) it works fine, but whenever I call it from the Python file it doesn't work at all; it just says "press any button to continue..."
I really need to keep this structure, calling Process_Videos.bat from a Python file, since I am extracting some web info. The pythonExecute.bat just works as a trigger for the entire process.
I have also tried the subprocess approach, but that isn't working either.
The files and respective code:
pythonExecute.bat
python "D:\\tests\\pythonCall.py"
pythonCall.py
import os
os.system('D:\\tests\\3.asc\\Process_Videos_asc.bat')
Process_Videos.bat
@echo off
setlocal EnableDelayedExpansion
set "FolderBaseName=TestName"
set "DropBoxFolder=D:\tests\3.asc\myDropBoxFolder"
set "BaseOutputFolder=D:\tests\3.asc\TEMP"
for %%I in (*.png) do (
    set "slaveName=%%~nI"
    set "slaveName=!slaveName:~6!"
    set "OutputFolder=%BaseOutputFolder%_!slaveName!"
    echo !slaveName!
    md "!OutputFolder!" 2>nul
    for %%J in (*.mp4*) do (
        ffmpeg -i "%%~fJ" -i "%%~fI" -filter_complex overlay "!OutputFolder!\%%~nJ.mp4"
    )
    "C:\Program Files\WinRAR\rar.exe" a -cfg- -ep1 -inul -m5 "%DropBoxFolder%\%FolderBaseName%_!slaveName!" "!slaveName:~6!\*"
    rd /S /Q "!OutputFolder!"
)
pause
You need to:
a) Invoke your batch file from the directory it is in (e.g. by changing directory first), and
b) Get rid of the pause at the end of the batch file.
You should also consider replacing the batch file altogether; Python can do all of the things it does much more neatly.
The accepted answer to this SO question gives some very good tips.
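A hedged sketch of point (a) with subprocess: run the batch file through cmd /c with the working directory set to its own folder, so the *.png and *.mp4 globs inside it resolve (the path below is taken from the question; remove the pause first):

```python
import ntpath
import subprocess

def bat_invocation(bat_path):
    """Build the argument list and working directory for a Windows .bat file."""
    return ["cmd", "/c", bat_path], ntpath.dirname(bat_path)

def run_bat(bat_path):
    """Run the batch file from its own directory, raising on a non-zero exit."""
    args, cwd = bat_invocation(bat_path)
    return subprocess.run(args, cwd=cwd, check=True)

# run_bat(r"D:\tests\3.asc\Process_Videos_asc.bat")
```

Using ntpath keeps the Windows path handling explicit even if you experiment with the script elsewhere.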

Why doesn't my bash script read lines from a file when called from a python script?

I am trying to write a small program in bash, and part of it needs to get some values from a txt file, where the different values are on separate lines, and then either add each line to a variable or add each line to one array.
So far I have tried this:
FILE=$"transfer_config.csv"
while read line
do
    MYARRAY[$index]="$line"
    index=$(($index+1))
done < $FILE
echo ${MYARRAY[0]}
This just produces a blank line though, and not what was on the first line of the config file.
I am not returned with any errors which is why I am not too sure why this is happening.
The bash script is called through a Python script using os.system("$HOME/bin/mcserver_config/server_transfer/down/createRemoteFolder"). But if I simply call the bash script myself after the Python program has made the file it reads, it works.
I am almost 100% sure it is not an issue with the directories, because pwd at the top of the bash script shows it in the correct directory, and the python program is also creating the data file in the correct place.
Any help is much appreciated.
EDIT:
I also tried the subprocess.call("path_to_script", shell=True) to see if it would make a difference, I know it is unlikely but it didn't.
I suspect that when calling the bash script from python, having just created the file, you are not really finished with that file: you should either explicitly close the file or use a with construct.
Otherwise, the written data is still in any buffer (from the file object, or in the OS, or wherever). Only closing (or at least flushing) the file makes sure the data is indeed in the file.
BTW, instead of os.system, you should use the subprocess module...
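A minimal illustration of the fix: write the file inside a with block, which flushes and closes it before the script is launched, and use subprocess rather than os.system (the script path is the one from the question; the example values are placeholders).

```python
import os
import subprocess

# The `with` block guarantees the data is flushed to disk before the
# bash script starts; without it, the lines may still sit in a buffer.
with open("transfer_config.csv", "w") as f:
    f.write("first_value\nsecond_value\n")

# Script path taken from the question:
# subprocess.call([os.path.expanduser(
#     "~/bin/mcserver_config/server_transfer/down/createRemoteFolder")])
```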

retrieve hash bang / shebang from given script file

Is there a way to retrieve the path to the interpreter a UNIX shell would use for a given script (preferably via a Python API or as a shell command)?
To be used like this:
$ get_bang ./myscript.py
/usr/bin/python3
Of course I could extract it manually using a regex, but I'm sure that in the real world it's more complicated than just handling the first line, and I don't want to re-invent the wheel.
The reason I need this is I want to call the script from inside another script and I want to add parameters to the interpreter.
Actually, it isn't more complicated than reading (the first word of) the first line.
Try putting the shebang on the second line (or even just putting a space before the #) and see what happens.
Also see http://www.in-ulm.de/~mascheck/various/shebang/ and http://homepages.cwi.nl/~aeb/std/hashexclam-1.html for more than you've ever wanted to know about the shebang feature.
Many ways - for example:
sed -n '1s/^#!//p' filename
prints for example
/bin/sh
or (if multiword)
/usr/bin/env perl
or nothing, if there isn't a shebang.
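As a Python helper matching the get_bang usage from the question (the name is the asker's; the function just reads the first line and checks the first two bytes):

```python
def get_bang(path):
    """Return the interpreter line of a script, or None if it has no shebang.

    A shebang only counts when '#!' are the very first two bytes of the
    file, which is why a second-line or indented '#!' is ignored.
    """
    with open(path, "rb") as f:
        first = f.readline()
    if not first.startswith(b"#!"):
        return None
    return first[2:].strip().decode()
```

For a multiword shebang such as #!/usr/bin/env perl this returns the whole interpreter line, matching the sed answer above.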
