Passing a temporary file to Popen subprocess? - python

I am trying to pass a temporary file to another python file and print the results of the second python file. I notice, however, that I'm not getting anything back from the other file. Is it possible to pass temporary files to subprocesses?
When I try reading from the tmp file I can see values are being written to it, but they don't appear to be received by the other file. In the other file I've simply written (t = input(), for var in t: print(t))
tmp file gen code:
import subprocess
import tempfile
from subprocess import PIPE

for block in input_list:  # iterating through blocklist
    # open the file here
    f = tempfile.TemporaryFile(mode='w+')
    for val in block:  # write to the file
        f.write(val)
        #print(val)
    f.seek(0)
    #print(f.read())
    user_code = subprocess.Popen(['python', 'test_code.py'], stdin=f, stdout=PIPE)
    user_code.wait()
    mssg = user_code.communicate()
    print(mssg)
    print(mssg[0].decode('utf-8').strip())
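
For reference, a minimal sketch of what the receiving script could look like if it should echo back everything it receives on stdin; a single input() call only reads one line, so iterating over sys.stdin is the usual approach (this test_code.py body is an assumption, not the asker's actual file):

# test_code.py -- hypothetical receiver: read every line from stdin
import sys

for line in sys.stdin:
    print(line.strip())  # echo each line back on stdout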

Opening .txt file does not show content or traceback but something else

I tried running this code in python. I ensured:
The .txt file was in the same folder as the code file, and the file was named "random.txt" and saved in .txt format
file = input('Enter File:')
if len(file) < 1:
    file = 'random.txt'
fhan = open(file)
print(fhan)
My command prompt returned me <_io.TextIOWrapper name='random.txt' mode='r' encoding='cp1252'> with no traceback. I don't know how to get the file to open and print the content
Open a file, and print the file content:
with open('./demo.txt', 'r') as file:
    txt = file.read()
    print(txt)
fhan is a file handle, so printing it simply prints the result of calling its repr method, which is what you see. To read the entire file, you can call fhan.read().
It's also good practice to use a with statement to manage resources. For example, your code can be written as
file = input('Enter File:')
if not file:  # check for empty string
    file = 'random.txt'
with open(file, 'r') as fhan:  # use the `with` statement
    print(fhan.read())
The benefit of this syntax is that you'll never have to worry about forgetting to close the file handle.

Converting cloud-init logs to json using a script

I am trying to convert the cloud-init logs to json so that Filebeat can pick them up and send them to Kibana. I want to do this using a shell script or a python script. Is there any script that converts such logs to json?
My python script is below
import json
import subprocess

filename = "/home/umesh/Downloads/scripts/cloud-init.log"

def convert_to_json_log(line):
    """ convert each line to json format """
    log = {}
    log['msg'] = line
    log['logger-name'] = 'cloud-init'
    log['ServiceName'] = 'Contentprocessing'
    return json.dumps(log)

def log_as_json(filename):
    f = subprocess.Popen(['cat', '-F', filename],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while True:
        line = f.stdout.readline()
        log = convert_to_json_log(line)
        print log
        with open("/home/umesh/Downloads/outputs/cloud-init-json.log", 'a') as new:
            new.write(log + '\n')

log_as_json(filename)
The script produces a file in json format, but the msg field contains an empty string. I want each line of the log to become the message string.
Firstly, try reading the raw log file using Python's built-in functions rather than running OS commands through subprocess, because:
It will be more portable (works across OSes)
It is faster and less prone to errors
Re-writing your log_as_json function as follows worked for me:
inputfile = "cloud-init.log"
outputfile = "cloud-init-json.log"

def log_as_json(filename):
    # Open cloud-init log file for reading
    with open(inputfile, 'r') as log:
        # Open the output file to append json entries
        with open(outputfile, 'a') as jsonlog:
            # Read line by line
            for line in log.readlines():
                # Convert to json and write to file
                jsonlog.write(convert_to_json_log(line) + "\n")
After spending some time preparing a customised script, I finally came up with the script below. It might be helpful to others.
import json

def convert_to_json_log(line):
    """ convert each line to json format """
    log = {}
    log['msg'] = json.dumps(line)
    log['logger-name'] = 'cloud-init'
    log['serviceName'] = 'content-processing'
    return json.dumps(log)

# Open the file with read only permit
f = open('/var/log/cloud-init.log', "r")
# use readlines to read all lines in the file
# The variable "lines" is a list containing all lines in the file
lines = f.readlines()
# close the file after reading the lines.
f.close()

jsonData = ''
for line in lines:
    jsonLine = convert_to_json_log(line)
    jsonData = jsonData + "\n" + jsonLine

with open("/var/log/cloud-init/cloud-init-json.log", 'w') as new:
    new.write(jsonData)

How to redirect stdout to only the console when within fileinput loop

Currently I have this piece of code for python 2.7:
h = 0
for line in fileinput.input('HISTORY', inplace=1):
    if line[0:2] == x:
        h = h + 1
        if h in AU:
            line = line.replace(x, 'AU')
    if 'timestep' in line:
        h = 0
    sys.stdout.write(('\r%s%% ') % format(((os.stat('HISTORY').st_size / os.stat('HISTORY.bak').st_size) * 100), '.1f'))
    sys.stdout.write(line)
What I am having trouble with is the following line:
sys.stdout.write(('\r%s%% ') % format(((os.stat('HISTORY').st_size / os.stat('HISTORY.bak').st_size)*100),'.1f'))
I need this information to be outputted to the console ONLY and not into the HISTORY file.
This code creates a temporary copy of the input file, then scans it and rewrites the original file. It handles errors while processing the file so that the original data isn't lost during the rewrite. It demonstrates how to write some data to stdout occasionally and other data back to the original file.
The temporary file creation was taken from this SO answer.
import fileinput
import os, shutil, tempfile

# create a copy of the source file into a system specified
# temporary directory. You could just put this in the original
# folder, if you wanted
def create_temp_copy(src_filename):
    temp_dir = tempfile.gettempdir()
    temp_path = os.path.join(temp_dir, 'temp-history.txt')
    shutil.copy2(src_filename, temp_path)
    return temp_path

# create a temporary copy of the input file
temp = create_temp_copy('HISTORY.txt')

# open up the input file for writing
dst = open('HISTORY.txt', 'w+')

for line in fileinput.input(temp):
    # Added a try/catch to handle errors during processing.
    # If this isn't present, any exceptions that are raised
    # during processing could cause unrecoverable loss of
    # the HISTORY file
    try:
        # some sort of replacement
        if line.startswith('e'):
            line = line.strip() + '#\n'  # notice the newline here
        # occasional status updates to stdout
        if '0' in line:
            print 'info:', line.strip()  # notice the removal of the newline
    except:
        # when a problem occurs, just output a message
        print 'Error processing input file'
    finally:
        # re-write the original input file
        # even if there are exceptions
        dst.write(line)

# deletes the temporary file
os.remove(temp)
# close the original file
dst.close()
If you only want the information to go to the console, could you just use print instead?
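
Note that with inplace=1, fileinput redirects sys.stdout into the file being rewritten, so a plain print would also land in HISTORY. A minimal sketch of one common workaround is to write the console-only message to sys.stderr instead (the progress text here is illustrative):

import sys
import fileinput

for line in fileinput.input('HISTORY', inplace=1):
    # while inplace=1 is active, stdout is redirected into HISTORY,
    # but stderr still points at the console
    sys.stderr.write('\rprocessing...')  # console only
    sys.stdout.write(line)  # written back into the file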

python: read file continuously, even after it has been logrotated

I have a simple python script where I read a logfile continuously (same as tail -f):
while True:
    line = f.readline()
    if line:
        print line,
    else:
        time.sleep(0.1)
How can I make sure that I can still read the logfile, after it has been rotated by logrotate?
i.e. I need to do the same what tail -F would do.
I am using python 2.7
As long as you only plan to do this on Unix, the most robust way is probably to check that the open file still refers to the same i-node as the name, and reopen it when that is no longer the case. You can get the i-number of the file from os.stat and os.fstat, in the st_ino field.
It could look like this:
import os, sys, time

name = "logfile"
current = open(name, "r")
curino = os.fstat(current.fileno()).st_ino
while True:
    while True:
        buf = current.read(1024)
        if buf == "":
            break
        sys.stdout.write(buf)
    try:
        if os.stat(name).st_ino != curino:
            new = open(name, "r")
            current.close()
            current = new
            curino = os.fstat(current.fileno()).st_ino
            continue
    except IOError:
        pass
    time.sleep(1)
I doubt this works on Windows, but since you're speaking in terms of tail, I'm guessing that's not a problem. :)
You can do it by keeping track of where you are in the file and reopening it when you want to read. When the log file rotates, you notice that the file is smaller and since you reopen, you handle any unlinking too.
import time

cur = 0
while True:
    try:
        with open('myfile') as f:
            f.seek(0, 2)
            if f.tell() < cur:
                f.seek(0, 0)
            else:
                f.seek(cur, 0)
            for line in f:
                print line.strip()
            cur = f.tell()
    except IOError, e:
        pass
    time.sleep(1)
This example hides errors like file not found because I'm not sure of logrotate details such as small periods of time where the file is not available.
NOTE: In python 3, things are different. A regular open translates bytes to str, and the interim buffer used for that conversion means that seek and tell don't operate properly (except when seeking to 0 or the end of file). Instead, open in binary mode ("rb") and do the decoding manually line by line. You'll have to know the file encoding and what that encoding's newline looks like. For utf-8, it's b"\n" (one of the reasons utf-8 is superior to utf-16, btw).
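
A minimal sketch of that binary-mode approach in Python 3, assuming a utf-8 encoded log file (the file name and sleep interval are illustrative):

import time

with open('logfile', 'rb') as f:  # binary mode keeps seek()/tell() reliable
    while True:
        line = f.readline()  # returns b'' at end of file
        if line:
            print(line.decode('utf-8').rstrip('\n'))
        else:
            time.sleep(0.1)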
Thanks to @tdelaney's and @Dolda2000's answers, I ended up with what follows. It should work on both Linux and Windows, and also handle logrotate's copytruncate or create options (respectively: copy then truncate to size 0, and move then recreate the file).
import os
import time

file_name = 'my_log_file'
seek_end = True
while True:  # handle moved/truncated files by allowing to reopen
    with open(file_name) as f:
        if seek_end:  # reopened files must not seek end
            f.seek(0, 2)
        while True:  # line reading loop
            line = f.readline()
            if not line:
                try:
                    if f.tell() > os.path.getsize(file_name):
                        # rotation occurred (copytruncate/create)
                        f.close()
                        seek_end = False
                        break
                except FileNotFoundError:
                    # rotation occurred but new file still not created
                    pass  # wait 1 second and retry
                time.sleep(1)
            do_stuff_with(line)
A limitation when using the copytruncate option is that if lines are appended to the file while time-sleeping, and rotation occurs before wake-up, the last lines will be "lost" (they will still be in the now "old" log file, but I cannot see a decent way to "follow" that file to finish reading it). This limitation is not relevant with the "move and create" create option, because the f descriptor will still point to the renamed file, and therefore the last lines will be read before the descriptor is closed and opened again.
Using tail -F
From man tail:
-F     same as --follow=name --retry
-f, --follow[={name|descriptor}]     output appended data as the file grows
--retry     keep trying to open a file if it is inaccessible
The -F option will follow the name of the file, not the descriptor.
So when logrotate happens, it will follow the new file.
import subprocess
from typing import Generator

def tail(filename: str) -> Generator[str, None, None]:
    proc = subprocess.Popen(["tail", "-F", filename],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while True:
        line = proc.stdout.readline()
        if line:
            yield line.decode("utf-8")
        else:
            break

for line in tail("/config/logs/openssh/current"):
    print(line.strip())
I turned @pawamoy's awesome answer above into a generator function for my log monitoring and following needs.
def tail_file(file):
    """generator function that yields new lines in a file

    :param file: file path as a string
    :type file: str
    :rtype: collections.Iterable
    """
    seek_end = True
    while True:  # handle moved/truncated files by allowing to reopen
        with open(file) as f:
            if seek_end:  # reopened files must not seek end
                f.seek(0, 2)
            while True:  # line reading loop
                line = f.readline()
                if not line:
                    try:
                        if f.tell() > os.path.getsize(file):
                            # rotation occurred (copytruncate/create)
                            f.close()
                            seek_end = False
                            break
                    except FileNotFoundError:
                        # rotation occurred but new file still not created
                        pass  # wait 1 second and retry
                    time.sleep(1)
                yield line
It can then be used easily, like below:
import os, time

access_logfile = '/var/log/syslog'
loglines = tail_file(access_logfile)
for line in loglines:
    print(line)

How to call a cmd file within python script

This is what the python script will do. The question is how to call the external cmd file within the function.
Read a CSV file in the directory.
If the content in the 6th column is equal to 'approved', then call the external windows script 'TransferProd.cmd'.
def readCSV(x):
    #csvContents is a list in the global scope that will contain lists of the
    #items on each line of the specified CSV file
    try:
        global csvContents
        file = open(csvDir + x + '.csv', 'r')  #Opens the CSV file
        csvContents = file.read().splitlines()  #Appends each line of the CSV file to csvContents
        #----------------------------------------------------------------------
        #This takes each item in csvContents and splits it at "," into a list.
        #The list created replaces the item in csvContents
        for y in range(0, len(csvContents)):
            csvContents[y] = csvContents[y].lower().split(',')
            if csvContents[y][6] == 'approved':
                ***CALL TransferProd.cmd***
        file.close()
        return
    except Exception as error:
        log(logFile, 'An error has occurred in the readCSV function: ' + str(error))
        raise
Take a look at the subprocess module.
import subprocess
p = subprocess.Popen(['TransferProd.cmd'])
You can specify where you want output/errors to go (directly to a file or to a file-like object), pipe in input, etc.
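For instance, a small sketch of redirecting both streams to a log file (the log file name here is an assumption):

import subprocess

with open('transfer.log', 'w') as logfile:
    # send stdout and stderr of the cmd script into transfer.log
    p = subprocess.Popen(['TransferProd.cmd'], stdout=logfile,
                         stderr=subprocess.STDOUT)
    p.wait()  # block until the script finishes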
import os
os.system('TransferProd.cmd')
This works in both unix/windows flavors, as it sends the command to the shell. There are some variations in the returned values though! Check here.
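As a sketch of that difference: on Windows, os.system returns the command's exit code directly, while on Unix it returns an encoded wait status, so for a normally terminated process the exit code sits in the high byte:

import os

ret = os.system('TransferProd.cmd')
if os.name == 'nt':
    exit_code = ret  # Windows: exit code as-is
else:
    exit_code = ret >> 8  # Unix: decode the wait status (normal exit)
print(exit_code)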
If you don't need the output of the command, you could use os.system(cmd).
The better solution is to use:
from subprocess import Popen, PIPE

proc = Popen(cmd, shell=True, close_fds=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = proc.communicate()
