IPython.parallel - can I write my own log into the engine logs? - python

I'd like to be able to log outputs from the functions I pass to my engines in the relevant engine logs.
I.e.:
data = [...]  # my list of data to operate on

def fn(inval):
    import logging
    log = logging.getLogger()
    log.error('This is on the engine')
    # do stuff
    return result

calculated_data = []
for datum in data:
    calc = view.apply(fn, datum)
    calculated_data.append(calc)
I'd like to be able to see the log statements in the relevant engine log that operated on the specific task.

You can grab the logger of the current app (i.e. the engine in this case) with:
from IPython.config import Application
log = Application.instance().log
Then log as normal, and it will go to the engine logs.
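For example, a minimal sketch of the fn from the question rewritten to use the engine's logger (the placeholder body is an assumption; only the Application.instance().log lookup comes from the answer above):

def fn(inval):
    from IPython.config import Application
    log = Application.instance().log  # the engine application's logger when run on an engine
    log.error('This is on the engine, processing %r', inval)
    result = inval  # placeholder for the real work
    return result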

Related

Dynamically switching log storage folder

I am using Python's logging to log the execution of functions and other actions within an application. The log files are stored in a remote folder, which is accessible automatically when I connect to the VPN (let's say \\remote\directory). That is the normal situation: 99% of the time there is a connection and the log is stored without errors.
I need a solution for the situation when either the VPN connection or the Internet connection is lost and the logs are temporarily stored locally. I think that each time something is about to be logged, I need to check whether the remote folder is accessible. I couldn't really find a solution, but I guess I need to modify the FileHandler somehow.
TL;DR: You can scroll down to blues's answer and my UPDATE section - there you'll find my latest attempt to solve the issue.
Currently my handler is set like this:
log = logging.getLogger('general')
handler_error = logging.handlers.RotatingFileHandler(log_path+"\\error.log", 'a', encoding="utf-8")
log.addHandler(handler_error)
Here is the condition that sets the log path, but it runs only once - when logging is initialized. If I think correctly, I would like to run this condition each time something is logged:
if os.path.isdir(f"\\\\remote\\folder\\"):  # if remote is accessible
    log_path = f"\\\\remote\\folder\\dev\\{d.year}\\{month}\\"
    os.makedirs(os.path.dirname(log_path), exist_ok=True)  # create this month's dir if it does not exist; logging does not handle that
else:  # if remote is not accessible
    log_path = f"localFiles\\logs\\dev\\{d.year}\\{month}\\"
    log.debug("Cannot access the remote directory. Are you connected to the internet and the VPN?")
I have found a related thread, but was not able to adjust it to my own needs: Dynamic filepath & filename for FileHandler in logger config file in python
Should I dig deeper into a custom Handler, or is there some other way? It would be enough if I could call my own function that changes the logging path when needed (or switches to a logger with the proper path) whenever something is being logged.
UPDATE:
Following blues's answer, I have tried modifying a handler to suit my needs. Unfortunately, the code below, in which I try to switch baseFilename between the local and remote paths, does not work. The logger always saves the log to the local log file (the one set when the logger was initialized), so it seems my attempts to modify baseFilename have no effect.
class HandlerCheckBefore(RotatingFileHandler):
    print("handler starts")

    def emit(self, record):
        calltime = date.today()
        if os.path.isdir(f"\\\\remote\\Path\\"):  # if remote is accessible
            print("handler remote")
            # create remote folders if not yet existent
            os.makedirs(os.path.dirname(f"\\\\remote\\Path\\{calltime.year}\\{calltime.strftime('%m')}\\"), exist_ok=True)
            if (self.level >= 20):  # if error or above
                self.baseFilename = f"\\\\remote\\Path\\{calltime.year}\\{calltime.strftime('%m')}\\error.log"
            else:
                self.baseFilename = f"\\\\remote\\Path\\{calltime.year}\\{calltime.strftime('%m')}\\{calltime.strftime('%d')}-{calltime.strftime('%m')}.log"
            super().emit(record)
        else:  # save to local
            print("handler local")
            if (self.level >= 20):  # error or above
                self.baseFilename = f"localFiles\\logs\\{calltime.year}\\{calltime.strftime('%m')}\\error.log"
            else:
                self.baseFilename = f"localFiles\\logs\\{calltime.year}\\{calltime.strftime('%m')}\\{calltime.strftime('%d')}-{calltime.strftime('%m')}.log"
            super().emit(record)
# init the logger
handler_error = HandlerCheckBefore(f"\\\\remote\\Path\\{calltime.year}\\{calltime.strftime('%m')}\\error.log", 'a', encoding="utf-8")
handler_error.setLevel(logging.ERROR)
handler_error.setFormatter(fmt)
log.addHandler(handler_error)
The best way to solve this is indeed to create a custom Handler. You can either check before each write that the directory is still there, or you can attempt to write the log and handle the resulting error in handleError, which every handler calls when an exception occurs during emit(). I recommend the former. The code below shows how both could be implemented:
import os
import logging
from logging.handlers import RotatingFileHandler

class GrzegorzRotatingFileHandlerCheckBefore(RotatingFileHandler):
    def emit(self, record):
        if os.path.isdir(os.path.dirname(self.baseFilename)):  # put appropriate check here
            super().emit(record)
        else:
            logging.getLogger('offline').error('Cannot access the remote directory. Are you connected to the internet and the VPN?')

class GrzegorzRotatingFileHandlerHandleError(RotatingFileHandler):
    def handleError(self, record):
        logging.getLogger('offline').error('Something went wrong when writing log. Probably remote dir is not accessible')
        super().handleError(record)

log = logging.getLogger('general')
log.addHandler(GrzegorzRotatingFileHandlerCheckBefore('check.log'))
log.addHandler(GrzegorzRotatingFileHandlerHandleError('handle.log'))

offline_logger = logging.getLogger('offline')
offline_logger.addHandler(logging.FileHandler('offline.log'))

log.error('test logging')

Using Python Logging module to save logs to S3, how to capture all levels of logs for all modules?

It's my first time using the logging module in Python (3.7). My code uses imported modules that also have their own log statements. When I first added log statements to my code, I didn't use getLogger(). I just used logging.basicConfig(filename) and called logging.debug() directly to log statements. When I did this, all the logs from both my script and all the imported modules were output to the same file together.
Now I need to convert my code to save logs to s3 instead of a file. I tried the solution mentioned in How Can I Write Logs Directly to AWS S3 from Memory Without First Writing to stdout? (Python, boto3) - Stack Overflow but I have two issues with it:
None of the 'prefixes' are present in the output when I check on s3.
Only INFO statements are showing up. I was under the impression that logging.basicConfig(level=logging.INFO) would make it output all logs at or above level INFO, but I'm only seeing INFO. Also, only INFO logs get printed to stdout, whereas before all levels were. I don't know why the 'prefixes' are missing.
from psaw import PushshiftAPI
api = PushshiftAPI()
import time
import logging
import boto3
import io
import atexit

def write_logs(body, bucket, key):
    s3 = boto3.client("s3")
    s3.put_object(Body=body.getvalue(), Bucket=bucket, Key=key)

logging.basicConfig(level=logging.INFO)
log = logging.getLogger()
log_stringio = io.StringIO()
handler = logging.StreamHandler(log_stringio)
log.addHandler(handler)

def collectRange(sub, start, end):
    atexit.register(write_logs, body=log_stringio, bucket="<...>", key=f'{sub}/log.txt')
    s3 = boto3.resource('s3')
    object = s3.Object('<...>', f'{sub}/{sub}#{start}-{end}.csv')
    now = time.time()
    logging.info(f'Start Time:{now}')
    logging.debug('First request')
    gen = api.search_comments(after=start, before=end, <...>, subreddit=sub)
    r = next(gen)
    <...>
    quit()
Output:
Found credentials in shared credentials file: ~/.aws/credentials
Start Time:1591310443.7060978
https://api.pushshift.io/reddit/comment/search?<...>
https://api.pushshift.io/reddit/comment/search?<...>
Desired output:
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:root:Start Time:1591310443.7060978
DEBUG:root:First request
INFO:psaw.PushshiftAPI:https://api.pushshift.io/reddit/comment/search?<...>
DEBUG:psaw.PushshiftAPI:<whatever is usually here>
DEBUG:psaw.PushshiftAPI:<whatever is usually here>
INFO:psaw.PushshiftAPI:https://api.pushshift.io/reddit/comment/search?<...>
DEBUG:psaw.PushshiftAPI:<whatever is usually here>
DEBUG:psaw.PushshiftAPI:<whatever is usually here>
Any help is appreciated. Thanks.
You can at least add the level name (or time) by following this documentation:
Changing the format of displayed messages.
And to get DEBUG messages as well, you need to use the following instead of INFO:
logging.basicConfig(..., level=logging.DEBUG)
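For completeness, here is a minimal sketch that combines both points; the format string is an assumption based on the desired output shown above, and the StringIO handler mirrors the setup from the question:

import io
import logging

log_stringio = io.StringIO()

# Configure the root logger once: DEBUG so records from imported modules pass through,
# and a format that produces the "LEVEL:logger.name:message" prefixes.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(levelname)s:%(name)s:%(message)s",
    handlers=[logging.StreamHandler(log_stringio), logging.StreamHandler()],
)

logging.info("Start Time: ...")   # INFO:root:Start Time: ...
logging.debug("First request")    # DEBUG:root:First request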

How to handle BigQuery insert errors in a Dataflow pipeline using Python?

I'm trying to create a streaming pipeline with Dataflow that reads messages from a PubSub topic to end up writing them on a BigQuery table. I don't want to use any Dataflow template.
For the moment I just want to create a pipeline in a Python 3 script executed from a Google VM instance that loads and transforms every message arriving from Pub/Sub (parsing the records it contains and adding a new field) and finally writes the results to a BigQuery table.
Simplifying, my code would be:
#!/usr/bin/env python
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud import pubsub_v1
import apache_beam as beam
import apache_beam.io.gcp.bigquery
import logging
import argparse
import sys
import json
from datetime import datetime, timedelta

def load_pubsub(message):
    try:
        data = json.loads(message)
        records = data["messages"]
        return records
    except:
        raise ImportError("Something went wrong reading data from the Pub/Sub topic")

class ParseTransformPubSub(beam.DoFn):
    def __init__(self):
        self.water_mark = (datetime.now() + timedelta(hours=1)).strftime("%Y-%m-%d %H:%M:%S.%f")

    def process(self, records):
        for record in records:
            record["E"] = self.water_mark
            yield record

def main():
    table_schema = apache_beam.io.gcp.bigquery.parse_table_schema_from_json(open("TableSchema.json").read())

    parser = argparse.ArgumentParser()
    parser.add_argument('--input_topic')
    parser.add_argument('--output_table')
    known_args, pipeline_args = parser.parse_known_args(sys.argv)

    with beam.Pipeline(argv=pipeline_args) as p:
        pipe = (p
                | 'ReadDataFromPubSub' >> beam.io.ReadStringsFromPubSub(known_args.input_topic)
                | 'LoadJSON' >> beam.Map(load_pubsub)
                | 'ParseTransform' >> beam.ParDo(ParseTransformPubSub())
                | 'WriteToAvailabilityTable' >> beam.io.WriteToBigQuery(
                      table=known_args.output_table,
                      schema=table_schema,
                      create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                      write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
               )
        result = p.run()
        result.wait_until_finish()

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    main()
(For example) The messages published to the Pub/Sub topic usually look like this:
'{"messages":[{"A":"Alpha", "B":"V1", "C":3, "D":12},{"A":"Alpha", "B":"V1", "C":5, "D":14},{"A":"Alpha", "B":"V1", "C":3, "D":22}]}'
Once the field "E" is added to the record, the structure of the record (a dictionary in Python) and the data types of its fields are what the BigQuery table expects.
The problems that I want to handle are:
If some messages come with an unexpected structure, I want to fork the pipeline flow and write them to another BigQuery table.
If some messages come with an unexpected data type in a field, an error will occur at the last stage of the pipeline, when they are written to the table. I want to manage this type of error by diverting the record to a third table.
I read the documentation found on the following pages but I found nothing:
https://cloud.google.com/dataflow/docs/guides/troubleshooting-your-pipeline
https://cloud.google.com/dataflow/docs/guides/common-errors
By the way, if I choose the option to configure the pipeline through the template that reads from a Pub/Sub subscription and writes into BigQuery, I get the following schema, which turns out to be the same one I'm looking for:
Template: Cloud Pub/Sub Subscription to BigQuery
You can't catch the errors that occur in the BigQuery sink. The messages that you write into BigQuery must already be valid.
The best pattern is to perform a transform that checks your messages' structure and field types. In case of error, you create an error flow and write this error flow to a file, for example, or to a table without a schema where you store the message as plain text.
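A minimal sketch of that pattern using Beam's tagged outputs (the CheckStructure DoFn, the expected_keys set, and the 'records' placeholder are assumptions for illustration, not part of the original pipeline):

import json
import apache_beam as beam

class CheckStructure(beam.DoFn):
    # Route well-formed records to the main output and everything else to an 'error' output.
    def process(self, record):
        expected_keys = {"A", "B", "C", "D"}  # assumed structure, taken from the sample message
        if isinstance(record, dict) and set(record) >= expected_keys:
            yield record
        else:
            # Emit the offending record on a side output so it can be written elsewhere
            yield beam.pvalue.TaggedOutput('error', json.dumps(record))

# Somewhere after the 'ParseTransform' step, with 'records' standing in for that PCollection:
outputs = records | 'CheckStructure' >> beam.ParDo(CheckStructure()).with_outputs('error', main='valid')
good, bad = outputs.valid, outputs.error
# 'good' feeds the normal WriteToBigQuery step; 'bad' can go to a plain-text error table
# (a schema with a single STRING column) or to a file via beam.io.WriteToText.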
We do the following when errors occur at the BigQuery sink (see the sketch after this list):
send a message (without the stack trace) to GCP Error Reporting, so developers are notified
log the error to Stackdriver
stop the pipeline execution (the best place for messages to wait until a developer has fixed the issue is the incoming Pub/Sub subscription)
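A rough sketch of that approach (the validate helper is hypothetical, and the exact Error Reporting call is an assumption based on the google-cloud-error-reporting client library):

import logging
import apache_beam as beam
from google.cloud import error_reporting  # assumes google-cloud-error-reporting is installed

class ConvertOrFail(beam.DoFn):
    def process(self, record):
        try:
            yield validate(record)  # hypothetical validation/conversion step
        except Exception as exc:
            error_reporting.Client().report(f"Bad record for BigQuery: {exc}")  # notify developers
            logging.error("Bad record for BigQuery: %s", exc)  # goes to Stackdriver / Cloud Logging
            raise  # failing the bundle halts progress; the message stays in the incoming Pub/Sub subscription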

FileUploadMiscError while persisting output file from Azure Batch

I'm facing the following error while trying to persist log files to Azure Blob storage from Azure Batch execution - "FileUploadMiscError - A miscellaneous error was encountered while uploading one of the output files". This error doesn't give a lot of information as to what might be going wrong. I tried checking the Microsoft Documentation for this error code, but it doesn't mention this particular error code.
Below is the relevant code for adding the task to Azure Batch that I have ported from C# to Python for persisting the log files.
Note: The container that I have configured gets created when the task is added, but there's no blob inside.
import datetime
import logging
import os

import azure.storage.blob.models as blob_model
import yaml
from azure.batch import models
from azure.storage.blob.baseblobservice import BaseBlobService
from azure.storage.common.cloudstorageaccount import CloudStorageAccount
from dotenv import load_dotenv

LOG = logging.getLogger(__name__)

def add_tasks(batch_client, job_id, task_id, io_details, blob_details):
    task_commands = "This is a placeholder. Actual code has an actual task. This gets completed successfully."

    LOG.info("Configuring the blob storage details")
    base_blob_service = BaseBlobService(
        account_name=blob_details['account_name'],
        account_key=blob_details['account_key'])
    LOG.info("Base blob service created")

    base_blob_service.create_container(
        container_name=blob_details['container_name'], fail_on_exist=False)
    LOG.info("Container present")

    container_sas = base_blob_service.generate_container_shared_access_signature(
        container_name=blob_details['container_name'],
        permission=blob_model.ContainerPermissions(write=True),
        expiry=datetime.datetime.now() + datetime.timedelta(days=1))
    LOG.info(f"Container SAS created: {container_sas}")

    container_url = base_blob_service.make_container_url(
        container_name=blob_details['container_name'], sas_token=container_sas)
    LOG.info(f"Container URL created: {container_url}")

    # fpath = task_id + '/output.txt'
    fpath = task_id
    LOG.info(f"Creating output file object:")
    out_files_list = list()
    out_files = models.OutputFile(
        file_pattern=r"../stderr.txt",
        destination=models.OutputFileDestination(
            container=models.OutputFileBlobContainerDestination(
                container_url=container_url, path=fpath)),
        upload_options=models.OutputFileUploadOptions(
            upload_condition=models.OutputFileUploadCondition.task_completion))
    out_files_list.append(out_files)
    LOG.info(f"Output files: {out_files_list}")

    LOG.info(f"Creating the task now: {task_id}")
    task = models.TaskAddParameter(
        id=task_id, command_line=task_commands, output_files=out_files_list)
    batch_client.task.add(job_id=job_id, task=task)
    LOG.info(f"Added task: {task_id}")
There is a bug in Batch's OutputFile handling which causes it to fail to upload to containers if the full container URL includes any query-string parameters other than the ones included in the SAS token. Unfortunately, the azure-storage-blob Python module includes an extra query string parameter when generating the URL via make_container_url.
This issue was just raised to us, and a fix will be released in the coming weeks, but an easy workaround is, instead of using make_container_url to craft the URL, to craft it yourself like so: container_url = 'https://{}/{}?{}'.format(blob_service.primary_endpoint, blob_details['container_name'], container_sas).
The resulting URL should look something like this: https://<account>.blob.core.windows.net/<container>?se=2019-01-12T01%3A34%3A05Z&sp=w&sv=2018-03-28&sr=c&sig=<sig> - specifically it shouldn't have restype=container in it (which is what the azure-storage-blob package is including)
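Applied to the add_tasks function in the question, the workaround would look roughly like this (a sketch reusing base_blob_service, blob_details, and container_sas as defined above):

# Instead of base_blob_service.make_container_url(...), build the URL by hand so that no
# extra query-string parameters (such as restype=container) end up in the container URL:
container_url = 'https://{}/{}?{}'.format(
    base_blob_service.primary_endpoint,   # e.g. <account>.blob.core.windows.net
    blob_details['container_name'],
    container_sas)
LOG.info(f"Container URL created: {container_url}")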

Python Cherrypy Access Log Rotation

If I want the access log for Cherrypy to only get to a fixed size, how would I go about using rotating log files?
I've already tried http://www.cherrypy.org/wiki/Logging, which seems out of date, or has information missing.
Look at http://docs.python.org/library/logging.html.
You probably want to configure a RotatingFileHandler
http://docs.python.org/library/logging.html#rotatingfilehandler
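For instance, a minimal sketch that caps CherryPy's access log at a fixed size (the file name and limits are arbitrary; the fuller answers below show a more complete setup):

import logging
from logging import handlers
import cherrypy

# Rotate the access log at ~10 MB, keeping 5 old files around.
h = handlers.RotatingFileHandler("access.log", "a", maxBytes=10 * 1024 * 1024, backupCount=5)
h.setLevel(logging.DEBUG)
h.setFormatter(cherrypy._cplogging.logfmt)
cherrypy.log.access_log.addHandler(h)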
"I've already tried http://www.cherrypy.org/wiki/Logging, which seems out of date, or has information missing."
Try adding:
import logging
import logging.handlers
import cherrypy # you might have imported this already
and instead of
log = app.log
maybe try
log = cherrypy.log
The CherryPy documentation of the custom log handlers shows this very example.
Here is the slightly modified version that I use on my app:
import logging
from logging import handlers
import cherrypy

def setup_logging():
    log = cherrypy.log

    # Remove the default FileHandlers if present.
    log.error_file = ""
    log.access_file = ""

    maxBytes = getattr(log, "rot_maxBytes", 10000000)
    backupCount = getattr(log, "rot_backupCount", 1000)

    # Make a new RotatingFileHandler for the error log.
    fname = getattr(log, "rot_error_file", "log\\error.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(logging.DEBUG)
    h.setFormatter(cherrypy._cplogging.logfmt)
    log.error_log.addHandler(h)

    # Make a new RotatingFileHandler for the access log.
    fname = getattr(log, "rot_access_file", "log\\access.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(logging.DEBUG)
    h.setFormatter(cherrypy._cplogging.logfmt)
    log.access_log.addHandler(h)

setup_logging()
Cherrypy does its logging using the standard Python logging module. You will need to change it to use a RotatingFileHandler. This handler will take care of everything for you including rotating the log when it reaches a set maximum size.
