How to send logger information to seq - python

I am sending my logger output to Seq via the seqlog module. I have:
_log_format = f"%(asctime)s - [%(levelname)s] - request_id=%(request_id)s - %(name)s - (%(filename)s).%(funcName)s(%(lineno)d) - %(message)s"
logger.info("Hello, {name}", name="world")
As a result, in Seq I get only the bare message: none of the fields from the format string above (asctime, levelname, request_id, name, filename, funcName, lineno) are added to the Seq event.
The stream output, on the other hand, is fine:
2021-08-20 14:40:24,244 - [INFO] - request_id=None - app - (__init__.py).create_app(43) - Hello, world
I send logs to Seq this way:
import logging
import seqlog

seqlog.log_to_seq(
    server_url="http://localhost:5341/",
    api_key="My API Key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,  # seconds
    override_root_logger=True,
)
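For reference, the field names in that format string are only rendered when a logging.Formatter is attached to a text-based handler; a structured sink like Seq stores the message template and event properties instead, which would explain why the stream output shows the fields while Seq does not. Below is a minimal standard-library sketch of the same format on a plain stream handler (the logger name is illustrative, and request_id is omitted because it would need to be injected as an extra):

```python
import io
import logging

_log_format = ("%(asctime)s - [%(levelname)s] - %(name)s - "
               "(%(filename)s).%(funcName)s(%(lineno)d) - %(message)s")

# A StringIO stream stands in for the console.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(_log_format))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Hello, %s", "world")
```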

Related

Microseconds do not work in Python logger format

For some reason my Python logger does not want to recognize microseconds format.
import logging, io
stream = io.StringIO()
logger = logging.getLogger("TestLogger")
logger.setLevel(logging.INFO)
logger.propagate = False
log_handler = logging.StreamHandler(stream)
log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',"%Y-%m-%d %H:%M:%S.%f %Z")
log_handler.setFormatter(log_format)
logger.addHandler(log_handler)
logger.info("This is test info log")
print(stream.getvalue())
It returns:
2023-01-06 18:52:34.%f UTC - TestLogger - INFO - This is test info log
Why are microseconds missing?
Update
I am running
Python 3.10.4
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
The issue is that the default formatTime implementation converts record.created with time.localtime and formats the resulting struct_time with time.strftime; since a struct_time carries no millisecond or microsecond information, the %f directive passes through unformatted.
Also note that the LogRecord calculates the milliseconds separately and stores them in a separate attribute named msecs.
To get what you're looking for we need a custom version of the Formatter class that uses a different converter than time.localtime and is able to interpret the microseconds:
from datetime import datetime

class MyFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        if not datefmt:
            return super().formatTime(record, datefmt=datefmt)
        return datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

...
log_format = MyFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                         "%Y-%m-%d %H:%M:%S.%f %Z")
...
Should output:
2023-01-06 17:47:54.828521 EST - TestLogger - INFO - This is test info log
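The custom Formatter above can be exercised end to end; here is a self-contained sketch (the logger name and message mirror the question, the rest is illustrative):

```python
import logging
from datetime import datetime

class MyFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        if not datefmt:
            return super().formatTime(record, datefmt=datefmt)
        # datetime.strftime understands %f, unlike time.strftime.
        return datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

handler = logging.StreamHandler()
handler.setFormatter(MyFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                                 '%Y-%m-%d %H:%M:%S.%f %Z'))
logger = logging.getLogger('TestLogger')
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

logger.info('This is test info log')
```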
I found that the answer given by Ashwini Chaudhary was not applicable in my case, and found a slightly simpler solution: define a function that does the datetime-based formatting and assign it to the logging.Formatter.formatTime method, i.e.:
import datetime
import logging

def _formatTime(self, record, datefmt: str = None) -> str:
    return datetime.datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                               "%Y-%m-%d %H:%M:%S.%f %Z")
# Monkey-patch all Formatter instances to use the datetime-based implementation.
logging.Formatter.formatTime = _formatTime
And then create your logger as normal.

Cannot read channel message from slack in Python using slack SocketModeHandler

Below is the Python code for reading and responding to messages from a Slack channel. I wrote this script following their tutorials and ended up with this problem. I am also unable to send a message to Slack using client.chat_postMessage(channel="XXXXXXXXXXX", text=msg, user="XXXXXXXXXXX").
I don't know why, but when I type the command "/hi" in the channel, the script receives the event and prints the data; if I try a keyword like "check" or "knock knock", it does not respond.
import os
import time
import re
from datetime import datetime
from os.path import join, dirname

from dotenv import load_dotenv
# Use the package we installed
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)

# Initializes your app with your bot token and signing secret
app = App(
    token=os.environ['SLACK_BOT_TOKEN'],
    signing_secret=os.environ['SIGNING_SECRET']
)

# Add functionality here
@app.message("check")
def say_hello(message, client, body, logger):
    print(message)
    print(client)
    print(body)
    msg = "Hi there from Python"
    try:
        client.chat_postMessage(channel="XXXXXXXXXXX", text=msg, user="XXXXXXXXXXX")
    except Exception as e:
        logger.exception(f"Failed to post a message {e}")
        print(e)

@app.message("knock knock")
def ask_who(message, say):
    say("_Who's there?_")

@app.event("message")
def handle_message_events(body, logger):
    logger.info(body)
    print("messaging", body)

@app.command("/hi")
def handle_some_command(ack, body, logger):
    ack()
    logger.info(body)
    print(body)

# Start your app
if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
Here is my Slack app's manifest:
_metadata:
  major_version: 1
  minor_version: 1
display_information:
  name: Hotline App
features:
  app_home:
    home_tab_enabled: true
    messages_tab_enabled: true
    messages_tab_read_only_enabled: false
  bot_user:
    display_name: Hotline Bot
    always_online: false
  slash_commands:
    - command: /hi
      description: greets user
      should_escape: false
oauth_config:
  scopes:
    user:
      - chat:write
      - channels:read
      - im:history
      - channels:history
      - groups:history
    bot:
      - incoming-webhook
      - calls:read
      - calls:write
      - app_mentions:read
      - channels:history
      - channels:join
      - channels:manage
      - channels:read
      - chat:write
      - chat:write.customize
      - chat:write.public
      - commands
      - dnd:read
      - emoji:read
      - files:read
      - files:write
      - groups:history
      - groups:read
      - groups:write
      - im:history
      - im:read
      - im:write
      - links:read
      - links:write
      - mpim:history
      - mpim:read
      - mpim:write
      - pins:read
      - pins:write
      - reactions:read
      - reactions:write
      - reminders:read
      - reminders:write
      - remote_files:read
      - remote_files:share
      - remote_files:write
      - team:read
      - usergroups:write
      - usergroups:read
      - users.profile:read
      - users:read
      - users:read.email
      - users:write
      - workflow.steps:execute
settings:
  event_subscriptions:
    user_events:
      - channel_archive
      - channel_created
      - channel_deleted
      - channel_rename
      - message.channels
      - message.groups
      - message.im
    bot_events:
      - app_mention
      - channel_archive
      - channel_created
      - channel_deleted
      - channel_history_changed
      - channel_id_changed
      - channel_left
      - channel_rename
      - channel_shared
      - channel_unarchive
      - channel_unshared
      - dnd_updated_user
      - email_domain_changed
      - emoji_changed
      - file_change
      - file_created
      - file_deleted
      - file_public
      - file_shared
      - file_unshared
      - group_archive
      - group_deleted
      - group_history_changed
      - group_left
      - group_rename
      - group_unarchive
      - im_history_changed
      - link_shared
      - member_joined_channel
      - member_left_channel
      - message.channels
      - message.groups
      - message.im
      - message.mpim
      - pin_added
      - pin_removed
      - reaction_added
      - reaction_removed
      - subteam_created
      - subteam_members_changed
      - subteam_updated
      - team_domain_change
      - team_join
      - team_rename
      - user_change
  interactivity:
    is_enabled: true
  org_deploy_enabled: false
  socket_mode_enabled: true
Any help with this problem would reduce my headache and workload. Thanks in advance!
Kind regards,
Gohar
The bot must be a member of the channel where the message is being sent — please make sure to invite the bot into that channel and it should begin receiving those message events.
Also, this is somewhat incidental to your question, but as a security precaution, please only request the scopes necessary for your bot to function. You risk creating a token with far too permissive a number of scopes. You likely don't need user scopes for this app. The same holds true for events — consider only subscribing to events your app actually requires.

Is there any way to format my logs in Cloud Function in Python?

import logging
import google.cloud.logging


class LoggingClass:
    @staticmethod
    def _get_logging_level(level):
        level = level.lower()
        if level == 'debug':
            logging_level = logging.DEBUG
        elif level == 'info':
            logging_level = logging.INFO
        elif level == 'warning':
            logging_level = logging.WARNING
        elif level == 'error':
            logging_level = logging.ERROR
        elif level == 'critical':
            logging_level = logging.CRITICAL
        else:
            # Default logging level
            logging_level = logging.INFO
        return logging_level

    @staticmethod
    def setup_logging(level='INFO', mode='formatted'):
        client = google.cloud.logging.Client()
        client.get_default_handler()
        cloud_logger = logging.getLogger('CloudLogging')
        logging_level = LoggingClass._get_logging_level(level)
        cloud_logger.setLevel(logging_level)
        if mode == 'simple':
            formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        else:
            formatter = logging.Formatter(
                '%(process)d - %(thread)d - %(asctime)s - '
                '%(name)s - %(levelname)s - %(message)s'
            )
        chl = logging.StreamHandler()
        if cloud_logger.handlers:
            cloud_logger.handlers.pop()
        chl.setFormatter(formatter)
        cloud_logger.addHandler(chl)
        return cloud_logger


if __name__ == "__main__":
    cloud_logger = LoggingClass.setup_logging(level='INFO')
    cloud_logger.error("ok")
    cloud_logger.warning("ok1")
    cloud_logger.info("ok2")
The output I get on my Cloud Function log is:
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - ERROR - ok
ok
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - WARNING - ok1
ok1
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - INFO - ok2
ok2
Can somebody point out where I am going wrong? Why the duplication? I only want the first, formatted line of each pair in the Cloud Function logs. If I do not use google-cloud-logging, I see a significant delay in the logs; with google-cloud-logging there is no delay, but there is the duplication.
I would suggest having a look at this article: Python and Stackdriver Logging.
To the best of my understanding, you get such results because the logging happens asynchronously and in small batches (on another thread behind the scenes).
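One more thing worth checking, as an assumption on top of that: the unformatted duplicate lines look like the same records being emitted a second time by another handler reached through logger propagation. Setting propagate = False on the named logger keeps records on its own handlers only. A generic sketch of the effect (a plain root handler stands in for the platform's default handler here):

```python
import io
import logging

# A plain handler on the root logger stands in for the platform's default handler.
root_stream = io.StringIO()
logging.getLogger().addHandler(logging.StreamHandler(root_stream))

# The namespaced logger gets the formatted handler, with propagation switched off.
own_stream = io.StringIO()
cloud_logger = logging.getLogger('CloudLogging')
cloud_logger.setLevel(logging.INFO)
handler = logging.StreamHandler(own_stream)
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
cloud_logger.addHandler(handler)
cloud_logger.propagate = False  # records no longer bubble up to the root handler

cloud_logger.error('ok')
```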

How to log into different files properly in Python?

I have a logger configuration module below, my_logger.py:
import logging
import logging.handlers


def my_logger(module_name, log_file):
    logger = logging.getLogger(module_name)
    logger.setLevel(logging.DEBUG)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = logging.handlers.RotatingFileHandler(filename=log_file)
    c_handler.setLevel(logging.DEBUG)
    f_handler.setLevel(logging.DEBUG)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    return logger
This my_logger.py is under the package root directory:
my_package:
    my_logger.py
    test.py
    api/api.py
    logs/
Then in my test.py:
import os
import my_logger as logger  # assuming my_logger.py is imported under this alias

abspath = os.path.abspath(os.path.dirname(__file__))
logger_info = logger.my_logger("test_info", os.path.join(abspath, "../logs/info.log"))
logger_debug = logger.my_logger("test_debug", os.path.join(abspath, "../logs/debug.log"))
logger_error = logger.my_logger("test_error", os.path.join(abspath, "../logs/error.log"))
logger_info.info('Info test ...')
logger_debug.debug('Debug test ...')
logger_error.error('Error test ...')
I want debug messages to go to debug.log, info to info.log, and error to error.log.
In every module where I want to log, I have to repeat the following 3 lines:
logger_info = logger.my_logger(module_info, os.path.join(abspath,"../logs/info.log"))
logger_debug = logger.my_logger(module_debug, os.path.join(abspath,"../logs/debug.log"))
logger_error = logger.my_logger(module_error, os.path.join(abspath,"../logs/error.log"))
Is this normal practice? I want all log messages from all modules to go into the same 3 files under logs/.
Short answer
To achieve exactly what you described, keep your 3 loggers with their handlers (which you have in your example), but pass each logger its own level and set it at the top; there is no need to set the handlers' levels separately.
import logging
from logging import handlers


def my_logger(module_name, log_file, level):
    logger = logging.getLogger(module_name)
    logger.setLevel(level)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = handlers.RotatingFileHandler(filename=log_file)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    return logger


if __name__ == "__main__":
    logger_info = my_logger("test_info", "info.log", logging.INFO)
    logger_debug = my_logger("test_debug", "debug.log", logging.DEBUG)
    logger_error = my_logger("test_error", "error.log", logging.ERROR)
    logger_info.info('Info test ...')
    logger_debug.debug('Debug test ...')
    logger_error.error('Error test ...')
if __name__ == "__main__":
logger_info = my_logger("test_info", "info.log", logging.INFO)
logger_debug = my_logger("test_debug", "debug.log", logging.DEBUG)
logger_error = my_logger("test_error", "error.log", logging.ERROR)
logger_info.info('Info test ...')
logger_debug.debug('Debug test ...')
logger_error.error('Error test ...')
Output:
python test_logger2.py
2020-12-04 16:57:00,771 - test_info - INFO - 29 - Info test ...
2020-12-04 16:57:00,771 - test_debug - DEBUG - 30 - Debug test ...
2020-12-04 16:57:00,771 - test_error - ERROR - 31 - Error test ...
cat info.log
2020-12-04 16:57:00,771 - test_info - INFO - 29 - Info test ...
cat debug.log
2020-12-04 16:57:00,771 - test_debug - DEBUG - 30 - Debug test ...
cat error.log
2020-12-04 16:57:00,771 - test_error - ERROR - 31 - Error test ...
Long answer:
Why 3 copies?
You are calling your my_logger function 3 times, and each call adds a new file handler as well as a new stream handler to a logger. That is why you see 3 copies on your console (3 stream handlers). On top of that, all your handlers are set to DEBUG level, which is why every logger prints out every log it is given. You don't want an ERROR handler to process and print logs at DEBUG/INFO levels, so you should set its level to ERROR.
I don't think this is the standard approach to logging. You should instead have a single logger with 4 handlers (stream, file_debug, file_info, file_error). With that approach, the debug log file includes all logs, and the info log file includes both info and error logs. Details below.
import logging
from logging import handlers


def main():
    logger = logging.getLogger()
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    # You need to set the default level to the lowest (DEBUG)
    logger.setLevel(logging.DEBUG)

    c_handler = logging.StreamHandler()
    c_handler.setLevel(logging.DEBUG)
    c_handler.setFormatter(c_format)

    f1_handler = handlers.RotatingFileHandler("debug.log")
    f1_handler.setLevel(logging.DEBUG)
    f1_handler.setFormatter(c_format)

    f2_handler = handlers.RotatingFileHandler("info.log")
    f2_handler.setLevel(logging.INFO)
    f2_handler.setFormatter(c_format)

    f3_handler = handlers.RotatingFileHandler("error.log")
    f3_handler.setLevel(logging.ERROR)
    f3_handler.setFormatter(c_format)

    logger.addHandler(c_handler)
    logger.addHandler(f1_handler)
    logger.addHandler(f2_handler)
    logger.addHandler(f3_handler)

    logger.debug("A debug line")
    logger.info("An info line")
    logger.error("An error line")


if __name__ == "__main__":
    main()
The output is :
python test_logger.py
2020-12-04 16:48:56,247 - root - DEBUG - 32 - A debug line
2020-12-04 16:48:56,248 - root - INFO - 33 - An info line
2020-12-04 16:48:56,248 - root - ERROR - 34 - An error line
cat debug.log
2020-12-04 16:49:06,673 - root - DEBUG - 32 - A debug line
2020-12-04 16:49:06,673 - root - INFO - 33 - An info line
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
cat info.log
2020-12-04 16:49:06,673 - root - INFO - 33 - An info line
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
cat error.log
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
Here you see that debug.log also contains the logs from the other levels, and info.log contains the error logs too, because that is the rationale behind log levels: a handler at a lower level also records logs of higher severity (DEBUG < INFO < WARNING < ERROR). That is why what you want is not the standard way of doing it, but it can perfectly well be achieved as described in the short answer.
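As a footnote to the above: if you really do want each file to contain exactly one level rather than "that level and above", the single-logger layout can still be used by attaching a level filter to each file handler. A sketch of that alternative (class and logger names are illustrative; a StringIO stream stands in for info.log):

```python
import io
import logging

class LevelOnlyFilter(logging.Filter):
    """Pass only records whose level matches exactly."""
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

logger = logging.getLogger("exact_levels")
logger.setLevel(logging.DEBUG)
logger.propagate = False

info_stream = io.StringIO()  # stands in for info.log
info_handler = logging.StreamHandler(info_stream)
info_handler.addFilter(LevelOnlyFilter(logging.INFO))
logger.addHandler(info_handler)

logger.info("kept by the INFO-only handler")
logger.error("dropped by the INFO-only handler")
```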

Unable to add a timestamp to a logger in Python 3.5

My logger:
logging.basicConfig(filename="{}/log.log".format(config.log_dir), level=logging.DEBUG, format="%(asctime)s %(message)s", datefmt="%d-%b-%Y %I:%M:%S" )
And ... no timestamp is being added:
logging.info("fdsfdsfds")
# => fdsfdsfds
This works for me; here is my setup:
import logging
FORMAT = '%(asctime)s %(message)s'
DATE = '%d-%b-%Y %I:%M:%S'
logging.basicConfig(filename="log.log",level=logging.DEBUG,format=FORMAT,datefmt=DATE)
logger = logging.getLogger('Stackoverflow log')
logger.info('Info 1')
If you still get no timestamp, please post more of your code.
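For what it's worth, a common reason basicConfig appears to ignore format and datefmt is that the root logger already has handlers (from an earlier call or an importing library), in which case basicConfig silently does nothing. On 3.5, which predates basicConfig's force=True option, the remedy is to clear the root handlers first; a sketch:

```python
import io
import logging

# Simulate some earlier code having already configured the root logger.
logging.getLogger().addHandler(logging.StreamHandler(io.StringIO()))

# This call is now a silent no-op: the root logger already has a handler.
logging.basicConfig(format="%(asctime)s %(message)s", level=logging.DEBUG)

# Remedy: remove the existing handlers, then configure.
root = logging.getLogger()
for h in list(root.handlers):
    root.removeHandler(h)

stream = io.StringIO()
logging.basicConfig(stream=stream, format="%(asctime)s %(message)s",
                    datefmt="%d-%b-%Y %I:%M:%S", level=logging.DEBUG)
logging.info("now with a timestamp")
```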
