I'm trying to learn how to use Python's apscheduler package, but periodically, it throws the following error:
No handlers could be found for logger "apscheduler.scheduler"
This message seems to be associated with errors in the scheduled jobs. For example, with jobTester as the scheduled job, the following code, which uses an undefined variable (nameStr0) in jobTester, produces the error message above:
from apscheduler.scheduler import Scheduler
from apscheduler.jobstores.shelve_store import ShelveJobStore
from datetime import datetime, timedelta
from schedJob import toyJob
def jobTester(nameStr):
    outFileName = nameStr0 + '.txt'
    outFile = open(outFileName, 'w')
    outFile.write(nameStr)
    outFile.close()
def schedTester(jobList):
    scheduler = Scheduler()
    scheduler.add_jobstore(ShelveJobStore('example.db'), 'shelve')
    refTime = datetime.now()
    for index, currJob in enumerate(jobList):
        runTime = refTime + timedelta(seconds = 15)
        jobName = currJob.name + '_' + str(index)
        scheduler.add_date_job(jobTester, runTime, name = jobName,
                               jobstore = 'shelve', args = [jobName])
    scheduler.start()
    stopTime = datetime.now() + timedelta(seconds = 45)
    print "Starting wait loop .....",
    while stopTime > datetime.now():
        pass
    print "Done"

def doit():
    names = ['Alan','Barbara','Charlie','Dana']
    jobList = [toyJob(n) for n in names]
    schedTester(jobList)
This may be seen by running this code (stored in the file schedTester.py) as follows:
>>> import schedTester
>>> schedTester.doit()
No handlers could be found for logger "apscheduler.scheduler"
Starting wait loop ..... Done
However, when I replace nameStr0 with nameStr (i.e. proper spelling of variable name), the code runs fine without the error message.
How do I create a logger for apscheduler.scheduler? Am I missing something in the section of the docs dealing with configuring the scheduler?
Am I correct in thinking of this logger as some sort of stderr? If so, where will I look for its output (if that is not determined by the way I set it up)?
You can just create a default logger and everything should go to it:
import logging
logging.basicConfig()
The reason you only have a problem when you use a variable that hasn't been defined is that this causes the jobTester function to raise an error, which apscheduler catches and tries to record with logging.error(). Since you haven't set up the logger, it complains that no handlers could be found.
If you read up on Python logging you will see that there are many ways to configure it. You could have it log everything to a file or print it to stdout.
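For example, a minimal configuration that routes everything (including apscheduler's messages) to a file instead of stderr could look like this; the filename and level here are just placeholders to adjust to your setup:

import logging

# Hypothetical configuration: write WARNING and above to scheduler.log
logging.basicConfig(
    filename='scheduler.log',
    level=logging.WARNING,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s',
)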
Related
I wrote the module below to standardize how my logfiles are written and to make it easy to change whether events get printed and/or written to the logfile.
FILE: Logging.py
================
import os
import datetime
import io

class Logfile():
    def __init__(self, name):
        self.logFile = os.getcwd() + r'\.Log\\' + name + '_' + str(datetime.date.today().year) + ('00' + str(datetime.date.today().month))[-2:] + '.log'
        self.printLog = False
        self.debug = False

        # Setup logFile and consolidated folder
        if not os.path.exists(os.path.dirname(self.logFile)):
            os.mkdir(os.path.dirname(self.logFile))

        # Check if logfile exists.
        if not os.path.exists(self.logFile):
            with open(self.logFile, 'w') as l:
                pass

    # Write LogFile entry
    def logEvent(self, eventText, debugOnly):  # Function to add an event to the logfile
        # If this is marked as debugging only AND debugging is off
        if debugOnly == True and self.debug == False:
            return
        if self.printLog == True:
            print(datetime.datetime.strftime(datetime.datetime.now(), '%m/%d/%Y, %I:%M:%S %p, ') + str(eventText))
        with open(self.logFile, 'a') as l:
            l.seek(0)
            l.write(datetime.datetime.strftime(datetime.datetime.now(), '%m/%d/%Y, %I:%M:%S %p, ') + str(eventText) + '\n')
        return
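For context, using it directly from a single script works roughly like this (names here are hypothetical):

from Logging import Logfile

log = Logfile('MyApp')            # creates .\.Log\MyApp_YYYYMM.log if it doesn't exist
log.printLog = True               # also echo entries to the console
log.logEvent('Application started', False)
log.logEvent('Debug-only detail', True)   # skipped unless log.debug is True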
This is very handy, but I am having trouble understanding how to make this available to all of my classes. For example, if I import the following module, I am not sure how to use the logfile I created within my main script.
FILE: HelloWorld.py
===================
class HelloWorld():
    def __init__(self):
        log.logEvent('You have created a HelloWorld Object!', False)
Main Script Here:
import Logging
from HelloWorld import HelloWorld
log = logging.Logfile
hw = HelloWorld()
^^ This will fail because HelloWorld.py does not know that log is a thing. What is the proper way to handle this sort of situation?
I believe you're trying to do something like this. (As a side note, you may want to look into using Python's built-in logging module.)
FILE: HelloWorld.py
===================
# import Logfile
from .Logging import Logfile

# create new Logfile instance
log = Logfile(name='log name')

class HelloWorld():
    def __init__(self):
        # call the logEvent method on your Logfile instance
        log.logEvent('You have created a HelloWorld Object!', False)
FILE: Main.py
===================
# import HelloWorld
from .HelloWorld import HelloWorld

# create new HelloWorld instance
hw = HelloWorld()
Also, to make the directory importable as a package you will need to add an __init__.py file to it.
This problem is easily solved by using the built-in logging module. As for the broader question of how to use one log within all of my modules, I assume the answer can be found by reading through the code of the logging module and mimicking that approach.
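For reference, a minimal sketch of that standard-library approach (module and file names here are only illustrative): each module asks for its own named logger, and the main script configures the handlers once.

# HelloWorld.py (sketch)
import logging

log = logging.getLogger(__name__)   # no handlers configured in the module itself

class HelloWorld():
    def __init__(self):
        log.info('You have created a HelloWorld Object!')

# Main.py (sketch)
import logging

# one-time configuration; every named logger in every module inherits it
logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s %(name)s %(levelname)s: %(message)s')

from HelloWorld import HelloWorld
hw = HelloWorld()   # the INFO message above ends up in app.log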
I have created a pytest.ini file,
addopts = --resultlog=log.txt
This creates a log file, but I would like to create a new log file every time I run the tests.
I am new to pytest, so pardon me if I have missed anything while reading the documentation.
Thanks
Note
The --result-log argument is deprecated and scheduled for removal in version 6.0 (see Deprecations and Removals: Result log). The possible replacement implementation is discussed in issue #4488, so watch out for the next major version bump: the code below will stop working with pytest==6.0.
Answer
You can modify the resultlog in the pytest_configure hookimpl. Example: put the code below in the conftest.py file in your project root dir:
import datetime

def pytest_configure(config):
    if not config.option.resultlog:
        timestamp = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.resultlog = 'log.' + timestamp
Now if --result-log is not passed explicitly (so you have to remove addopts = --resultlog=log.txt from your pytest.ini), pytest will create a log file ending with a timestamp. Passing --result-log with a log file name will override this behaviour.
Answering my own question.
As hoefling mentioned, --result-log is deprecated, so I had to find a way to do it without that flag. Here's how I did it:
conftest.py
from datetime import datetime
import logging

log = logging.getLogger(__name__)

def pytest_assertrepr_compare(op, left, right):
    """ This function logs an entry every time an assert fails """
    log.error('Comparing Foo instances: vals: %s != %s \n' % (left, right))
    return ["Comparing Foo instances:", " vals: %s != %s" % (left, right)]

def pytest_configure(config):
    """ Create a log file if log_file is not mentioned in the *.ini file """
    if not config.option.log_file:
        timestamp = datetime.strftime(datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.log_file = 'log.' + timestamp
pytest.ini
[pytest]
log_cli = true
log_cli_level = CRITICAL
log_cli_format = %(message)s
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format = %Y-%m-%d %H:%M:%S
test_my_code.py
import logging

log = logging.getLogger(__name__)

def test_my_code():
    ****test code
You can have different pytest run logs by naming the log file the time when test execution starts.
pytest tests --log-file $(date '+%F_%H:%M:%S')
This will create a log file for each test run, and the name of the log file will be the timestamp of that run.
$(date '+%F_%H:%M:%S') is bash command substitution that produces the current timestamp in Date_Hour:Min:Sec format.
I know cron runs in a different environment than the command line, but I'm using absolute paths everywhere and I don't understand why my script behaves differently. I believe it is somehow related to my cron_supervisor, which runs the Django manage.py within a subprocess.
Cron:
0 * * * * /home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py cron_supervisor --command="/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent"
This calls the cron_supervisor, which in turn calls the script, but the script is not executed the way it would be if I ran:
/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent
Is there something particular to be done for the script to be called properly when running it through another script?
Here is the supervisor, which basically handles errors and makes sure we get warned if something goes wrong within the cron scripts themselves.
import logging
import os
from subprocess import PIPE, Popen

from django.core.management.base import BaseCommand

from command_utils import email_admin_error, isomorphic_logging
from utils.send_slack_message import send_slack_message

CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIR = CURRENT_DIR + '/../../../'

logging.basicConfig(
    level=logging.INFO,
    filename=PROJECT_DIR + 'cron-supervisor.log',
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

class Command(BaseCommand):
    help = "Control a subprocess"

    def add_arguments(self, parser):
        parser.add_argument(
            '--command',
            dest='command',
            help="Command to execute",
        )
        parser.add_argument(
            '--mute_on_success',
            dest='mute_on_success',
            action='store_true',
            help="Don't post any message on success",
        )

    def handle(self, *args, **options):
        try:
            isomorphic_logging(logging, "Starting cron supervisor with command \"" + options['command'] + "\"")
            if options['command']:
                self.command = options['command']
            else:
                error_message = "Empty required parameter --command"
                # log error
                isomorphic_logging(logging, error_message, "error")
                # send slack message
                send_slack_message("Cron Supervisor Error: " + error_message)
                # send email to admin
                email_admin_error("Cron Supervisor Error", error_message)
                raise ValueError(error_message)

            if options['mute_on_success']:
                self.mute_on_success = True
            else:
                self.mute_on_success = False

            # running process
            process = Popen([self.command], stdout=PIPE, stderr=PIPE, shell=True)
            output, error = process.communicate()
            if output:
                isomorphic_logging(logging, "Output from cron:" + output)

            # check for any subprocess error
            if process.returncode != 0:
                error_message = 'Command \"{command}\" - Error \nReturn code: {code}\n```{error}```'.format(
                    code=process.returncode,
                    error=error,
                    command=self.command,
                )
                self.handle_error(error_message)
            else:
                message = "Command \"{command}\" ended without error".format(command=self.command)
                isomorphic_logging(logging, message)
                # post message on slack if process isn't muted_on_success
                if not self.mute_on_success:
                    send_slack_message(message)
        except Exception as e:
            error_message = 'Command \"{command}\" - Error \n```{error}```'.format(
                error=e,
                command=self.command,
            )
            self.handle_error(error_message)

    def handle_error(self, error_message):
        # log the error in local file
        isomorphic_logging(logging, error_message)
        # post message in slack
        send_slack_message(error_message)
        # email admin
        email_admin_error("Cron Supervisor Error", error_message)
Example of a script that is not executed properly when called by cron through the cron_supervisor:
# -*- coding: utf-8 -*-
import json
import logging
import os

from django.conf import settings
from django.core.management.base import BaseCommand

from utils.lock import handle_lock

logging.basicConfig(
    level=logging.INFO,
    filename=os.path.join(settings.BASE_DIR, 'crons.log'),
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

class Command(BaseCommand):
    help = "Envoi de l'argent en attente"

    @handle_lock
    def handle(self, *args, **options):
        logging.info("some logs that won't be logged (handle is not called)")

logging.info("Those logs will be correctly logged")
Additionally, I have another issue with the logging which I don't quite understand either: I tell it to store logs in cron-supervisor.log, but they don't end up there, and I couldn't figure out why. (That's not related to my main issue, it just doesn't help with debugging.)
Your cron job can't just run the Python interpreter in the virtualenv; this is completely insufficient. You need to activate the env just like in an interactive environment.
0 * * * * . /home/p1/.virtualenvs/prod/bin/activate; python /home/p1/p1/manage.py cron_supervisor --command="python /home/p1/p1/manage.py envoyer_argent"
This is already complex enough that you might want to create a separate wrapper script containing these commands.
Without proper diagnostics of how your current script doesn't work, it's entirely possible that this fix alone is insufficient. Cron jobs do not only (or particularly) need absolute paths; the main difference compared to interactive shells is that cron jobs run with a different and more spare environment, where e.g. the shell's PATH, various library paths, environment variables, etc. can be different or missing altogether; and of course, no interactive facilities are available.
The system variables will hopefully be taken care of by your virtualenv; if it's correctly done, activating it will set up all the variables (PATH, PYTHONPATH, etc) your script needs. There could still be things like locale settings which are set up by your shell only when you log in interactively; but again, without details, let's just hope this isn't an issue for you.
The reason some people recommend absolute paths is that this will work regardless of your working directory. But a correctly written script should work fine in any directory; if it matters, the cron job will start in the owner's home directory. If you wanted to point to a relative path from there, this will work fine inside a cron job just as it does outside.
As an aside, you probably should not use subprocess.Popen() if one of the higher-level wrappers from the subprocess module does what you want. Unless compatibility with legacy Python versions is important, you should probably use subprocess.run() ... though running Python as a subprocess of Python is also often a useless complication. See also my answer to this related question.
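For illustration, a sketch of what the Popen/communicate section of the supervisor could become with subprocess.run (Python 3.7+); command stands for the string taken from the --command option, and handle_error for the supervisor's existing error handler:

import logging
from subprocess import run

# Sketch only: run the wrapped command and inspect the completed process,
# instead of managing Popen and communicate() by hand.
result = run(command, shell=True, capture_output=True, text=True)
if result.returncode != 0:
    handle_error(result.stderr)
elif result.stdout:
    logging.info("Output from cron: %s", result.stdout)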
I'm studying vCenter 6.5 and the community samples help a lot, but in this particular situation I can't figure out what's going on. The script:
from __future__ import with_statement
import atexit

from tools import cli
from pyVim import connect
from pyVmomi import vim, vmodl

def get_args():
    *Boring args parsing works*
    return args

def main():
    args = get_args()
    try:
        service_instance = connect.SmartConnectNoSSL(host=args.host,
                                                     user=args.user,
                                                     pwd=args.password,
                                                     port=int(args.port))
        atexit.register(connect.Disconnect, service_instance)
        content = service_instance.RetrieveContent()
        vm = content.searchIndex.FindByUuid(None, args.vm_uuid, True)
        creds = vim.vm.guest.NamePasswordAuthentication(
            username=args.vm_user, password=args.vm_pwd
        )
        try:
            pm = content.guestOperationsManager.processManager
            ps = vim.vm.guest.ProcessManager.ProgramSpec(
                programPath=args.path_to_program,
                arguments=args.program_arguments
            )
            res = pm.StartProgramInGuest(vm, creds, ps)
            if res > 0:
                print "Program executed, PID is %d" % res
        except IOError, e:
            print e
    except vmodl.MethodFault as error:
        print "Caught vmodl fault : " + error.msg
        return -1
    return 0

# Start program
if __name__ == "__main__":
    main()
When I execute it in console, it successfully connects to the target virtual machine and prints
Program executed, PID is 2036
In Task Manager I see a process with the mentioned PID; it was created by the correct user, but there is no GUI for the process (calc.exe). Right-clicking does not allow me to "Expand" the process.
I suppose that this process was created with special parameters, maybe in a different session.
In addition, I tried to run a batch file to check if it actually executes, but the answer is no, the batch file does not execute.
Any help, advice, or clues would be awesome.
P.S. I tried other scripts and successfully transferred a file to the VM.
P.P.S. Sorry for my English.
Update: All such processes start in session 0.
Have you tried interactiveSession ?
https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/GuestAuthentication.rst
This boolean argument is passed to NamePasswordAuthentication and means the following:
This is set to true if the client wants an interactive session in the guest.
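If that is what's missing, the credentials object from the question would be built with that flag set (untested sketch, same variable names as in the script above):

creds = vim.vm.guest.NamePasswordAuthentication(
    username=args.vm_user,
    password=args.vm_pwd,
    interactiveSession=True   # request a session in the interactive (console) desktop
)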
File "G:\Python25\Lib\site-packages\PyAMF-0.6b2-py2.5-win32.egg\pyamf\util\__init__.py", line 15, in <module>
ImportError: cannot import name python
How do I fix it?
If you need any info to know how to fix this problem, I can explain, just ask.
Thanks
Code:
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import webapp
from TottysGateway import TottysGateway
import logging

def main():
    services_root = 'services'
    #services = ['users.login']
    #gateway = TottysGateway(services, services_root, logger=logging, debug=True)
    #app = webapp.WSGIApplication([('/', gateway)], debug=True)
    #run_wsgi_app(app)

if __name__ == "__main__":
    main()
Code:
from pyamf.remoting.gateway.google import WebAppGateway
import logging

class TottysGateway(WebAppGateway):
    def __init__(self, services_available, root_path, not_found_service, logger, debug):
        # override the constructor and then call the super
        self.services_available = services_available
        self.root_path = root_path
        self.not_found_service = not_found_service
        WebAppGateway.__init__(self, {}, logger=logging, debug=True)

    def getServiceRequest(self, request, target):
        # override the original getServiceRequest method
        try:
            # try looking for the service in the services list
            return WebAppGateway.getServiceRequest(self, request, target)
        except:
            pass
        try:
            # don't know what it does but is an error for now
            service_func = self.router(target)
        except:
            if(target in self.services_available):
                # only if it is an available service import its module
                # so it doesn't access services that should be hidden
                try:
                    module_path = self.root_path + '.' + target
                    paths = target.rsplit('.')
                    func_name = paths[len(paths) - 1]
                    import_as = '_'.join(paths) + '_' + func_name
                    import_string = "from "+module_path+" import "+func_name+' as service_func'
                    exec import_string
                except:
                    service_func = False
            if(not service_func):
                # if it is not found load the default not found service
                module_path = self.rootPath + '.' + self.not_found_service
                import_string = "from "+module_path+" import "+func_name+' as service_func'
            # add the service loaded above
            assign_string = "self.addService(service_func, target)"
            exec assign_string
            return WebAppGateway.getServiceRequest(self, request, target)
You need to post your full traceback. What you show here isn't all that useful. I ended up digging up line 15 of pyamf/util/__init__.py. The code you should have posted is
from pyamf import python
This should not fail unless your local environment is messed up.
Can you 'import pyamf.util' and 'import pyamf.python' in an interactive Python shell? What about if you start Python while in /tmp (on the assumption that you might have a file named 'pyamf.py' in the current directory, which would be a bad thing)?
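That is, from a directory that does not contain a stray pyamf.py, something like:

>>> import pyamf.util
>>> import pyamf.python
>>> pyamf.__file__    # should point into site-packages, not the current directory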
= (older comment below) =
Fix your question. I can't even tell where line 15 of util/__init__.py is supposed to be. Since I can't figure that out, I can't answer your question. Instead, I'll point out ways to improve your question and code.
First, use the markup language correctly, so that all the code is in a code block. Make sure you've titled the code, so we know it's from util/__init__.py and not some random file.
In your error message, include the full traceback, not just the last two lines.
Stop using parens in things like "if(not service_func):" and use a space instead, so it's "if not service_func:". This is discussed in PEP 8.
Read the Python documentation and learn how to use the language. Something like "func_name = paths[len(paths) - 1]" should be "func_name = paths[-1]"
Learn about the import function and don't use "exec" for this case. Nor do you need the "exec assign_string" -- just do the "self.addService(service_func, target)".
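As a rough sketch (untested against the rest of the gateway code), the exec-based import could use the built-in __import__ function instead; module_path, func_name and target are the same names built in getServiceRequest above:

# Import the service module by its dotted path and pull the function off it.
module = __import__(module_path, globals(), locals(), [func_name])
service_func = getattr(module, func_name, None)
if service_func is not None:
    self.addService(service_func, target)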