I get the following error when trying to run a Pyramid project. As far as I'm aware, this appeared overnight, and I've no idea how to begin to debug this:
C:\mas\mas>..\Scripts\pserve.exe serve development.ini --reload
Starting subprocess with file monitor
Traceback (most recent call last):
File "C:\mas\Scripts\pserve-script.py", line 8, in <module>
load_entry_point('pyramid==1.3.2', 'console_scripts', 'pserve')()
File "C:\mas\lib\site-packages\pyramid-1.3.2-py2.7.egg\pyramid\scripts\pserve.
py", line 47, in main
return command.run()
File "C:\mas\lib\site-packages\pyramid-1.3.2-py2.7.egg\pyramid\scripts\pserve.
py", line 221, in run
vars = self.parse_vars(restvars)
File "C:\mas\lib\site-packages\pyramid-1.3.2-py2.7.egg\pyramid\scripts\pserve.
py", line 330, in parse_vars
% arg)
ValueError: Variable assignment 'development.ini' invalid (no "=")
What is the problem, or how should I go about determining it? Sorry my question is rather vague, but if I had any more of an idea of what I was asking, I might have found the answer on Google :).
Lines 328-330 of pserve.py:
raise ValueError(
    'Variable assignment %r invalid (no "=")'
    % arg)
development.ini
[app:main]
use = egg:mas
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en
pyramid.includes = pyramid_debugtoolbar
                   pyramid_tm
                   pyramid_beaker
sqlalchemy.url = sqlite:///%(here)s/mas.db
# Cache settings
cache.regions = default_term, second, short_term, long_term
cache.type = memory
cache.second.expire = 1
cache.short_term.expire = 60
cache.default_term.expire = 300
cache.long_term.expire = 3600
# Beaker sessions
#session.type = file
#session.data_dir = %(here)s/data/sessions/data
#session.lock_dir = %(here)s/data/sessions/lock
session.type = memory
session.key = akhet_demo
session.secret = 0cb243f53ad865a0f70099c0414ffe9cfcfe03ac
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
# Begin logging configuration
[loggers]
keys = root, mas, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_mas]
level = DEBUG
handlers =
qualname = mas
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
# End logging configuration
Ugh, I've worked it out; I was looking for the problem in the wrong place. The problem is this command:
..\Scripts\pserve.exe serve development.ini --reload
Should be this command:
..\Scripts\pserve.exe development.ini --reload
I have no idea how or when my batch file got changed, but if someone has a similar problem hopefully this will help.
It appears that you have an extra serve argument in there - pserve takes the config file directly, so it should be:
..\Scripts\pserve.exe development.ini --reload
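For context: pserve treats the first positional argument as the config file and every argument after it as a name=value override. Here is a minimal sketch of that parsing, modeled on the pserve.py excerpt quoted above (not the verbatim source):
# Simplified model of pserve's parse_vars (see the quoted pserve.py excerpt);
# every trailing positional argument must look like name=value.
def parse_vars(args):
    result = {}
    for arg in args:
        if '=' not in arg:
            raise ValueError('Variable assignment %r invalid (no "=")' % arg)
        name, value = arg.split('=', 1)
        result[name] = value
    return result

# With `pserve serve development.ini`, 'serve' is taken as the config file,
# so 'development.ini' falls through to parse_vars -- and it has no '='.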
I'm trying to run simple Celery example: celery-example-local-filesystem.
Here is a task module:
#tasks.py
from celery import Celery
app = Celery('tasks', broker='pyamqp://guest@localhost//')
app.config_from_object('celeryconfig')
@app.task
def add(x, y):
    return x + y
Here is a config:
#celeryconfig.py
"""Celery configuration using local filesystem only."""
from pathlib import Path
# paths for file backend, create folders
_root = Path(__file__).parent.resolve().joinpath('data')
#_root = Path('c:\\temp').parent.resolve().joinpath('data')
_backend_folder = _root.joinpath('results')
_backend_folder.mkdir(exist_ok=True, parents=True)
_folders = {
    'data_folder_in': _root.joinpath('in'),
    'data_folder_out': _root.joinpath('in'),  # has to be the same as 'data_folder_in'
    'processed_folder': _root.joinpath('processed')
}
for fn in _folders.values():
    fn.mkdir(exist_ok=True)
# celery config
result_backend = 'file://{}'.format(str(_backend_folder))
broker_url = 'filesystem://'
broker_transport_options = {k: str(f) for k, f in _folders.items()}
task_serializer = 'json'
persist_results = True
result_serializer = 'json'
accept_content = ['json']
imports = ('tasks',)
and here is a main module:
#main.py
from celery import Celery, signature
app = Celery('tasks')
app.config_from_object('celeryconfig')
add = signature('tasks.add')
print('1 + 1 = {}'.format(add.delay(1, 1).get(timeout=3.)))
And here is the error I get when I try to run Celery on Windows:
$ celery -A tasks worker --loglevel=INFO
[2021-04-03 18:16:35,578: CRITICAL/MainProcess] Unrecoverable error: ValueError("Port could not be cast to integer value as '\\\\Users\\\\marci\\\\code\\\\django\\\\cellery_test\\\\data\\\\results'")
Traceback (most recent call last):
File "c:\python39\lib\site-packages\celery\worker\worker.py", line 203, in start
self.blueprint.start(self)
File "c:\python39\lib\site-packages\celery\bootsteps.py", line 112, in start
self.on_start()
File "c:\python39\lib\site-packages\celery\apps\worker.py", line 136, in on_start
self.emit_banner()
File "c:\python39\lib\site-packages\celery\apps\worker.py", line 170, in emit_banner
' \n', self.startup_info(artlines=not use_image))),
File "c:\python39\lib\site-packages\celery\apps\worker.py", line 232, in startup_info
results=self.app.backend.as_uri(),
File "c:\python39\lib\site-packages\celery\backends\base.py", line 143, in as_uri
url = maybe_sanitize_url(self.url or '')
File "c:\python39\lib\site-packages\kombu\utils\url.py", line 118, in maybe_sanitize_url
return sanitize_url(url, mask)
File "c:\python39\lib\site-packages\kombu\utils\url.py", line 111, in sanitize_url
return as_url(*_parse_url(url), sanitize=True, mask=mask)
File "c:\python39\lib\site-packages\kombu\utils\url.py", line 76, in url_to_parts
parts.port,
File "c:\python39\lib\urllib\parse.py", line 175, in port
raise ValueError(message) from None
ValueError: Port could not be cast to integer value as '\\Users\\marci\\code\\django\\cellery_test\\data\\results'
It looks like some issue with path decoding. Has anyone faced this issue? I would be grateful for your help!
The problem for me was the use of special characters (/, ?, #, @ and :) in the URL I passed to urllib.parse.
Once I removed them from the path, it worked beautifully.
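To illustrate (a hedged demo, but it reproduces the exact error from the traceback above): a raw Windows path pasted after file:// lands in the URL's netloc, and everything after the drive letter's colon is parsed as a port.
from urllib.parse import urlparse

# A file:// URL built from str(WindowsPath): with only two slashes the whole
# path lands in the netloc ('host:port') component of the parse result.
parts = urlparse('file://C:\\Users\\marci\\data\\results')
print(parts.netloc)  # C:\Users\marci\data\results

try:
    parts.port  # everything after 'C:' is treated as the port
except ValueError as e:
    print(e)  # Port could not be cast to integer value as '\\Users\\marci\\...'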
In my case, I created a .env file in my project folder root with the following variables:
REDIS_URL=redis://127.0.0.1:6379/
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_DB=0
Then make sure you have a redis.py file in your project, in the same folder as the wsgi.py file, with this:
import redis as r
from .settings import REDIS_HOST, REDIS_PORT, REDIS_DB
redis = r.Redis(host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB)
In the celery.py file, in the same folder as the redis.py and wsgi.py files, put code similar to this:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project.settings')
app = Celery('your_project')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
In the same folder, also make sure you have created an __init__.py file with:
from __future__ import absolute_import
from .celery import app as celery_app
And in settings.py, add the code below:
import os
from dotenv import load_dotenv
...
load_dotenv()
...
REDIS_URL = os.getenv('REDIS_URL')
REDIS_HOST = os.getenv('REDIS_HOST')
REDIS_PORT = os.getenv('REDIS_PORT')
REDIS_DB = os.getenv('REDIS_DB')
BROKER_URL = f'redis://{REDIS_HOST}:{REDIS_PORT}'
CELERY_RESULT_BACKEND = BROKER_URL
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
After this, you can deploy your project following this tutorial, for example: https://realpython.com/asynchronous-tasks-with-django-and-celery/ or this other tutorial: https://www.botreetechnologies.com/blog/implementing-celery-using-django-for-background-task-processing/
I hope this helps.
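As a quick sanity check once the pieces above are in place, a minimal task can live in any installed app, where app.autodiscover_tasks() will find it (a hedged sketch; your_app is a placeholder name):
# your_app/tasks.py -- picked up by app.autodiscover_tasks() above.
from celery import shared_task

@shared_task
def add(x, y):
    """Trivial task to verify the broker and result backend wiring."""
    return x + y

# Usage (e.g. from a Django shell):
#   from your_app.tasks import add
#   add.delay(1, 2).get(timeout=5)  # -> 3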
I would first check whether the result backend URL is correct (see https://en.wikipedia.org/wiki/File_URI_scheme for more information). I had no idea filesystem:// was a possible broker alternative. Even if it is, it is most likely highly experimental, so I recommend not using it until it reaches maturity (which I sincerely doubt will ever happen, as I really do not see the point - Celery is supposed to be a DISTRIBUTED system, so a filesystem broker makes no sense to me). Please use the brokers listed here: https://docs.celeryproject.org/en/stable/getting-started/brokers/index.html, or do not use Celery at all if you have a hard requirement to use something that is not on that list.
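Building on that: the crash in the traceback comes from 'file://{}'.format(str(_backend_folder)) producing only two slashes on Windows, so urllib reads the drive letter as host:port. A hedged sketch of a celeryconfig.py fix using a proper file:/// URI (this fixes the URL parsing; whether the filesystem backend is otherwise reliable on Windows is a separate question, per the caveats above):
from pathlib import Path

_backend_folder = Path(__file__).parent.resolve() / 'data' / 'results'
_backend_folder.mkdir(exist_ok=True, parents=True)

# as_posix() yields 'C:/Users/...', so the URI gets its third slash and an
# empty netloc -- urllib no longer mistakes the path for a host:port pair.
result_backend = 'file:///' + _backend_folder.as_posix()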
I have finished a Flask app. When I run it with python run.py, the app works perfectly.
But when I try to open the Flask shell with flask shell, or even run just flask, it tells me:
Traceback (most recent call last):
File "f:\programs\anaconda\envs\web\lib\site-packages\flask\cli.py", line 556, in list_commands
rv.update(info.load_app().cli.list_commands(ctx))
File "f:\programs\anaconda\envs\web\lib\site-packages\flask\cli.py", line 388, in load_app
app = locate_app(self, import_name, name)
File "f:\programs\anaconda\envs\web\lib\site-packages\flask\cli.py", line 257, in locate_app
return find_best_app(script_info, module)
File "f:\programs\anaconda\envs\web\lib\site-packages\flask\cli.py", line 83, in find_best_app
app = call_factory(script_info, app_factory)
File "f:\programs\anaconda\envs\web\lib\site-packages\flask\cli.py", line 117, in call_factory
return app_factory(script_info)
File "C:\Users\zkhp\Desktop\flask-bigger-master\backend\startup.py", line 41, in create_app
app.config['SECRET_KEY'] = config.get('secret', '!secret!')
AttributeError: 'ScriptInfo' object has no attribute 'get'
The last frame points here:
def create_app(config):
    app = Flask(
        __name__,
        template_folder=template_folder,
        static_folder=static_folder
    )
    app.config['SECRET_KEY'] = config.get('secret', '!secret!')
The config is a dictionary, which is given by:
def start_server(run_cfg=None, is_deploy=False):
    config = {
        'use_cdn': False,
        'debug': run_cfg.get('debug', False),
        'secret': md5('!secret!'),
        'url_prefix': None,
        'debugtoolbar': True
    }
    app = create_app(config)
I am confused about how the dictionary config was transformed into a ScriptInfo.
And what should I do to solve the problem?
Seeing that you've already resolved your initial query, I wanted to suggest a better-structured config write-up for your future Flask apps, which would also make it easier to add more config variables should your app grow bigger.
Consider keeping the configs in a module of their own, preferably in a folder named instance in the app's root folder. Here's a sample.
"""
This module sets the configurations for the application
"""
import os
class Config(object):
    """Parent configuration class."""
    DEBUG = False
    CSRF_ENABLED = True
    SECRET_KEY = os.getenv("SECRET_KEY")
    DATABASE_URL = os.getenv("DATABASE_URL")
    BUNDLE_ERRORS = True

class DevelopmentConfig(Config):
    """Development phase configurations"""
    DEBUG = True

class TestingConfig(Config):
    """Testing Configurations."""
    TESTING = True
    DEBUG = True
    DATABASE_URL = os.getenv("DATABASE_TEST_URL")

class ReleaseConfig(Config):
    """Release Configurations."""
    DEBUG = False
    TESTING = False

app_config = {
    'development': DevelopmentConfig,
    'testing': TestingConfig,
    'release': ReleaseConfig,
}
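For completeness, here is a hedged sketch of how the mapping above might be consumed by an app factory (the module path instance.config and the FLASK_CONFIG variable are assumptions, not part of the original answer):
import os

from flask import Flask

from instance.config import app_config  # the mapping defined above

def create_app(config_name=None):
    """Build the app from one of the named configurations."""
    config_name = config_name or os.getenv('FLASK_CONFIG', 'development')
    app = Flask(__name__)
    app.config.from_object(app_config[config_name])
    return app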
Now I have solved the problem.
I have a file manage.py to handle all shell commands, so the right operation is to run:
python manage.py shell
And now it works normally. (OK, I still don't know why……)
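For what it's worth, the traceback in the question already hints at the "why": older Flask CLI versions auto-detect a factory named create_app and call it themselves (return app_factory(script_info) in cli.py), passing their own ScriptInfo object instead of your config dict. A hedged, defensive sketch of a factory that tolerates both callers:
from flask import Flask

def create_app(config=None):
    # Older Flask CLIs call a factory named create_app with a ScriptInfo
    # object; guard so the dict-based call from start_server() still works.
    if not isinstance(config, dict):
        config = {}
    app = Flask(__name__)
    app.config['SECRET_KEY'] = config.get('secret', '!secret!')
    return app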
I am not getting uWSGI stats with uwsgitop and a socket. I configured uWSGI stats with a socket, and when I try to get the stats using the command:
uwsgitop /var/www/uwsgi/proj.socket
it throws this error:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I am using uwsgi version 2.0.17.1.
Here is my uwsgi ini file
[uwsgi]
# Multi Thread Support
enable-threads = true
# Django-related settings
# the base directory (full path)
chdir = /home/user/base-dir/proj-path/
# Django's wsgi file
module = proj.wsgi
# the virtualenv (full path)
home = /home/user/base-path/
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
socket = /var/www/uwsgi/proj.socket
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
daemonize = /var/www/uwsgi/uwsgi.log
pidfile = /var/www/uwsgi/uwsgi_hub.pid
logto = /var/log/proj_uwsgi%n.log
uid = user
gid = user
http-auto-gzip = true
memory-report = True
py-tracebacker=/var/www/uwsgi/proj.socket
--stats /var/www/uwsgi/proj.socket
I think that you should have something like this in your config file:
socket = /var/www/uwsgi/proj.socket
stats = /var/www/uwsgi/stats.socket
And run uwsgitop on the stats socket, like so:
uwsgitop /var/www/uwsgi/stats.socket
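To verify the stats socket is serving what uwsgitop expects, you can also read it directly: the uWSGI stats server dumps one JSON document on connect and then closes the connection. A hedged sketch (socket path taken from the answer above):
import json
import socket

# The uWSGI stats server writes a single JSON blob and closes the connection.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect('/var/www/uwsgi/stats.socket')
data = b''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    data += chunk
s.close()

stats = json.loads(data)         # JSONDecodeError here means this socket is
print(stats.get('workers', []))  # not the stats server (e.g. the app socket)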
I am trying to enable my python logging using the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.config
import os
test_filename = 'my_log_file.txt'
try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.error("No loggingpy.conf to parse", exc_info=e)
    logging.basicConfig(level=logging.WARNING, format="%(asctime)-15s %(message)s")
test1_log = logging.getLogger("test1")
test1_log.critical("test1_log crit")
test1_log.error("test1_log error")
test1_log.warning("test1_log warning")
test1_log.info("test1_log info")
test1_log.debug("test1_log debug")
I would like to use a loggingpy.conf file to control the logging like the following:
[loggers]
keys=root
[handlers]
keys=handRoot
[formatters]
keys=formRoot
[logger_root]
level=INFO
handlers=handRoot
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(test_filename,)
[formatter_formRoot]
format=%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Here I am trying to route the logging to the file named by the local "test_filename". When I run this, I get:
ERROR:root:No loggingpy.conf to parse
Traceback (most recent call last):
File "logging_test.py", line 8, in <module>
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
File "/usr/lib/python2.7/logging/config.py", line 85, in fileConfig
handlers = _install_handlers(cp, formatters)
File "/usr/lib/python2.7/logging/config.py", line 162, in _install_handlers
args = eval(args, vars(logging))
File "<string>", line 1, in <module>
NameError: name 'test_filename' is not defined
CRITICAL:test1:test1_log crit
ERROR:test1:test1_log error
WARNING:test1:test1_log warning
Reading the docs, it seems that the "args" value in the config is eval'd in the context of the logging package's namespace rather than the context in which fileConfig is called. Is there any decent way to get logging to behave this way through a configuration file, so I can configure a dynamic log filename (usually something like "InputFile.log") but still have the flexibility to change it via the logging config file?
Even though it's an old question, I think this still has relevance. An alternative to the above-mentioned solutions would be to use logging.config.dictConfig(...) and manipulate the dictionary.
MWE:
log_config.yml
version: 1
disable_existing_loggers: false
formatters:
  default:
    format: "%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: default
    stream: ext://sys.stdout
    level: DEBUG
  file:
    class: logging.FileHandler
    formatter: default
    filename: "{path}/service.log"
    level: DEBUG
root:
  level: DEBUG
  handlers:
    - file
    - console
example.py
import logging.config
import sys
import yaml
log_output_path = sys.argv[1]
log_config = yaml.safe_load(open("log_config.yml"))
log_config["handlers"]["file"]["filename"] = log_config["handlers"]["file"]["filename"].format(path = log_output_path)
logging.config.dictConfig(log_config)
logging.debug("test")
Executable as follows:
python example.py .
Result:
service.log file in current working directory contains one line of log message.
Console outputs one line of log message.
Both state something like this:
2016-06-06 20:56:56,450:root:12232:11 DEBUG test
You could place the filename in the logging namespace with:
logging.test_filename = 'my_log_file.txt'
Then your existing loggingpy.conf file should work
You should be able to pollute the logging namespace with anything you like (within reason - I wouldn't try logging.config = 'something') in your module, and that should make it referenceable by the config file.
The args statement is parsed using eval in logging/config.py's _install_handlers, so you can put code into args.
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(os.getenv("LOG_FILE","default_value"),)
Now you only need to populate the environment variable.
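This works because the eval runs inside the logging module's own namespace, which already imports os. A hedged usage sketch, setting the variable from the script before the config is read (LOG_FILE is the name chosen above):
import logging.config
import os

# Must be set before fileConfig(), since args is eval'd while parsing.
os.environ.setdefault('LOG_FILE', 'my_log_file.txt')
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)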
This is very hacky, so I wouldn't recommend it. But if for some reason you did not want to add to the logging namespace, you could pass the log file name through a command-line argument and then use sys.argv[1] to access it (sys.argv[0] is the script name).
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(sys.argv[1],)
I have a standard logging configuration like:
[loggers]
keys = root, quoting, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_quoting]
level = INFO
handlers =
qualname = quoting
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
And Pyramid seems to ignore it, giving me EVERYTHING on stdout when running with pserve --reload development.ini.
Sample log output at http://pastebin.com/1Q3Vt9xM
The log represents one page load. I'm trying to filter out the SQLAlchemy stuff specifically, but I would also like to know where I went wrong.
I think that echo=True on a SQLAlchemy engine configuration will dump to stdout and ignore the logging configuration. This may be what you're seeing.
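If that's the case, the fix is to leave echo unset and let the logger_sqlalchemy section control verbosity. A hedged sketch of what to look for (engine_from_config is what a stock Pyramid scaffold uses; the settings values here are illustrative):
from sqlalchemy import engine_from_config

settings = {
    'sqlalchemy.url': 'sqlite:///mas.db',
    # 'sqlalchemy.echo': 'true',  # this would attach SQLAlchemy's own
    #                             # stdout handler, bypassing your .ini
}
engine = engine_from_config(settings, prefix='sqlalchemy.')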