I have a fair bit of experience with PHP frameworks and Python for scripting, so I am now taking the step to Pyramid.
I'd like to know the 'correct' way to run a script in Pyramid. That is, how should I set it up so that it is part of the application and has access to the config (and thus the database), but does not run through paster (or whatever WSGI server)?
As an example, say I have a web application which, while a user is offline, grabs Facebook updates through a web service. I want to write a script to poll that service and store the results in the database, ready for the next login.
How should I do this in terms of:
Adding variables in the ini file
Starting the script correctly
I understand the basics of Python modules and packages; however I don't fully understand Configurator/Paster/package setup, wherein I suspect the answer lies.
Thanks
Update:
Thanks, this seems along the lines of what I am looking for. I note that you have to follow a certain structure (e.g. have the summary and parser attributes set) and that the function called command() will always be run. My test code now looks something like this:
class AwesomeCommand(Command):
    max_args = 2
    min_args = 2
    usage = "NAME"
    # These are required
    summary = "Say hello!"
    group_name = "My Package Name"
    # Required:
    parser = Command.standard_parser(verbose=True)

    def command(self):
        # Load the config file/section
        config_file, section_name = self.args
        # What next?
I'm now stuck as to how to get the settings themselves. For example, in __init__.py you can do this:
engine = engine_from_config(settings, 'sqlalchemy.')
What do I need to do to transform the config file into the settings?
EDIT: The (simpler) way to do this in Pylons is here:
Run Pylons controller as separate app?
As of Pyramid 1.1, this is handled by the framework:
http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/commandline.html#writing-a-script
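For example, a standalone polling script on Pyramid 1.1+ can use pyramid.paster.bootstrap to load the ini file and get at the settings and registry. This is only a minimal sketch: the ini filename and the sqlalchemy.* settings prefix are assumptions for illustration.

from pyramid.paster import bootstrap
from sqlalchemy import engine_from_config

# bootstrap() parses the ini file and sets up the application environment
env = bootstrap('development.ini')
settings = env['registry'].settings

# the settings dict is the same one your main() receives
engine = engine_from_config(settings, 'sqlalchemy.')

# ... poll the web service and store the results via the engine ...

# release any resources bootstrap() set up
env['closer']()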
paster starts an application given an ini file that describes that application. The "serve" command is a built-in command for starting a WSGI application and serving it. But you can write other commands.
from paste.script.command import Command

class AwesomeCommand(Command):
    def command(self):
        print "the awesome thing it does"
and then register them as entry points in your setup.py.
setup(...
    entry_points="""
    [paste.app_factory]
    .....
    [paste.global_paster_command]
    myawesome-command = mypackage.path.to.command:AwesomeCommand
    """)
Pyramid adds its own commands this way, like the pshell command.
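To connect this with the "What next?" in the test code above: one way (a sketch, not the only way) of turning the config_file/section_name arguments into settings inside command() is to load them with paste.deploy.appconfig; the sqlalchemy.* prefix is an assumption here.

import os
from paste.deploy import appconfig
from paste.script.command import Command
from sqlalchemy import engine_from_config

class AwesomeCommand(Command):
    max_args = 2
    min_args = 2
    usage = "CONFIG_FILE SECTION"
    summary = "Do the awesome thing"
    group_name = "My Package Name"
    parser = Command.standard_parser(verbose=True)

    def command(self):
        config_file, section_name = self.args
        # 'config:file.ini#section' selects which app section to read
        conf = appconfig('config:%s#%s' % (config_file, section_name),
                         relative_to=os.getcwd())
        engine = engine_from_config(conf, 'sqlalchemy.')
        # ... do the actual work with the engine here ...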
After going to the pylons-discuss list, I came up with an answer. Hope this helps somebody:
# Bring in the Pyramid application -----------------------------------
import os
import pyramid
from paste.deploy import appconfig

config_file = '/path_to_config_file/configname.ini'
name = 'app_name'
config_name = 'config:%s' % config_file
here_dir = os.getcwd()
conf = appconfig(config_name, name, relative_to=here_dir)

from main_package import main
app = main(conf.global_conf, **conf.local_conf)
# ---------------------------------------------------------------------
You need to make a view for that action and then run it using:
paster request development.ini /url_to_your_view
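For instance, a minimal view for such a background action might look like the sketch below; the route name, URL, and the actual polling work are made up, and the route still has to be registered with config.add_route and picked up by config.scan().

from pyramid.view import view_config
from pyramid.response import Response

@view_config(route_name='poll_facebook')
def poll_facebook(request):
    # settings (and therefore the database config) are reachable here
    settings = request.registry.settings
    # ... poll the web service and store the results ...
    return Response('ok')

It can then be triggered from cron with a paster request line like the one above.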
I have written a small Python file which I am packaging as a .app and installing on macOS (latest version). The app is intended to be invoked using a custom protocol similar to "abc://efg/ijk/lmn". The Python file employs the pyobjc package to implement the business logic. For legacy reasons, I only have the option of using the Python language to implement my business logic.
I have to access the invoking custom URL "abc://efg/ijk/lmn" from inside the Python code and parse its values. The "efg", "ijk" and "lmn" parts of the custom URL will vary and will be used to make some decisions further down the flow.
I have tried multiple things from whatever I could find on the internet, but I am unable to access the custom URL from within the Python code. The value of sys.argv comes out as below:
sys.argv = ['/Applications/XXXXXApp.app/Contents/MacOS/XXXXXApp', '-psn_0_4490312']
But on Windows, sys.argv[0] is populated with the custom URL.
I would appreciate any directions.
The code below is what I have tried, among many other variations of it.
import os
import struct

from Cocoa import NSObject, NSAppleEventManager

def fourCharToInt(code):
    # convert a four-character code such as b'----' to its integer value
    return struct.unpack('>l', code)[0]

mylogger = open(os.path.expanduser("~/Desktop/somefile.txt"), 'w+')

class apple_event_handler(NSObject):
    def applicationWillFinishLaunching_(self, notification):
        mylogger.write("Will finish ")

    def applicationDidFinishLaunching_(self, notification):
        mylogger.write("Did Finish")

    def handleAppleEvent_withReplyEvent_(self, event, reply_event):
        theURL = event.descriptorForKeyword_(fourCharToInt(b'----'))
        mylogger.write("********* Handler Invoked !!! *********")
        mylogger.write("********* the URL = " + str(theURL))

aem = NSAppleEventManager.sharedAppleEventManager()
aeh = apple_event_handler.alloc().init()
aem.setEventHandler_andSelector_forEventClass_andEventID_(
    aeh, "handleAppleEvent:withReplyEvent:", 1, 1)
I'd like to keep development.ini and production.ini under version control, but for security reasons I would not want the sqlalchemy.url connection string to be stored, as this would contain the username and password used for the database connection.
What's the canonical way, in Pyramid, of sourcing this setting from an additional external file?
Edit
In addition to the solution using the environment variable, I came up with this solution after asking around on #pyramid:
from ConfigParser import ConfigParser

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application.
    """
    # Read the db password from a config file outside of version control
    secret_cfg = ConfigParser()
    secret_cfg.read(settings['secrets'])
    dbpass = secret_cfg.get("secrets", "dbpass")
    settings['sqlalchemy.url'] = settings['connstr'] % (dbpass,)
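For completeness, the matching ini entries might look roughly like this; the file names, section name, and connection string are placeholders, not anything Pyramid mandates (note the %%s, since ConfigParser interpolation requires escaping the percent sign).

# development.ini (under version control)
[app:main]
secrets = %(here)s/secrets.ini
connstr = postgresql://myuser:%%s@localhost/mydb

# secrets.ini (kept out of version control)
[secrets]
dbpass = changeme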
I looked into this a lot and played with a lot of different approaches. However, Pyramid is so flexible, and the .ini config parser is so minimal in what it does for you, that there doesn't seem to be a de facto answer.
In my scenario, I tried having a production.example.ini in version control at first that got copied on the production server with the details filled in, but this got hairy, as updates to the example didn't get translated to the copy, and so the copy had to be re-created any time a change was made. Also, I started using Heroku, so files not in version control never made it into the deployment.
Then there's the encrypted config approach, but I don't like that paradigm. Imagine a sysadmin who is responsible for maintaining the production environment but is unable to change the location of a database or an environment-specific setting without running it back through version control. It's really nice to keep environment and code as separate as possible, so those changes can be made on the fly without version control revisions.
My ultimate solution was to have some values that looked like this:
[app:main]
sqlalchemy.url = ${SQLALCHEMY_URL}
Then, on the production server, I would set the environment variable SQLALCHEMY_URL to point to the database. This even allowed me to use the same configuration file for staging and production, which is nice.
In my Pyramid init, I just expanded the environment variable value using os.path.expandvars:
sqlalchemy_url = os.path.expandvars(settings.get('sqlalchemy.url'))
engine = create_engine(sqlalchemy_url)
And, if you want to get fancy with it and automatically replace all the environment variables in your settings dictionary, I made this little helper method for my projects:
def expandvars_dict(settings):
    """Expands all environment variables in a settings dictionary."""
    return dict((key, os.path.expandvars(value))
                for key, value in settings.iteritems())
Use it like this in your main app entry point:
settings = expandvars_dict(settings)
The whole point of the separate ini files in Pyramid is that you do not have to version control all of them and that they can contain different settings for different scenarios (development/production/testing). Your production.ini almost always should not be in the same VCS as your source code.
I found this way of loading secrets from an extra configuration file and from the environment.
from pyramid.config import Configurator
from paste.deploy import appconfig
from os import path

__all__ = ["main"]

def _load_secrets(global_config, settings):
    """ Helper to load secrets from a secrets config and
    from env (in that order).
    """
    if "drawstack.secrets" in settings:
        secrets_config = appconfig(
            'config:' + settings["drawstack.secrets"],
            relative_to=path.dirname(global_config['__file__']))
        for k, v in secrets_config.items():
            if k == "here" or k == "__file__":
                continue
            settings[k] = v
    if "ENV_DB_URL" in global_config:
        settings["sqlalchemy.url"] = global_config["ENV_DB_URL"]

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application.
    """
    _load_secrets(global_config, settings)
    config = Configurator(settings=settings)
    config.include('pyramid_jinja2')
    config.include('.models')
    config.include('.routes')
    config.scan()
    return config.make_wsgi_app()
The code above will load any variables from the config file referenced by the config key drawstack.secrets, and after that it tries to load ENV_DB_URL from the environment (via the global config).
drawstack.secrets can be relative to the original config file OR absolute.
I'm trying to set up Pyramid's Authorization/Authentication feature using my MongoDB as the root factory. I'm wondering if including these lines (config is the Configurator)
db_url = urlparse(eval(settings['mongo_uri']))
conn = pymongo.Connection(host=db_url.hostname,
                          port=db_url.port)
config.registry.settings['db_conn'] = conn
config.add_subscriber(add_mongo_db, NewRequest)
is redundant? Is this necessary if I've already given config a mongo root factory?
I don't recommend doing it that way. I wrote a pyramid addon to make things easier and cleaner.
Documentation here:
http://packages.python.org/pyramid_mongo/
The following is from a project I'm writing at the moment.
In my ini file (though it could also be written in Python settings):
mongo.uri = mongodb://localhost/
mongo.db = wife
In my configurator:
config.include('pyramid_mongo')
And in my root_factory:
from pyramid_mongo import get_db
...
...
def root_factory(request):
    db = get_db(request)
    return Root(db)
get_db can be called from anywhere; you have to pass a request as the first argument. You can pass another argument to query a different database.
Subscribers aren't needed in that case.
Btw, don't worry if the documentation says it might be risky; the current version of the package has 100% coverage and passes all tests. In the future, this package may integrate some tools to simplify traversal with MongoDB.
I'm running my unit tests using nose.
I have .ini files such as production.ini, development.ini, local.ini. Finally, I have a test.ini file which looks like:
[app:main]
use = config:local.ini
# Add additional test specific configuration options as necessary.
sqlalchemy.url = sqlite:///%(here)s/tests.db
In my test class I want to setup the database as I would in my app server code. Something like:
engine = engine_from_config(settings)
initialize_sql(engine)
dbfixture = SQLAlchemyFixture(
    env=model,
    engine=engine,
    style=NamedDataStyle()
)
How does nose pass 'settings' to my test code?
I've been reading the following link for some guidance, but I haven't been able to connect all the dots. http://farmdev.com/projects/fixture/using-fixture-with-pylons.html
Thanks much!
You will need to parse the settings from the INI file yourself. Pylons used to do this automatically for you by hard-coding a load of "test.ini". The two options you have are: 1) load just the INI settings via settings = paste.deploy.appconfig('test.ini'), or 2) load the actual WSGI app yourself, for example if you want to exercise it via WebTest, with app = pyramid.paster.get_app('test.ini'), which parses the INI file and returns an actual WSGI app. Unfortunately that route doesn't give you access to the INI settings directly; it just passes them to your app's startup function main(global_conf, **settings).
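As a rough sketch of option 1 in a nose test module (the file layout, the package import, and initialize_sql are assumptions carried over from the question):

import os
from paste.deploy import appconfig
from sqlalchemy import engine_from_config

# adjust this import to your own package; initialize_sql is the
# function referenced in the question
from myapp.models import initialize_sql

here = os.path.dirname(__file__)
settings = appconfig('config:test.ini', relative_to=here)

def setup():
    # nose calls a module-level setup() once before the tests in this module
    engine = engine_from_config(settings, 'sqlalchemy.')
    initialize_sql(engine)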
You may also find the Pyramid docs on functional tests useful.
I am creating a mapping application that uses a WSGI service and needs a different config file for each map. Currently, I launch the service with:
import os, sys

tilecachepath = '/usr/local/lib/python2.6/dist-packages/TileCache-2.10-py2.6.egg/TileCache'
sys.path.append(tilecachepath)
from TileCache.Service import Service, wsgiHandler
from paste.request import parse_formvars

theService = {}

def wsgiApp(environ, start_response):
    global theService
    fields = parse_formvars(environ)
    cfgs = fields['cfg']
    theService = Service.load(cfgs)
    return wsgiHandler(environ, start_response, theService)

application = wsgiApp
This is obviously launching way too many handlers! How can I determine if a specific handler is already running? Is there anything in the Apache config that I need to adjust so that handlers time out properly?
WSGI itself offers no way of knowing what layers are already wrapping a certain application, nor does Apache know about that. I would recommend having the wsgiHandler record its presence, so that you can avoid using it multiple times. If you can't alter the existing code, you can do it with your own wrappers for that code's layer (and use the environment, directly or indirectly, to do the recording of what's already active).
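A minimal sketch of that idea, caching one loaded Service per config string at module level so repeated requests reuse it rather than reloading; the TileCache calls mirror the question's code, while the cache dictionary itself is my addition:

from TileCache.Service import Service, wsgiHandler
from paste.request import parse_formvars

_services = {}  # maps a cfg string to its already-loaded Service

def wsgiApp(environ, start_response):
    fields = parse_formvars(environ)
    cfgs = fields['cfg']
    service = _services.get(cfgs)
    if service is None:
        # load and remember the service the first time this config is seen
        service = _services[cfgs] = Service.load(cfgs)
    return wsgiHandler(environ, start_response, service)

application = wsgiApp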