I'm sending a callback URL to a remote, widely used API over which I have no control.
I've written my callback view and it's properly named (say, myapp_callback) in my urls.py, so all I have to do is call reverse('myapp_callback'), right? That's what it says in the manual.
Well, not so much. The result is /myapp/callback. Where are my protocol and hostname? The remote service I'm sending these API calls to has no idea. How can I detect them, possibly while behind an Apache reverse proxy?
I'm working around this problem by putting the full URL into the settings file, but I'd love to provide a more "turnkey" solution.
Try request.build_absolute_uri(reverse('myapp_callback')). From the Django docs for build_absolute_uri:
Returns the absolute URI form of location. If no location is provided, the location will be set to request.get_full_path().
If the location is already an absolute URI, it will not be altered. Otherwise the absolute URI is built using the server variables available in this request.
Example: "http://example.com/music/bands/the_beatles/?print=true"
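In essence, build_absolute_uri joins the request's scheme and host with the given path, leaving already-absolute URIs untouched. A framework-free sketch of that logic (not Django's actual implementation):

```python
from urllib.parse import urlsplit, urlunsplit

def build_absolute_uri(scheme, host, location):
    """Mimic the core of request.build_absolute_uri(): leave absolute
    URIs alone, otherwise prepend the request's scheme and host."""
    if urlsplit(location).scheme:  # already absolute, e.g. "http://..."
        return location
    return urlunsplit((scheme, host, location, '', ''))

print(build_absolute_uri('https', 'example.com', '/myapp/callback'))
# https://example.com/myapp/callback
```

Behind a reverse proxy, the scheme and host Django sees come from the proxy, so you may also need Django's USE_X_FORWARDED_HOST and SECURE_PROXY_SSL_HEADER settings for the result to be correct.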
Related
I am wondering if there is a way to obtain the hostname of a Django application when running tests. That is, I would like the tests to pass both locally and when run on the staging server. Hence I need to know whether it's http://localhost:<port> or http://staging.example.com, because some tests query particular URLs.
I found answers on how to do it inside templates, but that does not help since there is no response object to check the hostname.
How can one find out the hostname outside the views/templates? Is it stored in Django settings somewhere?
Why do you need to know the hostname? Tests can run just fine without it, if you use the test client. You do not need to know anything about the system they're running on.
You can also mark tests with a tag and then have the CI system run the tests including that tag.
And finally there is the LiveServerTestCase:
LiveServerTestCase does basically the same as TransactionTestCase with one extra feature: it launches a live Django server in the background on setup, and shuts it down on teardown. This allows the use of automated test clients other than the Django dummy client such as, for example, the Selenium client, to execute a series of functional tests inside a browser and simulate a real user’s actions.
The live server listens on localhost and binds to port 0 which uses a free port assigned by the operating system. The server’s URL can be accessed with self.live_server_url during the tests.
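The "binds to port 0" behaviour described above is ordinary OS socket semantics and can be observed without Django:

```python
import socket

# Binding to port 0 asks the operating system for any free port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))
port = s.getsockname()[1]  # the actual port the OS picked
print(port)                # e.g. 54321 -- varies per run
s.close()
```

This is why the URL must be read from self.live_server_url at test time rather than hard-coded.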
Additional information from comments:
You can test whether the URL of an image file is present in your response by checking for the MEDIA_URL (which already ends with a slash):
self.assertContains(response, f'{settings.MEDIA_URL}default-avatar.svg')
You can test for the existence of an upload in various ways, but the easiest one is to check whether there's a file object associated with the FileField; accessing it raises a ValueError if there is not.
I simply want to receive notifications from Dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the def_process function of the tutorial.
I can't actually verify if anything is really being sent from dropbox.
Also, any advice on how I can debug? I can't print anything from my program, since it has to be run on a site rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for per-user delta cursors that are also stored.
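For readers unfamiliar with Redis hashes: hset/hget store fields inside a named hash, much like a dict of dicts. A stand-in sketch in plain Python (not a real Redis client) of what the token storage above is doing, with 'uid-123' as a made-up user id:

```python
# Stand-in for redis_client: each hash name maps to a dict of field -> value.
store = {}

def hset(name, key, value):
    store.setdefault(name, {})[key] = value

def hget(name, key):
    return store.get(name, {}).get(key)

# What the tutorial does after the OAuth callback...
hset('tokens', 'uid-123', 'some-access-token')
# ...and later inside process_user:
token = hget('tokens', 'uid-123')
print(token)  # some-access-token
```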
There are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
A possible workaround would be to use your database for this purpose.
As for debugging, you could use the logging facility for Python; it's thread-safe and capable of writing output to a file stream, and it should give you plenty of information if used properly.
More info here:
https://docs.python.org/2/howto/logging.html
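As a concrete starting point, here is a minimal file-based logging sketch for a webhook handler; the logger name and log file path are placeholders:

```python
import logging

# Configure a dedicated logger once at startup; writes are thread-safe.
log = logging.getLogger('myapp.webhook')
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('webhook.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
log.addHandler(handler)

def handle_webhook(body):
    # Inspect what Dropbox actually sent without needing a console.
    log.debug('raw body: %r', body)
```

Tailing webhook.log then shows exactly what arrives in each POST, which answers the "I can't verify if anything is really being sent" problem.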
My Pyramid app permits users, from www.domain.com, to create new HTML pages in an Amazon S3 bucket (call it "testbucket"). Right now, when a page is created, the user gets redirected to it at https://s3.amazonaws.com/testbucket/(some uuid). I want it to redirect them to www.subdomain.domain.com/(some uuid), where that HTML is stored. Right now this line is in my view callable (views.py):
return HTTPFound(location="https://s3.amazonaws.com/testbucket/%(uuid)s" % {'uuid':uuid})
I've read (http://carltonbale.com/how-to-alias-a-domain-name-or-sub-domain-to-amazon-s3/) that in order to do this I need to create a bucket on my Amazon S3 account called subdomain.domain.com and then CNAME it to s3.amazonaws.com. (What is the extra . for?) What, then, should I put in my return HTTPFound call? Should it be:
return HTTPFound(location="https://s3.amazonaws.com/subdomain.domain.com/%(uuid)s" % {'uuid':uuid})
That doesn't make any sense to me. What should it be instead?
If you create the CNAME record (the trailing dot is a quirk of how DNS works; it means that the domain is fully-qualified), then you don't need to mention s3.amazonaws.com. Instead, you can redirect to http://subdomain.domain.com/your_object_here.
Note, however, that Amazon does not have an SSL certificate for your domain, so if you want to access it over SSL you will need to connect to s3.amazonaws.com (or bucket_name_here.s3.amazonaws.com).
As far as I know, S3 does not currently implement a means for supplying your own SSL certificate (and even if it did, it would presumably need to use a feature called Server Name Indication, which is not yet universally supported).
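Putting the two points above together, the redirect target depends on whether SSL is needed. A minimal sketch of building the location string (with subdomain.domain.com standing in for your real bucket name):

```python
def callback_location(uuid, use_ssl=False):
    # Over plain HTTP the CNAME works directly; over SSL you must go
    # through Amazon's own hostname, since Amazon's certificate does
    # not cover your domain.
    if use_ssl:
        return "https://s3.amazonaws.com/subdomain.domain.com/%s" % uuid
    return "http://subdomain.domain.com/%s" % uuid

print(callback_location("abc-123"))
# http://subdomain.domain.com/abc-123
```

In the Pyramid view this becomes return HTTPFound(location=callback_location(uuid)).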
This is the module I am working with: http://wiki.nginx.org/HttpGeoipModule
From what I can see, since it is configured in the nginx config and passed through uwsgi, it looks like there is no choice but to have it run the GeoIP lookup on every page and then only collect and use the variable when needed.
From a performance point of view I would rather have it so I request the geoip ONLY when needed, cache it in a cookie or session and then not request it again to speed up the site.
Is anyone able to tell me if this is possible?
Yes, it's possible. But from a performance point of view you should not worry: the GeoIP database is loaded into memory (at the configuration-reading phase), and nginx does lookups very fast.
Anyway if you want, you can use something like:
set $country $cookie_country;

if ($country = '') {
    set $country $geoip_country_code;
    add_header Set-Cookie country=$geoip_country_code;
}

uwsgi_param GEOIP_COUNTRY $country;
No, you can't make nginx perform the GeoIP lookup on demand only. Once you define a geoip_country or geoip_city directive, nginx will request data from the GeoIP database whether the answer is used later or not. But you can fetch GeoIP data without nginx at all, i.e. directly in your application. Take a look at the Python GeoIP lib: http://dev.maxmind.com/geoip/downloadable#Python-5
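The application-side, on-demand approach suggested above can be sketched like this, with a stubbed lookup function standing in for a real GeoIP library and a plain dict standing in for the session:

```python
def lookup_country(ip):
    # Stub: a real implementation would query the MaxMind GeoIP database here.
    return {'8.8.8.8': 'US'}.get(ip, 'UNKNOWN')

def country_for_request(session, ip):
    # Only hit the GeoIP database when the session has no cached answer.
    if 'country' not in session:
        session['country'] = lookup_country(ip)
    return session['country']

session = {}
print(country_for_request(session, '8.8.8.8'))  # US (looked up)
print(country_for_request(session, '8.8.8.8'))  # US (from the session cache)
```

This gives exactly the "look up once, then reuse" behaviour the question asks for, at the cost of doing the lookup in Python instead of nginx.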
I have a URL route in my web.py application that I want to run to catch all URLs that hit the server, but only after any static assets are served.
For example, if there is js/test.js in my static directory, the path http://a.com/js/test.js should return the file contents. But I also have my URL routing set up so that there is a regex that catches everything, like this:
urls = ('/.*', 'CatchAllHandler')
So this should run only if no static asset was discovered. A request for http://a.com/js/test.js should return the static file test.js, but a request for http://a.com/js/nope.js should route through the CatchAllHandler.
I've looked into writing my own StaticMiddleware for this, but it will only help if the order of web.py operations is changed. Currently the middleware is executed after the URL routes have been processed. I need the middleware to run first, and let the url routing clean up the requests that were not served static assets.
The one idea I have is to use the notfound() function as my catch all handler, but that may not be best.
The URL matching is a Python regex, so you can test/play with Python regex here.
That said, this should work for you:
('/(?!static)(.*)', 'CatchAllHandler')
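You can verify the negative lookahead directly with Python's re module (assuming /static is the prefix your static files are served under):

```python
import re

# Matches any path that does NOT start with "static" after the leading slash.
pattern = re.compile(r'/(?!static)(.*)')

print(bool(pattern.match('/static/js/test.js')))  # False -- left to the static handler
print(bool(pattern.match('/js/nope.js')))         # True  -- caught by CatchAllHandler
```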
I haven't played with web.py's middleware, but my understanding is that WSGI middleware runs before web.py sees the request/response. I would think that, provided your WSGI middleware is properly configured, it would just work.
pouts That sucks. There is the hook stuff, which makes it really easy; I've done that before, and it will see all the stuff first. Docs are here: http://webpy.org/cookbook/application_processors
But in regards to your other comment, "wanting it to work regardless of URL": how would you know it's static content otherwise? I'm greatly confused. The EASIEST way, since for production you want some other web server running your web.py scripts, is to push all the static content into the web server. Then you can of course do whatever you want in the web server that needs doing. This is exactly what happens with mod_wsgi and Apache, for instance (you change /static to point to the directory in the web server).
Perhaps if you shared an actual example of what you need done, I could help you more. Otherwise I've now given you three different ways to handle the problem (excluding WSGI middleware). How many more do you need? :P