Using cherrypy for HTTP and HTTPS - python

I set up an http(s) upload server using cherrypy for uploading something with a blackberry application. I use this code to send data to the server but I always get a bad request (400) error. It gives no other debug info or anything to help. Any ideas about what may be wrong, or what I can do to learn more about the problem?
This error line is like this:
{My IP} - - [16/Nov/2012:11:35:32] "POST /upload HTTP/1.1" 400 1225 "" ""

If the server only returns 400-something messages without any additional information, you can set the 'request.show_tracebacks' config option to True, which should include the detailed error messages and tracebacks in the response when something goes wrong (note that 'engine.autoreload_on' only controls code auto-reloading, not error output). Another option is to specify filenames for log.access_file and log.error_file to redirect their output to specific files.
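A minimal sketch of such a config, assuming CherryPy's standard key names (request.show_tracebacks for in-response tracebacks, the two log.* keys for file logging); the filenames are illustrative:

```python
# Hedged sketch: a CherryPy config dict with the debugging/logging
# options discussed above; pass it to cherrypy.quickstart(app, '/', config).
config = {
    'global': {
        'request.show_tracebacks': True,   # include tracebacks in error responses
        'log.access_file': 'access.log',   # where the "POST /upload ... 400" lines go
        'log.error_file': 'error.log',     # where tracebacks and errors go
    }
}
```

With the error log redirected to a file, the cause of the 400 should show up there rather than being swallowed.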

Related

Python requests.get() from inside a request will not complete

When I execute the line from inside a request:
page = requests.get("http://localhost:5000/some/page/")
with DEBUG logging turned on, the output is:
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:5000
send: 'GET /some/page/ HTTP/1.1\r\nHost: localhost:5000\r\nConnection: keep-alive\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.22.0\r\n\r\n'
and execution will not progress past this point. Am I missing a step somewhere?
Update:
This is happening for requests that themselves contain a requests.get(). So, I think Flask is trying to do two different things at once and the dev server is unable to handle that. That also explains why this seems to work on my staging server.
I've tried running Flask with threaded=True, but that didn't make a difference. Any ideas on a fix for the purposes of local dev and testing?
Update2:
Wrapping the call in with app.test_client() as c: and using c.get() works on localhost, but fails in staging.
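For local development, the test-client approach can be sketched like this: the single-threaded dev server blocks on a view that issues requests.get() back into itself, whereas the test client calls the view directly with no network round trip (the route and body below are illustrative, not from the original app):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/some/page/")
def page():
    return "hello"

# Hedged sketch: instead of requests.get("http://localhost:5000/some/page/"),
# which would deadlock a single-threaded dev server, invoke the view in-process.
with app.test_client() as c:
    body = c.get("/some/page/").get_data(as_text=True)
```

In staging, where a real multi-worker server is running, a plain requests.get() against the server's URL is the usual approach instead.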

Jinja template rendering in log different than returned response

I have a web service that generates configuration files for an application we run. Parameters are passed in and a configuration is generated based off those parameters.
This is what I am using to generate the configuration files.
env=Environment(loader=FileSystemLoader('./templates'))
template=env.get_template('squid.conf.j2')
return template.render(proxy_data=get_proxy_data(vendor=grp_name))
When I output the rendered template to the logs it looks fine. However, web requests are doing some kind of encoding, causing the following to occur. How do I prevent this from happening so the content is written to files properly?
"#proxy_auth REQUIRED\n\n##### startconf 3329 #####\nhttp_port 0.0.0.0:3329\nacl port_3329 myport 3329\nhttp_access allow port_3329\ncache_peer 127.0.0.1 parent 8123 0 default proxy-only no-query \ncache_peer_access 127.0.0.1 allow port_3329\n##### endconf 3329 #####\n\n\n\nacl SSL_ports port 443\nacl Safe_ports port 80\t\t# http\nacl Safe_ports port 21\t\t# ftp\nacl Safe_ports port 443\t\t# https\nacl Safe_ports port 70\t\t#"
After a little more digging I ended up using something like this to get what I needed.
json.loads(response.read().decode('utf-8'))['config']
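The decoding step can be sketched as follows: the rendered template arrives JSON-encoded in the HTTP body (with escaped \n and \t), so it needs json.loads() before being written to disk. The 'config' key and the sample payload below are assumptions based on the snippets above:

```python
import io
import json

# Hedged sketch: a stand-in for the HTTP response body; real newlines/tabs
# appear as JSON escape sequences until decoded.
raw = b'{"config": "acl Safe_ports port 80\\t\\t# http\\nacl Safe_ports port 21\\t\\t# ftp"}'
response = io.BytesIO(raw)  # stands in for the urllib response object

config_text = json.loads(response.read().decode('utf-8'))['config']
# config_text now contains literal newlines and tabs, ready to write to a file.
```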

Mapping/connect to bad request issue on publishing Django website

Apologies, but I'm clueless about what to do.
My boss bought this Namecheap domain and set up a web app (example): "www.Monday.com"
Now he asks me to set up another web app, "www.Monday.com/ABC", with the Django framework.
I did the following instruction given by Namecheap: https://www.namecheap.com/support/knowledgebase/article.aspx/10048/2182/how-to-work-with-python-app
and did the setup on Django:
https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
I ran a ping check; DNS is working.
I ran the webpage www.Monday.com/IOT (example); result: 400
I open up the log and all I get:
Invalid HTTP_HOST header: 'www.Moandy.com'. You may need to add 'www.Monday.com' to ALLOWED_HOSTS.
Bad Request: /IO/
The ALLOWED_HOSTS in my Django settings is the same as the domain name they request. Here is the code in my settings.py for ALLOWED_HOSTS and DEBUG (I masked part of the IP address for security):
DEBUG = False
ALLOWED_HOSTS = ['198.54.116.***', 'Monday.com/IOT','.Monday.com/IOT/accounts/login',] # domain name gose here
That leads me to this clueless predicament; I don't even know where the mapping could be wrong, or even where to start debugging the issue.
If you could help, I'd really appreciate it.
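For reference, ALLOWED_HOSTS entries must be bare hostnames (optionally with a leading dot to match subdomains), never URL paths; a path like /IOT is routed by urls.py, not by ALLOWED_HOSTS, which is why entries containing slashes never match the Host header. A hedged sketch of what the setting from the question would look like corrected (the masked IP is kept as-is):

```python
DEBUG = False
# ALLOWED_HOSTS is matched against the Host header (a hostname only);
# '.monday.com' also matches subdomains such as www.monday.com.
ALLOWED_HOSTS = ['198.54.116.***', 'monday.com', '.monday.com']
```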

Using Python requests to GET not working - web client and browser works

I have my web app API running.
If I go to http://127.0.0.1:5000/ via any browser I get the right response.
If I use the Advanced REST Client Chrome app and send a GET request to my app at that address I get the right response.
However this gives me a 503:
import requests
response = requests.get('http://127.0.0.1:5000/')
I read a suggestion to try this:
s = requests.Session()
response = s.get('http://127.0.0.1:5000/')
But I still get a 503 response.
Other things I've tried: Not prefixing with http://, not using a port in the URL, running on a different port, trying a different API call like Post, etc.
Thanks.
Is http://127.0.0.1:5000/ your localhost? If so, try 'http://localhost:5000' instead
Just in case someone is struggling with this as well, what finally worked was running the application on my local network ip.
I.e., I just opened up the web app and changed the app.run(debug=True) line to app.run(host="my.ip.address", debug=True).
I'm guessing the requests library perhaps was trying to protect me from a localhost attack? Or our corporate proxy or firewall was preventing communication from unknown apps to the 127 address. I had set NO_PROXY to include the 127.0.0.1 address, so I don't think that was the problem. In the end I'm not really sure why it is working now, but I'm glad that it is.
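If a corporate proxy or firewall is the suspect, one thing worth trying is making requests ignore the proxy environment variables entirely. A hedged sketch using the Session.trust_env flag (the URL is the one from the question; the server must be running for the commented-out call to succeed):

```python
import requests

# Hedged sketch: trust_env=False makes this session ignore
# HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables, so the
# request to the loopback address goes out directly.
s = requests.Session()
s.trust_env = False
# response = s.get('http://127.0.0.1:5000/')
```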

Missing OAuth request token cookie error using tornado and TwitterMixin

I'm using tornado and the TwitterMixin and I use the following basic code:
class OauthTwitterHandler(BaseHandler, tornado.auth.TwitterMixin):
    @tornado.web.asynchronous
    def get(self):
        if self.get_argument("oauth_token", None):
            self.get_authenticated_user(self.async_callback(self._on_auth))
            return
        self.authorize_redirect()

    def _on_auth(self, user):
        if not user:
            raise tornado.web.HTTPError(500, "Twitter auth failed")
        self.write(user)
        self.finish()
For me it works very well but sometimes, users of my application get a 500 error which says:
Missing OAuth request token cookie
I don't know if it comes from the browser or the twitter api callback configuration.
I've looked through the tornado code and I don't understand why this error
appears.
Two reasons why this might happen:
Some users may have cookies turned off, in which case this won't work.
The cookie hasn't been set. It's possible that the oauth_token argument is set but the cookie is not. Not sure why this would happen; you'd have to add some logging to understand why.
At any rate, this isn't an "error," but rather a sign the user isn't authenticated. Maybe if you see that you should just redirect them to the authorize URL and let them try again.
I found the solution!
It was due to my DNS.
I didn't set up redirection between www.mydomain.com and mydomain.com, so sometimes the cookie was set on www. and sometimes not; my server then didn't look in the right place, didn't find the cookie, and sent a 500 error.
The reason this was happening to me is that the Callback URL configuration was pointing to a different domain.
Take a look at the settings tab for your application at https://dev.twitter.com/apps/ or check whether the users getting the error are accessing your site from a different domain.
See: http://groups.google.com/group/python-tornado/browse_thread/thread/55aa42eef42fa1ac
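The fix behind both answers above is to force one canonical host so the OAuth request-token cookie is always set and read on the same domain. A minimal sketch of the decision logic as a plain function (in Tornado you would call self.redirect() with the returned URL from a handler's prepare(); the domain names are illustrative):

```python
# Hedged sketch: normalize the host before starting the OAuth flow so the
# cookie lands on one canonical domain (www vs bare-domain mismatch).
def canonical_redirect(host, uri, canonical="www.mydomain.com"):
    """Return a redirect URL if the request host isn't canonical, else None."""
    if host != canonical:
        return "http://" + canonical + uri
    return None
```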
