Silencing the CherryPy access log for a particular method/API/URL - Python

The problem is simple: we would like CherryPy not to write access-log entries for a particular exposed method/API when it is called.
When this API is called, the query string of the URL carries some very sensitive parameters which, if leaked, would pose a real security risk. Naturally this is a GET request, and unfortunately a GET is the only way the parameters can be passed, since the call is a 302 redirect from an external service to this web server.
If CherryPy simply did not log the URL, that would serve the purpose as well.
So, is there a way to filter messages in the access log by API, URL, etc.?
Thanks in advance for the help.

CherryPy uses Python's standard logging module by default, so you can just add a custom filter. This example ignores any GET request whose path starts with /foo:
import logging
import cherrypy

class IgnoreURLFilter(logging.Filter):
    # simple example of log message filtering
    def __init__(self, ignore):
        self.ignore = 'GET /' + ignore

    def filter(self, record):
        return self.ignore not in record.getMessage()

app = cherrypy.tree.mount(YourApplication())
app.log.access_log.addFilter(IgnoreURLFilter('foo'))
cherrypy.engine.start()
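If you'd rather keep a log line for the request but hide the sensitive part (the question notes that not logging the URL alone would suffice), a filter can rewrite the record instead of suppressing it. This is only a sketch: it assumes the access line is already fully formatted by the time it reaches the logging module, and the query-string regex is an approximation:

import logging
import re

class RedactQueryStringFilter(logging.Filter):
    # Keep the access-log line but mask everything after '?' in the URL.
    # Assumes CherryPy hands the logging module a fully formatted line.
    def filter(self, record):
        record.msg = re.sub(r'\?[^ "]*', '?<redacted>', record.getMessage())
        record.args = None
        return True

app.log.access_log.addFilter(RedactQueryStringFilter())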

Related

Cookie is not created when calling the endpoint in FastAPI

I have encountered an issue, as I have to create a cookie in the backend, which I will later use to send a request from the frontend. Both apps are on the same domain. This is the general idea behind it: https://levelup.gitconnected.com/secure-frontend-authorization-67ae11953723.
Frontend - Sending GET request to Backend
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse
import requests
import uvicorn

app = FastAPI()

@app.get('/')
async def homepage(request: Request, response_class=HTMLResponse):
    keycloak_code = 'sksdkssdk'
    data = {'code': keycloak_code}
    url_post = 'http://127.0.0.1:8002/keycloak_code'
    post_token = requests.get(url=url_post, json=data)
    return 'Sent'

if __name__ == '__main__':
    uvicorn.run(app, host='local.me.me', port=7999, debug=True)
Backend
#app.get("/keycloak_code")
def get_tokens(response: Response, data: dict):
code = data['code']
print(code)
....
requests.get(url='http://local.me.me:8002/set')
return True
#app.get("/set")
async def createcookie(response: Response):
r=response.set_cookie(key='tokic3', value='helloworld', httponly=True)
return True
if __name__ == '__main__':
uvicorn.run(app, host='local.me.me', port=8002, log_level="debug")
When I open the browser and access http://local.me.me:8002/set, I can see that the cookie is created.
But when I make a GET request from my frontend to the same backend URL, the request is received (I can see it in the terminal), but the backend does not create the cookie. Does anyone know what I might be doing wrong?
I have tried different implementations from the FastAPI docs, but none covers a similar use case.
127.0.0.1 and localhost (or local.me.me in your case) are two different domains (and origins). Hence, when making a request you need to use the same domain you used for creating the cookie. For example, if the cookie was created for local.me.me domain, then you should use that domain when sending the request. See related posts here, as well as here and here.
You also seem to have a second FastAPI app (listening on a different port) acting as your frontend (as you say). If that's what you are trying to do, you would need to use a Session object from the Python requests module, or preferably, a Client instance from the httpx library, in order to persist cookies across requests. The advantage of httpx is that it offers an asynchronous API as well, via httpx.AsyncClient(). You can find more details and examples in this answer, as well as here and here.
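As a minimal sketch of the httpx approach (the /set endpoint and domain are taken from the question; everything else is illustrative), cookies set by the server are stored on the client instance and replayed on later requests to the same domain:

import asyncio
import httpx

async def call_backend():
    async with httpx.AsyncClient(base_url='http://local.me.me:8002') as client:
        await client.get('/set')   # the server sets the cookie on this response
        print(client.cookies)      # the cookie is now stored on the client
        await client.get('/set')   # and is sent automatically on the next request

asyncio.run(call_backend())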

How can I get the request IP address from Google Cloud Logging using Python?

Using the google-cloud-logging module for Python, I can iterate through log messages for my GAE app, and I'd like to list the requesting IP address for each item in the request log. The official docs don't mention how, and hacking through the source code doesn't give me much hope, since the log entries are mostly TextEntry objects with few properties.
My code is like this:
from google.cloud import logging

logging_client = logging.Client()
logger = logging_client.logger('projects/MYPROJECT/logs/appengine.googleapis.com%2Frequest_log')

for entry in logging_client.list_entries():
    timestamp = entry.timestamp.isoformat()
    print("* {}, {}: {}".format(timestamp, type(entry), entry))
When looking at the same logs in the Google Cloud Console, I see a protoPayload.ip field for each log entry - that's exactly the field I hope to extract.
This is not currently possible: the Python client library for Stackdriver Logging doesn't support this feature yet. As you can check in the documentation for the Entry object and its code, there is no way to retrieve the section that contains the IP of the caller.
However, you can work around this by logging the value yourself in each call to your App Engine service, so that you can retrieve it later. The caller IP is in the 'X-Appengine-User-Ip' header. You can use this code as an example of how to get it:
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/')
def hello_world():
    ip = request.headers['X-Appengine-User-Ip']
    return 'Caller IP: {}'.format(ip)  # return added for completeness
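As a hypothetical sketch of that workaround (the caller_ip= label and the filter string are my own convention, not an official format), you would write the header value to the logs yourself and later pull back only the matching entries:

from google.cloud import logging as gcp_logging

# Inside the App Engine handler, log the header value with the standard
# logging module, e.g.: logging.info('caller_ip=%s', ip)

client = gcp_logging.Client()
for entry in client.list_entries(filter_='textPayload:"caller_ip="'):
    print(entry.timestamp.isoformat(), entry.payload)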

Cleanly Mocking Remote Servers and APIs for Django Unittests

I have a thorny problem that I can't seem to get to grips with. I am currently writing unit tests for a Django custom auth-backend. On our system we actually have two backends: the built-in Django backend and a custom backend that sends requests to a Java-based API, which returns user info in the form of XML. Since these are unit tests, I don't want to be sending requests outside the system, and I'm not trying to test the Java API, so my question is how to get around this and mock out the side-effects in the most robust way.
The function I am testing is something like the one below, where the url settings value is just the base URL for the Java server that authenticates the username and password data and returns the XML, and the service value is just some magic for building the URL query; it's unimportant for us:
@staticmethod
def get_info_from_api_with_un_pw(username, password, service=12345):
    url = settings.AUTHENTICATE_URL_VIA_PASSWORD
    if AUTH_FIELD == "username":
        params = {"nick": username, "password": password}
    elif AUTH_FIELD == "email":
        params = {"email": username, "password": password}
    params["service"] = service
    encoded_params = urlencode([(k, smart_str(v, "latin1")) for k, v in params.items()])
    try:
        # get the user's data from the api
        xml = urlopen(url + encoded_params).read()
        userinfo = dict((e.tag, smart_unicode(e.text, strings_only=True))
                        for e in ET.fromstring(xml).getchildren())
        if "nil" in userinfo:
            return userinfo
        else:
            return None
    except Exception:
        # error handling elided in the original snippet
        return None
So, we get the XML, parse it into a dict, and if the key nil is present then we can return the dict and carry on, happy and authenticated.
Clearly, one solution is just to find a way to somehow override or monkeypatch the logic behind the xml variable. I found this answer:
How can one mock/stub python module like urllib
I tried to implement something like that, but the details there are very sketchy and I couldn't seem to get it working.
I also captured the XML response and put it in a local file in the test folder, with the intention of using it as a mock response passed into the url parameter of the test function. Something like this will override the url:
@override_settings(AUTHENTICATE_URL_VIA_PASSWORD=(os.path.join(os.path.dirname(__file__), "{0}".format("response.xml"))))
def test_get_user_info_username(self):
    self.backend = RemoteAuthBackend()
    self.backend.get_info_from_api_with_un_pw("user", "pass")
But that also needs to take account of the URL-building logic that the function defines (i.e. "url + encoded_params"). Again, I could rename the response file to match the concatenated URL, but this is becoming less like a good unit test for the function and more of a "cheat"; the whole thing just gets more and more brittle with these solutions, and it's really just a fixture anyway, which is also something I want to avoid if at all possible.
I also wondered if there might be a way to serve the XML on the Django development server and then point the function at that. It seems like a saner solution, but much googling gave me no clues as to whether such a thing is possible or advisable, and even then I don't think it would be a test to run outside of the development environment.
So, ideally, I need to be able to somehow mock a "server" to take the place of the Java API in the function call, or somehow serve up some XML payload that the function can open as its url, or monkeypatch the function from the test itself, or...
Does the mock library have the appropriate tools to do such things?
http://www.voidspace.org.uk/python/mock
So, there are two points to this question: 1) I would like to solve my particular problem in a clean way, and more importantly 2) what are the best practices for cleanly writing Django unit tests when you depend on data, cookies, etc. for user authentication from a remote API that is outside of your domain?
The mock library should work if used properly. I prefer the minimock library and I wrote a small base unit testcase (minimocktest) that helps with this.
If you want to integrate this testcase with Django to test urllib you can do it as follows:
from minimocktest import MockTestCase
from django.test import TestCase
from django.test.client import Client

class DjangoTestCase(TestCase, MockTestCase):
    '''
    A TestCase class that combines minimocktest and django.test.TestCase
    '''
    def _pre_setup(self):
        MockTestCase.setUp(self)
        TestCase._pre_setup(self)
        # optional: shortcut client handle for quick testing
        self.client = Client()

    def _post_teardown(self):
        TestCase._post_teardown(self)
        MockTestCase.tearDown(self)
Now you can use this testcase instead of using the Django test case directly:
import StringIO
import urllib2

class MySimpleTestCase(DjangoTestCase):
    def setUp(self):
        self.file = StringIO.StringIO('MiniMockTest')
        self.file.close = self.Mock('file_close_function')

    def test_urldump_dumpsContentProperly(self):
        self.mock('urllib2.urlopen', returns=self.file)
        self.assertEquals(urldump('http://pykler.github.com'), 'MiniMockTest')
        self.assertSameTrace('\n'.join([
            "Called urllib2.urlopen('http://pykler.github.com')",
            "Called file_close_function()",
        ]))
        urllib2.urlopen('anything')
        self.mock('urllib2.urlopen', returns=self.file, tracker=None)
        urllib2.urlopen('this is not tracked')
        self.assertTrace("Called urllib2.urlopen('anything')")
        self.assertTrace("Called urllib2.urlopen('this is mocked but not tracked')", includes=False)
        self.assertSameTrace('\n'.join([
            "Called urllib2.urlopen('http://pykler.github.com')",
            "Called file_close_function()",
            "Called urllib2.urlopen('anything')",
        ]))
For the record, here are the basics of the solution that I ended up with. I used the Mock library itself rather than Mockito in the end, but the idea is the same:
from mock import patch

@override_settings(AUTHENTICATE_LOGIN_FIELD="username")
@patch("mymodule.auth_backend.urlopen")
def test_get_user_info_username(self, urlopen_override):
    response = "file://" + os.path.join(os.path.dirname(__file__), "{0}".format("response.xml"))
    # mock patch replaces API call
    urlopen_override.return_value = urlopen(response)
    # call the patched object
    userinfo = RemoteAuthBackend.get_info_from_api_with_un_pw("user", "pass")
    assert_equal(type(userinfo), dict)
    assert_equal(userinfo["nick"], "user")
    assert_equal(userinfo["pass"], "pass")

Dumping HTTP requests with Flask

I am developing a Flask-based web application (https://github.com/opensourcehacker/sevabot) which has HTTP-based API services.
Many developers are using and extending the API, and I'd like to add a feature that prints Flask's HTTP request to Python logging output, so you can see the raw HTTP payloads, source IP and headers you get.
What hooks does Flask offer where this kind of HTTP request dumping would be easiest to implement?
Are there any existing solutions and best practices to learn from?
Flask makes a standard logger available at current_app.logger; there's an example configuration in this gist, though you can centralise the logging calls in a before_request handler if you want to log every request:
from flask import request, current_app

@app.before_request
def log_request():
    if current_app.config.get('LOG_REQUESTS'):
        current_app.logger.debug('whatever')
        # Or if you don't want to use a logger, implement
        # whatever system you prefer here
        # print request.headers
        # open(current_app.config['REQUEST_LOG_FILE'], 'w').write('...')
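As a slightly fuller sketch (the log format here is my own choice, not a Flask convention), a before_request handler can dump the method, path, source IP, headers and raw body:

from flask import Flask, request

app = Flask(__name__)

@app.before_request
def dump_request():
    # request.get_data() buffers the raw body in memory; fine for debugging,
    # but think twice before enabling it for large uploads in production.
    app.logger.debug(
        '%s %s from %s\n%s\nBody: %r',
        request.method,
        request.path,
        request.remote_addr,
        '\n'.join('{0}: {1}'.format(k, v) for k, v in request.headers),
        request.get_data(),
    )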

Route requests based on the Accept header in Python web frameworks

I have some experience with different web frameworks (Django, web.py, Pyramid and CherryPy), and I'm wondering in which one it will be easier, and hopefully cleaner, to implement a dispatcher that routes to a different "view/handler" based on the "Accept" header and the HTTP method, e.g.:
Accept: application/json
POST /post/
is handled different than:
Accept: text/html
POST /post/
So the request gets routed to the particular view of the corresponding handler for the MIME type "application/json" and the HTTP method "POST".
I do know how to implement something like that in CherryPy, but I lose the use of the CherryPy tools for internal redirection of the request, because I'm calling the specific method directly instead of having it called automagically by the dispatcher. Another option is to implement a whole new dispatcher from scratch, but that's a last resort.
I'm aware of the alternative of using extensions in the URL, like /post.json or /post/.json, but I'm looking to keep the same URL.
If all you are looking for is a framework that can do this easily, then use Pyramid.
Pyramid view definitions are made with predicates, not just routes, and a view only matches if all predicates match. One such predicate is the accept predicate, which does exactly what you want: it makes switching views on the Accept header easy and simple:
from pyramid.view import view_config

@view_config(route_name='some_api_name', request_method='POST', accept='application/json')
def handle_someapi_json(request):
    pass  # return JSON

@view_config(route_name='some_api_name', request_method='POST', accept='text/html')
def handle_someapi_html(request):
    pass  # return HTML
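For completeness, a minimal sketch of wiring up the route these two views share (the route pattern and server setup are assumptions for illustration):

from wsgiref.simple_server import make_server
from pyramid.config import Configurator

config = Configurator()
config.add_route('some_api_name', '/post/')
config.scan()  # picks up the @view_config decorated views in this package
app = config.make_wsgi_app()
make_server('0.0.0.0', 8080, app).serve_forever()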
I needed to do this in Django, and so I wrote a piece of middleware to make it possible: http://baltaks.com/2013/01/route-requests-based-on-the-http-accept-header-in-django
Here is the code:
# A simple middleware component that lets you use a single Django
# instance to serve multiple versions of your app, chosen by the client
# using the HTTP Accept header.
# In your settings.py, map a value you're looking for in the Accept header
# to a urls.py file:
# HTTP_HEADER_ROUTING_MIDDLEWARE_URLCONF_MAP = {
#     u'application/vnd.api-name.v1': 'app.urls_v1'
# }

from django.conf import settings

class HTTPHeaderRoutingMiddleware:
    def process_request(self, request):
        try:
            for content_type in settings.HTTP_HEADER_ROUTING_MIDDLEWARE_URLCONF_MAP:
                if request.META['HTTP_ACCEPT'].find(content_type) != -1:
                    request.urlconf = settings.HTTP_HEADER_ROUTING_MIDDLEWARE_URLCONF_MAP[content_type]
        except KeyError:
            pass  # use default urlconf (settings.ROOT_URLCONF)

    def process_response(self, request, response):
        return response
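To activate it, the middleware is listed in settings along with the map, along these lines (the dotted path is an assumption about where the class lives):

# settings.py (sketch)
MIDDLEWARE_CLASSES = (
    'app.middleware.HTTPHeaderRoutingMiddleware',
    # ... the rest of your middleware ...
)
HTTP_HEADER_ROUTING_MIDDLEWARE_URLCONF_MAP = {
    u'application/vnd.api-name.v1': 'app.urls_v1',
}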
I'm not quite sure what you mean by "internal redirection", but if you look at the code you can see that tools.accept is a really thin wrapper around lib.cptools.accept, which you can call from your own code easily. Hand it a list of Content-Types your server can send, and it will tell you which one the client prefers, or raise 406 if the types you emit and the types the client accepts don't overlap.
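As a rough sketch of calling it directly (the handler and payloads are made up for illustration):

import cherrypy
from cherrypy.lib import cptools

class Post(object):
    @cherrypy.expose
    def index(self):
        # Returns the client's preferred type among those we can emit,
        # or raises 406 if there is no overlap.
        best = cptools.accept(media=['application/json', 'text/html'])
        if best == 'application/json':
            cherrypy.response.headers['Content-Type'] = 'application/json'
            return '{"ok": true}'
        return '<p>ok</p>'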
