I’m running a simple WSGIServer with the following code:
botapp = bottle.app()
server = WSGIServer((myAddress, int(myPort)), botapp)
It’s logging more and more failed login attempts.
I searched for a blacklist option in the uWSGI documentation (https://uwsgi-docs.readthedocs.io/en/latest/Options.html).
What is a good way to implement a dynamically updated blacklist, i.e. when a new recurring failed attempt is detected, the offending IP is added to the blacklist and the block is applied to new connections?
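Something along these lines is what I have in mind, just as a sketch (the gevent import, the BLACKLIST/FAILED names and the record_failed_login helper are only my guesses at how it could be wired up, not working code):
# Sketch only: shared in-process blacklist checked on every request.
import bottle
from gevent.pywsgi import WSGIServer  # assuming this is the WSGIServer in use

BLACKLIST = set()   # IPs currently blocked
FAILED = {}         # IP -> count of recent failed login attempts
MAX_FAILURES = 5

botapp = bottle.app()

@botapp.hook('before_request')
def reject_blacklisted():
    ip = bottle.request.environ.get('REMOTE_ADDR')
    if ip in BLACKLIST:
        bottle.abort(403, 'Forbidden')

def record_failed_login(ip):
    # Call this from the login handler whenever authentication fails.
    FAILED[ip] = FAILED.get(ip, 0) + 1
    if FAILED[ip] >= MAX_FAILURES:
        BLACKLIST.add(ip)

server = WSGIServer(('0.0.0.0', 8080), botapp)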
Much appreciate your input.
Hi, I have a Flask application that is built as a Docker image to serve as an API.
This image is deployed to multiple environments (DEV/QA/PROD).
I want to use a separate Application Insights resource for each environment.
Using a single Application Insights resource works fine.
Here is a code snippet:
app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = APPINSIGHTS_INSTRUMENTATIONKEY
appinsights = AppInsights(app)
@app.after_request
def after_request(response):
    appinsights.flush()
    return response
But to have multiple Application Insights resources I need to configure app.config with the key of the right one.
I thought of the following solution, which throws errors.
Here is a snippet:
app = Flask(__name__)

def monitor(key):
    app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = key
    appinsights = AppInsights(app)

    @app.after_request
    def after_request(response):
        appinsights.flush()
        return response

@app.route("/")
def hello():
    hostname = urlparse(request.base_url).hostname
    print(hostname)
    if hostname == "dev url":
        print('Dev')
        monitor('3ed57a90-********')
    if hostname == "prod url":
        print('prod')
        monitor('941caeca-********-******')
    return "hello"
This example contains the monitor function, which reads the URL and decides which instrumentation key to use so that metrics are sent to the right place, but apparently I can't do that configuration after the first request has been handled. (Is there a way a config variable can be changed based on the URL?)
Error message:
AssertionError: The setup method 'errorhandler' can no longer be called on the application. It has already handled its first request, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the application are done before running it.
I hope someone can guide me to a better solution.
Thanks in advance.
AFAIK, the Application Insights SDK normally collects telemetry data and sends it to Azure in batches, so you have to keep a single Application Insights resource per application. Use separate staging/deployment slots if you want different Application Insights resources for the same application.
Application Insights takes care of the life cycle of each request, from the moment it starts until its response is complete, and it tracks the application from start to end. That is why we can't use more than one Application Insights resource in a single application.
The SDK starts collecting telemetry when the application starts and stops gathering it when the application stops; flush() is only there to push buffered telemetry out, e.g. before the application stops.
I have tried what you have used, and the log confirms the same.
With a single Application Insights resource I was able to collect all the telemetry information.
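That said, if the goal is simply a different Application Insights resource per environment (DEV/QA/PROD), the usual approach is to keep the app itself unchanged and inject the key per deployment, for example through an environment variable read once at startup, before the first request. A minimal sketch (assuming the applicationinsights Flask extension used in the question; the variable name is just an example):
import os
from flask import Flask
from applicationinsights.flask.ext import AppInsights  # assuming this extension, as in the question

app = Flask(__name__)
# Each deployment (DEV/QA/PROD) sets this environment variable to its own key,
# so the configuration happens exactly once, before the first request.
app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = os.environ['APPINSIGHTS_INSTRUMENTATIONKEY']
appinsights = AppInsights(app)

@app.after_request
def after_request(response):
    appinsights.flush()
    return response

@app.route("/")
def hello():
    return "hello"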
I am going through some CTF challenges at https://recruit.osiris.cyber.nyu.edu/challenges.
I got to one for Template Programming where the task is to "Read /flag.txt from the server. http://recruit.osiris.cyber.nyu.edu:2000"
I am not asking for a solution, but I would like some better understanding of what is going on below:
What is this code doing?
Should I be worried about running this outside of debug mode and/or using host="0.0.0.0"?
What are some resources that could help me understand this? I tried reading through the Flask documentation and the tutorialspoint page, but I am unclear as to how this doesn't just set up a local server for testing as opposed to accessing a remote server...
If I press Ctrl+C, do I need to worry about leaving a server still running on an open port when I am not in debug mode?
#!/usr/bin/env python3
from flask import Flask, request, abort, render_template_string
import os.path
app = Flask(__name__)
@app.route('/', methods=['GET'])
def index():
    name = request.args.get('name')
    if name is not None:
        return render_template_string(open('templates/hello.html').read().format(name=name))
    return render_template_string(open('templates/index.html').read())

if __name__ == "__main__":
    app.run(host="0.0.0.0")
I think I can answer most of these.
As you probably already figured out, Flask is a fairly basic web framework. By the look of things, what you have there is a copy of the code running at the CTF site. It displays just two pages; one that contains the initial web form (templates/index.html) and another that uses a query string variable to greet the user (templates/hello.html) when a name has been provided.
You don't really have to run this code yourself. The 0.0.0.0 host address is a catch-all that matches all IPv4 addresses on the local machine, which would include local addresses like 192.168.0.1 and 127.0.0.1 as well as the IP address used for incoming connections to the server.
Like I said, this is the code running on the remote server.
I think what you need to do is find some way of crafting a request to this web service in such a way that it reveals the contents of /flag.txt instead of (or perhaps in addition to) just saying hello. A quick search for something like "flask include file vulnerability" should give you some idea of how to attack this problem.
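To make the distinction concrete without spoiling the challenge, here is a small illustration (not the challenge code) of why feeding user input into the template source itself is dangerous, while passing it in as a template variable is not:
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/unsafe')
def unsafe():
    name = request.args.get('name', '')
    # The input becomes part of the template source, so ?name={{ 7 * 7 }}
    # renders as "Hello 49" -- server-side template injection.
    return render_template_string("Hello " + name)

@app.route('/safe')
def safe():
    name = request.args.get('name', '')
    # The input is passed as data; "{{ 7 * 7 }}" is rendered literally.
    return render_template_string("Hello {{ name }}", name=name)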
I am implementing a single sign-on mechanism for a Flask server running on Ubuntu 16.04 that authenticates users against an Active Directory server in the Windows domain.
When I run the example app from https://github.com/mkomitee/flask-kerberos/tree/master/example on the Flask server, I can access the Flask server from a client computer that's logged in, the server correctly negotiates access and returns the name of the logged in user. However, this is very slow, taking about two minutes.
Following the steps of what happens in flask-kerberos, I found that the process stalls at the authGSSServerInit step. I can reproduce the behaviour using the following minimal program:
import kerberos
rc, state = kerberos.authGSSServerInit("HTTP@flaskserver.mydomain.local")
The initialisation finishes successfully, but again it takes about two minutes.
I have successfully registered the service principal (HTTP/flaskserver.mydomain.local) on the AD server and exported the keytab to the Flask server. I can get a ticket granting ticket on the Flask server using kinit -k HTTP/flaskserver.mydomain.local. I can also verify passwords in Python using the kerberos library:
import kerberos
kerberos.checkPassword('username', 'password', 'HTTP/flaskserver.mydomain.local', 'MYDOMAIN.LOCAL')
This runs correctly and almost instantly.
What could be the cause for the delay in running kerberos.authGSSServerInit? How do I debug this?
The delay was caused by a failing reverse DNS lookup for the hostname. host flaskserver correctly returned the IP, but host <ip-of-flaskserver> returned a Host <ip-of-flaskserver>.in-addr.arpa not found: 2(SERVFAIL).
As described at https://web.mit.edu/kerberos/krb5-1.13/doc/admin/princ_dns.html, disabling the reverse DNS lookup in the krb5.conf solved the problem:
[libdefaults]
rdns = false
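As a general way to debug delays like this, MIT Kerberos can print a trace of every lookup it performs via the KRB5_TRACE environment variable; setting it before reproducing the slow call shows where the time is spent (a sketch, assuming MIT Kerberos is the underlying library):
import os
os.environ['KRB5_TRACE'] = '/dev/stderr'  # set before the kerberos library is loaded

import kerberos
rc, state = kerberos.authGSSServerInit("HTTP@flaskserver.mydomain.local")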
I am trying to add authentication to an XML-RPC server (which will be running on nodes of a P2P network) without using user:password@host, as this would reveal the password to all attackers. The authentication is basically there to create a private network, preventing unauthorised users from accessing it.
My solution to this was to create a challenge response system very similar to this but I have no clue how to add this to the xmlrpc server code.
I found a similar question (Where custom authentication was needed) here.
So I tried creating a module that would be called whenever a client connected to the server. This would connect to a challenge-response server running on the client and if the client responded correctly would return True. The only problem was that I could only call the module once and then I got a reactor cannot be restarted error. So is there some way of having a class that whenever the "check()" function is called it will connect and do this?
Would the simplest thing to do be to connect using SSL? Would that protect the password? Although this solution would not be optimal as I am trying to avoid having to generate SSL certificates for all the nodes.
Don't invent your own authentication scheme. There are plenty of great schemes already, and you don't want to become responsible for doing the security research into what vulnerabilities exist in your invention.
There are two very widely supported authentication mechanisms for HTTP (over which XML-RPC runs, therefore they apply to XML-RPC). One is "Basic" and the other is "Digest". "Basic" is fine if you decide to run over SSL. Digest is more appropriate if you really can't use SSL.
Both are supported by Twisted Web via twisted.web.guard.HTTPAuthSessionWrapper, with copious documentation.
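A rough sketch of putting Digest auth in front of a Twisted XML-RPC resource might look like the following; the API class, realm, port and the hard-coded in-memory credentials are all placeholders for illustration:
from zope.interface import implementer
from twisted.cred import portal, checkers
from twisted.internet import reactor
from twisted.web import guard, server, xmlrpc
from twisted.web.resource import IResource

class Api(xmlrpc.XMLRPC):
    # Placeholder XML-RPC method.
    def xmlrpc_ping(self):
        return 'pong'

@implementer(portal.IRealm)
class ApiRealm(object):
    def requestAvatar(self, avatarId, mind, *interfaces):
        # Hand authenticated users the XML-RPC resource.
        if IResource in interfaces:
            return IResource, Api(), lambda: None
        raise NotImplementedError()

# Hard-coded credentials for the sketch; use a real checker in practice.
checker = checkers.InMemoryUsernamePasswordDatabaseDontUse()
checker.addUser(b'alice', b'secret')

wrapper = guard.HTTPAuthSessionWrapper(
    portal.Portal(ApiRealm(), [checker]),
    [guard.DigestCredentialFactory(b'md5', b'p2p-network')])

reactor.listenTCP(8080, server.Site(wrapper))
reactor.run()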
Based on your problem description, it sounds like the Secure Remote Password Protocol might be what you're looking for. It's a password-based mechanism that provides strong, mutual authentication without the complexity of SSL certificate management. It may not be quite as flexible as SSL certificates but it's easy to use and understand (the full protocol description fits on a single page). I've often found it a useful tool for situations where a trusted third party (aka Kerberos/CA authorities) isn't appropriate.
For anyone who was looking for a full example, below is mine (thanks to Rakis for pointing me in the right direction). In this example the user and password are stored in a file called 'passwd' (see the first useful link for more details and how to change it).
Server:
#!/usr/bin/env python
import bjsonrpc
from SRPSocket import SRPSocket
import SocketServer
from bjsonrpc.handlers import BaseHandler
import time
class handler(BaseHandler):
    def time(self):
        return time.time()

class SecureServer(SRPSocket.SRPHost):
    def auth_socket(self, socket):
        server = bjsonrpc.server.Server(socket, handler_factory=handler)
        server.serve()

s = SocketServer.ForkingTCPServer(('', 1337), SecureServer)
s.serve_forever()
Client:
#! /usr/bin/env python
import bjsonrpc
from bjsonrpc.handlers import BaseHandler
from SRPSocket import SRPSocket
import time
class handler(BaseHandler):
    def time(self):
        return time.time()
socket, key = SRPSocket.SRPSocket('localhost', 1337, 'dht', 'testpass')
connection = bjsonrpc.connection.Connection(socket, handler_factory=handler)
test = connection.call.time()
print test
time.sleep(1)
Some useful links:
http://members.tripod.com/professor_tom/archives/srpsocket.html
http://packages.python.org/bjsonrpc/tutorial1/index.html
I have a Django web application. I also have a spell server written with Twisted running on the same machine as Django (on localhost:8090). The idea is that when the user performs some action, the request comes to Django, which in turn connects to this Twisted server, and the server sends data back to Django. Finally Django puts this data into an HTML template and serves it back to the user.
Here's where I am having a problem. In my Django app, when the request comes in I create a simple twisted client to connect to the locally run twisted server.
...
factory = Spell_Factory(query)
reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, factory)
reactor.run(installSignalHandlers=0)
print factory.results
...
The reactor.run() is causing a problem, since it's an event loop: the next time this same code is executed by Django, I am unable to connect to the server. How does one handle this?
The above two answers are correct. However, considering that you've already implemented a spelling server, run it as one. You can start by running it on the same machine as a separate process, at localhost:PORT. Right now it seems you already have a very simple binary protocol interface; you could implement an equally simple Python client using the standard library's socket interface in blocking mode (see the sketch after this answer).
However, I suggest playing around with twisted.web and exposing a simple web interface. You can use JSON to serialize and deserialize data, which is well supported by Django. Here's a very quick example:
import json
from twisted.web import server, resource
from twisted.python import log
class Root(resource.Resource):
    def getChild(self, path, request):
        # represents / on your web interface
        return self

class WebInterface(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        log.msg('GOT a GET request.')
        # read request.args if you need to process query args
        # ... call some internal service and get output ...
        return json.dumps(output)

class SpellingSite(server.Site):
    def __init__(self, *args, **kwargs):
        self.root = Root()
        server.Site.__init__(self, self.root, **kwargs)
        self.root.putChild('spell', WebInterface())
And to run it you can use the following skeleton .tac file:
from twisted.application import service, internet
site = SpellingSite()
application = service.Application('WebSpell')
# attach the service to its parent application
service_collection = service.IServiceCollection(application)
internet.TCPServer(PORT, site).setServiceParent(service_collection)
Running your spell checker as a first-class service of its own allows you to move it to another machine one day if you find the need; exposing a web interface also makes it easy to scale it horizontally behind a reverse-proxying load balancer.
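As for the first option above (a plain blocking client built on the standard library's socket module), a minimal sketch could look like this; the line-based protocol used here is purely hypothetical, substitute whatever your spell server actually speaks:
import socket

def spell_query(query, host='localhost', port=8090, timeout=5.0):
    # Blocking stdlib client: send one query line, read until the server closes.
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(query.encode('utf-8') + b'\n')
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b''.join(chunks).decode('utf-8')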
reactor.run() should be called only once in your whole program. Don't think of it as "start this one request I have", think of it as "start all of Twisted".
Running the reactor in a background thread is one way to get around this; your Django application can then use blockingCallFromThread and call a Twisted API as it would any blocking API. You will need a little bit of cooperation from your WSGI container, though, because you will need to make sure that this background Twisted thread is started and stopped at appropriate times (when your interpreter is initialized and torn down, respectively).
You could also use Twisted as your WSGI container, and then you don't need to start or stop anything special; blockingCallFromThread will just work immediately. See the command-line help for twistd web --wsgi.
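For illustration, once the reactor is running in a background thread (or you run under twistd web --wsgi), a Django view can wrap any Deferred-returning call like this sketch; lookup here is a hypothetical stand-in for whatever Twisted API talks to your spell server:
from twisted.internet import defer, reactor
from twisted.internet.threads import blockingCallFromThread

def lookup(query):
    # Hypothetical stand-in: anything that returns a Deferred works.
    return defer.succeed('no suggestions for %r' % query)

def check_spelling(query):
    # Called from Django's worker thread while the reactor runs elsewhere;
    # blocks until the Deferred fires and returns its result.
    return blockingCallFromThread(reactor, lookup, query)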
You would need to stop the reactor after you get results from the Twisted server, or after some error/timeout, so on each Django request that queries your Twisted server you would run the reactor and then stop it. But that's not supported by the Twisted library: the reactor is not restartable. Possible solutions:
Use a separate thread for the Twisted reactor, but you will need to deploy your Django app with a server that supports long-running threads (I don't know of any off-hand, but you can write your own easily :-)).
Don't use Twisted for implementing the client protocol; just use the plain stdlib socket module.