I am making a chat application and need one conf file for the server and another for the client. However, whenever I run the client, its conf properties end up written into the server's configuration file, even though I am using SafeConfigParser. Is there any way to fix this?
Thanks.
When the server starts:
```python
config = configparser.SafeConfigParser()
config.read('chatserver.conf')
config['PORT']['port'] = str(self.port)
with open("chatserver.conf", "w") as configfile:
    config.write(configfile)
```
When a client joins:
```python
clientconf = configparser.SafeConfigParser()
clientconf.read('chatclient.conf')
clientconf['SERVER']['last_server_used'] = str(self.host)
clientconf['SERVER']['port_used'] = str(self.port)
with open("chatserver.conf", "w") as confFile:
    clientconf.write(confFile)
```
chatclient.conf:
```ini
[SERVER]
last_server_used = '127.0.0.1'
port_used = '50000'
default_debug_mode = False
log = True
default_log_file = chat.log
```
chatserver.conf:
```ini
[PORT]
port = '1000'
```
When I run the server, a client joins the chat, I close everything, and then I run the server again, chatserver.conf ends up identical to chatclient.conf.
In the line:
```python
with open("chatserver.conf", "w") as confFile:
    clientconf.write(confFile)
```
you save clientconf as chatserver.conf. I think you mean to save it as chatclient.conf instead.
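A minimal sketch of the fixed block (note that in Python 3, SafeConfigParser is just a deprecated alias for ConfigParser, so ConfigParser is preferred):
```python
clientconf = configparser.ConfigParser()  # SafeConfigParser is a deprecated alias
clientconf.read('chatclient.conf')
clientconf['SERVER']['last_server_used'] = str(self.host)
clientconf['SERVER']['port_used'] = str(self.port)
# Write back to the client's own file, not chatserver.conf
with open("chatclient.conf", "w") as confFile:
    clientconf.write(confFile)
```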
I'm using base64 to upload an image to a Django server. When the image is larger than 2 MB, the server can't get the image.
I set the upload size to 75M in the uwsgi and nginx configuration, but it did not help.
Client:
```python
image1 = base64.b64encode(open(file_path, 'rb').read())
r = requests.post(url, data={"image": image1})
```
Server:
```python
result = request.POST.get("image")
```
Nginx:
```nginx
server {
    # the port your site will be served on
    listen ****;
    # the domain name it will serve for
    #server_name .example.com;  # substitute your machine's IP address or FQDN
    server_name ****;
    charset utf-8;
    # max upload size
    client_max_body_size 75M;
}
```
uwsgi:
```ini
# ocr.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /root/ubuntu
# Django's wsgi file
module = mysite.wsgi
# the virtualenv (full path)
home = /root/ubuntu
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 32
max-requests = 10000
daemonize = /tmp/a.log
pidfile = /tmp/a.pid
#reload-on-as = 126
#reload-on-rss = 126
#enable-threads = true
# the socket (use the full path to be safe)
socket = /root/ubuntu/mysite.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
limit-post = 20000000
harakiri = 30
post-buffering = 20000000
py-autoreload = 1
```
You're probably hitting a hard limit set somewhere in the server's libraries. If you are trying to upload such a big file, then POSTing everything in the data parameter is a bad idea (and also really slow).
You should use the files keyword argument for sending with requests, and the FILES property for receiving with Django.
Client:
```python
r = requests.post(url, files={'image': open(file_path, 'rb')})
```
Server:
```python
img = request.FILES.get("image")  # key matches the name used in the files dict
with open('path/to/destination', 'wb') as dest:
    for chunk in img.chunks():
        dest.write(chunk)
```
Check out the File Uploads page in the Django documentation.
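If you do need to keep sending the base64 string in data, note that the 2 MB ceiling is suspiciously close to Django's DATA_UPLOAD_MAX_MEMORY_SIZE default of 2.5 MB, which makes Django reject larger request bodies with RequestDataTooBig. A sketch of raising it in settings.py (the 75 MB value is an assumption that just mirrors the nginx limit):
```python
# settings.py
# The default is 2621440 bytes (2.5 MB); bigger bodies fail before the
# view runs. 75 MB here mirrors client_max_body_size in nginx (assumption).
DATA_UPLOAD_MAX_MEMORY_SIZE = 75 * 1024 * 1024
```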
What I have
I have a client/server in Flask. The client sends a query in JSON format to the server and the server creates a JSON file. There is another tool which takes this query, executes it on a DB, and writes the result to a results.txt file. The server periodically checks the 'results' directory for .txt files, and if it finds a new file it extracts the result. For the periodic checking part I used APScheduler.
What I want to do
Now I want to send this data (queryResult) which the server has extracted from the .txt file back to the client.
This is what I have done so far.
Server Code:
```python
import atexit
import fnmatch
import glob
import json
import os
import time

from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

# Variable to store the result file count in the Tool directory
fileCount = 0
# Variable to store the query result generated by the Tool
queryResult = 0

# Method to read .txt files generated by the Tool
def readFile():
    global fileCount
    global queryResult
    # Path where .txt files are created by the Tool
    path = "<path>"
    tempFileCount = len(fnmatch.filter(os.listdir(path), '*.txt'))
    if fileCount != tempFileCount:
        fileCount = tempFileCount
        list_of_files = glob.iglob(path + '*.txt')
        latest_file = max(list_of_files, key=os.path.getctime)
        print("\nLast modified file: " + latest_file)
        with open(latest_file, "r") as myfile:
            queryResult = myfile.readlines()
            print(queryResult)  # I would like to return this queryResult to the client

scheduler = BackgroundScheduler()
scheduler.add_job(func=readFile, trigger="interval", seconds=10)
scheduler.start()
# Shut down the scheduler when exiting the app
atexit.register(lambda: scheduler.shutdown())

# Method to write url parameters in JSON to a file
def write_file(response):
    time_stamp = str(time.strftime("%Y-%m-%d_%H-%M-%S"))
    with open('data' + time_stamp + '.json', 'w') as outfile:
        json.dump(response, outfile)
        print("JSON File created!")

class GetParams(Resource):
    def get(self):
        response = json.loads(list(dict(request.args).keys())[0])
        write_file(response)

api.add_resource(GetParams, '/data')  # Route for GetJSON()

if __name__ == '__main__':
    app.run(port='5890', threaded=True)
```
Client Code:
```python
import json

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

data = {
    'query': 'SELECT * FROM table_name'
}
url = 'http://127.0.0.1:5890/data'

session = requests.Session()
retry = Retry(connect=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

resp = session.get(url, params=json.dumps(data))
print(resp)
```
Can anyone please help me with how to send this queryResult back to the client?
EDIT: I would like the server to send the queryResult back to the client each time it encounters a new file in the Tool directory, i.e., every time it finds a new file, it extracts the result (which it is doing currently) and sends it back to the client.
What you want to do is called a web worker architecture.
To pass a real-time queryResult from a background job to a client app, you can use a combination of a message queue (Kafka is recommended; RabbitMQ is OK too) and a WebSocket. When a client sends a request to the /data endpoint, you should return some unique token (a UUID if your user is anonymous, or the user id if it's authenticated). Add the same token to the name of the resulting file. When your background worker has finished processing the file, it uses the token (from the file's name) to create a Kafka or RabbitMQ topic, like topic_for_user_id_1337 or topic_for_uuid_jqwfoj-123qwr, and publishes queryResult as a message.
At the same time, your client should establish a WebSocket connection (Flask is quite bad for WebSockets, but there are a few fine libs to do it anyway, like socketio) and pass the token through it to your backend, so the backend creates a message-queue subscriber on the topic with the token's name. When the background job finishes, the web backend receives the message and passes it to the user through the WebSocket, as in the sketch below.
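A rough sketch of that WebSocket side with Flask-SocketIO and kafka-python (the event names, topic naming scheme, and broker address are all assumptions, not your actual setup):
```python
# Hypothetical sketch: forwards one Kafka message per token to the
# subscribed browser via Socket.IO. Not production-ready.
from threading import Thread

from flask import Flask, request
from flask_socketio import SocketIO
from kafka import KafkaConsumer

app = Flask(__name__)
socketio = SocketIO(app)

def forward_result(token, sid):
    # Subscribe to the per-token topic that the background worker publishes to
    consumer = KafkaConsumer('topic_for_' + token,
                             bootstrap_servers='localhost:9092')  # assumed broker
    for message in consumer:
        socketio.emit('query_result', message.value.decode(), room=sid)
        break  # one result per job

@socketio.on('subscribe')
def on_subscribe(token):
    # request.sid identifies this client's Socket.IO session
    Thread(target=forward_result, args=(token, request.sid), daemon=True).start()

if __name__ == '__main__':
    socketio.run(app)
```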
P.S. If this sounds overly complicated, you can avoid the MQ and the WebSocket, put the queryResult into a DB, and create an endpoint to check whether it exists there yet. If it doesn't, you return something like "not ready yet" and the client retries in a few seconds; if it's ready, you return the queryResult from the DB.
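A minimal sketch of that simpler polling approach, reusing the flask_restful Api from the server code above (the route, the status payloads, and the in-memory results store are illustrative assumptions; a real version would query a DB):
```python
# Hypothetical polling endpoint; results would be filled in by readFile()
# (or a DB write) keyed by the token handed out at /data.
results = {}  # token -> queryResult

class GetResult(Resource):
    def get(self, token):
        if token not in results:
            return {'status': 'not ready yet'}, 202  # client retries later
        return {'status': 'done', 'result': results[token]}, 200

api.add_resource(GetResult, '/result/<string:token>')
```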
I have a Flask application with a route (webhook) receiving POST requests (webhooks) from an external phone application (incoming call = POST request). This route calls threading.Event.set(), and based on this event another route (eventsource) sends an event stream to an open EventSource connection on a webpage created by yet another route (eventstream).
```python
telfa_called = Event()
telfa_called.clear()
call = ""

@telfa.route('/webhook', methods=['GET', 'POST'])
def webhook():
    global call
    print('THE CALL IS HERE')
    x = request.data
    y = ET.fromstring(x.decode())
    caller_number = y.find('caller_number').text
    telfa_called.set()  # setting threading.Event for another route
    return Response(status=200)

@telfa.route('/eventstream', methods=['GET', 'POST'])
@login_required
def eventstream():
    jsid = str(uuid.uuid4())
    return render_template('telfa/stream.html', jsid=jsid)

def eventsource_gen():
    while 1:
        if telfa_called.wait(10):
            telfa_called.clear()
            print('JE TO TADY')  # Czech: "IT'S HERE"
            yield "data: {}\n\n".format(json.dumps(call))

@telfa.route('/eventsource', methods=['GET', 'POST'])
def eventsource():
    return Response(eventsource_gen(), mimetype='text/event-stream')
```
Everything works great when testing in a pure Python application. The problem starts when I move this to the production server, where I use uWSGI with nginx. (Other parts of this Python application work without any trouble.)
When the EventSource connection is open and an incoming webhook should be processed, the whole Flask server gets stuck (for all other users too), the page stops loading, and I cannot find where the error is.
I only know that the POST request from the external application is received, but the response to the EventSource is never made.
I suspect it has something to do with processes: the EventSource connection from JavaScript is handled by one process and the webhook route by another, and they do not communicate. Either way, I suppose this has a very trivial solution, but I haven't found it in the past 3 days and nights. Any hints, please? Thanks in advance.
To be complete, this is my uwsgi config file:
```ini
[uwsgi]
module = wsgi:app
enable-threads = true
master = true
processes = 5
threads = 2
uid = www-data
gid = www-data
socket = /tmp/myproject.sock
chmod-socket = 666
vacuum = true
die-on-term = true
limit-as = 512
buffer-size = 512000
workers = 5
max-requests = 100
req-logger = file:/tmp/uwsg-req.log
logger = file:/tmp/uwsgi.log
```
For circumstances outside of my control, I need to use the Flask server to serve basic HTML files and the Flask-SocketIO wrapper to provide a WebSocket interface between any clients and the server. The async_mode has to be threading instead of gevent or eventlet; I understand that threading is less efficient, but I can't use the other two frameworks.
In my unit tests, I need to shut down and restart the WebSocket server. When I attempt to shut down the server, I get the RuntimeError 'Cannot stop unknown web server.' This is because the werkzeug.server.shutdown function is not found in the flask.request.environ object.
Here is how the server is started.
```python
SERVER = flask.Flask(__name__)
WEBSOCKET = flask_socketio.SocketIO(SERVER, async_mode='threading')
WEBSOCKET.run(SERVER, host='127.0.0.1', port=7777)
```
Here is the short version of how I'm attempting to shut down the server.
```python
client = WEBSOCKET.test_client(SERVER)

@WEBSOCKET.on('kill')
def killed():
    WEBSOCKET.stop()

try:
    client.emit('kill')
except:
    pass
```
The stop method must be called from within a Flask request context, hence the weird kill event callback. Inside the stop method, flask.request.environ contains:
```text
'CONTENT_LENGTH' (40503696) = {str} '0'
'CONTENT_TYPE' (60436576) = {str} ''
'HTTP_HOST' (61595248) = {str} 'localhost'
'PATH_INFO' (60437104) = {str} '/socket.io'
'QUERY_STRING' (60327808) = {str} ''
'REQUEST_METHOD' (40503648) = {str} 'GET'
'SCRIPT_NAME' (60437296) = {str} ''
'SERVER_NAME' (61595296) = {str} 'localhost'
'SERVER_PORT' (61595392) = {str} '80'
'SERVER_PROTOCOL' (65284592) = {str} 'HTTP/1.1'
'flask.app' (65336784) = {Flask} <Flask 'server'>
'werkzeug.request' (60361056) = {Request} <Request 'http://localhost/socket.io' [GET]>
'wsgi.errors' (65338896) = {file} <open file '<stderr>', mode 'w' at 0x0000000001C92150>
'wsgi.input' (65338848) = {StringO} <cStringIO.StringO object at 0x00000000039902D0>
'wsgi.multiprocess' (65369288) = {bool} False
'wsgi.multithread' (65369232) = {bool} False
'wsgi.run_once' (65338944) = {bool} False
'wsgi.url_scheme' (65338800) = {str} 'http'
'wsgi.version' (65338752) = {tuple} <type 'tuple'>: (1, 0)
```
My question is: how do I set up the Flask server so that the werkzeug.server.shutdown method is available inside Flask request contexts?
Also, this is using Python 2.7.
I have good news for you: the testing environment does not use a real server. In that context the client and the server run inside the same process, so the communication between them does not go through the network as it does when you run things for real. In this situation there really is no server, so there's nothing to stop.
It seems you are starting a real server, though. For unit tests, that server is not used; all you need is your unit tests, which import the application and then use a test client to issue Socket.IO events. I think all you need to do is simply not start the server; the unit tests should run just fine without it if all you use is the test client as shown above.
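A minimal sketch of a test along those lines (the module name myapp and the asserted behavior are assumptions based on the snippets above):
```python
# test_socketio.py -- hypothetical test module; adjust the import to match
# wherever SERVER and WEBSOCKET are actually defined.
import unittest

from myapp import SERVER, WEBSOCKET  # assumed module name

class SocketIOTestCase(unittest.TestCase):
    def test_kill_event(self):
        # No WEBSOCKET.run() anywhere: the test client talks to the app
        # in-process, so there is no server to start or stop.
        client = WEBSOCKET.test_client(SERVER)
        client.emit('kill')
        received = client.get_received()
        self.assertIsInstance(received, list)

if __name__ == '__main__':
    unittest.main()
```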
I try to create a name server in my server file with the Pyro4.naming.startNS() method.
My server file looks like this:
```python
my_object = MyClass()
daemon = Pyro4.Daemon()
uri_deamon, ns, br = Pyro4.naming.startNS()
uri = daemon.register(my_object)
ns.nameserver.register("server", uri)
daemon.requestLoop()
```
And my client:
```python
ns = Pyro4.locateNS()
uri = ns.lookup('server')
my_object = Pyro4.Proxy(uri)
```
Pyro4.locateNS() never returns.
After I start the server file, I try to execute "python -m Pyro4.nsc list", and this command never finishes either.
Do you have any ideas what is wrong?
Tomek.
SOLUTION:
I needed to use Pyro4.naming.startNSloop() instead of Pyro4.naming.startNS(). Pyro4.naming.startNSloop should be executed in a thread.
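A minimal sketch of that fix (the daemon-thread setup and the one-second wait are assumptions; the registration mirrors the original snippet):
```python
import threading
import time

import Pyro4
import Pyro4.naming

# startNSloop() blocks, so run the name server in a background thread
ns_thread = threading.Thread(target=Pyro4.naming.startNSloop, daemon=True)
ns_thread.start()
time.sleep(1)  # crude wait for the name server to come up (assumption)

my_object = MyClass()
daemon = Pyro4.Daemon()
uri = daemon.register(my_object)

# locateNS() can now find the running name server
ns = Pyro4.locateNS()
ns.register("server", uri)

daemon.requestLoop()
```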