Flask Deploy on Gunicorn and Nginx Threadsafe error - python

I deployed the Flask script below on a CentOS 7 VPS.
from flask import Flask, request
import time

# http server
app = Flask(__name__)
server_dict = dict()
server_status_dict = dict()

@app.route('/<ip>')
def report(ip):
    server_dict[ip] = time.time()
    return "ok" + ":" + ip + ":" + str(time.time())

@app.route('/check')
def check():
    for server in server_dict:
        difference = time.time() - server_dict[server]
        if difference < 120:
            server_status_dict[server] = "ok" + ":" + str(difference)
        elif difference > 172800:
            del server_status_dict[server]
        else:
            server_status_dict[server] = "fail" + ":" + str(difference)
    message = ''
    for item in server_status_dict:
        message = message + f'{item}:{server_status_dict[item]}<br>'
    return message

# main function
if __name__ == '__main__':
    # start server
    app.run(host='0.0.0.0', port=1111)
I run it in two ways:
1. running the Python script directly with "python3.7 test.py"
2. deploying it with Gunicorn and Nginx.
I also made a script to test this Flask API, shown below:
import requests

for i in range(100):
    requests.get(f'http://example.org/{i}')
print("done 1")
for i in range(100):
    requests.get(f'http://server_IP:1111/{i}')
print("done 2")
With the 1st option, the script runs fine. When I go to "http://server_IP:1111/check", it gives 100 entries:
0:fail:759.2570543289185
1:fail:758.8786942958832
2:fail:758.5069346427917
3:fail:758.1351449489594
4:fail:757.7596881389618
5:fail:757.3863341808319
6:fail:757.010666847229
7:fail:756.6381704807281
8:fail:756.2622804641724
9:fail:755.8862257003784
10:fail:755.5146560668945
11:fail:755.1291973590851
12:fail:754.7365326881409
13:fail:754.356516122818
14:fail:753.981279373169
15:fail:753.6054089069366
16:fail:753.2138450145721
17:fail:752.818380355835
18:fail:752.4382960796356
19:fail:752.0667576789856
20:fail:751.7003827095032
21:fail:751.3132452964783
22:fail:750.9238367080688
23:fail:750.5513446331024
24:fail:750.1771302223206
25:fail:749.7979047298431
26:fail:749.4190459251404
27:fail:749.0481917858124
28:fail:748.6672575473785
29:fail:748.2830848693848
30:fail:747.909416437149
31:fail:747.5357480049133
32:fail:747.1593079566956
33:fail:746.7837409973145
34:fail:746.3994252681732
35:fail:746.0265593528748
36:fail:745.6520500183105
37:fail:745.2793860435486
38:fail:744.904794216156
39:fail:744.5288579463959
40:fail:744.1554877758026
41:fail:743.7802364826202
42:fail:743.4038217067719
43:fail:743.0015366077423
44:fail:742.616055727005
45:fail:742.2241225242615
46:fail:741.8492274284363
47:fail:741.4703538417816
48:fail:741.0822536945343
49:fail:740.7089433670044
50:fail:740.3415608406067
51:fail:739.9651212692261
52:fail:739.5690467357635
53:fail:739.1705968379974
54:fail:738.7934353351593
55:fail:738.4151468276978
56:fail:738.0353343486786
57:fail:737.6413230895996
58:fail:737.2650125026703
59:fail:736.8714530467987
60:fail:736.4966006278992
61:fail:736.1160485744476
62:fail:735.7190825939178
63:fail:735.3462533950806
64:fail:734.9714226722717
65:fail:734.5957586765289
66:fail:734.2199065685272
67:fail:733.8420522212982
68:fail:733.4598708152771
69:fail:733.0775439739227
70:fail:732.6989419460297
71:fail:732.3187139034271
72:fail:731.9392898082733
73:fail:731.5633845329285
74:fail:731.1846008300781
75:fail:730.8096714019775
76:fail:730.4323663711548
77:fail:730.0437717437744
78:fail:729.6707744598389
79:fail:729.2912459373474
80:fail:728.8956272602081
81:fail:728.5194237232208
82:fail:728.1444211006165
83:fail:727.7692551612854
84:fail:727.3844618797302
85:fail:727.0075929164886
86:fail:726.612667798996
87:fail:726.2140853404999
88:fail:725.8366258144379
89:fail:725.4668595790863
90:fail:725.080512046814
91:fail:724.7128283977509
92:fail:724.3402450084686
93:fail:723.9593863487244
94:fail:723.5851843357086
95:fail:723.2059574127197
96:fail:722.802404165268
97:fail:722.40824842453
98:fail:722.0141706466675
99:fail:721.6389377117157
But with the 2nd option, the error happens: when I go to "http://example.org/check" it gives a different result each time.
Sometimes it is:
0:fail:780.1568698883057
4:fail:778.6187407970428
6:fail:777.8681375980377
9:fail:776.739280462265
13:fail:775.2384984493256
15:fail:774.4248764514923
19:fail:772.897510766983
22:fail:771.7576985359192
25:fail:770.6321122646332
28:fail:769.4517879486084
31:fail:768.3453030586243
34:fail:767.2020778656006
37:fail:766.0453197956085
40:fail:764.8815402984619
43:fail:763.7402126789093
Another time it is:
2:fail:933.5209627151489
5:fail:932.381462097168
8:fail:931.2705476284027
11:fail:930.1319863796234
14:fail:928.9525971412659
17:fail:927.8257281780243
20:fail:926.6680727005005
23:fail:925.52357172966
26:fail:924.3903162479401
29:fail:923.2299783229828
32:fail:922.1214139461517
35:fail:920.9716517925262
38:fail:919.8139469623566
41:fail:918.6542329788208
44:fail:917.4981758594513
49:fail:915.6011772155762
I'm thinking it is a thread-safety issue. Could someone advise me?
Thanks
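For what it's worth, this looks less like a thread-safety bug than a multi-process one: Gunicorn normally starts several worker processes, each with its own copy of server_dict, and the requests are spread across them, so each worker only ever sees a slice of the reports, which matches the partial, changing output above. Below is a minimal sketch of one common fix, keeping the state in a shared store instead of a per-process dict (this assumes a local Redis instance and the redis-py package; the key name 'server_dict' is illustrative):

import time
import redis
from flask import Flask

app = Flask(__name__)
# The timestamps live in Redis, so every Gunicorn worker sees the same data.
store = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)

@app.route('/<ip>')
def report(ip):
    now = time.time()
    store.hset('server_dict', ip, now)  # shared across all workers
    return f"ok:{ip}:{now}"

@app.route('/check')
def check():
    lines = []
    for server, stamp in store.hgetall('server_dict').items():
        difference = time.time() - float(stamp)
        if difference < 120:
            lines.append(f'{server}:ok:{difference}')
        elif difference > 172800:
            store.hdel('server_dict', server)
        else:
            lines.append(f'{server}:fail:{difference}')
    return '<br>'.join(lines)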

Related

Heroku Returned a H10 Error for Python App Deployment

I'm trying to deploy a Python app on Heroku, but Heroku returned an H10 error. Here is my Python code:
import os
import requests
import random

from flask import Flask, jsonify, request
from flask_cors import CORS

from system.backend.data import Data
from system.backend.folder import Folder
from system.wallet.data_pool import DataPool
from system.wallet.exchange import Exchange
from system.pubsub import PubSub

app = Flask(__name__)
CORS(app, resources={ r'/*': { 'origins': 'http://localhost:8080' } })

data = Data()
data_pool = DataPool()
folder = Folder(data)
pubsub = PubSub(data, data_pool)

@app.route('/')
def default():
    return 'Welcome to mypython app'

@app.route('/main')
def route_mainn():
    return jsonify(data.to_json())

@app.route('/main/range')
def route_main_range():
    start = int(request.args.get('start'))
    end = int(request.args.get('end'))
    return jsonify(data.to_json()[::-1][start:end])

@app.route('/main/datalength')
def route_main_datalength():
    return jsonify(len(data.length))

@app.route('/myapp/data')
def route_myapp_data():
    app_data = data_pool.app_data()
    exchange_data.append(Exchange.reward_exchange(folder).to_json())
    data.add_data(exchange_data)
    folder = data.length[-1]
    pubsub.broadcast_folder(folder)
    data_pool.clear_data_exchange(data)
    return jsonify(data.to_json())

@app.route('/folder/exchange', methods=['POST'])
def route_folder_exchange():
    exchange_data = request.get_json()
    exchange = data_pool.existing_exchange(folder.address)
    if exchange:
        exchange.update(
            folder,
            exchange_data['recipient'],
            exchange_data['sender']
        )
    else:
        exchange = Exchange(
            folder,
            exchange_data['recipient'],
            exchange_data['sender']
        )
    pubsub.broadcast_exchange(exchange)
    data_pool.set_exchange(exchange)
    return jsonify(exchange.to_json())

@app.route('/folder/info')
def route_folder_info():
    return jsonify({'address': folder.address, 'data': folder.balance})

@app.route('/known-addresses')
def route_known_addresses():
    known_addresses = set()
    for data in main.length:
        for exchange in main.data:
            exchange['output'].keys()
            known_addresses.update(exchange['output'].keys())
    return jsonify(list(known_addresses))

@app.route('/exchange')
def route_exchanges():
    return jsonify(exchange_pool.exchange_data())

ROOT_PORT = 8080
PORT = ROOT_PORT
if os.environ.get('PEER') == 'True':
    PORT = random.randint(8081, 9000)
    result = requests.get(f'http://localhost:{ROOT_PORT}/main')
    print(f'result.json(): {result.json()}')
    result_main = Data.from_json(result.json())
    try:
        data.replace_length(result_data.length)
        print('\n -- Successfully synchronized the local data')
    except Exception as e:
        print(f'\n -- Error synchronizing: {e}')
if os.environ.get('SEED_DATA') == 'True':
    for i in range(10):
        main.add_folder([
            Exchange(Folder(), Folder().address, random.randint(500, 1000)).to_json(),
            Exchange(Folder(), Folder().address, random.randint(500, 1000)).to_json()
        ])
    for i in range(3):
        data_pool.set_exchange(
            Exchange(Folder(), Folder().address, random.randint(500, 1000))
        )
app.run(port=PORT)
I also made a worker.py file with the code as follows:
import os

import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']
redis_url = os.getenv('REDIS_URL', 'redis://localhost:8080')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
The Mac terminal recommended using waitress for the Python deployment, but I'm not sure how to implement waitress within the code shown above.
To use waitress:
Just do pip install waitress
And add this snippet:

if __name__ == '__main__':
    from waitress import serve
    serve(app, host='0.0.0.0', port=8000)
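One Heroku-specific detail worth checking as well (a hedged note, since the H10 log line itself isn't shown): Heroku assigns the web dyno's port through the PORT environment variable, and binding to a hard-coded port is a common cause of H10 crashes. The same waitress snippet, reading the port from the environment:

import os

if __name__ == '__main__':
    from waitress import serve
    # Heroku supplies the port via $PORT; fall back to 8000 for local runs.
    serve(app, host='0.0.0.0', port=int(os.environ.get('PORT', 8000)))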

Linux NoHup fails for Streaming API IG Markets where file is python

This is quite a specific question regarding nohup in Linux, running a Python file.
Backstory: I am trying to save down streaming data (from the IG Markets broadcast signal). As I am trying to run it via a remote server (so I don't have to keep my own local desktop up 24/7), somehow nohup will not engage when it 'listens' to the broadcast signal.
Below is the example Python code:
Below, is the example python code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
IG Markets Stream API sample with Python
"""
user_ = 'xxx'
password_ = 'xxx'
api_key_ = 'xxx'  # this is the 1st api key
account_ = 'xxx'
acc_type_ = 'xxx'

fileLoc = 'marketdata_IG_spx_5min.csv'

list_ = ["CHART:IX.D.SPTRD.DAILY.IP:5MINUTE"]
fields_ = ["UTM", "LTV", "TTV", "BID_OPEN", "BID_HIGH",
           "BID_LOW", "BID_CLOSE"]

import time
import sys
import traceback
import logging
import warnings
warnings.filterwarnings('ignore')

import pandas as pd  # needed for the read_csv/DataFrame calls below

from trading_ig import (IGService, IGStreamService)
from trading_ig.lightstreamer import Subscription

cols_ = ['timestamp', 'data']

# A simple function acting as a Subscription listener
def on_prices_update(item_update):
    # print("price: %s " % item_update)
    print("xxxxxxxx"
          .format(stock_name=item_update["name"], **item_update["values"]))

# A simple function acting as a Subscription listener
def on_charts_update(item_update):
    # print("price: %s " % item_update)
    print("xxxxxx"
          .format(stock_name=item_update["name"], **item_update["values"]))
    res_ = ["xxxxx"
            .format(stock_name=item_update["name"], **item_update["values"])
            .split(' ')]
    # display(pd.DataFrame(res_))
    try:
        data_ = pd.read_csv(fileLoc)[cols_]
        data_ = data_.append(pd.DataFrame(res_, columns=cols_))
        data_.to_csv(fileLoc)
        print('there is data and we are reading it')
        # display(data_)
    except:
        pd.DataFrame(res_, columns=cols_).to_csv(fileLoc)
        print('there is no data and we are saving first time')
    time.sleep(60)  # sleep for 1 min
def main():
    logging.basicConfig(level=logging.INFO)
    # logging.basicConfig(level=logging.DEBUG)

    ig_service = IGService(
        user_, password_, api_key_, acc_type_
    )
    ig_stream_service = IGStreamService(ig_service)
    ig_session = ig_stream_service.create_session()
    accountId = account_

    ################ my code to set sleep function to sleep/read at only certain time intervals
    s_time = time.time()
    ############################

    # Making a new Subscription in MERGE mode
    subscription_prices = Subscription(
        mode="MERGE",
        # make sure to put L1 in front of the instrument name
        items=list_,
        fields=fields_
    )
    # adapter="QUOTE_ADAPTER")

    # Adding the "on_price_update" function to Subscription
    subscription_prices.addlistener(on_charts_update)

    # Registering the Subscription
    sub_key_prices = ig_stream_service.ls_client.subscribe(subscription_prices)

    print('this is the line here')
    input("{0:-^80}\n".format("HIT CR TO UNSUBSCRIBE AND DISCONNECT FROM "
                              "LIGHTSTREAMER"))

    # Disconnecting
    ig_stream_service.disconnect()

if __name__ == '__main__':
    main()
Then I try to run it on Linux using this command: nohup python marketdata.py
where marketdata.py is basically the Python code above.
Somehow, the nohup will not engage. Any experts/gurus who might see what I am missing in my code?
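A hedged guess at what is blocking here: the script keeps itself alive with input(...), which needs an interactive terminal; under nohup (and especially when backgrounded with &) there is no usable stdin, so that call can fail or stop the process as soon as it runs. A minimal sketch of keeping the stream alive without stdin, replacing the input(...) line at the end of main() (the 60-second interval is arbitrary):

import time

# Instead of input(...), stay alive until the process is killed
# (e.g. kill <pid>); the subscription callbacks keep firing meanwhile.
try:
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    pass

# Disconnecting
ig_stream_service.disconnect()

Then run it detached with the output captured, e.g.: nohup python marketdata.py > marketdata.log 2>&1 &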

Why do I receive a 500 Internal Server Error in Postman when I run a Flask app

Dears,
I am new to Python and Flask. When I run the following code in Spyder I get the following message:
runfile('C:/Users/...../Desktop/Folders/..../BlockChain/Create Blockchain/Module 1 - Create a Blockchain/blockchain.py', wdir='C:/Users/...../Desktop/Folders/..../BlockChain/Create Blockchain/Module 1 - Create a Blockchain')
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
but when I request http://127.0.0.1:5000/get_chain in Postman, I get the following message:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
I am totally confused as to why. Here is my code:
import datetime
import hashlib
import json
from flask import Flask, jsonify, request

# Part 1 - Building a Blockchain
class Blockchain:

    def __init__(self):
        self.chain = []
        self.create_block(proof = 1, previous_hash = '0')

    def create_block(self, proof, previous_hash):
        block = {'index': len(self.chain) + 1,
                 'timestamp': str(datetime.datetime.now()),
                 'proof': proof,
                 'previous_hash': previous_hash}
        self.chain.append(block)
        return block

    def get_previous_block(self):
        return self.chain[-1]

    def proof_of_work(self, previous_proof):
        new_proof = 1
        check_proof = False
        while check_proof is False:
            hash_operation = hashlib.sha256(str(new_proof**2 - previous_proof**2).encode()).hexdigest()
            if hash_operation[:4] == '0000':
                check_proof = True
            else:
                new_proof += 1
        return new_proof

    def hash(self, block):
        encoded_block = json.dumps(block, sort_keys = True).encode()
        return hashlib.sha256(encoded_block).hexdigest()

    def is_chain_valid(self, chain):
        previous_block = chain[0]
        block_index = 1
        while block_index < len(chain):
            block = chain[block_index]
            if block['previous_hash'] != self.hash(previous_block):
                return False
            previous_proof = previous_block['proof']
            proof = block['proof']
            hash_operation = hashlib.sha256(str(proof**2 - previous_proof**2).encode()).hexdigest()
            if hash_operation[:4] != '0000':
                return False
            previous_block = block
            block_index += 1
        return True

# Creating a Web App
app = Flask(__name__)

# Creating a Blockchain
blockchain = Blockchain()

# Mining a new block
@app.route('/mine_block', methods = ['GET'])
def mine_block():
    previous_block = blockchain.get_previous_block()
    previous_proof = previous_block['proof']
    proof = blockchain.proof_of_work(previous_proof)
    previous_hash = blockchain.hash(previous_block)
    block = blockchain.create_block(proof, previous_hash)
    response = {'message': 'Congratulations, you just mined a block!',
                'index': block['index'],
                'timestamp': block['timestamp'],
                'proof': block['proof'],
                'previous_hash': block['previous_hash']}
    return jsonify(response), 200

# Getting the full Blockchain
@app.route('/get_chain', methods = ['GET'])
def get_chain():
    response = {'chain': blockchain.chain,
                'length': len(blockchain.chain)}
    return jsonify(response), 200

# Checking if the Blockchain is valid
@app.route('/is_valid', methods = ['GET'])
def is_valid():
    is_valid = blockchain.is_chain_valid(blockchain.chain)
    if is_valid:
        response = {'message': 'All good. The Blockchain is valid.'}
    else:
        response = {'message': 'Houston, we have a problem. The Blockchain is not valid.'}
    return jsonify(response), 200

# Running the app
app.run(host='0.0.0.0', port=5000)
The 'is_xhr' method has been deprecated and removed, so you need to upgrade your Flask version.
Open the Anaconda Prompt and type:
pip install --upgrade Flask
Then restart your IDE.

Your Flask version is not compatible with that code, so you need to update Flask.
How to do it:
step 1: open the Anaconda Prompt
step 2: run the command conda install flask=1.0.0 - now your Flask will get updated and the issue should be solved. If it still doesn't work, downgrade Werkzeug.
How to do it:
step 1: run conda install werkzeug=0.16.1 in the Anaconda Prompt.
Then restart Anaconda.
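If the environment uses pip rather than conda, the equivalent pinning (same versions as named above; adjust to your setup) would be:

pip install flask==1.0.0 werkzeug==0.16.1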

gevent and flask make redis w/r spend more time

server: Ubuntu 14.04, 2 cores and 4G.
gunicorn -k gevent and Flask.
gunicorn==20.0.4
Flask==1.1.1
gevent==1.4.0
The service behind the Flask app is some Redis reads/writes, just small keys and values, using the Python library redis==3.4.1.
The production problem is: when more people use the same API at the same time, the API response becomes heavy and spends more time in the Redis operations: from 10ms up to 100ms or even higher.
import time
import functools

import redis
from flask import Flask, request, jsonify

app = Flask(__name__)

pool = redis.ConnectionPool(host='127.0.0.1',
                            port='6379',
                            db=6,
                            encoding='utf-8',
                            decode_responses=True)
r = redis.StrictRedis(
    connection_pool=pool
)

def timer(func):
    @functools.wraps(func)
    def decorator(*args, **kwargs):
        s = time.time()
        data = request.json or request.form.to_dict()
        r = func(data, *args, **kwargs)
        end = time.time()
        print('spend: {}'.format(int(end * 1000 - s * 1000)))
        return r
    return decorator

def get_no():
    z = r.get('test2')
    print('room_no: {}'.format(z))
    if not z:
        create_no()
        return get_no()
    else:
        if player_num() > 100:
            create_no()
            return get_no()
        else:
            return z

def player_num():
    return r.incrby('room_num')

def create_no():
    if r.setnx('lock', 1):
        print('locked!')
        n = r.incrby('test2')
        r.delete('room_num')
        r.delete('lock')
        return n
    else:
        print('sleep!')
        time.sleep(0.05)

@app.route('/test', methods=['POST', 'GET'])
@timer
def test(data):
    # no = get_no()
    # print(no)
    z = r.incrby('incry_4')
    print(z)
    return jsonify(dict(code=200))
Plus, I ran some tests on a local machine with the wrk tool and found that when more connections are used, the API response takes more time. I want to know why, when using -k gevent, the API spends more time.
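One detail in the snippet that can amplify latency under load (a hedged observation, not a confirmed diagnosis of the gevent slowdown): the setnx-based lock has no expiry, so a worker that dies between setnx and delete leaves it stuck forever, and the else branch sleeps without retrying or returning anything. A sketch of a safer variant using redis-py's atomic SET with nx and ex:

import time
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=6, decode_responses=True)

def create_no():
    # SET key value NX EX is atomic: acquire the lock and give it a
    # 5-second expiry in one call, so a crashed worker cannot wedge it.
    if r.set('lock', 1, nx=True, ex=5):
        try:
            n = r.incrby('test2')
            r.delete('room_num')
            return n
        finally:
            r.delete('lock')
    # Lock held elsewhere: back off briefly, then retry.
    time.sleep(0.05)
    return create_no()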

Execute code in background after render page in Django

I have a script with Twilio:
from twilio.rest import Client

def wa(testo):
    client = Client()
    # this is the Twilio sandbox testing number
    from_whatsapp_number = 'whatsapp:+14155238886'
    to_whatsapp_number = 'whatsapp:+39xxxxxxxxxx'
    ts = 'Anomalia Rapportino ' + str(testo)
    client.messages.create(body=ts,
                           from_=from_whatsapp_number,
                           to=to_whatsapp_number)
I imported this script in a view, where I have this function:
def grazieeprint(request, pk):
    intermedio = get_object_or_404(IntermProd, pk=pk)
    datilavoro = WhoWork.objects.get(pk=intermedio.work_id)
    try:
        return render(request, 'FBIsystem/thanksandprint.html', {'pkpreso': pk})
    finally:
        try:
            appo = datilavoro.pezziorastima * 2
            if datilavoro.pezziora >= appo:
                testo = datilavoro.pk
                subprocess.Popen([wa(testo)], shell=True)
        except:
            pass
I need to run 'wa(testo)' after Django loads the page, because the whole message-sending process takes approx 15-20 seconds.
I tried with try/finally and with subprocess.Popen, but it always sends the message before rendering the page.
Please help.
TY
EDIT:
I tried:

finally:
    try:
        time.sleep(1)
        appo = datilavoro.pezziorastima * 2
        if datilavoro.pezziora >= appo:
            testo = datilavoro.pk
            subprocess.Popen([wa(testo)], shell=True)

It loads the page fast, but does not send.
EDIT 2:
Trying to use Celery, the script is now:

from twilio.rest import Client
from celery import shared_task, current_task

@shared_task
def wa(testo):
    print 'test'
    client = Client()
    # this is the Twilio sandbox testing number
    from_whatsapp_number = 'whatsapp:+14155238886'
    to_whatsapp_number = 'whatsapp:+39xxxxxxxxx'
    ts = 'Anomalia Rapportino ' + str(testo)
    client.messages.create(body=ts,
                           from_=from_whatsapp_number,
                           to=to_whatsapp_number)

but it does not work in parallel...
What is the right way?
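A hedged note on EDIT 2: decorating wa with @shared_task is not enough on its own; calling wa(testo) directly still executes the body synchronously inside the request. The task only runs in the background when it is sent to a worker, e.g. with .delay() (this assumes a Celery worker and broker are configured and running). A minimal sketch of the finally block in the view:

finally:
    try:
        appo = datilavoro.pezziorastima * 2
        if datilavoro.pezziora >= appo:
            # .delay() queues the task for a Celery worker instead of
            # executing it inline, so the page renders immediately.
            wa.delay(datilavoro.pk)
    except Exception:
        pass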
