Python Flask REST API freezes during a call to subprocess command - python

We have a working REST API that handles many different endpoints for the front end. While one specific endpoint is running a subprocess command, all other endpoints halt and wait for the subprocess to finish. Can anyone help me understand why this happens? I thought Flask handled requests asynchronously?
[...]
class withdrawCrypto(Resource):
    def get(self):
        auth = json.loads('{"ERROR" : "User authentication failed!"}')
        wrongAmount = json.loads('{"ERROR" : "Wrong amount"}')
        wrongWlt = json.loads('{"ERROR" : "Invalid wallet provided. Please check the wallet addr!"}')
        notEnough = json.loads('{"ERROR" : "You don\'t have enough crypto to withdraw this amount"}')
        account = str(request.args.get('account'))
        token = str(request.args.get('token'))
        wallet = str(request.args.get('wlt'))
        count = int(request.args.get('count'))
        if len(wallet) != 34:
            return jsonify(data=wrongWlt)
        if wallet[0] != 'B':
            return jsonify(data=wrongWlt)
        cursorLG.execute("select balance from btc WHERE login=%s;", (account,))
        checkBalance = cursorLG.fetchall()
        if checkBalance[0]['balance'] < int(count):
            return jsonify(data=notEnough)
        cursorLG.execute("select secret from accounts WHERE login=%s;", (account,))
        userCheck = cursorLG.fetchall()
        if userCheck[0]['secret'] == token:
            if count and int(count) > 0:
                host = credentials['rpc']
                user = credentials['rpcuser']
                passwd = credentials['rpcpassword']
                timeout = credentials['rpcclienttimeout']
                command = 'bitcoin-cli -rpcconnect=' + host + ' -rpcuser=' + user + ' -rpcpassword=' + passwd + ' -rpcclienttimeout=' + timeout + ' sendtoaddress ' + wallet + ' ' + str(count)
                result = subprocess.check_output(command, shell=True).strip()
                cursorLG.execute("select balance from btc WHERE login=%s", (account,))
                current = cursorLG.fetchall()
                setNew = int(int(current[0]['balance']) - int(count))
                cursorLG.execute("replace into btc (login, balance, lastwithdrawalwlt) values (%s, %s, %s) ", (account, setNew, wallet))
                return jsonify(data=result.decode("utf-8"))
            else:
                return jsonify(data=wrongAmount)
        else:
            print('Failed Crypto withdrawal! Actual passw / user sent: ', userCheck[0]['secret'], token)
            return jsonify(data=auth)
[...]
# Serve the high performance http server
if __name__ == '__main__':
    http_server = WSGIServer(('', 9000), app)
    http_server.serve_forever()
All other endpoints work fast without any delays. Any help appreciated.

The problem was:
result = subprocess.check_output(command, shell=True).strip()
More specifically, check_output blocks: it waits until the process exits so it can read its STDOUT, and while the single server process is stuck there, no other request gets served.
As a quick workaround I installed gunicorn and served the app with --timeout 120 --workers 20
So now when 1 worker is busy, the other 19 still serve requests.
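Beyond adding workers, it may also be worth dropping shell=True: passing the command as an argument list (a sketch below, reusing the variable names from the question) avoids spawning a shell entirely and removes the injection risk of concatenating user-supplied values like wallet and count into a command string.

```python
import subprocess  # only needed when you actually run the command

def build_sendtoaddress_cmd(host, user, passwd, timeout, wallet, count):
    # Argument list instead of one shell string: no shell=True needed, and
    # user-supplied values (wallet, count) cannot inject shell syntax.
    return [
        "bitcoin-cli",
        "-rpcconnect=" + host,
        "-rpcuser=" + user,
        "-rpcpassword=" + passwd,
        "-rpcclienttimeout=" + str(timeout),
        "sendtoaddress",
        wallet,
        str(count),
    ]

# Usage (this still blocks the current worker, but is injection-safe):
# result = subprocess.check_output(build_sendtoaddress_cmd(
#     host, user, passwd, timeout, wallet, count)).strip()
```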


Cloudwatch Insight Query Lambda

I have a problem: I am trying to automate a CloudWatch Logs Insights query, and I found this code:
def getInsightLogCounts():
    print("starting cloudwatch insight queries")
    msg = ''
    # STEP 1 - create a dict with the query label as key and the query as value. This can be stored in the db as well
    query_Dict = getQueryDict()
    log_group = '/the/loggroup/you/want/to/'
    print("starting cloudwatch insight queries for " + str(log_group))
    for query_key, query in query_Dict.items():
        print("query key : " + str(query_key) + " \n query : " + str(query))
        # STEP 2 - create a query response object using the start_query method of the logs client. Here we fetch data for the last 24 hours
        start_query_response = logs_client.start_query(
            logGroupName=log_group,
            queryString=query,
            startTime=int((datetime.today() - timedelta(hours=24)).timestamp()),
            endTime=int(datetime.now().timestamp()),
            limit=1)
        query_id = start_query_response['queryId']
        response = None
        # STEP 3 - run a while loop and wait for the query to complete
        while response is None or response['status'] == 'Running':
            time.sleep(1)
            response = logs_client.get_query_results(queryId=query_id)
        # STEP 4 - extract the result and proceed to the next query
        if response and response['results'] and response['results'][0] and response['results'][0][1]:
            # response found, update msg
            msg = msg + str(query_key) + " : " + str(response['results'][0][1].get('value')) + "; \n"
            print("query value returned for " + str(query_key) + " is : " + str(response['results'][0][1].get('value')))
        else:
            msg = msg + str(query_key) + " : 0" + "; \n"
            print("no query data returned for " + str(query_key))
    return msg
Now I need to fit my query into this code, but I'm new to Lambda (Python) code.
My query is (a sample one):
fields @timestamp, @message
| filter @message like 'END RequestId'
| sort @timestamp desc
If anyone has some ideas, I would be very grateful for help or even just some advice.
This is a link to the original post I took the code from:
https://medium.com/xebia-engineering/accessing-cloudwatch-metrics-and-insights-from-aws-lambda-1119c40ff80b#5833
Sorry for the unclear post before. I managed to fit my code in here, so hopefully someone with a big heart can help me.
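For what it's worth, the getQueryDict() helper the snippet assumes could be as small as the sketch below; the key name is made up, and the query string is the sample query from the question:

```python
def getQueryDict():
    # Hypothetical helper assumed by the snippet above: maps a readable
    # label (used as the key in the result message) to an Insights query.
    return {
        "end_request_count": (
            "fields @timestamp, @message "
            "| filter @message like 'END RequestId' "
            "| sort @timestamp desc"
        ),
    }
```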

Python: Bloomberg API is not authorized

I am trying to pull data from Bloomberg using the Python API. The API package comes with example code, and the programs that only require localhost work perfectly. However, the programs that use other authorization methods always fail with the error:
Connecting to port 8194 on localhost
TokenGenerationFailure = {
reason = {
source = "apitkns (apiauth) on ebbdbp-ob-053"
category = "NO_AUTH"
errorCode = 12
description = "User not in emrs userid=NA\mds firm=22691"
subcategory = "INVALID_USER"
}
}
Failed to get token
No authorization
I saw one other person with a similar problem, but instead of solving it he chose to just use localhost. I can't always use localhost, because I will have to assist and troubleshoot for other users, so I need a hint on how to overcome this error.
My question is: how can I set the userid to anything other than OS_LOGON, which automatically uses my own account's login credentials, so that I can use other users' names when needed? I tried to replace OS_LOGON with the user name, but it didn't work.
The full program I am trying to run is:
"""SnapshotRequestTemplateExample.py"""
from __future__ import print_function
from __future__ import absolute_import
import datetime
from optparse import OptionParser, OptionValueError
import blpapi
TOKEN_SUCCESS = blpapi.Name("TokenGenerationSuccess")
TOKEN_FAILURE = blpapi.Name("TokenGenerationFailure")
AUTHORIZATION_SUCCESS = blpapi.Name("AuthorizationSuccess")
TOKEN = blpapi.Name("token")
def authOptionCallback(_option, _opt, value, parser):
vals = value.split('=', 1)
if value == "user":
parser.values.auth = "AuthenticationType=OS_LOGON"
elif value == "none":
parser.values.auth = None
elif vals[0] == "app" and len(vals) == 2:
parser.values.auth = "AuthenticationMode=APPLICATION_ONLY;"\
"ApplicationAuthenticationType=APPNAME_AND_KEY;"\
"ApplicationName=" + vals[1]
elif vals[0] == "userapp" and len(vals) == 2:
parser.values.auth = "AuthenticationMode=USER_AND_APPLICATION;"\
"AuthenticationType=OS_LOGON;"\
"ApplicationAuthenticationType=APPNAME_AND_KEY;"\
"ApplicationName=" + vals[1]
elif vals[0] == "dir" and len(vals) == 2:
parser.values.auth = "AuthenticationType=DIRECTORY_SERVICE;"\
"DirSvcPropertyName=" + vals[1]
else:
raise OptionValueError("Invalid auth option '%s'" % value)
def parseCmdLine():
"""parse cli arguments"""
parser = OptionParser(description="Retrieve realtime data.")
parser.add_option("-a",
"--ip",
dest="hosts",
help="server name or IP (default: localhost)",
metavar="ipAddress",
action="append",
default=[])
parser.add_option("-p",
dest="port",
type="int",
help="server port (default: %default)",
metavar="tcpPort",
default=8194)
parser.add_option("--auth",
dest="auth",
help="authentication option: "
"user|none|app=<app>|userapp=<app>|dir=<property>"
" (default: %default)",
metavar="option",
action="callback",
callback=authOptionCallback,
type="string",
default="user")
(opts, _) = parser.parse_args()
if not opts.hosts:
opts.hosts = ["localhost"]
if not opts.topics:
opts.topics = ["/ticker/IBM US Equity"]
return opts
def authorize(authService, identity, session, cid):
"""authorize the session for identity via authService"""
tokenEventQueue = blpapi.EventQueue()
session.generateToken(eventQueue=tokenEventQueue)
# Process related response
ev = tokenEventQueue.nextEvent()
token = None
if ev.eventType() == blpapi.Event.TOKEN_STATUS or \
ev.eventType() == blpapi.Event.REQUEST_STATUS:
for msg in ev:
print(msg)
if msg.messageType() == TOKEN_SUCCESS:
token = msg.getElementAsString(TOKEN)
elif msg.messageType() == TOKEN_FAILURE:
break
if not token:
print("Failed to get token")
return False
# Create and fill the authorization request
authRequest = authService.createAuthorizationRequest()
authRequest.set(TOKEN, token)
# Send authorization request to "fill" the Identity
session.sendAuthorizationRequest(authRequest, identity, cid)
# Process related responses
startTime = datetime.datetime.today()
WAIT_TIME_SECONDS = 10
while True:
event = session.nextEvent(WAIT_TIME_SECONDS * 1000)
if event.eventType() == blpapi.Event.RESPONSE or \
event.eventType() == blpapi.Event.REQUEST_STATUS or \
event.eventType() == blpapi.Event.PARTIAL_RESPONSE:
for msg in event:
print(msg)
if msg.messageType() == AUTHORIZATION_SUCCESS:
return True
print("Authorization failed")
return False
endTime = datetime.datetime.today()
if endTime - startTime > datetime.timedelta(seconds=WAIT_TIME_SECONDS):
return False
def main():
"""main entry point"""
global options
options = parseCmdLine()
# Fill SessionOptions
sessionOptions = blpapi.SessionOptions()
for idx, host in enumerate(options.hosts):
sessionOptions.setServerAddress(host, options.port, idx)
sessionOptions.setAuthenticationOptions(options.auth)
sessionOptions.setAutoRestartOnDisconnection(True)
print("Connecting to port %d on %s" % (
options.port, ", ".join(options.hosts)))
session = blpapi.Session(sessionOptions)
if not session.start():
print("Failed to start session.")
return
subscriptionIdentity = None
if options.auth:
subscriptionIdentity = session.createIdentity()
isAuthorized = False
authServiceName = "//blp/apiauth"
if session.openService(authServiceName):
authService = session.getService(authServiceName)
isAuthorized = authorize(authService, subscriptionIdentity,
session, blpapi.CorrelationId("auth"))
if not isAuthorized:
print("No authorization")
return
else:
print("Not using authorization")
.
.
.
.
.
finally:
session.stop()
if __name__ == "__main__":
print("SnapshotRequestTemplateExample")
try:
main()
except KeyboardInterrupt:
print("Ctrl+C pressed. Stopping...")
This example is intended for Bloomberg's BPIPE product and as such includes the necessary authorization code. For this example, if you're connecting to the Desktop API (typically localhost:8194) you would want to pass an auth parameter of "none". Note that this example is for the mktdata snapshot functionality which isn't supported by Desktop API.
You state you're trying to troubleshoot on behalf of other users, presumably traders using BPIPE under their credentials. In this case you would need to create an Identity object to represent that user.
This would be done thusly:
# Create and fill the authorization request
authRequest = authService.createAuthorizationRequest()
authRequest.set("authId", STRING_CONTAINING_USERS_EMRS_LOGON)
authRequest.set("ipAddress", STRING_OF_IP_ADDRESS_WHERE_USER_IS_LOGGED_INTO_TERMINAL)
# Send authorization request to "fill" the Identity
session.sendAuthorizationRequest(authRequest, identity, cid)
Please be aware of potential licensing compliance issues when using this approach as this can have serious consequences. If in any doubt, approach your firm's market data team who will be able to ask their Bloomberg contacts.
Edit:
As asked in the comments, it's useful to elaborate on the other possible parameters for the AuthorizationRequest.
"uuid" + "ipAddress"; this would be the default method of authenticating users for Server API. On BPIPE this would require Bloomberg to explicitly enable it for you. The UUID is the unique integer identifier assigned to each Bloomberg Anywhere user. You can look this up in the terminal by running IAM
"emrsId" + "ipAddress"; "emrsId" is a deprecated alias for "authId". This shouldn't be used anymore.
"authId" + "ipAddress"; "authId" is the String defined in EMRS (the BPIPE Entitlements Management and Reporting System) or SAPE (the Server API's equivalent of EMRS) that represents each user. This would typically be that user's OS login details (e.g. DOMAIN/USERID) or Active Directory property (e.g. mail -> blah#blah.blah)
"authId" + "ipAddress" + "application"; "application" is the application name defined on EMRS/SAPE. This will check to see whether the user defined in authId is enabled for the named application on EMRS. Using one of these user+app style Identity objects in requests should record usage against both the user and application in the EMRS usage reports.
"token"; this is the preferred approach. Using the session.generateToken functionality (which can be seen in the original question's code snippet) will result in an alphanumeric string. You'd pass this as the only parameter into the Authorization request. Note that the token generation system is virtualization-aware; if it detects it's running in Citrix or a remote desktop it will report the IP address of the display machine (or one hop towards where the user actually is).

Get output of a Job through JobId using Python in Azure Automation Runbook

I want to get the output of a job, or its status, using Python in Azure Automation. I can do it in PowerShell, but I want to do it in Python. Is there any equivalent SDK for -AzureRmAutomationJob in Python?
Here is sample code to create a sample Azure Automation job and get the output of the job:
import time
import uuid
import requests
import automationassets

# Automation resource group and account to start the runbook job in
_AUTOMATION_RESOURCE_GROUP = "contoso"
_AUTOMATION_ACCOUNT = "contosodev"

# Set up required body values for a runbook.
# Make sure you have a hello_world_python runbook published in the automation account
# with an argument of -n
body = {
    "properties": {
        "runbook": {
            "name": "hello_world_python"
        },
        "parameters": {
            "[PARAMETER 1]": "-n",
            "[PARAMETER 2]": "world"
        }
    }
}

# Return token based on Azure Automation RunAs connection
def get_automation_runas_token(runas_connection):
    """ Returns a token that can be used to authenticate against Azure resources """
    from OpenSSL import crypto
    import adal
    # Get the Azure Automation RunAs service principal certificate
    cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
    sp_cert = crypto.load_pkcs12(cert)
    pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM, sp_cert.get_privatekey())
    # Get RunAs connection information for the Azure Automation service principal
    application_id = runas_connection["ApplicationId"]
    thumbprint = runas_connection["CertificateThumbprint"]
    tenant_id = runas_connection["TenantId"]
    # Authenticate with the service principal certificate
    resource = "https://management.core.windows.net/"
    authority_url = ("https://login.microsoftonline.com/" + tenant_id)
    context = adal.AuthenticationContext(authority_url)
    azure_credential = context.acquire_token_with_client_certificate(
        resource,
        application_id,
        pem_pkey,
        thumbprint)
    # Return the token
    return azure_credential.get('accessToken')

# Authenticate to Azure using the Azure Automation RunAs service principal
automation_runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
access_token = get_automation_runas_token(automation_runas_connection)

# Set what resources to act against
subscription_id = str(automation_runas_connection["SubscriptionId"])
job_id = str(uuid.uuid4())

# Set up URI to create a new automation job
uri = ("https://management.azure.com/subscriptions/" + subscription_id
       + "/resourceGroups/" + _AUTOMATION_RESOURCE_GROUP
       + "/providers/Microsoft.Automation/automationAccounts/" + _AUTOMATION_ACCOUNT
       + "/jobs/" + job_id + "?api-version=2015-10-31")

# Make request to create new automation job
headers = {"Authorization": 'Bearer ' + access_token}
json_output = requests.put(uri, json=body, headers=headers).json()

# Get results of the automation job
_RETRY = 360  # stop after 60 minutes (360 * 10 sleep seconds / 60 seconds in a minute)
_SLEEP_SECONDS = 10
status_counter = 0
while status_counter < _RETRY:
    status_counter = status_counter + 1
    job = requests.get(uri, headers=headers).json()
    status = job['properties']['status']
    if status == 'Completed' or status == 'Failed' or status == 'Suspended' or status == 'Stopped':
        break
    time.sleep(_SLEEP_SECONDS)

# If the job did not complete in an hour, throw an exception
if status_counter == _RETRY:
    raise StandardError("Job did not complete in 60 minutes.")
if job['properties']['status'] != 'Completed':
    raise StandardError("Job did not complete successfully.")

# Get output streams from the job
uri = ("https://management.azure.com/subscriptions/" + subscription_id
       + "/resourceGroups/" + _AUTOMATION_RESOURCE_GROUP
       + "/providers/Microsoft.Automation/automationAccounts/" + _AUTOMATION_ACCOUNT
       + "/jobs/" + job_id
       + "/streams?$filter=properties/streamType%20eq%20'Output'&api-version=2015-10-31")
job_streams = requests.get(uri, headers=headers).json()

# For each stream id, print out the text
for stream in job_streams['value']:
    uri = ("https://management.azure.com/subscriptions/" + subscription_id
           + "/resourceGroups/" + _AUTOMATION_RESOURCE_GROUP
           + "/providers/Microsoft.Automation/automationAccounts/" + _AUTOMATION_ACCOUNT
           + "/jobs/" + job_id
           + "/streams/" + stream['properties']['jobStreamId']
           + "?$filter=properties/streamType%20eq%20'Output'&api-version=2015-10-31")
    output_stream = requests.get(uri, headers=headers).json()
    print output_stream['properties']['streamText']
You can refer to this URL for further reference. Hope it helps.
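Since the sample builds the job URI in several places, a single helper keeps the create (PUT), polling (GET), and streams requests pointing at the same job path; the segment names below are taken from the sample above, and note the Automation REST path is /jobs/{jobId} with no parentheses around the id:

```python
def automation_job_uri(subscription_id, resource_group, account, job_id):
    # Single source of truth for the job path, so the create, poll,
    # and streams requests all address the same job resource.
    return ("https://management.azure.com/subscriptions/" + subscription_id
            + "/resourceGroups/" + resource_group
            + "/providers/Microsoft.Automation/automationAccounts/" + account
            + "/jobs/" + job_id + "?api-version=2015-10-31")
```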

Twython rate limit

I am getting a TwythonRateLimitError and want to be sure that I don't screw up my account. I am new to working with the Twitter API. How can I check to make sure that I am not going over my query limit? I read that it is 150 queries/hour... What happens if I go over? Am I at risk of this in my code, or does the limit only apply to particular commands?
I am not building an app; I am just trying to get a specific sample from Twitter (a random set of users with similar follower bases, 7500 to 10000 followers). My code so far is below. I will be saving the successful hits to a file, but I am waiting to be sure that is necessary.
from twython import Twython, TwythonError, TwythonRateLimitError
from random import randint

APP_KEY = 'redacted'
APP_SECRET = 'redacted'
ACCESS_TOKEN = 'redacted'
twitter = Twython(APP_KEY, APP_SECRET, oauth_version=2)
ACCESS_TOKEN = twitter.obtain_access_token()
twitter = Twython(APP_KEY, access_token=ACCESS_TOKEN)

print "hello twitterQuery\n"
count = 0
step = 0
isError = 0
try:
    # new account I made today to set an upper bound on userID
    maxID = twitter.show_user(screen_name="query_test")['id']
except TwythonRateLimitError:
    isError = 1
ids = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
if isError == 0 and step <= 150:
    while count < 10:
        step = step + 1
        randomID = randint(1, maxID)
        isMissing = 0
        print str(step) + " " + str(randomID)
        try:
            randomUserData = twitter.show_user(user_id=randomID)
        except TwythonError:
            isMissing = 1
        if isMissing == 0:
            followers = randomUserData['followers_count']
            if followers >= 7500 and followers <= 10000:
                print "ID: " + str(randomID) + ", followers: " + str(followers)
                ids[count] = randomID
                count = count + 1
print "\ndone"
for userID in ids:
    print userID
To see your current rate limit status, authenticate with your app token and send a GET request to
https://api.twitter.com/1.1/application/rate_limit_status.json
and inspect the response.
See this page for further context.
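In Twython specifically, the same data should be available via twitter.get_application_rate_limit_status() (method name per Twython's endpoint mapping; check your version). A tiny parser for the response, using the payload shape from Twitter's rate_limit_status documentation:

```python
def remaining_calls(rate_limit_payload, resource, endpoint):
    # Payload shape (per Twitter's rate_limit_status docs):
    # {"resources": {"users": {"/users/show/:id":
    #     {"limit": 900, "remaining": 899, "reset": 1403602426}}}}
    return rate_limit_payload["resources"][resource][endpoint]["remaining"]
```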

With ec2 python API boto, how to get spot instance_id from SpotInstanceRequest?

When using boto, the Amazon AWS Python API:
ec2_connection.request_spot_instances(...)
# This will return an ResultSet of SpotInstanceRequest
How can I get instance_ids from the SpotInstanceRequest?
UPDATE: After a lot of playing and googling, I did it this way; hope this helps:
ec2_connection.get_all_spot_instance_requests(request_ids=[my_spot_request_id, ])
This will return the updated SpotInstanceRequest; once the instance is ready, we can get instance_id from it.
I did something similar: check periodically to see if the spot instance request id returned by
ec2_connection.request_spot_instances(...)
is matched to an instance in the results of
conn.get_all_spot_instance_requests(...):
conn = boto.ec2.connect_to_region(region_name=region_name, aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
req = conn.request_spot_instances(price=MAX_SPOT_BID, instance_type=instance_type, image_id=AMI_ID, availability_zone_group=region_name, key_name=KEY_PAIR_PEM[:-4], security_groups=security_groups)
job_instance_id = None
while job_instance_id is None:
    print "checking job instance id for this spot request"
    job_sir_id = req[0].id  # spot instance request = sir; job_ is the relevant aws item for this job
    reqs = conn.get_all_spot_instance_requests()
    for sir in reqs:
        if sir.id == job_sir_id:
            job_instance_id = sir.instance_id
            print "job instance id: " + str(job_instance_id)
            break
    time.sleep(SPINUP_WAIT_TIME)
spot_instance_requests = aws.ec2_get_connection().request_spot_instances(...)
MAX_MINUTES = 180
spot_instance_request_ids = [sir.id for sir in spot_instance_requests]
for _ in range(MAX_MINUTES):
    log.info('waiting for spot instances to start', request_ids=spot_instance_request_ids, seconds=60)
    time.sleep(60)
    spot_instance_requests = aws.ec2_get_connection().get_all_spot_instance_requests(
        request_ids=spot_instance_request_ids)
    if any(sir.instance_id for sir in spot_instance_requests):
        log.info('spot instance started. waiting...', seconds=60 * 5)
        time.sleep(60 * 5)
        break
else:
    raise Exception("Spot instances didn't start in {0} minutes!".format(MAX_MINUTES))
