Executing the auth token request once per run - Locust - Python

Ideally I want to grab the token one time (1 request) and then pass that token into the other 2 requests as they execute. When I run this code through Locust though...
from locust import HttpUser, constant, task
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

class ProcessRequests(HttpUser):
    host = 'https://hostURL'
    wait_time = constant(1)

    def on_start(self):
        tenant_id = "tenant123"
        client_id = "client123"
        secret = "secret123"
        scope = "api://123/.default"
        body = "grant_type=client_credentials&client_id=" + client_id + "&client_secret=" + secret + "&scope=" + scope

        tokenResponse = self.client.post(
            f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
            body,
            headers={"ContentType": "application/x-www-form-urlencoded"}
        )
        response = tokenResponse.json()
        responseToken = response['access_token']
        self.headers = {'Authorization': 'Bearer ' + responseToken}

    @task
    def get_labware(self):
        self.client.get("/123", name="Labware", headers=self.headers)

    @task
    def get_instruments(self):
        self.client.get("/456", name="Instruments", headers=self.headers)
It ends up firing off multiple token requests that don't stop.
Any ideas how to fix this so the token request only runs once?

In your case it runs once per user, so my expectation is that you spawned 24 users and the number of Labware and Instruments requests is at least twice as high, which means it works exactly according to the documentation:
Users (and TaskSets) can declare an on_start method and/or on_stop method. A User will call its on_start method when it starts running, and its on_stop method when it stops running. For a TaskSet, the on_start method is called when a simulated user starts executing that TaskSet, and on_stop is called when the simulated user stops executing that TaskSet (when interrupt() is called, or the user is killed).
If you want to get the token only once and then share it across all the virtual users, you can go for the workaround from this question:
@events.test_start.add_listener
def _(environment, **kwargs):
    global token
    token = get_token(environment.host)
and move what you currently have in on_start() into that get_token() function.
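For illustration, here is a minimal sketch of what that get_token() function could look like, assuming the same Azure AD client-credentials flow as in the question (it uses the requests library, since there is no self.client in a test_start listener; these details are an assumption, not part of the original answer):

import requests

def get_token(host):
    # host is unused here; the parameter is kept only to match the snippet above.
    # Same client-credentials request that the question issues in on_start(),
    # now executed only once per test run.
    tenant_id = "tenant123"
    body = {
        "grant_type": "client_credentials",
        "client_id": "client123",
        "client_secret": "secret123",
        "scope": "api://123/.default",
    }
    resp = requests.post(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data=body,
    )
    return resp.json()["access_token"]

The tasks can then build their Authorization header from the global token, e.g. headers={'Authorization': 'Bearer ' + token}.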
More information: Locust with Python: Introduction, Correlating Variables, and Basic Assertions

How to send object in worker from master?

I have this code:
class Tasks(SequentialTaskSet):
    @task
    def shopping(self):
        with self.parent.environment.parsed_options.shopping.request_airshopping(
                client=self.client, catch_response=True) as response:
            self.run_req(response, self.parent.environment.parsed_options.rsa)


class WebUser(HttpUser):
    tasks = [Tasks]
    min_wait = 0
    max_wait = 0

    @staticmethod
    @events.test_start.add_listener
    def on_test_start(environment: Environment, **kwargs):
        os.environ["ENABLE_SLEEP"] = "False"
        credentials = {
            "login": environment.parsed_options.login,
            "password": environment.parsed_options.password,
            "structure_unit_code": environment.parsed_options.suid,
        }
        login = Login(base_url=environment.host)
        response = login.request_login(credentials)
        parse_xml = parsing_xml_response(response.text)
        headers = {
            "Content-Type": "application/xml",
            "Authorization": f'Bearer {parse_xml.xpath("//Token")[0].text}'}
        shopping = Shopping(
            base_url=environment.parsed_options.host,
            headers=headers
        )
        set_paxs(environment, shopping)
        set_flight(environment, shopping)
        environment.parsed_options.__dict__.update({"shopping": shopping})
If I start without workers, it works successfully.
If I start workers, I get errors: TypeError: can not serialize 'Shopping' object
How can I send the Shopping object from the master to the workers?
Try https://docs.locust.io/en/stable/running-distributed.html
You can do this by sending a message from worker to master or master to worker. The message is just a string but you can include a data payload with it. You need a function that registers what your message is and what to do when it receives that message, which many times will be a function that does something with the data payload. The docs use this example:
from locust import events
from locust.runners import MasterRunner, WorkerRunner

# Fired when the worker receives a message of type 'test_users'
def setup_test_users(environment, msg, **kwargs):
    for user in msg.data:
        print(f"User {user['name']} received")
    environment.runner.send_message('acknowledge_users', f"Thanks for the {len(msg.data)} users!")

# Fired when the master receives a message of type 'acknowledge_users'
def on_acknowledge(msg, **kwargs):
    print(msg.data)

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    if not isinstance(environment.runner, MasterRunner):
        environment.runner.register_message('test_users', setup_test_users)
    if not isinstance(environment.runner, WorkerRunner):
        environment.runner.register_message('acknowledge_users', on_acknowledge)
The @events.init.add_listener decorator means Locust will run that function when it starts up, and the key part is the register_message call. The first argument is an arbitrary string, whatever you want it to be. The second argument is the function to call when that message is received.
Note the arguments the function to be called has defined. One of them is msg, which you need to use to get the data that is sent.
Then you can send the message and data payload any time you want. The docs use this example:
users = [
    {"name": "User1"},
    {"name": "User2"},
    {"name": "User3"},
]

environment.runner.send_message('test_users', users)
The string is the message you want the receiver to match and the second argument users in this case is the actual data you want to send to the other instance(s).
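Applied to the question above, one way to use this (a sketch, under the assumption that only the serializable headers dict needs to be shared; the message name 'auth_headers' and the build_headers() helper are illustrative, not from the original code) is to log in once on the master, broadcast the headers, and rebuild the Shopping object on each worker:

from locust import events
from locust.runners import MasterRunner

# Worker side: rebuild the (non-serializable) Shopping object from the
# plain headers dict sent by the master.
def setup_shopping(environment, msg, **kwargs):
    shopping = Shopping(base_url=environment.parsed_options.host, headers=msg.data)
    environment.parsed_options.__dict__.update({"shopping": shopping})

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    if not isinstance(environment.runner, MasterRunner):
        environment.runner.register_message('auth_headers', setup_shopping)

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    if isinstance(environment.runner, MasterRunner):
        # Log in and parse the token once, as in the question's on_test_start.
        headers = build_headers(environment)  # hypothetical helper wrapping Login / parsing_xml_response
        environment.runner.send_message('auth_headers', headers)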

How to make Twilio Queue Automation Forward

Can someone please tell me how to take callers from the queue (enqueue) to a dial (forwarding), in one app, with automation?
I have something like this and it won't work:
<Response>
    <Say>hello</Say>
    <Enqueue waitUrl="https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml">support</Enqueue>
    <Dial url="/ivr/agent/screencall">
        +000000000
        <Queue>support</Queue>
    </Dial>
    <Redirect>/ivr/welcome/</Redirect>
</Response>
In Python it looks like this:
twiml_response.say('hello')
twiml_response.enqueue('support', wait_url='https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml')
twiml_response.dial('+000000000', url=reverse('ivr:agents_screencall')).queue('support')
It looks like you are trying to perform actions for both your caller and an agent within the one TwiML response, and that will not work.
When you <Enqueue> a caller, no following TwiML will execute until you dequeue the caller with a <Leave>.
It looks like you want to dial an agent, allow them to screen the call and then connect them to the caller in the queue. To do this, you would start by creating a call to your agent using the REST API. With that call you would provide a URL that will be requested when that agent connects. In the response to that URL you should say the message to them, then <Dial> the <Queue>. Something like this:
import os

from django.http import HttpResponse  # assuming Django, as suggested by HttpResponse/reverse in the question
from twilio.rest import Client
from twilio.twiml.voice_response import VoiceResponse

account_sid = os.environ['TWILIO_ACCOUNT_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
client = Client(account_sid, auth_token)

def call(request):
    twiml_response = VoiceResponse()
    twiml_response.say('hello')
    twiml_response.enqueue('support', wait_url='https://brass-dragonfly-1957.twil.io/assets/poczekalnia.xml')
    call = client.calls.create(
        url='/agent',
        to=agent_phone_number,
        from_=your_twilio_number
    )
    return HttpResponse(twiml_response, content_type='text/xml')
Then, in response to the webhook to the /agent endpoint, you should return your response for screening, which might look like this:
from twilio.twiml.voice_response import Dial, Gather, VoiceResponse

def agent(request):
    twiml_response = VoiceResponse()
    gather = Gather(action='/agent_queue', method='POST', numDigits=1)
    gather.say('You are receiving an incoming call, press 1 to accept')
    twiml_response.append(gather)
    return HttpResponse(twiml_response, content_type='text/xml')
And finally in /agent_queue you determine the result of screening the call and if the agent accepts, then you connect them to the queue.
def agent_queue(request):
    twiml_response = VoiceResponse()
    digits = request.POST.get("Digits", "")
    if digits == "1":
        dial = Dial()
        dial.queue('support')
        twiml_response.append(dial)
    else:
        twiml_response.hangup()
    return HttpResponse(twiml_response, content_type='text/xml')

Cannot keep a stable session open with Bloomberg blpapi Python

I recently encountered an issue that I have not been able to solve, despite calling the Bloomberg helpdesk and thoroughly researching the internet for similar cases.
In short, I am using the official Python blpapi from Bloomberg (https://github.com/msitt/blpapi-python) and now am experiencing some connectivity issue: I cannot leave a session opened.
Here is the code I am running: https://github.com/msitt/blpapi-python/blob/master/examples/SimpleHistoryExample.py
I simply added a "while True loop" and a "time.sleep" in it so that I can keep the session open and refresh my data every 30 seconds (this is my use case).
This used to run perfectly fine for days; however, since last Friday, I am now getting these log messages:
22FEB2021_08:54:18.870 29336:26880 WARN blpapi_subscriptionmanager.cpp:7437 blpapi.session.subscriptionmanager.{1} Could not find a service for serviceCode: 90.
22FEB2021_08:54:23.755 29336:26880 WARN blpapi_platformcontroller.cpp:377 blpapi.session.platformcontroller.{1} Connectivity lost, no connected endpoints.
22FEB2021_08:54:31.867 29336:26880 WARN blpapi_platformcontroller.cpp:344 blpapi.session.platformcontroller.{1} Connectivity restored.
22FEB2021_08:54:32.731 29336:26880 WARN blpapi_subscriptionmanager.cpp:7437 blpapi.session.subscriptionmanager.{1} Could not find a service for serviceCode: 90.
which go on and on, along with these responses as well:
SessionConnectionDown = {
    server = "localhost:8194"
}
ServiceDown = {
    serviceName = "//blp/refdata"
    servicePart = {
        publishing = {
        }
    }
}
SessionConnectionUp = {
    server = "localhost:8194"
    encryptionStatus = "Clear"
    compressionStatus = "Uncompressed"
}
ServiceUp = {
    serviceName = "//blp/refdata"
    servicePart = {
        publishing = {
        }
    }
}
I can still pull the data from the Bloomberg API: I see the historical data request results just fine. However:
Those service/session status messages mess up my code (I could still ignore them).
For some reason the connect/reconnect also messes with my Excel Bloomberg connection in the background and prevents me from using the BBG Excel add-in at all! I now have those "#N/A Connection" outputs in all of my workbooks using Bloomberg formulas.
[screenshot from Excel]
Has anyone ever encountered such cases? If yes, please do not hesitate to share your experience, any help is more than appreciated!
Wishing you all a great day,
Adrien
I cannot comment yet so I will try to "answer" it. I use blpapi every day and pull data all day. I am a BBG Anywhere user and never have any session issues. If you log in from a different device it will kill your session for the Python app. Once you log back in where the Python app is running it will connect again.
Why do you have another while loop and sleep to keep the session alive? You should create a separate session and always call it to run your request. You should not need any "keep alive" code inside the request. Just don't call session.stop(). This is what I ended up doing after much trial and error from not knowing what to do.
I run my model using Excel, trying to move away from any substantial code in Excel and use it as a GUI until I can migrate to a custom GUI. I also have BDP functions in my Excel and they work fine.
import blpapi
# removed optparse because it is deprecated.
from argparse import ArgumentParser

SERVICES = {}

def parseCmdLine():
    parser = ArgumentParser(description='Retrieve reference data.')
    parser.add_argument('-a',
                        '--ip',
                        dest='host',
                        help='server name or IP (default: %(default)s)',
                        metavar='ipAddress',
                        default='localhost')
    parser.add_argument('-p',
                        dest='port',
                        type=int,
                        help='server port (default: %(default)s)',
                        metavar='tcpPort',
                        default=8194)
    args = parser.parse_args()
    return args

def start_session():
    """Standard session for synchronous refdata requests. Upon creation
    the obj is held in SERVICES['session'].
    Returns:
        obj: A session object.
    """
    args = parseCmdLine()
    # Fill SessionOptions
    sessionOptions = blpapi.SessionOptions()
    sessionOptions.setServerHost(args.host)
    sessionOptions.setServerPort(args.port)
    # Create a Session
    session = blpapi.Session(sessionOptions)
    # Start a Session
    session.start()
    SERVICES['session'] = session
    return SERVICES['session']

def get_refDataService():
    """Create a refDataService object for functions to use. Upon creation
    it is held in SERVICES['refDataService'].
    Returns:
        obj: refDataService object.
    """
    # return the session and request because requests need session.send()
    global SERVICES
    if 'session' not in SERVICES:
        start_session()
    session = SERVICES['session']
    # Check for SERVICES['refdata'] not needed because start_session()
    # is called by this function and start_session() is never called on its own.
    session.openService("//blp/refdata")
    refDataService = session.getService("//blp/refdata")
    SERVICES['refDataService'] = refDataService
    session = SERVICES['session']
    refDataService = SERVICES['refDataService']
    print('get_refDataService called. Curious when this is called.')
    return session, refDataService

# sample override request
def ytw_oride_muni(cusip_dict):
    """Requests the Price To Worst for a dict of cusips and Yield To Worst values.
    The dict must be {'cusip' : YTW}. Overrides apply to each request, so this
    function is designed for different overrides for each cusip. Although they could
    all be the same as that is not a restriction.
    Returns: Single level nested dict
        {'cusip Muni': {'ID_BB_SEC_NUM_DES': 'val', 'PX_ASK': 'val1', 'YLD_CNV_ASK': 'val2'}}
    """
    session, refDataService = get_refDataService()
    fields1 = ["ID_BB_SEC_NUM_DES", "PX_ASK", "YLD_CNV_ASK"]
    try:
        values_dict = {}
        # For different overrides you must send separate requests.
        # This loops and creates separate messages.
        for cusip, value in cusip_dict.items():
            request = refDataService.createRequest("ReferenceDataRequest")
            # append security to request
            request.getElement("securities").appendValue(f"{cusip} Muni")
            # append fields to request
            request.getElement("fields").appendValue(fields1[0])
            request.getElement("fields").appendValue(fields1[1])
            request.getElement("fields").appendValue(fields1[2])
            # add overrides
            overrides = request.getElement("overrides")
            override1 = overrides.appendElement()
            override1.setElement("fieldId", "YLD_CNV_ASK")
            override1.setElement("value", f"{value}")
            session.sendRequest(request)
            # Process received events
            while(True):
                # We provide timeout to give the chance to Ctrl+C handling:
                ev = session.nextEvent(500)
                # below msg.messageType == ReferenceDataResponse
                for msg in ev:
                    if msg.messageType() == "ReferenceDataResponse":
                        if msg.hasElement("responseError"):
                            print(msg)
                        if msg.hasElement("securityData"):
                            data = msg.getElement("securityData")
                            num_cusips = data.numValues()
                            for i in range(num_cusips):
                                sec = data.getValue(i).getElement("security").getValue()
                                try:
                                    des = data.getValue(i).getElement("fieldData").getElement("ID_BB_SEC_NUM_DES").getValue()
                                except:
                                    des = None
                                try:
                                    ptw = data.getValue(i).getElement("fieldData").getElement("PX_ASK").getValue()
                                except:
                                    ptw = None
                                try:
                                    ytw = data.getValue(i).getElement("fieldData").getElement("YLD_CNV_ASK").getValue()
                                except:
                                    ytw = None
                                values = {'des': des, 'ptw': ptw, 'ytw': ytw}
                # Response completely received, so we could exit
                if ev.eventType() == blpapi.Event.RESPONSE:
                    values_dict.update({sec: values})
                    break
    finally:
        # Stop the session
        # session.stop()
        return values_dict
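For reference, a minimal usage sketch of the override request above (the CUSIPs and yield values are made up for illustration):

if __name__ == '__main__':
    # Hypothetical inputs: {cusip: yield-to-worst override}
    sample = {'123456AB1': 3.25, '987654CD2': 2.90}
    print(ytw_oride_muni(sample))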

Queue large number of tasks from Google cloud function

I'm trying to use cloud functions to update data by calling an external API once a day.
So far I have:
Cloud Schedule set to invoke Function 1
Function 1 - loop over items and create a task for each item
Task - invoke Function 2 with data provided by function 1
Function 2 - call external API to get data and update our db
The issue is that there are ~2k items to update daily and a cloud function times out before it can do that, which is why I put them in a queue. But even placing the items in the queue takes too long for the cloud function, so it times out before it can add them all.
Is there a simple way to bulk add multiple tasks to a queue at once?
Failing that, a better solution to all of this?
All written in Python.
Code for function 1:
def refresh(request):
    for i in items:
        # Create a client.
        client = tasks_v2.CloudTasksClient()
        # TODO(developer): Uncomment these lines and replace with your values.
        project = 'my-project'
        queue = 'refresh-queue'
        location = 'europe-west2'
        name = i['name'].replace(' ', '')
        url = f"https://europe-west2-my-project.cloudfunctions.net/endpoint?name={name}"
        # Construct the fully qualified queue name.
        parent = client.queue_path(project, location, queue)
        # Construct the request body.
        task = {
            "http_request": {  # Specify the type of request.
                "http_method": tasks_v2.HttpMethod.GET,
                "url": url,  # The full url path that the task will be sent to.
            }
        }
        # Use the client to build and send the task.
        response = client.create_task(request={"parent": parent, "task": task})
Answering your question “Is there a simple way to bulk add multiple tasks to a queue at once?”: as per the public documentation, the best approach is to implement a double-injection pattern.
For this you will have a new queue where you add a single task that contains multiple tasks of the original queue; then, on the receiving end of that queue, a service will take the data of this task and create one task per entry on the second queue.
Additionally, I suggest you apply the 500/50/5 pattern to a cold queue. This will help both the task queue and the Cloud Functions service ramp up at a safe rate.
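A rough sketch of that double-injection pattern, assuming Function 1 enqueues one "batch" task per chunk of items onto a staging queue and an intermediate function fans each chunk out into per-item tasks (the queue names, the /fan-out endpoint, and the JSON payload format are illustrative assumptions, not from the answer):

import json
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

def enqueue_batches(items, batch_size=100):
    # First injection: one task per batch of items on a staging queue.
    parent = client.queue_path('my-project', 'europe-west2', 'batch-queue')
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        task = {
            "http_request": {
                "http_method": tasks_v2.HttpMethod.POST,
                "url": "https://europe-west2-my-project.cloudfunctions.net/fan-out",
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps(batch).encode(),
            }
        }
        client.create_task(request={"parent": parent, "task": task})

def fan_out(request):
    # Second injection: create one per-item task on the original refresh queue.
    parent = client.queue_path('my-project', 'europe-west2', 'refresh-queue')
    for item in request.get_json():
        name = item['name'].replace(' ', '')
        task = {
            "http_request": {
                "http_method": tasks_v2.HttpMethod.GET,
                "url": f"https://europe-west2-my-project.cloudfunctions.net/endpoint?name={name}",
            }
        }
        client.create_task(request={"parent": parent, "task": task})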
Chris32's answer is correct, but one thing I noticed in your code snippet is that you should create the client outside the for loop.
def refresh(request):
    # Create a client.
    client = tasks_v2.CloudTasksClient()
    # TODO(developer): Uncomment these lines and replace with your values.
    project = 'my-project'
    queue = 'refresh-queue'
    location = 'europe-west2'

    for i in items:
        name = i['name'].replace(' ', '')
        url = f"https://europe-west2-my-project.cloudfunctions.net/endpoint?name={name}"
        # Construct the fully qualified queue name.
        parent = client.queue_path(project, location, queue)
        # Construct the request body.
        task = {
            "http_request": {  # Specify the type of request.
                "http_method": tasks_v2.HttpMethod.GET,
                "url": url,  # The full url path that the task will be sent to.
            }
        }
        # Use the client to build and send the task.
        response = client.create_task(request={"parent": parent, "task": task})
In App Engine I would do client = tasks_v2.CloudTasksClient() outside of def refresh, at the file level, but I don't know if that matters for Cloud Functions.
Second thing: modify "Function 2" to take multiple names instead of just one. Then in "Function 1" you can send 10 names to "Function 2" at a time:
BATCH_SIZE = 10  # send 10 names to Function 2

def refresh(request):
    # Create a client.
    client = tasks_v2.CloudTasksClient()
    # ...
    for i in range(0, len(items), BATCH_SIZE):
        items_batch = items[i:i + BATCH_SIZE]
        names = ','.join([i['name'].replace(' ', '') for i in items_batch])
        url = f"https://europe-west2-my-project.cloudfunctions.net/endpoint?names={names}"
        # Construct the fully qualified queue name.
        # ...
If those 2 quick fixes don't do it, then you'll have to split "Function 1" into "Function 1A" and "Function 1B".
Function 1A:
BATCH_SIZE = 100  # send 100 names to Function 1B

def refresh(request):
    client = tasks_v2.CloudTasksClient()
    for i in range(0, len(items), BATCH_SIZE):
        items_batch = items[i:i + BATCH_SIZE]
        names = ','.join([i['name'].replace(' ', '') for i in items_batch])
        url = f"https://europe-west2-my-project.cloudfunctions.net/endpoint-for-function-1b?names={names}"
        # send the task.
        response = client.create_task(request={
            "parent": client.queue_path('my-project', 'europe-west2', 'refresh-queue'),
            "task": {
                "http_request": {"http_method": tasks_v2.HttpMethod.GET, "url": url}
            }})
Function 1B:
BATCH_SIZE = 10  # send 10 names to Function 2

def refresh(request):
    # set `names` equal to the query param `names`, split on commas into a list
    client = tasks_v2.CloudTasksClient()
    for i in range(0, len(names), BATCH_SIZE):
        names_batch = ','.join(names[i:i + BATCH_SIZE])
        url = f"https://europe-west2-my-project.cloudfunctions.net/endpoint-for-function-2?names={names_batch}"
        # send the task.
        response = client.create_task(request={
            "parent": client.queue_path('my-project', 'europe-west2', 'refresh-queue'),
            "task": {
                "http_request": {"http_method": tasks_v2.HttpMethod.GET, "url": url}
            }})
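For completeness, a sketch of how "Function 2" might accept the comma-separated names parameter (the request parsing and the update_item helper are assumptions, since the original Function 2 is not shown):

def update(request):
    # Cloud Functions HTTP handlers receive a Flask request object.
    names = request.args.get('names', '').split(',')
    for name in names:
        # Call the external API and update the db for this name,
        # as the original single-name Function 2 did.
        update_item(name)  # hypothetical helper
    return 'OK'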

Querying more properties in Google Search Console via python script

I am using a Python (2.7) script to download Google Search Console data via the API. I would like to get rid of the property and date arguments when launching the script:
>python script.py 'http://www.example.com' '01-01-2000' '01-02-2000'
For the dates I managed to do it by importing timedelta and commenting out the lines referring to those arguments:
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('property_uri', type=str,
                       help=('Site or app URI to query data for (including '
                             'trailing slash).'))

# Start and end dates are commented out as timeframe is dynamically set
'''argparser.add_argument('start_date', type=str,
                       help=('Start date of the requested date range in '
                             'YYYY-MM-DD format.'))
argparser.add_argument('end_date', type=str,
                       help=('End date of the requested date range in '
                             'YYYY-MM-DD format.'))'''

now = datetime.datetime.now()
StartDate = datetime.datetime.now() - timedelta(days=14)
EndDate = datetime.datetime.now() - timedelta(days=7)
From = StartDate.strftime('%Y-%m-%d')
To = EndDate.strftime('%Y-%m-%d')

request = {
    'startDate': StartDate.strftime('%Y-%m-%d'),
    'endDate': EndDate.strftime('%Y-%m-%d'),
    'dimensions': ['query'],
Now I would also like to get rid of the property argument, so that I can simply launch the script and have the property specified in the script itself. My final goal is to get data from several properties using only one script.
I tried to repeat the same procedure used for the dates but no luck. Needless to say I am a total beginner at coding.
I think I can help, as I had the same problem when working from the sample script given by Google as guidance, which is what I think you got your code from.
The problem is that the script uses the sample_tools.py script in the googleapiclient library, which is meant to abstract away all the authentication bits so you can make a quick query easily. If you want to modify the code, I would recommend writing it from scratch.
These are my functions that I've pieced together from various bits of documentation that you might find useful.
Stage 1: Authentication
# Imports needed by the snippets below (oauth2client, httplib2 and googleapiclient)
from httplib2 import Http
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from googleapiclient.discovery import build

def authenticate_http():
    """Authenticates against the OAuth2 endpoint and returns an
    authorized Http object for the webmasters (readonly) scope.
    """
    # create flow object
    flow = flow_from_clientsecrets('path to client_secrets.json',
                                   scope='https://www.googleapis.com/auth/webmasters.readonly',
                                   redirect_uri='urn:ietf:wg:oauth:2.0:oob')
    storage = Storage('credentials_file')
    credentials = storage.get()
    if credentials:
        # print "have auth code"
        http_auth = credentials.authorize(Http())
    else:
        print "need auth code"
        # get authorization server uri
        auth_uri = flow.step1_get_authorize_url()
        print auth_uri
        # get credentials object
        code_input = raw_input("Code: ")
        credentials = flow.step2_exchange(code_input)
        storage.put(credentials)
        # apply credential headers to all requests
        http_auth = credentials.authorize(Http())
    return http_auth
Stage 2: Build the Service Object
def build_service(api_name, version):
    # use authenticate_http to return the http object
    http_auth = authenticate_http()
    # build gsc service object
    service = build(api_name, version, http=http_auth)
    return service
Stage 3: Execute Request
def execute_request(service, property_uri, request):
    """Executes a searchAnalytics.query request.
    Args:
        service: The webmasters service to use when executing the query.
        property_uri: The site or app URI to request data for.
        request: The request to be executed.
    Returns:
        An array of response rows.
    """
    return service.searchanalytics().query(
        siteUrl=property_uri, body=request).execute()
Stage 4: Main()
def main():
    # define service object for the api service you want to use
    gsc_service = build_service('webmasters', 'v3')
    # define request
    request = {'request goes here'}
    # set your property set string you want to query
    url = 'url or property set string goes here'
    # response from executing request
    response = execute_request(gsc_service, url, request)
    print response
For multiple property sets you can just create a list of property sets, then create a loop and pass each property set into the 'url' argument of the execute_request function, as sketched below.
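A minimal sketch of that loop, assuming the same request body is reused for each property (the dates and property URIs are placeholders, not values from the question):

def main():
    gsc_service = build_service('webmasters', 'v3')
    request = {
        'startDate': '2018-01-01',
        'endDate': '2018-01-31',
        'dimensions': ['query'],
    }
    # list of property / property set strings to loop over
    properties = ['http://www.example.com/', 'http://www.example.org/']
    for url in properties:
        response = execute_request(gsc_service, url, request)
        print response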
Hope this helps!
