locust --no-web --clients=1 --hatch-rate=1 --num-request=2 --host=http://localhost
I want to read the --host value provided on the command line in my HTTPLocust class. I am aware I can use the host attribute for direct assignment, but I do not want that. I want to read the value from the command line within the HTTPLocust class. I am building custom logs and want to pass that value to the logs. I tried HTTPLocust.host but that returns None.
I also want to read --web-port from Python code.
New answer
There is a much more straightforward solution than my initial one below. Each TaskSet has a locust property that links back to its parent Locust instance, so something like this will do exactly what you need:
class UserBehaviour(TaskSet):
    def __init__(self, parent):
        super().__init__(parent)
        print(self.locust.host)
Old answer
Looking at the code for HttpSession, it seems base_url is what you want.
So something like this should give you the current host, either default or specified on the command line:
class HostGetter(locust.TaskSet):
    @locust.task
    def get_host(self):
        print(self.client.base_url)
You can access the host variable via an instance of the User() class. See the example below:
from locust import HttpLocust, TaskSet, task
import random, requests, time, os, inspect, json, sys
class UserBehaviour(TaskSet):
    @task(1)
    def test1(self):
        user = User()
        print(user.host)
        self.client.get("/v3/User/ListOfCoupon/")

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 1000
    max_wait = 3000
See the log:
~/P/m/p/general (master ⚡↑) locust -f app_couponlist.py --host=http://www.google.com
[2017-09-19 14:33:13,020] Mesuts-MacBook.local/INFO/locust.main: Starting web monitor at *:8089
[2017-09-19 14:33:13,021] Mesuts-MacBook.local/INFO/locust.main: Starting Locust 0.8a3
[2017-09-19 14:33:22,281] Mesuts-MacBook.local/INFO/locust.runners: Hatching and swarming 5 clients at the rate 1 clients/s...
[2017-09-19 14:33:22,282] Mesuts-MacBook.local/INFO/stdout: http://www.google.com
[2017-09-19 14:33:22,282] Mesuts-MacBook.local/INFO/stdout:
[2017-09-19 14:33:23,285] Mesuts-MacBook.local/INFO/stdout: http://www.google.com
[2017-09-19 14:33:23,285] Mesuts-MacBook.local/INFO/stdout:
[2017-09-19 14:33:24,226] Mesuts-MacBook.local/INFO/stdout: http://www.google.com
This is my code:
import time, csv, argparse
from locust import Locust

class MySQLLocust(Locust):
    parser = argparse.ArgumentParser()
    parser.add_argument('--host')
    args, unknown = parser.parse_known_args()
    print("Host = " + args.host)
Now when I give:
locust -f mysql_locust.py --host=myhost-vm-101 --no-web --clients=2 --hatch-rate=10 --run-time=5m
I get the print statement result as expected:
Host = myhost-vm-101
You can access it via sys.argv
Or parse options via argparse
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-H', '--host')
args, unknown = parser.parse_known_args()
print(str(args.host))
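Since the question also asks about --web-port, the same parse_known_args trick covers that flag too. Below is a minimal, self-contained sketch; the argv list is hand-built for illustration rather than taken from a live Locust run, and the default of 8089 simply mirrors Locust's usual web port:

```python
import argparse

# Hedged sketch: parse Locust-style flags from an illustrative argv.
parser = argparse.ArgumentParser()
parser.add_argument('--host')
parser.add_argument('--web-port', type=int, default=8089)

# parse_known_args() tolerates flags we did not declare, so Locust's
# own options pass through in `unknown` instead of raising an error.
args, unknown = parser.parse_known_args(
    ['--host=http://localhost', '--web-port=9000', '--clients=2'])

print(args.host)      # http://localhost
print(args.web_port)  # 9000
print(unknown)        # ['--clients=2']
```

Note that argparse converts `--web-port` to the attribute name `args.web_port`.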
Related
There is a service (in the second listing the DBus "coordinates" are given)
running in a Linux environment, and at the same time
in the tests a custom private DBus daemon is run, as in the following
fixture:
import os
import subprocess
import time

def run_custom_daemon():
    cmd = [
        "dbus-daemon",
        "--system",
        "--nofork",
        "--address=unix:path=<some_path>.sock"
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=False)
    os.environ['CUSTOM_DBUS_SYSTEM_BUS_ADDRESS'] = "<some_path>.sock"
    time.sleep(0.5)
    yield proc
    proc.kill()
Now, my component uses the service com.foo.bar.Baz, and in the C++ code I connect to it using sdbus-c++, e.g.:
m_prx = sdbus::createProxy(...)
Now, if I run the tests as is, it appears that because the "official" DBus is somehow "shielded" by the tests, I get a ServiceNotKnown error. However, if I mock the service using the code below, I get a "the name already has an owner" type of error.
The question is, essentially: how do I tell the mock to use the above "privately" set-up DBus?
from pydbus import SystemBus
from gi.repository import GLib

class MockSrv:
    dbus = """
    <node name="/com/foo/bar/Baz">
      <interface name="com.foo.bar.Baz">
        <method name="Buzz">
          <arg type="s" direction="out"/>
        </method>
      </interface>
    </node>
    """

mloop = GLib.MainLoop()
mbus = SystemBus()
srv = MockSrv()
mbus.publish("com.foo.bar.Baz",
             ("/com/foo/bar/Baz", srv))
mloop.run()
If I'm understanding you correctly, you want to use Python to mock a DBus service, which you then connect to using C++ for testing.
The best way that I've found to do this is to create a local session bus (using something like dbus-run-session) to run the applications. This will set up the environment variables correctly to open a new SESSION bus, and then exit at the end.
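As a usage sketch, the whole test suite can be wrapped in a private session bus from the command line; this assumes a pytest-based suite, and the test path is illustrative:

```shell
# dbus-run-session starts a throwaway session bus, exports
# DBUS_SESSION_BUS_ADDRESS for the child process, and tears the
# bus down again when the wrapped command exits.
dbus-run-session -- pytest tests/
```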
I'm trying to print to console before and after processing that takes a while in a Django management command, like this:
import requests
import xmltodict
from django.core.management.base import BaseCommand

def get_all_routes():
    url = 'http://busopen.jeju.go.kr/OpenAPI/service/bis/Bus'
    r = requests.get(url)
    data = xmltodict.parse(r.content)
    return data['response']['body']['items']['item']

class Command(BaseCommand):
    help = 'Updates the database via Bus Info API'

    def handle(self, *args, **options):
        self.stdout.write('Saving routes ... ', ending='')
        for route in get_all_routes():
            route_obj = Route(
                route_type=route['routeTp'], route_id=route['routeId'],
                route_number=route['routeNum'])
            route_obj.save()
        self.stdout.write('done.')
In the above code, Saving routes ... is expected to print before the loop begins, and done. right next to it when the loop completes so that it looks like Saving routes ... done. in the end.
However, the former doesn't print until the loop completes, when both strings finally print at the same time, which is not what I expected.
I found this question, where the answer suggests flushing the output i.e. self.stdout.flush(), so I added that to my code:
def handle(self, *args, **options):
    self.stdout.write('Saving routes ... ', ending='')
    self.stdout.flush()
    for route in get_all_routes():
        route_obj = Route(
            route_type=route['routeTp'], route_id=route['routeId'],
            route_number=route['routeNum'])
        route_obj.save()
    self.stdout.write('done.')
Still, the result remains unchanged.
What could I have done wrong?
The thing to keep in mind is that you're using self.stdout (as suggested in the Django docs), which is BaseCommand's override of Python's standard sys.stdout. There are two main differences between the two that are relevant to your problem:
The default "ending" in BaseCommand's version of self.stdout.write() is a newline, forcing you to use the ending='' parameter, unlike sys.stdout.write(), which has an empty ending by default. This in itself is not causing your problem.
The BaseCommand version of flush() does not really do anything (who would have thought?). This is a known bug: https://code.djangoproject.com/ticket/29533
So you really have two options:
Don't use BaseCommand's self.stdout; use sys.stdout instead, in which case the flush does work.
Force stdout to be totally unbuffered while running the management command by passing the -u parameter to python. So instead of running python manage.py <subcommand>, run python -u manage.py <subcommand>.
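The first option can be sketched framework-free; writing to any flushable stream shows why the explicit flush matters. The stream and messages here are stand-ins for Django's, so this is an illustration rather than the actual management command:

```python
import io
import sys

def handle(out=sys.stdout):
    # Write the partial line, then flush so it appears on screen
    # before the slow loop starts.
    out.write('Saving routes ... ')
    out.flush()
    # ... slow work would go here ...
    out.write('done.\n')

# Demonstrate against an in-memory stream instead of a real terminal.
buf = io.StringIO()
handle(buf)
print(repr(buf.getvalue()))  # 'Saving routes ... done.\n'
```

On a real terminal, the flush is what makes 'Saving routes ... ' visible immediately; sys.stdout.flush() actually pushes the buffer, unlike BaseCommand's no-op flush().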
Hope this helps.
Have you tried setting the PYTHONUNBUFFERED environment variable?
Please tell me how I can execute a fab script on a list of hosts with the same command but with different values of a parameter.
Something like this:
from fabric.api import *

def command(parameter):
    run("command%s" % parameter)
and execute it, though I don't know how. For example:
fab -H host1,host2,host3 command:param1 command:param2 command:param3
And Fabric performs the following:
command:param1 executed on host1
command:param2 executed on host2
command:param3 executed on host3
The way I do this is to parameterize the tasks. In my case it's about deployment to Dev, Test and Production.
fabfile.py:
from ConfigParser import ConfigParser

from fabric.tasks import Task
from fabric.api import execute, task

@task()
def build(**options):
    """Your build function"""

class Deploy(Task):
    name = "dev"

    def __init__(self, *args, **kwargs):
        super(Deploy, self).__init__(*args, **kwargs)
        self.options = kwargs.get("options", {})

    def run(self, **kwargs):
        options = self.options.copy()
        options.update(**kwargs)
        return execute(build, **options)

config = ConfigParser()
config.read("deploy.ini")

sections = [
    section
    for section in config.sections()
    if section != "globals" and ":" not in section
]

for section in sections:
    options = {"name": section}
    options.update(dict(config.items("globals")))
    options.update(dict(config.items(section)))
    t = Deploy(name=section, options=options)
    setattr(t, "__doc__", "Deploy {0:s} instance".format(section))
    globals()[section] = t
deploy.ini:
[globals]
repo = https://github.com/organization/repo
[dev]
dev = yes
host = 192.168.0.1
domain = app.local
[prod]
version = 1.0
host = 192.168.0.2
domain = app.mydomain.tld
Hopefully it is obvious enough that you can configure all kinds of deployments by simply editing your deploy.ini configuration file, automatically creating new tasks that match with the right parameters.
This pattern can be adapted to YAML or JSON if that's your "cup of tea".
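The ini-merging half of the pattern runs fine on its own. Here is a minimal Python 3 sketch of it; configparser replaces the Python 2 ConfigParser module used above, and the ini content is inlined for the example:

```python
import configparser

# Inline stand-in for the deploy.ini shown above.
INI = """
[globals]
repo = https://github.com/organization/repo

[dev]
dev = yes
host = 192.168.0.1
"""

config = configparser.ConfigParser()
config.read_string(INI)

# Skip the shared [globals] section and any namespaced sections.
sections = [s for s in config.sections()
            if s != "globals" and ":" not in s]

merged = {}
for section in sections:
    options = {"name": section}
    options.update(dict(config.items("globals")))  # shared defaults first
    options.update(dict(config.items(section)))    # section values override
    merged[section] = options

print(merged["dev"])
```

Each merged dict is exactly what the answer feeds into a Deploy task's options.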
I'm trying to use the fabric module through a simple Python module:
remoteExc.py
from fabric.api import *

def clone_repo(IPADDRESS, USER, fPath, git_url):
    env.hosts_string = IPADDRESS
    env.user = USER
    env.key_filename = fPath
    env.disable_known_hosts = 'True'
    run('git clone %s' % (git_url))
mainFile.py
from remoteExc import clone_repo

clone_repo(ipAddress, user, fPath, git_url)
When I execute it, it says:
python mainfile.py
No hosts found. Please specify (single) host string for connection:
Please enlighten me as to where I made a mistake.
Typo. It should be env.host_string = IPADDRESS - you've got env.hosts_string instead.
Also, generally you run fabric via fab - unless you're trying to do something fairly non-standard, be aware that running it via python probably isn't what you want to do. See the Fabric docs for a pretty good intro.
http://docs.fabfile.org/en/1.7/tutorial.html
I am running a command on the remote machine:
remote_output = run('mysqldump --no-data --user=username --password={0} database'.format(password))
I would like to capture the output, but not have it all printed to the screen. What's the easiest way to do this?
It sounds like the Managing output section is what you're looking for.
To hide the output from the console, try something like this:
from __future__ import with_statement
from fabric.api import hide, run, get

with hide('output'):
    run('mysqldump --no-data test | tee test.create_table')
    get('~/test.create_table', '~/test.create_table')
Below are the sample results:
No hosts found. Please specify (single) host string for connection: 192.168.6.142
[192.168.6.142] run: mysqldump --no-data test | tee test.create_table
[192.168.6.142] download: /home/quanta/test.create_table <- /home/quanta/test.create_table
Try this if you want to hide everything from the log and avoid Fabric throwing exceptions when a command fails:
from __future__ import with_statement
from fabric.api import env, run, hide, settings

env.host_string = 'username@servernameorip'
env.key_filename = '/path/to/key.pem'

def exec_remote_cmd(cmd):
    with hide('output', 'running', 'warnings'), settings(warn_only=True):
        return run(cmd)
After that, you can check commands result as shown in this example:
import sys

cmd_list = ['ls', 'lss']
for cmd in cmd_list:
    result = exec_remote_cmd(cmd)
    if result.succeeded:
        sys.stdout.write('\n* Command succeeded: ' + cmd + '\n')
        sys.stdout.write(result + "\n")
    else:
        sys.stdout.write('\n* Command failed: ' + cmd + '\n')
        sys.stdout.write(result + "\n")
This will be the console output of the program (observe that there are no log messages from Fabric):
* Command succeeded: ls
Desktop espaiorgcats.sql Pictures Public Videos
Documents examples.desktop projectes scripts
Downloads Music prueba Templates
* Command failed: lss
/bin/bash: lss: command not found
For fabric==2.4.0 you can hide output using the following logic
from fabric import Connection

conn = Connection(host="your-host", user="your-user")
result = conn.run('your_command', hide=True)
result.stdout.strip()  # here you can get the output
As other answers allude, fabric.api doesn't exist anymore (as of writing, fabric==2.5.0), 8 years after the question. However, the next most recent answer here implies that providing hide=True to every .run() call is the only/accepted way to do it.
Not being satisfied, I went digging for a reasonable equivalent to a context where I can specify it only once. It feels like there should still be a way using an invoke.context.Context, but I didn't want to spend any longer on this, and the easiest way I could find was using invoke.config.Config, which we can access via fabric.config.Config without needing any additional imports.
>>> import fabric
>>> c = fabric.Connection(
... "foo.example.com",
... config=fabric.config.Config(overrides={"run": {"hide": True}}),
... )
>>> result = c.run("hostname")
>>> result.stdout.strip()
'foo.example.com'
As of Fabric 2.6.0, the hide argument to run is not available.
Expanding on suggestions by @cfillol and @samuel-harmer, using a fabric.Config may be a simpler approach:
>>> import fabric
>>> conf = fabric.Config()
>>> conf.run.hide = True
>>> conf.run.warn = True
>>> c = fabric.Connection(
... "foo.example.com",
... config=conf
... )
>>> result = c.run("hostname")
This way no command output is printed and no exception is thrown on command failure.
As Samuel Harmer also pointed out in his answer, it is possible to manage output of the run command at the connection level.
As of version 2.7.1:
from fabric import Config, Connection

connection = Connection(
    host,
    config=Config(overrides={
        "run": {"hide": "stdout"}
    }),
    ...
)