How to retrieve Test Results in Azure DevOps with Python REST API?

How do you retrieve Test Results from VSTS (Azure DevOps) using the Python REST API?
The documentation is (as of today) very light, and even the dedicated samples repo (https://github.com/Microsoft/azure-devops-python-samples) barely touches on Test Results.
For some reason, Test Results are not considered WorkItems, so a regular WIQL query will not work.
Additionally, it would be great to be able to query the results for a given Area Path.
Thanks

First you need to get the proper connection client, with the client string that matches the test results.
from vsts.vss_connection import VssConnection
from msrest.authentication import BasicAuthentication

# personal access token and collection URL
token = "hcykwckuhe6vbnigsjs7r3ai2jefsdlkfjslkfj5mxizbtfu6k53j4ia"
team_instance = "https://tfstest.toto.com:8443/tfs/Development/"
credentials = BasicAuthentication("", token)
connection = VssConnection(base_url=team_instance, creds=credentials)

# client string that matches the test results
TEST_CLIENT = "vsts.test.v4_1.test_client.TestClient"
test_client = connection.get_client(TEST_CLIENT)
Then, you can have a look at all the functions available in vsts/test/<api_version>/test_client.py.
The following functions look interesting:
def get_test_results(self, project, run_id, details_to_include=None, skip=None, top=None, outcomes=None) (Get Test Results for a run based on filters)
def get_test_runs(self, project, build_uri=None, owner=None, tmi_run_id=None, plan_id=None, include_run_details=None, automated=None, skip=None, top=None)
def query_test_runs(self, project, min_last_updated_date, max_last_updated_date, state=None, plan_ids=None, is_automated=None, publish_context=None, build_ids=None, build_def_ids=None, branch_name=None, release_ids=None, release_def_ids=None, release_env_ids=None, release_env_def_ids=None, run_title=None, top=None, continuation_token=None) (note that this function limits the range between min_last_updated_date and max_last_updated_date to 7 days)
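For example, a minimal sketch of calling query_test_runs within the allowed window (the project name is illustrative, and I'm assuming the return value is a list of TestRun objects, like get_test_runs returns):
from datetime import datetime, timedelta

# the API rejects ranges wider than 7 days, so query the last week
max_date = datetime.utcnow()
min_date = max_date - timedelta(days=7)
for run in test_client.query_test_runs("MyProject", min_date, max_date, is_automated=True):
    print(run.id, run.name, run.state)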
To retrieve all the results from the Test Plans in a given Area Path, I have used the following code (wit_client is a work item tracking client obtained from the same connection, and my_project is the project name):
from vsts.work_item_tracking.v4_1.models import Wiql

WIT_CLIENT = "vsts.work_item_tracking.v4_1.work_item_tracking_client.WorkItemTrackingClient"
wit_client = connection.get_client(WIT_CLIENT)

tp_query = Wiql(query="""
SELECT
    [System.Id]
FROM workitems
WHERE
    [System.WorkItemType] = 'Test Plan'
    AND [Area Path] UNDER 'Development\MySoftware'
ORDER BY [System.ChangedDate] DESC""")

for plan in wit_client.query_by_wiql(tp_query).work_items:
    print(f"Results for {plan.id}")
    for run in test_client.get_test_runs(my_project, plan_id=plan.id):
        for res in test_client.get_test_results(my_project, run.id):
            tc = res.test_case
            print(f"#{run.id}. {tc.name} ({tc.id}) => {res.outcome} by {res.run_by.display_name} in {res.duration_in_ms}ms")
Note that a test result includes the following attributes:
duration_in_ms
build
outcome (string)
associated_bugs
run_by (Identity)
test_case (TestCase)
test_case_title (string)
area (AreaPath)
test_run (TestRun), corresponding to the test run
test_suite
test_plan
completed_date (Python datetime object)
started_date (Python datetime object)
configuration
Hope this saves others some of the hours I spent exploring this API.
Cheers

Related

Pyetrade / Etrade API for option-chains function only returns options for apple?

I'm trying to get some option chains using the pyetrade package. I'm working in Sandbox mode with a newly made E*Trade account.
When I execute the following code, it runs fine, but the returned information is incorrect: I keep getting Apple options from between 2012 and 2015 instead of the current Exxon-Mobil options I'm requesting. The same happens if I ask for Google, Facebook, or Netflix: I just keep getting outdated Apple options.
I'm not sure where I messed up, or if this is just part of sandbox mode, so that's why I'm asking for help. Thank you!
(Note: Some of the code is sourced from: https://github.com/1rocketdude/pyetrade_option_chains/blob/master/etrade_option_chains.py)
The following is the function to get the option chain from the API:
def getOption(thisSymbol):
    # renew the session, or start one if renewal fails
    try:
        authManager.renew_access_token()
    except Exception:
        authenticate()  # this works fine
    # make a market object to pull what you need from
    market = pyetrade.ETradeMarket(
        consumer_key,
        consumer_secret,
        tokens['oauth_token'],
        tokens['oauth_token_secret'],
        dev=True
    )
    try:
        # get the expiration dates for the symbol
        q = market.get_option_expire_date(thisSymbol, resp_format='xml')
        # just formats the dates to be more comprehensible:
        expiration_dates = option_expire_dates_from_xml(q)
    except Exception:
        raise
    rtn = []
    for this_expiry_date in expiration_dates:
        q = market.get_option_chains(thisSymbol, this_expiry_date)
        chains = q['OptionChainResponse']['OptionPair']
        rtn.append(chains)
    return rtn

ret = getOption("XOM")
print(ret[0])
The API provider is explicit on this:
Note:
E*TRADE's sandbox doesn't actually produce correct option chains so this will return an error.
The sandbox is still useful for debugging, e.g. the OAuth flow; hardly anyone could get the sandboxed code to work otherwise.
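If you do have production credentials, the switch is just the constructor flag; a sketch (prod_consumer_key and prod_consumer_secret are placeholders for your own production keys):
market = pyetrade.ETradeMarket(
    prod_consumer_key,        # your production consumer key
    prod_consumer_secret,     # your production consumer secret
    tokens['oauth_token'],
    tokens['oauth_token_secret'],
    dev=False                 # query the production API, not the sandbox
)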

How to pass dataset id to bigquery client for python

I just started playing around with BigQuery, and I am trying to pass the dataset id to the Python client. It should be a pretty basic operation, but I can't find it in other threads.
In practice I would like to take the following example
# import packages
import os
from google.cloud import bigquery

# set current work directory to the one with this script.
os.chdir(os.path.dirname(os.path.abspath(__file__)))

# initialize client object using the bigquery key I generated from Google Cloud
google_credentials_path = 'bigquery-stackoverflow-DC-fdb49371cf87.json'
client = bigquery.Client.from_service_account_json(google_credentials_path)

# create simple query
query_job = client.query(
    """
    SELECT
      CONCAT(
        'https://stackoverflow.com/questions/',
        CAST(id as STRING)) as url,
      view_count
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    WHERE tags like '%google-bigquery%'
    ORDER BY view_count DESC
    LIMIT 10"""
)

# store results in dataframe
dataframe_query = query_job.result().to_dataframe()
and make it look something like
# import packages
import os
from google.cloud import bigquery

# set current work directory to the one with this script.
os.chdir(os.path.dirname(os.path.abspath(__file__)))

# initialize client object using the bigquery key I generated from Google Cloud
google_credentials_path = 'bigquery-stackoverflow-DC-fdb49371cf87.json'
client = bigquery.Client.from_service_account_json(google_credentials_path)\
    .A_function_to_specify_id("bigquery-public-data.stackoverflow")

# create simple query
query_job = client.query(
    """
    SELECT
      CONCAT(
        'https://stackoverflow.com/questions/',
        CAST(id as STRING)) as url,
      view_count
    FROM `posts_questions` -- No dataset ID here anymore
    WHERE tags like '%google-bigquery%'
    ORDER BY view_count DESC
    LIMIT 10"""
)

# store results in dataframe
dataframe_query = query_job.result().to_dataframe()
The documentation eludes me, so any help would be appreciated.
The closest thing to what you're asking for is the default_dataset (reference) property of the query job config. The query job config is an optional object that can be passed into the query() method of the instantiated BigQuery client.
You don't set default dataset as part of instantiating a client as not all resources are dataset scoped. You're implicitly working with a query job in your example, which is a project scoped resource.
So, to adapt your sample a bit, it might look something like this:
# skip the irrelevant bits like imports and client construction
job_config = bigquery.QueryJobConfig(default_dataset="bigquery-public-data.stackoverflow")
sql = "SELECT COUNT(1) FROM posts_questions WHERE tags like 'google-bigquery'"
dataframe = client.query(sql, job_config=job_config).to_dataframe()
If you're issuing multiple queries against this same dataset you could certainly reuse the same job config object with multiple query invocations.
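For instance (a sketch reusing the job_config from above; posts_answers is another table in the same public dataset):
# both queries resolve bare table names against the default dataset
top_tags = client.query(
    "SELECT tags FROM posts_questions ORDER BY view_count DESC LIMIT 10",
    job_config=job_config,
).to_dataframe()
n_answers = client.query(
    "SELECT COUNT(1) AS n FROM posts_answers",
    job_config=job_config,
).to_dataframe()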

How to mock Moto's AccountId

I need to change the default AccountId 123456789012 to a different value.
I tried this fixture:
import boto3
import pytest

@pytest.fixture(autouse=True)
def sts(monkeypatch):
    import moto.iam.models as models
    monkeypatch.setattr(models, 'ACCOUNT_ID', '111111111111')
    from moto import mock_sts
    with mock_sts():
        sts = boto3.client('sts', region_name='us-east-1')
        assert sts.get_caller_identity().get('Account') == '111111111111'
        yield sts
But that assert fails; the AccountId is still the default...
Moto's code included hardcoded account ids in a bunch of places. The code currently on git lets that be overridden by setting the environment variable MOTO_ACCOUNT_ID, so there is no need to monkeypatch.
Based on @FábioDias's excellent hint about the environment variable MOTO_ACCOUNT_ID, we can combine it with the standard moto getting-started fixtures as follows:
import importlib
import sys

import boto3
import pytest
from moto import mock_s3

@pytest.fixture(autouse=True)
def aws_credentials(monkeypatch):
    """Mocked AWS credentials to prevent production accidents"""
    monkeypatch.setenv('AWS_ACCESS_KEY_ID', 'testing')
    monkeypatch.setenv('AWS_SECRET_ACCESS_KEY', 'testing')
    monkeypatch.setenv('AWS_SECURITY_TOKEN', 'testing')
    monkeypatch.setenv('AWS_SESSION_TOKEN', 'testing')
    monkeypatch.setenv('AWS_DEFAULT_REGION', 'us-east-1')
    monkeypatch.setenv('MOTO_ACCOUNT_ID', '111111111111')
    # see note 2022-06-03
    # since moto deals with ACCOUNT_ID in a "set-once" manner, we need
    # to reload moto and all of its sub-modules
    importlib.reload(sys.modules['moto'])
    to_reload = [m for m in sys.modules if m.startswith('moto.')]
    for m in to_reload:
        importlib.reload(sys.modules[m])

@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')
This seems to work in all cases I've tested so far.
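As a quick sanity check, the original question's assertion can be turned into a test (a sketch; mock_sts is imported after the fixture has reloaded moto, so it picks up the new account id):
def test_account_id_is_overridden(aws_credentials):
    from moto import mock_sts  # import after the reload done by the fixture
    with mock_sts():
        sts = boto3.client('sts', region_name='us-east-1')
        assert sts.get_caller_identity()['Account'] == '111111111111'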
Note 2022-06-03
As of now (moto.__version__ == '3.1.12dev'), moto still sets ACCOUNT_ID once and for all, and then copies it into various sub-modules. The only way I found to force-change the account ID everywhere is by reloading moto first (so that it assigns the new ACCOUNT_ID in moto.core.models) and then all sub-modules already loaded.

How should I test a method of a mocked object

I have a question about how to mock a nested method and test what it was called with. I'm having a hard time getting my head around: https://docs.python.org/3/library/unittest.mock-examples.html#mocking-chained-calls.
I'd like to test that the "put" method from the fabric library is called by the deploy_file method in this class, and maybe what values are given to it. This is the module that gathers some information from AWS and provides a method to take action on the data.
import json
import os
from aws.secrets_manager import get_secret
from fabric import Connection

class Deploy:
    def __init__(self):
        self.secrets = None
        self.set_secrets()

    def set_secrets(self):
        secrets = get_secret()
        self.secrets = json.loads(secrets)

    def deploy_file(self, source_file):
        with Connection(host=os.environ.get('SSH_USERNAME'), user=os.environ.get("SSH_USERNAME")) as conn:
            destination_path = self.secrets["app_path"] + '/' + os.path.basename(source_file)
            conn.put(source_file, destination_path)
"get_secret" is a method in another module that uses the boto3 library to get the info from AWS.
These are the tests I'm working on:
from unittest.mock import patch
from fabric import Connection
from jobs.deploy import Deploy

def test_set_secrets_dict_from_expected_json_string():
    with patch('jobs.deploy.get_secret') as m_get_secret:
        m_get_secret.return_value = '{"app_path": "/var/www/html"}'
        deployment = Deploy()
        assert deployment.secrets['app_path'] == "/var/www/html"

def test_copy_app_file_calls_fabric_put():
    with patch('jobs.deploy.get_secret') as m_get_secret:
        m_get_secret.return_value = '{"app_path": "/var/www/html"}'
        deployment = Deploy()
        with patch('jobs.deploy.Connection', spec=Connection) as m_conn:
            local_file_path = "/tmp/foo"
            deployment.deploy_file(local_file_path)
            m_conn.put.assert_called_once()
where the second test results in "AssertionError: Expected 'put' to have been called once. Called 0 times."
the first test mocks the "get_secret" function just fine to test that the constructor for "Deploy" sets "Deploy.secrets" from the fake AWS data.
In the second test, get_secret is mocked just as before, and I mock "Connection" from the fabric library. If I don't mock Connection, I get an error related to the "host" parameter when the Connection object is created.
I think that when "conn.put" is called, it's creating a whole new Mock object, and I'm not testing that object when the unittest runs. I'm just not sure how to define the test to actually test the call to put.
I'm also a novice at understanding what to test (and how) and what not to test as well as how to use mock and such. I'm fully bought in on the idea though. It's been very helpful to find bugs and regressions as I work on projects.
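Following the chained-calls documentation linked above, I suspect the assertion needs to target the object yielded by the with block, i.e. m_conn.return_value.__enter__.return_value rather than m_conn itself. A sketch of what that might look like (I dropped spec=Connection for brevity, since a plain MagicMock supports the context-manager protocol):
def test_copy_app_file_calls_fabric_put():
    with patch('jobs.deploy.get_secret') as m_get_secret:
        m_get_secret.return_value = '{"app_path": "/var/www/html"}'
        deployment = Deploy()
        with patch('jobs.deploy.Connection') as m_conn:
            # Connection(...) returns m_conn.return_value; entering the
            # with-block yields m_conn.return_value.__enter__.return_value
            conn_instance = m_conn.return_value.__enter__.return_value
            deployment.deploy_file("/tmp/foo")
            conn_instance.put.assert_called_once_with("/tmp/foo", "/var/www/html/foo")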

TestLink API Python - How to look up test cases by assignee

I'd like to get the list of all test cases assigned to a specific user in TestLink. I'm using the TestLink Python API, version 0.6.4. I've been reading the JetMore docs for TestLink.
So far I have the following code:
from testlink import TestlinkAPIGeneric, TestLinkHelper
import testlink

TESTLINK_API_PYTHON_SERVER_URL = '<my testlink ip>'
TESTLINK_API_PYTHON_DEVKEY = '<my testlink devkey>'
my_test_link = TestLinkHelper(TESTLINK_API_PYTHON_SERVER_URL, TESTLINK_API_PYTHON_DEVKEY).connect(TestlinkAPIGeneric)

test_project_id = my_test_link.getTestProjectByName('<project name>')['id']
plans = my_test_link.getProjectTestPlans(test_project_id)
# I've also tried ['globalRoleID']
userid = int(my_test_link.getUserByLogin('<user name>')[0]['dbID'])
for plan in plans:
    print(my_test_link.getTestCasesForTestPlan(testplanid=plan['id'], assignedto=userid, details='full'))
This produces an empty list for every test plan in my project:
[]
[]
[]
...
Any help is appreciated.
assignedto is an optional parameter of getTestCasesForTestPlan(); the function always returns all the test cases associated with the test plan.
Pass the username (login name) instead of the numeric user id; it will still return all the test cases.
To get the test cases for a specific user, you have to modify the API code (xmlrpc.class.php) in the TestLink codebase.
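For reference, the call with the login name instead of the numeric id would look like this (a sketch; it still returns every test case in the plan, per the limitation above):
for plan in plans:
    cases = my_test_link.getTestCasesForTestPlan(
        testplanid=plan['id'],
        assignedto='<user name>',  # login name, not the numeric dbID
        details='full',
    )
    print(cases)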
