I have a Kubernetes cluster set up and managed by AKS, and I have access to it with the Python client.
The thing is, when I try to send a patch scale request, I get an error.
I've found information about scaling namespaced deployments from the Python client in the GitHub docs, but it was not clear what body is needed to make the request work:
# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.AppsV1Api(api_client)
    name = 'name_example' # str | name of the Scale
    namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
    body = None # object |
    pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
    dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)
    field_manager = 'field_manager_example' # str | fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). (optional)
    force = True # bool | Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. (optional)

    try:
        api_response = api_instance.patch_namespaced_deployment_scale(name, namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager, force=force)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling AppsV1Api->patch_namespaced_deployment_scale: %s\n" % e)
So when running the code I get Reason: Unprocessable Entity.
Does anyone have any idea what format the body should be in? For example, if I want to scale the deployment to 2 replicas, how can it be done?
The body argument to patch_namespaced_deployment_scale can be a JSONPatch document, as @RakeshGupta shows in the comment, but it can also be a partial resource manifest. For example, this works:
>>> api_response = api_instance.patch_namespaced_deployment_scale(
... name, namespace,
... [{'op': 'replace', 'path': '/spec/replicas', 'value': 2}])
(Note that the value needs to be an integer, not a string as in the
comment.)
But this also works:
>>> api_response = api_instance.patch_namespaced_deployment_scale(
... name, namespace,
... {'spec': {'replicas': 2}})
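For completeness, here is a minimal end-to-end sketch using the partial-manifest body to scale a deployment to 2 replicas (the deployment name and namespace are placeholders):

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
apps_v1 = client.AppsV1Api()

try:
    # Only the fields being changed need to appear in the body.
    scale = apps_v1.patch_namespaced_deployment_scale(
        name="my-deployment",        # placeholder
        namespace="default",         # placeholder
        body={"spec": {"replicas": 2}},
    )
    print(scale.spec.replicas)
except ApiException as e:
    print("Exception when calling patch_namespaced_deployment_scale: %s" % e)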
I am using a python library (ccxt) in which one base exchange class is inherited by exchange-specific classes, to provide a unified interface to several exchanges (coinbase, binance etc.).
The function definition for a sub-class might look something like this (not necessarily exactly): def fetch_ledger(self, symbols = None, since = None, params = {}):
The thing is, for e.g. the coinbase class, this method calls another method, prepareAccountRequestWithCurrencyCode(), which raises the exception
raise ArgumentsRequired(self.id + ' prepareAccountRequestWithCurrencyCode() method requires an account_id(or accountId) parameter OR a currency code argument')
if "accountId" or "code" is not provided in the params dict. These arguments are not in the function signature, as they are to be provided in the params dict (e.g. params = {"accountId": "0x123"}).
I want to know that these arguments are required before I use the method, as I want to implement some automation and GUI-elements which can work across several exchanges (sub-classes). Some of these sub-classes have their own fetch_ledger methods which might not require e.g. the "accountId" argument to be provided in the params dict.
What is a good way to automatically obtain required arguments that are not in the function signature, for all exchanges?
I am providing the relevant ccxt code below since it's open-source:
def fetch_ledger(self, code=None, since=None, limit=None, params={}):
    self.load_markets()
    currency = None
    if code is not None:
        currency = self.currency(code)
    request = self.prepare_account_request_with_currency_code(code, limit, params)  # REQUIRES "accountId" in params
    query = self.omit(params, ['account_id', 'accountId'])
    response = self.v2PrivateGetAccountsAccountIdTransactions(self.extend(request, query))
    return self.parse_ledger(response['data'], currency, since, limit)

def prepare_account_request_with_currency_code(self, code=None, limit=None, params={}):
    accountId = self.safe_string_2(params, 'account_id', 'accountId')
    if accountId is None:
        if code is None:
            raise ArgumentsRequired(self.id + ' prepareAccountRequestWithCurrencyCode() method requires an account_id(or accountId) parameter OR a currency code argument')
        accountId = self.find_account_id(code)
        if accountId is None:
            raise ExchangeError(self.id + ' prepareAccountRequestWithCurrencyCode() could not find account id for ' + code)
    request = {
        'account_id': accountId,
    }
    if limit is not None:
        request['limit'] = limit
    return request
I've already thought of a few ways of doing it, such as running the function, catching the exception and dissecting the message string to prompt the user for any missing arguments at run time. I've also thought about writing a source-code parser, and even making changes to the library code, but I'm currently not sure what is best. I'd prefer not to have to look at the documentation of each unified method for all 100 exchanges and do it manually.
I'm wondering if anyone knows of an elegant or best-practice way of obtaining such optionally provided, yet required, arguments for such methods (or just for the library I am currently using).
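For what it's worth, a rough sketch of the catch-and-inspect idea mentioned above (the exchange and method name are placeholders; ArgumentsRequired lives in ccxt.base.errors):

import ccxt
from ccxt.base.errors import ArgumentsRequired

def probe_required_params(exchange, method_name, **kwargs):
    """Runtime probe: call a unified method with empty params and report
    whether it demands extra keys via ArgumentsRequired. Not exhaustive,
    since only the first missing requirement is reported per call."""
    try:
        getattr(exchange, method_name)(**kwargs)
        return None  # no extra params were demanded on this call path
    except ArgumentsRequired as e:
        return str(e)  # the message names the missing parameter(s)

exchange = ccxt.coinbase()  # placeholder; private endpoints also need API credentials
missing = probe_required_params(exchange, 'fetch_ledger')
if missing:
    print('fetch_ledger needs:', missing)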
I am having trouble loading a function from a Python file that I created. The entire code in the Python file runs without any problems in a Jupyter Notebook. Now that I have put the code 1:1 into a Python file, I get the error "cannot import name 'get' from 'connector'", where connector is my Python file and get is the function nested inside.
I suspect that moving from the Jupyter Notebook format to a regular Python file changes the way the file is read somehow?
I wanted to move the code into a Python file, as this function is useful for multiple projects I am doing in Python when scraping webpages. I can call the ratelimit function, but I can't seem to figure out what is wrong with my get command.
# Imports
import scraping_class
import pandas as pd
import requests,os,time

logfile="trustpilot.txt"
Connector = scraping_class.Connector(logfile)

def ratelimit(x):
    "A function that handles the rate of your calls."
    time.sleep(x) # sleep x seconds.

class Connector():
    def __init__(self,logfile,overwrite_log=False,connector_type='requests',session=False,path2selenium='',n_tries = 5,timeout=30):
        """This class implements a method for reliable connection to the internet and monitoring.
        It handles simple errors due to connection problems, and logs a range of information for basic quality assessments.

        Keyword arguments:
        logfile -- path to the logfile
        overwrite_log -- bool, defining if logfile should be cleared (rarely the case).
        connector_type -- use the 'requests' module or 'selenium'. Will behave differently since the selenium webdriver does not have a similar response object when using the get method, and monitoring the behavior cannot be automated in the same way.
        session -- requests.session object. For defining custom headers and proxies.
        path2selenium -- str, sets the path to the geckodriver needed when using selenium.
        n_tries -- int, defines the number of retries the *get* method will try to avoid random connection errors.
        timeout -- int, seconds the get request will wait for the server to respond, again to avoid connection errors.
        """
        ## Initialization function defining parameters.
        self.n_tries = n_tries # For avoiding trivial errors, e.g. connection errors, this defines how many times it will retry.
        self.timeout = timeout # Defining the maximum time to wait for a server to respond.
        ## Only relevant if you use selenium.
        if connector_type=='selenium':
            assert path2selenium!='', "You need to specify the path to your geckodriver if you want to use Selenium"
            from selenium import webdriver
            ## HINT: download the latest geckodriver here: https://github.com/mozilla/geckodriver/releases
            assert os.path.isfile(path2selenium),'You need to insert a valid path2selenium, the path to your geckodriver. You can download the latest geckodriver here: https://github.com/mozilla/geckodriver/releases'
            self.browser = webdriver.Firefox(executable_path=path2selenium) # start the browser with a path to the geckodriver.
        self.connector_type = connector_type # set the connector_type
        if session: # set the custom session
            self.session = session
        else:
            self.session = requests.session()
        self.logfilename = logfile # set the logfile path
        ## define header for the logfile
        header = ['id','project','connector_type','t', 'delta_t', 'url', 'redirect_url','response_size', 'response_code','success','error']
        if os.path.isfile(logfile):
            if overwrite_log==True:
                self.log = open(logfile,'w')
                self.log.write(';'.join(header))
            else:
                self.log = open(logfile,'a')
        else:
            self.log = open(logfile,'w')
            self.log.write(';'.join(header))
        ## load log
        with open(logfile,'r') as f: # open file
            l = f.read().split('\n') # read and split file by newlines.
        ## set id
        if len(l)<=1:
            self.id = 0
        else:
            self.id = int(l[-1][0])+1

    def get(self,url,project_name):
        """Method for connecting reliably to the internet, with multiple tries and simple error handling, as well as a default logging function.
        Input url and the project name for the log (i.e. is it part of mapping the domain, or is it part of the final stage in the data collection).

        Keyword arguments:
        url -- str, url
        project_name -- str, Name used for analyzing the log. Use case could be the 'Mapping of domain','Meta_data_collection','main data collection'.
        """
        project_name = project_name.replace(';','-') # make sure the default csv separator is not in the project_name.
        if self.connector_type=='requests': # Determine connector method.
            for _ in range(self.n_tries): # for loop defining number of retries with the requests method.
                ratelimit()
                t = time.time()
                try: # error handling
                    response = self.session.get(url,timeout = self.timeout) # make get call
                    err = '' # define python error variable as empty, assuming success.
                    success = True # define success variable
                    redirect_url = response.url # log current url, after potential redirects
                    dt = t - time.time() # define delta-time waiting for the server and downloading content.
                    size = len(response.text) # define variable for size of html content of the response.
                    response_code = response.status_code # log status code.
                    ## log...
                    call_id = self.id # get current unique identifier for the call
                    self.id+=1 # increment call id
                    #['id','project_name','connector_type','t', 'delta_t', 'url', 'redirect_url','response_size', 'response_code','success','error']
                    row = [call_id,project_name,self.connector_type,t,dt,url,redirect_url,size,response_code,success,err] # define row to be written in the log.
                    self.log.write('\n'+';'.join(map(str,row))) # write log.
                    return response,call_id # return response and unique identifier.
                except Exception as e: # define error condition
                    err = str(e) # python error
                    response_code = '' # blank response code
                    success = False # call success = False
                    size = 0 # content is empty.
                    redirect_url = '' # redirect url empty
                    dt = t - time.time() # define delta t
                    ## log...
                    call_id = self.id # define unique identifier
                    self.id+=1 # increment call_id
                    row = [call_id,project_name,self.connector_type,t,dt,url,redirect_url,size,response_code,success,err] # define row
                    self.log.write('\n'+';'.join(map(str,row))) # write row to log.
        else:
            t = time.time()
            ratelimit()
            self.browser.get(url) # use selenium get method
            ## log
            call_id = self.id # define unique identifier for the call.
            self.id+=1 # increment the call_id
            err = '' # blank error message
            success = '' # success blank
            redirect_url = self.browser.current_url # redirect url.
            dt = t - time.time() # get time for get method ... NOTE: not necessarily the complete load time.
            size = len(self.browser.page_source) # get size of content ... NOTE: not necessarily correct, since selenium works in the background, and could still be loading.
            response_code = '' # empty response code.
            row = [call_id,project_name,self.connector_type,t,dt,url,redirect_url,size,response_code,success,err] # define row
            self.log.write('\n'+';'.join(map(str,row))) # write row to log file.
            # Using selenium it will not return a response object, instead you should call the browser object of the connector.
            ## connector.browser.page_source will give you the html.
            return call_id

logfile="trustpilot.txt" ## name your log file.
connector = Connector(logfile)
I expected to be able to load my 'get' function from 'connector.py'.
The following line may be the issue, due to a naming conflict between your Connector variable and your Connector class:
Connector = scraping_class.Connector(logfile)
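A minimal sketch of what that fix looks like (names are illustrative; the point is simply that the instance no longer shadows the class, and that get is imported via the class rather than as a standalone function):

# connector.py
import scraping_class

logfile = "trustpilot.txt"
scraping_connector = scraping_class.Connector(logfile)   # renamed; was `Connector = ...`

class Connector():
    ...  # class body unchanged, including the get() method

# In the notebook or script that uses it:
from connector import Connector
connector = Connector("trustpilot.txt")
response, call_id = connector.get("https://example.com", "test_project")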
Whenever my Spyne application receives a request, XSD validation is performed. This is good, but whenever there is an XSD violation a fault is raised and my app returns a Client.SchemaValidationError like so:
<soap11env:Fault>
    <faultcode>soap11env:Client.SchemaValidationError</faultcode>
    <faultstring>:25:0:ERROR:SCHEMASV:SCHEMAV_CVC_DATATYPE_VALID_1_2_1: Element '{http://services.sp.pas.ng.org}DateTimeStamp': '2018-07-25T13:01' is not a valid value of the atomic type 'xs:dateTime'.</faultstring>
    <faultactor></faultactor>
</soap11env:Fault>
I would like to know how to handle the schema validation error gracefully and return the details in the Details field of my service's out_message, rather than just raising a standard Client.SchemaValidationError. I want to store the details of the error as a variable and pass it to my OperationOne function.
Here is my code; I have changed variable names for sensitivity.
TNS = "http://services.so.example.org"

class InMessageType(ComplexModel):
    __namespace__ = TNS

    class Attributes(ComplexModel.Attributes):
        declare_order = 'declared'

    field_one = Unicode(values=["ONE", "TWO"],
                        min_occurs=1)
    field_two = Unicode(20, min_occurs=1)
    field_three = Unicode(20, min_occurs=0)
    Confirmation = Unicode(values=["ACCEPTED", "REJECTED"], min_occurs=1)
    FileReason = Unicode(200, min_occurs=0)
    DateTimeStamp = DateTime(min_occurs=1)

class OperationOneResponse(ComplexModel):
    __namespace__ = TNS

    class Attributes(ComplexModel.Attributes):
        declare_order = 'declared'

    ResponseMessage = Unicode(values=["SUCCESS", "FAILURE"], min_occurs=1)
    Details = Unicode(min_len=0, max_len=2000)

class ServiceOne(ServiceBase):
    @rpc(InMessageType,
         _returns=OperationOneResponse,
         _out_message_name='OperationOneResponse',
         _in_message_name='InMessageType',
         _body_style='bare',
         )
    def OperationOne(ctx, message):
        # DO STUFF HERE
        # e.g. return {'ResponseMessage': Failure, 'Details': XSDValidationError}
        pass

application = Application([ServiceOne],
                          TNS,
                          in_protocol=Soap11(validator='lxml'),
                          out_protocol=Soap11(),
                          name='ServiceOne',)

wsgi_application = WsgiApplication(application)

if __name__ == '__main__':
    pass
I have considered the following approach but I can't quite seem to make it work yet:
1. Create a subclass MyApplication with the call_wrapper() function overridden.
2. Instantiate the application with in_protocol=Soap11(validator=None).
3. Inside the call wrapper, set the protocol to Soap11(validator='lxml') and (somehow) call something which will validate the message. Wrap this in a try/except block and, in case of error, catch the error and handle it in whatever way necessary.
I just haven't figured out what I can call inside my overridden call_wrapper() function which will actually perform the validation. I have tried protocol.decompose_incoming_envelope() and other such things but no luck yet.
Overriding the call_wrapper would not work as the validation error is raised before it's called.
You should instead use the event subsystem. More specifically, you must register an application-level handler for the method_exception_object event.
Here's an example:
def _on_exception_object(ctx):
    if isinstance(ctx.out_error, ValidationError):
        ctx.out_error = NicerValidationError(...)

app = Application(...)
app.event_manager.add_listener('method_exception_object', _on_exception_object)
See this test for more info: https://github.com/arskom/spyne/blob/4a74cfdbc7db7552bc89c0e5d5c19ed5d0755bc7/spyne/test/test_service.py#L69
As per your clarification, if you don't want to reply with a nicer error but a regular response, I'm afraid Spyne is not designed to satisfy that use-case. "Converting" an errored-out request processing state to a regular one would needlessly complicate the already heavy request handling logic.
What you can do instead is to HACK the heck out of the response document.
One way to do it is to implement an additional method_exception_document event handler where the <Fault> tag and its contents are either edited to your taste or even swapped out.
Off the top of my head:
class ValidationErrorReport(ComplexModel):
    _type_info = [
        ('foo', Unicode),
        ('bar', Integer32),
    ]

def _on_exception_document(ctx):
    fault_elt, = ctx.out_document.xpath("//soap11:Fault", namespaces={'soap11': NS_SOAP11_ENV})
    explanation_elt = get_object_as_xml(ValidationErrorReport(...))
    fault_parent = fault_elt.parent()
    fault_parent.remove(fault_elt)
    fault_parent.add(explanation_elt)
The above needs to be double-checked with the relevant Spyne and lxml APIs (maybe you can use find() instead of xpath()), but you get the idea.
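Wiring the document-level handler up would then mirror the object-level registration shown earlier (a sketch, reusing the event name from above):

app = Application(...)
app.event_manager.add_listener('method_exception_document', _on_exception_document)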
Hope that helps!
In Tornado, how do you tell apart the different request types? Also, what is the proper way to split out the requests? In the end if I go to /item/1.xml, I want xml, /item/1.html to be a proper html view etc.
Something like:
def getXML():
    return self.render('somexmlresult.xml')

def getHTML():
    return self.render('htmlresult.html')
or
def get():
    if request == 'xml':
        return self.render('somexmlresult.xml')
    elif request == 'html':
        return self.render('htmlresult.html')
~ edit ~ I was shooting for something along the lines of rails' implementation seen here
I would prefer a self-describing URL, as in a RESTful application. A URL part need not be required to represent the format of the resource. http://www.enterprise.com/customer/abc/order/123 must represent the resource irrespective of whether it is XML/HTML/JSON. A way to send the requested format is to pass it as one of the request parameters.
http://www.enterprise.com/customer/abc/order/123?mimetype=application/xml
http://www.enterprise.com/customer/abc/order/123?mimetype=application/json
http://www.enterprise.com/customer/abc/order/123?mimetype=text/html
Use the request parameter to serialize to the appropriate format.
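A minimal sketch of that idea in a Tornado handler (the template names, URL pattern and the mimetype parameter name are all placeholders):

import tornado.web

class OrderHandler(tornado.web.RequestHandler):
    def get(self, order_id):
        # Requested format comes in as a query parameter, defaulting to HTML.
        mimetype = self.get_argument("mimetype", "text/html")
        if mimetype == "application/xml":
            self.set_header("Content-Type", "application/xml")
            self.render("order.xml", order_id=order_id)    # placeholder template
        elif mimetype == "application/json":
            self.set_header("Content-Type", "application/json")
            self.write({"order_id": order_id})             # Tornado serializes dicts as JSON
        else:
            self.render("order.html", order_id=order_id)   # placeholder template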
The mimetype parameter is the correct way to do this; however, I can see where an end user would want a simpler way of accessing the data in the format they wish.
In order to maintain compatibility with standards-compliant libraries etc., you should ultimately determine the response type based on the mimetype requested and respond with the appropriate mimetype in the headers.
A way to achieve this without breaking anything would be to add a parser that checks the requested URI for a suffix matching a tuple of defined suffixes that the route can respond to; if it finds one and the mimetype is not already specified, change the mimetype passed in to the correct type for the suffix.
Make sure that the final decision is based on the supplied mimetype, not the suffix.
This way others can interact with your RESTful service in the way they're used to, and you can still maintain ease of use for humans etc.
~ edit ~
Here's an example regexp that checks whether the URI ends in .js, .html, .xml or .json. It assumes you're given the full URI.
(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*\.(?:js|html|xml|json))(?:\?([^#]*))?(?:#(.*))?
Here's an example that's easier to interpret but less robust
^https?://(?:[a-z\-]+\.)+[a-z]{2,6}(?:/[^/#?]+)+\.(?:js|html|xml|json)$
These regexes are taken from RFC 2396.
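A hedged sketch of that suffix-to-mimetype fallback (the mapping and default are illustrative; an explicitly supplied mimetype always wins):

import re

SUFFIX_MIMETYPES = {
    "js": "application/javascript",
    "html": "text/html",
    "xml": "application/xml",
    "json": "application/json",
}

def resolve_mimetype(uri, explicit_mimetype=None):
    # The explicitly requested mimetype takes precedence over the URI suffix.
    if explicit_mimetype:
        return explicit_mimetype
    match = re.search(r"\.(js|html|xml|json)(?:\?[^#]*)?(?:#.*)?$", uri)
    if match:
        return SUFFIX_MIMETYPES[match.group(1)]
    return "text/html"  # default when neither is given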
First, set up the handlers to expect a RESTful-style URI. We use two chunks of regex, looking for an ID and a potential request format (i.e. html, xml, json, etc.).
class TaskServer(tornado.web.Application):
    def __init__(self, newHandlers = [], debug = None):
        request_format = "(\.[a-zA-Z]+$)?"
        baseHandlers = [
            (r"/jobs" + request_format, JobsHandler),
            (r"/jobs/", JobsHandler),
            (r"/jobs/new" + request_format, NewJobsHandler),
            (r"/jobs/([0-9]+)/edit" + request_format, EditJobsHandler)
        ]
        for handler in newHandlers:
            baseHandlers.append(handler)

        tornado.web.Application.__init__(self, baseHandlers, debug = debug)
Now, in the handler, define a reusable function parseRestArgs (I put mine in a BaseHandler but pasted it here for ease of understanding/to save space) that splits out IDs and request formats. Since you should be expecting IDs in a particular order, I stick them in a list.
The get function can be abstracted more but it shows the basic idea of splitting out your logic into different request formats...
class JobsHandler(BaseHandler):
    def parseRestArgs(self, args):
        idList = []
        extension = None
        if len(args) and not args[0] is None:
            for arg in range(len(args)):
                match = re.match("([0-9]+)", args[arg])
                if match:
                    idList.append(int(match.groups()[0]))  # collect the numeric IDs in order
            match = re.match("(\.[a-zA-Z]+$)", args[-1])
            if match:
                extension = match.groups()[0][1:]
        return idList, extension

    def get(self, *args):
        ### Read
        job_id, extension = self.parseRestArgs(args)

        if len(job_id):
            if extension is None or extension == "html":
                #self.render(html) # Show with some ID voodoo
                pass
            elif extension == 'json':
                #self.render(json) # Show with some ID voodoo
                pass
            else:
                raise tornado.web.HTTPError(404) # We don't do that sort of thing here...
        else:
            if extension is None or extension == "html":
                pass
                # self.render(html) # Index - No ID given, show an index
            elif extension == "json":
                pass
                # self.render(json) # Index - No ID given, show an index
            else:
                raise tornado.web.HTTPError(404) # We don't do that sort of thing here...
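And, as a quick sketch, running the application defined above would look something like this (the port is arbitrary, and the NewJobsHandler/EditJobsHandler classes are assumed to be defined elsewhere):

import tornado.ioloop

if __name__ == "__main__":
    application = TaskServer()   # uses the baseHandlers defined above
    application.listen(8888)     # arbitrary port
    tornado.ioloop.IOLoop.current().start()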
I'm investigating SUDS as a SOAP client for python. I want to inspect the methods available from a specified service, and the types required by a specified method.
The aim is to generate a user interface, allowing users to select a method, then fill in values in a dynamically generated form.
I can get some information on a particular method, but am unsure how to parse it:
client = Client(url)
method = client.sd.service.methods['MyMethod']
I am unable to programmatically figure out what object type I need to create to be able to call the service:
obj = client.factory.create('?')
res = client.service.MyMethod(obj, soapheaders=authen)
Does anyone have some sample code?
Okay, so SUDS does quite a bit of magic.
A suds.client.Client, is built from a WSDL file:
client = suds.client.Client("http://mssoapinterop.org/asmx/simple.asmx?WSDL")
It downloads the WSDL and creates a definition in client.wsdl. When you call a method using SUDS via client.service.<method> it's actually doing a whole lot of recursive resolve magic behind the scenes against that interpreted WSDL. To discover the parameters and types for methods you'll need to introspect this object.
For example:
for method in client.wsdl.services[0].ports[0].methods.values():
    print '%s(%s)' % (method.name, ', '.join('%s: %s' % (part.type, part.name) for part in method.soap.input.body.parts))
This should print something like:
echoInteger((u'int', http://www.w3.org/2001/XMLSchema): inputInteger)
echoFloatArray((u'ArrayOfFloat', http://soapinterop.org/): inputFloatArray)
echoVoid()
echoDecimal((u'decimal', http://www.w3.org/2001/XMLSchema): inputDecimal)
echoStructArray((u'ArrayOfSOAPStruct', http://soapinterop.org/xsd): inputStructArray)
echoIntegerArray((u'ArrayOfInt', http://soapinterop.org/): inputIntegerArray)
echoBase64((u'base64Binary', http://www.w3.org/2001/XMLSchema): inputBase64)
echoHexBinary((u'hexBinary', http://www.w3.org/2001/XMLSchema): inputHexBinary)
echoBoolean((u'boolean', http://www.w3.org/2001/XMLSchema): inputBoolean)
echoStringArray((u'ArrayOfString', http://soapinterop.org/): inputStringArray)
echoStruct((u'SOAPStruct', http://soapinterop.org/xsd): inputStruct)
echoDate((u'dateTime', http://www.w3.org/2001/XMLSchema): inputDate)
echoFloat((u'float', http://www.w3.org/2001/XMLSchema): inputFloat)
echoString((u'string', http://www.w3.org/2001/XMLSchema): inputString)
So the first element of the part's type tuple is probably what you're after:
>>> client.factory.create(u'ArrayOfInt')
(ArrayOfInt){
   _arrayType = ""
   _offset = ""
   _id = ""
   _href = ""
   _arrayType = ""
}
Update:
For the Weather service it appears that the "parameters" are a part with an element not a type:
>>> client = suds.client.Client('http://www.webservicex.net/WeatherForecast.asmx?WSDL')
>>> client.wsdl.services[0].ports[0].methods.values()[0].soap.input.body.parts[0].element
(u'GetWeatherByZipCode', http://www.webservicex.net)
>>> client.factory.create(u'GetWeatherByZipCode')
(GetWeatherByZipCode){
ZipCode = None
}
But this is magic'd into the parameters of the method call (a la client.service.GetWeatherByZipCode("12345")). IIRC this is SOAP RPC binding style? I think there's enough information here to get you started. Hint: the Python command line interface is your friend!
According to the suds documentation, you can inspect the service object with __str__(). So the following gets a list of methods and complex types:
from suds.client import Client;
url = 'http://www.webservicex.net/WeatherForecast.asmx?WSDL'
client = Client(url)
temp = str(client);
The code above produces following result (contents of temp):
Suds ( https://fedorahosted.org/suds/ ) version: 0.3.4 (beta) build: R418-20081208

Service ( WeatherForecast ) tns="http://www.webservicex.net"
   Prefixes (1)
      ns0 = "http://www.webservicex.net"
   Ports (2):
      (WeatherForecastSoap)
         Methods (2):
            GetWeatherByPlaceName(xs:string PlaceName, )
            GetWeatherByZipCode(xs:string ZipCode, )
         Types (3):
            ArrayOfWeatherData
            WeatherData
            WeatherForecasts
      (WeatherForecastSoap12)
         Methods (2):
            GetWeatherByPlaceName(xs:string PlaceName, )
            GetWeatherByZipCode(xs:string ZipCode, )
         Types (3):
            ArrayOfWeatherData
            WeatherData
            WeatherForecasts
This would be much easier to parse. Also, every method is listed with its parameters along with their types. You could probably even use just a regular expression to extract the information you need.
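For instance, a rough regex sketch over the temp string built above, assuming the "Name(arguments)" layout shown in the dump:

import re

# Grab lines of the form "MethodName(xs:string ArgName, )" from str(client).
method_pattern = re.compile(r'^\s+(\w+)\(([^)]*)\)\s*$', re.MULTILINE)
for name, args in method_pattern.findall(temp):
    arg_list = [a.strip() for a in args.split(',') if a.strip()]
    print(name, arg_list)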
Here's a quick script I wrote based on the above information to list the input methods suds reports as available on a WSDL. Pass in the WSDL URL. Works for the project I'm currently on, I can't guarantee it for yours.
import suds
def list_all(url):
    client = suds.client.Client(url)
    for service in client.wsdl.services:
        for port in service.ports:
            methods = port.methods.values()
            for method in methods:
                print(method.name)
                for part in method.soap.input.body.parts:
                    part_type = part.type
                    if(not part_type):
                        part_type = part.element[0]
                    print('  ' + str(part.name) + ': ' + str(part_type))
                    o = client.factory.create(part_type)
                    print('    ' + str(o))
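Called, for instance, against the weather-forecast WSDL used elsewhere in this thread:

list_all('http://www.webservicex.net/WeatherForecast.asmx?WSDL')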
You can access suds's ServiceDefinition object. Here's a quick sample:
from suds.client import Client
c = Client('http://some/wsdl/link')
types = c.sd[0].types
Now, if you want to know the prefixed name of a type, it's also quite easy:
c.sd[0].xlate(c.sd[0].types[0][0])
This double-bracket notation is needed because the types are a list (hence the first [0]), and each item in that list may contain two items. However, suds's internal implementation of __unicode__ does exactly that (i.e. takes only the first item in the list):
s.append('Types (%d):' % len(self.types))
for t in self.types:
    s.append(indent(4))
    s.append(self.xlate(t[0]))
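So, as a small sketch, listing the prefixed name of every type follows the same pattern:

sd = c.sd[0]
for t in sd.types:
    print(sd.xlate(t[0]))   # prefixed name of each type, e.g. ns0:WeatherData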
Happy coding ;)
Once you have created a WSDL method object, you can get information about it from its __metadata__, including a list of its arguments' names.
Given an argument's name, you can access its actual instance in the method you created. That instance also contains its own information in __metadata__; there you can get its type name.
# creating method object
method = client.factory.create('YourMethod')
# getting list of arguments' names
arg_names = method.__metadata__.ordering
# getting types of those arguments
types = [method.__getitem__(arg).__metadata__.sxtype.name for arg in arg_names]
Disclaimer: this only works with complex WSDL types. Simple types, like strings and numbers, are defaulted to None
from suds.client import Client
url = 'http://localhost:1234/sami/2009/08/reporting?wsdl'
client = Client(url)
functions = [m for m in client.wsdl.services[0].ports[0].methods]
count = 0
for function_name in functions:
    print(function_name)
    count += 1

print("\nNumber of services exposed : ", count)
I needed an example of using suds with objects.
Besides the answers found here, I found a very good article that answered my question even further.
Here is a short summary:
First, print the client to see an overview of its contents.
from suds.client import Client

client = Client("https://wsvc.cdiscount.com/MarketplaceAPIService.svc?wsdl")
print client
Second, create an instance of a type (using its name, including its ns*. prefix), and print it to see its member data.
HeaderMessage = client.factory.create('ns0:HeaderMessage')
print HeaderMessage
To fill your object's data members, either assign them a scalar value (for scalar members) or a dict (for object members).
HeaderMessage.Context = {
    "CatalogID": "XXXXX",
    "CustomerID": 'XXXXX',
    "SiteID": 123
}
Members whose type name starts with ArrayOf expect a list of objects of the type named in the rest of the type name.
ArrayOfDomainRights = client.factory.create('ns0:ArrayOfDomainRights')
ArrayOfDomainRights.DomainRights = [XXXXXXXXXXXXX, XXXXXXXXXXXX]