I want to convert a perl SOAP client into a python SOAP client.
The Perl client is initialized like this:
$url = 'https://host:port/cgi-devel/Service.cgi';
$uri = 'https://host/Service';
my $soap = SOAP::Lite
    -> uri($uri)
    -> proxy($url);
I tried to replicate this in Python 2.4.2 with suds 0.3.6:
from suds.client import Client
url = "https://host:port/cgi-devel/Service.cgi"
client = Client(url)
However, when running this Python script I get the error
suds.transport.TransportError: HTTP Error 411: Length Required
Is it because of HTTPS, or what might be the problem?
Any help would be greatly appreciated!
The urllib2 module doesn't add the Content-Length header (required for POST requests) automatically when the Request object is constructed manually, as suds does. You have to patch suds, probably the suds.transport.HttpTransport.open() method or the suds.transport.Request class.
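For example, a rough, untested sketch of such a patch as a transport subclass (the class name and header handling are mine; request.message and request.headers follow the suds.transport.Request interface):
from suds.transport.http import HttpTransport
from suds.client import Client

class ContentLengthTransport(HttpTransport):
    # ensure every outgoing request carries a Content-Length header
    def send(self, request):
        if request.message is not None:
            request.headers['Content-Length'] = str(len(request.message))
        return HttpTransport.send(self, request)

client = Client("https://host:port/cgi-devel/Service.cgi",
                transport=ContentLengthTransport())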
I had the same error; after switching to a local WSDL file with an explicit location, this worked:
import suds.client

wsdl = 'file:///tmp/my.wsdl'
client = suds.client.Client(wsdl, username='lbuser', password='lbpass',
                            location='https://path.to.our.loadbalancer:9090/soap')
You should ask this on the suds mailing list. The library is under active development, is open source, and the authors are keen to get feedback from users.
Your code looks fine; this could be a problem with the WSDL itself or with the suds library, so I encourage you to ask the authors directly (after checking with other WSDLs that your installation is correct).
I am trying to get the header of a gRPC response with the following code, but it doesn't work:
response = stub.GetAccounts(users_pb2.GetAccountsRequest(), metadata=metadata)
header = response.header()
This is what the header looks like in Kreya. Does anyone know how to get the same header in Python?
I suspect (but don't know for sure) that you can't access the underlying HTTP/2 response headers from the Python gRPC client.
You can configure various environment variables that expose underlying details (see gRPC environment variables), perhaps GRPC_TRACE="http" GRPC_VERBOSITY="DEBUG".
If the headers are actually gRPC metadata, you can use Python's with_call and the call's initial_metadata() and trailing_metadata() methods, as shown in the gRPC metadata example here.
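For the metadata case, a minimal sketch using with_call (the stub, request type, and metadata variable are taken from the question):
# with_call returns the response together with a call object that
# exposes the metadata the server sent back
response, call = stub.GetAccounts.with_call(
    users_pb2.GetAccountsRequest(), metadata=metadata)

for key, value in call.initial_metadata():    # header metadata
    print(key, value)
for key, value in call.trailing_metadata():   # trailer metadata
    print(key, value)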
I am using Python 2.7 with the requests and github3 libraries to find and replace some webhook URLs in our GitHub Enterprise account. We have more than 500 organizations and 1000+ repositories (private and public).
Our Python script works well for most test cases. However, I found an error, which I will explain below.
I have found some webhook URLs in our GitHub Enterprise account that have the following structure:
EXAMPLE:
http://10.10.10.10:8080/pr/XXXX-pr/v1/teamcity?buildType=BeCd_XXCDServer_Pr
That type of webhook URL has been around for a few years in our GitHub, but since we will migrate some servers to the cloud, we will need to replace the hostname in those URLs over time. So we created a Python script to do it, which seems to work well.
The specific problem appears when the script attempts to replace the hostname in a URL that begins with "http".
If the URL starts with "https", the replacement process runs smoothly.
The error code I receive is "422 Unprocessable Entity".
The URL that I send to requests.patch is something like this:
my_url = "http://MY_GITHUB_DOMAIN/repos/MY_ORG/MY_REPO/hooks/001"
And the Python code is:
import json
import requests

# username is defined earlier in the script
data_json = '{"config": {"url": "http://NEW_GITHUB_DOMAIN/repos/MY_ORG/MY_REPO/hooks/001"}}'
data_json_load = json.loads(data_json)
TOKEN = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
my_url = "http://MY_GITHUB_DOMAIN/repos/MY_ORG/MY_REPO/hooks/001"
requests.patch(url=my_url, data=json.dumps(data_json_load), auth=(username, TOKEN))
If I manually replace "http" with "https" in the original webhook in GitHub before executing the script, the Python code runs correctly.
What could be the root cause of this problem (the 422 error when the script attempts to change the hostname in a URL that begins with http instead of https)?
Could it be a bug in the requests Python library?
Thanks in advance!
I am trying to update Firebase using the python-firebase library, but cannot get authentication to work, using adapted sample code:
from firebase import firebase as fb
auth = fb.FirebaseAuthentication('<firebase secret>', 'me@gmail.com',
                                 auth_payload={'uid': '<uid>'})  # NB renamed extras -> auth_payload, id -> uid here
firebase = fb.FirebaseApplication('https://<url>.firebaseio.com', authentication=auth)
result = firebase.get('/users', name=None, connection=None,
                      params={'print': 'pretty'})  # HTTPError: 401 Client Error: Unauthorized
print result
I keep getting (401) Unauthorized, but I notice that the token generated by the library is radically different to one generated by a JavaScript version of FirebaseTokenGenerator - and the latter authenticates fine when I provide the same URL, uid and secret.
I noticed a GitHub issue, questioning why the library did not just use the official Python firebase-token-generator, so I forked and implemented the suggested change just in case it would make a difference, but still get the same result.
Can anyone suggest what might be tripping me up here?
This library is 4 years old, which means a lot has changed in Firebase since then, especially after Google's acquisition. The way you access Firebase is completely different now.
I recommend using the official Firebase Admin Python SDK: https://github.com/firebase/firebase-admin-python
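For example, a minimal sketch of the same /users read with the Admin SDK (the service-account key path is a placeholder; the database URL is the one from the question):
import firebase_admin
from firebase_admin import credentials, db

# service-account credentials replace the legacy database secret
cred = credentials.Certificate('path/to/serviceAccountKey.json')
firebase_admin.initialize_app(cred, {'databaseURL': 'https://<url>.firebaseio.com'})

# equivalent of firebase.get('/users', ...) in the question
result = db.reference('/users').get()
print(result)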
A really good alternative, though I'd still prefer the official SDK, is Pyrebase:
https://github.com/thisbejim/Pyrebase
I want to make an HTTPS request to a real-time stream and keep the connection open so that I can keep reading content from it and processing it.
I want to write the script in Python, but I am unsure how to keep the connection open. I have tested the endpoint with curl, which keeps the connection open successfully. But how do I do it in Python? Currently, I have the following code:
import httplib

c = httplib.HTTPSConnection('userstream.twitter.com')
c.request("GET", "/2/user.json?" + req.to_postdata())  # req is an OAuth request object built elsewhere
response = c.getresponse()
Where do I go from here?
Thanks!
It looks like your real-time stream is delivered as one endless HTTP GET response, yes? If so, you could just use Python's built-in urllib2.urlopen(). It returns a file-like object from which you can read as much as you want until the server hangs up on you.
import urllib2

f = urllib2.urlopen('https://encrypted.google.com/')
while True:
    data = f.read(100)
    if not data:  # empty read means the server closed the connection
        break
    print(data)
Keep in mind that although urllib2 speaks https, it doesn't validate server certificates, so you might want to try an add-on package like pycurl or urlgrabber for better security. (I'm not sure whether urlgrabber supports https.)
Connection keep-alive features are not available in any of the Python standard libraries for https. The most mature option is probably urllib3.
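For example, a rough sketch of a streaming read with urllib3 (the endpoint is the one from the question; preload_content=False keeps the response open so you can consume it incrementally):
import urllib3

http = urllib3.PoolManager()
r = http.request('GET', 'https://userstream.twitter.com/2/user.json',
                 preload_content=False)
for chunk in r.stream(1024):  # read the stream in 1 KB pieces
    print(chunk)
r.release_conn()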
httplib2 supports this. (I'd have thought it the most mature option; I didn't know about urllib3 yet, so TokenMacGuy may still be right.)
EDIT: while httplib2 does support persistent connections, I don't think you can really consume streams with it (i.e. one long response vs. multiple requests over the same connection), which I now realise you may need.
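For reference, a minimal httplib2 sketch of the persistent-connection part (URL reused from the question):
import httplib2

h = httplib2.Http()
# consecutive requests through the same Http object reuse the connection
resp1, content1 = h.request('https://userstream.twitter.com/2/user.json', 'GET')
resp2, content2 = h.request('https://userstream.twitter.com/2/user.json', 'GET')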
Please advise a library for working with SOAP in Python.
I'm currently trying suds, and I can't understand how to get the HTTP headers from the server reply.
Code example:
from suds.client import Client
url = "http://10.1.0.36/money_trans/api3.wsdl"
client = Client(url)
login_res = client.service.Login("login", "password")
The variable login_res contains the XML answer but doesn't contain the HTTP headers. I need to get the session ID from them.
I think you actually want to look in the Cookie Jar for that.
client = Client(url)
login_res = client.service.Login("login", "password")
for c in client.options.transport.cookiejar:
    if "sess" in str(c).lower():
        print "Session cookie:", c
I'm not sure. I'm still a SUDS noob, myself. But this is what my gut tells me.
The response from Ishpeck is on the right path. I just wanted to add a few things about the Suds internals.
The suds client is a big fat abstraction layer on top of a urllib2 HTTP opener. The HTTP client, cookiejar, headers, request and responses are all stored in the transport object. The problem is that none of this activity is cached or stored inside of the transport other than, maybe, the cookies within the cookiejar, and even tracking these can sometimes be problematic.
If you want to see what's going on when debugging, my suggestion would be to add this to your code:
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
Suds makes use of the native logging module, so by turning on debug logging you get to see all of the activity being performed underneath, including headers, variables, payload, URLs, etc. This has saved me countless times.
Outside of that, if you really need to definitively track state on your headers, you're going to need to create a custom subclass of a suds.transport.http.HttpTransport object and overload some of the default behavior and then pass that to the Client constructor.
Here is a super-over-simplified example:
from suds.transport.http import HttpTransport, Reply, TransportError
from suds.client import Client

class MyTransport(HttpTransport):
    def send(self, request):
        # custom stuff done here: inspect or modify request.headers,
        # then fall back to the default behaviour
        reply = HttpTransport.send(self, request)
        # reply.code, reply.headers and reply.message are available here
        return reply

mytransport_instance = MyTransport()
myclient = Client(url, transport=mytransport_instance)
I think the Suds library has poor documentation, so I recommend using Zeep instead. It's a SOAP request library for Python. Its documentation isn't perfect, but it's much clearer than the Suds docs.
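For example, a sketch of the same Login call with Zeep; with raw_response=True Zeep returns the underlying requests.Response, whose headers and cookies should carry the session ID (WSDL URL and operation are taken from the question):
from zeep import Client

client = Client("http://10.1.0.36/money_trans/api3.wsdl")

# raw_response=True makes the call return the requests.Response object
# instead of the parsed SOAP result
with client.settings(raw_response=True):
    response = client.service.Login("login", "password")

print(response.headers)   # HTTP headers from the server reply
print(response.cookies)   # session cookie, if the server set one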