XML differences between WCF and Python SUDS for inheritance?

I have a question regarding the different ways inheritance is represented between WCF and SUDS (Python). I have a C++/CLI WCF server (.NET 3.5 SP1) and I'm trying to communicate with it. I've used a C# client (also WCF) and it works fine, but there are problems when using a SUDS client (Python 2.6.4, SUDS 0.3.8). It mostly works, except for inherited types, and the difference seems to be in the way the two represent inheritance in the SOAP XML. When I look at the messages that the server logs, I get results similar to the following:
C# Client:
<ns:DerivedType>
...
</ns:DerivedType>
Python Client:
<ns:BaseType xsi:type="ns:DerivedType">
...
</ns:BaseType>
Is it possible to change the WCF server to accept the Python style? Or to change the Python SUDS client to send the WCF style? Which one is correct?

I don't know about the Python side, but there are a couple of options on the WCF side. The more straightforward option is to create a message inspector to detect and convert the Python-generated SOAP into something more palatable to the WCF service.
The more difficult but "purer" option is to determine how to shape the WSDL generated by the WCF service so that it allows the Python client to produce the aforementioned palatable SOAP. Once you find the required tweaks, you would use MessageContract classes in place of the DataContract classes to make the service produce the tweaked WSDL. The .NET clients should handle the tweaked WSDL without complaint.
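On the Python side, if upgrading suds is an option, a MessagePlugin (available from suds 0.4 onward, so not in 0.3.8 itself) can rewrite the outgoing envelope just before it is sent. A rough sketch, with BaseType/DerivedType standing in for the real contract names and WSDL_URL as a placeholder:
from suds.client import Client
from suds.plugin import MessagePlugin

class DerivedTypeFixup(MessagePlugin):
    # Rewrite <ns:BaseType xsi:type="ns:DerivedType"> into the plain
    # <ns:DerivedType> element that the WCF service expects.
    def sending(self, context):
        xml = context.envelope
        # Naive text rewrite; a sturdier version would walk the SAX tree in
        # marshalled() and handle prefixes and extra attributes properly.
        xml = xml.replace('<ns:BaseType xsi:type="ns:DerivedType"', '<ns:DerivedType')
        xml = xml.replace('</ns:BaseType>', '</ns:DerivedType>')
        context.envelope = xml

client = Client(WSDL_URL, plugins=[DerivedTypeFixup()])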

You could use the data contract serializer to control how the XML is created and read. See: http://msdn.microsoft.com/en-us/library/ms731073.aspx

Related

GRPC: Sending messages from Python to C#, best method?

I am working on a project that scrapes data from 3 devices (2x serial and 1x SSH). I have this part implemented, no problem.
I am now heading towards the second part, where I need to be able to send the data using protobuf to the client's computer, where they will receive it and display it in their own client.
The customer has provided examples of their GRPC servers, and it's written in C#.
Currently, for security reasons, our system uses RedHat 8.3, and I am using an SSH protocol library called Paramiko for the SSH part. Paramiko is a Python library. Also, the machine I am extracting data from only runs on Linux.
Here are my main questions; I apologize if they're a bit scattered.
1.) The developer from the client side provided us with a VM that has a simulator and examples written in C#, since their side was written in C#. He says it's best to use C# because the examples can almost all be re-used as written. While I know it's possible to use C# on Linux these days, I still have no experience doing so, and I don't know how complicated/tedious this could get.
2.) I write the code in C# and wrap all the Python code, which is also something I've never done, but I would be doing this all in RedHat.
3.) I keep it in Python, because sending protobuf messages works across languages as long as the messages are sent properly. Also, from the client side, I'm not sure if they will need to make adjustments to receive protobuf messages sent from Python (I don't think so, because it's just serialized messages, yeah?).
Any advice would be appreciated. I am looking to pick up more knowledge outside my realm.
Cheers,
Z
If you're happy in Python, I would use option 3. The key thing is to either obtain their .proto schema, or if they've used code-first C# for their server: reverse-engineer the schema (or use tools that generate the schema from code). If you only have C# and don't know how to infer a .proto from that, I can probably help.
That said: if you want to learn some new bits, option 1 (using C# in your system) is also very viable.
IMO option 2 is the worst of all worlds.
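For a sense of what option 3 involves on the Python side, here is a minimal sketch; the service and message names (Telemetry, Send, DeviceData), the telemetry.proto file, and the host/port are all made-up placeholders standing in for whatever the customer's .proto actually defines:
# pip install grpcio grpcio-tools
# python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. telemetry.proto
import grpc
import telemetry_pb2
import telemetry_pb2_grpc

def send_reading(host, port, value):
    # Plain channel for brevity; use grpc.secure_channel with SSL
    # credentials for anything security-sensitive.
    channel = grpc.insecure_channel('%s:%d' % (host, port))
    stub = telemetry_pb2_grpc.TelemetryStub(channel)
    request = telemetry_pb2.DeviceData(value=value)
    # The C# server deserializes this like any other protobuf message;
    # the sending language doesn't matter.
    return stub.Send(request)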

SOAP web service: Python server and Perl client

I have created a SOAP service (using spyne), with Python on the server side. I have a Python client that works perfectly fine, but I want to reuse the output obtained from the server in a Perl script (I cannot use Python there).
One possible way would be to call the Python client from within Perl and capture STDOUT.
But it would be nicer if I could write a simple Perl client to get the results. I wrote a simple Perl SOAP client following this, but my code did not print anything:
use SOAP::Lite;
use Data::Dumper;
# $URL is the spyne service's WSDL endpoint (e.g. http://host:port/?wsdl)
my $soap    = SOAP::Lite->new();
my $service = $soap->service($URL);
print Dumper($service->serverFunction($arg1, $arg2));
Here I found a discussion about a Perl server and a Python client.
So, is it possible to have a Python server and a Perl client?
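For reference, the spyne service being called might look roughly like this (serverFunction, its argument types, the namespace, and the port are assumptions); the WSDL it publishes at /?wsdl is what the Perl client's service() call needs to point at:
from spyne import Application, rpc, ServiceBase, Unicode
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication

class MyService(ServiceBase):
    @rpc(Unicode, Unicode, _returns=Unicode)
    def serverFunction(ctx, arg1, arg2):
        # Placeholder implementation
        return arg1 + arg2

application = Application([MyService], tns='example.soap',
                          in_protocol=Soap11(validator='lxml'),
                          out_protocol=Soap11())

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('0.0.0.0', 8000, WsgiApplication(application))
    server.serve_forever()   # WSDL is then served at http://host:8000/?wsdl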
Yes, it is possible, but it's SOAP. See the notice from SOAP::Simple:
Let's face it: SOAP is painful. It's a dumb idea; the only reason you should ever consider using SOAP is if someone holds a gun to your head or pays you a lot of money for it.
If you want to build an API and have both server and client under your control: try to avoid SOAP whenever possible. Consider switching to a JSON or basic HTTP API.
If you want to keep SOAP, I suggest trying SOAP::Simple; it might work better than SOAP::Lite. There's also the SOAP module, but it's probably too heavyweight for your use case.
PS: You might want to include more details (e.g. code you tried) in your question to get a more specific answer.

Python Suds SOAP client for WCF: WS-ReliableMessaging

I have a WCF service which uses WS-ReliableMessaging. I would like to write some Python scripts that connect to this service using the suds SOAP client, but it seems that suds doesn't support reliable messaging out of the box. Has anybody tried to use suds with reliable messaging? Do I have to write the handshaking logic (CreateSequence, TerminateSequence, and so on) myself? Or is there another Python SOAP client which supports WS-ReliableMessaging?
Thanks in advance.
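If suds really does lack WS-ReliableMessaging support, any CreateSequence/Sequence/TerminateSequence traffic would have to be hand-built. A very rough sketch of attaching a WS-RM Sequence header with suds (WSDL_URL and the sequence identifier are placeholders, and the CreateSequence exchange that would produce a real identifier is not shown):
from suds.client import Client
from suds.sax.element import Element

WSRM_NS = ('wsrm', 'http://docs.oasis-open.org/ws-rx/wsrm/200702')

client = Client(WSDL_URL)

# Hand-rolled Sequence header; the Identifier has to come from a prior
# CreateSequence exchange that you would also implement yourself.
sequence = Element('Sequence', ns=WSRM_NS)
identifier = Element('Identifier', ns=WSRM_NS)
identifier.setText('urn:uuid:PLACEHOLDER')
sequence.append(identifier)
number = Element('MessageNumber', ns=WSRM_NS)
number.setText('1')
sequence.append(number)
client.set_options(soapheaders=sequence)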

JSON-RPC server via Python

I need to implement a JSON-RPC server like this:
http://pasha.cdemo.applicationcraft.com/service/json
This server will be accessed from jQuery and I have to use Python for writing it.
What library should I use? Can you also give me an example of using that library?
Thanks.
I found CherryPy very easy to use (it doesn't come with a predefined template engine or database model, so IMO it's a better fit than the bigger frameworks when your server just produces JSON and isn't a typical database-backed app).
Coupled with nginx and memcached, it can also be quite performant...
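A minimal sketch of that approach, assuming the jQuery side POSTs standard JSON-RPC request objects to /service (the exposed method names here are illustrative):
# pip install cherrypy
import json
import cherrypy

class JsonRpcService(object):
    # Methods exposed over JSON-RPC; the names are illustrative.
    methods = {
        'echo': lambda params: params,
        'add':  lambda params: sum(params),
    }

    @cherrypy.expose
    def service(self):
        request = json.loads(cherrypy.request.body.read())
        result = self.methods[request['method']](request.get('params', []))
        cherrypy.response.headers['Content-Type'] = 'application/json'
        return json.dumps({'jsonrpc': '2.0',
                           'result': result,
                           'id': request.get('id')})

if __name__ == '__main__':
    cherrypy.quickstart(JsonRpcService(), '/')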
Python 2.6 comes with the json module in the standard library, which lets you easily convert Python data structures to JSON responses.
For the HTTP communication and request handling, you can use a Python web framework like Pyramid or Django, or HTTP server software like Tornado. It really depends on what you need to process in your JSON-RPC calls.

how to implement thin client app with pyqt

Here is what I would like to do, and I want to know how some people with experience in this field do this:
With three POST requests I get from the HTTP server:
1. widgets and layout
2. app logic (minimal)
3. data
Or maybe it's better to combine the first two, or all three. I'm thinking of using PyQt; I think I can load .ui files, and I can parse JSON data. I just think it would be rather dangerous to pass code over a network to be executed on the client. If someone can hijack the connection, or change the app's settings to point at a bogus server, that is nasty.
I want to do it this way because it keeps all the clients up to date. It's sort of like a webapp, but simpler because of Qt. Essentially the "thin" app is just a minimal compiled Python file that loads data from a server.
How can I do this without introducing security issues on the client? Is https good enough? Is there a way to get pyqt to run in a sandbox of sorts?
PS. I'm not stuck on Qt or python. I do like the concept though. I don't really want to use Java - server or client side.
Your desire to send "app logic" from the server to the client without sending "code" is inherently self-contradictory, though you may not realize that yet -- even if the "logic" you're sending is in some simplified ad-hoc "language" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby execute that code. You may "sandbox" things to some extent, but in the end, that's what you're doing.
To avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-).
Once you're using HTTPS, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, otherwise bytecode is fine), and having the client save them to disk and import/reload them, will be just fine. You'll basically be doing a variant of the classic "plugin architecture", where the plugins happen to be sent from the server (instead of being found on disk in a given location).
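A compressed sketch of that plugin-over-HTTPS idea (the URL and module name are placeholders; a production version would likely also want to pin the expected certificate rather than just trust the system CA store):
import ssl
import types
from urllib.request import urlopen   # urllib2 on Python 2

PLUGIN_URL = 'https://example.com/plugins/app_logic.py'   # placeholder

def load_remote_module(url=PLUGIN_URL, name='app_logic'):
    # create_default_context() verifies the CA chain and the hostname,
    # which is the certificate-validation step insisted on above.
    ctx = ssl.create_default_context()
    source = urlopen(url, context=ctx).read()
    module = types.ModuleType(name)
    exec(source, module.__dict__)     # "import" the downloaded code
    return module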
Use a web browser: it is a well-documented system that does everything you want. It is also relatively fast to create simple graphical applications in a browser. Examples supporting my reasoning:
The Sage math environment has built their graphical client as an application that runs in a browser, together with a local web-server.
There is the Pyjamas project that compiles Python to Javascript. This is IMHO worth a try.
Edit:
You could try PyPy's sandboxed interpreter as a secure Python interpreter for the code that is transferred over the network.
And then there is the simplest solution: simply send Python modules over the network, but sign and/or encrypt them. This is the way all Linux distributions work. You store a cryptographic key on the local computer; the server signs/encrypts the code before it sends it, using the matching key. GPG should be able to do this.
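A sketch of the verification half of that idea, using the third-party python-gnupg package (an assumption; any GPG binding, or shelling out to gpg itself, would do); it presumes the server's public key is already in the local keyring and that each module ships with a detached .sig file:
# pip install python-gnupg   (a wrapper around the gpg binary)
import gnupg

gpg = gnupg.GPG()   # uses the local keyring

def verify_module(module_path, sig_path):
    # Detached-signature check: refuse to import anything that fails it.
    with open(sig_path, 'rb') as sig:
        verified = gpg.verify_file(sig, module_path)
    if not verified:
        raise RuntimeError('bad signature on %s; refusing to import it' % module_path)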
