I run a SOAP server in Django.
Is it possible to create a SOAP method that returns a soaplib ClassModel instance without the <{method name}Response><{method name}Result> tags?
For example, here is part of my SOAP server code:
# -*- coding: cp1254 -*-
from soaplib.core.service import rpc, DefinitionBase, soap
from soaplib.core.model.primitive import String, Integer, Boolean
from soaplib.core.model.clazz import Array, ClassModel
from soaplib.core import Application
from soaplib.core.server.wsgi import Application as WSGIApplication
from soaplib.core.model.binary import Attachment
class documentResponse(ClassModel):
    __namespace__ = ""
    msg = String
    hash = String

class MyService(DefinitionBase):
    __service_interface__ = "MyService"
    __port_types__ = ["MyServicePortType"]

    @soap(String, Attachment, String, _returns=documentResponse, _faults=(MyServiceFaultMessage,), _port_type="MyServicePortType")
    def sendDocument(self, fileName, binaryData, hash):
        binaryData.file_name = fileName
        binaryData.save_to_file()
        resp = documentResponse()
        resp.msg = "Saved"
        resp.hash = hash
        return resp
and it responds like this:
<senv:Body>
  <tns:sendDocumentResponse>
    <tns:sendDocumentResult>
      <hash>14a95636ddcf022fa2593c69af1a02f6</hash>
      <msg>Saved</msg>
    </tns:sendDocumentResult>
  </tns:sendDocumentResponse>
</senv:Body>
But I need a response like this:
<senv:Body>
  <ns3:documentResponse>
    <hash>A694EFB083E81568A66B96FC90EEBACE</hash>
    <msg>Saved</msg>
  </ns3:documentResponse>
</senv:Body>
What kind of configuration should I make in order to get the second response shown above?
Thanks in advance.
I haven't used Python's soaplib yet, but I had the same problem while using .NET SOAP libraries. Just for reference, in .NET this is done using the following attribute:
[SoapDocumentMethod(ParameterStyle=SoapParameterStyle.Bare)]
I've looked in the soaplib source, but it seems it doesn't have a similar option. The closest thing I've found is the _style property. As seen from the code at https://github.com/soaplib/soaplib/blob/master/src/soaplib/core/service.py#L124, when using

@soap(..., _style='document')

it doesn't append the %sResult tag, but I haven't tested this. Try it and see if it works the way you want.
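For instance, a minimal sketch of that (untested; everything except the added _style keyword is taken from the question):

@soap(String, Attachment, String,
      _returns=documentResponse,
      _faults=(MyServiceFaultMessage,),
      _port_type="MyServicePortType",
      _style='document')
def sendDocument(self, fileName, binaryData, hash):
    # method body unchanged from the question
    binaryData.file_name = fileName
    binaryData.save_to_file()
    resp = documentResponse()
    resp.msg = "Saved"
    resp.hash = hash
    return resp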
If it doesn't work but you still want to get this kind of response, look at Spyne:
http://spyne.io/docs/2.10/reference/decorator.html
It is a fork of soaplib (I think), and its decorator has the _soap_body_style='bare' argument, which I believe is what you want.
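A minimal Spyne sketch (untested; the names mirror the question, and the exact element name in the output depends on Spyne's naming rules):

from spyne.decorator import rpc
from spyne.model.complex import ComplexModel
from spyne.model.primitive import Unicode
from spyne.service import ServiceBase

class DocumentResponse(ComplexModel):
    msg = Unicode
    hash = Unicode

class MyService(ServiceBase):
    # _soap_body_style='bare' serializes the return object directly
    # into the SOAP body, without Response/Result wrapper elements.
    @rpc(Unicode, Unicode, _returns=DocumentResponse, _soap_body_style='bare')
    def sendDocument(ctx, fileName, hash):
        return DocumentResponse(msg="Saved", hash=hash)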
I am working on a project that is divided into two parts:

1. Retrieve a specific page.
2. Once the ID of this page is extracted, send requests to an API to obtain additional information on that page.

For the second point, and to follow Scrapy's asynchronous philosophy, where should such code be placed? (I hesitate between the spider and a pipeline.)
Do we have to use additional libraries like asyncio and aiohttp to achieve this goal asynchronously? (I love aiohttp, so using it is not a problem.)
Thank you.
Since you're doing this to fetch additional information about an item, I'd just yield a request from the parsing method, passing the already scraped information in the meta attribute.
You can see an example of this at https://doc.scrapy.org/en/latest/topics/request-response.html#topics-request-response-ref-request-callback-arguments
This can also be done in a pipeline (either using scrapy's engine API or a different library, e.g. treq).
I do, however, think that doing it "the normal way" from the spider makes more sense in this instance; see the sketch below.
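A minimal sketch of that approach (untested; the selector and API URL are made up):

import scrapy

class PageSpider(scrapy.Spider):
    name = 'page'
    start_urls = ['https://example.com/some-page']

    def parse(self, response):
        # Keep what has been scraped so far.
        item = {'page_id': response.css('body::attr(data-id)').get()}
        # Fetch the extra information, carrying the partial item in meta.
        yield scrapy.Request(
            'https://api.example.com/pages/%s' % item['page_id'],
            callback=self.parse_api,
            meta={'item': item})

    def parse_api(self, response):
        # Complete the item with the API response and emit it.
        item = response.meta['item']
        item['extra'] = response.text
        yield item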
I recently had the same problem (again) and found an elegant solution using the Twisted decorator twisted.internet.defer.inlineCallbacks.
# -*- coding: utf-8 -*-
import scrapy
import re
from twisted.internet.defer import inlineCallbacks
from sherlock import utils, items, regex

class PagesSpider(scrapy.spiders.SitemapSpider):
    name = 'pages'
    allowed_domains = ['thing.com']
    sitemap_follow = [r'sitemap_page']

    def __init__(self, site=None, *args, **kwargs):
        super(PagesSpider, self).__init__(*args, **kwargs)

    @inlineCallbacks
    def parse(self, response):
        # things
        request = scrapy.Request("https://google.com")
        response = yield self.crawler.engine.download(request, self)
        # Twisted executes the request and resumes the generator here with the response
        print(response.text)
I have a problem with routing my URL to Flask, specifically with opening it in a web browser. All I want is to pass the hash symbol "#" together with some Russian words (such as "#привет" or "#ПомогитеМнеПожалуйста").
(Screenshot of the error omitted.)
My code at the moment looks like this:
# -*- coding: utf-8 -*-
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hashtags/<names>', methods=['GET'])
def get_hashtags(names):
    return jsonify({'Segmentation Hashtags': names})

if __name__ == '__main__':
    app.run(port=9876)
So, basically, <names> is a parameter of the function get_hashtags that is used for passing my future hashtag to the web browser using jsonify. I need to find a way of passing any hashtag I want, with the hash symbol "#" plus Russian letters. As far as I know, there are encoding methods (something like .decode('utf-8')), but I have no idea how to use them properly.
Thanks in advance!
The hash symbol is causing the error. You can try to remove it on the client side and just request this link instead:
/hashtags/привет
A hash in a URL is used to tell the browser which element on the page to jump to; everything after it (the fragment) is never sent to the server. For instance, on https://en.wikipedia.org/wiki/Stack_Overflow#Technology
the #Technology means jump to the Technology section of the page.
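If the '#' itself must reach Flask, one possibility (untested sketch, using the requests library as a stand-in client) is to percent-encode it as %23, since only a literal '#' starts a fragment:

import requests

# '%23' is the percent-encoded form of '#'; unlike a literal '#',
# it is sent to the server as part of the path.
resp = requests.get(u'http://localhost:9876/hashtags/%23привет')
print(resp.json())  # expected: {'Segmentation Hashtags': u'#привет'}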
Alternatively, try unidecode, which transliterates Unicode text to its closest ASCII representation (so привет becomes privet):

from unidecode import unidecode

@app.route('/hashtags/<string:names>', methods=['GET'])
def get_hashtags(names):
    return jsonify({'Segmentation Hashtags': unidecode(names)})
I want to use https://github.com/bear/python-twitter/ and record/check its API requests with https://github.com/kevin1024/vcrpy or https://github.com/agriffis/vcrpy-unittest.
From line 30 of https://github.com/bear/python-twitter/blob/master/twitter/api.py#L30:

import requests
and later on:
res = requests.post(url='https://api.twitter.com/oauth2/token',
                    data={'grant_type': 'client_credentials'},
                    headers=post_headers)
# ... etc ...
Yet when doing something like:
from vcr_unittest import VCRTestCase
import vcr
import twitter
from django.conf import settings

class TwitterRetrievalAndStorageTests(VCRTestCase):
    @vcr.use_cassette()
    def test_recorded_session(self):
        api = twitter.Api(
            consumer_key=settings.TWITTER_CONSUMER_KEY,
            consumer_secret=settings.TWITTER_CONSUMER_SECRET,
            access_token_key=settings.TWITTER_ACCESS_KEY,
            access_token_secret=settings.TWITTER_ACCESS_SECRET)
        statuses = api.GetUserTimeline(screen_name='nntaleb')
        for s in statuses:
            print(s)
no cassette file is being created. Is there a way to do this with python-twitter?
VCRTestCase is built on top of vcr, so it doesn't make much sense to use both simultaneously. If you remove the @vcr.use_cassette() line, you should see a cassette named TwitterRetrievalAndStorageTests.test_recorded_session.yaml (or json) in your cwd. If instead you don't inherit from VCRTestCase and use @vcr.use_cassette(), the cassette name should be test_recorded_session.yaml and it should be in the same folder as your class.
In general, if you use both, I have noticed that vcr takes precedence over vcrpy-unittest, but that is not consistently the case.
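For example, the first option is just the question's test without the decorator (sketch):

from vcr_unittest import VCRTestCase
import twitter
from django.conf import settings

class TwitterRetrievalAndStorageTests(VCRTestCase):
    def test_recorded_session(self):
        # VCRTestCase records/replays the HTTP traffic of this test into
        # TwitterRetrievalAndStorageTests.test_recorded_session.yaml
        api = twitter.Api(
            consumer_key=settings.TWITTER_CONSUMER_KEY,
            consumer_secret=settings.TWITTER_CONSUMER_SECRET,
            access_token_key=settings.TWITTER_ACCESS_KEY,
            access_token_secret=settings.TWITTER_ACCESS_SECRET)
        statuses = api.GetUserTimeline(screen_name='nntaleb')
        self.assertTrue(statuses)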
A few days ago I started learning Python's Twisted, and this is how I made my web server:
from twisted.application import internet, service
from twisted.web import static, server, script
from twisted.web.resource import Resource
import os

class NotFound(Resource):
    isLeaf = True
    def render(self, request):
        return "Sorry... the page you're requesting is not found / forbidden"

class myStaticFile(static.File):
    def directoryListing(self):
        return self.childNotFound

# root = static.File(os.getcwd() + "/www")
root = myStaticFile(os.getcwd() + "/www")
root.indexNames = ['index.py']
root.ignoreExt(".py")
root.processors = {'.py': script.ResourceScript}
root.childNotFound = NotFound()

application = service.Application('web')
sc = service.IServiceCollection(application)
i = internet.TCPServer(8080, server.Site(root))
i.setServiceParent(sc)
In my code, I subclass twisted.web.static.File and override directoryListing,
so when a user tries to access my resource folder (http://localhost:8080/resource/ or http://localhost:8080/resource/css), it returns a NotFound page,
but they can still open/read http://localhost:8080/resource/css/style.css.
It works...
What I want to know is: is this the correct way to do that?
Is there another, more 'perfect' way?
I was looking for a config option that disables directory listing, something like root.dirListing=False, but no luck...
Yes, that's a reasonable way to do it. You can also use twisted.web.resource.NoResource or twisted.web.resource.ForbiddenResource instead of defining your own NotFound.
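A minimal sketch with the built-in resource (untested):

from twisted.web import static
from twisted.web.resource import ForbiddenResource

class NoListingFile(static.File):
    def directoryListing(self):
        # Served instead of a generated index when a directory is requested.
        return ForbiddenResource()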
I need to perform HTTP PUT operations from Python. Which libraries have been proven to support this? More specifically, I need to perform PUT on key pairs, not file upload.
I have been trying to work with restful_lib.py, but I get invalid results from the API that I am testing. (I know the results are invalid because I can fire off the same request with curl from the command line and it works.)
After attending PyCon 2011 I came away with the impression that pycurl might be my solution, so I have been trying to implement that. I have two issues here. First, pycurl renames "PUT" as "UPLOAD", which seems to imply that it is focused on file uploads rather than key pairs. Second, when I try to use it, I never seem to get a return from the .perform() step.
Here is my current code:
import pycurl
import urllib

url = 'https://xxxxxx.com/xxx-rest'
UAM = pycurl.Curl()

def on_receive(data):
    print data

arglist = [
    ('username', 'testEmailAdd@test.com'),
    ('email', 'testEmailAdd@test.com'),
    ('username', 'testUserName'),
    ('givenName', 'testFirstName'),
    ('surname', 'testLastName')]
encodedarg = urllib.urlencode(arglist)
path2 = url + "/user/" + "99b47002-56e5-4fe2-9802-9a760c9fb966"
UAM.setopt(pycurl.URL, path2)
UAM.setopt(pycurl.POSTFIELDS, encodedarg)
UAM.setopt(pycurl.SSL_VERIFYPEER, 0)
UAM.setopt(pycurl.UPLOAD, 1)  # set method to "PUT"
UAM.setopt(pycurl.CONNECTTIMEOUT, 1)
UAM.setopt(pycurl.TIMEOUT, 2)
UAM.setopt(pycurl.WRITEFUNCTION, on_receive)
print "about to perform"
print UAM.perform()
httplib should manage it:
http://docs.python.org/library/httplib.html
There's an example on this page: http://effbot.org/librarybook/httplib.htm
urllib and urllib2 are also worth a look.
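For reference, a bare-bones httplib PUT of url-encoded key pairs might look like this (untested sketch; the host and path are placeholders):

import httplib
import urllib

# Encode the key pairs as a form body, exactly as for a POST.
body = urllib.urlencode([('key', 'value')])
conn = httplib.HTTPSConnection('example.com')
conn.request('PUT', '/resource/1', body,
             {'Content-Type': 'application/x-www-form-urlencoded'})
response = conn.getresponse()
print response.status, response.read()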
Thank you all for your assistance. I think I have found an answer.
My code now looks like this:
import urllib
import httplib
import lxml
from lxml import etree

url = 'xxxx.com'
UAM = httplib.HTTPSConnection(url)
arglist = [
    ('username', 'testEmailAdd@test.com'),
    ('email', 'testEmailAdd@test.com'),
    ('username', 'testUserName'),
    ('givenName', 'testFirstName'),
    ('surname', 'testLastName')]
encodedarg = urllib.urlencode(arglist)
uuid = "99b47002-56e5-4fe2-9802-9a760c9fb966"
path = "/uam-rest/user/" + uuid
UAM.putrequest("PUT", path)
UAM.putheader('content-type', 'application/x-www-form-urlencoded')
UAM.putheader('accepts', 'application/com.internap.ca.uam.ama-v1+xml')
UAM.putheader("Content-Length", str(len(encodedarg)))
UAM.endheaders()
UAM.send(encodedarg)
response = UAM.getresponse()
html = etree.HTML(response.read())
result = etree.tostring(html, pretty_print=True, method="html")
print result
Update: Now I am getting valid responses. This seems to be my solution. (The pretty-print at the end isn't working, but I don't really care; it's just there while I am building the function.)