I am trying to write a RESTful web service in Python, but while trying out the tutorials on the CherryPy website I ended up with an error:
Traceback (most recent call last):
File "rest.py", line 35, in <module>
cherrypy.quickstart(StringGeneratorWebService(), '/', conf)
TypeError: expose_() takes exactly 1 argument (0 given)
where rest.py is my file, which contains the exact same code as on the site under the subtitle "Give us a REST".
It is obvious from the error message that I am missing a parameter that should be passed in, but I am not clear where exactly I should amend the code to make it work.
I tried fixing things on line 35, but nothing helped and I am stuck! Please help me clear this up, or give me a code snippet for making a REST service in CherryPy. Thank you!
The CherryPy version that you're using (3.2.2) doesn't support the cherrypy.expose decorator on classes; that functionality was added in version 6.
You can use the old syntax of setting the exposed attribute to True (it is also compatible with the newer versions).
The class would end up like:
class StringGeneratorWebService(object):
    exposed = True

    @cherrypy.tools.accept(media='text/plain')
    def GET(self):
        return cherrypy.session['mystring']

    def POST(self, length=8):
        some_string = ''.join(random.sample(string.hexdigits, int(length)))
        cherrypy.session['mystring'] = some_string
        return some_string

    def PUT(self, another_string):
        cherrypy.session['mystring'] = another_string

    def DELETE(self):
        cherrypy.session.pop('mystring', None)
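As an aside, the string-generation technique in the POST handler is plain standard library and can be sanity-checked outside CherryPy; a minimal sketch (the function name is mine, not from the tutorial):

```python
import random
import string

def generate_string(length=8):
    # Same approach as the POST handler: sample `length` distinct
    # characters from string.hexdigits and join them
    return ''.join(random.sample(string.hexdigits, int(length)))

print(generate_string())  # prints 8 random hex-digit characters
```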
I'm trying to get an S3 hook in Apache Airflow using the Connection object.
It looks like this:
class S3ConnectionHandler:
    def __init__(self):
        # values are read from configuration class, which loads from env. variables
        self._s3 = Connection(
            conn_type="s3",
            conn_id=config.AWS_CONN_ID,
            login=config.AWS_ACCESS_KEY_ID,
            password=config.AWS_SECRET_ACCESS_KEY,
            extra=json.dumps({"region_name": config.AWS_DEFAULT_REGION}),
        )

    @property
    def s3(self) -> Connection:
        return get_live_connection(self.logger, self._s3)

    @property
    def s3_hook(self) -> S3Hook:
        return self.s3.get_hook()
I get an error:
Broken DAG: [...] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/connection.py", line 282, in get_hook
return hook_class(**{conn_id_param: self.conn_id})
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 354, in __init__
raise AirflowException('Either client_type or resource_type must be provided.')
airflow.exceptions.AirflowException: Either client_type or resource_type must be provided.
Why does this happen? From what I understand, S3Hook calls the constructor of its parent class, AwsHook, and passes client_type as the string "s3". How can I fix this?
I took this configuration for hook from here.
EDIT: I even get the same error when directly creating the S3 hook:
@property
def s3_hook(self) -> S3Hook:
    # return self.s3.get_hook()
    return S3Hook(
        aws_conn_id=config.AWS_CONN_ID,
        region_name=self.config.AWS_DEFAULT_REGION,
        client_type="s3",
        config={"aws_access_key_id": self.config.AWS_ACCESS_KEY_ID,
                "aws_secret_access_key": self.config.AWS_SECRET_ACCESS_KEY},
    )
If you're using Airflow 2,
please refer to the new documentation; it can be kind of tricky, as most Google searches redirect you to the old docs.
In my case I was using AwsHook and had to switch to AwsBaseHook, as it seems to be the only correct one for version 2. I had to change the import path as well: the AWS modules aren't under contrib anymore, they are under providers now.
As you can see in the new documentation, you can pass either client_type or resource_type as an AwsBaseHook parameter, depending on which one you want to use. Once you do that, your problem should be solved.
No other answers worked and I couldn't get around this, so I ended up using the boto3 library directly, which also gave me more low-level flexibility than Airflow hooks.
First of all, I suggest that you create an S3 connection; for this you must go to the path Admin >> Connections.
After that, and assuming that you want to load a file into an S3 bucket, you can write:
def load_csv_S3():
    # Send to S3
    hook = S3Hook(aws_conn_id="s3_conn")
    hook.load_file(
        filename='/write_your_path_file/filename.csv',
        key='filename.csv',
        bucket_name="BUCKET_NAME",
        replace=True,
    )
Finally, you can check all the functions of S3Hook in the documentation.
What has worked for me, in case it helps someone, in my answer to a similar post: https://stackoverflow.com/a/73652781/4187360
I am trying to get Flask-openid working, but I keep hitting this error when trying to log in:
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
It happens when using this function
oid.try_login(openid, ask_for=['email', 'fullname', 'nickname'])
This is where the function is used:
@app.route('/login', methods=['GET', 'POST'])
@oid.loginhandler
def login():
    """Does the login via OpenID. Has to call into `oid.try_login`
    to start the OpenID machinery.
    """
    # if we are already logged in, go back to where we came from
    if g.user is not None:
        app.logger.info('logged-in: ' + oid.get_next_url())
        return redirect(oid.get_next_url())
    if request.method == 'POST':
        openid = request.form.get('openid_identifier')
        if openid:
            app.logger.info(request.form)
            app.logger.info('logging-in: ' + oid.get_next_url())
            return oid.try_login(openid, ask_for=['email', 'fullname',
                                                  'nickname'])
    app.logger.info('not-logged-in: ' + oid.get_next_url())
    return render_template('login.html', next=oid.get_next_url(),
                           error=oid.fetch_error())
It actually seems to be an issue with lxml, which Flask-openid uses:
File "C:\Python33\lib\site-packages\openid\yadis\etxrd.py", line 69, in parseXRDS
element = ElementTree.XML(text)
File "lxml.etree.pyx", line 3012, in lxml.etree.XML (src\lxml\lxml.etree.c:67876)
File "parser.pxi", line 1781, in lxml.etree._parseMemoryDocument (src\lxml\lxml.etree.c:102435)
I have tried a couple of example projects on github, but they all have the same issue. Is there some way I can get Flask-openid to work in Python 3?
I'm only just learning Flask myself, so I may not be much help.
However take a look at http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-v-user-logins
The author mentions
Note that due to the differences in unicode handling between Python 2 and 3 we have to provide two alternative versions of this method.
He uses str instead of unicode:
def get_id(self):
    try:
        return unicode(self.id)  # python 2
    except NameError:
        return str(self.id)  # python 3
I might be completely wrong, in which case I'm sorry; it's worth a try though.
It's much more than just strings. Flask-openid is based on an older python-openid package that is not Python 3 compatible. There is a new version of python-openid specifically for Python 3:
https://pypi.python.org/pypi/python3-openid/3.0.1
The same blog mentioned earlier also details this:
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-v-user-logins
"Unfortunately version 1.2.1 of Flask-OpenID (the current official version) does not work well with Python 3. Check what version you have by running the following command:"
I have the following code somewhere in a Pyramid application:
import xmlrpclib
....
@view_config(route_name='api-paypoint', renderer='string')
def api_paypoint(request):
    call_data = ["mid", "password", "name"]
    api_server = xmlrpclib.ServerProxy('https://www.secpay.com/secxmlrpc/make_call')
    response = api_server.SECVPN.validateCardFull(call_data)
    print response
    return {}
What I'm trying to do is call the Secpay API (here's a Java example: http://www.paypoint.net/support/gateway/soap-xmlrpc/xmlrpc-java/ ).
I'm getting the following error:
Exception Value: <Fault 0: 'java.lang.NoSuchMethodException: com.secpay.secvpn.SECVPN.validateCardFull(java.util.Vector)'>
Any idea what is wrong here?
I found the problem. I was trying to pass a list to api_server.SECVPN.validateCardFull(), which is wrong. This should be changed to:
api_server.SECVPN.validateCardFull('mid', 'password', 'name')
You're calling with the wrong number of arguments, and the Java server side can't find a method matching that signature. If you call with 14 strings, the exception changes (something about the server side failing to encode a null).
proxy.SECVPN.validateCardFull("","","","","","","","","","","","","","")
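For reference, the same call sketched with the standard library client (xmlrpc.client is the Python 3 name for xmlrpclib); the empty strings are placeholders for the real credentials and card fields:

```python
import xmlrpc.client  # Python 2: import xmlrpclib

proxy = xmlrpc.client.ServerProxy('https://www.secpay.com/secxmlrpc/make_call')
# Arguments must be passed individually; a single list is marshalled as a
# java.util.Vector, which matches no method signature on the Java side
# proxy.SECVPN.validateCardFull("", "", "", "", "", "", "", "", "", "", "", "", "", "")
```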
I have a site on the Pyramid framework and want to cache with memcached. For testing I used the memory cache type and everything was OK. I'm using the pyramid_beaker package.
Here is my previous (working) code.
In .ini file
cache.regions = day, hour, minute, second
cache.type = memory
cache.second.expire = 1
cache.minute.expire = 60
cache.hour.expire = 3600
cache.day.expire = 86400
In views.py:
from beaker.cache import cache_region

@cache_region('hour')
def get_popular_users():
    # some code to work with db
    return some_dict
The only .ini settings I've found in the docs were for the memory and file cache types, but I need to work with memcached.
First of all, I installed the memcached package from the official Ubuntu repository, and also python-memcached into my virtualenv.
In the .ini file I replaced cache.type = memory with cache.type = memcached, and got the following error:
beaker.exceptions.MissingCacheParameter
MissingCacheParameter: url is required
What am I doing wrong?
Thanks in advance!
So, using the TurboGears documentation as a guide, what settings do you have for the url?
[app:main]
beaker.cache.type = ext:memcached
beaker.cache.url = 127.0.0.1:11211
# you can also store sessions in memcached, should you wish
# beaker.session.type = ext:memcached
# beaker.session.url = 127.0.0.1:11211
It looks to me as if memcached requires a url to initialize correctly:
def __init__(self, namespace, url=None, data_dir=None, lock_dir=None, **params):
    NamespaceManager.__init__(self, namespace)
    if not url:
        raise MissingCacheParameter("url is required")
I am not really sure why the code allows url to be optional (defaulting to None) and then requires it. I think it would have been simpler just to require the url as an argument.
Later: in response to your next question:
when I used cache.url I've got next error: AttributeError:
'MemcachedNamespaceManager' object has no attribute 'lock_dir'
I'd say that, the way I read the code below, you have to provide either lock_dir or data_dir to initialize self.lock_dir:
if lock_dir:
    self.lock_dir = lock_dir
elif data_dir:
    self.lock_dir = data_dir + "/container_mcd_lock"
if self.lock_dir:
    verify_directory(self.lock_dir)
You can replicate that exact error using this test code:
class Foo(object):
    def __init__(self, lock_dir=None, data_dir=None):
        if lock_dir:
            self.lock_dir = lock_dir
        elif data_dir:
            self.lock_dir = data_dir + "/container_mcd_lock"
        if self.lock_dir:
            verify_directory(self.lock_dir)

f = Foo()
It turns out like this:
>>> f = Foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 7, in __init__
AttributeError: 'Foo' object has no attribute 'lock_dir'
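Putting the two requirements together, the .ini presumably needs both a url and a lock directory; a sketch under that assumption (the path is hypothetical):

```ini
cache.type = ext:memcached
cache.url = 127.0.0.1:11211
cache.lock_dir = /tmp/myapp_cache_lock
```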
Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:
client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
logging.info(client.GetAuthSubToken())
What gets logged is "None", so that does not seem right :-(
If I use this:
temp = client.upgrade_to_session_token(sessionToken)
logging.info(dump(temp))
I get this:
{'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}
so I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work.
If I try to use AuthSubTokenInfo I get this:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "controllers/indexController.py", line 47, in get
logging.info(client.AuthSubTokenInfo())
File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
token = self.token_store.find_token(scopes[0])
TypeError: 'NoneType' object is unsubscriptable
So it looks like my token_store is not getting filled in correctly; is that something I should be doing myself?
Also I am using gdata 2.0.9
Thanks
Matt
To answer my own question:
When you get the token, just call:
client.token_store.add_token(sessionToken)
and App Engine will store it in a new entity type for you. Then, when making calls to the calendar service, just don't set the authsubtoken, as it will take care of that for you as well.