I have recently taken over support for an app that uses rauth to connect to LinkedIn. The code that is failing is:
self.linkedin = OAuth1Service(
    name='linkedin',
    consumer_key=self._consumer_key,
    consumer_secret=self._consumer_secret,
    request_token_url=self.request_token_url,
    access_token_url=self.access_token_url,
    authorize_url=self.authorize_url)

self.request_token, self.request_token_secret = \
    self.linkedin.get_request_token(method='GET',
                                    oauth_callback=self.callback_url)
The owner of the app says this used to work but now we're getting:
TypeError: request() got an unexpected keyword argument 'oauth_callback'
Can you point me to some doc/examples that would help me re-architect this?
-Jim
It sounds like you're using a later version of rauth than the original author was. You will need to amend the code to conform to the changes in the rauth API. These are mostly small, partly necessitated by the move to Requests v1.0.0, which had many breaking changes in its API.
You should read the upgrade guide. Additionally, there are a number of working examples.
Finally, this particular error indicates that an unexpected parameter was passed in, namely oauth_callback. This is because rauth is now just a wrapper over Requests, and Requests doesn't know what to do with oauth_callback. Instead, you should use the native Requests API and pass it in, in this case via the params parameter, e.g.:
linkedin = OAuth1Service(name='linkedin',
                         consumer_key=consumer_key,
                         consumer_secret=consumer_secret,
                         request_token_url=request_token_url,
                         access_token_url=access_token_url,
                         authorize_url=authorize_url)

request_token, request_token_secret = \
    linkedin.get_request_token(method='GET',
                               params={'oauth_callback': callback_url})
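For completeness, a rough sketch of what the rest of the flow could look like with the newer rauth API (the verifier value is a placeholder for whatever oauth_verifier LinkedIn appends to your callback URL, and the people endpoint is just an illustration):

# Send the user here to approve the app; LinkedIn redirects back to the
# callback URL with oauth_token and oauth_verifier query parameters.
authorize_url = linkedin.get_authorize_url(request_token)

verifier = 'oauth_verifier_value_from_the_callback'  # placeholder

# Exchange the request token plus verifier for an authenticated session.
session = linkedin.get_auth_session(request_token,
                                    request_token_secret,
                                    method='POST',
                                    data={'oauth_verifier': verifier})

response = session.get('http://api.linkedin.com/v1/people/~',
                       params={'format': 'json'})
print(response.json())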
Hope that helps!
I am using SuperTokens to build an authentication system with FastAPI as the backend, but while using their prebuilt UI and the already set-up backend code in Python, I am not able to access the 127.0.0.0:3000/docs endpoint. It shows only a blank page.
Also, the custom routes that I have built in my API are not working and are not accessible.
Here is the code that I have written
# imports assumed from the standard supertokens-python setup
from fastapi import Depends
from supertokens_python.recipe.session import SessionContainer
from supertokens_python.recipe.session.framework.fastapi import verify_session

@app.get("/sessioninfo")
async def secure_api(s: SessionContainer = Depends(verify_session())):
    return {
        "sessionHandle": s.get_handle(),
        "userId": s.get_user_id(),
        "accessTokenPayload": s.get_access_token_payload(),
    }
Here is the app_info part of the init function in supertokens:
app_info = InputAppInfo(
    app_name="demoApp",
    api_domain="http://localhost:3001",
    website_domain="http://localhost:3000",
)
After hitting the API on port 3000 at the /sessioninfo endpoint, I am getting a blank page:
localhost:3000/session_info
And for localhost:3001/session_info I am getting an internal server error.
localhost:3001/session_info
localhost:3000/session_info won't respond with session info because it's being handled by the front end.
But localhost:3001/session_info should work. This error is likely happening because of Python/dependency versions.
Update: as per this GH comment, this error is likely coming from Python 3.11, so maybe try a different Python version.
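For reference, here's a minimal sketch of how a FastAPI app is typically wired up with supertokens_python (the connection URI and recipe list are placeholders; adapt them to your own init call):

from fastapi import FastAPI
from supertokens_python import InputAppInfo, SupertokensConfig, init
from supertokens_python.framework.fastapi import get_middleware
from supertokens_python.recipe import session

init(
    app_info=InputAppInfo(
        app_name="demoApp",
        api_domain="http://localhost:3001",     # backend (FastAPI)
        website_domain="http://localhost:3000"  # front end
    ),
    # Placeholder core; point this at your own SuperTokens core instance.
    supertokens_config=SupertokensConfig(connection_uri="https://try.supertokens.com"),
    framework="fastapi",
    mode="asgi",
    recipe_list=[session.init()],
)

app = FastAPI()
app.add_middleware(get_middleware())

With the middleware registered on the API domain (port 3001), your own routes like /sessioninfo should be reached there, not on the website domain.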
I hope this helps :)
I was hoping to find an answer to my problem with the elasticsearch python framework. Maybe I'm completely blind or doing something absolutely wrong, but I'm very confused right now and can't find an adequate answer.
I'm currently trying to establish a connection to my elastic search API using the elasticsearch python framework, my code looks like this:
from elasticsearch import Elasticsearch

def create_es_connection(host: str, port: int, api_key_id: str, api_key: str, user: str, pw: str) -> Elasticsearch:
    return Elasticsearch([f"https://{user}:{pw}@{host}:{port}"])
This is working fine. I then created an API key for my test user, which I'm also passing to the function above. I am now trying to do something similar to this: Elasticsearch([f"https://{api_key_id}:{api_key}@{host}:{port}"]). That is, I want to leave out the user and password completely, because with regard to the larger project behind this snippet, I'm not comfortable saving the user/password credentials in my project (and maybe even pushing them to our git server). Sooner or later these credentials have to be entered somewhere, and I was thinking that saving only the API key and authenticating with that would be safer. Unfortunately, this isn't working out as planned, and I couldn't find anything about how to authenticate with the API key only.
What am I doing wrong? In fact, I found so little about this, that I'm questioning my fundamental understanding here. Thank you for your replies!
A working configuration for me was:
es = Elasticsearch(['localhost:9200'], api_key=('DuSkVm8BZ5TMcIF99zOC','2rs8yN26QSC_uPr31R1KJg'))
Elasticsearch's documentation shows how to generate and use an API key (usage is at the bottom of the page): https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
curl -H "Authorization: ApiKey ..." http://localhost:9200/_cluster/health
-H means "header", so to do the same in Python you will need to set this header. Rummaging through the elasticsearch module source code tells me that you might just be able to do the following:
Elasticsearch([f'http://{host}:{port}'], api_key=api_key)
The reason this works is that the **kwargs of the Elasticsearch.__init__ method are passed to Transport.__init__, whose **kwargs are in turn passed to Connection.__init__, which takes an api_key argument that is then used in _get_api_key_header_val to construct the appropriate header.
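In other words, the header value is just ApiKey followed by base64("id:api_key"). As a rough sketch with placeholder credentials, the equivalent of the curl call above using the requests library would be:

import base64

import requests

# Placeholder credentials; in practice these come from the create-API-key response.
api_key_id = 'my_api_key_id'
api_key = 'my_api_key'

token = base64.b64encode(f'{api_key_id}:{api_key}'.encode()).decode()
resp = requests.get('http://localhost:9200/_cluster/health',
                    headers={'Authorization': f'ApiKey {token}'})
print(resp.json())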
The following code shows that the api key header gets added to the HTTP headers of the request:
import http.client
import logging

import elasticsearch

http.client.HTTPConnection.debuglevel = 5
logging.basicConfig(level=logging.DEBUG)

c = elasticsearch.Elasticsearch(['localhost'], api_key='TestApiKey')
print(c.cluster.health(wait_for_status='green'))
This is definitely something that should be added to Elasticsearch's docs.
I've been working with the spotipy Python API for a few days, trying to get it to work. Each time I attempt a login request, it raises a traceback with a bad OAuth request.
I've used this code:
import spotipy.util as util

client_id = 'my_client_id'
client_secret = 'my_client_secret'
redirect_uri = 'https://mywebsite.mydomain/callback'
username = 'myusername'
scope = 'a list of scopes'

token = util.prompt_for_user_token(username, scope, client_id=client_id,
                                   client_secret=client_secret,
                                   redirect_uri=redirect_uri)
I then paste in a URL that looks like:
https://mywebsite.mydomain/callback?code=a_long_code
But each time it gives me a bad request from OAuth. Am I missing something? It seems to go through the login process fine; it just raises the traceback at the end.
Just in case people have this issue in the future, here is what I did:
1. In oauth2.py, find where it raises the error and, just before that, add something like self.problem = response.
2. Run the steps that util.prompt_for_user_token does by hand, i.e. make the OAuth URL requests yourself rather than through util.prompt_for_user_token (see the sketch after this answer).
3. See what sp_oauth.problem.text says.
In my case, it was an incorrect app secret.
MORTIFIED!
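To illustrate step 2, here is a rough sketch of running the OAuth steps by hand with spotipy's SpotifyOAuth (all values are placeholders):

from spotipy.oauth2 import SpotifyOAuth

sp_oauth = SpotifyOAuth(client_id='my_client_id',
                        client_secret='my_client_secret',
                        redirect_uri='https://mywebsite.mydomain/callback',
                        scope='a list of scopes')

# Open this URL in a browser, log in, and copy the ?code=... value from the
# URL you get redirected to.
print(sp_oauth.get_authorize_url())

code = 'a_long_code'  # placeholder: the code pasted back from the callback URL
token_info = sp_oauth.get_access_token(code)
print(token_info['access_token'])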
I simply want to receive notifications from dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the def_process function of the tutorial.
I can't actually verify if anything is really being sent from dropbox.
Also, any advice on how I can debug? I can't print anything from my program since it has to be run on a site rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token that is received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for the per-user delta cursors, which are also stored.
There are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
A possible workaround would be to use a database for this purpose.
As for debugging, you could use the logging facility for Python; it's thread-safe and capable of writing output to a file stream, and it should give you plenty of information if used properly.
More info here:
https://docs.python.org/2/howto/logging.html
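For example, a minimal setup that writes to a log file you can inspect on the server might look like this (the file name is just an illustration):

import logging

logging.basicConfig(filename='webhook.log',
                    level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.info('Webhook endpoint hit')
logging.debug('Request body: %s', '{"delta": {"users": [12345]}}')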
I am trying to update Firebase using the python-firebase library, but cannot get authentication to work, using adapted sample code:
from firebase import firebase as fb

auth = fb.FirebaseAuthentication('<firebase secret>', 'me@gmail.com',
                                 auth_payload={'uid': '<uid>'})  # NB renamed extras -> auth_payload, id -> uid here
firebase = fb.FirebaseApplication('https://<url>.firebaseio.com', authentication=auth)
result = firebase.get('/users', name=None, connection=None,
                      params={'print': 'pretty'})  # HTTPError: 401 Client Error: Unauthorized
print result
I keep getting (401) Unauthorized, but I notice that the token generated by the library is radically different from the one generated by a JavaScript version of FirebaseTokenGenerator, and the latter authenticates fine when I provide the same URL, uid and secret.
I noticed a GitHub issue questioning why the library did not just use the official Python firebase-token-generator, so I forked it and implemented the suggested change just in case it would make a difference, but I still get the same result.
Can anyone suggest what might be tripping me up here?
This library is 4 years old, which means a lot has changed for Firebase, especially after Google's acquisition. The way you access Firebase is now completely different.
I recommend using the official Firebase Admin Python SDK: https://github.com/firebase/firebase-admin-python
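For example, reading the same /users node with the Admin SDK might look roughly like this (a sketch, assuming a service account key file and your Realtime Database URL):

import firebase_admin
from firebase_admin import credentials, db

# Path to the service account JSON downloaded from the Firebase console (placeholder).
cred = credentials.Certificate('path/to/serviceAccountKey.json')
firebase_admin.initialize_app(cred, {
    'databaseURL': 'https://<url>.firebaseio.com'
})

users = db.reference('/users').get()
print(users)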
A really good alternative (though I'd prefer the official SDK) is this:
https://github.com/thisbejim/Pyrebase