ca_certs_locater/__init__.py import error - python

I was trying to authenticate with my API. However, it always shows an import error. Here is my code:
public_key = raw_input('...')
secret_key = raw_input('...')
client = upwork.Client(public_key, secret_key)
A URL is supposed to appear, but instead it shows:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/upwork/client.py", line 118, in __init__
ca_certs=ca_certs_locater.get(),
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ca_certs_locater/__init__.py", line 36, in get
raise ImportError()
I don't know what I should do about the ca_certs_locater

Before instantiating the upwork client, modify the module's LINUX_PATH constant.
import upwork
# Set the certificate path within the module
upwork.ca_certs_locater.LINUX_PATH = '/path/to/my/cert.crt'
...
client = upwork.Client(public_key, secret_key, **credentials)
...

I had this same problem. The solution is, as the comment suggests, to do what Python - SSL Issue with Oauth2 says, in combination with following the "SSL Certificate Note" on https://pypi.python.org/pypi/python-upwork. I did the following:
Read https://github.com/upwork/python-upwork/issues/9
Downloaded cacert.pem from Python - SSL Issue with Oauth2
Set the HTTPLIB_CA_CERTS_PATH environment variable to /path/to/cacert.pem
Then the import error went away. My use case was the upwork API; yours may be different, but the solution is the same either way.
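If you would rather set the variable from the script than from your shell, here is a minimal sketch (the path is a placeholder, and it assumes ca_certs_locater picks up HTTPLIB_CA_CERTS_PATH when upwork calls get()):
import os

# Placeholder path to the downloaded cacert.pem; set it before importing upwork.
os.environ['HTTPLIB_CA_CERTS_PATH'] = '/path/to/cacert.pem'

import upwork
public_key = raw_input('Public key: ')
secret_key = raw_input('Secret key: ')
client = upwork.Client(public_key, secret_key)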

Related

Authentication failed with Jenkins using python API

I am trying to create a job using the Python API. I have created my own config, but the authentication fails. It produces this error message:
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job
self.server + CREATE_JOB % locals(), config_xml, headers))
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open
'Possibly authentication failed [%s]' % (e.code)
jenkins.JenkinsException: Error in request.Possibly authentication failed [403]
The config file I created was copied from another job's config file, as that was the easiest way to build it.
I am using the jenkins module (import jenkins).
The server instance I create uses these credentials:
server = jenkins.Jenkins(jenkins_url, username='my_username', password='my_APITOKEN')
Any help will be greatly appreciated.
Error 403 is issued when the user is not allowed to access the resource. Are you able to access the resource manually with the same credentials? If there are separate admin credentials, try using those.
Also, I am not sure, but you could try running the Python script with admin rights.
As far as I know, for security reasons only admins are able to create jobs in Jenkins 2.x (to be specific, to send PUT requests). At least that's what I encountered using Jenkins Job Builder (also Python) with Jenkins 2.x.
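If it is a credentials problem, a quick way to check is to call the whoami endpoint before trying to create anything. A minimal sketch with python-jenkins (the URL, user name, and token are placeholders; the API token comes from your user's configure page in Jenkins):
import jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='my_username',
                         password='my_api_token')

# Raises a JenkinsException if the credentials are rejected,
# independent of any job-creation permissions.
print server.get_whoami()

# EMPTY_CONFIG_XML is a minimal job config shipped with python-jenkins.
server.create_job('test-job', jenkins.EMPTY_CONFIG_XML)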

BadParametersError: Invalid signature when using OVH Python wrapper

I'm using the OVH API along with its Python wrapper:
https://pypi.python.org/pypi/ovh
When trying to execute this code:
import ovh
client = ovh.Client()
# Print nice welcome message
print "Welcome", client.get('/me')['firstname']
I get this error:
Traceback (most recent call last):
File "index.py", line 6, in <module>
print "Welcome", client.get('/me')['firstname']
File "/home/rubinhozzz/.local/lib/python2.7/site-packages/ovh/client.py", line 290, in get
return self.call('GET', _target, None, _need_auth)
File "/home/rubinhozzz/.local/lib/python2.7/site-packages/ovh/client.py", line 419, in call
raise BadParametersError(json_result.get('message'))
ovh.exceptions.BadParametersError: Invalid signature
My info is saved in the ovh.conf as the documentation suggests.
[default]
; general configuration: default endpoint
endpoint=ovh-eu
[ovh-eu]
application_key=XXXlVy5SE7dY7Gc5
application_secret=XXXdTEBKHweS5F0P0tb0lfOa8GoQPy4l
consumer_key=pscg79fXXX8ESMIXXX7dR9ckpDR7Pful
It looks like I can connect, but when I try to use a service such as /me, the error is raised.
It is difficult to reproduce the issue because it requires an application key, which seems to be granted only to existing OVH customers. I couldn't even see a link to an account registration page on their site.
Looking at the code of the call() method in /ovh/client.py, it seems that their server doesn't recognise the format or the content of the signature sent by your script. According to the inline documentation, the signature is generated from these parameters:
application_secret
consumer_key
METHOD
full request url
body
server current time (takes time delta into account)
Since your call is identical to the example code provided on the OVH Python package web page, the last four parameters should be valid. In that case it looks like either the application secret or the consumer key (or both) in your config file is not correct.
See also the documentation on OVH site under the 'Signing requests' heading. They explain how the signature is made and what it should look like.
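For reference, a rough sketch of how that signature is built, following that documentation (the keys and timestamp below are placeholders; the wrapper normally computes this for you):
import hashlib

application_secret = 'the_application_secret'
consumer_key = 'the_consumer_key'
method = 'GET'
full_url = 'https://eu.api.ovh.com/1.0/me'
body = ''
timestamp = '1457018875'  # server time, corrected by the local time delta

# The signature is "$1$" followed by the SHA-1 of the "+"-joined fields.
raw = '+'.join([application_secret, consumer_key, method, full_url, body, timestamp])
signature = '$1$' + hashlib.sha1(raw).hexdigest()
print signature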
Perhaps try re-creating the API application to obtain a new key and secret, and make sure you copy them without any additional characters.

Authentication for Google Storage using Python

I want to build an app which interacts easily with Google Storage, i.e., lists files in a bucket, downloads a file, and uploads a file.
Following this tutorial, I decided to use a service account (not a user one) for authentication and followed the procedure. I created a public/private key in my console and downloaded the key to my machine. Then I created the .boto file which points to this private key, and finally launched this program, and it worked:
import boto
import gcs_oauth2_boto_plugin

uri = boto.storage_uri('txxxxxxxxxxxxxx9.appspot.com', 'gs')
for obj in uri.get_bucket():
    print '%s://%s/%s' % (uri.scheme, uri.bucket_name, obj.name)
As you can see, the package gcs_oauth2_boto_plugin is not used in the code, so I decided to get rid of it. But magically, when I comment out the import gcs_oauth2_boto_plugin line and run the program again, I get this error:
C:\Users\...\Anaconda3\envs\snakes\python.exe C:/Users/.../Dropbox/Prog/s3_manifest_builder/test.py
Traceback (most recent call last):
File "C:/Users/.../Dropbox/Prog/s3_manifest_builder/test.py", line 10, in <module>
for obj in uri.get_bucket():
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\storage_uri.py", line 181, in get_bucket
conn = self.connect()
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\storage_uri.py", line 140, in connect
**connection_args)
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\gs\connection.py", line 47, in __init__
suppress_consec_slashes=suppress_consec_slashes)
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\s3\connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "C:\Users\...\Anaconda3\envs\snakes\lib\site-packages\boto\auth.py", line 987, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
So my questions are:
1- How can you explain that deleting an import which IS NOT USED in the code makes it fail?
2- More generally, to be sure I understand the authentication process: if I want to run my app on a machine, must I make sure the .boto file (which points to my service account private key) has been generated beforehand? Or is there a cleaner/easier way to give my application access to Google Storage for in/out interactions?
For instance, when I want to connect to an S3 bucket with boto, I only have to provide the public and private keys as strings to my program. I don't need to generate a .boto file, import extra packages, etc., which makes it so much easier to use, doesn't it?
1- How can you explain that deleting an import which IS NOT USED in the code makes it fail?
The first hint is that the module is named a "plugin", although exactly how that is implemented isn't clear on the surface. Intuitively, though, it makes some sense that not importing a module could lead to an exception of this kind. Initially I thought it was the bad practice of doing stateful work on a global during the import of that module. In some ways that is what it was, but only because class hierarchies are "state" in metaprogrammable Python.
It turns out (as in many cases) that inspecting the location the stack trace was thrown from (boto.auth.get_auth_handler()) provides the key to understanding the issue.
(See the linked source for the commented version.)
def get_auth_handler(host, config, provider, requested_capability=None):
    ready_handlers = []
    auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)
    for handler in auth_handlers:
        try:
            ready_handlers.append(handler(host, config, provider))
        except boto.auth_handler.NotReadyToAuthenticate:
            pass
    if not ready_handlers:
        checked_handlers = auth_handlers
        names = [handler.__name__ for handler in checked_handlers]
        raise boto.exception.NoAuthHandlerFound(
            'No handler was ready to authenticate. %d handlers were checked.'
            ' %s '
            'Check your credentials' % (len(names), str(names)))
Note the reference to the class AuthHandler, which is defined in boto.auth_handler.
So, you can see that we need to look at the contents of boto.plugin.get_plugin(AuthHandler, requested_capability):
def get_plugin(cls, requested_capability=None):
    if not requested_capability:
        requested_capability = []
    result = []
    for handler in cls.__subclasses__():
        if handler.is_capable(requested_capability):
            result.append(handler)
    return result
So it finally becomes clear when we look at the definition of the OAuth2Auth class in gcs_oauth2_boto_plugin.oauth2_plugin: it is declared as a subclass of boto.auth_handler.AuthHandler and signals its auth capabilities to the boto framework via the following member variable:
capability = ['google-oauth2', 's3']
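In other words, simply importing the plugin module is what registers the handler, because __subclasses__() only sees classes whose defining module has actually been executed. A tiny standalone illustration of that mechanism (not boto code):
class AuthHandler(object):
    pass

print AuthHandler.__subclasses__()   # [] -- no handlers registered yet

# Defining (or importing a module that defines) a subclass is itself the
# registration step; no explicit call is needed.
class OAuth2Auth(AuthHandler):
    capability = ['google-oauth2', 's3']

print AuthHandler.__subclasses__()   # [<class '__main__.OAuth2Auth'>]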
2- More generally, to be sure I understand the authentication process: if I want to run my app on a machine, must I make sure the .boto file (which points to my service account private key) has been generated beforehand? Or is there a cleaner/easier way to give my application access to Google Storage for in/out interactions?
This has a more general answer: you can use a .boto file, but you can also use service account credentials, and you could even use the REST API and go through an OAuth2 flow to get the tokens needed in the Authorization header. The various methods of authenticating to Cloud Storage are in the documentation. The tutorial/doc you linked shows some methods; you've used .boto for another. You can read about the Cloud Storage REST API (JSON) here, and about Python OAuth2 flows of various kinds here.
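For example, here is a rough sketch of the service-account route through the JSON API client rather than boto (the scope, bucket name, account e-mail, and key path are placeholders; it reuses the same oauth2client-style credentials seen elsewhere on this page):
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

with open('/path/to/privatekey.p12', 'rb') as f:
    key = f.read()

credentials = SignedJwtAssertionCredentials(
    'my-service-account@developer.gserviceaccount.com', key,
    scope='https://www.googleapis.com/auth/devstorage.read_only')

http = credentials.authorize(httplib2.Http())
storage = build('storage', 'v1', http=http)

# List objects in a bucket through the Cloud Storage JSON API.
objects = storage.objects().list(bucket='my-bucket').execute()
for item in objects.get('items', []):
    print item['name']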

Finding gapps users' groups using the Python admin-sdk libraries via the Directory API?

I'm porting our old user management scripts from the Google Provisioning API (which used the Python gdata libraries) to the Google Directory API (the Python admin-sdk libraries). So far most things have gone fine; however, I've run into issues when attempting to discover which groups a user belongs to (memberships I need to remove before deleting the user). Even stripping the code down to the barest essentials (e-mails/credentials replaced for public consumption):
#!/usr/bin/python
import httplib2
from apiclient import errors
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials
SERVICE_ACCOUNT_EMAIL = 'XXXXXXXXXXXXXXXXXXXXXX@developer.gserviceaccount.com'
SERVICE_ACCOUNT_PKCS12_FILE_PATH = '/blah/blah/XXXXXXXX-privatekey.p12'
f = file(SERVICE_ACCOUNT_PKCS12_FILE_PATH, 'rb')
key = f.read()
f.close()
credentials = SignedJwtAssertionCredentials(SERVICE_ACCOUNT_EMAIL, key,
    scope='https://www.googleapis.com/auth/admin.directory.user', sub='serviceaccount@our.tld')
service.users()
members = service.members().get(memberKey='serviceaccount@our.tld', groupKey='googlegroup@our.tld').execute()
print members
This returns a 403 permissions error:
Traceback (most recent call last):
File "group_tests.py", line 39, in <module>
members = service.members().get(memberKey = 'serviceaccount#our.tld', groupKey = 'googlegroup#our.tld').execute()
File "/XXX/bin/gapps/lib/python2.6/site-packages/oauth2client/util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "/XXX/bin/gapps/lib/python2.6/site-packages/googleapiclient/http.py", line 729, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/admin/directory/v1/groups/googlegroup%40our.tld/members/serviceaccount%40our.tld?alt=json returned "Insufficient Permission">
I can't tell whether the scope is wrong here, and if so, what it should be. This service account is already granted the following scopes (in Security > Advanced Security > API > Manage API client access):
https://www.googleapis.com/auth/admin.directory.user
https://www.googleapis.com/auth/admin.directory.group
Or should I be using groups instead of members? Like:
members = service.groups().get(memberKey='serviceaccount@our.tld', groupKey='googlegroup@our.tld').execute()
Any pointers appreciated; I've been googling around for help on this for a week now to no avail.
Got this working:
First off, the scope in the credentials definition was incorrect.
admin.directory.user
changed to:
admin.directory.group
Also had the wrong initialization for the "build":
service.users()
changed to:
service.groups()
And the query statement itself was completely wrong; I went back to the reference doc and kept trying different changes until it worked:
members = service.groups().list(domain='our.tld', userKey='user_whos_groups_i_want@our.tld', pageToken=None, maxResults=500).execute()
Hopefully this will be useful to someone else running into the same issue later. Be aware that not all permission errors Google throws back are literally about permissions; it may be that your own code is using conflicting scopes.
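Putting those three fixes together, here is a consolidated sketch of the working flow (the addresses and key path are placeholders carried over from the question):
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

SERVICE_ACCOUNT_EMAIL = 'XXXXXXXXXXXXXXXXXXXXXX@developer.gserviceaccount.com'
SERVICE_ACCOUNT_PKCS12_FILE_PATH = '/blah/blah/XXXXXXXX-privatekey.p12'

with open(SERVICE_ACCOUNT_PKCS12_FILE_PATH, 'rb') as f:
    key = f.read()

# Group scope instead of user scope.
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL, key,
    scope='https://www.googleapis.com/auth/admin.directory.group',
    sub='serviceaccount@our.tld')

http = credentials.authorize(httplib2.Http())
service = build('admin', 'directory_v1', http=http)

# List the groups that a given user belongs to.
groups = service.groups().list(
    domain='our.tld',
    userKey='user_whos_groups_i_want@our.tld',
    maxResults=500).execute()
print groups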

httpError using mwclient with local MediaWiki

I'm trying to create a page using mwclient with a local MediaWiki.
With wikipedia.org everything works fine.
With my local MediaWiki I enter these commands:
import mwclient
site = mwclient.Site("192.168.1.143")
The result is the following error:
File "/Library/Python/2.7/site-packages/mwclient/http.py", line 152, in request
raise errors.HTTPStatusError, (res.status, res)
mwclient.errors.HTTPStatusError: (404, <httplib.HTTPResponse instance at 0x104368488>)
If I type the IP or the hostname into my browser, it works. The same goes for the ping command.
I used urllib with:
a = urllib.urlopen('http://www.google.com/asdfsf')
a.getcode()
and got the 200 OK code.
What's the problem here? Any ideas?
The problem is that mwclient expects the api.php (which is what it uses to access the wiki) to be located at /w/, which is the location used for Wikimedia wikis, instead of directly under /, which is the default.
Per the documentation for Site, you need to use the path parameter for that:
site = mwclient.Site('192.168.1.143', path='/')
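From there, creating a page is just a couple more calls. A minimal sketch against the old mwclient API used above (the credentials and page title are placeholders, and login is only needed if anonymous editing is disabled):
import mwclient

site = mwclient.Site('192.168.1.143', path='/')
site.login('my_username', 'my_password')  # placeholder credentials

# Fetch (or lazily create) the page object and save some text to it.
page = site.Pages['Test page']
page.save('Created from mwclient.', summary='first edit')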
