Authentication failed with Jenkins using the Python API

I am trying to create a job using the Python API. I have created my own config, but the authentication fails with the following error:
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job
self.server + CREATE_JOB % locals(), config_xml, headers))
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open
'Possibly authentication failed [%s]' % (e.code)
jenkins.JenkinsException: Error in request.Possibly authentication failed [403]
The config file I created was copied from another job's config file, as that was the easiest way to build it.
I am using the python-jenkins module (import jenkins).
The server instance I create is using these credentials:
server = jenkins.Jenkins(jenkins_url, username='my_username', password='my_APITOKEN')
Any help will be greatly appreciated.

Error 403 is issued when the user is not allowed to access the resource. Are you able to access the resource manually using the same credentials? If there are other admin credentials, you can try using those.
Also, I am not sure, but maybe you can try running the Python script with admin rights.

As far as I know, for security reasons in Jenkins 2.x only admins are able to create jobs (to be specific, are able to send the job-creation requests). At least that's what I encountered using Jenkins Job Builder (also Python) and Jenkins 2.x.
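A quick way to narrow this down is to call an authenticated read endpoint before attempting job creation; if that already fails, the credentials themselves are the problem rather than a create-job restriction. A minimal sketch with python-jenkins, assuming a hypothetical server URL and token:

import jenkins

# Hypothetical URL and credentials for illustration
server = jenkins.Jenkins('http://localhost:8080',
                         username='my_username',
                         password='my_APITOKEN')

# get_whoami() requires valid authentication; a 401/403 here means the
# token is wrong, while a 403 only on create_job points to permissions.
print(server.get_whoami())
print(server.get_version())

# Only then attempt the job creation.
server.create_job('test-job', jenkins.EMPTY_CONFIG_XML)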

Related

"Failed RTM connect" error when trying to connect to Slack with RTM API

I'm using the following Python code from Slack's "Migrating to 2.x" GitHub docs:
import os
from slackclient import SlackClient

slack_token = os.environ["SLACK_API_TOKEN"]
client = SlackClient(slack_token)

def say_hello(data):
    if 'Hello' in data['text']:
        channel_id = data['channel']
        thread_ts = data['ts']
        user = data['user']
        client.api_call('chat.postMessage',
                        channel=channel_id,
                        text="Hi <@{}>!".format(user),
                        thread_ts=thread_ts)

if client.rtm_connect():
    while client.server.connected is True:
        for data in client.rtm_read():
            if "type" in data and data["type"] == "message":
                say_hello(data)
else:
    print("Connection Failed")
For the SLACK_API_TOKEN, I am using the Bot User OAuth Access Token for my app, found on the app's settings page.
The error I am getting is the following:
Failed RTM connect
Traceback (most recent call last):
File "/Users/.../slackbot/slackbot_env/lib/python3.8/site-packages/slackclient/client.py", line 140, in rtm_connect
self.server.rtm_connect(use_rtm_start=with_team_state, **kwargs)
File "/Users/.../slackbot/slackbot_env/lib/python3.8/site-packages/slackclient/server.py", line 168, in rtm_connect
raise SlackLoginError(reply=reply)
slackclient.server.SlackLoginError
Connection Failed
Why am I getting this error?
Other context:
- I am on a Mac, unlike others who have had issues online using Windows machines.
- I am running the code locally, in a virtual env, via python script.py in my terminal.
- I last successfully ran this in December, and have seen that Slack dropped support for the RTM API (?) on Dec 31st, 2019.
- The app has been reinstalled to my workspace, and the keys did not change.
- I think it may be something I need to configure/change/set/refresh on the api.slack.com/apps side, since it broke without any code changes occurring.
Why am I focusing on debugging the example for 1.x? My code was previously working using rtm_connect / 1.x with the same calls as the example code, and without any code changes it stopped working. My code and the example code yield the same errors, so I'm using the sample code to make debugging easier. I'd like to fix this before starting the migration to 2.x, so I can begin with working code before embarking on a long series of changes that could introduce their own errors.
I do not think this issue is related to the Bot User OAuth Access Token; in my view you are using the right one (xoxb-). However, the issue might be related to the Slack app itself. Note that RTM isn't supported for the new Slack app granular scopes (see python client issue #584 and node client issue #921). If you want to use RTM, you should instead create a classic Slack app with the OAuth scope bot.
I'm not sure if this is the reason, but I ran into the same issue before.
The answer I found on the Slack GitHub is that the new xoxb-* tokens don't support RTM.
Please see this reference:
- https://github.com/slackapi/python-slackclient/issues/326
So I used my OAuth Access Token instead of the Bot User OAuth Access Token.
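As a quick sanity check on what identity a given token actually resolves to, you can call auth.test over the Web API before attempting an RTM connection; this works regardless of RTM support. A minimal sketch with the 1.x slackclient, assuming the token is in SLACK_API_TOKEN:

import os
from slackclient import SlackClient

client = SlackClient(os.environ["SLACK_API_TOKEN"])

# auth.test reports which user/bot identity the token belongs to.
reply = client.api_call("auth.test")
print(reply)  # inspect 'ok', 'user', and 'bot_id' in the response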

Uploading file with Python returns ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)

blob.upload_from_filename(source) gives the error:
raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/bucket1-newsdata-bluetechsoft/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
I am following the Google Cloud example written in Python here:
from google.cloud import storage

def upload_blob(bucket, source, des):
    client = storage.Client.from_service_account_json('/path')
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(des)
    blob.upload_from_filename(source)
I used gsutil to upload files, which works fine.
Listing the bucket names using the Python script also works fine.
I have the necessary permissions and GOOGLE_APPLICATION_CREDENTIALS set.
This whole thing wasn't working because the service account I am using in GCP didn't have the Storage Admin permission.
Granting Storage Admin to my service account solved the problem.
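If you want to verify this from code rather than from the console, the client library can report which permissions the active credentials actually hold on the bucket. A minimal sketch, with a hypothetical key path and bucket name:

from google.cloud import storage

# Hypothetical service-account key path and bucket name for illustration
client = storage.Client.from_service_account_json('/path/to/key.json')
bucket = client.bucket('my-bucket')

# Returns the subset of the requested permissions the caller actually has.
granted = bucket.test_iam_permissions(['storage.objects.create',
                                       'storage.objects.get'])
print(granted)  # an empty list points to a missing IAM role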
As other answers have indicated, this is a permissions issue; I have found the following command a useful way to create default application credentials for the currently logged-in user.
Assuming you got this error while running the code on some machine, the following steps should be sufficient:
SSH to the VM where the code is running or will be running. Make sure you are a user who has permission to upload to Google Storage.
Run the following command:
gcloud auth application-default login
The command above will ask you to create a token by clicking on a URL. Generate the token and paste it into the SSH console.
That's it. All your Python applications started as that user will use this as the default credential for storage bucket interaction.
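Once those application-default credentials exist, the client library picks them up without an explicit key file. A minimal sketch, with hypothetical project and bucket names:

from google.cloud import storage

# No key file needed: storage.Client() falls back to the application-default
# credentials created by `gcloud auth application-default login`.
client = storage.Client(project='my-project')  # hypothetical project ID
for blob in client.list_blobs('my-bucket'):    # hypothetical bucket name
    print(blob.name)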
Happy GCP'ing :)
This question is more appropriate for a support case.
As you are getting a 403, most likely you are missing an IAM permission; the Google Cloud Platform support team will be able to inspect your resources and configurations.
This is what worked for me when the Google documentation didn't. I was getting the same error even with the appropriate permissions.
import pathlib
import google.cloud.storage as gcs

client = gcs.Client()

# set target file to write to
target = pathlib.Path("local_file.txt")
# set file to download
FULL_FILE_PATH = "gs://bucket_name/folder_name/file_name.txt"

# open filestream with write permissions
with target.open(mode="wb") as downloaded_file:
    # download and write file locally
    client.download_blob_to_file(FULL_FILE_PATH, downloaded_file)

Accessing Office 365 ProPlus OneDrive folder using the official Python SDK

We are currently trying to access a folder of an Office 365 ProPlus tenant using the official OneDrive SDK for Python (https://github.com/OneDrive/onedrive-sdk-python). One of our clients would like to use a OneDrive folder as a way of storing and sharing programmatically generated files, therefore, we would like to provide basic file operations.
We have a working solution for a personal OneDrive account, however, when we try to apply the same approach for their OneDrive, we face an issue during the authentication process.
We asked them to register the application in the Azure AD following the steps in the official documentation. Next, they sent us the redirect URI, client ID and client secret that we included in our script. We are trying to use the following code:
import onedrivesdk
from onedrivesdk.helpers import GetAuthCodeServer

redirect_uri = 'REDIRECT_URI'
client_secret = 'CLIENT_SECRET'
client_id = 'CLIENT_ID'
discovery_uri = 'https://api.office.com/discovery/'
auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize'
auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token'

http_provider = onedrivesdk.HttpProvider()
auth_provider = onedrivesdk.AuthProvider(http_provider,
                                         client_id,
                                         auth_server_url=auth_server_url,
                                         auth_token_url=auth_token_url)
auth_url = auth_provider.get_auth_url(redirect_uri)
code = GetAuthCodeServer.get_auth_code(auth_url, redirect_uri)
However, we get the following error message when executing the last line:
Traceback (most recent call last):
File "onedrive-test.py", line 25, in
code = GetAuthCodeServer.get_auth_code(auth_url, redirect_uri)
File "/home/username/.local/lib/python3.6/site-packages/onedrivesdk/helpers/GetAuthCodeServer.py",
line 60, in get_auth_code
s = GetAuthCodeServer((host_address, port), code_acquired, GetAuthCodeRequestHandler)
File "/home/username/.local/lib/python3.6/site-packages/onedrivesdk/helpers/GetAuthCodeServer.py",
line 76, in __init__
HTTPServer.__init__(self, server_address, RequestHandlerClass)
File "/usr/lib/python3.6/socketserver.py", line 453, in __init__
self.server_bind()
File "/usr/lib/python3.6/http/server.py", line 136, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/lib/python3.6/socketserver.py", line 467, in server_bind
self.socket.bind(self.server_address)
socket.gaierror: [Errno -2] Name or service not known
We also tried opening the auth_url manually, which took us one step further, but still could not authenticate the application with the following error:
AADSTS50020: User account 'USER ACCOUNT' from identity provider
'live.com' does not exist in tenant 'TENANT NAME' and cannot access
the application 'CLIENT ID' in that tenant. The account needs to be
added as an external user in the tenant first. Sign out and sign in
again with a different Azure Active Directory user account.
We have two questions:
What might cause the first error? This is the comment (see below) that can be found in the readme of the SDK about using the GetAuthCodeServer class. It seems to us that the server cannot be started. Are there any not explicitly defined dependencies that we should be aware of before trying to run the webserver? (We are running the script on Ubuntu 18.10.)
If you want to remove some of that manual work, you can use the helper class GetAuthCodeServer. That helper class spins up a webserver, so this method cannot be used on all environments.
With respect to the second issue, can you recommend proper material for configuring OneDrive for Business for our use case? We went through a lot of documentation, but after long hours of research we still could not find the correct way to fix the issue, especially since we do not have direct access to the tenant and cannot easily experiment. We would need to give our client a step-by-step cookbook to set up everything on their side.
Any help would be much appreciated! :)
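For what it's worth, the SDK readme also describes a manual flow that avoids the helper webserver entirely, which would sidestep the socket.gaierror from GetAuthCodeServer. A sketch of that flow for OneDrive for Business, reusing the placeholder values from the question (passing the discovery URI as the resource argument is an assumption to verify against the readme):

import onedrivesdk

redirect_uri = 'REDIRECT_URI'
client_secret = 'CLIENT_SECRET'
client_id = 'CLIENT_ID'
discovery_uri = 'https://api.office.com/discovery/'
auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize'
auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token'

http_provider = onedrivesdk.HttpProvider()
auth_provider = onedrivesdk.AuthProvider(http_provider,
                                         client_id,
                                         auth_server_url=auth_server_url,
                                         auth_token_url=auth_token_url)

# Open this URL in a browser, sign in, and copy the ?code=... value
# from the redirect URL instead of running the local webserver.
print(auth_provider.get_auth_url(redirect_uri))
code = input('Paste the authorization code here: ')
auth_provider.authenticate(code, redirect_uri, client_secret, resource=discovery_uri)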

Trying to connect to Google Cloud Storage (GCS) using Python

I've built the following script:
import boto
import sys
import gcs_oauth2_boto_plugin

def check_size_lzo(ds):
    # URI scheme for Cloud Storage.
    CLIENT_ID = 'myclientid'
    CLIENT_SECRET = 'mysecret'
    GOOGLE_STORAGE = 'gs'
    dir_file = 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds)
    gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
    uri = boto.storage_uri('my_bucket/data/apollo/prod/' + dir_file, GOOGLE_STORAGE)
    key = uri.get_key()
    if key.size < 45379959:
        raise ValueError('umg lzo file is too small, investigate')
    else:
        print('umg lzo file is %sMB' % round((key.size / 1e6), 2))

if __name__ == "__main__":
    check_size_lzo(sys.argv[1])
It works fine locally, but when I try to run it on a Kubernetes cluster I get the following error:
boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo'
I have updated the .boto file on my cluster and added my OAuth client ID and secret, but I am still having the same issue.
I would really appreciate help resolving this issue.
Many thanks!
If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your Kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.
Unless you've changed something, I'm guessing that you're getting the default Kubernetes Engine auth, which means a service account associated with your project. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.
Also note that by default the default credentials aren't scoped to include GCS, so you'll need to add that as well and then restart the instance.
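For reference, the relevant pieces of a .boto file look roughly like the following; the refresh token is what actually identifies the caller, so a client ID and secret alone are not enough (all values here are placeholders):

[Credentials]
gs_oauth2_refresh_token = <refresh token obtained via gsutil config>

[OAuth2]
client_id = <oauth client id>
client_secret = <oauth client secret>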

appcfg.py shows "You must be logged in as an administrator"

When I try to upload sample CSV data to my GAE app through appcfg.py, it shows the 401 error below.
2015-11-04 10:44:41,820 INFO client.py:571 Refreshing due to a 401 (attempt 2/2)
2015-11-04 10:44:41,821 INFO client.py:797 Refreshing access_token
Error 401: --- begin server output ---
You must be logged in as an administrator to access this.
--- end server output ---
Here is the command I tried:
appcfg.py upload_data --application=dev~app --url=http://localhost:8080/_ah/remote_api --filename=data/sample.csv
This is how we do it in order to use custom authentication.
Custom handler in app.yaml:
- url: /remoteapi.*
  script: remote_api.app
Custom WSGI app in remote_api.py to override CheckIsAdmin:
from google.appengine.ext.remote_api import handler
from google.appengine.ext import webapp
import re

MY_SECRET_KEY = 'MAKE UP PASSWORD HERE'  # make one up, use the same one in the shell command
cookie_re = re.compile('^"?([^:]+):.*"?$')

class ApiCallHandler(handler.ApiCallHandler):
    def CheckIsAdmin(self):
        """Determine if admin access should be granted based on the
        auth cookie passed with the request."""
        login_cookie = self.request.cookies.get('dev_appserver_login', '')
        match = cookie_re.search(login_cookie)
        if (match and match.group(1) == MY_SECRET_KEY
                and 'X-appcfg-api-version' in self.request.headers):
            return True
        else:
            self.redirect('/_ah/login')
            return False

app = webapp.WSGIApplication([('.*', ApiCallHandler)])
From here we script the uploading of data that was exported from our live app. Use the same password that you made up in the Python script above.
echo "MAKE UP PASSWORD HERE" | appcfg.py upload_data --email=some@example.org --passin --url=http://localhost:8080/remoteapi --num_threads=4 --kind=WebHook --filename=webhook.data --db_filename=bulkloader-progress-webhook.sql3
WebHook and webhook.data are specific to the Kind that we exported from production.
I had a similar issue, where appcfg.py was not giving me any credentials dialog, so I could not authenticate. I downgraded from GAELauncher 1.9.27 to 1.9.26, and the authentication started working again.
Temporary solution: go to https://console.developers.google.com/storage/browser/appengine-sdks/featured/ to get version 1.9.26
Submitted bug report: https://code.google.com/p/google-cloud-sdk/issues/detail?id=340
You cannot use the appcfg.py upload_data command with the development server [edit: as is; see Josh J's answer]. It only works with the remote_api endpoint running on App Engine and authenticated with OAuth2.
An easy way to load data into the dev server's datastore is to create an endpoint that reads a CSV file and creates the appropriate datastore entities, then hit it with the browser. (Be sure to remove the endpoint before deploying the app, or restrict access to the URL with login: admin.)
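A minimal sketch of such a handler, assuming a hypothetical Record kind and a data/sample.csv file bundled with the app (webapp2 on the Python 2.7 runtime):

import csv

import webapp2
from google.appengine.ext import ndb

class Record(ndb.Model):  # hypothetical kind; match its properties to your CSV columns
    name = ndb.StringProperty()
    value = ndb.StringProperty()

class LoadCsvHandler(webapp2.RequestHandler):
    def get(self):
        # Read a CSV bundled with the app and store one entity per row.
        with open('data/sample.csv') as f:
            for row in csv.DictReader(f):
                Record(name=row['name'], value=row['value']).put()
        self.response.write('done')

app = webapp2.WSGIApplication([('/load_csv', LoadCsvHandler)])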
You may have a cached OAuth token for a Google account that is not an admin of that project. Try passing the --no_cookies flag so that it prompts for authentication again.
Maybe this has something to do with it? From the docs:

Connecting your app to the local development server

To use the local development server for your app running locally, you need to do the following:

Set environment variables. Add or modify your app's Datastore connection code.

Setting environment variables

Create an environment variable DATASTORE_HOST and set it to the host and port on which the local development server is listening. The default host and port is http://localhost:8080. (Note: If you use the port and/or host command line arguments to change these defaults, be sure to adjust DATASTORE_HOST accordingly.) The following bash shell example shows how to set this variable:

export DATASTORE_HOST=http://localhost:8080

Create an environment variable named DATASTORE_DATASET and set it to your dataset ID, as shown in the following bash shell example:

export DATASTORE_DATASET=<dataset_id>

Note: Both the Python and Java client libraries look for the environment variables DATASTORE_HOST and DATASTORE_DATASET.

Link to docs: https://cloud.google.com/datastore/docs/tools/devserver
