build() works in IDE but not in EXE - python

I am working on a Python project that uses the YouTube/Google API to access public playlists.
The code works fine in the PyCharm IDE, but after using PyInstaller to build it into an .exe file, it crashes and prints 'Crashed at youtube build stage':
import os
import google.oauth2.service_account
from googleapiclient.discovery import build

try:
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = json_path
    credentials = google.oauth2.service_account.Credentials.from_service_account_file(json_path)
    print("credentials=", credentials)
    youtube = build(serviceName='youtube', version='v3', developerKey=yt_api_key, num_retries=3, credentials=credentials)
except Exception as e:
    print(f"Crashed at youtube build stage \n\nError: {e}")

try:
    youtube_playlist = youtube.playlists().list(part='contentDetails,id', id=youtube_playlist_id)
    youtube_playlist = youtube_playlist.execute()
except Exception as e:
    print(e, "\nERROR HERE")
When running in the IDE, the build succeeds but the code then fails in the second try/except block.
The error given is:
('invalid_scope: Invalid OAuth scope or ID token audience provided.', {'error': 'invalid_scope', 'error_description': 'Invalid OAuth scope or ID token audience provided.'})
I looked at the Google API documentation for the playlists() method and didn't find anywhere to pass the OAuth credentials there.
When running the EXE, it simply prints:
Crashed at youtube build stage
Error: name: youtube version: v3
According to the build() documentation found here https://googleapis.github.io/google-api-python-client/docs/epy/googleapiclient.discovery-module.html#build, the function raises an error when it can't create an mTLS connection.
Why can't it do so when the code is run from an .exe file, but manages when it is called from the IDE?
I thought the .exe created with PyInstaller would behave as it does in PyCharm, but it appears not to.
I'm not sure why TLS fails to set up.
Edit: I added the credentials and OAuth2 as shown above, and another error appears.
It manages to build the client, but right after that it fails with the error shown above.
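For reference, here is a minimal sketch of the two ways the client is usually built for public playlist data. The variable names (yt_api_key, json_path, youtube_playlist_id) are the placeholders used above; the explicit scope on the service-account credentials is an assumption prompted by the invalid_scope error, not something confirmed here.
from googleapiclient.discovery import build
import google.oauth2.service_account

# Option 1: public data only - an API key is normally sufficient, no OAuth credentials needed
youtube = build('youtube', 'v3', developerKey=yt_api_key)

# Option 2 (assumption): if service-account credentials are used, give them an explicit scope,
# otherwise the token exchange can fail with invalid_scope
credentials = google.oauth2.service_account.Credentials.from_service_account_file(
    json_path,
    scopes=['https://www.googleapis.com/auth/youtube.readonly'],
)
youtube = build('youtube', 'v3', credentials=credentials)

request = youtube.playlists().list(part='contentDetails,id', id=youtube_playlist_id)
response = request.execute()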

Related

Issues using SendGrid with Azure ML

I'm trying to send an email using SendGrid within Azure Machine Learning. This is initially just a basic test email to ensure it is working correctly.
The steps I have taken:
Pip installed SendGrid;
Zipped the SendGrid download and uploaded it as a dataset to the AML platform;
Attempted to run the example SendGrid Python code (which can be seen below).
I have copied the steps from similar posts about uploading modules to AML here and here, as well as ensuring the correct settings for the SendGrid API key were established on setup here.
def azureml_main():
    import os
    import sendgrid
    from sendgrid import SendGridAPIClient
    from sendgrid.helpers.mail import Mail

    message = Mail(
        from_email='xxx@xyz.com',
        to_emails='xxx@xyz.com',
        subject='Sending with Twilio SendGrid is Fun',
        html_content='<strong>and easy to do anywhere, even with Python</strong>')
    try:
        sg = SendGridAPIClient(os.environ.get('SG API Code'))
        response = sg.send(message)
        print(response.status_code)
        print(response.body)
        print(response.headers)
    except Exception as e:
        print(e)
No error message is returned in the terminal. To me this indicates there weren't any issues with the code, yet no emails have been received/sent.
azureuser@will1:~/cloudfiles/code/Users/will/Schedule$ python ScheduleRun.py
azureuser@will1:~/cloudfiles/code/Users/will/Schedule$
Twilio SendGrid developer evangelist here.
When you run the code nothing is printed, which is concerning, as neither the success print nor the error print is called.
If the code you have shared is the entirety of the file then I think the code is simply not running. You have defined a function but you have not called the function itself.
Try calling azureml_main() on the last line of the file:
        print(response.headers)
    except Exception as e:
        print(e)

azureml_main()
One other thing that concerns me is this line:
os.environ.get('SG API Code')
It looks as though you intend to actually place your SG API Key in the call to os.environ.get. Instead, that string should be the identifier for an environment variable that you have set. For example, you should set the SendGrid API key as an environment variable called SENDGRID_API_KEY and then in your code fetch the API key by calling os.environ.get("SENDGRID_API_KEY"). Check out this post on working with environment variables in Python for more.
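As a quick illustration of that pattern (SENDGRID_API_KEY is just the conventional variable name; set it in whatever way your AML environment exposes secrets):
import os
from sendgrid import SendGridAPIClient

# the shell/environment sets the secret before the script runs, e.g.  export SENDGRID_API_KEY="SG.xxxx"
api_key = os.environ.get("SENDGRID_API_KEY")  # returns None if the variable is not set
if api_key is None:
    raise RuntimeError("SENDGRID_API_KEY environment variable is not set")

sg = SendGridAPIClient(api_key)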

"Failed RTM connect" error when trying to connect to Slack with RTM API

I'm using the following Python code from Slack's "Migrating to 2.x" GitHub docs:
import os
from slackclient import SlackClient

slack_token = os.environ["SLACK_API_TOKEN"]
client = SlackClient(slack_token)

def say_hello(data):
    if 'Hello' in data['text']:
        channel_id = data['channel']
        thread_ts = data['ts']
        user = data['user']
        client.api_call('chat.postMessage',
                        channel=channel_id,
                        text="Hi <@{}>!".format(user),
                        thread_ts=thread_ts
                        )

if client.rtm_connect():
    while client.server.connected is True:
        for data in client.rtm_read():
            if "type" in data and data["type"] == "message":
                say_hello(data)
else:
    print("Connection Failed")
For the SLACK_API_TOKEN, I am using the Bot User OAuth Access Token for my app, found here:
The error I am getting is the following:
Failed RTM connect
Traceback (most recent call last):
File "/Users/.../slackbot/slackbot_env/lib/python3.8/site-packages/slackclient/client.py", line 140, in rtm_connect
self.server.rtm_connect(use_rtm_start=with_team_state, **kwargs)
File "/Users/.../slackbot/slackbot_env/lib/python3.8/site-packages/slackclient/server.py", line 168, in rtm_connect
raise SlackLoginError(reply=reply)
slackclient.server.SlackLoginError
Connection Failed
Why am I getting this error?!?!?!
Other context:
I am on a Mac, unlike others who have had issues online using Windows machines.
I am running the code locally, in a virtual env, via python script.py in my terminal.
I last successfully ran this in December, and have seen that Slack dropped support for the RTM API (?) on Dec 31st 2019.
The app has been reinstalled to my workspace, and the keys did not change.
I think it may be something I need to configure/change/set/refresh on the api.slack.com/apps side, since it broke without any code changes occurring.
Why am I focusing on debugging the example for 1.x? My code was previously working using rtm_connect / 1.x using the same commands as the example code, and without any code changes it has stopped working. My code and the example code yield the same errors, so I'm using the sample code to make debugging easier. I'd like to fix this before starting the process of migrating to 2.x, so I can start with working code before embarking on a long series of changes that can introduce their own errors.
I do not think this issue is related to the Bot User OAuth Access Token; in my view you are using the right one (xoxb-). However, the issue might be related to the Slack app type. Note that RTM isn't supported for the new Slack apps with granular scopes (see python client issue #584 and node client issue #921). If you want to use RTM, you should instead create a classic Slack app with the OAuth scope bot.
I'm not sure if this is the reason, but I ran into the same issue before.
The answer I found on the Slack GitHub is that the new xoxb-* tokens don't support RTM.
Please see this issue:
- https://github.com/slackapi/python-slackclient/issues/326
So I used my OAuth Access Token instead of the Bot User OAuth Access Token.
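For anyone who does go on to migrate, the 1.x loop above maps roughly onto the 2.x RTMClient API. The following is only a sketch based on the slackclient 2.x documentation, reusing the same SLACK_API_TOKEN environment variable; it does not change the app-type/scopes issue discussed above.
import os
from slack import RTMClient

@RTMClient.run_on(event="message")
def say_hello(**payload):
    data = payload["data"]
    web_client = payload["web_client"]
    if "text" in data and "Hello" in data["text"]:
        # reply in a thread on the original message
        web_client.chat_postMessage(
            channel=data["channel"],
            text="Hi <@{}>!".format(data["user"]),
            thread_ts=data["ts"],
        )

rtm_client = RTMClient(token=os.environ["SLACK_API_TOKEN"])
rtm_client.start()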

Google Cloud Function in Python gives an error on deploying

I am trying to configure Google Cloud Functions to place an order when the function is triggered.
I have mentioned kiteconnect in the requirements.txt file
But the function doesn't get deployed; it throws an error, "Unknown resource type".
FULL ERROR MESSAGE:
Deployment failure: Build failed: {"error": {"canonicalCode": "INVALID_ARGUMENT", "errorMessage": "`pip_download_wheels` had stderr output:\nCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-wheel-97dghcl9/logging/\n\nerror: `pip_download_wheels` returned code: 1", "errorType": "InternalError", "errorId": "67DBDBF3"}}
Does anyone have any experience dealing with Cloud Functions to place a trading order on Zerodha?
Following is the function that I have tried:
import logging
from kiteconnect import KiteConnect

logging.basicConfig(level=logging.DEBUG)

kite = KiteConnect(api_key="xxxxxxxxxxxxxxxxxxxxxxxx")

# Redirect the user to the login url obtained
# from kite.login_url(), and receive the request_token
# from the registered redirect url after the login flow.
# Once you have the request_token, obtain the access_token
# as follows.
data = kite.generate_session("xxxxxxxxxxxxxxxxxxxxxxxxx", secret="xxxxxxxxxxxxxxxxxxxxxxxxxx")
kite.set_access_token(data["xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"])

# Place an order
def orderPlace():
    try:
        order_id = kite.place_order(
            variety=kite.VARIETY_REGULAR,
            exchange=kite.EXCHANGE_NSE,
            tradingsymbol="INFY",
            transaction_type=kite.TRANSACTION_TYPE_BUY,
            quantity=1,
            product=kite.PRODUCT_CNC,
            order_type=kite.ORDER_TYPE_MARKET
        )
        logging.info("Order placed. ID is: {}".format(order_id))
    except Exception as e:
        logging.info("Order placement failed: {}".format(e.message))
Content of requirements.txt file:
# Function dependencies, for example:
# package>=version
kiteconnect
The error indicates that one of your dependencies is uninstallable.
It looks like the kiteconnect dependency is currently incompatible with Python 3.7, which is the Python version Cloud Functions uses in its runtime: https://github.com/zerodhatech/pykiteconnect/issues/55
You'll need to wait until the maintainers release a new version that is compatible with Python 3.7.

Uploading file with python returns Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>

blob.upload_from_filename(source) gives the error:
raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/bucket1-newsdata-bluetechsoft/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
I am following the Google Cloud example written in Python here.
from google.cloud import storage

def upload_blob(bucket, source, des):
    client = storage.Client.from_service_account_json('/path')
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(des)
    blob.upload_from_filename(source)
I used gsutil to upload files, which is working fine.
I tried to list the bucket names using the Python script, which is also working fine.
I have necessary permissions and GOOGLE_APPLICATION_CREDENTIALS set.
This whole thing wasn't working because I didn't have the Storage Admin permission on the service account that I am using in GCP.
Granting Storage Admin to my service account solved the problem.
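As a quick way to confirm this kind of problem from the client side, the storage library can report which of the requested permissions the active credentials actually hold on a bucket. A sketch, with a placeholder bucket name:
from google.cloud import storage

client = storage.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS / default credentials
bucket = client.bucket("your-bucket-name")  # placeholder bucket name

# returns only the subset of the requested permissions that the caller actually has
granted = bucket.test_iam_permissions(["storage.objects.create", "storage.objects.get"])
print(granted)  # an empty or partial list points at a missing IAM role such as Storage Admin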
As other answers have indicated, this is a permission issue. I have found the following command a useful way to create default application credentials for the currently logged-in user.
Assuming you got this error while running the code on some machine, the following steps should be sufficient:
SSH to the VM where the code is running or will be running. Make sure you are a user who has permission to upload things to Google Storage.
Run the following command:
gcloud auth application-default login
The command will ask you to create a token by clicking on a URL. Generate the token and paste it into the SSH console.
That's it. Every Python application started as that user will use this as the default credential for interacting with storage buckets.
Happy GCP'ing :)
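After that login, the client can be constructed without pointing at a key file at all; a minimal sketch with placeholder bucket and file names:
from google.cloud import storage

# no key file passed: the client falls back to the application default credentials
storage_client = storage.Client()
bucket = storage_client.get_bucket("your-bucket-name")
blob = bucket.blob("destination/name.txt")
blob.upload_from_filename("local_file.txt")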
This question is more appropriate for a support case.
As you are getting a 403, most likely you are missing a permission on IAM; the Google Cloud Platform support team will be able to inspect your resources and configurations.
This is what worked for me when the Google documentation didn't work. I was getting the same error despite having the appropriate permissions.
import pathlib
import google.cloud.storage as gcs

client = gcs.Client()

# set target file to write to
target = pathlib.Path("local_file.txt")
# set file to download
FULL_FILE_PATH = "gs://bucket_name/folder_name/file_name.txt"
# open filestream with write permissions
with target.open(mode="wb") as downloaded_file:
    # download and write file locally
    client.download_blob_to_file(FULL_FILE_PATH, downloaded_file)

Insufficient Permission Error 403 : Google Drive Upload (Python)

I am trying to access Google Drive using the Drive API Version 3 (Python). Listing the files seems to work fine. I get insufficient Permission error when I try to upload a file.
I changed my scope to give full permission to my script:
SCOPES = 'https://www.googleapis.com/auth/drive'
Below is the block that I use to create the file
file_metadata = {
    'name': 'Contents.pdf',
    'mimeType': 'application/vnd.google-apps.file'
}
media = MediaFileUpload('Contents.pdf',
                        mimetype='application/vnd.google-apps.file',
                        resumable=True)
file = service.files().create(body=file_metadata,
                              media_body=media,
                              fields='id').execute()
print('File ID: %s' % file.get('id'))
I get this error message:
ResumableUploadError: HttpError 403 "Insufficient Permission"
I am not sure what is wrong here.
I think that your script works fine. From the error you show, I suspect that the access token and refresh token need to be reauthorized. So please try the following flow.
When you authorize using client_secret.json, a credential JSON file is created. With the default Quickstart, it is created in the .credentials folder of your home directory.
For your current situation, please delete the current credential JSON file (which is not client_secret.json) and reauthorize by launching your script again. The default file name from the Quickstart is drive-python-quickstart.json.
By doing this, the scope of https://www.googleapis.com/auth/drive is reflected in the access token and refresh token, and they are used for the upload. If the error still occurs after this flow is done, please confirm again that the Drive API is enabled in the API console.
If this was not useful for you, I'm sorry.
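To illustrate the reauthorization step, here is a sketch using the newer google-auth-oauthlib helper rather than the storage module the old Quickstart uses; the file names are the Quickstart defaults and are assumptions:
import os
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']

# remove the stale credential file so the new scope is actually requested again
stale = os.path.expanduser('~/.credentials/drive-python-quickstart.json')
if os.path.exists(stale):
    os.remove(stale)

# run the consent flow again with the full Drive scope
flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', SCOPES)
creds = flow.run_local_server(port=0)

service = build('drive', 'v3', credentials=creds)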
Maybe you already have a file with the same name there?
