I am trying to include the SUDS library in a Python project on Google App Engine.
My code tries the following:
from suds.client import Client
from suds.wsse import *
And, once I've deployed on GAE, I encounter the following error:
File ".../myfile.py", line 13, in <module>
from suds.client import Client
File ".../suds/__init__.py", line 154, in <module>
import client
File ".../suds/client.py", line 25, in <module>
import suds.metrics as metrics
AttributeError: 'module' object has no attribute 'suds'
I've been looking around for a while, and it seems like SUDS can work with GAE. I added the fixes outlined here, but that doesn't seem to be the problem; it seems like App Engine doesn't even get to that point.
Any info or suggestions?
I'm using Python 2.7 and SUDS 0.4.
Have you tried doing a simple import suds.client? Also make sure you include the suds folder in your application as App Engine doesn't include it by default. I hope that helps.
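If the package isn't being picked up, one common approach on the Python 2.7 runtime is to copy the suds package into the application and extend sys.path before importing it. A minimal sketch, assuming the library was vendored into a lib folder next to app.yaml (the folder name is an assumption about your layout):

# appengine_config.py -- assumes suds was copied into a local "lib" directory
import os
import sys

# Make ./lib importable so that "import suds.client" resolves to lib/suds/client.py
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

With that in place, a plain import suds.client at the top of myfile.py should at least tell you whether the package itself is being found.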
Related
I am trying to deploy a Google Cloud Function that performs sentiment analysis on tweets using a flair NLP model. The code deploys perfectly fine without the line 'import flair' or alternatives like 'from flair import x,y,z'. As soon as I include the import statement for flair, the function fails to deploy. Below is the error I get when deploying with the import statement (copied from the Firebase logs). This is my first time posting on Stack Overflow, so please pardon me if the post looks ugly.
{"#type":"type.googleapis.com/google.cloud.audit.AuditLog","status":{"code":3,"message":"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Code in file main.py can't be loaded.\nDetailed stack trace:\nTraceback (most recent call last):\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 359, in check_or_load_user_function\n _function_handler.load_user_function()\n File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py\", line 236, in load_user_function\n spec.loader.exec_module(main_module)\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/user_code/main.py\", line 5, in <module>\n from flair import models, data\n File \"/env/local/lib/python3.7/site-packages/flair/__init__.py\", line 20, in <module>\n from . import models\n File \"/env/local/lib/python3.7/site-packages/flair/models/__init__.py\", line 1, in <module>\n from .sequence_tagger_model import SequenceTagger, MultiTagger\n File \"/env/local/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py\", line 21, in <module>\n from flair.embeddings import TokenEmbeddings, StackedEmbeddings, Embeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/__init__.py\", line 6, in <module>\n from .token import TokenEmbeddings\n File \"/env/local/lib/python3.7/site-packages/flair/embeddings/token.py\", line 10, in <module>\n from transformers import AutoTokenizer, AutoConfig, AutoModel, CONFIG_MAPPING, PreTrainedTokenizer\nImportError: cannot import name 'AutoModel' from 'transformers' (unknown location)\n. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."},"authenticationInfo":
And here is the script I am trying to deploy, as well as the requirements.txt file
main.py
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
from datetime import datetime, timedelta
import flair
# or from flair import models, data

class FirestoreHandler():
    cred = credentials.Certificate("serviceAccountKey.json")
    firebase_admin.initialize_app(cred)
    db = firestore.client()

    def analysis_on_create(self):
        docs = self.db.collection('tweets').order_by(u'time', direction=firestore.Query.DESCENDING).limit(1).get()
        data = docs[0].to_dict()
        most_recent_tweet = data['full-text']
        sentiment_model = flair.models.TextClassifier.load('en-sentiment')
        sentence = flair.data.Sentence(str(most_recent_tweet))
        sentiment_model.predict(sentence)
        result = sentence.labels[0]
        if result.value == "POSITIVE":
            val = 1 * result.score
        else:
            val = -1 * result.score
        self.db.collection('sentiment').add({'sentiment': val, 'timestamp': datetime.now() + timedelta(hours=3)})

    def add_test(self):
        self.db.collection('test3').add({"status": "success", 'timestamp': datetime.now() + timedelta(hours=3)})

def hello_firestore(event, context):
    """Triggered by a change to a Firestore document.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    resource_string = context.resource
    # print out the resource string that triggered the function
    print(f"Function triggered by change to: {resource_string}.")
    # now print out the entire event object
    print(str(event))
    fire = FirestoreHandler()
    fire.add_test()
    fire.analysis_on_create()
requirements.txt
# Function dependencies, for example:
# package>=version
firebase-admin==5.0.1
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl
flair
I included the URL to the PyTorch wheel because flair is built on PyTorch, and the function would not deploy without it (even when I didn't import flair in main.py). I have also tried specifying different versions for flair, to no avail.
Any intuition as to what may be causing this issue would be greatly appreciated! I am new to the Google Cloud ecosystem, this being my first project. If there is any additional information I can provide please let me know.
Edit: I am deploying from the website (not using CLI)
I am not sure that the provided requirements.txt is OK for GCP Cloud Functions deployment; in particular, I'm not sure that an explicit https URL to a wheel is going to be handled correctly.
The Specifying dependencies in Python documentation page describes how the dependencies are to be stated - using the pip package manager's requirements.txt file or packaging local dependencies alongside your function.
Can you simply specify flair with the necessary version in the requirements.txt file? Does that work?
In addition, the error you provided highlights that the transformers package is required. Can it be that some specific version is required?
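For example, a requirements.txt along these lines might be worth trying; the flair pin below is an illustrative assumption, and pip would then be left to resolve compatible torch and transformers versions on its own rather than relying on a hand-picked wheel URL:

# versions here are illustrative assumptions, not tested pins
firebase-admin==5.0.1
flair==0.8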
====
As a side comment - I don't know your context and requirements, but I am not sure that, in order to work with Firestore from inside a cloud function, all of the following
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
is required; it might be better to avoid using serviceAccountKey.json at all and simply assign the relevant IAM roles to the service account used for the given cloud function's execution.
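For instance, if the function's runtime service account already has the relevant Firestore roles, initialization without a bundled key file might look roughly like this (a minimal sketch, not tested against your setup):

import firebase_admin
from firebase_admin import firestore

# Uses Application Default Credentials from the function's runtime
# service account instead of a bundled serviceAccountKey.json.
firebase_admin.initialize_app()
db = firestore.client()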
import boto3
import json
import time
client = boto3.client('elbv2')
desired_capacity = 8

client.set_desired_capacity(
    AutoScalingGroupName='Test-Web',
    DesiredCapacity=desired_capacity,
    HonorCooldown=True)
and
boto3==1.7.1
When I run this script I get a
File "deploy_staging_web.py", line 6, in <module>
client.set_desired_capacity(
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 601, in __getattr__
self.__class__.__name__, item)
AttributeError: 'ElasticLoadBalancingv2' object has no attribute 'set_desired_capacity'
I intend to use Python to scale AWS instances up and down.
I'm not inside any virtual environment at the moment.
Why is this error being thrown, and how do I get around it?
The method is even mentioned in the official documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.set_desired_capacity
The official documentation is for the latest version, not for your much older version. Upgrade your boto3 package to the latest release; at the time of writing, the most recent version is 1.9.243.
The problem turns out to be a simple one.
set_desired_capacity is not part of the 'elbv2' client; it belongs to the 'autoscaling' client: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.set_desired_capacity
describe_target_health, on the other hand, is part of 'elbv2': https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html?highlight=elb#ElasticLoadBalancingv2.Client.describe_target_health
Updating
client = boto3.client('elbv2')
to
client = boto3.client('autoscaling')
has solved my problem.
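For reference, the corrected script looks roughly like this (same parameters as above):

import boto3

# set_desired_capacity belongs to the Auto Scaling client, not the
# Elastic Load Balancing v2 ('elbv2') client.
client = boto3.client('autoscaling')

desired_capacity = 8
client.set_desired_capacity(
    AutoScalingGroupName='Test-Web',
    DesiredCapacity=desired_capacity,
    HonorCooldown=True)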
I'm trying to run the app I cloned from the following GitHub repository.
https://github.com/sfujiwara/qa-system-sample.git
(This app uses Python Flask, the Google Natural Language API, and so on.)
But I cannot run it on Google App Engine. The following error occurs:
Error:
ImportError: cannot import name types
at <module> (/base/data/home/apps/b~qa-system-sample2/20171223t122211.406411375873194303/lib/google/cloud/language_v1/__init__.py:17)
at <module> (/base/data/home/apps/b~qa-system-sample2/20171223t122211.406411375873194303/lib/google/cloud/language.py:17)
at <module> (/base/data/home/apps/b~qa-system-sample2/20171223t122211.406411375873194303/factoid.py:5)
at <module> (/base/data/home/apps/b~qa-system-sample2/20171223t122211.406411375873194303/main.py:9)
at LoadObject (/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py:85)
at _LoadHandler (/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py:299)
at Handle (/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py:240)
This occurs at from google.cloud.language_v1 import types in lib/google/cloud/language_v1/__init__.py.
However, there is a types.py in /lib/google/cloud/language_v1.
I'm a beginner with Python, so I couldn't figure out the reason for this error. Please tell me how to resolve it. Thanks in advance.
I added enum to the libraries in app.yaml. That fixes this issue (although another import issue occurred afterwards):
libraries:
- name: enum
  version: latest
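In context, the libraries section sits at the top level of app.yaml for the Python 2.7 runtime; the surrounding values below are a rough sketch, not the sample app's actual configuration:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app

libraries:
- name: enum
  version: latest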
I know this question has been asked countless times here, but I've been stuck with this problem for a long time and have not been able to find a solution online.
I have an import cycle, and here is the stack trace:
Traceback (most recent call last):
File "openldap_lookup.py", line 2, in <module>
import pure.directory_service.ds_lookup as dsl
File "/home/myname/purity/tools/pure/directory_service/ds_lookup.py", line 8, in <module>
import pure.authorization.auth as auth
File "/home/myname/purity/tools/pure/authorization/auth.py", line 16, in <module>
import auth_unix as auth_impl
File "/home/myname/purity/tools/pure/authorization/auth_unix.py", line 17, in <module>
import pure.directory_service.ad_lookup as ad_lookup
File "/home/myname/purity/tools/pure/directory_service/ad_lookup.py", line 1, in <module>
import pure.authorization.auth as auth
AttributeError: 'module' object has no attribute 'auth'
I import modules only; I avoid the from <module> import <class> and from <module> import <method> forms.
I tried to reproduce the error locally, but Python has no complaints. These are the local test files:
openldap_lookup.py
import ds_lookup

def openldap_foo():
    print ds_lookup.ds_foo

print 'openldap_lookup importing ds_lookup'
ds_lookup.py
import auth as au

def ds_foo():
    print au.auth_foo

print 'ds_lookup importing auth'
auth.py
import auth_unix

def auth_foo():
    print auth_unix.auth_unix_foo

print 'auth importing auth_unix'
auth_unix.py
import ad_lookup

def auth_unix_foo():
    print ad_lookup.ad_foo

print 'auth_unix importing ad_lookup'
ad_lookup.py
import auth as au

def ad_foo():
    print au.auth_foo

print 'ad_lookup importing auth'
But Python doesn't complain:
myname@myname-mbp:~/cycletest$ python openldap_lookup.py
ad_lookup importing auth
auth_unix importing ad_lookup
auth importing auth_unix
ds_lookup importing auth
openldap_lookup importing ds_lookup
myname@myname-mbp:~/cycletest$
I am not a Python expert, but I understand that an import cycle is causing the error. But why doesn't the same error occur with the small test files? When is an import cycle legal in Python, and when is it not? What can I do to resolve this?
I would greatly appreciate any help from the Python experts out there.
Since many are bound to ask, why do I have this cycle in the first place?
Well, openldap_lookup and ad_lookup both contain subclasses of a base class in ds_lookup. ds_lookup requires constants from auth. auth requires auth_unix as an implementation, and auth_unix in turn calls the implementations openldap_lookup and ad_lookup.
I would love to move the constants out from auth and remove the cycle, but this code is part of a large git repo where hundreds of files depend on the constants and methods in auth, and I would like to avoid having to refactor all of them if possible.
Actually, you're not just importing modules -- you're importing modules from packages, and your test case doesn't actually reflect this.
I think the problem is that during your first import of pure.authorization.auth, the interpreter is still building the pure.authorization package (it hasn't yet bound auth into pure.authorization, because it hasn't finished importing auth), so the second time it encounters this import, it finds the pure.authorization module but there is no auth attribute in it yet.
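To reproduce the failure locally, the test modules would need to live inside packages matching the traceback, roughly like this (a sketch of the implied layout; the __init__.py files are assumed to be empty):

cycletest/
    openldap_lookup.py        # import pure.directory_service.ds_lookup as dsl
    pure/
        __init__.py
        directory_service/
            __init__.py
            ds_lookup.py      # import pure.authorization.auth as auth
            ad_lookup.py      # import pure.authorization.auth as auth
        authorization/
            __init__.py
            auth.py           # import auth_unix as auth_impl
            auth_unix.py      # import pure.directory_service.ad_lookup as ad_lookup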
As far as breaking the cycle goes, does auth really need to import auth_unix right away, or could that be deferred until you really need an auth implementation?
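One way to defer it is a function-local import, along these lines (a sketch; get_auth_impl is a hypothetical helper, and whether this is safe depends on when an implementation is actually needed):

# pure/authorization/auth.py -- sketch of deferring the auth_unix import

def get_auth_impl():
    # Imported at call time rather than at module-import time, so that
    # pure.authorization.auth is fully initialized before auth_unix
    # (and, through it, ad_lookup) gets loaded.
    import pure.authorization.auth_unix as auth_impl
    return auth_impl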
I am trying to use the Remote API in a local client, following the instructions given in the Google documentation (https://developers.google.com/appengine/docs/python/tools/remoteapi).
I first tried it from the Remote API shell, and everything is working fine. I can connect to my remote datastore and fetch data.
But when I run the following script, which is very similar to Google's example:
#!/opt/local/bin/python2.7
import sys

SDK_PATH = "/usr/local/google_appengine/"
sys.path.append(SDK_PATH)

import dev_appserver
dev_appserver.fix_sys_path()

from google.appengine.ext.remote_api import remote_api_stub
import model.account

def auth_func():
    return ('myaccount', 'mypasswd')

remote_api_stub.ConfigureRemoteApi('myappid', '/_ah/remote_api', auth_func)

# Fetch some data
entries = model.account.list()
for a in entries:
    print a.name
then I get an error:
Traceback (most recent call last):
File "./remote_script_test.py", line 26, in <module>
entries = model.account.list()
[...]
File "/Developer/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1333, in check_rpc_success
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: Application Id (app) format is invalid: '_'
It says my application ID is a plain underscore '_', which is not the case: my app.yaml is correctly configured, and when I do a
print app_identity.get_application_id()
from that script I get the correct application ID. I am under the impression that the GAE environment is not properly set up, but I could not figure out how to make it work. Does anyone have a full piece of code that works in this context?
I am using Mac OS X Mountain Lion.
You may wish to start with remote_api_shell.py and remove stuff until it breaks, rather than add in things until it works.
For instance, there's a line after the one you copied, remote_api_stub.MaybeInvokeAuthentication(): is that necessary? There may be a number of necessary lines in remote_api_shell.py; it may be better to customize that script rather than start from scratch.
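As a sketch, the relevant part of the script with that extra call added would look like this (same app id and auth_func as in the question; whether it resolves the error depends on your SDK version):

from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    return ('myaccount', 'mypasswd')

remote_api_stub.ConfigureRemoteApi('myappid', '/_ah/remote_api', auth_func)
# remote_api_shell.py invokes this right after configuring the stub:
remote_api_stub.MaybeInvokeAuthentication()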