I have two apps, my_app and my_endpoint_app. I can access my_endpoint_app with any version label in the URL, and it will automatically route to the default version if the label does not match an existing version.
Example:
https://josh-dot-my_endpoint_app.appspot.com/ will respond with the default version since there is no josh version deployed.
If I try to do the same with a Google Cloud Endpoint service call, I get a Not Found error.
Example:
The unsuccessful https://josh-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint vs the working https://my_endpoint_app.appspot.com/_ah/api/myendpoint
I have a couple of Google AppEngine applications that communicate with each other via Cloud Endpoints.
Under normal usage this is OK because I know the version beforehand and avoid these errors. In our development environment, this falls apart. In order to support feature branches and testing in isolation, we push our code up to appspot using the -V switch of appcfg.py.
Example:
appcfg.py -A my_app -V josh update .
Now I can access my feature branch at https://josh-dot-my_app.appspot.com. In order to support some version label hackery, I dynamically calculate the right endpoint app to call with something like s/my_app/my_endpoint_app/g and then make my service calls there. This fails because the dynamic version label does not exist. If I push a version label with that name, it completes as expected.
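For illustration, here is a minimal sketch of that substitution (the hostnames are the ones from this question; the helper itself is hypothetical):

def endpoint_url(app_host, version=None):
    # Swap the app id for the endpoint app id (the s/my_app/my_endpoint_app/g step)
    host = app_host.replace('my_app', 'my_endpoint_app')
    if version:
        # '-dot-' is App Engine's separator for version-qualified hostnames
        host = '%s-dot-%s' % (version, host)
    return 'https://%s/_ah/api/myendpoint' % host

# endpoint_url('my_app.appspot.com', 'josh')
# -> 'https://josh-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint'
# which returns Not Found unless a 'josh' version of my_endpoint_app actually exists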
Is there any way to get Cloud Endpoints to answer on non-existent version label hostnames?
Scenarios that I want to support:
https://my_endpoint_app.appspot.com/_ah/api/myendpoint
Main application URL, routes to default version
https://josh-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint
Version does not exist, should route to default version
https://new-feature-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint
Version new-feature exists, should route to the new-feature version so that we can test new code in isolation before merging into the main code branch. These would be internal APIs that the current endpoints might make use of without changing what the endpoint accomplishes (performance improvements, etc.).
You can reroute any URL to any module/version via the dispatch file.
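For example, a minimal dispatch.yaml along these lines sends the Endpoints URLs to a specific module, with everything else falling through to the default module (the module name here is illustrative):

# dispatch.yaml - requests matching a rule go to the named module;
# unmatched requests fall through to the default module.
dispatch:
- url: "*/_ah/api/*"
  module: my-endpoint-module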
Just started using the Google Cloud SDK Shell after using the older, GUI-based, version. I have multiple projects under development, if that matters.
Here's what I do
run gcloud SDK shell (click on the icon!)
cd \myproject
dev_appserver.py app.yaml
In the browser (Chrome),
browse to http://localhost:8000/datastore
Under Datastore Viewer, I see 'tables' from a completely different project (say, myotherproject)
Under Datastore Indexes, I see 'indexes' from the correct project (myproject)
Under Task Queues, I see the correct queues listed (I have specified different queues setup for parts of myproject)
Everything works fine for myotherproject. So, is there something I am missing to get the Datastore Viewer to show the correct 'tables'?
Many thanks, David
Edit: no matter what project I run, Datastore Viewer shows the same data (from myotherproject) but Datastore Indexes show the correct indexes.
Edit: Windows 8.1, Python v2.7.13:a06454b1afa1
Edit: further questions: 1) does the gcloud SDK use a different datastore from the original App Engine SDK? 2) if so, where is it by default, or do I have to define it upfront?
Thanks to everyone for their help with this. It appears GCloud uses one datastore for all projects, so --datastore_path is not really optional when you have multiple projects. However, I kept getting errors with --datastore_path, so I went with the following...
dev_appserver.py --storage_path=c:\gcdata\projectname app.yaml
Yes, could have been c:\temp but this gives me separate 'databases', one for each project.
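Running a second project with its own storage root keeps its data separate in the same way (the path here is illustrative):

dev_appserver.py --storage_path=c:\gcdata\otherproject app.yaml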
Note also that GCloud SDK does not use the same data as the original App Engine SDK grrrrrr!
My reading of Cognito is that it can be used in place of a local Django admin database to authenticate users of a website. However I am not finding any soup-to-nuts examples of a basic "Hello, World" app with a login screen that goes through Cognito. I would very much appreciate it if someone could post an article that shows, step-by-step, how to create a Hello World Django app and a Cognito user pool, and then how to replace the default authentication in Django with a call to AWS Cognito.
In particular I need to know how to gather the information from the Cognito admin site that is needed to set up a call to Cognito API to authenticate a user.
There are two cases to consider: app user login to the app, and admin login to the Django admin URL of the site. I assume that I would want to use Cognito for both cases; otherwise I am leaving a potential hole where the admin URL is using a weaker login technology.
Current answers on AWS forums and StackExchange either say:
(1) It is a waste of time to use Cognito for authenticating a website; it is only for access to AWS resources.
(2) It is not a waste of time.
I am about to give up. I have gone as far as creating a sample Cognito user pool and user groups, and scouring the web for proper examples of this use case. (None found, or I wouldn't be writing.)
(3) https://github.com/capless/warrant and https://github.com/metametricsinc/django-warrant are two possible solutions from the AWS forums.
If you are reading this, you probably googled "aws cognito django" xD.
I just want to share what I did in order to get this thing to work:
Django-Warrant: a great AWS Cognito wrapper package.
Make sure to understand your current User model structure. If you use a custom user model, don't forget to map it using the COGNITO_ATTR_MAPPING setting.
Change your authentication to support 3rd-party connectivity. When you get a Cognito token from the client, convert it into your own token using OAuth/JWT/sessions.
Rethink your login/register process. Do you want different registration? The django-warrant package supports it...
At the end of the day, this is a GREAT solution for fast authentication.
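For reference, the wiring boils down to something like the following in settings.py. This is a sketch based on the django-warrant README; the pool and client IDs are placeholders, and setting names may differ between package versions:

INSTALLED_APPS = [
    # ... your existing apps ...
    'django_warrant',
]

# Try Cognito first, then fall back to the local database so a local
# superuser can still log into the Django admin.
AUTHENTICATION_BACKENDS = [
    'django_warrant.backend.CognitoBackend',
    'django.contrib.auth.backends.ModelBackend',
]

COGNITO_USER_POOL_ID = 'us-east-1_XXXXXXXXX'  # from the Cognito console
COGNITO_APP_ID = 'your-app-client-id'         # the user pool's app client id

# Map Cognito attributes onto a custom user model's fields
COGNITO_ATTR_MAPPING = {
    'email': 'email',
    'given_name': 'first_name',
    'family_name': 'last_name',
}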
To add to the accepted answer, there is a simple but very important extra step that I found was necessary to take to use django-warrant with Django 2.0:
The conditional in backend.py in the root package needs to be changed from:
if DJANGO_VERSION[1] > 10:
to:
if DJANGO_VERSION[1] > 10 or DJANGO_VERSION[0] > 1:
Using django-warrant with Zappa and AWS Lambda:
The project I am working on also uses Zappa to enable serverless deployment of my Django app to AWS Lambda. Although the above code fixed django-warrant for me when testing locally, after deploying the app to the Lambda environment I had another significant issue stemming from some of django-warrant's supporting packages, primarily python-jose-pycryptodome, which django-warrant uses during the authentication process.

The issue showed itself in the form of a FileNotFound error related to the Crypto._SHA256 file. This error appears to have been caused because pycryptodome expects different files to be available in the Crypto package at runtime on Windows (which I am developing on) and Linux (the Lambda environment) respectively. I ended up solving this issue by downloading the Linux version of pycryptodome and merging its Crypto package with the Crypto package from the Windows version.
TLDR: If you want to use django-warrant with AWS Lambda and you are developing on a Windows machine, make sure to download the Linux version of pycryptodome and merge its Crypto package with the same from the Windows version.
Note: The versions of pycryptodome and python-jose (not python-jose-cryptodome) that I ended up using to achieve the above were 3.7.2 and 3.0.1 respectively.
Reading this doc, it says: "You must initially deploy a version of your app to the default service before you can create and deploy subsequent services."
I don't understand this because I thought the GAE microservices were separate, independent things.
But it seems this is not an accurate depiction of how GAE microservices work? Is there some kind of master controller "default" service that sets top-level config or does some kind of routing? If I'm just running a bunch of non-web apps (meaning apps that will run on a schedule and process data), and a frontend "app" for accepting web requests isn't necessary, then why do I still need to create the default service?
The reason is that there are also several app-level configs, applicable to all services/modules:
dispatch.yaml
index.yaml
queue.yaml
cron.yaml
Some of these configs can have trouble if not deployed after/together with the default service. And some services may have dependencies on the app-level configs.
The requirement of deploying default first is simply a measure to reduce the risk of initial deployment problems. Subsequent deployments no longer have this restriction (since default is already deployed).
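In practice the first deployment might look something like this (the service names and file layout are illustrative):

# default service must go first
gcloud app deploy default/app.yaml
# then any other services
gcloud app deploy worker/app.yaml
# then the app-level configs
gcloud app deploy dispatch.yaml queue.yaml cron.yaml index.yaml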
Yes, the default service is mandatory (sort of like a kitchen sink for all kinds of stuff, for example requests not matching any dispatch rule are sent to the default service). So just declare one of your non-web apps the default one (it doesn't matter what the default service actually does).
Somewhat related (mostly for the examples): Can a default service/module in a Google App Engine app be a sibling of a non-default one in terms of folder structure?
You can deploy a default app by initializing a default AppEngine application in your project by running ./init_appengine.sh
I am running a Virtual Machine on Google Cloud and am using their SDK to deploy with the following command:
gcloud preview app deploy ./app.yaml
The deployment works; however, for every deployment a new instance is created, which can only be reached by adding the version id to the domain name. I tried removing older instances through the developer dashboard but they just restart directly after that.
How can I remove the newly created instances and overwrite the default version on the main domain by default when deploying?
To do this directly from gcloud, use the following two flags:
--set-default:
Set the deployed version to be the default serving version.
--version:
The version of the app that will be created or replaced by this
deployment. If you do not specify a version, one will be generated for
you.
(both from gcloud preview app deploy --help).
If you set --version to be the same each time, the current version deployed at that URL will be overwritten, and a new version will not be created on each deployment.
If you use --set-default, the deployed version can be accessed just using the domain name (without the version as a subdomain).
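Putting the two flags together, something like this should overwrite the same version on every deployment and keep serving it on the bare domain (the version name is just an example):

gcloud preview app deploy ./app.yaml --version main --set-default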
Deleting the other versions by hand in the developer console will be the simplest way to get rid of them.
Turns out you can't edit this under Compute Engine > VM Instances. You have to look under AppEngine > Versions and change the default version there + delete the older ones.
In order to redeploy a GAE application, I currently have to install the GAE deployment tools on the system that I am using for deployment. While this process is relatively straightforward, it is a manual process that does not work from behind a firewall, and the deployment tools must be installed on every machine that will be used for updating GAE apps. A more ideal solution would be if I could update a GAE application from another GAE application that I have deployed previously. This would remove the need to have multiple systems configured to deploy apps.
Since the GAE deployment tools are written in Python and App Engine supports Python, is it possible to modify appcfg.py to work from within GAE? The use case would be to pull a project from GitHub or some other online repository and update one GAE application from another GAE app. If this is not possible, what is the limiting constraint?
Is it possible? Yes. The protocol appcfg uses to update apps is entirely HTTP-based, so there's absolutely no reason you couldn't write an app that's capable of deploying other apps (or redeploying itself - self-modifying code)! You may even be able to reuse large parts of appcfg.py to do it.
Is it easy? Probably not. It's quite likely you'll need to understand a decent chunk of appcfg's internals, and the RPCs it uses to upload new apps - not a trivial undertaking. You'll also need to store your credentials in the app, in all likelihood - though you can use a role account that is an admin only for the apps it's deploying, to minimize the risk there.
One limiting constraint could be the protocol that the Python SDK uses to communicate with the GAE servers. If it only uses HTTP, you might be OK, but if it's anything else you might be out of luck, because you can't open a socket directly from within GAE.
What problems did you have when trying to update from behind a firewall?
I've had some, but I finally managed to work around them.
About your question, the constraint is that you cannot write files from within a GAE app, so even though you could possibly pull from the VCS, you can't write the pulled files anywhere.
So you would have to update from outside GAE in the first place.
Anyway, every machine that needs to update the GAE app should have the SDK anyway, just to see if the changes work.
So, if you really want to do this, you have two alternatives:
1. Host your own "updater" site and install the SDK there; then when you want to update, log into your site (or run a script) and do the remote update.
2. Although I don't know Amazon EC2 well, I think you can do pretty much the same thing as option 1 from there.
Finally, I think the password for updating always has to be typed in (though you could take the App Engine SDK, which is open source, and modify it).