I am using Google Cloud App Engine and deploying with gcloud app deploy and a standard app.yaml file. My requirements.txt file has one private package that is fetched from GitHub (git+ssh://git@github.com/...git). This install works locally, but when I run the deploy I get
Host key verification failed.
fatal: Could not read from remote repository.
This suggests there is no SSH key available when installing. Reading the docs (https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies), it appears that this just isn't an option:
Dependencies are installed in a Cloud Build environment that does not provide access to SSH keys. Packages hosted on repositories that require SSH-based authentication must be copied into your project directory and uploaded alongside your project's code using the pip package manager.
To me this seems far from optimal: the whole point of factoring the code out into a package was to avoid duplicating it across repos. Now, if I want to use App Engine, you're telling me this isn't possible?
Is there really no workaround?
See:
https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies#private_dependencies
The App Engine service does not (and should not) have access to your private repo.
One alternative (that you don't want) is to give the service your SSH private key so that it can authenticate to your repo.
The other -- as documented -- is that you must provide the content of your private repo to the service as part of your upload.
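For the second route, something like this works (a sketch; run it locally where your SSH key is available, and treat the package URL and the resulting filename as placeholders):

# Run locally, where your SSH key works; puts a copy of the package
# (sdist or wheel) into ./vendor inside your project directory.
pip download --no-deps -d vendor "git+ssh://git@github.com/your-org/your-package.git"

Then point requirements.txt at the local copy instead of the git+ssh URL, e.g. ./vendor/your_package-1.0.tar.gz, and your upload carries it along to Cloud Build.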
I'm going through the same issue: deploying to Google Cloud a Python project whose requirements.txt contains some private repositories. As @DazWilkin wrote already, there's no way to deploy it the way you normally would.
One option would be to create a Docker image of the whole project and its dependencies, push it to the Google Cloud container registry, and then pull it into the App Engine instance.
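A sketch of that route, assuming Docker BuildKit (which can forward your local SSH agent into a single build step) and an App Engine flex custom runtime; image names and the entry point are placeholders:

# syntax=docker/dockerfile:1
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
# git + ssh client for the private dependency; trust GitHub's host key.
RUN apt-get update && apt-get install -y git openssh-client \
 && mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# BuildKit mounts your SSH agent for this step only.
RUN --mount=type=ssh pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]

Build and push with the agent forwarded, then deploy the prebuilt image (app.yaml: runtime: custom, env: flex):

DOCKER_BUILDKIT=1 docker build --ssh default -t gcr.io/PROJECT_ID/app .
docker push gcr.io/PROJECT_ID/app
gcloud app deploy --image-url=gcr.io/PROJECT_ID/app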
Related
I have a question.
What's the best approach to building a Docker image using the pip artifact from the Artifact Registry?
I have a Cloud Build pipeline that runs a Docker build; the Dockerfile essentially just runs pip install -r requirements.txt, and one of the dependencies is a library hosted in Artifact Registry.
When executing a step with the image gcr.io/cloud-builders/docker, I get an error that my Artifact Registry is not accessible, which is quite logical: credentials are available to the image performing the step, not to the image being built inside that step.
Any ideas?
Edit:
For now I will use Secret Manager to pass a JSON key to my Dockerfile, but I hope for a better solution.
When you use Cloud Build, you can forward metadata server access through the Docker build process. It's documented, but absolutely not clear (personally, the first time I emailed the Cloud Build PM to ask, and he sent me the documentation link).
Now your docker build can access the metadata server and is authenticated with the Cloud Build runtime service account. It should make your process easier.
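Concretely, the piece that's easy to miss is the special cloudbuild Docker network: pass it to docker build and the image being built can reach the metadata server. A sketch, assuming the keyrings.google-artifactregistry-auth pip keyring and placeholder project/region/repo names:

# cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--network=cloudbuild', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']

# Dockerfile
FROM python:3.9-slim
# The keyring turns metadata-server credentials into pip auth for Artifact Registry.
RUN pip install keyring keyrings.google-artifactregistry-auth
COPY requirements.txt .
RUN pip install -r requirements.txt \
    --extra-index-url https://us-central1-python.pkg.dev/my-project/my-repo/simple/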
I built a functioning Python API that runs from my local machine. I'd like to run this API on Google Cloud, but after looking through the documentation and googling every variation of "run local python API from google cloud SDK", I had no luck finding anything that wouldn't involve restructuring the script heavily. I have a hunch that "google run" or "API endpoint" might be what I'm looking for, but as a complete newbie to everything other than Firestore (which I would rather not convert my entire API into if I don't have to), I want to ask if there's a straightforward way to do this.
tl;dr The API runs successfully when I simply type "python apiscript.py" into local console, is there a way I can transfer it to Google Cloud without adjusting the script itself too much?
IMO, the easiest solution for a portable app is to use a container. And to host the container in serverless mode, you can use Cloud Run.
In the getting started guide, you have a Python example. The main task for you is to create a Dockerfile:
FROM python:3.9-slim

# Send logs straight to stdout/stderr without buffering.
ENV PYTHONUNBUFFERED True

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./

# Install production dependencies.
RUN pip install -r requirements.txt

# Start the API on container startup.
CMD ["python", "apiscript.py"]
I adapted the script to your description, and I assumed that you have a requirements.txt file for the dependencies.
Now, build your container
gcloud builds submit --tag gcr.io/<PROJECT_ID>/apiscript
Replace <PROJECT_ID> with your project ID, not the project name (they are sometimes the same, but confusing the two is a common newcomer mistake).
Deploy on Cloud Run
gcloud run deploy --region=us-central1 --image=gcr.io/<PROJECT_ID>/apiscript --allow-unauthenticated --platform=managed apiscript
I assume that your API is served on port 8080 (the port Cloud Run provides via the PORT environment variable by default); otherwise you need to add a --port parameter to override this.
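For reference, a minimal sketch of what apiscript.py could look like to honor that contract (Flask assumed; the route is a placeholder):

import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Cloud Run tells the container which port to serve on via $PORT
    # (8080 by default); bind to 0.0.0.0 to be reachable from outside.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))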
That should be enough. This is a getting-started example; you can change the region, the security mode (here, unauthenticated), the service name, and the project.
In addition, this deployment uses the Compute Engine default service account. You can use another service account if you want, but in any case you need to grant the service account used the permission to access the Firestore database.
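For example (a sketch; the project ID and service account email are placeholders, and roles/datastore.user is the role covering Firestore reads and writes):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/datastore.user"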
Just a conceptual question here.
I'm a newbie to AWS. I have a Node app and a Python file that is currently on a Flask server. The Node app sends data to the Python server and gets data back. This takes approx. 3.2 secs. I am not sure how I can apply this to AWS. I tried SageMaker but it was really costly for me. Is there any way I can create a Python server with an endpoint in AWS within the free tier?
Thanks
Rushi
You do not need to use SageMaker to deploy your Flask application to AWS. AWS has good documentation on deploying a Flask application to an AWS Elastic Beanstalk environment.
Other than that, you can also deploy the application using one of two methods:
via EC2
via Lambda
EC2 Instances
You can launch an EC2 instance with a public IP and SSH enabled from your IP address. Then SSH into the instance and install Python, its libraries, and your application.
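Roughly like this (a sketch, assuming an Amazon Linux 2 instance and a key pair; names and the IP are placeholders):

ssh -i my-key.pem ec2-user@<instance-public-ip>   # connect to the instance
sudo yum install -y python3 python3-pip           # install Python and pip
pip3 install --user flask                         # install your app's libraries
# copy apiscript.py over (e.g. with scp), then run it:
python3 apiscript.py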
Lambda
AWS Lambda is the perfect solution: it scales automatically, depending on the requests your application receives.
Lambda needs your dependencies to be available in the deployment package, so you need to install them using the --target parameter, zip the Python code along with the installed packages, and then upload the archive to Lambda.
pip install --target ./package Flask  # Install the dependencies into ./package.
cd package
zip -r9 function.zip . # Create a ZIP archive of the dependencies.
cd .. && zip -g function.zip lambda_function.py # Add your function code to the archive.
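For completeness, a minimal sketch of the lambda_function.py entry point (lambda_function.lambda_handler is the default handler name; adapting your Flask routes to this event model, or using a WSGI adapter, is a separate step):

import json

def lambda_handler(event, context):
    # 'event' carries the request payload (e.g. from API Gateway).
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": body}),
    }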
For more detailed instructions, you can read the documentation:
Lambda
My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100 MB!), then set the LD_LIBRARY_PATH environment variable in app.yaml to /opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH. Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
If any of your dependencies is not available in the base GAE flex images provided by Google and cannot be installed via pip (because it's not a Python package, it's not available on PyPI, or for whatever other reason), then you can't use the requirements.txt file to get it installed in your GAE flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
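A sketch of such a Dockerfile, assuming Google's Python flex base image and that you ship the Instant Client zip inside your project directory (filenames and the entry point are placeholders; note the flex base image is Debian-based, so it's apt-get and libaio1 rather than yum and libaio):

FROM gcr.io/google-appengine/python

# System library required by the Instant Client, plus unzip to unpack it.
RUN apt-get update && apt-get install -y libaio1 unzip && rm -rf /var/lib/apt/lists/*

COPY instantclient-basiclite-linux.zip /opt/oracle/
RUN cd /opt/oracle && unzip instantclient-basiclite-linux.zip \
    && rm instantclient-basiclite-linux.zip

# Let the dynamic linker find the Instant Client shared libraries.
ENV LD_LIBRARY_PATH /opt/oracle/instantclient_12_2

ADD . /app
RUN pip install -r requirements.txt
CMD gunicorn -b :$PORT main:app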
Answering your first question: I think the instructions on the Oracle website just mean that you have to install that library for your application to work.
In the case of App Engine flex, the way to ensure that libraries are present in the deployment is the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I will assume that the "Instant Client files" are not libraries, but data your app needs in order to run. You should use Google Cloud Storage to serve them, or any other storage alternative within Google Cloud.
I believe that, if this is all your app needs to work, pushing your own custom container should not be necessary.
I have an existing Django web application currently deployed on AWS. I want to deploy it on Microsoft Azure using Cloud Services. How do I create the config files for deploying a web app on Azure? How do I access environment variables on Azure? I am not using Visual Studio; I am developing the web app in a Linux environment and using Git for code management. Please help.
It sounds like you want to know which is the best choice for deploying a Django app managed with Git on Linux: Cloud Services or App Service on Azure.
Per my experience, I think deploying a pure web app into App Service on Azure via Git on Linux is the simplest way for you. You can refer to the official documents below to learn how to do it via the Azure CLI or Git alone.
Deploy your first Python web app to Azure in five minutes
Local Git Deployment to Azure App Service
And there is a code sample of Django on App Service as a reference, which shows how to configure the app to run on Azure.
However, if your app needs more powerful features & performance, using Cloud Services for your Django app is also a better way than using a VM directly. As a reference, please view the document Python web and worker roles with Python Tools for Visual Studio to learn how Azure supports Python & Django on Cloud Services; you can create & deploy it via the Azure portal in the browser on Linux. Meanwhile, there is also a third-party GitHub sample, Django WebRole for Cloud Service, which you can refer to for creating a cloud service project structure without PTVS for VS on Linux.
Hope it helps.
I read this post, decided the how-to guides Peter Pan posted looked good, and set off on my own. With my one business day's worth of experience: if you are looking to deploy your app to Azure, start with the Marketplace Django app and go from there. The reason is that the virtual environment comes with it, along with the activate script needed to run the virtual environment, and the web.config is set up for you. If you follow the start-from-scratch how-to guides, these are the hardest parts to get right. Once you create the App Service from the template, git clone the repo to your local machine, make a small change, and push it back up. Start by running the command below in bash.
az webapp deployment source config-local-git --name <app name> --resource-group <group name> --query url --output tsv
Use the result of the command to add the git repo as a remote source.
git remote add azure https://<ftp_credential>@<app_name>.scm.azurewebsites.net/<app_name>.git
Finally, commit your changes and deploy
git add -A
git commit -m "Test change"
git push azure master
A couple of side notes
If you do not have your bash environment set up, you'll need to do so to use the az commands. The Marketplace app does not run error-free locally; I have not dug into this yet.
Good luck!