Download Google App Engine Project - python

I've gotten appcfg.py to run. However, when I run the command it doesn't actually download the files. Nothing appears in the destination directory.
There was a mix-up and a lot of work was lost; the only way to recover it is to download the files from the host.
python appcfg.py download_app -A beatemup-1097 /home/chaserlewis/Desktop/gcloud
The output is
Host: appengine.google.com
Fetching file list...
Fetching files...
Then it just returns without having downloaded anything. It is definitely hosted, so I'm not sure what else to do.
I am doing this from a different computer than the one I deployed from, if that matters. I couldn't get appcfg.py to run on my Windows machine, unfortunately.

It might be due to the omitted version flag. Try the following:
Go to the App Engine versions page in the console and check the version of your app that is serving traffic. If you don't specify the -V flag, the appcfg command will try to download the default version, which isn't necessarily your latest version or the version serving traffic.
Add the -V flag to your command with the target version that you identified from the console.
python appcfg.py download_app -A beatemup-1097 -V [YOUR_VERSION] /home/chaserlewis/Desktop/gcloud
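If you also have the gcloud CLI installed and authenticated for the project, the deployed versions (including which one is serving traffic) can be listed from the command line as an alternative to the console:
gcloud app versions list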

Related

Error with Python subprocess when running Flask app using nginx + WSGI

I have developed a Python web server using Flask, and some of the endpoints make use of the subprocess module to call different executables. In development, using the Flask debug server, everything works fine. However, when running the server with nginx + WSGI (on the exact same machine), some subprocess calls fail.
For example, one of the tools I'm using is Microsoft's dotnet, which I installed from my user as sudo apt-get install -y aspnetcore-runtime-5.0 and is then called from Python with the subprocess module. When I run the server with python3 server.py, it works like a charm. However, when using nginx and WSGI, the subprocess call fails with an exception that says: /bin/sh: 1: dotnet: not found.
I suspect this is due to the command not being accessible to the user and group running the server. I have used this guide as a reference to deploy the app, and on the wsgi .ini file, I have set uid = javierd and gid = www-data, while on the systemd .service file I have User=javierd, Group=www-data.
I have tried adding the executables' paths to /etc/profile, but it didn't work, and I don't know any other way to fix it. I also find it very surprising that this happens to some executables but not to all, and that it happens to dotnet, for example, which is located at /usr/bin/dotnet and should therefore be accessible to every user. Any idea how to solve this problem? Furthermore, if somebody could explain to me why this is happening, I would really appreciate it.
Thanks a lot!
OK, finally, after a big headache, I noticed the error, and it was really simple.
In the tutorial I linked, when creating the systemd service file, the following line was included: Environment="PATH=/home/myuser/myfolder/enviroment/bin".
Of course, as this was overriding the PATH, there was no way of executing the commands. Once I noticed it, I just removed that line, restarted the service, and it was fixed.
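If you prefer to keep an Environment line in the service file, another option on the Python side is to call the executable by its absolute path and pass an explicit environment to subprocess, so the call no longer depends on whatever PATH the WSGI service inherited. This is only a sketch, assuming dotnet really is at /usr/bin/dotnet as stated in the question:

import os
import subprocess

# Extend the inherited PATH rather than relying on it already containing /usr/bin.
env = dict(os.environ)
env["PATH"] = env.get("PATH", "") + ":/usr/bin:/usr/local/bin"

# Calling the binary by absolute path sidesteps PATH lookups entirely.
output = subprocess.check_output(["/usr/bin/dotnet", "--info"], env=env)
print(output.decode())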

Missing MapBox Token when using Apache Superset with Docker

I've installed Apache Superset according to this official manual. I can create plots, connect to databases, etc. without any problems, but as soon as I want to plot latitude and longitude data with Mapbox or deck.gl plots, I get this warning and can't see any maps:
NO_TOKEN_WARNING
For information on setting up your basemap, read
Note on Map Tokens
I have a Mapbox API key (let's say XXYYZZ) and followed instructions where I created a superset_config.py file in the home folder of the server where Superset is running. This is the code I used:
Entries in .bashrc
export SUPERSET_HOME=/home/maximus/envs/superset/lib/python3.6/site-packages/superset
export SUPERSET_CONFIG_PATH=$HOME/.superset/superset_config.py
export PYTHONPATH=/home/maximus/envs/superset/bin/python:/home/maximus/.superset:$PYTHONPATH
Created superset_config.py in .superset
path: ~/.superset/superset_config.py
with the following code:
#---------------------------------------------------------
# Superset specific config
#---------------------------------------------------------
ROW_LIMIT = 50000
MAPBOX_API_KEY = 'XXYYZZ'
As I'm using Docker, I thought maybe I need to do the same within the main Docker container of Superset (superset_app), but it still does not work.
My server runs on Ubuntu 18.04 LTS. Does anyone have any ideas on how to solve this problem with Docker, Superset, and Mapbox?
I solved the problem by adding my mapbox token (XXYYZZ) to the docker environment file which is used by docker-compose.
This is what I did in detail:
As superset runs on my server I connected via ssh
Stop superset with docker-compose down
cd into the docker folder within the folder where the docker-compose files are --> cd superset/docker
I was running the non-dev version with docker-compose, so I opened the .env-non-dev file with nano. If you run the "normal" version, just edit the .env file instead.
Comment: I'm not sure if this is the intended way, but apparently you can edit the environment parameters there.
I added my Mapbox Key (MAPBOX_API_KEY = "XXYYZZ")
Finally just start superset again with docker-compose -f docker-compose-non-dev.yml up -d or docker-compose -f docker-compose.yml up -d respectively.
That's all; I can now see the maps when opening the deck.gl sample dashboard.
The documentation and a YouTube video tutorial seem outdated.
For the most recent release:
clone the superset repo;
add the MAPBOX_API_KEY to the superset/config.py or docker/pythonpath_dev/superset_config.py;
then running docker-compose up solved the problem.
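For reference, a minimal sketch of what such a config entry can look like when the token is read from the container's environment rather than hard-coded (this assumes the key is exported as MAPBOX_API_KEY, for example via the docker .env file mentioned above; it is not copied from the Superset sources):

# superset_config.py -- minimal sketch, not the full Superset configuration
import os

ROW_LIMIT = 50000
# Fall back to an empty string if the variable is not set inside the container.
MAPBOX_API_KEY = os.environ.get("MAPBOX_API_KEY", "")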

Server is freezing while trying to get a backup of MongoDB in docker-compose

I have a back-end API server created with Python & Flask. I use MongoDB as my database. I build and run docker-compose every time I update my source code. Because of this, I always take a backup of my database before stopping and restarting the Docker container.
From the beginning I have been using this command to get a backup in my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB> backup.gz
This command worked well previously. Then I restore the database after restarting docker-compose to bring my back-end up with the updated code. I used this command to restore the database:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But since today, when I try to take a backup from the server while the container is still running (as usual), the server freezes, as shown in the image below.
I am using an Amazon EC2 server with Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify you can connect to your deployment using the mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of the connection from mongodump.
If that works, try dumping other collections.
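For the connectivity check, a quick test from the same compose setup might look like the line below (a sketch: it assumes the database service is named db as in the commands above, and that the image ships the legacy mongo shell; newer images use mongosh instead):
sudo docker-compose exec db mongo --eval "db.adminCommand('ping')"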
After digging for three days for the right cause, I found that the main culprit is Apache.
I had recently installed Apache to host my frontend as well. While Apache is running, the server won't allow me to dump the MongoDB backup. Somehow Apache was conflicting with Docker.
My solution:
1. Stop the Apache service
sudo service apache2 stop
2. Then take the MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB> backup.gz
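Once the dump has finished, Apache can be started again (assuming the default service name used above):
sudo service apache2 start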

Gcloud Internal Error while submitting training inside Docker container

I'm building a Docker container to submit ML training jobs using gcloud - the runnable is actually a Python program, and gcloud is executed via subprocess.check_output. Running the program outside a Docker container works just fine, which makes me wonder if there is some dependency that is not installed, but gcloud simply outputs no useful logs at all.
While running gcloud ml-engine jobs submit training the executable returns exit status 1 simply outputting Internal Error. The logs that are available on Google Cloud Console are always 5 entries of "Validating job requirements..." with no further information.
The Docker container has the following installed dependencies (some are not relevant to Google Cloud ML but are used by other features in the program):
Via apt-get: python, python-pip, python-dev, libmysqlclient-dev, curl
Via pip install: flask, MySQL-python, configparser, pandas, tensorflow
The gcloud tool itself is installed by downloading the SDK and installing it through command line:
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
Account credentials are setup via
RUN gcloud auth activate-service-account --key-file path-to-keyfile-in-docker-container
RUN gsutil version -l
The last gsutil version command is pretty much just there to make sure the SDK installation is working.
Does anyone have any clue what might be happening, or how to further debug what might be causing an Internal Error in gcloud?
Thanks in advance! :)
Please make sure all the parameters are set properly and make sure you have all your dependencies uploaded and packaged properly.
If everything is done and you still can't run the job, you will need more than just "Internal Error" to pinpoint the issue. Please either contact Google Cloud Platform support or file a bug on the Public Issue Tracker to get further assistance.
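Since gcloud is driven from Python via subprocess.check_output, it can also help to capture stderr and raise gcloud's verbosity so that whatever error text gcloud prints is not lost. A sketch, with the job arguments left as placeholders for whatever you already pass:

import subprocess

cmd = [
    "gcloud", "ml-engine", "jobs", "submit", "training", "example_job",
    # ... keep the flags you already use here (module name, package path, region, etc.) ...
    "--verbosity", "debug",  # global gcloud flag; prints far more detail than the default
]
try:
    print(subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode())
except subprocess.CalledProcessError as exc:
    # exc.output now holds gcloud's own error text instead of just "exit status 1".
    print(exc.output.decode())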

Heroku install letsencrypt - su: must be run from a terminal

I am trying to create an SSL certificate for my website to get the green lock.
While researching how to do that (I've never done anything with SSL certificates before), I came across Let's Encrypt. But I can't figure out how to install it on my server.
I have my application on Heroku and a custom domain at a random web host. I point this domain via a CNAME DNS record to my Heroku application.
As far as I understand, the whole SSL setup has to be configured on Heroku, because the data is there as well.
I have tried a few things, none of which worked. But this attempt seems to be close:
I created a folder "letsencrypt" in my app locally
I logged in to Heroku via CMD
I pushed everything to Heroku: git push heroku master
I used heroku run bash to access the folder I created
I entered the folder which I just created: cd letsencrypt
I cloned letsencrypt into this folder: git clone https://github.com/letsencrypt/letsencrypt
I went into the cloned folder again: cd letsencrypt
I used ./letsencrypt-auto --help
Which gave me:
"sudo" is not available, will use "su" for installation steps...
Bootstrapping dependencies for Debian-based OSes...
su: must be run from a terminal
apt-get update hit problems but continuing anyway...
su: must be run from a terminal
Disclaimer: I have not tried this yet, but:
This seems to be a pretty comprehensive doc.
