Missing MapBox Token when using Apache Superset with Docker - python

I've installed Apache Superset according to the official manual. I can create plots, connect to databases, etc. without any problems, but when I want to plot latitude and longitude data with Mapbox or deck.gl plots, I get this warning and can't see any maps:
NO_TOKEN_WARNING
For information on setting up your basemap, read
Note on Map Tokens
I have a Mapbox API key (let's say XXYYZZ) and followed instructions where I created a superset_config.py file in the home folder of the server where Superset is running. This is the code I used:
Entries in .bashrc
export SUPERSET_HOME=/home/maximus/envs/superset/lib/python3.6/site-packages/superset
export SUPERSET_CONFIG_PATH=$HOME/.superset/superset_config.py
export PYTHONPATH=/home/maximus/envs/superset/bin/python:/home/maximus/.superset:$PYTHONPATH
Created superset_config.py in .superset
path: $ ~/.superset/superset_config.py
with following code
#---------------------------------------------------------
# Superset specific config
#---------------------------------------------------------
ROW_LIMIT = 50000
MAPBOX_API_KEY = 'XXYYZZ'
As I'm using Docker, I thought maybe I needed to do the same within the main Docker container of Superset (superset_app), but it still does not work.
My server runs Ubuntu 18.04 LTS. Does anyone have ideas on how to solve this problem with Docker, Superset and Mapbox?

I solved the problem by adding my Mapbox token (XXYYZZ) to the Docker environment file used by docker-compose.
This is what I did in detail:
As Superset runs on my server, I connected via SSH
Stopped Superset with docker-compose down
cd into the docker folder within the folder where the docker-compose files are: cd superset/docker
I was running the non-dev version with docker-compose, therefore I opened the .env-non-dev file with nano. If you run the "normal" version, just edit the .env file instead.
Comment: I'm not sure if this is the intended way, but apparently you can edit the environment parameters there.
I added my Mapbox key (MAPBOX_API_KEY=XXYYZZ; note that entries in a Docker env file take no spaces or quotes around the value)
Finally, just start Superset again with docker-compose -f docker-compose-non-dev.yml up -d or docker-compose -f docker-compose.yml up -d, respectively.
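The edit itself boils down to a one-liner; a minimal sketch, assuming you are already in superset/docker and using the placeholder key XXYYZZ from above:

```shell
# Append the key to the env file docker-compose reads
# (.env-non-dev for the non-dev stack, .env otherwise)
ENV_FILE=.env-non-dev
echo 'MAPBOX_API_KEY=XXYYZZ' >> "$ENV_FILE"
# Verify the entry landed in the file
grep MAPBOX_API_KEY "$ENV_FILE"
```

After that, restart the stack with docker-compose as described so the container picks up the new variable.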
That's all; I can now see the maps when opening the deck.gl sample dashboard.

The documentation and a YouTube video tutorial seem outdated.
For the most recent release:
clone the superset repo;
add MAPBOX_API_KEY to superset/config.py or docker/pythonpath_dev/superset_config.py;
then run docker-compose up; that solved the problem.


Installing packages in a Kubernetes Pod

I am experimenting with running Jenkins on a Kubernetes cluster. I have achieved running Jenkins on the cluster using a Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my Jenkinsfile, I have tried the following:
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it fails with java.io.IOException: error=2, No such file or directory.
It is not feasible to always run the Python install command hardcoded into the Jenkinsfile. After some research I found out that I have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for pods; there are only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use the result as a template for the Jenkins agent pods.
This is the default YAML file that I used for the helm chart - jenkins-values.yaml
Also I'm not sure if I need to use helm.
You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this (the stock jenkins/jenkins image runs as the jenkins user, so switch to root for the install):
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y python3 python3-pip
USER jenkins
Then build the container image, push it to a container registry, and replace the image: jenkins/jenkins reference in your Helm chart with the name of the image you built, including the registry you uploaded it to. With this, your applications are installed in your container every time it runs.
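As a sketch, the override in the chart values might look like this; the exact key names depend on the Helm chart version, and the registry name and tag below are placeholders:

```yaml
# jenkins-values.yaml (fragment) -- key names vary by chart version
controller:
  image: registry.example.com/jenkins-python   # your custom image
  tag: latest
```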
The second way, which works but isn't perfect, is to run environment commands, with something like what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some deployments already use startup commands, and by redefining the entrypoint you can prevent the container's original startup command from ever running, causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
Otherwise, there's a really improper way of installing programs in a running pod - use kubectl exec -it deployment.apps/jenkins -- bash then run your installation commands in the pod itself.
That being said, it's a poor idea to do this, because if the pod restarts, it will revert to the original image without the required applications installed. If you build a new container image, your apps will remain installed each time the pod restarts. This approach should basically never be used, unless it is a temporary pod used as a testing environment.

Running a python script inside an nginx docker container

I'm using nginx to serve some of my docs, and I have a Python script that processes these docs for me. I don't want to pre-process the docs and add them in before the Docker container is built, since they can grow pretty big and keep increasing in number. What I want is to run my Python (and Bash) scripts inside the nginx container and have nginx serve the results. Is there a way to do this without pre-processing the docs before building the container?
I've attempted to execute RUN python3 process_docs.py, but I keep seeing the following error:
/bin/sh: 1: python: not found
The command '/bin/sh -c python process_docs.py' returned a non-zero code: 127
Is there a way to get python3 onto the Nginx docker container? I was thinking of installing python3 using:
apt-get update -y
apt-get install python3.6 -y
but I'm not sure this would be good practice. Please let me know the best way to run my pre-processing script.
You can use a bind mount to inject data from your host system into the container. This will automatically update itself when the host data changes. If you're running this in Docker Compose, the syntax looks like
version: '3.8'
services:
nginx:
image: nginx
volumes:
- ./html:/usr/share/nginx/html
- ./data:/usr/share/nginx/html/data
ports:
- '8000:80' # access via http://localhost:8000
In this sample setup, the html directory holds your static assets (checked into source control) and the data directory holds the generated data. You can regenerate the data from the host, outside Docker, the same way you would if Docker weren't involved:
# on the host
. venv/bin/activate
./regenerate_data.py --output-directory ./data
You should not need docker exec in normal operation, though it can be extremely useful as a debugging tool. Conceptually it might help to think of a container as identical to a process; if you ask "can I run this Python script inside the Nginx process", no, you should generally run it somewhere else.

Django manage.py command having SyntaxError on ElasticBeanstalk

I'm using AWS for the first time, and after uploading my Django project I wanted to know how to back up the contents of the DB to a file, so I can modify the data in case I have to change the models of my project (still in development) and keep some population data.
I thought about the Django dumpdata command, so to execute it on EB through the CLI I did the following (and here is where maybe I'm doing something wrong):
- eb ssh
- sudo -s
- cd /opt/python/current/app/
- python manage.py dumpdata --natural-foreign --natural-primary -e contenttypes -e auth.Permission --indent 4 > project_dump.json
From what I understand, the first command just opens an SSH session on the Elastic Beanstalk instance.
The second one is to get root permissions inside the Linux server to avoid problems creating and opening files, etc.
The third one just changes to where the current working application is deployed.
And the last is the command I use to dump all the data in a "human friendly" format, without restrictions, so I can use it in any other new database.
I have to say that I tried this last command on my local machine and it worked as expected, without any error or warning.
So, the issue I'm facing here is that when I execute this last command, I get the following error:
File "manage.py", line 14
) from exc
^
SyntaxError: invalid syntax
I also tried skipping sudo -s and just using the permissions of the user I log in with over SSH, but got: -bash: project_dump.json: Permission denied. That is why I thought using the sudo command would help here.
In addition, I followed this well-known tutorial to deploy Django+PostgreSQL on EB, so the user I'm using to access SSH is one in a group with AdministratorAccess permissions.
Before trying all of this, I also looked for a way of getting this information directly from AWS RDS, but I only found a way to restore a backup, without being able to modify the content manually, so it's not what I really need.
As on your local environment, you need to run your manage.py commands inside your correct python virtualenv and make sure the environment variables like RDS_USERNAME and RDS_PASSWORD are set. To do that, you need to:
Activate your virtualenv
Source your environment variables
As described at the end of the tutorial you mentioned, this is how to do it:
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
python manage.py <your_command>
And you have to do that every time you ssh into the machine.
Note: the reason you're getting the permission denied error is that when you pipe the output of dumpdata to project_dump.json, you're trying to write in the app directory itself. Not a good idea. Try piping to > ~/project_dump.json instead (your home directory), then sudo won't be needed.

How to use data in a docker container?

After installing Docker and googling for hours, I can't figure out how to place data in a Docker container; it seems to become more complex by the minute.
What I did; installed Docker and ran the image that I want to use (kaggle/python). I also read several tutorials about managing and sharing data in Docker containers, but no success so far...
What I want: for now, I simply want to be able to download GitHub repositories+other data to a Docker container. Where and how do I need to store these files? I prefer using GUI or even my GitHub GUI, but simple commands would also be fine I suppose.. Is it also possible to place data or access data from a Docker that is currently not active?
Note that I also assume you are using Linux containers. This works on all platforms, but on Windows you need to tell your Docker process that you are dealing with Linux containers. (It's a dropdown in the tray.)
It takes a bit of work to understand Docker, and the only way to understand it is to get your hands dirty. I recommend starting by making an image of an existing project: write a Dockerfile and play with docker build . etc.
To cover the Docker basics (fast version) first:
In order to run something in Docker, we first need to build an image.
An image is a collection of files.
You can add files to an image by making a Dockerfile.
Using the FROM keyword on the first line you extend an image, adding new files to it and creating a new image.
When starting a container we need to tell it what image it should use, and all the files in the image are copied into the container's storage.
The simplest ways to get files inside a container:
Create your own image using a Dockerfile and copy in the files.
Map a directory on your computer/server into the container.
You can also use docker cp to copy files to and from a container, but that's not very practical in the long run.
(docker-compose automates a lot of these things for you, but you should probably also play around with the docker command to understand how things work. A compose file is basically a format that stores arguments to the docker command so you don't have to write commands that are multiple lines long)
A "simple" way to configure multiple projects in Docker for local development.
In your project directory, add a docker-dev folder (or whatever you want to call it) that contains an environment file and a compose file. The compose file is responsible for telling docker how it should run your projects. You can of course make a compose file for each project, but this way you can run them easily together.
projects/
docker-dev/
.env
docker-compose.yml
project_a/
Dockerfile
# .. all your project files
project_b/
Dockerfile
# .. all your project files
The values in .env are sent as variables to the compose file. We simply add the full path to the project directory for now.
PROJECT_ROOT=/path/to/your/project/dir
The compose file will describe each of your project as a "service". We are using compose version 2 here.
version: '2'
services:
  project_a:
    # Assuming this is a Django project and we override command
    build: ${PROJECT_ROOT}/project_a
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      # Map the local source inside the container
      - ${PROJECT_ROOT}/project_a:/srv/project_a/
    ports:
      # Map port 8000 in the container to your computer at port 8000
      - "8000:8000"
  project_b:
    # Assuming this is a Django project as well
    build: ${PROJECT_ROOT}/project_b
    volumes:
      # Map the local source inside the container
      - ${PROJECT_ROOT}/project_b:/srv/project_b/
This will tell docker how to build and run the two projects. We are also mapping the source on your computer into the container so you can work on the project locally and see instant updates in the container.
Now we need to create a Dockerfile for each of our projects, or Docker will not know how to build the image for the project.
Example of a Dockerfile:
FROM python:3.6
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Copy the project into the image
# We don't need that now because we are mapping it from the host
# COPY . /srv/project_a
# If we need to expose a network port, make sure we specify that
EXPOSE 8000
# Set the current working directory
WORKDIR /srv/project_a
# Assuming we run django here
CMD python manage.py runserver 0.0.0.0:8000
Now we enter the docker-dev directory and try things out. Try to build a single project at a time.
docker-compose build project_a
docker-compose build project_b
To start the project in background mode.
docker-compose up -d project_a
Jumping inside a running container
docker-compose exec project_a bash
Or just run the container in the foreground:
docker-compose run project_a
There is a lot of ground to cover, but hopefully this can be useful.
In my case I run a ton of web servers of different kinds. This gets really frustrating if you don't set up a proxy in Docker so you can reach each container using a virtual host name. You can, for example, use jwilder/nginx-proxy (https://hub.docker.com/r/jwilder/nginx-proxy/) to solve this in a super-easy way. You can edit your own hosts file and make fake name entries for each container (just add a .dev suffix so you don't override real DNS names).
The jwilder/nginx-proxy container will automagically send you to a specific container based on a virtual host name you decide. Then you no longer need to map ports to your local computer, except for the nginx container, which maps port 80.
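A compose-file sketch of that setup, in the same compose version 2 style as above; the VIRTUAL_HOST value and service layout here are illustrative, and nginx-proxy discovers containers through the mounted Docker socket:

```yaml
version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      # The only port mapped to the host
      - "80:80"
    volumes:
      # nginx-proxy watches the Docker API to configure itself
      - /var/run/docker.sock:/tmp/docker.sock:ro
  project_a:
    build: ${PROJECT_ROOT}/project_a
    environment:
      # Requests with this Host header are routed to this container
      - VIRTUAL_HOST=project-a.dev
```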
For others who prefer using GUI, I ended up using portainer.
After installing portainer (which is done by using one simple command), you can open the UI by browsing to where it is running, in my case:
http://127.0.1.1:9000
There you can create a container. First specify a name and an image, then scroll down to 'Advanced container options' > Volumes > map additional volume. Click the 'Bind' button, specify a path in the container (e.g. '/home') and the path on your host, and you're done!
Add files to this host directory while your container is not running, then start the container and you will see your files in there. The other way around, accessing files created by the container, is also possible while the container is not running.
Note: I'm not sure whether this is the correct way of doing things. I will, however, edit this post as soon as I encounter any problems.
After pulling the image, you can use code like this in the shell:
docker run --rm -it -p 8888:8888 -v d:/Kaggles:/d kaggle/python
Run jupyter notebook inside the container
jupyter notebook --ip=0.0.0.0 --no-browser
This mounts the local directory into the container, giving it access to the files.
Then, go to the browser and hit http://localhost:8888, and when I open a new kernel it's with Python 3.5. I don't recall doing anything special when pulling the image or setting up Docker.
You can find more information from here.
You can also try using datmo to easily set up the environment and track machine learning projects, making experiments reproducible. You can run the datmo task command as follows to set up a Jupyter notebook:
datmo task run 'jupyter notebook' --port 8888
It sets up your project and files inside the environment to keep track of your progress.

Download Google App Engine Project

I've gotten appcfg.py to run. However, when I run the command, it doesn't actually download the files; nothing appears in the destination directory.
There was a mixup and a lot of work got lost, and the only way to recover it is to download it from the host.
python appcfg.py download_app -A beatemup-1097 /home/chaserlewis/Desktop/gcloud
The output is
Host: appengine.google.com
Fetching file list...
Fetching files...
Then it just returns without having downloaded anything. The app is definitely hosted, so I'm not sure what else to do.
I am doing this from a different computer than the one I deployed from, if that matters. I couldn't get appcfg.py to run on my Windows machine, unfortunately.
It might be due to the omitted version flag. Try the following:
Go to the App Engine versions page in the console and check the version of your app that is serving traffic. If you don't specify the -V flag, the appcfg command will try to download the default version, which isn't necessarily your latest version or the version serving traffic.
Add the -V flag to your command with the target version that you identified from the console.
python appcfg.py download_app -A beatemup-1097 -V [YOUR_VERSION] /home/chaserlewis/Desktop/gcloud
