I have a RHEL host with Docker installed; it has Python 2.7 by default. My Python scripts need a few extra modules which
I can't install due to lack of sudo access, and moreover I don't want to break the default Python that the host needs to function.
Now I am trying to get Python in a Docker container where I can add a few modules and do the needful.
Issue - the RHEL host with Docker is not connected to the internet, and can't be connected either.
The laptop I have doesn't have Docker, and I can't install Docker on it (no admin access) to create the Docker image and copy it to the RHEL host.
I was hoping that if a Docker image with Python can be downloaded from the internet, I might be able to use it as is!
Any pointers in any appropriate direction would be appreciated.
What have I done - tried searching for Python images, and been through the Docker documentation on creating images.
Apologies if the above question sounds silly; I am getting better with time on Docker :)
If your environment is restricted enough that you can't use sudo to install packages, you won't be able to use Docker: if you can run any docker run command at all you can trivially get unrestricted root access on the host.
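For example, this one-liner (a minimal illustration of the point, not something to run casually) gives a root shell on the host's filesystem:
docker run --rm -it -v /:/host busybox chroot /host sh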
My Python scripts need a few extra modules which I can't install due to lack of sudo access, and moreover I don't want to break the default Python that the host needs to function.
That sounds like a perfect use case for a virtual environment: it gives you an isolated local package tree that you can install into as an unprivileged user, and it doesn't interfere with the system Python. For Python 2 you need the separate virtualenv tool, which takes a couple of steps to install:
export PYTHONUSERBASE=$HOME     # make user-level installs land under $HOME
pip install --user virtualenv   # puts the virtualenv script in ~/bin
~/bin/virtualenv vpy            # create an environment named vpy
. vpy/bin/activate              # activate it in the current shell
pip install ...                 # installs into vpy/lib/python2.7/site-packages
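(For reference, on Python 3 no separate tool is needed: python3 -m venv vpy creates the same kind of environment.)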
You can create a Docker image on any standalone machine with internet access and push the final image to a Docker registry (e.g. Docker Hub). Then on your laptop you can pull that image and start working :)
Below are some key commands that will be required for the same.
To create an image, you will need to write a Dockerfile that installs all the required packages:
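For example, a minimal sketch (requests and numpy are placeholders for whatever modules your scripts actually need):
FROM python:2.7
RUN pip install requests numpy   # placeholders: install the modules you need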
Or you can run sudo docker run -it ubuntu:16.04 and then install Python and the other packages as required inside the container.
Then commit the container: sudo docker commit container_id name
sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
sudo docker push IMAGE_NAME
Then you pull this image in your laptop and start working.
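The corresponding command on the pulling side is:
sudo docker pull IMAGE_NAME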
You can refer to this link for more docker commands https://github.com/akasranjan005/docker-k8s/blob/master/docker/basic-commands.md
Hope this helps. Thanks
I am trying to create a standalone Python app using PyInstaller which should run, and be reproducibly recreatable, on Windows, Linux, and Mac.
The idea is to use Docker to have a fixed environment that builds the app and exports it again. For Linux and Windows, I could make it work using https://github.com/cdrx/docker-pyinstaller.
The idea is to let Docker create a fully functioning PyInstaller app (with GUI) inside the container and export this app. Since the PyInstaller output depends on the installed package versions etc., these should be fixed in the Docker image, and only the new source code should be supplied to compile and export a new version of the software.
In the ideal scenario (and how it already works for Linux and Windows), the user can build the Docker image and compile the software themselves:
docker build -t docker_pyinstaller_linux https://raw.githubusercontent.com/loipf/calimera_docker_export/main/linux/Dockerfile
docker run --rm -v "/path/to/app_source_code:/code/" -v "${PWD}:/code/dist/" docker_pyinstaller_linux
For Mac, however, there is no simple, straightforward solution. There is one Mac Docker image out there, https://github.com/sickcodes/Docker-OSX, but its image-build setup is not that simple.
My idea:
take https://github.com/sickcodes/Docker-OSX/blob/master/Dockerfile.auto
and add at the end the download and installation of Miniconda:
RUN chmod -R 777 /Users/user
### install miniconda to /miniconda
RUN curl -LO "http://repo.continuum.io/miniconda/Miniconda3-4.4.10-MacOSX-x86_64.sh"
RUN bash Miniconda3-4.4.10-MacOSX-x86_64.sh -p /miniconda -b
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
### install packages from conda
RUN conda install -c anaconda -y python=3.8
RUN apt-get install -y python3-pyqt5
...
But already the first command fails with chmod: cannot access '/Users/user': No such file or directory, since the build runs in a different "environment" (/home/arch/OSX-KVM) than the interactive macOS session. Can somebody help me out?
I know these code-request questions are not the best, and I could show you all the things I tried, but I doubt they would help anybody. I would like to have a minimal Mac example without user login or GUI etc. (which should be possible using https://github.com/sickcodes/osx-optimizer). It should only have Miniconda installed (with PyInstaller and a few packages).
Other info:
I can run the above commands interactively in the macOS environment inside the container, but I would like to bake these commands permanently into the image:
docker pull sickcodes/docker-osx:auto ### to be replaced by an image with pyinstaller and packages
docker run -it \
--device /dev/kvm \
-p 50922:10022 \
sickcodes/docker-osx:auto
Ideally, in the end we can run the above command with OSX_commands pyinstaller file.spec.
Related questions, but without solutions:
Using Docker and Pyinstaller to distribute my application
How to create OS X app with Python on Windows
TL;DR: is there a way to check whether a volume /foo built into a Docker image has been overwritten at runtime by remounting with -v host/foo:/foo?
I have designed an image that runs a few scripts on container initialization using s6-overlay to dynamically generate a user, transfer permissions, and launch a few services under that uid:gid.
One of the things I need to do is pip install -e /foo a Python module mounted at /foo. This also installs a default version of /foo contained in the Docker image if the user doesn't specify a volume. The reason I am doing this install at runtime is that this container is designed to contain the entire environment for development and experimentation on foo. If a user mounts a host version of foo, e.g. -v /home/user/foo:/foo, they can develop by updating host:/home/user/foo or, in the container, /foo; all changes will persist, and the image won't need to be rebuilt to pick up new changes. It needs to be an editable install so that new changes don't require reinstallation.
I have this working now.
I would like to speed up container initialization by moving this pip install into the image build, and then only install the module at runtime if the user has mounted a new /foo with -v /home/user/foo:/foo.
Of course, there are other ways to do this. For example, I could copy foo to /bar at build time and install it with pip install /bar... Then at runtime just check whether /foo exists, and if it doesn't, create a symlink /foo -> /bar; if it does exist, pip uninstall foo and pip install -e /foo. But this isn't the cleanest solution. I could also just mv /bar /foo at runtime if /foo doesn't exist, but I'm not sure how pip would handle the change in module path.
The only way to do this that I can think of is to map the Docker socket into the container, so that you can run docker inspect from inside the container and see the mounted volumes. Like this:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
and then inside the container
docker inspect $(hostname)
I've used the 'docker' image, since that has docker installed. You can, of course, use another image; you just have to install docker in it.
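To narrow the output down to the question at hand, you can filter the mount list with a Go template (a sketch: this prints bind if /foo was remounted from the host with -v, and volume or nothing otherwise):
docker inspect --format '{{range .Mounts}}{{if eq .Destination "/foo"}}{{.Type}}{{end}}{{end}}' $(hostname)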
My Docker knowledge is very poor; I have Docker installed only because I want to use freqtrade, so I followed this simple HOWTO:
https://www.freqtrade.io/en/stable/docker_quickstart/
Now all freqtrade commands run using Docker, for example this one:
D:\ft_userdata\user_data>docker-compose run --rm freqtrade backtesting --config user_data/cryptofrog.config.json --datadir user_data/data/binance --export trades --stake-amount 70 --strategy CryptoFrog -i 5m
Well, I started to have problems when I tried this strategy for freqtrade:
https://github.com/froggleston/cryptofrog-strategies
This strategy requires the Python module finta.
I understood that finta should be installed in my Docker container and NOT on my Windows system (there it was easy: "pip install finta" from the console!).
Even though I searched Stack Overflow and Google, I do not understand how to do this step (install the finta Python module in the freqtrade container).
After several hours I am really lost.
Can someone explain to me in easy steps how to do this?
The freqtrade mount point is
D:\ft_userdata\user_data
You can get a bash shell in your container with this command:
docker-compose exec freqtrade bash
and then:
pip install finta
OR
run only one command:
docker-compose exec freqtrade pip install finta
If the above solutions don't work, you can run the docker ps command and get the container ID of your container. Then:
docker exec -it CONTAINER_ID bash
pip install finta
You need to make your own docker image that has finta installed. Luckily you can build on top of the standard freqtrade docker image.
First, make a Dockerfile with these two lines in it:
FROM freqtradeorg/freqtrade:stable
RUN pip install finta
Then build the image (calling the new image myfreqtrade) by running the command
docker build -t myfreqtrade .
Finally change the docker-compose.yml file to run your image by changing the line
image: freqtradeorg/freqtrade:stable
to
image: myfreqtrade
And that should be that.
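For reference, the relevant part of docker-compose.yml would then look roughly like this (a sketch; your real file will have more settings under the service):
services:
  freqtrade:
    image: myfreqtrade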
The way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it.
To generate a Docker image we need to create a Dockerfile that contains instructions needed to build the image. The Dockerfile is then processed by the Docker builder which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.
An example of a Dockerfile with instructions for assembling a Docker image for a Python service that installs finta is the following:
# set base image (host OS)
FROM python:3.8
# install dependencies
RUN pip install finta
# command to run on container start
CMD [ "python", "-V" ]
For each instruction or command from the Dockerfile, the Docker builder generates an image layer and stacks it upon the previous ones. Therefore, the Docker image resulting from the process is simply a read-only stack of different layers.
docker build -t myimage .
Then, we can check the image is in the local image store:
docker images
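And finally run a container from the image; with the CMD above it simply prints the Python version and exits:
docker run --rm myimage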
Please refer to the freqtrade Dockerfile: https://github.com/freqtrade/freqtrade/blob/develop/Dockerfile
I installed the gettyimages/spark Docker image and jupyter/pyspark-notebook on my machine.
However, as the gettyimages/spark Python version is 3.5.3 while the jupyter/pyspark-notebook Python version is 3.7, the following error comes up:
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
So I have tried to either upgrade the Python version of the gettyimages/spark image OR downgrade the Python version of the jupyter/pyspark-notebook image to fix it.
Let's talk about method 1 first, downgrading the jupyter/pyspark-notebook Python version:
I used conda install python=3.5 to downgrade the Python version of the jupyter/pyspark-notebook Docker image. However, after I did so, my Jupyter notebook cannot connect to any single ipynb and the kernel seems dead. Also, when I type conda again, it shows conda: command not found, but the python terminal works well.
I have compared sys.path before the downgrade and after it:
['', '/usr/local/spark/python',
'/usr/local/spark/python/lib/py4j-0.10.7-src.zip',
'/opt/conda/lib/python35.zip', '/opt/conda/lib/python3.5',
'/opt/conda/lib/python3.5/plat-linux',
'/opt/conda/lib/python3.5/lib-dynload',
'/opt/conda/lib/python3.5/site-packages']
['', '/usr/local/spark/python',
'/usr/local/spark/python/lib/py4j-0.10.7-src.zip',
'/opt/conda/lib/python37.zip', '/opt/conda/lib/python3.7',
'/opt/conda/lib/python3.7/lib-dynload',
'/opt/conda/lib/python3.7/site-packages']
I think it is, more or less, correct. So why can't I use my Jupyter notebook to connect to the kernel?
So I used another method: I tried to upgrade the gettyimages/spark image:
sudo docker run -it gettyimages/spark:2.4.1-hadoop-3.0 apt-get install python3.7.3 ; python3 -v
However, I found that even if I do so, Spark does not run well.
I am not quite sure what to do. Could you share with me how to modify the version of a package inside a Docker image?
If I look at the Dockerfile here, it installs python3, which by default installs Python 3.5 on debian:stretch. You can instead get Python 3.7 by editing the Dockerfile and building the image yourself. In your copy of the Dockerfile, remove lines 19-25 and replace line 1 with the following, then build the image locally.
FROM python:3.7-stretch
If you are not familiar with building your own image, download the Dockerfile and keep it in its own standalone directory. Then, after cd-ing into the directory, run the command below. You may want to first remove the already-downloaded image. After this you should be able to run other docker commands the same way as if you had pulled the image from Docker Hub.
docker build -t gettyimages/spark .
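To sanity-check the Python version inside the rebuilt image (assuming python3 is on the image's PATH):
docker run --rm gettyimages/spark python3 --version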
I am using Windows and learning to use TensorFlow, so I need to run it under Docker (Toolbox).
Following the usual instructions:
$ docker run -it gcr.io/tensorflow/tensorflow
I can launch a Jupyter notebook in my browser at 192.168.99.100:8888 and run the tutorial notebooks without problems.
Now when I try import pandas as pd, which is installed on my computer with pip, Jupyter just says ImportError: No module named pandas.
Any idea how I can get this library to work inside the tensorflow image launched from Docker?
The Docker image is built on a Linux operating system, so you should launch a shell inside the gcr.io/tensorflow/tensorflow image to install the requisite Python dependencies.
See the Docker quickstart for using
docker run -it gcr.io/tensorflow/tensorflow /bin/bash
and then
apt-get update && apt-get install -y python-pandas
according to the pandas docs.
To avoid doing this every time you launch the image, you need to commit the change to create a new image.
To commit the change, you need to get the container id (after run and installation steps above):
sudo docker ps -a # Get list of all containers previously started with run command
Then, commit your changes git style using the container_id displayed in the container list you just got and giving it an image_name of your choosing:
sudo docker commit container_id image_name
The new image will now show up in the list displayed by sudo docker images.
If you get a free Docker Hub account, you can push your updated image to your repo and pull it elsewhere, or just keep it locally.
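For example (a sketch; youruser is a placeholder for your Docker Hub account name):
sudo docker tag image_name youruser/image_name
sudo docker push youruser/image_name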
See docs under 'Updating and Committing your image'.
For Windows users:
docker run -d -p 8888:8888 -v /c/Users/YOUR_WIN_FOLDER:/home/ds/notebooks gcr.io/tensorflow/tensorflow
Then use the following command to see the name of your container, for easy exec commands later on (the last column will be the name):
docker ps
Then run:
docker exec <NAME OF CONTAINER> apt-get update
And finally to install pandas:
docker exec <NAME OF CONTAINER> apt-get install -y python-pandas
(the -y is an automatic 'yes' to stop a prompt from appearing for you to agree to the installation taking up additional disk space)
Here is an image with pandas installed:
https://hub.docker.com/r/zavolokas/tensorflow-udacity/
Or pull it directly: docker pull zavolokas/tensorflow-udacity:pandas