I have a small Python Flask app that runs in Docker on a CentOS 7 VM, along with an nginx reverse proxy. The requirements.txt pulls in several external utilities using git+ssh, such as:
git+ssh://path-to-our-repo/some-utility.git
I had to make a change to the utility, so I cloned it locally, and I need the app to use my local version.
Say the cloned and modified utility is in a local directory:
/var/work/some-utility
In the requirements.txt I changed the entry to:
git+file:///var/work/some-utility
But when I try to run the app with
sudo docker-compose up
I get the error message
Invalid requirement: 'git+file:///var/work/some-utility'
it looks like a path. Does it exist ?
How can I get it to use my local copy of "some-utility" ?
I also tried:
git+file:///var/work/some-utility#egg=someutility
but that produced the same error.
I looked at PIP install from local git repository.
This is related to this question:
https://stackoverflow.com/questions/7225900/how-to-pip-install-packages-according-to-requirements-txt-from-a-local-directory?rq=1
I suppose most people would say why not just check in a development branch of some-utility to the corporate git repo, but in my case I do not have privileges for that.
Or maybe my problem is related to docker, and I need to map the some-utility folder into the docker container, and then use that path? I am a docker noob.
--- Edit ---
Thank you larsks for your answer. I tried to add the some-utility folder to the docker-compose.yml:
volumes:
- ./some-utility:/usr/local/some-utility
and then changed the requirements.txt to
git+file:///usr/local/some-utility
but our local git repo just went down for maintenance, so I will have to wait a bit for it to come back up to try this.
--- Edit 2 ---
After I made the above changes, I get the following error from docker-compose when it tries to build my endpoint app:
Cloning file:///usr/local/some-utility to /tmp/pip-yj9xxtae-build
fatal: '/usr/local/some-utility' does not appear to be a git repository
But the /usr/local/some-utility folder does contain the cloned some-utility repo, and I can go there and run git status.
If you're running pip install inside a container, then of course /var/work/some-utility needs to be available inside the container.
You can expose the directory inside your container using a host volume mount, like this:
docker run -v /var/work/some-utility:/var/work/some-utility ...
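If you're on docker-compose, the equivalent is a volumes entry on the service that runs pip install; a minimal sketch (where the entry sits depends on your own service layout):
volumes:
- /var/work/some-utility:/var/work/some-utility
One thing to keep in mind: volumes are only mounted when the container runs, not while the image is being built, so a pip install that happens in the Dockerfile will not see the mounted directory.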
Related
After pulling the docker image from here, I realised after attaching a shell that the tutorial files are not in the dags folder specified in airflow.cfg (dags_folder = /usr/local/airflow/dags; the dags folder does not exist).
The tutorial file is actually found here instead:
/usr/local/lib/python3.6/site-packages/airflow/example_dags/tutorial.py
In addition, running airflow list_dags raises warnings about kubernetes not being installed, and I am missing the permissions to run apt-get for applications like vim to edit .py files, or even to run ps to view processes.
As I am new to docker and airflow, is there anything I need to change in the dockerfile when building?
Note: I am using Docker for Windows to build the Linux image.
The warnings about Kubernetes come from the fact that the airflow[kubernetes] extra is not installed by default by Puckel's Dockerfile, but it's nothing to worry about unless you want to use Airflow's KubernetesPodOperator.
It's also normal that you don't have permission to edit Python modules when you go inside the container, because there you are logged in as the airflow user rather than root, and that user only has write access to the $AIRFLOW_HOME directory. In general, editing files from inside the container is hackish and you should try to avoid it.
If I guess correctly, what you want is to have your own dags loaded by airflow-docker. If that's the case, you can run something like the following:
docker run -d -p 8080:8080 -v <local_path_to_your_dags>:/usr/local/airflow/dags puckel/docker-airflow webserver
Here you're mounting a local folder from your machine onto the $AIRFLOW_HOME/dags folder in the container, which is the one used to load dags.
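Since you mentioned Docker for Windows, the host path just takes the Windows form; a sketch with a made-up local folder:
docker run -d -p 8080:8080 -v C:/Users/you/airflow/dags:/usr/local/airflow/dags puckel/docker-airflow webserver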
I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's linux server, but I can't manually install pytorch and other python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the linux server and then run.
To address your specific problem, the easiest way I've found to get code into a container is to use git.
Start the container in interactive mode, or ssh to it if it's attached to a network.
git clone <your awesome deep learning code>. Have a requirements.txt file in your git repo, then change directories into your local clone of the repo and run pip install -r requirements.txt.
Run whatever script you need to run your code. Note that you can easily put the pip install command in one of your run scripts; a rough sketch of the whole flow follows.
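Put together, it might look roughly like this (the image name, repository URL and script name are placeholders, not anything from your setup):
$ docker run -it --rm python:3.8 bash
# inside the container:
$ git clone https://github.com/you/your-dl-project.git
$ cd your-dl-project
$ pip install -r requirements.txt
$ python train.py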
It's important to remember that docker containers are stateless/ephemeral: you should not expect the container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
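For example, to keep training outputs around after the container exits, you could mount a host directory over wherever your code writes checkpoints (the image name and paths below are made up):
$ docker run -it --rm -v /home/you/experiments:/workspace/outputs my-dl-image python train.py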
Side note: I'd first recommend starting with the docker tutorial. You can easily skip over the installation parts if you are working on a system that already has docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that running docker normally requires root privileges (or membership in the docker group).
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.
I am trying to get started with docker and django. I followed the docker-compose instructions and created an image with a simple requirements.txt.
I now want to actually build out my app more and add templates and actual code.
1) I installed some modules on the host machine and added them to the requirements.txt file
2) I ran (again) docker-compose run web django-admin.py startproject exampleproject. All my new requirements get downloaded, but then I get this error:
/code/manage.py already exists, overlaying a project or app into an existing directory won't replace conflicting files
I am using the exact Dockerfile and docker-compose.yml as here:
http://docs.docker.com/compose/django/
How am I supposed to update the container/image with new templates/views etc. and new modules as I develop my app?
Am I using docker wrong?
Thanks.
I needed to remove the manage.py from the directory where I had my project (the directory the docker-compose.yml is in).
I guess once you start a project, you install new requirements in the container itself and add them to requirements.txt for the next time you build your project from scratch.
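With the tutorial's layout, where the project directory is the build context, adding a dependency could look roughly like this (django-extensions is just an example package; web is the tutorial's service name):
$ echo "django-extensions" >> requirements.txt
$ docker-compose build web
$ docker-compose up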
Sorry for the late addition, but I thought it might help someone. I had also encountered this same problem, and in spite of removing manage.py from the folder where my .yml and Dockerfile were, it gave me the same error.
I have no idea where it created the code directory that had the manage.py file, but I worked around it by changing the service name in the yml file and restarting the project with the new service name.
docker-compose run newservicenm django-admin.py startproject projectname directoryname
This worked as a new container!
For PythonAnywhere:
I am currently building a project where I have to change one of my installed packages frequently (because I am adding to the package as I build out the project). It is very manual and laborious to constantly update the package in the Bash console by reinstalling it every time I make a change locally. Is there a better process for this?
It sounds like you want to be able to use a single command from your local machine to push up some changes to PythonAnywhere. One way to go about it would be to use PythonAnywhere as a git remote. There are some details in this post, but, broadly:
username@PythonAnywhere:~$ mkdir my_repo.git
username@PythonAnywhere:~$ cd my_repo.git
username@PythonAnywhere:~$ git init --bare
Then, on your PC:
git remote add pythonanywhere username@ssh.pythonanywhere.com:my_repo.git
Then you should be able to push to the bare repository on PA from your machine with a
git push pythonanywhere master
You can then use a Git post-receive hook to update the package on PythonAnywhere, by whatever means you like. One might be to have your package checked out on PythonAnywhere:
username@PythonAnywhere:~$ git clone ./my_repo.git my_package
And then the post-receive hook could be as simple as
cd ~/my_package && git pull
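For reference, the complete hook file might look something like this (the paths are assumptions; unsetting GIT_DIR is needed because git sets it while running hooks, which would otherwise confuse the git pull):
#!/bin/bash
# ~/my_repo.git/hooks/post-receive -- remember to make it executable
unset GIT_DIR
cd ~/my_package && git pull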
I set up a new Ubuntu 12.10 server on VPS hosting. I have installed all the required software such as Nginx, Python, MySQL etc. I am configuring this to deploy a Flask + Python app using uWSGI. It's working fine.
But to create a basic app, I used PuTTY (from Windows) and created the required app .py files.
But I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make some changes to my website.
Since I use both Ubuntu and Windows for developing the app, setting up Git would help me easily push my changes to my cloud server.
How can I set up Git on Ubuntu? And how can I access it and deploy data using tools like Git Bash?
Please suggest.
Modified version of innaM's answer:
Concept
Have three repositories
devel - development on your local development machine
central - repository server - like GitHub, Bitbucket or anything else
prod - production server
Then you commit things from devel and push them to central, and as soon as you want to deploy on prod, you ask prod to pull the data from central.
"Asking" the prod server to pull the updates can be managed by cron (then you have to wait a moment), or you may use other means, like a one-shot ssh call asking it to do a git pull and possibly restart your app.
Step by step
In more detail, you can go about it this way.
Prepare repo on devel
Develop and test the app on your devel server.
Put it into a local repository:
$ git init
$ git add *
$ git commit -m "initial commit"
Create repo on central server
E.g. Bitbucket provides this description: https://confluence.atlassian.com/display/BITBUCKET/Import+code+from+an+existing+project
Generally, you create the project on Bitbucket, find its URL, and then from your devel repo call:
$ git remote add origin <bitbucket-repo-url>
$ git push -u origin master
Clone central repo to prod server
Log onto your prod server.
Go to /var/www and clone from Bitbucket:
$ cd /var/www
$ git clone <bitbucket-repo-url>
$ cd mysite.com
and you shall have your directory ready.
Trigger publication of updates to prod
There are numerous options. One is a cron task, which would regularly call
$ git pull
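A crontab entry for that could look like the following (the interval and path are just examples):
*/10 * * * * cd /var/www/mysite.com && git pull origin master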
In case your app needs a restart after an update, you have to ensure the restart happens (this should be detectable using the git log command, which will show new commits after the update, or by checking whether git pull reports that anything changed).
Personally, I would use the "one shot ssh" approach (you asked not to use ssh, but I assume you are really asking for a "simpler" solution, and a one-shot call is simpler than using ftp, scp or other magic).
From your devel machine (assuming you have ssh access there):
$ ssh user@prod.server.com "cd /var/www/mysite.com && git pull origin && myapp restart"
The advantage is that you control the moment the update happens.
Discussion
I use a similar workflow.
rsync may in many cases serve well enough or better (be aware of files created at app runtime, and of files that are removed from your app in newer versions, which shall be removed on the server too).
Salt (SaltStack) could serve too, but requires a bit more learning and setup.
I have learned that keeping source code and configuration data in the same repo sometimes makes the situation more difficult (that is why I am working on using Salt).
The fab command from Fabric (Python based) may be the best option (in case installation on Windows becomes difficult, look at http://ridingpython.blogspot.cz/2011/07/installing-fabric-on-windows.html).
Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server.
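A minimal sketch of such a hook (the branch name and deployment path are only examples):
#!/bin/bash
# post-receive: export the pushed tree into the deployment directory
DEPLOY_DIR=/var/www/mysite.com/app_data
mkdir -p "$DEPLOY_DIR"
git archive master | tar -x -C "$DEPLOY_DIR"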