How would I go about setting up one GitHub user and SSH key and then replicating that to several other laptops so they can all use the same account? It would be optimal if I could copy a configuration file so I wouldn't have to apply it one laptop at a time - I could apply it through server administration.
This isn't a typical github setup so don't worry about this being the correct way to set it up.
Set up the project the way GitHub describes.
Create your SSH keys.
Tar or zip everything up.
Distribute and untar/unzip.
Done.
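A rough sketch of those steps, assuming the shared key lives in ~/.ssh and the laptops are reachable over SSH from your admin host (hostnames, paths, and the key type below are placeholders, not part of the original answer):

```bash
# On the admin machine: create the shared key once
# (no passphrase here only to keep the example short)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -C "shared-github-account"
# Then add ~/.ssh/id_ed25519.pub to the GitHub account's SSH keys page.

# Bundle the key pair plus the git identity file (if you have one)
tar czf github-setup.tar.gz -C ~ .ssh/id_ed25519 .ssh/id_ed25519.pub .gitconfig

# Push the bundle to each laptop and unpack it into the user's home directory
for host in laptop1 laptop2 laptop3; do      # placeholder hostnames
    scp github-setup.tar.gz "$host":~
    ssh "$host" 'tar xzf github-setup.tar.gz -C ~ && chmod 600 ~/.ssh/id_ed25519'
done
```

If the laptops are centrally managed, the same scp/ssh loop can be folded into whatever configuration-management tooling you already use, so the setup is applied from one place rather than laptop by laptop.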
I want to upload a CSV file daily to Apache Ambari. I've tried adapting multiple solutions available online for uploading files to Google and other equivalent platforms. I have also tried methods like sftp, but still have not found a solution. Please recommend any tips, ideas, or methods for how I should achieve this.
There is an Ambari way to do this: you can create a custom service in Ambari that runs the upload, so Ambari itself contains the code and executes it. Out of the box, Ambari wouldn't technically be running the script; you'd have to run it on a master/slave node, though you might be able to work around that by running an Ambari agent on the Ambari host and making it a slave. If that is acceptable, you could install this service on just one slave and have it push/pull the appropriate file.
Others have implemented this on just one machine, and you can google for how they make sure it runs on only one machine.
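As a sketch of the upload step itself, whether it ends up wrapped in an Ambari custom service or simply scheduled on the one node that owns the job, the daily transfer could look something like this; the paths and schedule are assumptions:

```bash
#!/bin/bash
# upload_daily_csv.sh -- hypothetical daily upload step, run on a single slave node
# (schedule it, e.g. with cron: "0 2 * * * /usr/local/bin/upload_daily_csv.sh")
set -euo pipefail

SRC=/data/exports/daily_report.csv     # placeholder: where the CSV lands locally
DEST=/landing/reports                  # placeholder: target HDFS directory

hdfs dfs -mkdir -p "$DEST"
hdfs dfs -put -f "$SRC" "$DEST/daily_report_$(date +%F).csv"
```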
I use python to develop code on my work laptop and then deploy to our server for automation purposes.
I just recently started using git and GitHub through PyCharm in hopes of making the deployment process smoother.
My issue is that I have a config file (YAML) that uses different parameters respective to my build environment (laptop) and production (server). For example, the file path changes.
Is there a git practice I could implement so that, when pushing from my laptop or pulling from the server, changes to specific parts of a file are excluded?
I use .gitignore for files such as pyvenv.cfg, but is there a way to do this within a file?
Another approach I thought of would be to utilize different branches for local and remote specific parameters...
For example:
The local branch would contain local parameters and the production branch would contain production parameters. In this case I would push first from my machine to the local branch. Next I would make the necessary changes to the parameters for production (in my situation it is much easier to work on my laptop than through the server), then push to the production branch. However, I have a feeling this is against good practice, or simply misuses branches.
Thank you.
Config files are also a common place to store credentials (e.g. a login/password for the database, an API key for a web service ...), and it is generally a good idea not to store those in the repository.
A common practice is to store template files in the repo (e.g. config.yml.sample), to not store the actual config file along with the code (and even add it to .gitignore if it is in a versioned directory), and to add steps at deployment time that either set up the initial config file or update the existing one; those steps can be manual or scripted. You can back up and version the config separately if needed.
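A minimal sketch of the scripted variant, assuming the template is named config.yml.sample and sits next to the real config (the checkout path is a placeholder):

```bash
#!/bin/bash
# deploy.sh -- hypothetical deployment step for the template-file approach
set -euo pipefail

cd /srv/app            # placeholder: where the code is checked out on this machine
git pull

# config.yml itself is listed in .gitignore; only config.yml.sample is versioned.
# Create the real config from the template on first deployment, then leave it alone
# so the machine-specific values (e.g. file paths) survive later pulls.
if [ ! -f config.yml ]; then
    cp config.yml.sample config.yml
    echo "First deployment: edit config.yml with this machine's values." >&2
fi
```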
Another possibility is to take the elements that should differ from somewhere else (the environment, for instance) and have entries like user: $APP_DB_USER in your config file. You then provision these values on each machine - e.g. have an env.txt file on your local machine and a different one on your prod server.
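A small sketch of that approach, assuming the application expands environment variables when it reads the config; env.txt, the variable names, and the entry point are placeholders:

```bash
# env.txt on the laptop (a different copy lives on the prod server; neither is committed):
#   APP_DB_USER=dev_user
#   APP_DATA_DIR=/home/me/projects/data

# Load the machine-specific values before starting the application
set -a                 # export every variable assigned while this is active
source env.txt
set +a
python main.py         # hypothetical entry point; it reads APP_DB_USER etc. from the environment
```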
GitLab has this functionality where you can use pipelines that execute code whenever you push to your project.
This is done through their .gitlab-ci.yml file format.
I am trying to make the pipeline merge all branches with the prefix "ready/".
I have written a Python program that does this locally, but it won't work on the GitLab Docker runner. This is because "git branch -a" there only lists "*" and "master" as branches.
I have tried to check out master, but that doesn't work.
Is this even possible in a GitLab pipeline? How would I go forward?
There are a couple of ways to achieve this, depending on what credentials you want to use, what you prefer, and what is better suited to your use case.
Use SSH in CI/CD (with SSH keys) to run your standard git commands to pull, do whatever you need, then push to the repo as part of a pipeline job.
Use the merge requests API, which requires a personal access token. The API allows you to create, accept, and merge a merge request.
If you have a lot of branches, then you may want to use the first method.
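A rough sketch of the first option, as a script a pipeline job could run. It assumes an SSH key with write access to the repo has been made available to the job, and the remote URL is a placeholder:

```bash
#!/bin/bash
# Hypothetical pipeline job script: merge every ready/* branch into master and push.
set -euo pipefail

# Fetch all branches -- the default CI checkout typically only fetches the pipeline's
# own ref, which is why "git branch -a" listed almost nothing.
git fetch origin '+refs/heads/*:refs/remotes/origin/*'

git checkout -B master origin/master

for branch in $(git branch -r | grep 'origin/ready/'); do
    git merge --no-edit "$branch"
done

# Push back over SSH; the URL and the key setup are specific to your project.
git push git@gitlab.example.com:group/project.git master
```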
There are a number of computers that I can access via SSH.
I was wondering if I can have a Python code base in a single location but execute a part of it, or all of it, on each of these computers independently.
I could copy my code to each of these PCs and then execute it on each through SSH, but it would then be hard to make a change in the code, since I would have to apply it to all the copies.
I was also wondering if I can do something like this similar to a cluster, since each of these PCs has a number of CPUs, though that would probably not be possible or very easy.
Two options quickly pop to mind:
Use sshfs to mount the remote code location on each local PC and run the code from there.
Use something like git to store your configured code, and set up each of the PCs to pull the app (?) from the remote code repo. This way, you update/configure the code once and pull the update to each PC.
For example:
We use the second method. We have seven RasPi servers running various (isolated) tasks. One of the servers is a NAS server with a Git repo on it, where we store our configured code, and we use git pull or git clone (via SSH) to pull the app to the local server. Works really well for us. Maybe an idea to help you ... ?
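Concretely, the two options boil down to commands along these lines; the hostnames, paths, and entry point are placeholders:

```bash
# Option 1: mount the remote code location locally with sshfs, then run it in place
sshfs user@codehost:/srv/code ~/code      # placeholder host and paths
python ~/code/main.py                     # hypothetical entry point
fusermount -u ~/code                      # unmount when done (Linux; use umount elsewhere)

# Option 2: keep the code in a git repo and pull it on each PC
git clone ssh://user@nas.local/srv/git/app.git ~/app    # first time on each machine
cd ~/app && git pull                                    # afterwards, to pick up changes
```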
I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some combination of Supervisor/Gunicorn/Nginx/Fabric, etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options too.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
If I were doing it (and I did a similar thing with a PHP application I inherited), I'd have a Fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check-out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like use the DNS entry as the database name, or at least use a reversible function to derive one from the other.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name it should use. You could have the provisioning script write some data to a file in the checked-out repository that is then used by Django in its initialisation phase.
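As a sketch, the body of such a provisioning command might run steps along these lines; every name, path, and URL here is a placeholder, and the DNS/Nginx/Supervisor wiring will be specific to your server:

```bash
#!/bin/bash
# provision_instance.sh <customer> -- hypothetical per-customer provisioning steps
set -euo pipefail
CUSTOMER="$1"
APP_DIR="/srv/instances/$CUSTOMER"

# 1. Check out a designated copy of the code
git clone git@bitbucket.org:you/project.git "$APP_DIR"     # placeholder repo URL

# 2. Create a Postgres database named after the instance (mirrors the DNS-entry idea)
sudo -u postgres createdb "$CUSTOMER"

# 3. Tell the new instance which database and domain it should use
cat > "$APP_DIR/instance.conf" <<EOF
DB_NAME=$CUSTOMER
DOMAIN=$CUSTOMER.example.com
EOF

# 4. Initialise the schema (Django 1.6 still uses syncdb)
cd "$APP_DIR" && python manage.py syncdb --noinput

# 5. Create the DNS record and an Nginx server block for $CUSTOMER.example.com,
#    register a Gunicorn program with Supervisor, then reload both (site-specific).
```

Removing an instance is then roughly the reverse: stop the Supervisor program, drop the database, delete the checkout, and remove the DNS/Nginx entries.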