I have a project that runs a Python script in a GitHub repo, which reads and updates a local CSV file. Is it possible to save the updated CSV file back to the GitHub repo using that Python script? Calling and saving the file to my Google Drive won't work because it is a public GitHub repo, so I cannot store my Google credentials.
Does anyone have a solution? Or know of a cloud provider that allows read/write access to non-user accounts?
You can check out PyGithub, from their docs:
PyGithub is a Python library to access the GitHub REST API. This
library enables you to manage GitHub resources such as repositories,
user profiles, and organizations in your Python applications.
Or GitPython
GitPython is a python library used to interact with git repositories,
high-level like git-porcelain, or low-level like git-plumbing.
Cheers!
I am using Google Cloud Source Repositories to store code for my CI/CD pipeline. What I'm building has two repos: core and clients. The core code will be built and deployed to monitor changes to a cloud storage bucket. When it detects a new customer config in the bucket, it will copy the clients code into a new branch of the clients repo named after the customer. The idea is to enable later potential tailoring for a given customer beyond the standard clients codebase.
The solution I've been considering is to have the core deploy programmatically create the branches in the clients repo, but I have come up empty-handed in my research on how to do that in Google Cloud.
The only documentation that is close to what I want to do is here.
The Google Cloud SDKs do not provide a git API. However, there are Python libraries that integrate with git, such as GitPython.
Credit goes to @JohnHanley for his comment on my question for this answer.
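As a hedged sketch of that approach (the clone URL, project ID, and customer name below are placeholders, not details from the question), GitPython can clone the clients repo, create a branch named after the customer, and push it:

import git

clone_url = "https://source.developers.google.com/p/PROJECT_ID/r/clients"  # placeholder Cloud Source Repositories URL
customer = "customer-name"                                                 # placeholder branch/customer name

repo = git.Repo.clone_from(clone_url, "/tmp/clients")    # clone the clients repo
repo.create_head(customer)                               # create a local branch named after the customer
repo.git.push("--set-upstream", "origin", customer)      # push the new branch to the remote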
I have Python scripts that I want to host as a web app in Azure.
I want to keep the function scripts in Azure data storage and host them as a Python API in Azure.
I want some devs to be able to change the scripts in the azure data storage and reflect the changes live without having to deploy.
How can I go about doing this?
Create a Function App with the Python runtime stack (supported only on Linux) in any hosting plan through the Azure Portal.
Configure continuous deployment to GitHub under the Deployment Center of the Function App in the portal.
After authorizing, provide your GitHub repository details in the same section.
Create the Azure Python Functions project in VS Code and push it to the GitHub repository (clone the repo through the Command Palette first).
After that, you can deploy the functions to the Azure Function App.
Once published to Azure, you'll have your Python functions available as a REST API.
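For reference, a minimal HTTP-triggered function in the Python v2 programming model looks roughly like this; the route name and response are made up for illustration:

# function_app.py - sketch of an HTTP-triggered Azure Function (Python v2 model)
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Read an optional query parameter and return a plain-text response.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")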
I want some devs to be able to change the scripts in the azure data storage and reflect the changes live without having to deploy.
Whenever you or the devs change the code through GitHub and commit the changes, they are automatically reflected in the Function App in the Azure Portal.
For more information, please refer to this article and to the GitHub Actions approach for editing the code/script files.
How can I save a file generated by a Colab notebook directly to a GitHub repo?
It can be assumed that the notebook was opened from the GitHub repo and that the same notebook can be saved back to the same GitHub repo.
Google Colaboratory's integration with GitHub tends to be lacking; however, you can run bash commands from inside the notebook. These allow you to access and modify any data generated.
You'll need to generate a token on github to allow access to the repository you want to save data to. See here for how to create a personal access token.
Once you have that token, you can run git commands from inside the notebook to clone the repository, add whatever files you need, and then push them. This post provides an in-depth overview of how to do it.
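As a rough sketch of what such a notebook cell might look like (the user, repo, identity, and file names are placeholders, and the token is prompted for rather than hard-coded):

# Hypothetical Colab cell: clone with a personal access token, commit, and push.
from getpass import getpass
token = getpass("GitHub personal access token: ")

!git clone https://{token}@github.com/USER/REPO.git     # USER/REPO are placeholders
%cd REPO
!git config user.email "you@example.com"                # placeholder identity
!git config user.name "Your Name"
!cp /content/generated_output.csv .                     # placeholder generated file
!git add generated_output.csv
!git commit -m "Add file generated in Colab"
!git push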
That being said, this approach is somewhat cumbersome, and it might be preferable to configure Colab to work over an SSH connection. Once you do that, you can mount a folder on the Colab instance to a folder on your local machine using sshfs. This will allow you to access the Colab instance as though it were any other folder on your machine, including opening it in your IDE, viewing files in a file browser, and cloning or updating git repositories. This post goes into more depth on that.
These are the best options I was able to identify, and I hope one of them can be made to work for you.
I am using Google Cloud App Engine and deploying with gcloud app deploy and a standard app.yaml file. My requirements.txt file has one private package that is fetched from GitHub (git+ssh://git@github.com/...git). This install works locally, but when I run the deploy I get
Host key verification failed.
fatal: Could not read from remote repository.
This suggests there is no SSH key available when installing. Reading the docs (https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies), it appears that this just isn't an option:
Dependencies are installed in a Cloud Build environment that does not provide access to SSH keys. Packages hosted on repositories that require SSH-based authentication must be copied into your project directory and uploaded alongside your project's code using the pip package manager.
To me this seems far from optimal: the whole point of factoring the code out into a package was to avoid duplicating it across repos. Now, if I want to use App Engine, you're telling me this isn't possible?
Is there really no workaround?
See:
https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies#private_dependencies
The App Engine service does not (and should not) have access to your private repo.
One alternative (that you don't want) is to make your SSH key available to the App Engine build.
The other -- as documented -- is that you must provide the content of your private repo to the service as part of your upload.
I'm going through the same issue, deploying on gcloud a Python project that contains some private repositories in its requirements.txt. As @DazWilkin wrote already, there's no way to deploy it the way you normally would.
One option would be to create a Docker image of the whole project and its dependencies, push it to the Google Cloud container registry, and then pull it into the App Engine instance.
Is it possible to create a new excel spreadsheet file and save it to an Amazon S3 bucket without first saving to a local filesystem?
For example, I have a Ruby on Rails web application which currently generates Excel spreadsheets using the write_xlsx gem and saves them to the server's local file system. Internally, it looks like the gem uses Ruby's IO.copy_stream when it saves the spreadsheet. I'm not sure this will work after moving to Heroku and S3.
Has anyone done this before using Ruby or even Python?
I found this earlier question, Heroku + ephemeral filesystem + AWS S3. So, it would seem this is not possible using Heroku. Theoretically, it would be possible using a service which allows adding an Amazon EBS.
There is a dedicated Ruby gem to help you move files to Amazon S3:
https://rubygems.org/gems/aws-s3
If you want more details about the implementation, here is the git repository. The documentation on the page is very complete and explains how to move files to S3. Hope it helps.
Once your xls file is created, the library helps you create an S3Object and store it in a bucket (which you can also create with the library).
S3Object.store('keyOfYourData', open('nameOfExcelFile.xls'), 'bucketName')
If you want more options, Amazon also provides an official gem for this purpose: https://rubygems.org/gems/aws-sdk
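Since the question also mentions Python, here is a hedged sketch of the same idea in Python, assuming XlsxWriter and boto3 (the bucket name and key are placeholders): the spreadsheet is built entirely in an in-memory buffer and uploaded to S3 without ever touching the local filesystem.

# Sketch: build an .xlsx in memory and upload it straight to S3 (no temp file).
import io
import boto3
import xlsxwriter

buffer = io.BytesIO()
workbook = xlsxwriter.Workbook(buffer, {"in_memory": True})
worksheet = workbook.add_worksheet()
worksheet.write("A1", "Hello from memory")
workbook.close()        # finalizes the workbook into the BytesIO buffer
buffer.seek(0)

s3 = boto3.client("s3")
s3.upload_fileobj(buffer, "my-bucket", "reports/example.xlsx")  # placeholder bucket/key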