I recently upgraded a video from 480p to 720p in my repo. To do this I had to use Git LFS, since the 720p video was more than 100 MB (GitHub's limit for file storage).
Before the upgrade, I could download the video via a link similar to this: https://raw.githubusercontent.com/user/repo/master/videos/video.mp4
Now this link displays Git LFS-related information about the video (version, oid, and size). I know I can use another link to download the video, but I really need this link to do it (this URL has been documented somewhere I can't edit).
Is there a way to achieve this?
I think I found a solution: I looked at the network traffic. The URL should be:
https://media.githubusercontent.com/media/<name>/<repoName>/<branchName>/raw/some/path/someFile.huge
Or in your case:
https://media.githubusercontent.com/media/user/repo/master/raw/videos/video.mp4
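For scripted downloads, a minimal sketch (assuming the repository is public; the URL and output filename below are placeholders to replace with your own) that streams the file from that media.githubusercontent.com address:

# Sketch: stream an LFS-backed file from the media.githubusercontent.com URL shown above.
import requests

url = "https://media.githubusercontent.com/media/..."  # paste the full URL from above

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    with open("video.mp4", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            fh.write(chunk)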
I finished a Streamlit app that downloads video and audio using pytube, and I now want to deploy it so that I and others can use it on phones, computers, and so on. However, when I deploy it and use it from another device, the app itself works but the file is not downloaded. I therefore want the file to be downloaded directly through Chrome (the way pictures and other files are). Is there any particular method or trick for that? Thanks
You need to provide the code of your app. To specify a download directory in pytube, you need to write:
directory = '/home/user/Documents'
file.download(directory)
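A slightly fuller sketch, assuming pytube is installed and that video_url and directory below are placeholders you replace with your own values. Note that this saves the file on the machine running the app, not in the viewer's browser:

# Sketch: download a video with pytube into an explicit output directory.
# video_url and directory are placeholders.
from pytube import YouTube

video_url = "https://www.youtube.com/watch?v=..."  # placeholder
directory = "/home/user/Documents"

yt = YouTube(video_url)
stream = yt.streams.get_highest_resolution()
stream.download(output_path=directory)  # written on the server running the app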
Node.js has an index file that makes it easy to get all releases with additional info. Is there a file like this for Python too? I've searched the Python FTP directory but couldn't find one.
node's index file: https://nodejs.org/dist/index.json
As far as I've researched, there is none, so I created this Express app to scrape the FTP page and return the versions as an array. It's live on Heroku, but by the time you're reading this post it may have gone down. You can spin up your own server from this repo.
PS: I didn't spend much time on it, it just does the job.
GitHub · Heroku
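If you would rather stay in Python than run a separate Express app, here is a minimal sketch of the same scraping idea (assuming the listing at https://www.python.org/ftp/python/ keeps its current directory-index format, and that requests is installed):

# Sketch: scrape the python.org download listing for release version numbers.
# Release directories are linked as e.g. <a href="3.12.1/">3.12.1/</a>.
import re
import requests

INDEX_URL = "https://www.python.org/ftp/python/"

html = requests.get(INDEX_URL, timeout=30).text
versions = sorted(set(re.findall(r'href="(\d+\.\d+(?:\.\d+)?)/"', html)))  # lexicographic order
print(versions)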
MyBinder and Colaboratory make it possible for people to run our examples directly in the browser, without any download required.
When I work on Binder, loading our data takes a very long time, so I need to run the Python code on the website directly.
I'm not sure whether I totally get the question. If you want to avoid having to download the data from another source, you can add the data to the git repo you use to start Binder. It should look something like this: https://github.com/lschmiddey/book_recommender_voila
However, if your dataset is too big to be uploaded to your git repo, you have to get the data onto the provided Binder server somehow. So you usually have to download the data onto your Binder server so that other users can work with your notebook.
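A common pattern for that is to fetch the data at runtime, the first time the notebook runs. A rough sketch, assuming the dataset is hosted at some public URL of your choosing (DATA_URL and the file name below are placeholders):

# Sketch: fetch a large dataset onto the Binder server on first run.
# DATA_URL is a placeholder for wherever the file is hosted.
import urllib.request
from pathlib import Path

DATA_URL = "https://example.com/my-large-dataset.csv"  # placeholder
target = Path("data") / "my-large-dataset.csv"

if not target.exists():
    target.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(DATA_URL, str(target))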
I have tried searching for a way to download the VHD disk of a VM from Azure, but couldn't find any.
The only way I found is to download it manually, using the steps in this link:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/download-vhd
If anyone has a way to download it using Python, please share...
Thanks in advance...
Essentially, the link you referenced tells you what you need to do to download a .VHD.
However, if you want to use Python, there is a library you can use to make common tasks easier.
See this file especially for some more information on how to read blobs in an Azure Storage Account.
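As a rough sketch (assuming the azure-storage-blob package, and that you have already generated a SAS URL for the OS disk as the linked article describes; sas_url below is a placeholder):

# Sketch: download a VHD blob given a SAS URL for the disk.
from azure.storage.blob import BlobClient

sas_url = "https://<account>.blob.core.windows.net/<container>/<disk>.vhd?<sas-token>"  # placeholder

blob = BlobClient.from_blob_url(sas_url)
with open("disk.vhd", "wb") as fh:
    blob.download_blob().readinto(fh)  # streams the blob to the local file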
I want to download all the files that are publicly accessible on this site:
https://www.duo.uio.no/
This is the site of the University of Oslo, where we can find every paper/thesis that is publicly available in the university's archives. I tried a crawler, but the website has some mechanism in place to stop crawlers from accessing its documents. Are there any other ways of doing this?
I did not mention this in the original question, but what I want is all the PDF files on the server. I tried SiteSucker, but that seems to just download the site itself.
Try wget, pointed at the site in question (if you only want the PDFs, adding --accept=pdf should restrict what gets kept):
wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=unix,ascii --domains duo.uio.no --no-parent https://www.duo.uio.no/
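If wget gets blocked, a small Python sketch along the same lines (assuming requests and beautifulsoup4 are installed, and start_url below is a placeholder for the listing page you want to walk) can at least collect the PDF links:

# Sketch: collect links to PDF files from a listing page on the site.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

start_url = "https://www.duo.uio.no/"  # placeholder: use the collection page you care about
headers = {"User-Agent": "Mozilla/5.0"}  # some sites block the default client string

html = requests.get(start_url, headers=headers, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

pdf_links = [
    urljoin(start_url, a["href"])
    for a in soup.find_all("a", href=True)
    if a["href"].lower().endswith(".pdf")
]
print(pdf_links)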
You could try using SiteSucker, which lets you download the contents of a website, ignoring any rules they may have in place.