I have tried searching for a way to download the VHD disk of a VM from Azure, but couldn't find one.
The only way I found is to download it manually, following the steps in this link:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/download-vhd
If anyone knows a way to download it using Python, please share.
Thanks in advance.
Essentially, the link you referenced tells you what you need to do to download a .VHD manually.
However, if you want to use Python, there is a library you can use to make common tasks like this easier.
See the Azure Storage SDK for Python for more information on how to read blobs in an Azure Storage Account.
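For illustration, a minimal sketch of that blob download, assuming the current azure-storage-blob (v12) package and placeholder names for the connection string, container, and blob:

    from azure.storage.blob import BlobServiceClient

    # Placeholders: substitute your storage account connection string,
    # container name, and VHD blob name.
    service = BlobServiceClient.from_connection_string(
        "<storage-account-connection-string>")
    blob = service.get_blob_client(container="vhds", blob="myvm.vhd")

    with open("myvm.vhd", "wb") as f:
        blob.download_blob().readinto(f)  # streams the blob to disk in chunks

Note this assumes an unmanaged VHD stored as a page blob; for a managed disk you would first export it (the "Grant access"/SAS URL step from the linked article) and then download from that SAS URL.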
I am new to Azure and Python and would like to ask some questions about the Azure Python SDK:
Is there a direct way to get the list of snapshots of a managed OS disk, ordered by creation date? As of now, I can only get all snapshots within a resource group or under a subscription.
How do I write Python SDK code to create snapshots asynchronously? I have an idea of using multi-threading, and I also want to be notified when a snapshot is successfully created.
My questions might be confusing since I have little experience with Azure and Python. Any help will be appreciated.
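For what it's worth, a hedged sketch of both ideas with azure-mgmt-compute (the API offers no server-side ordering, so the sorting happens client-side; all resource names below are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # 1. List snapshots of one managed disk and sort client-side, newest first.
    disk = client.disks.get("<resource-group>", "<os-disk-name>")
    snapshots = [
        s for s in client.snapshots.list_by_resource_group("<resource-group>")
        if s.creation_data.source_resource_id == disk.id
    ]
    snapshots.sort(key=lambda s: s.time_created, reverse=True)

    # 2. begin_create_or_update returns an LROPoller immediately; attach a
    # callback to be notified when the snapshot operation completes.
    poller = client.snapshots.begin_create_or_update(
        "<resource-group>",
        "my-snapshot",
        {
            "location": disk.location,
            "creation_data": {
                "create_option": "Copy",
                "source_resource_id": disk.id,
            },
        },
    )
    poller.add_done_callback(lambda op: print("snapshot done:", op.status()))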
I have a dataset containing hundreds of NumPy arrays that look like this.
I am trying to save them to an online drive so that I can run my code against this dataset remotely from a server. I cannot access the server's drive; I can only run code scripts and use the terminal. I have tried Google Drive and OneDrive, and looked up how to generate a direct download link from those drives, but it did not work.
In short, I need to be able to fetch those files from my Python scripts. Could anyone give some hints?
You can get the download URLs very easily from Drive. I assume that you have already uploaded the files into a Drive folder; then it's straightforward to download them from Python. First, you need a Python environment that can connect to Drive. If you don't currently have one, you can follow this guide, which walks you through installing the required libraries, setting up credentials, and running a sample script. Once you can run the sample script, you can make minor modifications to reach your goal.
To download the files you are going to need their IDs. I am assuming that you already know them, but if you don't, you can retrieve them by running Files.list on the folder where you keep the files, using '{ FOLDER ID }' in parents as the q parameter.
To actually download a file, run a Files.get request with the file ID; you will find the download URL in the webContentLink property. Feel free to leave a comment if you need further clarification.
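A minimal sketch of both steps, assuming the token.json produced by Google's Python quickstart and a placeholder folder ID, and loading one of the NumPy files directly from the response:

    import io

    import numpy as np
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaIoBaseDownload

    # Assumptions: token.json comes from the quickstart OAuth flow, and
    # <FOLDER_ID> is the ID of the Drive folder holding the .npy files.
    creds = Credentials.from_authorized_user_file(
        "token.json", ["https://www.googleapis.com/auth/drive.readonly"])
    service = build("drive", "v3", credentials=creds)

    # Files.list with "'<FOLDER_ID>' in parents" as the q parameter
    files = service.files().list(
        q="'<FOLDER_ID>' in parents",
        fields="files(id, name, webContentLink)",
    ).execute().get("files", [])
    for f in files:
        print(f["name"], f["webContentLink"])  # direct download URL

    # Download the first file through the API and load it as a NumPy array
    buf = io.BytesIO()
    downloader = MediaIoBaseDownload(
        buf, service.files().get_media(fileId=files[0]["id"]))
    done = False
    while not done:
        _, done = downloader.next_chunk()
    buf.seek(0)
    arr = np.load(buf)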
I have been searching for hours trying to find out how I can edit an Excel file saved to OneDrive using Python, with no luck. Please help if you know how, or whether it is even possible.
@JeremiahTrest Welcome to Stack Overflow. I don't think that's how OneDrive works. What I mean is, I don't think it's possible for any language to directly edit a file saved to your OneDrive in the cloud. What you would have to do is get a copy of the file onto the machine that is running the Python script, update the file on that machine and save it with the changes, then push the changed file back to your OneDrive. I looked and found this SDK for Python that is meant to help you interface with the OneDrive API. So you would use the SDK to get the file from OneDrive, update the file locally, then use the SDK to push the changed file back out to OneDrive.
I came across this post as I have the same task. Here's what I'm going to try:
Use the requests library in Python to call the OneDrive API.
The Microsoft Graph documentation has a page on the Excel APIs.
I'll update when I have my code; a rough sketch of the idea is below.
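For reference, a hedged sketch of that download-edit-upload flow against Microsoft Graph, assuming an access token obtained separately (e.g. with msal) and a workbook named Book1.xlsx at the drive root:

    import requests

    # Assumption: ACCESS_TOKEN comes from your own OAuth flow (e.g. msal).
    ACCESS_TOKEN = "<token-from-your-oauth-flow>"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    base = "https://graph.microsoft.com/v1.0/me/drive"

    # Download the workbook so it can be edited locally
    r = requests.get(f"{base}/root:/Book1.xlsx:/content", headers=headers)
    r.raise_for_status()
    with open("Book1.xlsx", "wb") as f:
        f.write(r.content)

    # ... edit Book1.xlsx locally, e.g. with openpyxl ...

    # Push the changed file back (simple upload, for files under ~4 MB)
    with open("Book1.xlsx", "rb") as f:
        r = requests.put(f"{base}/root:/Book1.xlsx:/content",
                         headers=headers, data=f)
    r.raise_for_status()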
Is it possible to create a new Excel spreadsheet file and save it to an Amazon S3 bucket without first saving it to a local filesystem?
For example, I have a Ruby on Rails web application which generates Excel spreadsheets using the write_xlsx gem and saves them to the server's local file system. Internally, it looks like the gem uses Ruby's IO.copy_stream when it saves the spreadsheet. I'm not sure this will keep working after moving to Heroku and S3.
Has anyone done this before using Ruby or even Python?
I found this earlier question, Heroku + ephemeral filesystem + AWS S3. So it would seem this is not possible on Heroku. Theoretically, it would be possible using a service that allows attaching an Amazon EBS volume.
There is a dedicated Ruby gem to help you move files to Amazon S3:
https://rubygems.org/gems/aws-s3
If you want more details about the implementation, here is the git repository. The documentation on the page is very complete and explains how to move files to S3. Hope it helps.
Once your Excel file is created, the library helps you create an S3Object and store it in a bucket (which you can also create with the library):

    S3Object.store('keyOfYourData', open('nameOfExcelFile.xls'), 'bucketName')  # assumes AWS::S3::Base.establish_connection! was called first

If you want more options, Amazon also provides an official gem for this purpose: https://rubygems.org/gems/aws-sdk
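Since the question mentions Python as an option, here's a hedged sketch of the fully in-memory approach there: XlsxWriter (the Python sibling of the write_xlsx gem) can write the workbook into a BytesIO buffer, and boto3 uploads it without ever touching the local filesystem. Bucket and key names are placeholders:

    import io

    import boto3
    import xlsxwriter

    buf = io.BytesIO()
    workbook = xlsxwriter.Workbook(buf, {"in_memory": True})  # avoid temp files
    worksheet = workbook.add_worksheet()
    worksheet.write("A1", "hello from memory")
    workbook.close()  # finalizes the .xlsx bytes into buf

    buf.seek(0)
    boto3.client("s3").upload_fileobj(buf, "my-bucket", "reports/report.xlsx")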
I'm currently using gdata-python-client (the Google Documents List API) to access my Google Drive from the terminal on Linux, and I have a problem displaying image files: it only shows the .doc, .xls, and .pdf files.
Is there a solution to this while still using gdata-python-client? I'm hoping for something better than switching to the Google Drive API, which would mean restarting my project. So sad :(
And if I do switch to the Google Drive API, how do I do it? Can I reuse my existing project and keep it compatible with the new API?
Please give me some advice or a tutorial.
Thank you very much :)
Use the Drive API. We have a Python command-line sample to get you started, and Python snippets for every API method, including files.list.
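As a small illustration (a hedged sketch against the current v3 API, not the official sample), listing only image files looks roughly like this, again assuming credentials from the Python quickstart:

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Assumption: token.json was produced by the quickstart OAuth flow.
    creds = Credentials.from_authorized_user_file(
        "token.json", ["https://www.googleapis.com/auth/drive.metadata.readonly"])
    service = build("drive", "v3", credentials=creds)

    resp = service.files().list(
        q="mimeType contains 'image/'",      # restrict results to images
        fields="files(id, name, mimeType)",
    ).execute()
    for f in resp.get("files", []):
        print(f["name"], f["mimeType"])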