I have a JSON object that needs to be uploaded to a Google Spreadsheet. I have searched and read various resources but could not find a solution. Is there a way to upload an object or a CSV file from my local machine to a Google Spreadsheet using gspread? I would prefer not to use the Google client API. Thanks
You can import CSV data using the gspread.Client.import_csv method. Here's a quick example from the docs:
import gspread
# Check how to get `credentials`:
# https://github.com/burnash/gspread
gc = gspread.authorize(credentials)
# Read the CSV file contents
with open('file_to_import.csv', 'r') as f:
    content = f.read()
gc.import_csv('<SPREADSHEET_ID>', content)
This will upload your CSV file into the first worksheet of the spreadsheet with the given <SPREADSHEET_ID>.
NOTE
This method removes all other worksheets and then entirely replaces the contents of the first worksheet.
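If your data starts out as a JSON object rather than a CSV file, one option is to serialize it to CSV text in memory and pass that to import_csv. A minimal sketch, assuming the object is a list of flat dicts with identical keys (the sample records are hypothetical):

import csv
import io
import gspread

gc = gspread.authorize(credentials)

# Hypothetical JSON data: a list of flat dicts sharing the same keys
records = [{"name": "a", "value": 1}, {"name": "b", "value": 2}]

# Serialize the records to CSV text in memory
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
writer.writeheader()
writer.writerows(records)

gc.import_csv('<SPREADSHEET_ID>', buf.getvalue())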
Related
I have a CSV file (data.csv) in my Google Drive.
I want to append new data (df) to that CSV file (stacking the new data below the old, as both CSVs have the same column headers) and save it back to the existing data.csv without using Google Sheets.
How can I do this?
Open Google Colab and mount your Google Drive in Colab using:
from google.colab import drive
drive.mount('/content/drive')
Provide your authorization, and the file will be available in your current session under the drive directory. You can then edit the file by importing pandas or the csv module, then:
with open('/content/drive/My Drive/data.csv', 'a') as fd:  # path under the mounted drive
    fd.write(myCsvRow)
Opening a file with the 'a' parameter allows you to append to the end of the file instead of simply overwriting the existing content. Any changes you make will be saved to your original file.
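If the new data is already a pandas dataframe (the df from the question), pandas can append it directly in the same mode. A short sketch, assuming both CSVs share the same column order:

import pandas as pd

# Write df below the existing rows; skip the header so it isn't duplicated
df.to_csv('/content/drive/My Drive/data.csv', mode='a', header=False, index=False)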
I created a dataframe in a Jupyter notebook and I would like to export this dataframe directly to SharePoint as an Excel file. Is there a way to do that?
As per the other answers, there doesn't seem to be a way to write a dataframe directly to SharePoint at the moment, but rather than saving it to file locally, you can create a BytesIO object in memory, and write that to SharePoint using the Office365-REST-Python-Client.
from io import BytesIO
buffer = BytesIO() # Create a buffer object
df.to_excel(buffer, index=False) # Write the dataframe to the buffer
buffer.seek(0)  # Rewind the buffer to the start
file_content = buffer.read()  # Raw bytes of the Excel file
Referencing the example script upload_file.py, you would just replace the lines reading in the file with the code above.
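For completeness, here is a minimal sketch of the upload step itself, assuming the current Office365-REST-Python-Client API and placeholder site URL, credentials, and folder name:

from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext

# Placeholder site URL and credentials
ctx = ClientContext('https://yourtenant.sharepoint.com/sites/yoursite').with_credentials(
    UserCredential('user@yourtenant.com', 'password'))

# Upload the in-memory bytes from the buffer above to a document library
target_folder = ctx.web.get_folder_by_server_relative_url('Shared Documents')
target_folder.upload_file('report.xlsx', file_content).execute_query()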
You can add the SharePoint site as a network drive on your local computer and then use a file path that way. Here's a link to show you how to map the SharePoint to a network drive. From there, just select the file path for the drive you selected.
df.to_excel(r'Y:\Shared Documents\file_name.xlsx')
I think there is no way to export pandas dataframe data directly to SharePoint, but you can export your pandas data to an Excel file and then upload that file to SharePoint.
Export the Covid data to an Excel file:
df.to_excel(r'/home/noh/Desktop/covid19.xlsx', index=False)
I wrote a small python program that works with data from a CSV file. I am tracking some numbers in a google sheet and I created the CSV file by downloading the google sheet. I am trying to find a way to have python read in the CSV file directly from google sheets, so that I do not have to download a new CSV when I update the spreadsheet.
I see that the requests library may be able to handle this, but I'm having a hard time figuring it out. I've chosen not to try the google APIs because this way seems simpler as long as I don't mind making the sheet public to those with the link, which is fine.
I've tried working with the requests documentation but I'm a novice programmer and I can't get it to read in as a CSV.
This is how the data is currently taken into python:
import csv

file = open('data1.csv', newline='')
reader = csv.reader(file)
Ideally, I would like the file = open() call to be replaced by the requests library, pulling directly from the spreadsheet.
You need to find the correct URL request that downloads the file.
Sample URL:
csv_url='https://docs.google.com/spreadsheets/d/169AMdEzYzH7NDY20RCcyf-JpxPSUaO0nC5JRUb8wwvc/export?format=csv&id=169AMdEzYzH7NDY20RCcyf-JpxPSUaO0nC5JRUb8wwvc&gid=0'
One way to find it is to download your file manually while inspecting the request URL in the Network tab of your browser's Developer Tools.
Then the following is enough:
import requests as rs
csv_url=YOUR_CSV_DOWNLOAD_URL
res = rs.get(url=csv_url)
with open('google.csv', 'wb') as f:
    f.write(res.content)
This will save the CSV file as 'google.csv' in the folder of your Python script.
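Since you are already using csv.reader, you can also skip the intermediate file and parse the response in memory. A sketch, assuming the same export URL:

import csv
import io
import requests

res = requests.get(csv_url)
reader = csv.reader(io.StringIO(res.text))
for row in reader:
    print(row)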
import pandas as pd
import requests
YOUR_SHEET_ID=''
r = requests.get(f'https://docs.google.com/spreadsheet/ccc?key={YOUR_SHEET_ID}&output=csv')
open('dataset.csv', 'wb').write(r.content)
df = pd.read_csv('dataset.csv')
df.head()
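If the sheet is shared publicly, pandas can also read the export URL directly with no intermediate file. A sketch, using the newer export URL format:

import pandas as pd

YOUR_SHEET_ID = ''
df = pd.read_csv(f'https://docs.google.com/spreadsheets/d/{YOUR_SHEET_ID}/export?format=csv')
df.head()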
I tried @adirmola's solution but I had to tweak it a little.
When he wrote "You need to find the correct URL request that downloads the file", he had a point. An easy solution is the one I'm showing here: adding "&output=csv" after your Google Sheet ID.
Hope it helps!
I'm not exactly sure about your usage scenario, and Adirmola already provided a very exact answer to your question, but my immediate question is why you want to download the CSV in the first place.
Google Sheets has a Python library, so you can just get the data from the GSheet directly.
You may also be interested in this answer since you're interested in watching for changes in GSheets
I would just like to say that using OAuth keys and the Google Python API is not always an option. I found the above quite useful for my current application.
I was testing with the Dropbox-provided API for Python. My goal was to read a spreadsheet in my Dropbox without downloading it to my local storage.
import dropbox
dbx = dropbox.Dropbox('my-token')
print(dbx.users_get_current_account())
fl = dbx.files_get_preview('/CGPA.xlsx')[1]  # the second element of the returned tuple is a Response object
After the above code, accessing fl.text gives an HTML output showing the preview that would be seen if the file were opened in a browser, and the data can be parsed from it.
My question is whether the SDK has a built-in method for getting particular info from the spreadsheet, such as the data of a row or a cell, preferably in JSON format. I previously used butterdb for extracting data from a Google Drive spreadsheet. Is there similar functionality for Dropbox? I could not figure it out from the docs: http://dropbox-sdk-python.readthedocs.io/en/master/
No, the Dropbox API doesn't offer the ability to selectively query parts of a spreadsheet file like this without downloading the whole file, but we'll consider it a feature request.
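As a workaround, you can download the whole file and query it locally. A sketch using files_download together with openpyxl (the token and path are the placeholders from the question):

import io
import dropbox
from openpyxl import load_workbook

dbx = dropbox.Dropbox('my-token')

# files_download returns a (metadata, response) tuple
metadata, res = dbx.files_download('/CGPA.xlsx')

# Load the spreadsheet from the downloaded bytes without touching disk
wb = load_workbook(io.BytesIO(res.content), read_only=True)
ws = wb.active
for row in ws.iter_rows(values_only=True):
    print(row)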
As the title suggests, how can I import an XML file into a spreadsheet with Python? I need it for documenting high-level router configurations in a spreadsheet format.
I have been using xlsxwriter to create spreadsheets, but I have no idea how to put XML files into a spreadsheet.
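One way to approach this is to parse the XML with the standard library and write one row per element with xlsxwriter. A minimal sketch, assuming a hypothetical router_config.xml and a flat tag/attributes/text layout:

import xml.etree.ElementTree as ET
import xlsxwriter

tree = ET.parse('router_config.xml')  # hypothetical input file
root = tree.getroot()

workbook = xlsxwriter.Workbook('router_config.xlsx')
worksheet = workbook.add_worksheet()

# Header row
worksheet.write_row(0, 0, ['tag', 'attributes', 'text'])

# One row per XML element: tag name, attribute dict, text content
for row_idx, elem in enumerate(root.iter(), start=1):
    worksheet.write(row_idx, 0, elem.tag)
    worksheet.write(row_idx, 1, str(elem.attrib))
    worksheet.write(row_idx, 2, (elem.text or '').strip())

workbook.close()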