Smartsheet adding multiple contacts - python

I have a column in my Smartsheet set to 'Allow multiple contacts to be selected'.
I am using the simple-smartsheet package (https://pypi.org/project/simple-smartsheet/), but I cannot find any example on the internet of anyone adding multiple contacts with this package.
Below is the code I tried:
from simple_smartsheet import Smartsheet
from simple_smartsheet.models import Sheet, Column, Row, Cell, ColumnType

#%%
access_token = 'XXX'
smartsheet = Smartsheet(access_token)
sheet_name = 'test'
sh = smartsheet.sheets.get(sheet_name)

new_rows = [
    Row(
        to_top=True,
        cells=[
            Cell(column_id=released_by.id,
                 value=[{'objectType': 'CONTACT',
                         'email': 'xxx.yyy@westrac.com.au',
                         'name': 'xxx yyy'},
                        {'objectType': 'CONTACT',
                         'email': 'aaa.bbb@westrac.com.au',
                         'name': 'aaa bbb'}])
        ],
    ),
]
# new_rows.append(Row(to_top=True, cells=sh.make_cells(row_value)))
smartsheet.sheets.add_rows(sh.id, new_rows)
But I got this error:
SmartsheetHTTPClientError: HTTP response code 400 - Error code 1008 - Unable to parse request. The following error occurred: Field "value" was not parsable. value must be a primitive type
at [Source: java.io.PushbackInputStream@786472ed; line: 1, column: 241].
I am not quite sure where I went wrong. Any thoughts?

From the Python package docs back to the original API docs, you can see that the Cell class only accepts a string, a boolean, or a number as its value.
So this should work:
new_rows = [
    Row(
        to_top=True,
        cells=[
            Cell(column_id=released_by.id, value="your value")
        ],
    ),
]

You were close, and this was a pain. This is working for me; here is an example for both multi-contact and single-contact cells:
# build the cell to update
new_cell = smartsheet_client.models.Cell()
new_cell.column_id = pasteCol
new_cell.object_value = {"objectType": "MULTI_CONTACT",
                         "values": [{"name": "rOB", "email": "test@test.com"},
                                    {"name": "rob", "email": "rob@test.com"}]}
# this will work on single select
# new_cell.object_value = {'objectType': 'CONTACT', 'email': 'test@test.com', 'name': 'test'}
new_cell.strict = True
print(new_cell)
# append the cell to a row and update
get_row = smartsheet_client.models.Row()
get_row.id = rowID
get_row.cells.append(new_cell)
updated_row = smartsheet_client.Sheets.update_rows(sheet_id, [get_row])

1. Create the list of values as delimited text (I wrote a function to convert it to a correctly formatted string, because Smartsheet is very strict).
2. Initialize the row as usual.
3. Create the cell, setting "objectType" to "MULTI_PICKLIST" inside "object_value", and add it to the row:
    new_row_add.cells.append({'column_id': col_id, 'object_value': {'objectType': 'MULTI_PICKLIST', 'values': new_list}, 'strict': True})
4. Add the row as usual.
This also works for row updates.
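Putting those steps together, a minimal sketch might look like this; the helper name, col_id, new_list, and the client/row objects are illustrative assumptions, not part of the original answer:

```python
def multi_picklist_cell(col_id, values):
    # Cell payload for a multi-picklist column: the values list goes
    # inside 'object_value' with objectType MULTI_PICKLIST
    return {
        'column_id': col_id,
        'object_value': {'objectType': 'MULTI_PICKLIST', 'values': values},
        'strict': True,
    }

# Usage (new_row_add, smartsheet_client, and sheet_id assumed to exist):
# new_row_add.cells.append(multi_picklist_cell(col_id, ['Option A', 'Option B']))
# smartsheet_client.Sheets.add_rows(sheet_id, [new_row_add])
```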


Push a python dataframe to Smartsheet using Smartsheet API

I have a Python script that fetches data from the Meraki dashboard through its API. The data is stored in a dataframe, which needs to be pushed to a Smartsheet using the Smartsheet API. I've searched the Smartsheet API documentation but couldn't find a solution to the problem. Has anyone worked on this kind of use case before, or know of a script to push a simple dataframe to Smartsheet?
The code is something like this:
for device in list_of_devices:
    try:
        dict1 = {'Name': [device['name']],
                 'Serial_No': [device['serial']],
                 'MAC': [device['mac']],
                 'Network_Id': [device['networkId']],
                 'Product_Type': [device['productType']],
                 'Model': [device['model']],
                 'Tags': [device['tags']],
                 'Lan_Ip': [device['lanIp']],
                 'Configuration_Updated_At': [device['configurationUpdatedAt']],
                 'Firmware': [device['firmware']],
                 'URL': [device['url']]
                 }
    except KeyError:
        dict1['Lan_Ip'] = "NA"
    temp = pd.DataFrame.from_dict(dict1)
    alldata = alldata.append(temp)
alldata.reset_index(drop=True, inplace=True)
The dataframe("alldata") looks something like this:
Name Serial_No MAC \
0 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
1 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
2 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
The dataframe has around 1000 rows and 11 columns.
I've tried pushing this dataframe with code similar to that mentioned in the comments, but I'm getting a "Bad Request" error.
smart = smartsheet.Smartsheet(access_token='xxxxxxxx')
sheet_id = xxxxxxxxxxxxx
sheet = smart.Sheets.get_sheet(sheet_id)
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id
data_dict = alldata.to_dict('index')
rowsToAdd = []
for i, i in data_dict.items():
    new_row = smart.models.Row()
    new_row.to_top = True
    for k, v in i.items():
        new_cell = smart.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        new_row.cells.append(new_cell)
    rowsToAdd.append(new_row)
result = smart.Sheets.add_rows(sheet_id, rowsToAdd)
{"response": {"statusCode": 400, "reason": "Bad Request", "content": {"detail": {"index": 0}, "errorCode": 1012, "message": "Required object attribute(s) are missing from your request: cell.value.", "refId": "1ob56acvz5nzv"}}}
(Screenshot: the Smartsheet sheet where the data must be pushed.)
The following code adds data from a dataframe to a sheet in Smartsheet -- this should be enough to at least get you started. If you still can't get the desired result using this code, please update your original post to include the code you're using, the outcome you're wanting, and a detailed description of the issue you encountered. (Add a comment to this answer if you update your original post, so I'll be notified and will know to look.)
# target sheet
sheet_id = 3932034054809476
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

df = pd.DataFrame({'item_id': [111111, 222222],
                   'item_color': ['red', 'yellow'],
                   'item_location': ['office', 'kitchen']})
data_dict = df.to_dict('index')

rowsToAdd = []
# each object in data_dict represents 1 row of data
for i, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key-value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)

# add the collection of rows to the sheet in Smartsheet
result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
UPDATE #1 - re Bad Request error
It seems the error described in your first comment is caused by some cells in your dataframe not having a value. When you add a new row using the Smartsheet API, each cell that's specified for the row must specify a value; otherwise you'll get the Bad Request error you've described. Try adding an if statement inside the for loop to skip adding the cell if the value of v is None:
for k, v in i.items():
    # skip adding this cell if there's no value
    if v is None:
        continue
    ...
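One caveat (my addition, not part of the original answer): after to_dict(), pandas usually represents missing values as NaN rather than None, so a NaN-aware check such as pd.isna may be needed:

```python
import pandas as pd

row_data = {'Name': 'device1', 'Lan_Ip': None, 'Firmware': float('nan')}

# pd.isna catches both None and NaN, which is how pandas typically
# stores missing values once a dataframe is converted back to dicts
cells_to_add = {k: v for k, v in row_data.items() if not pd.isna(v)}
```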
UPDATE #2 - re further troubleshooting
In response to your second comment below: you'll need to debug further using the data in your dataframe, as I'm unable to repro the issue you describe using other data.
To simplify things, I'd suggest you start by trying to debug with just one item in the dataframe. You can do so by adding a break statement at the end of the for loop that builds the dict; that way, only the first device will be added.
for device in list_of_devices:
    try:
        ...
    except KeyError:
        dict1['Lan_Ip'] = "NA"
    temp = pd.DataFrame.from_dict(dict1)
    alldata = alldata.append(temp)
    # break out of the loop after one item is added
    break
alldata.reset_index(drop=True, inplace=True)
# print dataframe contents
print(alldata)
If you get the same error when testing with just one item and can't spot what it is about that data (or the way it's stored in your dataframe) that's causing the Smartsheet error, add a print(alldata) statement after the for loop (as shown in the snippet above) and update your original post again to include the output of that statement (changing any sensitive data values, of course); then I can try to repro/troubleshoot using that data.
UPDATE #3 - repro'd issue
Okay, so I've reproduced the error you've described -- by specifying None as the value of a field in the dict.
The following code successfully inserts two new rows into Smartsheet -- because every field in each dict it builds contains a (non-None) value. (For simplicity, I'm manually constructing two dicts in the same manner as you do in your for loop.)
# target sheet
sheet_id = 37558492129156
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

#----
# start: repro SO question's building of dataframe
#----
alldata = pd.DataFrame()
dict1 = {'Name': ['name1'],
         'Serial_No': ['serial_no1'],
         'MAC': ['mac1'],
         'Network_Id': ['networkId1'],
         'Product_Type': ['productType1'],
         'Model': ['model1'],
         'Tags': ['tags1'],
         'Lan_Ip': ['lanIp1'],
         'Configuration_Updated_At': ['configurationUpdatedAt1'],
         'Firmware': ['firmware1'],
         'URL': ['url1']
         }
temp = pd.DataFrame.from_dict(dict1)
alldata = alldata.append(temp)
dict2 = {'Name': ['name2'],
         'Serial_No': ['serial_no2'],
         'MAC': ['mac2'],
         'Network_Id': ['networkId2'],
         'Product_Type': ['productType2'],
         'Model': ['model2'],
         'Tags': ['tags2'],
         'Lan_Ip': ['lanIp2'],
         'Configuration_Updated_At': ['configurationUpdatedAt2'],
         'Firmware': ['firmware2'],
         'URL': ['URL2']
         }
temp = pd.DataFrame.from_dict(dict2)
alldata = alldata.append(temp)
alldata.reset_index(drop=True, inplace=True)
#----
# end: repro SO question's building of dataframe
#----

data_dict = alldata.to_dict('index')
rowsToAdd = []
# each object in data_dict represents 1 row of data
for i, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key-value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)
result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
However, running the following code (where the value of the URL field in the second dict is set to None) results in the same error you've described:
{"response": {"statusCode": 400, "reason": "Bad Request", "content": {"detail": {"index": 1}, "errorCode": 1012, "message": "Required object attribute(s) are missing from your request: cell.value.", "refId": "dw1id3oj1bv0"}}}
Code that causes this error (identical to the successful code above except that the value of the URL field in the second dict is None):
# target sheet
sheet_id = 37558492129156
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

#----
# start: repro SO question's building of dataframe
#----
alldata = pd.DataFrame()
dict1 = {'Name': ['name1'],
         'Serial_No': ['serial_no1'],
         'MAC': ['mac1'],
         'Network_Id': ['networkId1'],
         'Product_Type': ['productType1'],
         'Model': ['model1'],
         'Tags': ['tags1'],
         'Lan_Ip': ['lanIp1'],
         'Configuration_Updated_At': ['configurationUpdatedAt1'],
         'Firmware': ['firmware1'],
         'URL': ['url1']
         }
temp = pd.DataFrame.from_dict(dict1)
alldata = alldata.append(temp)
dict2 = {'Name': ['name2'],
         'Serial_No': ['serial_no2'],
         'MAC': ['mac2'],
         'Network_Id': ['networkId2'],
         'Product_Type': ['productType2'],
         'Model': ['model2'],
         'Tags': ['tags2'],
         'Lan_Ip': ['lanIp2'],
         'Configuration_Updated_At': ['configurationUpdatedAt2'],
         'Firmware': ['firmware2'],
         'URL': [None]
         }
temp = pd.DataFrame.from_dict(dict2)
alldata = alldata.append(temp)
alldata.reset_index(drop=True, inplace=True)
#----
# end: repro SO question's building of dataframe
#----

data_dict = alldata.to_dict('index')
rowsToAdd = []
# each object in data_dict represents 1 row of data
for i, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key-value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)
result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
Finally, note that the error message I received contains {"index": 1} -- this implies that the value of index in this error message indicates the (zero-based) index of the problematic row. The fact that your error message contains {"index": 0} implies that there's a problem with the data in the first row you're trying to add to Smartsheet (i.e., the first item in the dataframe). Therefore, following the troubleshooting guidance I posted in my previous update (Update #2 above) should allow you to closely examine the data for the first item/row and hopefully spot the problematic data (i.e., where the value is missing).
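To make the loop robust against such rows up front, one option (a sketch of mine, not the answerer's code) is a small helper that drops missing values before any Cell objects are built:

```python
import math

def build_cell_pairs(row_dict, column_map):
    # Return (column_id, value) pairs, skipping None/NaN so the API
    # never receives a cell without a value (error code 1012)
    pairs = []
    for k, v in row_dict.items():
        if v is None or (isinstance(v, float) and math.isnan(v)):
            continue
        pairs.append((column_map[k], v))
    return pairs
```

Each returned pair can then populate a Cell object exactly as in the loops above.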

Saving python dictionary (or JSON?) as CSV

I have been trying to save the output from the Google Search Console API as a CSV file. Initially, I was using sys.stdout to save what was printed by the sample code they provide. However, on the third or so attempt, I started receiving this error:
File "C:\python39\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\uff1a' in position 13: character maps to <undefined>
After that I tried switching to the pandas to_csv function. The result is not what I had hoped for, but it is at least closer:
> ,rows,responseAggregationType
0,"{'keys': ['amp pwa'], 'clicks': 1, 'impressions': 4, 'ctr': 0.25, 'position': 7.25}",byProperty
1,"{'keys': ['convert desktop site to mobile'], 'clicks': 1, 'impressions': 2, 'ctr': 0.5, 'position': 1.5}",byProperty
I'm very new to Python, but I figure it has something to do with the output from the API pull not being quite the standard dict format.
I also tried using csv.writer (I deleted that code before coming here, so I don't have an example), but the result was the same unable-to-encode issue as with sys.stdout.
Here is the code that prints the output exactly as I need it; I just need to be able to save it somewhere I can use it in a spreadsheet.
#!/usr/bin/python
# -*- coding: utf-8 -*-

from __future__ import print_function

import argparse
import sys

from googleapiclient import sample_tools

# Declare command-line flags.
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('property_uri', type=str,
                       help=('Site or app URI to query data for (including '
                             'trailing slash).'))
argparser.add_argument('start_date', type=str,
                       help=('Start date of the requested date range in '
                             'YYYY-MM-DD format.'))
argparser.add_argument('end_date', type=str,
                       help=('End date of the requested date range in '
                             'YYYY-MM-DD format.'))


def main(argv):
    service, flags = sample_tools.init(
        argv, 'searchconsole', 'v1', __doc__, __file__, parents=[argparser],
        scope='https://www.googleapis.com/auth/webmasters.readonly')

    # Get top 10 queries for the date range, sorted by click count, descending.
    request = {
        'startDate': flags.start_date,
        'endDate': flags.end_date,
        'dimensions': ['query'],
        'rowLimit': 10
    }
    response = execute_request(service, flags.property_uri, request)
    print_table(response, 'Top Queries')


def execute_request(service, property_uri, request):
    """Executes a searchAnalytics.query request.

    Args:
      service: The searchconsole service to use when executing the query.
      property_uri: The site or app URI to request data for.
      request: The request to be executed.

    Returns:
      An array of response rows.
    """
    return service.searchanalytics().query(
        siteUrl=property_uri, body=request).execute()


def print_table(response, title):
    """Prints out a response table.

    Each row contains key(s), clicks, impressions, CTR, and average position.

    Args:
      response: The server response to be printed as a table.
      title: The title of the table.
    """
    print('\n --' + title + ':')
    if 'rows' not in response:
        print('Empty response')
        return
    rows = response['rows']
    row_format = '{:<20}' + '{:>20}' * 4
    print(row_format.format('Keys', 'Clicks', 'Impressions', 'CTR', 'Position'))
    for row in rows:
        keys = ''
        # Keys are returned only if one or more dimensions are requested.
        if 'keys' in row:
            keys = u','.join(row['keys']).encode('utf-8').decode()
        print(row_format.format(
            keys, row['clicks'], row['impressions'], row['ctr'], row['position']))


if __name__ == '__main__':
    main(sys.argv)
Here's the output as I want it, but comma separated:
Keys Clicks Impressions CTR Position
amp pwa 1 4 0.25 7.25
convert desktop site to mobile 1 2 0.5 1.5
And here is what printing just the result object results in:
{'rows': [{'keys': ['amp pwa'], 'clicks': 1, 'impressions': 4, 'ctr': 0.25, 'position': 7.25}, {'keys': ['convert desktop site to mobile'], 'clicks': 1, 'impressions': 2, 'ctr': 0.5, 'position': 1.5}], 'responseAggregationType': 'byProperty'}
I hope I have included enough info; I tried every solution recommended here and on other sites before asking a question. It just seems like an oddly formatted JSON/dictionary object.
Any help is extremely appreciated.
Update, solution: I adjusted the output code to:

import csv

with open("out.csv", "w", encoding="utf8", newline='') as f:
    rows = response['rows']
    writer = csv.writer(f)
    headers = ["Keys", "Clicks", "Impressions", "CTR", "Position"]
    writer.writerow(headers)
    for row in rows:
        keys = ''
        # Keys are returned only if one or more dimensions are requested.
        if 'keys' in row:
            keys = u','.join(row['keys']).encode('utf-8').decode()
        # The response data has these keys in lowercase
        writer.writerow([keys, row['clicks'], row['impressions'], row['ctr'], row['position']])
It may just be the encoding of the output file that's the problem.
It looks like the rows you get from the response are a series of dict-like objects, so this should work:
import csv

with open("out.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.writer(f)
    headers = ["Keys", "Clicks", "Impressions", "CTR", "Position"]
    writer.writerow(headers)
    for row in rows:
        writer.writerow(
            [
                ", ".join(row.get("keys", [])),
                row["clicks"],
                row["impressions"],
                row["ctr"],
                row["position"],
            ]
        )
The writer object accepts a number of arguments to control line separators and quoting in the output csv. Check the module docs for details.
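For example, quoting behaviour can be controlled like this (a standalone sketch using an in-memory buffer):

```python
import csv
import io

buf = io.StringIO()
# QUOTE_NONNUMERIC quotes every text field while leaving numbers bare;
# the delimiter could equally be ';' or '\t'
writer = csv.writer(buf, delimiter=",", quoting=csv.QUOTE_NONNUMERIC)
writer.writerow(["amp pwa", 1, 4, 0.25, 7.25])
```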

Writing JSON data in python. Format

I have a method that writes JSON data to a file. The title is based on books, and the data is the book's publisher, date, author, etc. The method works fine if I only want to add one book.
Code
import json

def createJson(title, firstName, lastName, date, pageCount, publisher):
    print "\n*** Inside createJson method for " + title + "***\n"
    data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json', 'a') as outfile:
        json.dump(data, outfile, default=set_default)

def set_default(obj):
    if isinstance(obj, set):
        return list(obj)

if __name__ == '__main__':
    createJson("stephen-king-it", "stephen", "king", "1971", "233", "Viking Press")
JSON File with one book/one method call
{
    "stephen-king-it": [
        ["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
    ]
}
However, if I call the method multiple times to add more book data to the JSON file, the format comes out wrong. For instance, if I simply call the method twice with a main method of

if __name__ == '__main__':
    createJson("stephen-king-it", "stephen", "king", "1971", "233", "Viking Press")
    createJson("william-golding-lord of the flies", "william", "golding", "1944", "134", "Penguin Books")
My JSON file looks like

{
    "stephen-king-it": [
        ["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
    ]
} {
    "william-golding-lord of the flies": [
        ["pageCount:134", "publisher:Penguin Books", "firstName:william", "lastName:golding", "date:1944"]
    ]
}
Which is obviously wrong. Is there a simple fix to my method that produces correct JSON? I looked at many simple examples online of writing JSON data in Python, but all of them gave me format errors when I checked them on JSONLint.com. I have been racking my brain to fix this problem and editing the file to make it correct, but all my efforts were to no avail. Any help is appreciated. Thank you very much.
Simply appending new objects to your file doesn't create valid JSON. You need to add your new data inside the top-level object, then rewrite the entire file.
This should work:
def createJson(title, firstName, lastName, date, pageCount, publisher):
    print "\n*** Inside createJson method for " + title + "***\n"
    # Load any existing JSON data, or start with an empty object if the
    # file is missing or empty (IOError is the Python 2 equivalent of
    # FileNotFoundError; ValueError covers an empty or invalid file)
    try:
        with open('data.json') as infile:
            data = json.load(infile)
    except (IOError, ValueError):
        data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json', 'w') as outfile:
        json.dump(data, outfile, default=set_default)
A JSON document can be either an array or an object (dictionary). In your case the file contains two top-level objects, one with the key stephen-king-it and another with william-golding-lord of the flies. Either of these on its own would be okay, but the way you combine them is invalid.
Using an array you could do this:
[
{ "stephen-king-it": [] },
{ "william-golding-lord of the flies": [] }
]
Or a dictionary style format (I would recommend this):
{
"stephen-king-it": [],
"william-golding-lord of the flies": []
}
Also, the data you are appending should be formatted as key-value pairs in a dictionary (which would be ideal); as written, {'firstName:', firstName, ...} creates a set of strings rather than a dict. Change it to this:

data[title].append({
    'firstName': firstName,
    'lastName': lastName,
    'date': date,
    'pageCount': pageCount,
    'publisher': publisher
})
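Combining both fixes (rewriting the whole file and appending a real dict), a sketch in Python 3 syntax might look like this; the function name and field names mirror the question, but the exact structure is my assumption:

```python
import json

def add_book(path, title, **fields):
    # Load any existing data, or start with an empty top-level object
    try:
        with open(path) as f:
            data = json.load(f)
    except (FileNotFoundError, ValueError):
        data = {}
    # Append a dict of key/value pairs, not a set of "key:value" strings
    data.setdefault(title, []).append(fields)
    # Rewrite the entire file so it stays a single valid JSON object
    with open(path, 'w') as f:
        json.dump(data, f, indent=2)
```

Calling add_book twice yields one JSON object with two keys, which validates on JSONLint.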

Smartsheet API change? Bad Request Error related to Index format type

I have a piece of software that utilizes the Smartsheet API (specifically the Python SDK). I recently realized that the Smartsheet connectivity of the software has been broken since mid-January and as far as I can tell it’s due to a change on the Smartsheet side of things. I even rolled back to a version I know to have worked previously (the version that resulted from help in this related post).
Here is the code I use to access the sheet:
# Now query the Smartsheet server
try:
    # Load entire sheet
    sheet = ss.Sheets.get_sheet(sheet_id, include=['format'], page_size=500)
    logging.info("Loaded " + str(len(sheet.rows)) + " rows from sheet: " + sheet.name)

    # Build column map for later reference; translates column names to column ids
    column_map = {}
    for column in sheet.columns:
        column_map[column.title] = column.id

    # Helper function to find a cell in a row of a Smartsheet object
    def get_cell_by_column_name(row, column_ame):
        column_id = column_map[column_ame]
        return row.get_column(column_id)

    # Find the first row that hasn't already been written to
    newrowindex = [row.row_number
                   for row in sheet.rows
                   if get_cell_by_column_name(row, 'Column4').display_value != None][-1]

    # Build the row to update
    newRow = ss.models.Row()
    newRow.id = sheet.rows[newrowindex].id
    for col in sheet.columns:
        newCell = ss.models.Cell()
        newCell.column_id = col.id
        newCell.value = df[sheet.rows[2].get_column(col.id).display_value].loc[1]
        oldformat = sheet.rows[newrowindex].get_column(col.id).format
        if oldformat == None:
            oldformat = ',,,,,,,,,,,,,,,'
        newformat = oldformat.split(',')[:-3]
        newformat = "".join([ii + ',' for ii in newformat])[:-1]
        newformat = (newformat +
                     df_fmts[sheet.rows[2].get_column(col.id).display_value].loc[1])
        newCell._format = newformat
        newRow.cells.append(newCell)
    newRow.cells[0].value = sheet.rows[newrowindex].cells[0].value
    newRow.cells[2].value = fieldinputs.projectid
    result = ss.Sheets.update_rows(sheet_id, [newRow])
    smshtexportButton.button_type = "success"
    smshtexportButton.label = "Successfully Published!"
Here is the resulting error log:
Request: {
    command: GET https://api.smartsheet.com/2.0/sheets/7120922902587268?include=format&pageSize=500&page=1
}
2018-02-16 09:53:00,378 Loaded 205 rows from sheet: Bulk Update
2018-02-16 09:53:01,126 Request: {
    command: PUT https://api.smartsheet.com/2.0/sheets/7120922902587268/rows
}
2018-02-16 09:53:01,134 Response: {
    status: 400 Bad Request
    content: {{
        "detail": {
            "index": 0,
            "rowId": 6710889537660804
        },
        "errorCode": 1008,
        "message": "Unable to parse request. The following error occurred: Index value '11' is invalid for format type DECIMAL_COUNT",
        "refId": "1upixmmo45bp3"
    }}
}
The last successful update the software made to the 'Bulk Update' sheet was 1/16/18 at 3:37pm PST. I initially suspected that I had made changes to the df and df_fmts dataframes in the code above, but upon reverting back to the same version of the code used on 1/16, I still get the Bad Request error.
Any help would be greatly appreciated.
In the Smartsheet app, the furthest you can set decimal places is 5; anything greater than that returns an error like the one you are seeing. According to the error, you are setting the decimal places to 11, which is unsupported.
I suggest looking at the code that sets the format and how it gets to 11, then make sure the decimal count never goes above 5.
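One way to enforce that is to clamp the decimal-count field before assigning the format string. In the API's format tables, the decimal count is the 13th field (index 12) of the 16-field descriptor; that position is my assumption here, so verify it against your own formatTables response:

```python
def clamp_decimal_count(fmt, max_value=5, position=12):
    # Smartsheet format strings are comma-separated descriptors; cap the
    # decimal-count field so it never exceeds the app's maximum of 5
    fields = fmt.split(',')
    if position < len(fields) and fields[position].isdigit():
        fields[position] = str(min(int(fields[position]), max_value))
    return ','.join(fields)
```

Applied to newformat just before assigning newCell._format, this would prevent an out-of-range value from reaching the API.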

export list to csv and present to user via browser

Want to prompt browser to save csv
^^ Working off the above question: the file is exporting correctly, but the data is not displaying correctly.
@view_config(route_name='csvfile', renderer='csv')
def csv(self):
    name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
    header = ['name']
    rows = []
    for item in name:
        rows = [item.id]
    return {
        'header': header,
        'rows': rows
    }
I'm getting _csv.Error: sequence expected, but if I change writer.writerows(value['rows']) to writer.writerow(value['rows']) in my renderer, the file downloads via the browser just fine. The problem is that it doesn't display the data in each row: the entire result set is in one row, so each entry ends up in its own column rather than its own row.
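That error comes from writerows expecting a sequence of row sequences; a flat list of scalars triggers it. A quick standalone illustration:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)

# Correct: each element is itself a sequence representing one row
writer.writerows([['Alice', 'Smith'], ['Bob', 'Jones']])

# Incorrect: a flat list of scalars raises csv.Error
try:
    writer.writerows([1, 2])
except csv.Error:
    pass
```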
First, I wonder if having a return statement inside your for loop isn't also causing problems; from the linked example, it looks like their loop was in the prior statement.
I think what it's doing is building a collection of rows based on table having columns with the same names as the headers. What are the fields in your table?
name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
This is going to give you back essentially a collection of rows from table, as if you did a SELECT query on it.
Something like
name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
header = ['name']
rows = []
for item in name:
    rows.append(item.name)
return {
    'header': header,
    'rows': rows
}
Figured it out. I kept getting Error: sequence expected, so I looked at the output and decided to try putting each result inside another list.
@view_config(route_name='csv', renderer='csv')
def csv(self):
    d = datetime.now()
    query = DBSession.query(table, othertable).join(othertable).join(thirdtable).filter(
        thirdtable.sid == 9701)
    header = ['First Name', 'Last Name']
    rows = []
    filename = "csvreport" + d.strftime(" %m/%d").replace(' 0', '')
    for i in query:
        items = [i.table.first_name, i.table.last_name,
                 i.othertable.login_time.strftime("%m/%d/%Y")]
        rows.append(items)
    return {
        'header': header,
        'rows': rows,
        'filename': filename
    }
This accomplishes three things: it fills out the header, fills the rows, and passes through a filename.
The renderer should look like this:
class CSVRenderer(object):
    def __init__(self, info):
        pass

    def __call__(self, value, system):
        fout = StringIO.StringIO()
        writer = csv.writer(fout, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        writer.writerow(value['header'])
        writer.writerows(value['rows'])
        resp = system['request'].response
        resp.content_type = 'text/csv'
        resp.content_disposition = 'attachment;filename=' + value['filename'] + '.csv'
        return fout.getvalue()
This way you can use the same CSV renderer anywhere else and pass through your own filename. It's also the only way I could figure out how to get the data from one column in the database to iterate through one column in the renderer. It feels a bit hacky, but it works, and works well.
