My org is using Zendesk for work orders. To do this, we have created custom fields to manage statuses and various other information. I want to be able to export this data for reporting purposes, to see what is completed, what is in progress, etc., but the 10-column limitation in Zendesk is an issue. Can I use the API to export these work order tickets, with a column for each custom field, and get them into a CSV?
Can you be more specific about which Zendesk product you are using? They have lots of different products (Zendesk Support, Zendesk Talk, Zendesk Chat, etc.).
Zendesk Views currently have a 10-column limitation. However, Zendesk provides multiple API endpoints for exporting data from its products.
You can do either of the following:
Export with the time-based incremental export API. Documentation link
Export with the query-based Search endpoint. Documentation link. There is also a Zendesk community post, "Support Search Reference", on how to create a search query. Do take a look at that!
IMO you probably want to go with option 2; a minimal sketch is below.
There's also an Advanced Search marketplace app which is pretty handy and easy to use!
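For illustration, here is a minimal sketch of option 2, assuming plain requests and API-token auth. The subdomain, credentials, work_order tag, and custom field IDs are placeholders you would replace with your own.

```python
import csv

import requests

SUBDOMAIN = "yourcompany"  # hypothetical Zendesk subdomain
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")  # token auth pair
CUSTOM_FIELD_IDS = [360001234567, 360001234568]  # replace with your field IDs

url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/search.json"
params = {"query": "type:ticket tags:work_order"}  # hypothetical work-order tag

with open("work_orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "status", "subject"] + [str(i) for i in CUSTOM_FIELD_IDS])
    while url:
        resp = requests.get(url, params=params, auth=AUTH)
        resp.raise_for_status()
        data = resp.json()
        for ticket in data["results"]:
            # custom_fields is a list of {"id": ..., "value": ...} dicts
            values = {cf["id"]: cf["value"] for cf in ticket.get("custom_fields", [])}
            writer.writerow(
                [ticket["id"], ticket["status"], ticket["subject"]]
                + [values.get(fid, "") for fid in CUSTOM_FIELD_IDS]
            )
        url = data.get("next_page")  # the Search API paginates via next_page
        params = None  # next_page URLs already carry the query
```

This gives you one row per ticket and one column per custom field, with no Views column limit involved.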
My company is trying out Google's Recommendations AI using BQ exports of Merchant Center and GA data sources. However, we discovered a configuration error in the merchant feed which led to most of the events being unjoined.
I would like to do a new (clean) setup and am looking for the best way to delete the old data. It seems this is only possible via the API?
Secondly, while the UserEventService has a purge function, there doesn't seem to be a similar function for the ProductService.
Is deleting each product one by one the only way to go?
Any pointers and examples (Python) would be greatly appreciated as there seems to be very little documentation about this at this point in time.
As you mentioned, the only way to delete the data is through the API. You can use the Google Cloud client libraries or plain REST requests; however, the library does not have a function to purge all the product data at once.
In this case it will be necessary to delete one product at a time, using the delete_product() function (example).
As a workaround, you can get the IDs of your products with the get_product() function (example), add them to a collection, then iterate over that collection and pass each value to delete_product(). That way you can delete all the product data, but this needs to be reviewed on your side.
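For illustration, here is a rough sketch of that loop, assuming the google-cloud-retail client library. It uses list_products() to collect the product names (rather than calling get_product() per ID), and the project/catalog/branch path is a placeholder.

```python
from google.cloud import retail_v2

client = retail_v2.ProductServiceClient()

# Placeholder resource path -- substitute your project, catalog, and branch.
parent = (
    "projects/YOUR_PROJECT/locations/global/catalogs/default_catalog"
    "/branches/default_branch"
)

# Collect every product under the branch, then delete them one at a time.
for product in client.list_products(parent=parent):
    client.delete_product(name=product.name)
```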
Additionally, I would like to share further information provided by Google, where you can find everything related to the Python library:
Retail API docs
Python Retail library, Retail API GitHub repository
Please keep in mind that Stack Overflow is for specific questions about code, such as errors.
I'm using the Python library to interact with Google BigQuery and create a group of new views. However, those views need to be added to a different shared dataset as authorized views, and I'm not able to find how to do this with a script, given the large number of views involved. Does somebody have an idea?
Thanks!!
The short answer to this is, unfortunately, no. This cannot be done directly as you describe in your question.
As per the official documentation on controlling access to datasets: "Currently, you cannot grant permissions on tables, views, or rows. You can set access controls at the dataset level, and you can restrict access to columns with BigQuery column-level security." Controlling access to views requires you to grant a Cloud IAM role to an entity at the dataset level or higher.
There is, however, a possible workaround that would allow you to achieve your goal.
It is possible to share access to BigQuery views using project-level IAM roles or dataset-level access controls. The following is a very detailed walkthrough of how you could achieve this; it uses only two datasets, but the solution can be expanded to a larger number of datasets:
The subtle art of sharing “views” in BigQuery
Additionally, as you ask about using a Python script: there is no reason the steps described could not be implemented using the Python client library for BigQuery.
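As a sketch of what that could look like, assuming the google-cloud-bigquery client; the project, dataset, and view names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

source_dataset = client.get_dataset("my-project.shared_dataset")  # hypothetical
view_ids = [
    "my-project.views_dataset.view_a",  # hypothetical view IDs
    "my-project.views_dataset.view_b",
]

# Append one access entry per view; the list must be reassigned as a whole.
entries = list(source_dataset.access_entries)
for view_id in view_ids:
    view = client.get_table(view_id)
    entries.append(
        bigquery.AccessEntry(
            role=None,  # authorized views get no role, just a "view" entry
            entity_type="view",
            entity_id=view.reference.to_api_repr(),
        )
    )
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])  # persist the change
```

Since the views live in a list, the same loop scales from a handful of views to hundreds.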
I hope this helps.
We have a requirement to show Zendesk tickets updated with data from a PostgreSQL database. We are using Python as the scripting language and are planning to use the Zenpy API (http://docs.facetoe.com.au/zenpy.html) for this.
The idea is to help the service team gather and see all the information in Zendesk itself. There is additional data in the database which we want to show in the tickets, either as comments or as a table structure with the details from other tickets raised by the same user (we are using the email address of the user for this).
There is no application at our DWH, so most Google results show integrations between Zendesk and some other application, and there is not much about updating tickets from a database via Python or other scripting languages.
So is it possible to pass data from our DWH so that it appears in Zendesk tickets?
Can anyone help or suggest how to achieve/start this?
It is possible to update tickets from anywhere using Python and some coding.
Your problem can be solved in different ways.
The first one is a little simpler:
You make a simple Python app and launch it with cron. The app architecture would be like this:
The main process periodically tracks new tickets in Zendesk using a search request. If a ticket relevant to the database is found (you need some metric to decide whether a ticket is relevant), the main process posts the information from the database via ticket.update, and puts a special tag on the ticket so it knows the ticket was already updated. A sketch is below.
This is easy to write, but if the database data is updated later, the ticket will not be updated with it.
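A minimal sketch of that first option, assuming zenpy and sqlalchemy; the credentials, tag name, table, and SQL are all placeholders:

```python
from sqlalchemy import create_engine, text
from zenpy import Zenpy
from zenpy.lib.api_objects import Comment

zenpy_client = Zenpy(
    subdomain="yourcompany",  # hypothetical subdomain
    email="agent@example.com",
    token="YOUR_API_TOKEN",
)
engine = create_engine("postgresql://user:pass@dwh-host/dwh")  # hypothetical DSN

# Find tickets that have not been enriched yet (marked by a special tag).
for ticket in zenpy_client.search(type="ticket", minus="tags:dwh_synced"):
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT details FROM customer_data WHERE email = :email"),
            {"email": ticket.requester.email},
        ).first()
    if row is None:
        continue
    # Post the database details as an internal comment, and tag the ticket
    # so the next cron run skips it.
    ticket.comment = Comment(body=f"DWH data: {row.details}", public=False)
    ticket.tags.append("dwh_synced")
    zenpy_client.tickets.update(ticket)
```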
The second option is to make a private app on the Zendesk side, with a backend on your side.
In this case, when a staff member opens a ticket, the app asks the backend to display the current data from the database relevant to that ticket. You will see up-to-date information every time, but you will incur a database request every time a ticket is opened.
To build the first script you will need:
zenpy, sqlalchemy, and 1-2 days of coding.
To build the second option you will need:
zenpy, sqlalchemy, flask, and a front-end interface; a minimal backend sketch follows.
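A tiny sketch of what the backend for the second option could look like, assuming flask and sqlalchemy; the route, table, and connection string are placeholders:

```python
from flask import Flask, jsonify, request
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("postgresql://user:pass@dwh-host/dwh")  # hypothetical DSN


@app.route("/ticket-data")
def ticket_data():
    # The Zendesk sidebar app passes the requester's email as a query param.
    email = request.args.get("email", "")
    with engine.connect() as conn:
        rows = conn.execute(
            text("SELECT details FROM customer_data WHERE email = :email"),
            {"email": email},
        ).fetchall()
    # The app renders whatever JSON we return here next to the ticket.
    return jsonify([row.details for row in rows])
```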
I'm currently extracting data from DBpedia articles using SPARQLWrapper for Python, but I can't seem to find how to extract the number of watchers (and other statistical information) for a given article.
Is there an easy way to achieve this? I don't mind if it's through DBpedia, or directly through Wikipedia (using wget, for example).
Thanks for any advice.
Retrieving the number of watchers for an arbitrary article is deliberately prohibited, as it is considered a security leak if everyone could find unwatched pages. For example, only privileged users have access to Special:UnwatchedPages. There is a Toolserver tool (which has access to the DB) showing the number of watchers, but for the same reason it is restricted to pages with more than 30 watchers, at least for unauthenticated users.
The MediaWiki query API mostly exposes content and status information about articles, though you can also query and evaluate the public logs or revision histories to get statistical data about (public) user actions; a small example follows. For more stats about the Wikimedia sites, you may have a look at Meta:Statistics, where various data sources (mostly http://stats.wikimedia.org/) and visualisations of them are listed.
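For example, here is a small sketch that pulls public revision statistics for one article through the MediaWiki API; the article title is a placeholder:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Python (programming language)",  # placeholder article
    "rvlimit": "max",  # up to 500 revisions per request for anonymous users
    "rvprop": "timestamp|user",
    "format": "json",
}

resp = requests.get(API, params=params)
resp.raise_for_status()
pages = resp.json()["query"]["pages"]
for page in pages.values():
    revisions = page.get("revisions", [])
    editors = {rev.get("user") for rev in revisions}
    print(f"{page['title']}: {len(revisions)} recent revisions "
          f"by {len(editors)} distinct editors")
```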
We are looking to add a news/articles section to an existing site, which will be powered by aggregating content via RSS feeds. The requirements are:
Be able to aggregate lots of feeds. Initially we will start with a small number, and eventually we may be aggregating a few hundred of them.
We don't want to display the whole post on our site. We will display a summary or short description, and when the user clicks "read more", they will be taken to the original post on the external site.
We would like to grab the image(s) related to a post and display them as a small thumbnail with the post on our site.
Create an automated tag cloud out of all the aggregated content.
Categorize aggregated content using a category/sub-category structure.
The aggregation piece should perform well.
Our web app is built using Django, so I am looking into selecting one of the following packages. Based on our requirements, which package would you recommend?
django-planet
django-news
planetplanet
feedjack
If you have a good idea of what you want, why not just try them all? If you have pretty strict requirements, write it yourself: roll your own aggregator with feedparser. A minimal sketch is below.
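As a starting point, here is a minimal sketch of such an aggregator with feedparser, covering the summary and thumbnail requirements; the feed URL is a placeholder:

```python
import feedparser

feed = feedparser.parse("https://example.com/news/rss.xml")  # hypothetical feed

for entry in feed.entries:
    # media:thumbnail and enclosures are the usual places feeds put images
    thumbnail = None
    if "media_thumbnail" in entry:
        thumbnail = entry.media_thumbnail[0].get("url")
    elif entry.get("enclosures"):
        thumbnail = entry.enclosures[0].get("href")

    print(entry.title)
    print(entry.link)                      # "read more" target for your site
    print(entry.get("summary", "")[:200])  # short description only
    print(thumbnail)
```

From there you would store the entries in Django models, run the fetch loop on a schedule, and build the tag cloud and category structure on top of the stored data.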