Is it possible to make changes to an existing page in Trac via a plugin?
(I am not talking about the wiki, but the ticket system).
I am trying to make a plugin that uses the View Tickets -> Custom Query view and gets the tickets from the resulting table of tickets. The goal is to modify these tickets via a predefined Python script and then optionally print them.
Is this possible via the Trac API, or would one have to make a whole new page and write the whole query functionality from scratch to get the tickets from the database?
I feel that this is not very clearly documented by Trac, so I hope there are some people with experience in plugin development for Trac and/or Agilo for Trac.
The first thing that comes to mind is that ITicketManipulator is not called in batch modify events. You might be able to implement a solution using IRequestFilter. I'd need more information about how you plan to modify the tickets in order to give better advice.
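As a rough illustration (not from the Trac documentation), an IRequestFilter component can post-process the data that the custom query view is about to render. The 'tickets' key and the process_ticket_row helper below are assumptions you would need to verify against your Trac version:

from trac.core import Component, implements
from trac.web.api import IRequestFilter

def process_ticket_row(row):
    # Placeholder for your predefined Python script's logic.
    pass

class CustomQueryTicketFilter(Component):
    implements(IRequestFilter)

    def pre_process_request(self, req, handler):
        # Leave request handling untouched; we only inspect the result.
        return handler

    def post_process_request(self, req, template, data, content_type):
        # The custom query view renders query.html; the result rows are
        # assumed to be available under data['tickets'].
        if template == 'query.html' and data:
            for row in data.get('tickets', []):
                process_ticket_row(row)
        return template, data, content_type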
We have a requirement to show Zendesk tickets updated with data from a PostgreSQL database. We are using Python as the scripting language and are planning to use this API "http://docs.facetoe.com.au/zenpy.html" for this.
The idea is to help the service team gather and see all the information in Zendesk itself. There is additional data in the database that we want to show in the tickets, either as comments or as a table with details from other tickets raised by the same user (we use the user's email address for this).
There is no application at our DWH, so most Google references show integrations between Zendesk and some other application, and there is not much about updating tickets from a database via Python or another scripting language.
So, is it possible to pass data from our DWH so that it appears in Zendesk tickets?
Can anyone help or suggest how to achieve or start on this?
It is possible to update tickets from anywhere using Python and some coding.
Your problem can be solved in different ways.
The first one is a little simpler:
You make a simple Python app and launch it with cron. The app architecture would be like this:
The main process periodically checks for new tickets in Zendesk using a search request. If a ticket relevant to the database is found (you need some criteria to decide whether a ticket is relevant), the main process posts an update via ticket.update with the information from the database, and adds a special tag to the ticket so you know it has already been updated.
This is easy to write, but if the data in the database changes later, the ticket will not be updated.
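A minimal sketch of that first option, assuming zenpy and SQLAlchemy; the subdomain, credentials, connection string, orders table and the dwh_synced tag are placeholders, not part of any real setup:

from sqlalchemy import create_engine, text
from zenpy import Zenpy
from zenpy.lib.api_objects import Comment

engine = create_engine('postgresql://user:password@dwh-host/dwh')        # placeholder DSN
zendesk = Zenpy(subdomain='yourcompany', email='agent@example.com', token='...')

for ticket in zendesk.search(type='ticket', status='open'):
    if 'dwh_synced' in ticket.tags:
        continue                                   # already enriched earlier
    email = ticket.requester.email                 # requester is resolved lazily by zenpy
    with engine.connect() as conn:
        rows = conn.execute(
            text('SELECT order_id, status FROM orders WHERE customer_email = :email'),  # hypothetical table
            {'email': email},
        ).fetchall()
    if not rows:
        continue
    body = '\n'.join('Order {}: {}'.format(r.order_id, r.status) for r in rows)
    ticket.comment = Comment(body=body, public=False)   # add the DWH data as an internal note
    ticket.tags.append('dwh_synced')                    # special tag marking the ticket as updated
    zendesk.tickets.update(ticket)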
The second option is to make a private app on the Zendesk side with a backend on your side.
In this case, when a staff member opens a ticket, the app asks the backend to display the current data from the database that is relevant to that ticket. You will see up-to-date information every time, but you will get a database request every time a ticket is opened.
To build the first script you will need:
zenpy, sqlalchemy and 1-2 days of coding.
To build the second option you will need:
zenpy, sqlalchemy, flask and a front-end interface.
We have been using Apache Superset, and it's a great tool.
The only frustration comes from the fact that there are a few dashboards we want to share with users outside the company.
I believe right now the way to do it is to start from the Gamma user and then create a read-only role (correct me if I'm wrong).
There are a few downsides to this:
we need to create a view per user on each table to make sure they do not see records they are not supposed to see.
access is granted by datasource, so they will be able to see any dashboard that uses the same datasource, which can be a problem sometimes.
all of this authentication is a lot of work to maintain.
I'm wondering if there is any way (or even hack) to simply share the graphs and tables as a dashboard, without any database access granted.
Like a frozen snapshot of a dashboard,
like the way Redash does it:
https://redash.io/help/user-guide/dashboards/sharing-dashboards
What you are looking for can be achieved through a combination of the public user and appending ?standalone=true to the dashboard URL.
You also don't need the entire list of Gamma user permissions; the most important ones are can explore on superset, explore json on superset, datasource access and csrf token. This renders the dashboards without the Superset menu and should make everything read-only.
We can achieve this by creating a custom role:
1. Remove all the menu items.
2. Disable the dashboard edit button.
3. Grant access to specific tables only.
This way a user cannot access any other dashboards or charts.
Public dashboards
This is not meant for production. It’s for experiments or while doing a proof of concept.
# superset_config.py
PUBLIC_ROLE_LIKE_GAMMA = True
or, on newer Superset versions:
PUBLIC_ROLE_LIKE = "Gamma"
After this, we need to re-run the init step (if it has already been run):
docker-compose exec superset superset-init
Dashboards & charts can be embedded without superset header (Nav bar etc) by adding standalone=true parameter to the url, like this :
http://localhost:9000/superset/dashboard/world_health/?standalone=true
We need to grant datasource permissions to the Public role for the data to be visible.
Please see: https://sairamkrish.medium.com/apache-superset-custom-authentication-and-integrate-with-other-micro-services-8217956273c1
Superset is great; I'm glad people are talking about it. Since the days when it was Airbnb's Caravel, it has come a long way.
There is no 'official' solution for what you're looking for but there is a way to effectively get the same result. You said you wouldn't mind a 'hack' so...
Creating a table or a data source and exposing it to the 'public' group should do what you're looking to accomplish.
We are building an extensive API link with the Exact Online OData API. The problem we are having is that many objects can't be updated or deleted, for instance BankEntryLines and GeneralJournalEntryLines.
We have worked around this by creating new EntryLines on each update or delete, but in some cases this creates a lot of confusion.
Can the API be changed, or can I get extra authorization to be able to update or delete these objects, just as is possible in the GUI?
As the Exact Online REST API doesn't support modifying quite a few objects, there is no way to achieve what you want using the REST API. If the Exact Online XML API doesn't support updating either, there is only one solution left.
That solution is forbidden by Exact, and it could cost you your application developer status: you can make those changes using HTTP POSTs against the website itself. If you can capture the calls that are made through the screens, you can mimic their behaviour and, by replaying them, modify what you need.
If you want to build a coupling to Exact Online and you are just starting to develop it, I suggest you take a look at Invantive Data Hub, which allows updating Exact Online using SQL syntax. (Full disclosure: I work for that company.)
I have a large number of automatically generated HTML files that I would like to push to my Plone website with a script. I currently generate the files, log into Plone, click edit on each individual page and copy and paste the HTML into the editor. I'd like to automate this. It would be nice to retain the Plone versioning, have an auto-generated comment for the edit, and have it come from a specific user.
I've read about and tried WebDAV with little luck getting it to work consistently, and I know there is a way to connect to Plone via FTP, but I haven't tried it. I'm not sure if these are the methods I need.
My Google searches aren't leading me to anything useful. Any ideas on where to start looking for a solution? Or any tips on implementing it?
You can script anything in Plone via the following methods:
Through-the-web via API calls (e.g. XML-RPC, wsapi, etc.)
The bin/instance run script provided by plone.recipe.zope2instance (See charm for an example of this).
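For example, with the bin/instance run approach a small script can set the page body, record a version with a comment and attribute the change to a specific user. This is only a sketch, assuming an Archetypes-based Plone site with the id Plone; the paths, ids and credentials are placeholders:

# update_page.py -- run with: bin/instance run update_page.py
import transaction
from AccessControl.SecurityManagement import newSecurityManager
from Testing.makerequest import makerequest

app = makerequest(app)                      # 'app' is provided by "bin/instance run"
portal = app.Plone                          # id of your Plone site (assumption)

# Act as a specific user so the edit is attributed to them (id is a placeholder).
user = app.acl_users.getUserById('admin')
newSecurityManager(None, user)

page = portal['some-folder']['my-page']     # hypothetical path to the target page
with open('generated/my-page.html') as f:
    page.setText(f.read(), mimetype='text/html')

# Save a new version with an auto-generated comment and reindex the page.
portal.portal_repository.save(obj=page, comment='Automated HTML import')
page.reindexObject()
transaction.commit()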
You can also use a migration framework like:
collective.transmogrifier
which allows you to write migration code, and trigger it via GenericSetup or Browser view. Additionally, there are applications written on top of Transmogrifier aimed roughly at what you are describing, the most popular of which is:
funnelweb
I would recommend considering using or writing one or more Transmogrifier blueprints to do your import, and executing the pipeline with a tool that makes that easy:
mr.migrator
You can find blueprints by searching PyPI for "transmogrify". One popular set of blueprints is:
quintagroup.transmogrifier
One of the main attractions to the Transmogrifier approach, aside from getting the job done, is the ability to share useful blueprints with others.
I think transmogrifier is the best tool for this job, but it will definitely be a programming task no matter how you do it. It is used for many migration jobs, such as migrating from Drupal.
There's an add-on, wsapi4plone.core, that pumazi at WebLion started, which provides web services for Plone portals that you can hook into. You can create, modify and delete content via XML-RPC calls. The only caveat is that it doesn't yet work with Collections (criteria specifically).
project: http://pypi.python.org/pypi/wsapi4plone.core
docs: http://packages.python.org/wsapi4plone.core/
You can also do it programmatically by hooking into the ZODB via Python (zopepy or some other method).
These should get you started:
http://plone.org/documentation/kb/manipulating-plone-objects-programmatically/reading-and-writing-field-values - this should give you an understanding of accessors and mutators (getters and setters); in your case you are more than likely going to be working with obj.Text (getter) and obj.setText (setter).
https://weblion.psu.edu/trac/weblion/wiki/AutomatingObjectCreation - lots of examples (slightly outdated but still relevant)
http://plone.org/documentation/faq/upload-images-files
Try enabling WebDAV or FTP in Plone; then you can access Plone with WebDAV or FTP clients and push the HTML files. Plone (Zope) will recognise the HTML files as Pages.
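If WebDAV is enabled, the upload itself can be as simple as an HTTP PUT. Here is a rough sketch using the requests library; the URL, port and credentials are placeholders, and the WebDAV port depends on your zope.conf:

import requests

with open('generated/my-page.html', 'rb') as f:
    response = requests.put(
        'http://localhost:8080/Plone/some-folder/my-page.html',   # placeholder URL
        data=f.read(),
        headers={'Content-Type': 'text/html'},
        auth=('admin', 'secret'),                                  # placeholder credentials
    )
response.raise_for_status()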
As a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki pages. I have three approaches in mind and would like to know your suggestions.
Flat files
I considered a flat-file approach with a version control system like git or Mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or, for that matter, others) could have an offline copy of the wiki.
On the other hand, as far as I know, I cannot use Django models with flat files. If I wanted to add fields to a wiki page, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the file. Besides, I don't know if it is a good idea to have all the wiki pages in one repository; I imagine it is more natural to have something like a repository per wiki page or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because some web hosts don't allow creating files (I'm thinking, for example, of Google App Engine).
Storing in a database
By storing the wiki pages in the database I can utilize Django models and associate arbitrary fields with each page. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution per se. I searched for Django extensions to help me and found django-reversion. However, I do not fully understand whether it fits my needs. Does it track model changes, as in changes to the Django model file, or does it track the content of the model instances (which would fit my needs)? Also, I do not see whether django-reversion would help me with edit conflicts.
Storing a VCS repository in a database field
This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages; that is, I would have VCS features, but the wiki pages would be saved in the database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki page's source together with a git/Mercurial repository in a database field. Yet, I somehow doubt database fields work like that.
So, I'm open to any other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test
In none of your choices have you considered whether you wish to be able to search your wiki. If this is a consideration, having the 'live' copy of each page in a database with full text search would be hugely beneficial. For this reason, I would personally go with storing the pages in a database every time - otherwise you'll have to create your own index somewhere.
As far as version logging goes, you only need to store the live copy in an indexable format. You could automatically create a history item within your 'page' model whenever a changed page is written back to the database. You can cut down on the storage overhead of earlier page revisions by compressing the data, should this become necessary.
If you're expecting a massive amount of change logging, you might want to read this answer here:
How does one store history of edits effectively?
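A minimal sketch of that idea (model and field names are illustrative, not from the answer above): keep the live copy in a Page model and write a history row whenever the body changes:

from django.db import models

class Page(models.Model):
    slug = models.SlugField(unique=True)
    body = models.TextField()

    def save(self, *args, **kwargs):
        # Before overwriting, snapshot the previous body as a history item.
        if self.pk:
            old_body = Page.objects.filter(pk=self.pk).values_list('body', flat=True).first()
            if old_body is not None and old_body != self.body:
                PageRevision.objects.create(page=self, body=old_body)
        super().save(*args, **kwargs)

class PageRevision(models.Model):
    page = models.ForeignKey(Page, on_delete=models.CASCADE, related_name='revisions')
    body = models.TextField()
    created = models.DateTimeField(auto_now_add=True)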
Creating a wiki is fun and rewarding, but there are a lot of prebuilt wiki software packages already. I suggest Wikipedia's List of wiki software. In particular, MoinMoin and Trac are good. Finally, John Sutherland has made a wiki using Django.