I'm making an application to process traffic data using Python and PostgreSQL. I need to create a GUI for users to select options such as dates, times, settings, etc. I've done this once before using Qt Designer. This time, I need to create a map interface which allows users to select various points or segments on a map for processing. I would plot the locations of detectors, for example, and the user would select the end points of the segment they'd like to process, and I would need to determine the points in between as well. Potentially, I'd like to output some of the analysis to the map, but I'm mostly concerned about this base functionality, as I've never done this before. I'm looking for a straightforward method with a low learning curve. I've looked into QML and creating an HTML/JavaScript based application, but I'm wondering if there are other options, or if one of these is the best? I would prefer not to mess with HTML/JavaScript too much unless it's clearly the best solution for my needs. I really appreciate any feedback.
I've got masses of network information held in a SQLite database. I want to draw a diagram based on this information in the style of a network diagram.
I want it to be interactive, in the sense that at the highest level only network-range communication can be seen, and as you move deeper into the diagram you begin to see individual nodes (switches, routers, firewalls, hosts, servers, etc.) all linked together.
I'd like this process to be as smooth as possible, allowing you to zoom in with the scroll wheel on a location of the diagram, with the diagram expanding as you do so, and then letting you click and drag around the map. However, I'd like to get the basics down first, so I'm thinking I should start by drawing the diagram in HTML with hyperlinks for nodes, allowing the user to move around and deeper into the diagram using a browser.
It is also crucial that the user be able to capture their view as a still image, which I'd guess would be far more easily done in HTML.
To get to the point: I'm asking where I might start with this. I've looked at PyQt, Graphviz, outputting to HTML, etc. I'm just trying to decide what to use and generally how to go about doing this. I'm reasonably good with Python, but I am open to suggestions of other languages.
If you guys think Python can do this, which Python? 2.7 or 3? I've been considering making a move to 3 for a while, is it time?
Thanks in advance!
The important thing here is that you want something dynamic and that can be captured by the user.
My answer will be fairly similar to this question.
Export your graph to a standard format such as GEXF and use a JavaScript graph drawing library to make your graph interactive, such as sigma.js or VivaGraphJS.
The big advantage is that you can script your graph to respond to user events such as zooming, save it as a picture, display information about nodes and edges dynamically, etc.
To summarize:
First, build your graph with a Python graph library such as NetworkX, then export it with its properties as JSON or GEXF.
Load the graph with a JavaScript graph library, using the examples as baselines: the list of examples using sigma.js, or the tutorial for VivaGraphJS.
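As a concrete illustration of the export step, here is a minimal sketch of the node-link JSON shape that sigma.js's JSON parser consumes. The node names and coordinates are invented for the example; if you build the graph with NetworkX you could instead export it directly with nx.write_gexf(G, "graph.gexf").

```python
import json

# Hypothetical network nodes; sigma.js expects "nodes" and "edges" arrays
# where each node has an id, label, x/y coordinates, and a size.
nodes = [
    {"id": "n0", "label": "router-0", "x": 0.0, "y": 0.0, "size": 1},
    {"id": "n1", "label": "switch-1", "x": 1.0, "y": 0.5, "size": 1},
    {"id": "n2", "label": "host-2",   "x": 2.0, "y": 1.0, "size": 1},
]
edges = [
    {"id": "e0", "source": "n0", "target": "n1"},
    {"id": "e1", "source": "n1", "target": "n2"},
]

graph = {"nodes": nodes, "edges": edges}
payload = json.dumps(graph, indent=2)
# Save or serve `payload`, then load it in the browser with sigma's JSON parser.
```

From there the interactivity (zoom, click, drag, image capture) is all handled on the JavaScript side.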
Concerning the Python version, it really depends on the other libraries you may use. For scientific work I wouldn't recommend switching to Py3k yet, but for anything else you're good to go.
In the past I've faced something similar to your problem. We have tons of routers documented in a MySQL database; we actually use Racktables, and that tool stores all the information that way.
At one point we needed to plot networking topologies. If you want, please have a look at this:
https://notedisabbia.wordpress.com/2016/06/17/first-blog-post/
https://github.com/RackTables/racktables-contribs/tree/master/python-graph-topology
The first link is a blog post I've written to explain what my Python program (second link) does in terms of connecting to Racktables, gathering information and plotting network diagrams.
Hope this helps.
Cheers,
Lucas
I would recommend looking at the d3graph library. It does not allow you to zoom with the scroll wheel, but it does have other features, like breaking edges.
More information can be found in this blog.
I apologize in advance for my noob-ness; I'm just getting into programming.
Can you set me down the right path for a GUI framework? Looking at this list of GUI frameworks is pretty daunting, considering my general lack of expertise.
Summary:
I'm trying to write a GUI in python that actively updates a second monitor with images that are mathematically generated using numpy. The GUI will have parameters that can be adjusted in real time that change the image (an interference pattern of light) accordingly.
Important criteria:
parameters adjusted on screen change the interference pattern in real time
compatibility with numpy, matplotlib (or easy graphing)
Secondary criteria:
a framework that is useful/flexible for a beginner who's interested in industry programming
dual monitor support (if push comes to shove I can just update the image in a window and move the window to the second monitor)
as a side project I'd like to write a stock trading interface (with graphs, commands, etc... maybe with PyAlgoTrade?), so, once again, flexibility would be nice
Right now I'm leaning towards wxPython, since I've heard that it plays well with matplotlib (for stock trading GUIs). Before I head down this path (and likely overwhelm myself with new documentation), I'd like to make sure I'm not heading down an unnecessarily winding road.
Any useful links are much appreciated! Your 'keyword relevance' knowledge is likely much better than mine.
Thank you!
Tkinter GUI for Python
To get a quick idea of how matplotlib can be embedded directly into a Tkinter-based GUI, including a fully operational Model-Visual-Controller tripod co-integrated with a Tkinter real-time control loop, kindly go through this recipe: https://stackoverflow.com/a/25769600/3666197
Both the important and the secondary criteria are met.
Fast updates
Numpy is a lingua franca here, so stating it as a requirement doesn't narrow anything down.
Good real-time UI / event-handling design is cardinal. A poor MVC/control loop can kill an otherwise smart system (as seen in recent updates of a professionally distributed trading system, where UI responsiveness fell far below an acceptable interaction latency and sometimes even froze UI interactions for several tens of seconds).
There are techniques for constructing matplotlib objects (with pre-baked data structures) that let real-time updates propagate faster onto the GUI visual layer.
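A minimal sketch of that idea: create the matplotlib image artist once, then let a Tkinter Scale only swap the pixel array and request a cheap redraw. The interference formula and all parameter names here are simplified and invented for illustration; call main() to actually launch the window.

```python
import numpy as np

def make_pattern(wavelength, spacing, size=200):
    """Toy two-slit interference intensity on a size x size grid.
    Parameter names are illustrative, not from any optics library."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r1 = np.hypot(x - spacing / 2, y)   # distance from slit 1
    r2 = np.hypot(x + spacing / 2, y)   # distance from slit 2
    phase = 2 * np.pi * (r1 - r2) / wavelength
    return np.cos(phase / 2) ** 2       # intensity in [0, 1]

def main():
    """Launch the GUI (not called here, so this file stays importable)."""
    import tkinter as tk
    from matplotlib.figure import Figure
    from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

    root = tk.Tk()
    fig = Figure(figsize=(5, 5))
    ax = fig.add_subplot(111)
    # Pre-bake the AxesImage once; later updates only replace its array.
    img = ax.imshow(make_pattern(0.05, 0.5), cmap="gray")
    canvas = FigureCanvasTkAgg(fig, master=root)
    canvas.get_tk_widget().pack(fill="both", expand=True)

    def on_slide(value):
        img.set_data(make_pattern(0.05, float(value)))  # no new artists
        canvas.draw_idle()                              # cheap redraw

    tk.Scale(root, from_=0.1, to=1.0, resolution=0.01,
             orient="horizontal", command=on_slide).pack(fill="x")
    root.mainloop()
```

The point is the update path: set_data plus draw_idle touches existing objects instead of rebuilding the figure, which is what keeps the loop responsive.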
After hunting through Stack Overflow, I still can't find quite what I'm looking for.
Question:
I am looking to use (or create if this doesn't exist) an open source node graph, as a node-based approach to using Python code.
Clarification
Let me explain the concept of what I mean by "Node graph".
A node graph is a place to create "nodes", represented by little boxes with connections. The user can select them, move them around, and make and break connections based on appropriate data types.
Example
For example (and I think to myself, "surely this has been done before"), I'd like to write nodes in Python script that load data, process that data in some way, etc.
For example, I want to write a python node that loads time based data from a file OR a real time source, then another node that processes it, then another node that maybe visualizes it, or writes out a custom format to file.
Background: I come from a background in 3D animation, where we use the software packages Maya and Nuke to process data in this way using a node-based workflow; however, those architectures are very high-level and specialized. What I want to do is reduce this idea of a workflow to a mid-to-low level. All I want is a node graph with custom Python nodes.
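For what it's worth, the dataflow core of this idea (minus the interactive GUI) can be sketched in a few lines of plain Python; a node-graph library essentially adds the boxes and wires on top of an evaluation scheme like this. All names here are invented for the example.

```python
class Node:
    """One box in the graph: a python function plus its upstream inputs."""
    def __init__(self, name, func, inputs=()):
        self.name = name            # display label
        self.func = func            # the python code this node runs
        self.inputs = list(inputs)  # upstream Node objects (the "wires")

    def evaluate(self, cache=None):
        cache = {} if cache is None else cache
        if self.name not in cache:  # evaluate each node at most once
            args = [n.evaluate(cache) for n in self.inputs]
            cache[self.name] = self.func(*args)
        return cache[self.name]

# load -> process -> write out, as in the question
load = Node("load", lambda: [3, 1, 2])
process = Node("sort", lambda xs: sorted(xs), inputs=[load])
output = Node("to_csv", lambda xs: ",".join(map(str, xs)), inputs=[process])

print(output.evaluate())  # -> 1,2,3
```

A real node-graph tool adds type checking on connections, cycle detection, and the interactive canvas, but the pull-based evaluation above is the heart of it.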
So far I have tried:
Nodebox 3 - high-level software built on Java and compiled code. Seems to be too high-level for me (with a fully functional GUI) and too Java-based.
Nodebox Windows - a nifty OpenGL-derived library, but too low-level; I would have to build an interactive node graph from scratch.
Coral - an interesting idea, but too specialized in 3D, I think, for the generic data processing I'd like to do. Perhaps I could modify it, but I'm afraid the dying community might hold me back.
I'd appreciate any other suggestions you guys may have.
** EDIT **
StackOverflow won't let me post a screenshot example.. annoying.
A visual programming language- this general idea is what I'm looking for! But what I would prefer is the ability to write functions in Python and then graph them visually like those languages.
Try PyQtGraph
It has a node graph among a lot of other visualization widgets. It's called Flowchart and lets you create and connect nodes; each node contains a Python function to modify its input data.
As the name implies it relies on PyQt/PySide.
Background: I have a desktop app that I've written in Python 2.7.x that uses wxPython for the UI and requests with xml.etree.ElementTree to retrieve open data from a RESTful service and present the data in meaningful ways to the user.
I am currently re-writing my app using PySide. I am fairly certain that I can present my data-candy in HTML5 and I would eventually like to provide a web-app using web2py and JavaScript.
The PySide and web2py versions both need to support:
acquiring a lock on a Throttle object that I've made to handle the fair-usage policy of the service.
presenting the analyzed data in tables with links or buttons for retrieving or downloading more related data.
presenting lists of related data and highlighting groups of items on mouse-over of any member of the group.
presenting text documents and providing automatic searching / highlighting of keywords / part-of-speech tagging using NLTK.
providing basic graphs and charts of various stats on the data.
Question: Given what I've told you about my app, and assuming web2py and HTML5 can meet my user-interface / presentation needs, what steps can I take / widgets should I use in making my PySide desktop app to maximize re-use when porting over to web2py? What should I be sure to avoid when writing the desktop version?
You need to separate your data and data processing from the user interface. Qt (and thus PySide) has a very strong focus on this Model-View approach (see e.g. http://qt-project.org/doc/qt-4.8/modelview.html), and provides models to organize your data and views to present it. Within Qt, this approach makes it easy to use multiple views on the same data sets, without having to worry about how to get the data into each view.
Admittedly, the Qt models take some time to get used to, but the aforementioned tutorial should give you some pointers and references to get you started. In your case, I would go for the following approach:
Find/extend a suitable Qt model to manage your data
Use this data with standard/custom views in your PySide application
Develop web2py-based view to present the data in your webapp
I'm not familiar with web2py, so I can't assess how hard/easy this last step would be. However, I can recommend to invest some time (if you have it) in getting to know the Qt Model-View framework, as it can save you huge amounts of time in the future (at least, in my experience).
In my opinion, you may be over-thinking this.
Basically, you will have two methods of presenting data to the end user:
1) Via a GUI
2) Via your HTML5-candy
Surely the limiting factor will be whatever limitations (if any), web2py/HTML5 place on presenting the data?
If I were writing such an app, which used both PySide and web2py to present the same data but using their respective methods (gui, web), I'd probably want to abstract the data to be presented in such a way that you can feed the same stream into either the GUI function(s) or the Web function(s), with each taking that in and using their respective methods to present the data.
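A minimal sketch of that abstraction, with invented names: one presentation-agnostic data layer feeding both a Qt-style tabular presenter and an HTML one, so the same stream can drive either front end.

```python
class ResultSet:
    """Presentation-agnostic data layer (would wrap the REST responses)."""
    def __init__(self, rows):
        self.rows = rows  # e.g. a list of dicts parsed from the service

    def columns(self):
        return sorted(self.rows[0]) if self.rows else []

def to_qt_rows(data):
    """Shape the data for a Qt table model/view: a list of row tuples."""
    cols = data.columns()
    return [tuple(row[c] for c in cols) for row in data.rows]

def to_html_table(data):
    """Shape the same data for the web2py/HTML5 side."""
    cols = data.columns()
    head = "".join(f"<th>{c}</th>" for c in cols)
    body = "".join(
        "<tr>" + "".join(f"<td>{row[c]}</td>" for c in cols) + "</tr>"
        for row in data.rows)
    return f"<table><tr>{head}</tr>{body}</table>"

data = ResultSet([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}])
```

Everything above the two presenter functions is shared; only the last, thin layer differs between the desktop and web versions.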
Another alternative I can think of, is embedding a web view in your GUI presentation. Perhaps you can then dispense with a lot of concern of what widgets to use/avoid, merely by using your html5-candy in the embedded web view?
Looking for a non-cloud based open source app for doing data transformation; though for a killer (and I mean killer) app just built for data transformations, I might be willing to spend up to $1000.
I've looked at Perl, Kapow Katalyst, Pentaho Kettle, and more.
Perl, Python, and Ruby are clearly languages, but I've been unable to find any frameworks/DSLs built just for processing data; meaning they're really not great development environments for this: there are no built-in GUIs for building regexes or input/output (CSV, XML, JDBC, REST, etc.), and no debugger for testing rows and rows of data. They're not bad either, just not what I'm looking for, which is a GUI built for complex data transformations. That said, I'd love it if the GUI/app file were in a scripting language, and NOT just stored in some non-human-readable XML/ASCII file.
Kapow Katalyst is made for accessing data via HTTP (HTML, CSS, RSS, JavaScript, etc.). It's got a nice GUI for transforming unstructured text, but that's not its core value offering, and it is way, way too expensive. It does an okay job of traversing document namespace paths; I'm guessing it's just XPath on the back end, since the syntax appears to be the same.
Pentaho Kettle has a nice GUI for input/output of most common data stores, and its own take on handling data processing, which is okay and has only a small learning curve. Kettle's debugger is okay in that the data is easy to see, but errors and exceptions are not threaded with the output, and there's no way to really debug an issue; you can't reload the output/error/exception, only view the system feedback. All that said, Kettle data transformation is _______ well, let's just say it left me feeling like I must be missing something, because I was completely puzzled by "if it's not possible, just write the transformation in JavaScript"; umm, what?
So, any suggestions? Do realize that I haven't really spec'd out any transformations, but figure if you really use a product for data munging, I'd like to know about it; even excel, I guess.
In general, though, I'm currently looking for a product able to handle 1,000-100,000 rows with 10-100 columns. It'd be super cool if it could profile data sets, which is a feature Kettle sort of has, but not done super well. I'd also like built-in unit testing, meaning I'm able to build out control sets of data and run any changes I make against the control set. Then I'd like to be able to selectively filter out rows and columns as I build out the transformation, without altering the build; for example, I run a data set through the transformation, filter the results, and on the next run those sets are automatically blocked at the first "logical" occurrence, which in turn means less data to look at and a reduced runtime per enhanced iteration. What would be crazy nice is if, as I filter out rows/columns, the app tracked those (and the output was filtered accordingly) and unit-tested/highlighted any changes. If I made a change that would affect the application's logs and its ability to track the unit tests by "breaking a branch", it'd give me a warning, let me dump the stored data for that branch, and/or track the primary keys for differences in the next generation of output, or even attempt to match them using fuzzy logic. And yes, I know this is a pipe dream, but hey, I figured I'd ask, just in case there's something out there I've just never seen.
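The unit-testing part at least is easy to approximate in plain code even without a dedicated tool: pin a small, hand-checked control set next to the transformation and rerun it after every change. A sketch with invented names:

```python
def transform(rows):
    """Example transformation: drop rows with a missing 'price', add 'total'."""
    out = []
    for row in rows:
        if row.get("price") is None:
            continue  # filtered out, like the row-filtering step above
        new = dict(row)
        new["total"] = row["price"] * row["qty"]
        out.append(new)
    return out

# Control set: tiny, hand-verified input and the exact output it must produce.
CONTROL_IN = [
    {"sku": "A", "price": 2.0, "qty": 3},
    {"sku": "B", "price": None, "qty": 1},
]
CONTROL_OUT = [{"sku": "A", "price": 2.0, "qty": 3, "total": 6.0}]

# Fails loudly the moment a change to transform() breaks the control set.
assert transform(CONTROL_IN) == CONTROL_OUT
```

It's nowhere near the branch-tracking dream described above, but it gives you the regression safety net for free in any of the scripting languages mentioned.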
Feel free to comment, I'd be happy to answer any questions, or offer additional info.
Google Refine?
Talend will need more than 5 minutes of your time, perhaps closer to an hour, to begin wiring up basic transformations and to fulfill your requirement of keeping transformations under version control. You described a pipeline process that can be done easily in Talend once you know how: multiple inputs and outputs in a project, with the same raw data going through various transformations and filters until it arrives at the final output you want. Then you can schedule your jobs to repeat the process over similar data. Go back and spend more time with Talend, and you'll succeed in what you need, I'm sure.
I also happen to be one of the committers of Google Refine and also use Talend in my daily work. I actually sometimes model my transformations for Talend first in Google Refine. (Sometimes even using Refine to perform cleanup on borked ETL transforms themselves! LOL ) I can tell you that my experience with Talend played a small part in a few of the features of Google Refine. For instance, both Talend and Google Refine have the concept of an expression editor for your transformations (Talend goes down to Java language for this if need be).
Google Refine will never be an ETL tool, in the sense that we have not designed it to compete in the space where ETL is typically used: large data-warehouse back-end processing and transformations. However, we designed Google Refine to complement existing ETL tools like Talend by allowing easy live previewing to make informed decisions about your transformations and cleanup, and if your data isn't incredibly huge, then you might opt to perform what you need within Refine itself.
I'm not sure exactly what kind of data or exactly what kind of transformations you're trying to do, but if it's primarily mathematical transformation, perhaps you can try FreeMat, Octave, or SciLab. If it's more data-warehouse-style munging, try open source ETL tools like Clover, Talend, JasperETL Community Edition, or Jitterbit.