Good geometry library in python? [closed] - python

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I am looking for a good and well developed library for geometrical manipulations and evaluations in python, like:
evaluate the intersection between two lines in 2D and 3D (if present)
evaluate the point of intersection between a plane and a line, or the line of intersection between two planes
evaluate the minimum distance between a line and a point
find the line normal (orthogonal) to a plane and passing through a given point
rotate, translate, mirror a set of points
find the dihedral angle defined by four points
I have a compendium book for all these operations, and I could implement it but unfortunately I have no time, so I would enjoy a library that does it. Most operations are useful for gaming purposes, so I am sure that some of these functionalities can be found in gaming libraries, but I would prefer not to include functionalities (such as graphics) I don't need.
Any suggestions? Thanks.

Perhaps take a look at SymPy.
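Several of the operations listed in the question map directly onto SymPy's geometry module; a quick sketch (intersections come back as lists of geometric entities, empty when there is no intersection):

```python
from sympy import Line, Line3D, Plane, Point, Point3D

# Intersection of two 2D lines (empty list if they are parallel)
p = Line(Point(0, 0), Point(1, 1)).intersection(Line(Point(0, 1), Point(1, 0)))
print(p)  # [Point2D(1/2, 1/2)]

# Minimum distance between a line and a point
d = Line(Point(0, 0), Point(1, 0)).distance(Point(0, 3))
print(d)  # 3

# Intersection of a plane and a 3D line
plane = Plane(Point3D(0, 0, 0), normal_vector=(0, 0, 1))
hit = plane.intersection(Line3D(Point3D(0, 0, -1), Point3D(0, 0, 1)))
print(hit)  # [Point3D(0, 0, 0)]
```

Everything is exact (rational arithmetic) rather than floating point, which is convenient for geometric predicates but slower than numpy-based alternatives.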

Shapely is a nice python wrapper around the popular GEOS library.
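Note that Shapely is strictly 2D. A minimal sketch of the line-intersection and point-to-line-distance cases from the question:

```python
from shapely.geometry import LineString, Point

a = LineString([(0, 0), (1, 1)])
b = LineString([(0, 1), (1, 0)])

# Intersection of two segments
print(a.intersection(b))        # POINT (0.5 0.5)

# Minimum distance from a point to a segment
print(a.distance(Point(0, 1)))  # sqrt(2)/2 ~= 0.7071
```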

I found pyeuclid to be a great, simple, general-purpose Euclidean math package. Though the library may not cover exactly the operations you mentioned, its infrastructure is good enough to make it easy to write them on your own.

CGAL has Python bindings too.

I really want a good answer to this question, and the ones above left me dissatisfied. However, I just came across pythonocc which looks great, apart from lacking good docs and still having some trouble with installation (not yet pypi compatible). The last update was 4 days ago (June 19th, 2011). It wraps OpenCascade which has a ton of geometry and modeling functionality. From the pythonocc website:
pythonOCC is a 3D CAD/CAE/PLM development framework for the Python programming language. It provides features such as advanced topological and geometrical operations, data exchange (STEP, IGES, STL import/export), 2D and 3D meshing, rigid body simulation, parametric modeling.
[EDIT: I've now downloaded pythonocc and began working through some of the examples]
I believe it can perform all of the tasks mentioned, but I found it to be unintuitive to use. It is created almost entirely from SWIG wrappers, and as a result, introspection of the commands becomes difficult.

geometry-simple has classes Point, Line, Plane, and Movement in ~300 lines, using only numpy; take a look.

You may be interested in the Python module SpaceFuncs from the OpenOpt project, http://openopt.org
SpaceFuncs is a tool for 2D, 3D, and N-dimensional geometric modeling, with support for parametrized calculations, numerical optimization, and solving systems of geometrical equations.

Python Wild Magic is another SWIG-wrapped library. It is a gaming library, but you could edit the SWIG interface file to exclude any undesired graphics functionality from the Python API.


Cyclomatic complexity metric practices for Python [closed]

I have a relatively large Python project that I work on, and we don't have any cyclomatic complexity tools as a part of our automated test and deployment process.
How important are cyclomatic complexity tools in Python? Do you or your project use them and find them effective? I'd like a nice before/after story if anyone has one so we can take a bit of the subjectiveness out of the answers (i.e. before we didn't have a cyclo-comp tool either, and after we introduced it, good thing A happened, bad thing B happened, etc). There are a lot of other general answers to this type of question, but I didn't find one for Python projects in particular.
I'm ultimately trying to decide whether or not it's worth it for me to add it to our processes, and what particular metric and tool/library is best for large Python projects. One of our major goals is long term maintenance.
We used the RADON tool in one of our projects, which is related to test automation.
Depending on new features and requirements, we needed to add, modify, update, or delete code in that project. Around 4-5 people were working on it, so as part of the review process we adopted the RADON tool, since we want our code to be maintainable and readable.
Based on the RADON tool's output, we refactored our code several times, extracted additional methods, and restructured loops.
Please let me know if this is useful to you.
Python isn't special when it comes to cyclomatic complexity. CC measures how much branching logic is in a chunk of code.
Experience shows that when the branching is "high", that code is harder to understand and change reliably than code in which the branching is lower.
With metrics, it typically isn't absolute values that matter; it is relative values as experienced by your organization. What you should do is to measure various metrics (CC is one) and look for a knee in the curve that relates that metric to bugs-found-in-code. Once you know where the knee is, ask coders to write modules whose complexity is below the knee. This is the connection to long-term maintenance.
What you don't measure, you can't control.
wemake-python-styleguide supports both radon and mccabe implementations of Cyclomatic Complexity.
There are also different complexity metrics that are not covered by just Cyclomatic Complexity, including:
Number of function decorators; lower is better
Number of arguments; lower is better
Number of annotations; higher is better
Number of local variables; lower is better
Number of returns, yields, awaits; lower is better
Number of statements and expressions; lower is better
Read more about why it is important to obey them: https://sobolevn.me/2019/10/complexity-waterfall
They are all covered by wemake-python-styleguide.
Repo: https://github.com/wemake-services/wemake-python-styleguide
Docs: https://wemake-python-stylegui.de
You can also use the mccabe library. It counts only McCabe complexity, and can be integrated into your flake8 linter.
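If you go the mccabe/flake8 route, enabling the check is a one-line setting; a minimal sketch (the threshold of 10 is an arbitrary, commonly cited starting point, not a recommendation from the tool itself):

```ini
# setup.cfg
[flake8]
# Fail the lint run when any function's McCabe complexity exceeds 10.
max-complexity = 10
```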

Graph library API [closed]

I am creating a library to support some standard graph traversals. Some of the graphs are defined explicitly: i.e., all edges are added by providing a data structure, or by repeatedly calling a relevant method. Some graphs are only defined implicitly: i.e., I only can provide a function that, given a node, will return its children (in particular, all the infinite graphs I traverse must be defined implicitly, of course).
The traversal generator needs to be highly customizable. For example, I should be able to specify whether I want DFS post-order/pre-order/in-order, BFS, etc.; in which order the children should be visited (if I provide a key that sorts them); whether the set of visited nodes should be maintained; whether the back-pointer (pointer to parent) should be yielded along with the node; etc.
I am struggling with the API design for this library (the implementation is not complicated at all, once the API is clear). I want it to be elegant, logical, and concise. Is there any graph library that meets these criteria that I can use as a template (doesn't have to be in Python)?
Of course, if there's a Python library that already does all of this, I'd like to know, so I can avoid coding my own.
(I'm using Python 3.)
If you need to handle infinite graphs then you are going to need some kind of functional interface to graphs (as you say in the question), so I would make that the standard representation and provide helper functions that take other representations and generate a functional one.
For the results, you could yield (you imply a generator, and I think that is a good idea) a series of result objects, each of which represents a node. If the user wants more info, like backlinks, they call a method on that object and the extra information is provided (calculated lazily where possible, so that you avoid that cost for people who don't need it).
You don't mention whether the graph is directed or not. Obviously you can treat all graphs as directed and return both directions, but then the implementation is not as efficient. Typically (e.g. JGraphT) libraries have different interfaces for different kinds of graph.
(I suspect you're going to have to iterate a lot on this before you find a good balance between an elegant API and efficiency.)
Finally, are you aware of the functional graph library? I am not sure how it will help, but I remember thinking (years ago!) that the API there was a nice one.
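As a concrete illustration of the functional-interface idea, here is a sketch of a customizable DFS generator over an implicitly defined graph (the names and keyword options are mine, not from any existing library):

```python
def dfs(start, successors, *, preorder=True, track_visited=True):
    """Depth-first traversal over an implicitly defined graph.

    `successors` is any callable mapping a node to an iterable of its
    children, so infinite or lazily generated graphs work unchanged.
    """
    seen = set()

    def walk(node):
        if track_visited:
            if node in seen:
                return
            seen.add(node)
        if preorder:
            yield node
        for child in successors(node):
            yield from walk(child)
        if not preorder:
            yield node

    return walk(start)

# An explicitly defined graph is just a dict adapted to the interface:
graph = {1: [2, 3], 2: [4], 3: [], 4: []}
print(list(dfs(1, lambda n: graph.get(n, ()))))                  # [1, 2, 4, 3]
print(list(dfs(1, lambda n: graph.get(n, ()), preorder=False)))  # [4, 2, 3, 1]
```

Order of child visitation, back-pointers, and BFS variants could be layered on the same `successors` callable in the same style.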
The traversal algorithm and the graph data structure implementation should be separate, and should talk to each other only through the standard API. (If they are coupled, each traversal algorithm would have to be rewritten for every implementation.)
So my question really has two parts:
How to design the API for the graph data structure (used by graph algorithms such as traversals and by client code that creates/accesses graphs)
How to design the API for the graph traversal algorithms (used by client code that needs to traverse a graph)
I believe the C++ Boost Graph Library answers both parts of my question very well. I would expect it could (in theory) be rewritten in Python, although there may be obstacles I won't see until I try.
Incidentally, I found a website that deals with question 1 in the context of Python: http://wiki.python.org/moin/PythonGraphApi. Unfortunately, it hasn't been updated since Aug 2011.

Learning Algorithms [duplicate]

Possible Duplicate:
Learning efficient algorithms
I recently came across a problem that was solved by applying the correct algorithm: Calculating plugin dependencies
While I was eventually able to understand the logic of the prescribed algorithm, it was not an easy task for me. The only reason I was able to come up with code that worked was because of the logic example on the wikipedia page.
Being entirely self taught, without any CS or math background, I'd like to at least get some practical foundation to being able to apply algorithms to solve problems.
That said, are there any great books/resources (something akin to "algorithms for dummies") that don't expect you to have completed college Algebra 9 or Calculus 5, and that can teach the basics? I don't expect to ever be a wizard, just to expand my problem-solving tool-set a little bit.
Doing an amazon search turns up a bunch of books, but I'm hoping you guys can point me to the truly useful resources.
The only language I have any real experience with is Python (a tiny bit of C) so whatever I find needs to be language agnostic or centred around Python/C.
"Art of Computer Programming" by Donald Knuth is a Very Useful Book.
A great book is "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
Probably not the easiest one but it is very good indeed.
I found useful for myself the following sources:
"Analysis of Algorithms : An Active Learning Approach" by Jeffrey J. McConnell;
"Python Algorithms: Mastering Basic Algorithms in the Python Language" (Expert's Voice in Open Source) by Magnus Lie Hetland; this book seems to me very similar to the previous one, but from a Python developer's point of view;
http://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs
Steve Skiena's Algorithm Design Manual is very good. It doesn't assume very much background knowledge, and covers several important topics in algorithms.
Personally I found Algorithms and Complexity to be super helpful. I'm also without CS degree or anything.

Any Naive Bayesian Classifier in python? [closed]

I have tried the Orange Framework for Naive Bayesian classification.
The methods are extremely unintuitive, and the documentation is extremely unorganized. Does anyone here have another framework to recommend?
I use mostly NaiveBayesian for now.
I was thinking of using NLTK's naive Bayes classification, but it doesn't handle continuous variables.
What are my options?
scikit-learn has an implementation of the Gaussian naive Bayes classifier. In general, the goal of this library is to provide a good trade-off between code that is easy to read and use, and efficiency. Hopefully it should be a good library for learning how the algorithms work.
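A minimal sketch with made-up continuous data, which is exactly the case the NLTK classifier doesn't cover:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two classes described by a single continuous feature.
X = np.array([[1.0], [1.2], [0.9], [3.9], [4.1], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# GaussianNB fits a per-class Gaussian to each feature.
clf = GaussianNB().fit(X, y)
print(clf.predict([[1.1], [4.2]]))  # [0 1]
```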
This might be a good place to start. It's the full source code (the text parser, the data storage, and the classifier) for a Python implementation of a naive Bayesian classifier. Although it's complete, it's still small enough to digest in one session. I think the code is reasonably well written and well commented. This is part of the source code files for the book Programming Collective Intelligence.
To get the source, click the link, download and unpack the zip, and from the main folder 'PCI_Code' go to the folder 'chapter 6', which has a Python source file 'docclass.py'. That's the complete source code for a Bayesian spam filter. The training data (emails) are persisted in an SQLite database, which is also included in the same folder ('test.db'). The only external library you need is the Python bindings to SQLite (pysqlite); you also need SQLite itself if you don't already have it installed.
If you're processing natural language, check out the Natural Language Toolkit.
If you're looking for something else, here's a simple search on PyPI.
pebl appears to handle continuous variables.
I found Divmod Reverend to be the simplest and easiest to use Python Bayesian classifier.
I just took Paul Graham's Lisp stuff and converted it to Python:
http://www.paulgraham.com/spam.html
There's also SpamBayes, which I think can be used as a general naive Bayesian classifier, instead of just for spam.

Machine vision in Python

I would like to perform a few basic machine vision tasks using Python and I'd like to know where I could find tutorials to help me get started.
As far as I know, the only free library for Python that does machine vision is PyCV (which is a wrapper for OpenCV apparently), but I can't find any appropriate tutorials.
My main tasks are to acquire an image over FireWire, segment the image into different regions, and then perform statistics on each region to determine pixel area and center of mass.
Previously, I've used Matlab's Image Processing Toolbox without any problems. The functions I would like to find equivalents for in Python are graythresh, regionprops, and gray2ind.
Thanks!
OpenCV is probably your best bet for a library; you have your choice of wrappers for them. I looked at the SWIG wrapper that comes with the standard OpenCV install, but ended up using ctypes-opencv because the memory management seemed cleaner.
They are both very thin wrappers around the C code, so any C references you can find will be applicable to the Python wrappers.
OpenCV is huge and not especially well documented, but there are some decent samples included in the samples directory that you can use to get started. A searchable OpenCV API reference is here.
You didn't mention if you were looking for online or print sources, but I have the O'Reilly book and it's quite good (examples in C, but easily translatable).
The FindContours function is a bit similar to regionprops; it will get you a list of the connected components, which you can then inspect to get their info.
For thresholding you can try Threshold. I was sure you could pass a flag to it to use Otsu's method, but it doesn't seem to be listed in the docs there.
I haven't come across specific functions corresponding to gray2ind, but they may be in there.
documentation: A few years ago I used OpenCV wrapped for Python quite a lot. OpenCV is extensively documented, ships with many examples, and there's even a book. The Python wrappers I was using were thin enough so that very little wrapper specific documentation was required (and this is typical for many other wrapped libraries). I imagine that a few minutes looking at an example, like the PyCV unit tests would be all you need, and then you could focus on the OpenCV documentation that suited your needs.
analysis: As for whether there's a better library than OpenCV, my somewhat outdated opinion is that OpenCV is great if you want to do fairly advanced stuff (e.g. object tracking), but it is possibly overkill for your needs. It sounds like scipy ndimage combined with some basic numpy array manipulation might be enough.
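For example, the regionprops-style measurements from the question reduce to a few ndimage calls; a sketch on a synthetic image:

```python
import numpy as np
from scipy import ndimage

# Synthetic image with two bright blobs.
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0   # 2x2 blob
img[5:8, 5:7] = 1.0   # 3x2 blob

mask = img > 0.5                     # crude global threshold
labels, n = ndimage.label(mask)      # connected components (cf. regionprops)
idx = range(1, n + 1)
areas = ndimage.sum(mask, labels, idx)             # pixel area per region
centers = ndimage.center_of_mass(mask, labels, idx)

print(n)        # 2
print(areas)    # areas of the two blobs: 4 and 6 pixels
print(centers)  # centroids: (1.5, 1.5) and (6.0, 5.5)
```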
acquisition: The options I know of for acquisition are OpenCV, Motmot, or using ctypes to directly interface to the drivers. Of these, I've never used Motmot because I had trouble installing it. The other methods I found fairly straightforward, though I don't remember the details (which is a good thing, since it means it was easy).
I've started a website on this subject: pythonvision.org. It has some tutorials, &c and some links to software. There are more links and tutorials there.
You would probably be well served by SciPy. Here is the introductory tutorial for SciPy. It has a lot of similarities to Matlab, especially the matplotlib package, which is explicitly made to emulate Matlab's plotting functions. I don't believe SciPy has equivalents for the functions you mentioned, but there are some things which are similar. For example, threshold is a very simple version of graythresh: it doesn't implement Otsu's method, it just does a simple threshold, but that might be close enough.
I'm sorry that I don't know of any tutorials which are closer to the task you described. But if you are accustomed to Matlab, and you want to do this in Python, SciPy is a good starting point.
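Since SciPy has no graythresh, Otsu's method is short enough to write directly in numpy; a sketch (`otsu_threshold` is my own name, not a SciPy function):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # cumulative weight of the lower class
    mu0 = np.cumsum(hist * centers)   # unnormalized cumulative mean
    mu_t = mu0[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Bimodal test image: two well-separated intensity clusters.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 5, 500), rng.normal(180, 5, 500)])
t = otsu_threshold(img)
print(t)  # lands in the gap between the two modes
```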
I don't know much about this package Motmot or how it compares to OpenCV, but I have imported and used a class or two from it. Much of the image processing is done via numpy arrays and might be similar enough to how you've used Matlab to meet your needs.
I've acquired images from a FireWire camera using .NET and IronPython. On CPython I would check out the ctypes library, unless you find library support for grabbing.
Foreword: This book is more for people who want a good hands on introduction into computer or machine vision, even though it covers what the original question asked.
[BOOK]: Programming Computer Vision with Python
At the moment you can download the final draft from the book's website for free as a PDF:
http://programmingcomputervision.com/
From the introduction:
The idea behind this book is to give an easily accessible entry point to hands-on computer vision with enough understanding of the underlying theory and algorithms to be a foundation for students, researchers and enthusiasts.
What you need to know
Basic programming experience. You need to know how to use an editor and run scripts, how to structure code as well as basic data types. Familiarity with Python or other scripting-style languages like Ruby or Matlab will help.
Basic mathematics. To make full use of the examples it helps if you know about matrices, vectors, matrix multiplication, the standard mathematical functions and concepts like derivatives and gradients. Some of the more advanced mathematical examples can be easily skipped.
What you will learn
Hands-on programming with images using Python.
Computer vision techniques behind a wide variety of real-world applications.
Many of the fundamental algorithms and how to implement and apply them yourself.
