Can Python recognize a formula from an Excel cell?

Can Python recognize a formula in an Excel cell and skip processing cells that contain formulas?

Yes, but instead of reinventing the wheel, I would use one of these libraries. They seem to provide what you need, and a tutorial is also available.

According to the answer provided by the author of xlrd in January 2011, xlrd does not currently provide access to Excel formulas. As I'm currently trying to do this myself, I'm inclined to believe that this is still the case: I'm working with version 0.9.2, which according to GitHub is the latest. I've just noticed there is xlrd1 on PyPI, but as its two listed limitations are
There is no support for files in the Microsoft Excel 2007/2010 format
One cannot extract formulas from the input file
this offers no joy either.
Although my search is hardly exhaustive (I'm in a hurry), I'm of the opinion that the only sure way of accessing formulas in Python is to access the underlying COM object via Mark Hammond's pywin32. Fairly obviously you will need Excel installed, so this limits the solution to Windows. I'm currently using the Python Excels website for a bit of inspiration. I'm afraid that I don't have any reliable or coherent code as yet - my answer is posted mainly to warn that xlrd is sadly not yet the answer to grabbing Excel formulas via Python.
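In case it helps anyone, the COM route looks roughly like the sketch below. This is my own illustration rather than anything tested or reliable: the workbook path is a placeholder, and it assumes Excel plus pywin32 on Windows:
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
wb = excel.Workbooks.Open(r"C:\path\to\workbook.xlsx")  # placeholder path
ws = wb.Worksheets(1)

cell = ws.Cells(1, 1)
if cell.HasFormula:                  # True when the cell contains a formula
    print(cell.Formula)              # e.g. "=SUM(A2:A10)"
else:
    print(cell.Value)                # plain value, nothing to skip

wb.Close(SaveChanges=False)
excel.Quit()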
===
Tue 18.Mar.2014
BTW, as I am moderately new to Stack Overflow and currently lack the ability to comment or recommend, I would add that this answer was posted specifically in light of the inadequacy of the previous answer to the question "can Python recognize a formula from an Excel cell?", which xlrd, for all its merits, cannot do. The main reason I posted an incompletely researched answer was to warn other users off a false positive, which xlrd is in this instance.
I'm currently engaged in many tasks, one of which involves this question. If I find an approach which "does" rather than one which "might" answer this question, other than the approach I have given, I will amend my answer.


Total Downloads of Module Missing on PyPi

Up until recently, it was possible to see how many times a python module indexed on https://pypi.python.org/pypi had been downloaded (each module listed downloads for the past 24hrs, week and month). Now that information seems to be missing.
Download numbers are very helpful information when evaluating whether to build code off of one module or another. They also seem to be referenced by sites such as https://img.shields.io/
Does anyone know what happened? And/or, where I can view/retrieve that information?
This email from Donald Stufft (PyPI maintainer) on the distutils mailing list says:
Just an FYI, I've disabled download counts on PyPI for the time being. The statistics stack is broken and needs engineering effort to fix it back up to deal with changes to PyPI. It was suggested that hiding the counts would help prevent user confusion when they see things like "downloaded 0 times" making people believe that a library has no users, even if it is a significantly downloaded library.
I'm unlikely to get around to fixing the current stack since, as part of Warehouse, I'm working on a new statistics stack which is much better. The data collection and storage parts of that stack are already done and I just need to get querying done (made more difficult by the fact that the new system queries can take 10+ seconds to complete, but can be queried on any dimension) and a tool to process the historical data and put it into the new storage engine.
Anyways, this is just to let folks know that this isn't a permanent loss of the feature and we won't lose any data.
So I guess we'll have to wait for the new stats stack in PyPI.
I just released http://pepy.tech/ to view the downloads of a package. I use the official data which is stored in BigQuery. I hope you will find it interesting :-)
Also, the site is open source: https://github.com/psincraian/pepy
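If you'd rather poke at that BigQuery data yourself, a minimal sketch with the google-cloud-bigquery client might look like this - note that the public table name (bigquery-public-data.pypi.file_downloads) and the example package are my assumptions, so check them against the current PyPI docs:
from google.cloud import bigquery

client = bigquery.Client()  # needs Google Cloud credentials configured
query = """
    SELECT COUNT(*) AS downloads
    FROM `bigquery-public-data.pypi.file_downloads`
    WHERE file.project = 'requests'
      AND DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""
for row in client.query(query):
    print("Downloads in the last 30 days:", row.downloads)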
I don't know what happened (although it has happened before), but you might want to try the PyPI ranking, or any of the several available modules and recipes to do this. For example:
Vanity
pyStats
random recipe
But consider that a lot of the downloads might be mirrors and not necessarily "real" user downloads. You should take that into account in your evaluation. The libraries' mailing lists (or other preferred media) might be a better way to know which version you should install.
The PyPI count is disabled temporarily, as posted by dmand, but there are some sites which may tell you Python package statistics, like pypi-stats.com (they say it shows real-time information) and pypi-ranking.info (which might not give you real-time information).
You can also find some PyPI packages which can give you download information.

Possible to autogenerate Cython bindings around a large, existing C library?

In other words: *.h/*.c --[??POSSIBLE??]--> *.pxd/*.pyx
OK. I’ve done (I hope) enough digging around the Internet - but I think this is a good question so I’ll ask it straight.
There are a few related questions (e.g. Generate python bindings, what methods/programs to use or Wrapping a C library in Python: C, Cython or ctypes?), but they don't quite sum up the situation I'm asking about, which is perhaps a more "high-level" approach (and specifically for an existing library, not generating new C from Python).
I’ve got a little bit of experience of this myself having wrapped a wee bit of code before using Cython. Cython gets the thumbs up for speed and maintainability. That’s OK in my book for small/single bits of code - but, this time I’ve got a bit more on my plate…
And following the first of the three great virtues of a programmer - I want to do this with as minimal effort as possible.
So the real question here is: how can I ease the creation, by automated means, of the .pxd and possibly .pyx files (i.e. to save time and not slip up mistyping something)?
This here seems to be the only real hint/note about how to do this - but most of the projects listed there are defunct, old, or on SourceForge. Many only seem to work for C++ (it's C I'm dealing with here).
Does anyone still use them? Recently? Has anyone got a workflow or best practice for doing this? Am I simply better off doing it by hand?
My library is well defined by a set of header files. One containing defs of all the C struct/types and another containing prototypes for all the functions. But it's loooonnnggg...
Thanks for any tips.
UPDATE (25th August, 2015):
Right, so over the last few months when I had a spare moment, I tried:
CFFI (thanks to David for pointing that out) has a noble aim: "to call C code from Python without learning a 3rd language: existing alternatives require users to learn domain specific language (Cython, SWIG) or API (ctypes)". But it didn't quite fit the bill, as it involved a fair degree of embedded C code in the actual Python files (or loading it in). This would be a pretty manual process for a large library. Maybe I missed something…
SWIG is the granddaddy of Python binding generators, and is pretty solid. Fundamentally though, it is not "hands off" as I understand it - i.e. you need a separate specification file. For example, you have to edit all your C header files to indicate building a Python module with a #define SWIG_FILE_WITH_INIT, or use other annotations. SIP has the same issue. You don't auto-generate from the headers; you modify them to include your own directives and annotations and create a complete specification file.
cwrap - I’m on a Mac so I used this version for clang. https://github.com/geggo/cwrap Really poor doc - but using the source I finally got it to run and it generated…. an empty .pyx file from a pretty simple header of structs. Not so good.
xdress - This showed promise. The website is down, so the docs are actually seemingly here. There's an impressive amount of work gone into it and it looks straightforward to use. But it needed all the llvm headers (and a correctly linked version of clang). I had to use brew install llvm --with-clang. There is an xdress clang-3.5 branch, but it doesn't seem to have enough fixes done. I tried tapping homebrew/versions for an earlier version of clang (install llvm33 / llvm34) and that got it built. Anyway, I digress… it worked great for a simple example, but the resulting ctypes files for the full library were pretty garbled and refused to build. Something in the AST C->Python is a bit awry...
ctypesgen wasn't one I had encountered in the original search. The documentation is pretty sparse - or you might call it concise. It hasn't seemingly had much work done on it in the last 4 years either (and people on the issues list are enquiring whether the developers are ever going to further the project). I've tried running it, but sadly it seems to fall over with what I suspect are issues with the Clang compiler's use of __attribute__ in cdefs.h. I've tried things like -std=c11, but to no avail.
In conclusion, out of all the ones I’ve looked at I think xdress came the closest to the fully automated generation of python bindings. It worked fine for the simple examples given, but couldn’t handle the more complex existing library headers, with all the complexities of forward declarations, enumerated types, void pointers… It seems a well designed and (for a while) well maintained project, so there is possibly some way to circumvent these issues if someone were to take it on again.
Still, the question remains, does anyone have a robust toolchain for generating python wrappers from C headers automatically? I think the reality is there always has to be a bit of manual work, and for that CFFI looks the most “modern” approach (one of the best overviews/comparisons I encountered is here) - yet it always involves a specially edited cdef() version of any header files (e.g. Using Python's CFFI and excluding system headers).
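To make that last point concrete, the manual CFFI (ABI-mode) step looks roughly like the sketch below; the struct, function names and library name are invented placeholders, not from my actual library:
from cffi import FFI

ffi = FFI()
# Hand-edited cdef() copy of just the declarations you need from the header.
ffi.cdef("""
    typedef struct foo_ctx foo_ctx;
    foo_ctx *foo_create(int size);
    int foo_process(foo_ctx *ctx, const char *data);
    void foo_destroy(foo_ctx *ctx);
""")

lib = ffi.dlopen("libfoo.so")        # load the existing shared library

ctx = lib.foo_create(1024)
lib.foo_process(ctx, b"hello")
lib.foo_destroy(ctx)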
I find ctypesgen great for autogeneration. I'm only using it with one or two python modules that I hope to open source, and I've been happy so far. Here's a quick example using it with zlib, but I also just tried it successfully with a few other libraries:
(Edit: I know you mentioned ctypesgen has problems on a mac, so maybe it needs someone to tweak it to work on OSX - I don't have OSX at home or I'd try it.)
Get ctypesgen:
git clone https://github.com/davidjamesca/ctypesgen.git
Run short script to call ctypesgen (replace zlib info with another library):
import os

# Library-specific settings - swap these for the library you want to wrap.
ZLIB_INC_DIR = "/usr/include"
ZLIB_LIB_DIR = "/usr/lib/x86_64-linux-gnu"
ZLIB_LIB = "libz.so"
ZLIB_HEADERS = "/usr/include/zlib.h"

# Set location of ctypesgen.py and the name of the generated wrapper module.
ctypesgen_path = 'ctypesgen/ctypesgen.py'
wrapper_filename = 'zlib.py'

# Build the ctypesgen command line and run it.
cmd = "LD_LIBRARY_PATH={} {} -I {} -L {} -l {} {} -o {}".format(
    ZLIB_LIB_DIR, ctypesgen_path, ZLIB_INC_DIR, ZLIB_LIB_DIR, ZLIB_LIB,
    ZLIB_HEADERS, wrapper_filename)
print(cmd)
os.system(cmd)
Usage example:
python
>>> import zlib
>>> zlib.compress("asdfasdfasdfasdfasdf")
'x\x9cK,NIKD\xc3\x00T\xfb\x08\x17'

Converting codes written in Python 2 to Python 3

I've been given the task of converting a bunch of code written in Python 2.7 to Python 3.
So my questions are:
What are the fundamental differences between the two, and what new features should I expect from the conversion? I'm assuming it's not just syntactical issues.
Where should I start, and what should I focus on?
It would be most helpful if you could be as concrete as possible.
Please help me out, and thank you in advance.
Definitely start here: http://docs.python.org/py3k/whatsnew/3.0.html
For an automated tool, see: http://docs.python.org/library/2to3.html
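As a small illustration (my own example, not taken from those documents) of the kind of change 2to3 makes for you:
# Python 2:
#   print "total:", 10 / 3        # prints "total: 3" - integer division
#   d.has_key("x")
#   for k in d.iterkeys(): ...

# Python 3 equivalents:
print("total:", 10 // 3)          # print is a function; // is explicit floor division
d = {"x": 1}
print("x" in d)                   # has_key() is gone
for k in d.keys():                # iterkeys()/itervalues() are replaced by views
    print(k)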
Building from Greg's answer, I find it easier to grok the changes by looking at the different compatibility layers people have built in order to support 2 and 3 in parallel.
CherryPy, or specifically this file.
Six, or specifically this file.
Pyramid, or specifically this file.
Whether or not to use a compatibility layer is a widely discussed topic; however, they are a good programmatic reference for scoping the major changes and what you need to do in order to support them.
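For instance, a couple of lines using six show the pattern (my own illustration, not taken from those files):
import six

print(six.text_type("works on 2 and 3"))    # unicode on Python 2, str on Python 3
for key, value in six.iteritems({"a": 1}):  # iteritems()/items() behind one name
    print(key, value)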
By far the easiest way is to use 2to3 and maintain two branches concurrently for a while. See this article on the python.org wiki.
There's also an entire website with detailed information, which is basically the contents of a book on the subject.

Neo4j API explanation for Python

Does anybody know where I can find documentation of the Neo4j API for Python?
I'm a newbie and I'm looking for a document with the list of methods and properties and an explanation of each one, or something similar.
I'm using the Neo4j Community Edition (well in fact I'm using the embedded Neo4j database in Python).
At the same time I'm trying to use neoclipse to see the graph, but sometimes the graph doesn't reflect the changes that I've made.
I've found myself in a similar situation. After reading the source code of python-embedded on GitHub, I realized that the examples they show cover almost everything you could do in a different way from Python; if you take a look at the code, they are just light wrappers that let you use Neo4j in a more "pythonic" manner and lift all the burden of connecting to the Java classes. In the long run, though, if you want all the information, you need to read some of the javadocs of the native API. My impression is that this is one of those projects where you contribute more documentation and code rather than look for something more.
By the way, the new 1.6 release is already on GitHub, and it includes a convenient wrapper for using Cypher from python-embedded (GraphDatabase.query), not just indexes and traversals.
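For anyone landing here, a rough, unverified sketch of what that looks like is below; the database path is a placeholder and the exact signatures should be double-checked against the python-embedded docs and source:
from neo4j import GraphDatabase   # the python-embedded package, not a newer driver

db = GraphDatabase("/path/to/graphdb")    # placeholder path

with db.transaction:
    node = db.node(name="example")        # create a node with a property

for row in db.query("START n=node(*) RETURN n"):
    print(row["n"])

db.shutdown()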
The documentation for the latest version of neo4j can be found here:
http://docs.neo4j.org/chunked/milestone/
Docs on the python bindings specifically can be found here:
http://docs.neo4j.org/chunked/milestone/python-embedded.html

Does one often use libraries outside the standard ones?

I am trying to learn Python and referencing the documentation for the standard Python library on the Python website, and I was wondering whether this is really the only library and documentation I will need, or whether there is more. I do not plan to program advanced 3D graphics or anything advanced at the moment.
Edit:
Thanks very much for the responses, they were very useful. My problem is where to start on a script I have been thinking of. I want to write a script that converts images into a web format but I am not completely sure where to begin. Thanks for any more help you can provide.
For the basics, yes, the standard Python library is probably all you'll need. But as you continue programming in Python, eventually you will need some other library for some task -- for instance, I recently needed to generate a tone at a specific, but differing, frequency for an application, and pyAudiere did the job just right.
A lot of the other libraries out there generate their documentation differently from the core Python style -- it's just visually different, the content is the same. Some only have docstrings, and you'll be best off reading them in a console, perhaps.
Regardless of how the other documentation is generated, get used to looking through the Python APIs to find the functions/classes/methods you need. When the time comes for you to use non-core libraries, you'll know what you want to do, but you'll have to find how to do it.
For the future, it wouldn't hurt to be familiar with C, either. There's a number of Python libraries that are actually just wrappers around C libraries, and the documentation for the Python libraries is just the same as the documentation for the C libraries. PyOpenGL comes to mind, but it's been a while since I've personally used it.
As others have said, it depends on what you're into. The package index at http://pypi.python.org/pypi/ has categories and summaries that are helpful in seeing what other libraries are available for different purposes. (Select "Browse packages" on the left to see the categories.)
One very common library, that should also fit your current needs, is the Python Image Library (PIL).
Note: the latest version is still in beta, and available only at Effbot site.
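As a tiny sketch of the conversion script mentioned in the question (the file names here are made up), PIL makes it roughly this short:
from PIL import Image

img = Image.open("photo.bmp")
img = img.convert("RGB")                   # drop alpha/palette so JPEG can cope
img.save("photo.jpg", "JPEG", quality=85)  # lossy, web-friendly
img.save("photo.png", "PNG")               # lossless alternative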
If you're just beginning, all you'll need to know is the stuff you can get from the Python website. Failing that a quick Google is the fastest way to get (most) Python answers these days.
As you develop your skills and become more advanced, you'll start looking for more exciting things to do, at which point you'll naturally start coming across other libraries (for example, pygame) that you can use for your more advanced projects.
It's very hard to answer this without knowing what you're planning on using Python for. I recommend Dive Into Python as a useful resource for learning Python.
In terms of popular third party frameworks, for web applications there's the Django framework and associated documentation, network stuff there's Twisted ... the list goes on. It really depends on what you're hoping to do!
Assuming that the standard library doesn't provide what we need and we don't have the time, or the knowledge, to implement the code ourselves, we reuse 3rd-party libraries.
This is a common attitude regardless of the programming language.
If there's a chance that someone else ever wanted to do what you want to do, there's a chance that someone created a library for it. A few minutes Googling something like "python image library" will find you what you need, or let you know that someone hasn't created a library for your purposes.
