Reportlab Wrapper - python

I'm looking for a Reportlab wrapper which does the heavy lifting for me.
I found this one, which looks promising.
It looks cumbersome to me to deal with the low-level api of Reportlab (especially positioning of elements, etc) and a library should facilitate at least this part.
My code for creating PDFs is currently a maintenance hell: it consists of positioning elements, keeping track of which things should stick together, and logic to deal with input strings of varying length.
For example, when creating PDF invoices, I have to give the user the ability to adjust the distance between two paragraphs. Currently I grab this value from the UI and then recalculate the positions of paragraphs A and B based on the input.
Besides looking for a wrapper to help me with this, it would be great if someone could point me to / provide a best-practice example of how to deal with positioning of elements, input strings of varying length, etc.
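To illustrate what I mean, here is a minimal sketch (my own, using ReportLab's built-in Platypus layer rather than any wrapper; the function and file names are made up): the gap taken from the UI simply becomes a Spacer flowable between the two paragraphs, instead of coordinates I have to recalculate by hand.

from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import mm
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

def build_invoice(path, paragraph_gap_mm):
    # paragraph_gap_mm would come from the UI in my setup
    styles = getSampleStyleSheet()
    story = [
        Paragraph("Paragraph A: billing address and header text.", styles["Normal"]),
        Spacer(1, paragraph_gap_mm * mm),  # adjustable gap instead of recalculated coordinates
        Paragraph("Paragraph B: invoice line items follow here.", styles["Normal"]),
    ]
    SimpleDocTemplate(path, pagesize=A4).build(story)

build_invoice("invoice.pdf", paragraph_gap_mm=8)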

For future reference:
Having tested the lib PDFDocument, I can only recommend it. It takes away a lot of complexity, provides a lot of helper functions, and helps to keep your code clean. I found this resource really helpful to get started.

Visualize Python function flow (e.g. as tree or concept map)

I have taught myself python in quite a haphazard way. So my question perhaps won't be very pythonic.
When I write functions in classes, I often lose the overview of what each function does. I do try to name them properly. But still, there are sometimes smaller pieces of code where it seems arbitrary which function to put them in. So whenever I want to make changes, I still need to go through the entire code in order to figure out how my functions actually flow.
From what I understand, we have objects and we have functions, and these are our units for structuring our code. But this only gives you a flat structure. It doesn't allow any kind of nesting over multiple levels, like in a tree diagram. In particular, the code in my file doesn't automatically order itself so that the functions that are called first and most often end up further up towards the top, while helper functions end up further down in the document, or even nested.
In fact, even being able to visually nest lower-order subroutines "inside" the higher-order functions that call them would seem helpful. But that's not something Python's syntax supports. (Plus, it wouldn't quite suffice, because a subroutine might be used by several higher-order functions.)
I imagine it would be useful to see all functions in my code visualized as a tree, or as a concept map:
where each function is a dot, and the calling order is visualized by arrows. This way, I would also easily see which functions are central, which are outliers, and which are even orphaned.
Yet perhaps this isn't even a case for another tool. Perhaps this is more a case of me not understanding proper coding. Perhaps there is something I can do differently in order to get the kind of intuitive overview over how my program works, without needing another tool.
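For concreteness, here is a rough sketch of the kind of function-to-calls map I have in mind (the code and the example functions are purely illustrative), extracted with the standard library's ast module; dedicated tools such as pyan presumably do this properly.

import ast

def call_map(source):
    """Map each top-level function name to the names it calls (static, approximate)."""
    tree = ast.parse(source)
    calls = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            called = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
            calls[node.name] = sorted(called)
    return calls

code = """
def helper(x):
    return x * 2

def main(values):
    return [helper(v) for v in values]
"""
print(call_map(code))  # {'helper': [], 'main': ['helper']}

The resulting dictionary could then be drawn as dots and arrows with any graph tool.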
Firstly, I am not quite sure why this isn't asked more often. Reading code is not intuitive at all! We should be able to visualize the evolution of a process or function so well that nearly everyone can understand its behavior. In the 60s and so, people had to be reasonably sure their programs would run, because getting access to the computer took time; today we execute or compile our program, run tests if we have them, and know immediately whether it works. What has happened is that there is less mental effort now: we execute a bit less code in our heads, and the computer a bit more. But we must still think about how the program behaves mid-execution in order to debug. For the future, it would be nice if the computer could just tell us how the program behaves.
You propose looking at a sort of tree of the program as a solution, and after all, the abstract syntax tree is literally a tree, but I don't think this is where we ought to spend our efforts when it comes to visualizing systems. What would be preferable is an interactive view of how the program changes its intermediate data structures as a function of time.
Currently, we have debuggers, but that's akin to examining a function by asking for its value at many points, when we would much rather look at its graph. A lot of programming is done by doing something you feel is right, observing whether the behavior is correct, and, if it's not, making modifications to react to and correct that behavior.
Bret Victor in his essay, Learnable Programming, discusses this topic. I highly recommend it, even though it won't help you right now, maybe you can help others in the future by making these ideas more prevalent.
Onwards, then, to what you can do right now. In his book Clean Code, Robert C. Martin proposes structuring code much like a newspaper is laid out.
Think of a well-written newspaper article. You read it vertically. At the top you expect a headline that will tell you what the story is about and allows you to decide whether it is something you want to read. The first paragraph gives you a synopsis of the whole story, hiding all the details while giving you the broad-brush concepts. As you continue downward, the details increase until you have all the dates, names, quotes, claims, and other minutia.
What is proposed is to organize your program top-down, with higher-level procedures that call mid-level procedures, which in turn call lower-level procedures. At any point, it should be obvious that (1) you are at the appropriate level of abstraction, and (2) you are looking at the part of the program implementing the behavior you seek to modify.
This means storing state at the level where it belongs, and not exposing it anywhere else. Procedures should take only the parameters they need, because more parameters mean more things you must think about when reasoning about the code.
This is the primary reason for decomposing procedures into smaller procedures. For example, here is some code I've written previously. It's not perfect, but you can see very clearly which part of the program you need to go to if you want to change anything.
Of course, the higher-order procedures are listed before any others. I'm telling you what I'm going to do before I show you how I do it.
function formatLinks(matches, stripped) {
    let formatted_links = []
    for (const match of matches) {
        let diff = difference(match, stripped)
        if (isSimpleLink(diff)) {
            formatted_links.push(formatAsSimpleLink(diff))
        } else if (hasPrefix(diff)) {
            formatted_links.push(formatAsPrefixedLink(diff))
        } else if (hasSuffix(diff)) {
            formatted_links.push(formatAsSuffixedLink(diff))
        }
    }
    // There might be multiple links within a word,
    // in which case we join them and strip any duplicated parts
    // (which are otherwise interpreted as suffixes)
    if (formatted_links.length > 1) {
        return combineLinks(formatted_links)
    }
    // Default case
    return formatted_links[0]
}
If JavaScript were a typed language, and if we could see an image of the decisions made in the code as a function of input and time, this could be even better.
I think Quokka.js and VS Code Debug Visualizer are both doing interesting work in this sector.

Pyramids and Oblique Cones in MEEP

Apologies if this is not the right place for this question.
I've recently started using MIT's MEEP software (Python3, on Linux). I am quite new to it and would like to mostly use it for photovoltaics projects. Somewhat common shapes that show up here are "inverted pyramid" and slanted (oblique) cone structures. Creating shapes in MEEP seems to generally be done with the GeometricObject class, but they don't seem to directly support either of these structures. Is there any way around this or is my only real option simulating these structures by stacking small Block objects?
As described in my own "answer" posted below, it's not too difficult to just define these geometric objects myself, write a function to check whether a point is inside the object, and return the appropriate material. How would I go about converting this into a MEEP GeometricObject, instead of converting it into a material_func as I've done?
No responses, so I thought I'd post my hacky way around it. There are two solutions. The first is, as mentioned in the question, just stacking MEEP's Block objects. The other approach was to define my own class Pyramid, which works basically the same way as described here. Then I convert a list of my class objects and MEEP's shape objects into a function that takes a vector and returns a material, and this is fed as material_func to MEEP's Simulation object. So far it seems to work, hence I'm posting it as an answer. However, it substantially slows down subpixel averaging (and maybe the rest of the simulation, though I haven't done an actual analysis), so I'm not very happy with it.
I'm not sure which is "better", but the second method does feel more precise, insofar as you get actual pyramids, not just a stack of Blocks.
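As a rough illustration of the second approach (the class and function names here are my own simplification, not MEEP's API), the containment test for an inverted square pyramid and the conversion to a material function look roughly like this:

import meep as mp

class Pyramid:
    """Inverted square pyramid: square base at z = base_z, apex below at z = apex_z."""
    def __init__(self, center_x, center_y, base_z, apex_z, base_half_width, material):
        self.cx, self.cy = center_x, center_y
        self.base_z, self.apex_z = base_z, apex_z
        self.half = base_half_width
        self.material = material

    def contains(self, p):
        if not (self.apex_z <= p.z <= self.base_z):
            return False
        # half-width shrinks linearly from the base down to the apex
        frac = (p.z - self.apex_z) / (self.base_z - self.apex_z)
        h = self.half * frac
        return abs(p.x - self.cx) <= h and abs(p.y - self.cy) <= h

def make_material_func(shapes, background=mp.Medium(epsilon=1)):
    """Turn a list of shapes into a vector -> material function, as described above."""
    def material_func(p):  # p is an mp.Vector3
        for s in shapes:
            if s.contains(p):
                return s.material
        return background
    return material_func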

Pros and Cons of html-output for statistical data

I am using Python 3 to calculate some statistics from language corpora. Until now I was exporting the results to a CSV file or directly to the shell. A few days ago I started learning how to output the data to HTML tables. I must say I really like it: it handles cell height/width and Unicode well, and you can apply color to different values, although I think there are some problems when dealing with large data or tables.
Anyway, my question is that I'm not sure whether I should continue in this direction and output the results to HTML. Can someone with experience in this field help me with some pros and cons of using HTML as output?
Why not do both? Make your data available as CSV (for simple export to scripts, etc.) and provide a decorated HTML version.
At some stage you may want (say) a proper Excel sheet, a PDF etc. So I would enforce a separation of the data generation from the rendering. Make your generator return a structure that can be consumed by an abstract renderer, and your concrete implementations would present CSV, PDF, HTML etc.
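A minimal sketch of that separation (the names and sample data are illustrative only): one generator produces plain rows, and separate renderers turn the same rows into CSV or HTML.

import csv
import html
import io

def generate_stats():
    # stand-in for the real corpus statistics
    return ["token", "count"], [("the", 1200), ("Zürich", 87)]

def render_csv(header, rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def render_html(header, rows):
    head = "".join(f"<th>{html.escape(str(c))}</th>" for c in header)
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(str(c))}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

header, rows = generate_stats()
print(render_csv(header, rows))
print(render_html(header, rows))

An Excel or PDF renderer would then be just another function consuming the same (header, rows) structure.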
The question lists some benefits of HTML format. These alone are sufficient for using it as one of output formats. Used that way, it does not really matter much what you cannot easily do with the HTML format, as you can use other formats as needed.
Benefits include reasonable default rendering, which can be fine-tuned in many ways using CSS, possibly with alternate style sheets (now supported even by IE). You can also include links.
What you cannot do in HTML without scripting is computation, sorting, reordering, that kind of stuff. But they can be added with JavaScript – not trivial, but doable.
There’s a technical difficulty with large tables: by default, a browser will start showing any content in the table only after having got, parsed, and processed the entire table. This may cause a delay of several seconds. A way to deal with this is to use fixed layout (table-layout: fixed) with specific widths set on table columns (they need not be fixed in physical units; the great em unit works OK, and on modern browsers you can use ch too).
Another difficulty is bad line breaks. It's easily fixable with CSS (or HTML), but authors often miss the issue, causing e.g. cell contents like "10 m" to be split across two lines.
Other common problems with formatting statistical data in HTML include:
Not aligning numeric fields to the right.
Using serif fonts.
Using fonts where not all digits have equal width.
Using the ASCII hyphen "-" instead of the proper Unicode minus sign "−" (U+2212).
Not indicating missing values in some reasonable way, leaving some cells empty. (Browsers may treat empty cells in odd ways.)
Insufficient horizontal padding, making cell contents (almost) hit cell border or cell background edge.
There are good and fairly easy solutions to such problems, so this is just something to be noted when using HTML as output format, not an argument against it.

How do I design a class in Python?

I've had some really awesome help on my previous questions for detecting paws and toes within a paw, but all these solutions only work for one measurement at a time.
Now I have data that consists of:
about 30 dogs;
each has 24 measurements (divided into several subgroups);
each measurement has at least 4 contacts (one for each paw) and
each contact is divided into 5 parts and
has several parameters, like contact time, location, total force etc.
Obviously sticking everything into one big object isn't going to cut it, so I figured I needed to use classes instead of the current slew of functions. But even though I've read Learning Python's chapter about classes, I fail to apply it to my own code (GitHub link)
I also feel like it's rather strange to process all the data every time I want to get out some information. Once I know the locations of each paw, there's no reason for me to calculate this again. Furthermore, I want to compare all the paws of the same dog to determine which contact belongs to which paw (front/hind, left/right). This would become a mess if I continue using only functions.
So now I'm looking for advice on how to create classes that will let me process my data (link to the zipped data of one dog) in a sensible fashion.
How to design a class.
Write down the words. You started to do this. Some people don't and wonder why they have problems.
Expand your set of words into simple statements about what these objects will be doing. That is to say, write down the various calculations you'll be doing on these things. Your short list of 30 dogs, 24 measurements, 4 contacts, and several "parameters" per contact is interesting, but only part of the story. Your "locations of each paw" and "compare all the paws of the same dog to determine which contact belongs to which paw" are the next step in object design.
Underline the nouns. Seriously. Some folks debate the value of this, but I find that for first-time OO developers it helps. Underline the nouns.
Review the nouns. Generic nouns like "parameter" and "measurement" need to be replaced with specific, concrete nouns that apply to your problem in your problem domain. Specifics help clarify the problem. Generics simply elide details.
For each noun ("contact", "paw", "dog", etc.) write down the attributes of that noun and the actions in which that object engages. Don't short-cut this. Every attribute. "Data Set contains 30 Dogs" for example is important.
For each attribute, identify if this is a relationship to a defined noun, or some other kind of "primitive" or "atomic" data like a string or a float or something irreducible.
For each action or operation, you have to identify which noun has the responsibility, and which nouns merely participate. It's a question of "mutability". Some objects get updated, others don't. Mutable objects must own total responsibility for their mutations.
At this point, you can start to transform nouns into class definitions. Some collective nouns are lists, dictionaries, tuples, sets or namedtuples, and you don't need to do very much work. Other classes are more complex, either because of complex derived data or because of some update/mutation which is performed.
Don't forget to test each class in isolation using unittest.
Also, there's no law that says classes must be mutable. In your case, for example, you have almost no mutable data. What you have is derived data, created by transformation functions from the source dataset.
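As a rough illustration of where such a noun analysis might lead for this data set (the class and field names are purely illustrative, not a prescribed schema), mostly immutable containers plus derived accessors:

from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Contact:
    paw: str            # e.g. "front-left"
    contact_time: float
    location: tuple     # (x, y) on the pressure plate
    total_force: float

@dataclass
class Measurement:
    subgroup: str
    contacts: List[Contact] = field(default_factory=list)

@dataclass
class Dog:
    name: str
    measurements: List[Measurement] = field(default_factory=list)

    def contacts(self):
        # derived, not stored: walk every measurement once
        for m in self.measurements:
            yield from m.contacts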
The following advice (similar to S.Lott's advice above) is from the book Beginning Python: From Novice to Professional:
Write down a description of your problem (what should the problem do?). Underline all the nouns, verbs, and adjectives.
Go through the nouns, looking for potential classes.
Go through the verbs, looking for potential methods.
Go through the adjectives, looking for potential attributes.
Allocate methods and attributes to your classes.
To refine the class, the book also advises we can do the following:
Write down (or dream up) a set of use cases—scenarios of how your program may be used. Try to cover all the functionality.
Think through every use case step by step, making sure that everything we need is covered.
I like the TDD approach...
So start by writing tests for what you want the behaviour to be. And write code that passes. At this point, don't worry too much about design, just get a test suite and software that passes. Don't worry if you end up with a single big ugly class, with complex methods.
Sometimes, during this initial process, you'll find a behaviour that is hard to test and needs to be decomposed, just for testability. This may be a hint that a separate class is warranted.
Then the fun part... refactoring. After you have working software you can see the complex pieces. Often little pockets of behaviour will become apparent, suggesting a new class, but if not, just look for ways to simplify the code. Extract service objects and value objects. Simplify your methods.
If you're using git properly (you are using git, aren't you?), you can very quickly experiment with some particular decomposition during refactoring, and then abandon it and revert back if it doesn't simplify things.
By writing tested working code first you should gain an intimate insight into the problem domain that you couldn't easily get with the design-first approach. Writing tests and code push you past that "where do I begin" paralysis.
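A tiny illustration of that starting point (the function and data here are hypothetical stand-ins for the paw data): write the behaviour-level test first, make it pass, and only then worry about whether a class is warranted.

import unittest

def paw_count(contacts):
    return len(set(c["paw"] for c in contacts))

class PawCountTest(unittest.TestCase):
    def test_four_paws_detected(self):
        contacts = [{"paw": p} for p in ("FL", "FR", "HL", "HR", "FL")]
        self.assertEqual(paw_count(contacts), 4)

if __name__ == "__main__":
    unittest.main()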
The whole idea of OO design is to make your code map to your problem, so when, for example, you want the first footstep of a dog, you do something like:
dog.footstep(0)
Now, it may be that for your case you need to read in your raw data file and compute the footstep locations. All this could be hidden in the footstep() function so that it only happens once. Something like:
class Dog:
    def __init__(self):
        self._footsteps = None

    def footstep(self, n):
        if not self._footsteps:
            self.readInFootsteps(...)
        return self._footsteps[n]
[This is now a sort of caching pattern. The first time it goes and reads the footstep data, subsequent times it just gets it from self._footsteps.]
But yes, getting OO design right can be tricky. Think more about the things you want to do to your data, and that will inform what methods you'll need to apply to what classes.
After skimming your linked code, it seems to me that you are better off not designing a Dog class at this point. Rather, you should use Pandas and dataframes. A dataframe is a table with columns. Your dataframe would have columns such as dog_id, contact_part, contact_time, contact_location, etc. (a short sketch follows the list below).
Pandas uses Numpy arrays behind the scenes, and it has many convenience methods for you:
Select a dog with a boolean mask, e.g.: my_measurements['dog_id'] == 'Charly'
save the data: my_measurements.save('filename.pickle')
Consider using pandas.read_csv() instead of manually reading the text files.
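A short sketch of that dataframe layout (column names and values are illustrative; in current pandas, to_pickle/read_pickle replace the older .save call):

import pandas as pd

df = pd.DataFrame({
    "dog_id":       ["Charly", "Charly", "Rex"],
    "contact_part": ["front-left", "hind-right", "front-left"],
    "contact_time": [0.42, 0.39, 0.47],
    "total_force":  [112.0, 98.5, 120.3],
})

charly = df[df["dog_id"] == "Charly"]               # select one dog
df.to_pickle("measurements.pickle")                 # persist the table
mean_force = df.groupby("dog_id")["total_force"].mean()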
Writing out your nouns, verbs, adjectives is a great approach, but I prefer to think of class design as asking the question what data should be hidden?
Imagine you had a Query object and a Database object:
The Query object will help you create and store a query -- store is the key here, since a function could help you create one just as easily. Maybe you could say: Query().select('Country').from_table('User').where('Country == "Brazil"'). The exact syntax doesn't matter -- that is your job! -- the key is that the object is helping you hide something, in this case the data necessary to store and output a query. The power of the object comes from the syntax of using it (in this case some clever chaining) and from not needing to know what it stores to make it work. If done right, the Query object could output queries for more than one database. Internally it would store a specific format but could easily convert to other formats when outputting (Postgres, MySQL, MongoDB).
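A toy sketch of that chaining idea (implementation mine, just to make the hiding concrete): the object only stores the pieces and renders a query string on demand.

class Query:
    def __init__(self):
        self._columns, self._table, self._where = "*", None, None

    def select(self, columns):
        self._columns = columns
        return self

    def from_table(self, table):
        self._table = table
        return self

    def where(self, condition):
        self._where = condition
        return self

    def to_sql(self):
        sql = f"SELECT {self._columns} FROM {self._table}"
        return sql + (f" WHERE {self._where}" if self._where else "")

print(Query().select("Country").from_table("User").where("Country = 'Brazil'").to_sql())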
Now let's think through the Database object. What does this hide and store? Well clearly it can't store the full contents of the database, since that is why we have a database! So what is the point? The goal is to hide how the database works from people who use the Database object. Good classes will simplify reasoning when manipulating internal state. For this Database object you could hide how the networking calls work, or batch queries or updates, or provide a caching layer.
The problem is this Database object is HUGE. It represents how to access a database, so under the covers it could do anything and everything. Clearly networking, caching, and batching are quite hard to deal with depending on your system, so hiding them away would be very helpful. But, as many people will note, a database is insanely complex, and the further from the raw DB calls you get, the harder it is to tune for performance and understand how things work.
This is the fundamental tradeoff of OOP. If you pick the right abstraction, it makes coding simpler (String, Array, Dictionary); if you pick an abstraction that is too big (Database, EmailManager, NetworkingManager), it may become too complex to really understand how it works, or what to expect. The goal is to hide complexity, but some complexity is necessary. A good rule of thumb is to start out avoiding Manager objects, and instead create classes that are like structs -- all they do is hold data, with some helper methods to create/manipulate the data to make your life easier. For example, in the case of EmailManager, start with a function called sendEmail that takes an Email object. This is a simple starting point and the code is very easy to understand.
As for your example, think about what data needs to be together to calculate what you are looking for. If you wanted to know how far an animal was walking, for example, you could have AnimalStep and AnimalTrip (collection of AnimalSteps) classes. Now that each Trip has all the Step data, then it should be able to figure stuff out about it, perhaps AnimalTrip.calculateDistance() makes sense.
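A brief sketch of that AnimalStep/AnimalTrip idea (implementation mine): because the trip owns its steps, the distance calculation lives in exactly one place.

import math
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class AnimalStep:
    x: float
    y: float

@dataclass
class AnimalTrip:
    steps: List[AnimalStep]

    def calculate_distance(self):
        return sum(
            math.dist((a.x, a.y), (b.x, b.y))
            for a, b in zip(self.steps, self.steps[1:])
        )

trip = AnimalTrip([AnimalStep(0, 0), AnimalStep(3, 4), AnimalStep(3, 10)])
print(trip.calculate_distance())  # 11.0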

python solutions for managing scientific data dependency graph by specification values

I have a scientific data management problem which seems general, but I can't find an existing solution or even a description of it, which I have long puzzled over. I am about to embark on a major rewrite (python) but I thought I'd cast about one last time for existing solutions, so I can scrap my own and get back to the biology, or at least learn some appropriate language for better googling.
The problem:
I have expensive (hours to days to calculate) and big (GBs) data attributes that are typically built as transformations of one or more other data attributes. I need to keep track of exactly how this data is built so I can reuse it as input for another transformation if it fits the problem (i.e., was built with the right specification values) or construct new data as needed. Although it shouldn't matter, I typically start with 'value-added', somewhat heterogeneous molecular biology info, for example genomes with genes and proteins annotated by other processes by other researchers. I need to combine and compare these data to make my own inferences. A number of intermediate steps are often required, and these can be expensive. In addition, the end results can become the input for additional transformations. All of these transformations can be done in multiple ways: restricting with different initial data (e.g. using different organisms), by using different parameter values in the same inferences, or by using different inference models, etc. The analyses change frequently and build on others in unplanned ways. I need to know what data I have (what parameters or specifications fully define it), both so I can reuse it if appropriate, as well as for general scientific integrity.
My efforts in general:
I design my python classes with the problem of description in mind. All data attributes built by a class object are described by a single set of parameter values. I call these defining parameters or specifications the 'def_specs', and these def_specs with their values the 'shape' of the data atts. The entire global parameter state for the process might be quite large (eg a hundred parameters), but the data atts provided by any one class require only a small number of these, at least directly. The goal is to check whether previously built data atts are appropriate by testing if their shape is a subset of the global parameter state.
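To make the subset test concrete, here is a minimal sketch (simplified from what I actually do; the parameter names are invented): a stored shape matches if every one of its def_specs has the same value in the current global parameter state.

def shape_matches(shape, global_state):
    """True if every defining parameter in `shape` has the same value globally."""
    return all(
        key in global_state and global_state[key] == value
        for key, value in shape.items()
    )

global_state = {"organism": "E. coli", "model": "hmm", "evalue_cutoff": 1e-5, "k": 6}
stored_shape = {"organism": "E. coli", "evalue_cutoff": 1e-5}

print(shape_matches(stored_shape, global_state))  # True: the cached data can be reused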
Within a class it is easy to find the needed def_specs that define the shape by examining the code. The rub arises when a module needs a data att from another module. These data atts will have their own shape, perhaps passed as args by the calling object, but more often filtered from the global parameter state. The calling class should be augmented with the shape of its dependencies in order to maintain a complete description of its data atts.
In theory this could be done manually by examining the dependency graph, but this graph can get deep, and there are many modules, which I am constantly changing and adding, and ... I'm too lazy and careless to do it by hand.
So, the program dynamically discovers the complete shape of the data atts by tracking calls to other classes attributes and pushing their shape back up to the caller(s) through a managed stack of __get__ calls. As I rewrite I find that I need to strictly control attribute access to my builder classes to prevent arbitrary info from influencing the data atts. Fortunately python is making this easy with descriptors.
I store the shape of the data atts in a db so that I can query whether appropriate data (i.e. its shape is a subset of the current parameter state) already exists. In my rewrite I am moving from mysql via the great SQLAlchemy to an object db (ZODB or couchdb?) as the table for each class has to be altered when additional def_specs are discovered, which is a pain, and because some of the def_specs are python lists or dicts, which are a pain to translate to sql.
I don't think this data management can be separated from my data transformation code because of the need for strict attribute control, though I am trying to do so as much as possible. I can use existing classes by wrapping them with a class that provides their def_specs as class attributes, and db management via descriptors, but these classes are terminal in that no further discovery of additional dependency shape can take place.
If the data management cannot easily be separated from the data construction, I guess it is unlikely that there is an out of the box solution but a thousand specific ones. Perhaps there is an applicable pattern? I'd appreciate any hints at how to go about looking or better describing the problem. To me it seems a general issue, though managing deeply layered data is perhaps at odds with the prevailing winds of the web.
I don't have specific python-related suggestions for you, but here are a few thoughts:
You're encountering a common challenge in bioinformatics. The data is large, heterogeneous, and comes in constantly changing formats as new technologies are introduced. My advice is to not overthink your pipelines, as they're likely to be changing tomorrow. Choose a few well defined file formats, and massage incoming data into those formats as often as possible. In my experience, it's also usually best to have loosely coupled tools that do one thing well, so that you can chain them together for different analyses quickly.
You might also consider taking a version of this question over to the bioinformatics stack exchange at http://biostar.stackexchange.com/
ZODB was not designed to handle massive data; it is intended for web-based applications, and in any case it is a flat-file-based database.
I recommend you try PyTables, a Python library for handling HDF5 files, a format used in astronomy and physics to store results from big calculations and simulations. It can be used as a hierarchical database and also has an efficient way to pickle Python objects. By the way, the author of PyTables explained that ZODB was too slow for what he needed to do, and I can confirm that. If you are interested in HDF5, there is also another library, h5py.
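A small illustration of the HDF5 route with h5py (the group, dataset, and attribute names are made up): the defining parameters can be stored as attributes right next to the data they describe; PyTables offers a similar hierarchical interface with richer querying.

import h5py
import numpy as np

with h5py.File("results.h5", "w") as f:
    grp = f.create_group("organism_A/alignment_run_1")
    grp.create_dataset("scores", data=np.random.rand(1000))
    grp.attrs["evalue_cutoff"] = 1e-5   # keep the defining parameters with the data

with h5py.File("results.h5", "r") as f:
    print(f["organism_A/alignment_run_1"].attrs["evalue_cutoff"])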
As a tool for managing the versioning of the different calculations you have, you can try sumatra, which is something like an extension to git/trac but designed for simulations.
You should ask this question on biostar, you will find better answers there.
