Changing the bit-depth of figures produced using Matplotlib - python

I'm using matplotlib to generate some figures via savefig. These figures are black and white and need to be saved at a very high resolution (1000 dpi) in TIFF format. It would therefore be beneficial to save them with a reduced bit depth so as to use less memory.
To that end, my question: how does one specify the bit depth when saving figures with matplotlib?
Thanks!

So far I get the impression that matplotlib doesn't support a bit-depth option, so I'm using ImageMagick to convert the image post hoc:
convert -monochrome +dither A.tiff B.tiff
Several things I'll mention in case someone else is trying to do something similar:
When I first changed the bit depth by running convert -monochrome A.tiff B.tiff, the fonts looked unacceptably ugly (even at 1000 DPI!). This was because of antialiasing, which matplotlib performs by default. I couldn't find any option to turn this off, but its negative effects (when reducing the bit depth) can be largely circumvented by enabling dithering. Therefore, even if there were an option to change the bit depth of the output image in matplotlib, it wouldn't be useful unless it performed dithering, or unless there were also an option to disable antialiasing.
In short, I would suggest that anyone in a similar situation do their monochrome conversion post hoc, as I have done.
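A minimal sketch of the whole pipeline (the plotted data is just a placeholder, and ImageMagick's convert is assumed to be on the PATH):

import subprocess
import matplotlib.pyplot as plt

# Placeholder figure; the point is the save-then-convert pipeline.
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], color='black')

# Save at high resolution from matplotlib (anti-aliased, full bit depth)...
fig.savefig('A.tiff', dpi=1000)

# ...then reduce the bit depth post hoc with ImageMagick, as above.
subprocess.run(['convert', '-monochrome', '+dither', 'A.tiff', 'B.tiff'], check=True)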

Related

Converting svg to png without changing the pixel values

I have been stuck on this problem for some time now. Essentially, I have a bunch of SVG images. Each 'child' in the SVG file has been labelled with some pixel value. Currently the number of these values is very small and everything is labelled as rgb(0.0, 0.0, 0.0), rgb(1.0, 1.0, 1.0) ... rgb(9.0, 9.0, 9.0), so essentially I have 10 different types of pixels.
Now, I want to convert these images into PNG format, and more importantly I need the mapping of pixel values to be 1-to-1. Essentially, all pixels that have value rgb(0.0, 0.0, 0.0) in the SVG files need to have value rgb(x,x,x) in the PNG files (or even better L(x)); rgb(1.0, 1.0, 1.0) in the SVG files needs to be converted to rgb(y,y,y) in the PNG files (or even better L(y)), and so on. This one-to-one mapping is a dealbreaker for my application, because it is essentially the ground truth for my work.
By simply writing in the console:
convert test.svg test.png
doesn't give me what I want. Checking the histogram of values, it seems that I have 248 unique values instead of 10, and that isn't good for me (even though the vast majority of them cover just a few pixels).
Does anyone know:
if this can be done.
how this can be done.
So far I've tried other libraries like Python's cairosvg, but that seems to work even worse. Yes, I know that SVG and PNG are totally different formats.
For clarification, adding an svg and a png file:
svg: https://drive.google.com/open?id=0B_vhcDz1zxeYeGVDSnhfeWplOWs
png: https://drive.google.com/open?id=0B_vhcDz1zxeYUnYzZUtIUmVqVWM
Opening the file in Python, it seems that there are 248 unique pixel values, while there should be only 4 (background plus three symbols).
Thanks!
Your request isn't making a lot of sense. As far as I can see, your sample SVG file only has two colours: black and white (rgb(255.0, 255.0, 255.0)). So where does this 10 colours idea come from?
Also the SVG standard does not specify exactly how vector shapes should be converted to pixels. There will be subtle differences between SVG renderers.
Remember that vector shape edges that pass through the middle of a pixel will produce a grey pixel. This is called anti-aliasing. It is designed to give a smoother look to the edge. And I imagine that is why you are seeing many more pixel values than you expect.
Perhaps what you are saying is that you want a way to disable anti-aliasing? Some conversion programs may have options to do this. Alternatively you can try adding the shape-rendering attribute to the root <svg> tag of your file:
<ns0:svg ...(snip)... shape-rendering="crispEdges">
However some SVG conversion programs may not support this attribute. But you can see it working if you try it in most browsers.
The output generated by turning antialiasing off will not look as good. But perhaps for your purposes you don't care about that.
Alternatively, perhaps you want to know how to convert the SVG to a bitmap while limiting the antialiasing to 10 specific levels of grey? ImageMagick lets you do that. I am not an ImageMagick user, but apparently you can tell ImageMagick to use a specific colour palette by passing a palette image with the -map parameter.
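For example, something along these lines might work (untested; newer ImageMagick versions spell the option -remap, older ones -map):
convert input.svg -remap palette.png output.png
where palette.png is a small image containing only your 10 target grey levels.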
Three years later and still no great solution. I found two good ones:
On the ImageMagick command line you can use:
"convert in.svg +antialias out.png"
but be careful: my source said that "+" deactivates antialiasing and "-" activates it. The source is pretty old and this seems strange, and I couldn't try it myself.
Inkscape brought a great solution in an update (I tried this in Version 1.0.1):
In the advanced settings for "Export PNG" you can choose the antialiasing mode directly; "CAIRO_ANTIALIAS_NONE" works perfectly for me.

How to make a density plot in Python without losing information?

I would like to know how I can make a density plot in Python. I'm using the following code:
plt.hist2d(x[:,1], x[:,2], weights=log(y), bins=100)
where the x values are an array and y is how much energy there is in the respective pixel (I'm working with images of galaxies, but not FITS images). The problem with this code is that if I choose a small number of bins, for example 240, I can see the structures of the galaxy well, although distorted. If I choose 3000 bins, the image loses a lot of information: many values of y are not plotted at all. I will show the two examples below.
I tried plt.imshow, but it does not work; I get TypeError: Invalid dimensions for image data. The data I'm working with comes from HDF5 files.
I would like to be able to plot the image at high resolution, so the structures of the galaxy can be seen better. Is that possible?
Here are the images:
With the system you describe, you should set the bin size according to the size of the pixel. That way you would have the maximum resolution.
I suspect, however, that the maximum number of levels you can represent is 256.
If you would like more resolution, you may have to calculate the image yourself. According to this article, you can save images with up to 32-bit precision in grayscale. That would probably be overkill; 16-bit is nice, though. This calculation isn't that difficult, and PIL (the Python Imaging Library) has the tools to do the formatting work.
Of course, much depends on the resolution of the data you have available!
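For example, a minimal sketch of doing the binning yourself and writing a 16-bit grayscale image with PIL (x1, x2 and the weights w stand in for your own data, and the bin count is arbitrary):

import numpy as np
from PIL import Image

# x1, x2 are the coordinate arrays, w the per-point weights (e.g. log(y)).
x1, x2, w = np.random.rand(100000), np.random.rand(100000), np.random.rand(100000)

# Bin the data yourself, so the bin count matches the pixel resolution you want.
hist, _, _ = np.histogram2d(x1, x2, bins=2048, weights=w)

# Rescale to the full 16-bit range and save as a 16-bit grayscale PNG.
img = (hist / hist.max() * 65535).astype(np.uint16)
Image.fromarray(img).save('density.png')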

image rendering issue in psychopy

I am a long-time PsychoPy user, and I just upgraded to 1.81.03 (from 1.78.x). In one experiment, I present images (.jpgs) to the user and ask for a rating-scale response. The code worked fine before the update, but now I am getting weird artifacts on some images. For example, here is one image I want to show:
But here is what shows up [screencapped]:
You can see that one border is missing. This occurs for many of my images, though it is not always the same border, and sometimes two or three borders are missing.
Does anyone have an idea about what might be going on?
I received this information from the psychopy-users group (Michael MacAskill):
As a general point, you should avoid using .jpgs for line art: they aren't designed for this (if you zoom in, in the internal corners of your square, you'll see the typical compression artefacts that their natural image-optimised compression algorithm introduces when applied to line art). .png format is optimal for line art. It is lossless and for this sort of image will still be very small file-size wise.
Graphics cards sometimes do scaling-up and then down-scaling of bitmaps, which can lead to issues like this with single-pixel-width lines. Perhaps this is particularly the issue here because (I think) this image was supposed to be 255 × 255 pixels, and cards will sometimes scale up to the nearest power-of-two size (256 × 256) and then down again, so it is easy to see how the border might get trimmed.
I grabbed your image off SO, it seemed to have a surrounding border around the black line to make it 321 × 321 in total. I made that surround transparent and saved it as .png (another benefit of png vs jpg). It displays without problems (although a version cropped to just the precise dimensions of the black line did show the error you mentioned). (Also, the compression artefacts are still there, as I just made this png directly from the jpg). See attached file.
If this is the sort of simple stimulus you are showing, you might want to use ShapeStim/Polygon stimuli instead of bitmaps. They will always be drawn precisely, without any scaling issues, and there wouldn't be the need for any jiggery pokery.
Why this changed from 1.78 I'm not sure. The issue is also there in 1.82.00
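For anyone who wants to try the ShapeStim/Rect suggestion above, here is a minimal sketch (window size, units and the square's dimensions are placeholders of mine):

from psychopy import visual, core

# Draw the frame as vector geometry instead of a bitmap, so no texture rescaling occurs.
win = visual.Window(size=[800, 600], units='pix', color='white')
frame = visual.Rect(win, width=255, height=255, lineWidth=1,
                    lineColor='black', fillColor=None)
frame.draw()
win.flip()
core.wait(2.0)
win.close()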

matplotlib animation movie: quality of movie decreasing with time

I am trying to create a movie with the animation.FuncAnimation function in matplotlib. The movie looks fine interactively, but when I save it with the command
anim2.save('somefilm.mp4',codec='mpeg4', fps=15)
It starts out fine, but then becomes blurry (in both QuickTime and VLC, so I figured it's the movie, not the player).
I've played around with blitting, since I thought it was maybe the fact that the canvas wasn't redrawn, but to no avail. Increasing the bitrate also doesn't help.
Setting dpi=500 does improve the quality of the movie somewhat, though then it gets stuck repeatedly, which makes it difficult to watch.
I was just wondering whether this is the best one can do, or am I missing something?
To dig into this problem it is important to understand that video files are usually compressed with highly lossy compression, whereas the interactive display is not compressed at all. The usual video codecs are often extremely bad with graphs, so it largely comes down to the compression parameters.
There are four things you can do:
set the image resolution (via dpi), but this may actually make the output look worse, as the problem is usually not a lack of pixels
set the image bitrate (by bitrate); the higher your bitrate, the better your movie will be - one possibility is to set bitrate=-1 and let matplotlib choose the best bitrate
change the codec (e.g., to codec="libx264")
give extra arguments to the codec (e.g., extra_args=['-pix_fmt', 'yuv420p'])
Unfortunately, these options really depend on the video codec, which is a third-party program (usually ffmpeg), the intended use of your video, and your platform. I would start by adding the kwarg bitrate=-1 to see if it improves things.
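For example, combining several of those options in one call might look like this (the file name and frame rate are just placeholders):

# anim is your FuncAnimation object; bitrate=-1 lets matplotlib pick a bitrate.
anim.save('somefilm.mp4', writer='ffmpeg', fps=15, bitrate=-1,
          codec='libx264', extra_args=['-pix_fmt', 'yuv420p'])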
If you cannot make it work, please add a full (but as simple as possible) example of how to create a bad file. Then it is easier to debug!
I was having the same problem while animating ~3500 frames of some subsurface water current vectors over a basemap, and I finally fixed it. I had been trying to set the bitrate in the anim.save call but was still getting the same blurriness later in the animation. What I had to do was set the bitrate when defining the writer:
import matplotlib.pyplot as plt
from matplotlib import animation

# Point matplotlib at the ffmpeg binary and set the bitrate on the writer itself.
plt.rcParams['animation.ffmpeg_path'] = 'C:/ffmpeg/bin/ffmpeg.exe'
writer = animation.FFMpegWriter(bitrate=500)
anim.save('T:/baysestuaries/USERS/TSansom/Tiltmeters/testdeployment/tilt2.mp4',
          writer=writer, fps=8)
If I set the bitrate to anything less than 500, the animation still got blurry. bitrate=-1 and codec='libx264' did nothing for me. Hope this helps!

Using Python to convert color formats?

I'm working on a Python tool to convert image data into these color formats:
RGB565
RGBA5551
RGBA4444.
What's the simplest way to achieve this?
I've used the Python Imaging Library (PIL) frequently. So I know how to load an image and obtain each pixel value in RGBA8888 format. And I know how to write all the conversion code manually from that point.
Is there an easier way? Perhaps some type of 'formatter' plugin for PIL?
Does PIL already support some of the formats I'm targeting? I can't ever figure out which formats PIL really supports without digging through all of the source code.
Or is there a better library than PIL to accomplish this in Python?
Any tips would be appreciated. Thanks!
Changing something from 8 to 5 bits is trivial. In 8 bits the value is between 0 and 255; in 5 bits it's between 0 and 31, so all you need to do is divide the value by 8 (or by 4 for the green channel in RGB565 mode, or by 16 in RGBA4444 mode, since it uses 4 bits per channel, etc.).
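As a concrete sketch of that arithmetic (the function name is mine), packing an 8-bit-per-channel pixel into a 16-bit RGB565 value could look like:

def rgb888_to_rgb565(r, g, b):
    # 8 -> 5 bits is a divide by 8 (>> 3); green keeps 6 bits in RGB565 (>> 2).
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)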
Edit: Reading through your question again, I think there is some confusion (either on my part or yours). RGB565 and RGBA4444 etc. are not really file formats, like GIF or JPG; they are color spaces. That conversion is trivial (see above). What file format you want to save it in later is another question. Most file formats have limited support for color spaces. I think, for example, that JPEG always saves in YCbCr (but I could be mistaken), GIF uses a palette (which in turn is always RGB888, I think), etc.
There's a module called Python Colormath which provides a lot of different conversions. Highly recommended.
Numpy is powerful indeed, but to get there and back to PIL requires two memory copies. Have you tried something along the following lines?
from PIL import Image

im = Image.open('yourimage.png')
im.putdata([yourfunction(r, g, b) for (r, g, b) in im.getdata()])
This is quite fast (especially when you can use a lookup table). I am not familiar with the colour spaces you mention, but as I understand it you already know the conversion, so implementing yourfunction(r, g, b) should be straightforward.
Also, im.convert('RGBA', matrix) might be very powerful, as it is super fast at applying a colour transformation through the supplied matrix. However, I have never gotten it to do what I wanted... :-/
There is also a module named Grapefruit that lets you do conversions between quite a lot of color formats.
I ended up doing the conversions manually as Lennart Regebro suggested.
However, pure Python (iterating over each pixel) turned out to be too slow.
My final solution used PIL to load the image and numpy to operate on (convert) an array of pixels.
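For anyone curious, here is a rough sketch of that approach for RGB565 (the file names and the raw output format are my own assumptions):

import numpy as np
from PIL import Image

# Load with PIL, then convert with vectorised numpy operations instead of a
# per-pixel Python loop. The result is a flat array of 16-bit RGB565 words.
rgba = np.asarray(Image.open('input.png').convert('RGBA'), dtype=np.uint16)
r, g, b = rgba[..., 0], rgba[..., 1], rgba[..., 2]
rgb565 = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
rgb565.astype('<u2').tofile('output.rgb565')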
