Python qrcode not consistent

I've got a very strange problem with python-qrcode.
I've had it working in our dev environment for a while now, without any issues. We use it to create two QR codes both of which contain URLs of almost exactly the same length (one contains an extra letter and two extra slashes). It's crucial that these two codes be exactly the same size.
Since we set up python-qrcode about five months ago, every single QR code we have generated has been exactly the same size, without fail. However, we've now pushed everything through to a production server and suddenly we have a problem.
Basically, one of the codes we generate is bigger than normal (this is the one with the three extra characters). The other code is the correct size. The two codes are generated using exactly the same function, we just pass the different URL to be encoded.
On my local machine and on our dev server, all the qrcodes are exactly the same size (including the one with the extra characters), but on the production server, the longer one is bigger while the other is correct.
We use Git version control, so all the files/functions etc are identical between the servers. The only difference between the setups is the version of Ubuntu (12.04 vs 12.10 on the production server), but I can't see why that would cause this issue.
If both codes were bigger, I could understand, but I can't work out why one would be bigger than the other on only one server?
If anyone can make any suggestion as to where to start working this out, I'd be very grateful!
EDIT:
Here's the relevant code:
myQrGenerator = qrcode.QRCode(
    version=QRCODE_SIZE,
    error_correction=qrcode.constants.ERROR_CORRECT_M,
    box_size=QRCODE_BOX_SIZE,
    border=QRCODE_BORDER_SIZE
)
myQrGenerator.add_data('%s%s/' % (theBaseUrl, str(theHash)))
myQrGenerator.make(fit=True)
We get those variables from local_settings.py

After a lengthy discussion it was established that the two servers used different URLs. The one that produced a larger QR code (in QR modules, and consequently in image pixels) overflowed: the number of bits the predefined version could store was not enough, and qrcode made the data fit by bumping the version up, increasing the amount of data the symbol could hold.
To fix this, fit was set to False so that an overflow raises an error instead of silently resizing, and version was increased to accommodate more bits from the start. box_size can be decreased a bit to keep the image at roughly its original size.
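The size jump is easy to reason about from the QR spec: a version-v symbol is 4v + 17 modules per side, and the rendered image is (modules + 2 × border) × box_size pixels. A minimal sketch (the helper name is mine, not part of python-qrcode) of why an auto-bumped version grows the image, and how raising version up front while trimming box_size keeps the output close to the original size:

```python
def qr_image_px(version, box_size, border):
    """Pixel width of a square QR image: a version-v symbol is
    (4*v + 17) modules per side, plus the quiet-zone border on each side."""
    modules = 4 * version + 17
    return (modules + 2 * border) * box_size

# With fit=True, an overflow silently bumps the version, growing the image:
print(qr_image_px(5, 10, 4))   # 450 px
print(qr_image_px(6, 10, 4))   # 490 px -- the "mystery" size jump
# Raising version from the start and shrinking box_size stays close:
print(qr_image_px(6, 9, 4))    # 441 px
```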

Probably a difference in the way PIL is installed on the box. Looking at the python-qrcode source, it does:
try:
    from PIL import Image, ImageDraw
except ImportError:
    import Image, ImageDraw
See what happens when you do:
from PIL import Image, ImageDraw
on each machine. Either way, if it really isn't a bug in the application code (make doubly sure the same code is on each box), it's probably down to some difference in how PIL builds itself on Ubuntu 12.10 vs 12.04, likely due to different headers/libs it compiles against. Look into making the PIL installation consistent with the other boxes.
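A tiny diagnostic script mirroring python-qrcode's fallback import makes the check on each box mechanical (the printed labels are mine, just for illustration):

```python
# Which PIL does this interpreter actually pick up, and from where?
try:
    from PIL import Image, ImageDraw   # modern, packaged install
    source = "PIL package: " + getattr(Image, "__file__", "?")
except ImportError:
    try:
        import Image, ImageDraw        # legacy top-level install
        source = "top-level module: " + getattr(Image, "__file__", "?")
    except ImportError:
        source = "no PIL found"
print(source)
```

If the two servers print different results, that is the inconsistency to chase.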

Related

Strange behavior in pyvips, impossible to write some images

I'm currently trying to make pyvips work for a project where I need to manipulate images of a "big but still sensible" size, between 1920x1080 and 40000x40000.
The installation worked well, but these particular 2 lines sometimes work, sometimes don't.
img = pyvips.Image.new_from_file('global_maps/MapBigBig.png')
img.write_to_file('global_maps/MapTest.png')
It seems that for the biggest images, I get the following error message when writing back the image (the loading works fine):
pyvips.error.Error: unable to call VipsForeignSavePngFile
pngload: arithmetic overflow
vips2png: unable to write to target global_maps/MapFishermansRowHexTest.png
I say it seems, because the following lines work perfectly well (with a size of 100 000 x 100 000, far bigger than the problematic images):
size = 100000
test = pyvips.Image.black(size, size, bands=3)
test.write_to_file('global_maps/Test.png')
I could not find an answer anywhere; do you have any idea what I'm doing wrong?
EDIT:
Here is a link to an image that does not work (it weighs 102 MB).
This image was created with pyvips from an image 40 times smaller, this way:
img = pyvips.Image.new_from_file('global_maps/MapNormal.png')
out = img.resize(40, kernel='linear')
out.write_to_file('global_maps/MapBigBig.png')
And it can be read using Paint 3D or GIMP.
I found your error message in libspng:
https://github.com/randy408/libspng/blob/master/spng/spng.c#L5989
It looks like it's being triggered if the decompressed image size would go over your process pointer size. If I try a 32-bit libvips on Windows I see:
$ ~/w32/vips-dev-8.12/bin/vips.exe copy MapFishermansRowHexBigBig.png x2.png
pngload: arithmetic overflow
vips2png: unable to write to target x2.png
But a 64-bit libvips on Windows works fine:
$ ~/vips-dev-8.12/bin/vips.exe copy MapFishermansRowHexBigBig.png x.png
$
So I think switching to a 64-bit libvips build would probably fix your problem. You'll need a 64-bit python too, of course.
I also think this is probably a libspng bug (or misfeature?), since you can read >4 GB images on a 32-bit machine as long as you don't try to read them all in one go (libvips reads in chunks, so it should be fine). I'll open an issue on the libspng repo.
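A back-of-the-envelope check (assuming 8-bit RGBA, four bytes per pixel once decoded) shows why the fully decompressed image can't be addressed by a 32-bit process even though the PNG file itself is only ~100 MB:

```python
# Rough size check: libspng guards the decompressed size against the
# process address-space limit, which is 2**32 - 1 bytes on 32-bit builds.
width = height = 40_000          # the failing map is roughly this size
channels = 4                     # RGBA, one byte per channel
decompressed = width * height * channels   # bytes once fully decoded
print(decompressed)              # 6400000000, about 6.4 GB
print(decompressed > 2**32 - 1)  # True: cannot fit in a 32-bit size
```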

Geotiff overlay position is slightly off on Holoviews/Bokeh tilemap

I have a Geotiff that I display on a tile map, but it's slightly off to the south. For example, on this screenshot the edge of the image should be where the country border is, but it's a bit to the south:
Here's the relevant part of the code:
tiff_rio_500 = rioxarray.open_rasterio('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif')
dataarray_500 = tiff_rio_500[0]
dataarray_500_meters = dataarray_500.copy()
dataarray_500_meters['x'], dataarray_500_meters['y'] = ds.utils.lnglat_to_meters(dataarray_500.x, dataarray_500.y)
hv_dataset_500_meters = hv.Dataset(dataarray_500_meters, name='nightlights', vdims='cumulative_cost')
hv_tiles_osm_bokeh = hv.element.tiles.OSM().opts(width=1000, height=800)
hv_image_500_meters_bokeh = hv.Image(hv_dataset_500_meters, kdims=['x', 'y'], vdims=['cumulative_cost'], rtol=1).opts(cmap='inferno_r')
hv_combined_osm_500_meters_bokeh = hv_tiles_osm_bokeh * hv_image_500_meters_bokeh
hv_combined_osm_500_meters_bokeh
You can see the live notebook on google colab.
Now this is not the usual "everything is way off" problem that occurs when one doesn't convert the map to Web Mercator. It is almost perfect, it just isn't.
The Geotiff is an Earth Engine export. This is how it looked originally in Earth Engine (see live code):
As you can see, the image follows the borders everywhere.
At first, I suspected that maybe the export went wrong, or the Google Maps tileset is somewhat different, but no: if I open the same exported TIFF in the QGIS application on my Windows laptop and view it on the same OSM tilemap as I do in the Colab notebook, it looks fine:
Okay, the image does not follow the borders perfectly, but I know why and that's unrelated (I oversimplified the country border geometry). The point is, that it is projected to the correct location. So based on that, the tiff contains the correct information, it can be displayed at the same location as the borders are in the OSM tilemap, but still in my Holoviews-Datashader-Bokeh project it is slightly off.
Any idea why this happens?
I've got the answer on the Holoviz Discourse from one of the developers. Seeing how the recommended function is practically undocumented, I copy it here in case somebody looks for an easy way to load a geotiff and add to a tilemap in Holoviews/Geoviews:
https://discourse.holoviz.org/t/geotiff-overlay-position-is-slightly-off-on-holoviews-bokeh-tilemap/2071
philippjfr
I wouldn't expect manually transforming the coordinates to work particularly well. While it's a much heavier-weight dependency, for accurate coordinate transforms I'd recommend using GeoViews.
img = gv.util.load_tiff('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif')
gv.tile_sources.OSM() * img.opts(cmap='inferno_r')
Edit: It's possible one doesn't want to use GeoViews, as it has a pretty heavy dependency chain that requires a lot of patience and luck to set up right. Fortunately, rioxarray (through rasterio) has a tool to reproject: just append ".rio.reproject('EPSG:3857')" to the first line, and then you don't need lnglat_to_meters, which is not intended for this purpose.
So the corrected code becomes:
tiff_rio_500 = rioxarray.open_rasterio('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif').rio.reproject('EPSG:3857')
hv_dataset_500_meters = hv.Dataset(tiff_rio_500[0], name='nightlights', vdims='cumulative_cost')
hv_tiles_osm_bokeh = hv.element.tiles.OSM().opts(width=1000, height=800)
hv_image_500_meters_bokeh = hv.Image(hv_dataset_500_meters, kdims=['x', 'y'], vdims=['cumulative_cost'], rtol=1).opts(cmap='inferno_r')
hv_combined_osm_500_meters_bokeh = hv_tiles_osm_bokeh * hv_image_500_meters_bokeh
hv_combined_osm_500_meters_bokeh
Compared to the GeoViews solution (which supposedly handles everything automatically), this one has a downside: if you use a hover tooltip to display the values and coordinates under the mouse cursor, the coordinates show up in the newly projected Web Mercator system, in millions of meters, instead of the expected degrees. The solution for that is outside the scope of this answer, but I'm just finishing a detailed step-by-step guide that includes it, and I will link it here as soon as it is published. Of course, if you don't use a hover tooltip, the code above will work for you without any more tinkering.
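For reference, the manual conversion the original code relied on is roughly the spherical Web Mercator formula below (a sketch of what ds.utils.lnglat_to_meters computes; the helper name here is mine, for illustration). It treats the Earth as a sphere, while a proper reprojection such as rio.reproject handles the datum correctly, which is one place small offsets like this can come from:

```python
import math

R = 6378137.0                  # WGS84 semi-major axis, meters
ORIGIN_SHIFT = math.pi * R     # half the Web Mercator world width

def lnglat_to_meters(lon, lat):
    """Spherical Web Mercator: degrees -> EPSG:3857-style meters."""
    x = lon * ORIGIN_SHIFT / 180.0
    y = math.log(math.tan((90.0 + lat) * math.pi / 360.0)) * ORIGIN_SHIFT / math.pi
    return x, y

# the antimeridian maps to half the world width:
print(round(lnglat_to_meters(180.0, 0.0)[0], 2))   # 20037508.34
```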

How can I align star field FITS images taken with a CCD in Python?

I have seven star field images taken with a CCD; they are FITS files. I'm trying to align them in Python, but I'm confused. This is my very first attempt at aligning images. I found a few modules related to aligning FITS images, but they seem very confusing to me. I need help.
The APLpy module (https://aplpy.github.io/) does what you need to do.
However, it might not be the most straightforward thing to use for a first-timer.
What I would recommend is using PyRAF, which is a Python wrapper for the IRAF data reduction software developed by NOAO (National Optical Astronomy Observatory) in the 80's/90's to deal with CCD data reduction.
You can get pyraf by typing pip install pyraf. Once you have pyraf, I would recommend following Josh Wallawender's IRAF tutorial; skip to Section V ("Basic Reduction Steps for Imaging Data"). Keep in mind you are using PyRAF, so any IRAF-specific things (sections I-IV) don't necessarily apply to you. PyRAF is a much easier to use system.
The specific PyRAF tasks you need are imalign and imcombine. You'll also need to give a file with the rough shifts between each image (the help file for imalign is a fantastic resource, btw, and you can access it via epar imalign and clicking on the "Help" button when the GUI pops up).
I hope this gives you a starting point. There are other ways to do image combining in python, but astropy is kind of finicky for first-time users.
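If you'd rather stay in pure Python, the rough per-image shifts that imalign's shift file expects can be estimated with a numpy FFT cross-correlation, assuming the frames differ only by translation (a sketch; for real FITS data you'd load the arrays with astropy.io.fits first, and the synthetic star field below just stands in for a frame):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame back to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# synthetic "star field": a few bright pixels on a dark background
rng = np.random.default_rng(0)
ref = np.zeros((128, 128))
ref[rng.integers(10, 118, 20), rng.integers(10, 118, 20)] = 1000.0
shifted = np.roll(ref, (5, -3), axis=(0, 1))   # simulate telescope drift
print(estimate_shift(ref, shifted))            # (5, -3)
```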

How to most efficiently correct the gamma value of QImages/QPixmaps or QGraphicsView or QGraphicsItem or QGraphicsProxyWidget in PySide (Qt4, Python)?

I am trying to change the gamma value of a video widget (Phonon.VideoWidget) displayed in a QGraphicsView via a proxy. I noticed the QGraphicsEffect from QtGui works on it via proxy, but there are only defaults for blur, single color overlay, and drop shadow. Phonon.VideoWidget itself has options for brightness, contrast, and even hue that work great, but strangely no option for gamma correction. QGraphicsEffects are not very fast, but they definitely work with the video widget playing media via the graphics proxy. I decided I would start by creating my own QGraphicsEffect, but in its current state that does not seem possible.
I started with something simpler: gamma-correcting a single QPixmap (I take the QImage from it; since QGraphicsEffect seems to work primarily with QPixmaps in its virtual draw function, I figure this is a good place to start). It worked. However, it is currently far too slow even with numpy (setPixel for QImage is even slower), taking 5 seconds or more to convert one 1200x900, 400 KB JPEG image.
The way I do it: I create a list of gamma-corrected values from 0-255 based on the gamma value entered. Then, using numpy with the QImage from the pixmap, I create an array that points to the image data and edit each pixel with its corresponding value from the table (so no extra calculations are made per pixel). The meat of the code is as follows:
gammaTable = []
for i in xrange(256):
    gammaPixel = CorrectGamma(i, gammaValue)  # returns numpy.int8 or int, works properly
    gammaTable.append(gammaPixel)

qimage = myPixmap.toImage()
bytes = qimage.bits()  # qimage.constBits()
# the buffer is laid out rows first, so the shape is (height, width, 4)
imageBytes = numpy.asarray(bytes).reshape(qimage.height(), qimage.width(), 4)

# Change each pixel
for row in xrange(qimage.height()):
    for col in xrange(qimage.width()):
        imageBytes[row, col, 0] = gammaTable[imageBytes[row, col, 0]]
        imageBytes[row, col, 1] = gammaTable[imageBytes[row, col, 1]]
        imageBytes[row, col, 2] = gammaTable[imageBytes[row, col, 2]]
        imageBytes[row, col, 3] = gammaTable[imageBytes[row, col, 3]]
return qimage
I only run it once at the start of the program, and it works, but it is far too slow. I also tried using QImage.scanLine(), but I honestly am not sure how to use it, and there is no setScanLine() function to work with either; constScanLine returns an uneditable array. Another default I looked into was QImage.colorTable(), but it is always empty for me, so I wasn't able to work with it.
I am considering trying OpenCV for Python, but I am not sure it will suit my needs. I did see a YouTube video earlier of someone claiming to use Qt and OpenCV to create what seemed like an overlay (it looked like a noise filter and night filter) over a video stream in a widget, and that is more or less what I need. The video did not explain anything, though, and could have been C++ Qt (I am working in Python and Qt4, via PySide).
I have a good feeling that, if I did not do something wrong, Python may simply be too slow for this. Currently the bottleneck seems to be iterating over and changing each value of the numpy imageBytes array that points to the QImage's data. I don't know how to work with C++ and compile for PySide/Python, and I'm not sure that, even if I could, I could translate this to C++, especially when it comes to dealing with pointers and sharing between Python and C++.
I also have a feeling I am missing out on a solution somewhere but I have not discovered any other options to think about or try. I was perhaps thinking that maybe I could put some kind of overlay over the graphics items but I realize it makes little sense and there seems to be no such thing (closest is QGraphicsEffect).
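For the record, the nested per-pixel loop in the snippet above is exactly what numpy fancy indexing eliminates: build the 256-entry lookup table once as an array, then remap the whole buffer in a single vectorized pass. A sketch with a synthetic frame (the power-law curve here stands in for your CorrectGamma, and the array stands in for qimage.bits()):

```python
import numpy as np

def apply_gamma(image_bytes, gamma):
    """Remap an (H, W, 4) uint8 buffer through a gamma lookup table.
    The table is built once; fancy indexing then replaces the four
    nested per-pixel assignments with one vectorized operation."""
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    # remap RGB in place (so a real QImage buffer sees the change);
    # leave the alpha channel untouched
    image_bytes[..., :3] = lut[image_bytes[..., :3]]
    return image_bytes

frame = np.full((900, 1200, 4), 64, dtype=np.uint8)   # stand-in for qimage.bits()
apply_gamma(frame, 2.2)
print(frame[0, 0, 0])   # 136: brightened, while alpha stays 64
```

This runs in milliseconds on a 1200x900 frame, since the loop moves from Python into numpy's C internals.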

Python: Manipulating a 16-bit .tiff image in PIL &/or pygame: convert to 8-bit somehow?

Hello all,
I am working on a program which determines the average colony size of yeast from a photograph, and it is working fine with the .bmp images I tested it on. The program uses pygame, and might use PIL later.
However, the camera/software combo we use in my lab will only save 16-bit grayscale TIFFs, and pygame does not seem to be able to recognize 16-bit TIFFs, only 8-bit. I have been reading up for the last few hours on easy ways around this, but even the Python Imaging Library does not seem to be able to work with 16-bit TIFFs; I've tried, and I get "IOError: cannot identify image file".
import Image
img = Image.open("01 WT mm.tif")
My ultimate goal is to have this program be user-friendly and easy to install, so I'm trying to avoid adding additional modules or requiring people to install ImageMagick or something.
Does anyone know a simple workaround to this problem using freeware or pure python? I don't know too much about images: bit-depth manipulation is out of my scope. But I am fairly sure that I don't need all 16 bits, and that probably only around 8 actually have real data anyway. In fact, I once used ImageMagick to try to convert them, and this resulted in an all-white image: I've since read that I should use the command "-auto-levels" because the data does not actually encompass the 16-bit range.
I greatly appreciate your help, and apologize for my lack of knowledge.
P.S.: Does anyone have any tips on how to make my Python program easy for non-programmers to install? Is there a way, for example, to somehow bundle it with Python and pygame so it's only one install? Can this be done for both Windows and Mac? Thank you.
EDIT: I tried to open it in GIMP, and got 3 errors:
1) Incorrect count for field "DateTime" (27, expecting 20); tag trimmed
2) Sorry, can not handle images with 12-bit samples
3) Unsupported layout, no RGBA loader
What does this mean and how do I fix it?
py2exe is the way to go for packaging up your application if you are on a Windows system.
Regarding the 16bit tiff issue:
This example http://ubuntuforums.org/showthread.php?t=1483265 shows how to convert for display using PIL.
Now for the unasked portion question: When doing image analysis, you want to maintain the highest dynamic range possible for as long as possible in your image manipulations - you lose less information that way. As you may or may not be aware, PIL provides you with many filters/transforms that would allow you enhance the contrast of an image, even out light levels, or perform edge detection. A future direction you might want to consider is displaying the original image (scaled to 8 bit of course) along side a scaled image that has been processed for edge detection.
Check out http://code.google.com/p/pyimp/wiki/screenshots for some more examples and sample code.
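To keep the dynamic-range advice concrete, here is a hedged sketch of scaling a 16-bit frame down to 8-bit using its actual min/max (much like the -auto-levels trick you mentioned), with a small numpy array standing in for however you load the pixel data:

```python
import numpy as np

def to_uint8_autoscale(img16):
    """Scale a 16-bit grayscale frame to 8-bit using its actual dynamic
    range, so dim data doesn't come out all black (or all white)."""
    lo, hi = int(img16.min()), int(img16.max())
    if hi == lo:
        return np.zeros(img16.shape, dtype=np.uint8)
    scaled = (img16.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return scaled.astype(np.uint8)

# 12 bits of real signal stored in a 16-bit container, as your camera does
frame = np.array([[0, 1024, 4095]], dtype=np.uint16)
print(to_uint8_autoscale(frame))
```

A naive divide-by-256 would map the whole 0-4095 range into 0-15, which is why your straight ImageMagick conversion came out looking blank.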
I would look at pylibtiff, which has a pure python tiff reader.
For bundling, your best bet is probably py2exe and py2app.
This is actually a 2 part question:
1) 16 bit image data mangling for Python - I usually use GDAL + Numpy. This might be a bit too much for your requirements, you can use PIL + Numpy instead.
2) Release engineering Python apps can get messy. Depending on how complex your app is you can get away with py2deb, py2app and py2exe. Learning distutils will help too.
