Does anyone know of a Python module or library capable of modifying EXIF and IPTC data in Adobe RAW files (.dng)? Until about eight years ago I shot JPEG and could make such modifications fairly easily with Python. Since switching to RAW, I have had to use GUI image tools to modify EXIF info.
Primarily the EXIF Taken date is of interest to be modified, but some IPTC-fields are also candidates of modification.
(I'm geotagging photos from my cameras, each of which has an RTC that drifts in its own direction and by its own amount. My 'worst' camera 'hurries' ~2.4 seconds per day. Before matching photo timestamps against .gpx data from a GPS logger, I need to shift the Taken date by an amount that depends on the number of days since the camera clock was last set.)
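The drift correction itself is plain date arithmetic, independent of whatever metadata library ends up doing the writing. A minimal sketch using only the standard library (the function names and the example dates are mine, not from any particular tool):

```python
from datetime import datetime, timedelta

def clock_offset(taken, clock_set_on, drift_sec_per_day):
    """Seconds the camera clock has crept ahead (or behind, if
    drift_sec_per_day is negative) since it was last set."""
    days = (taken - clock_set_on).total_seconds() / 86400.0
    return timedelta(seconds=days * drift_sec_per_day)

def corrected(taken, clock_set_on, drift_sec_per_day):
    # Subtract the accumulated creep to recover the true capture time.
    return taken - clock_offset(taken, clock_set_on, drift_sec_per_day)

# Example: a camera that hurries ~2.4 s/day, set 10 days before the shot.
set_on = datetime(2015, 6, 1, 12, 0, 0)
taken = datetime(2015, 6, 11, 12, 0, 0)
print(corrected(taken, set_on, 2.4))  # 24 s earlier: 2015-06-11 11:59:36
```

The corrected value is what you would then write back into Exif.Photo.DateTimeOriginal (or the Taken date field of your tool of choice) before matching against the .gpx track.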
In one of my projects I use GExiv2 (https://wiki.gnome.org/Projects/gexiv2) with the PyGObject bindings (https://wiki.gnome.org/Projects/PyGObject). GExiv2 is a wrapper around exiv2, which can read & write Exif, IPTC and XMP metadata in DNG files: http://www.exiv2.org/manpage.html
Related
I am on OSX, running Python, and trying to extract EXIF data from a large set of images in my library.
I've been using Pillow so far with my JPG photos and it works like a charm.
However, I stumbled on the first PNG photo I hit.
I am able to view a lot of the EXIF data on Mac though, using the photo inspector.
First, it seems that Pillow doesn't support _getexif on PNGs.
Then I tried switching into pyexiv2, but that one hits an installation issue.
exiftool also didn't work for me.
Any idea whether there is a python way of extracting EXIF data on OSX?
The PNG standard only started supporting EXIF in 2017, so make sure it's actually EXIF and not some other metadata chunk that looks like EXIF. See this Stack Overflow question and its answers for details.
You can use PyExifTool to extract EXIF data.
If you want to use a GUI you can use PyExifToolGui.
Make sure you have the latest version of the command-line ExifTool, which recently added PNG EXIF support. You say that ExifTool doesn't work for you but don't give any details about the issue; perhaps you just need to update your version. In the previously referenced thread, the ExifTool author states that ExifTool supports PNG EXIF:
The PNG group has recently been considering adding a new "eXIf" and/or
"zXIf" chunk to store EXIF information. ExifTool 10.43 added support
for "exIf" and "zxIf" chunks in support of this. – PhilHarvey May 26
'17 at 14:49
A few months ago I was able to download TMY3 data from NSRDB and use it with pvlib.tmy.readtmy3.
Now I have tried to download files for other locations but these seem to come in a different format. I am using NSRDB Data Viewer, more specifically the Data Download Wizard. I click on MTS2, as this seems to be the only model that now provides data in TMY3, and I click on the TMY3 button when I select the file for download. But the internal structure of the obtained CSV file is clearly different from what I got a few months ago and is also clearly different from what pvlib.tmy.readtmy3 expects (I have checked the current python source code).
At https://nsrdb.nrel.gov/tmy I get the following info:
Format of TMY Data
All TMY data are now in the System Advisor Model (SAM) CSV file
format. Formerly, TMY data were available only through TMY file
formats (i.e., TMY, TMY2, TMY3). By switching to the more
user-friendly SAM CSV, TMY data are more flexible than ever and can be
plugged into the vast majority of solar modeling programs.
This seems to imply that data is no longer available in TMY3 format, even though TMY3 data seems to be available in the NSRDB Data Download Wizard.
Do I need to write my own code to adapt NSRDB files to what pvlib expects?
The TMY3 files available at the link below are readable by pvlib:
https://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/tmy3/by_state_and_city.html
I'm not experienced with the NSRDB Data Viewer or how the format of its TMY3 files might differ. We'd welcome a contribution to improve compatibility with the new files, if necessary.
I'm using OpenCV with Python, but I can actually switch to C++, so if that matters please answer with it in mind.
I'm writing an .avi file (joining multiple .avi files into one) using
cv2.VideoWriter([filename, fourcc, fps, frameSize[, isColor]])
but I recently found out that I can't write an .avi file larger than 2 GB with it. The documentation even mentions it: "Due to this OpenCV for video containers supports only the avi extension, its first version. A direct limitation of this is that you cannot save a video file larger than 2 GB."
But right now I've got no time to learn a new library like FFmpeg; I need to do this very quickly.
How can I write this file using C++ or Python with OpenCV, or at least handle the input part using cv::Mat as frames?
This limitation was removed in OpenCV 3.0, thanks to the introduction of new file formats such as .mkv, which do support video files larger than 2 GB.
See Does OpenCV 3.0 Still Has Limits On VideoWriter Size?.
NOTE: The documentation and examples weren't updated yet, so maybe this should be considered experimental.
You have answered your own question, but I'm afraid it isn't the answer you want.
From your link
As you can see things can get really complicated with videos. However, OpenCV is mainly a computer vision library, not a video stream, codec and write one. Therefore, the developers tried to keep this part as simple as possible. Due to this OpenCV for video containers supports only the avi extension, its first version. A direct limitation of this is that you cannot save a video file larger than 2 GB. Furthermore you can only create and expand a single video track inside the container. No audio or other track editing support here. Nevertheless, any video codec present on your system might work. If you encounter some of these limitations you will need to look into more specialized video writing libraries such as FFMpeg or codecs as HuffYUV, CorePNG and LCL.
What this paragraph says is that the developers of OpenCV made a design choice: you cannot write video files larger than 2 GB using OpenCV, for the specific reason that it is a computer vision library, not a video tool.
Unfortunately, if you want to write videos larger than 2 GB you are going to need to learn FFmpeg or something similar (it isn't that hard, and it has good bindings to OpenCV).
I've got a program that downloads part01, then part02, and so on, of a RAR file split across the internet.
After some tests, I found out that using, for example, UnRAR2 for Python, I can extract the first part of the file (an .avi file) contained in the archive and play it for the first few minutes. When I add another part it extracts a bit more, and so on. What I wonder is: is it possible to make it extract single files WHILE downloading them?
I'd need it to start extracting part01 without having to wait for it to finish downloading... is that possible?
Thank you very much!
Matteo
You are talking about an .avi file inside the rar archives. Are you sure the archives are actually compressed? Video files released by the warez scene do not use compression:
Ripped movies are still packaged due to the large filesize, but compression is disallowed and the RAR format is used only as a container. Because of this, modern playback software can easily play a release directly from the packaged files, and even stream it as the release is downloaded (if the network is fast enough).
(I'm thinking VLC, BSPlayer, KMPlayer, Dziobas Rar Player, rarfilesource, rarfs,...)
You can check for the compression as follows:
Open the first .rar archive in WinRAR (name.part01.rar, or name.rar for old-style volume names).
Click the info button.
If "Version to extract" indicates 2.0, then the archive uses no compression (unless you have decade-old RARs). You will see that Total size and Packed size are equal.
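The same check can be done programmatically. A sketch, assuming the classic RAR 2.x/3.x block layout (7-byte block header, then for file headers: pack size, unpacked size, host OS, CRC, time, version, method, name size, attributes); method 0x30 means "storing", i.e. no compression. The function name is mine:

```python
import struct

RAR3_SIGNATURE = b"Rar!\x1a\x07\x00"
FILE_HEAD = 0x74  # block type of a file header

def is_stored(data):
    """Return True if every file in a RAR 2.x/3.x volume uses method
    0x30 ('storing', no compression). `data` is the raw bytes of the
    first volume."""
    if not data.startswith(RAR3_SIGNATURE):
        raise ValueError("not a RAR 2.x/3.x volume")
    pos = len(RAR3_SIGNATURE)
    while pos + 7 <= len(data):
        crc, btype, flags, hsize = struct.unpack_from("<HBHH", data, pos)
        if hsize < 7:
            break  # corrupt header; stop scanning
        if btype == FILE_HEAD:
            # Fixed file-header fields after the 7-byte block header:
            # pack_size(4) unp_size(4) host_os(1) crc(4) time(4)
            # unp_ver(1) method(1)  [then name_size(2) attr(4) name]
            pack_size, _, _, _, _, unp_ver, method = struct.unpack_from(
                "<IIBIIBB", data, pos + 7)
            if method != 0x30:        # anything else means compression
                return False
            pos += hsize + pack_size  # skip header plus stored data
        else:
            add = 0
            if flags & 0x8000:        # ADD_SIZE flag: block carries data
                add = struct.unpack_from("<I", data, pos + 7)[0]
            pos += hsize + add
    return True
```

When this returns True, the archive is just a container and the payload can be streamed without invoking an unrar engine at all.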
is it possible to make it extract single files WHILE downloading them?
Yes. When no compression is used, you can write your own program to extract the files. (I know of someone who wrote a script to directly download the movie from external rar files; but it's not public and I don't have it.) Because you mentioned Python I suggest you take a look at rarfile 2.2 by Marko Kreen like the author of pyarrfs did. The archive is just the file chopped up with headers (rar blocks) added. It will be a copy operation that you need to pause until the next archive is downloaded.
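The "copy operation that you need to pause" can be sketched in plain Python: a loop that appends each volume to the output as soon as that volume has finished downloading, and otherwise waits. The function name and the `is_ready` callback are placeholders of mine; stripping the rar block headers from each volume is left out of the sketch:

```python
import time

def stream_volumes(part_paths, out_path, is_ready, poll=1.0):
    """Append each volume to out_path as soon as it is fully
    downloaded, waiting (not failing) for parts that aren't ready
    yet. `is_ready(path)` is a caller-supplied check, e.g. 'no
    .part suffix exists' or 'file size matches the volume size'."""
    with open(out_path, "wb") as out:
        for path in part_paths:
            while not is_ready(path):   # pause until the next
                time.sleep(poll)        # volume is downloaded
            with open(path, "rb") as part:
                # For uncompressed rars you would strip the rar block
                # headers here; this sketch just copies the raw bytes.
                while chunk := part.read(1 << 16):
                    out.write(chunk)
```

With an uncompressed archive the bytes written this way are playable as soon as they arrive, which is exactly what players like VLC exploit when streaming from rar sets.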
I strongly believe it is also possible for compressed files. Your approach here will be different because you must use unrar to extract the compressed files. I have to add that there is also a free RARv3 implementation to extract rars implemented in The Unarchiver.
I think this parameter for (un)rar will make it possible:
-vp Pause before each volume
By default RAR asks for confirmation before creating
or unpacking next volume only for removable disks.
This switch forces RAR to ask such confirmation always.
It can be useful if disk space is limited and you wish
to copy each volume to another media immediately after
creation.
It will give you the possibility to pause the extraction until the next archive is downloaded.
I believe that this won't work if the rar was created with the 'solid' option enabled.
When the solid option is used for rars, all packed files are treated as one big file stream. This should not cause any problems if you always start from the first file even if it doesn't contain the file you want to extract.
I also think it will work with passworded archives.
I highly doubt it. By the nature of compression (as I understand it), every bit is needed to uncompress the data. It seems that the source you are downloading from intentionally broke the avi into pieces before compression, but once compression is applied, whatever was compressed becomes one atomic unit. So they kindly broke the whole avi into parts, but each part is still an atomic unit.
But I'm not an expert in compression.
The only test I can currently think of is something like: curl http://example.com/Part01 | unrar.
I don't know if this was asked with a specific language in mind, but it is possible to stream a compressed RAR directly from the internet and have it decompressed on the fly. I can do this with my C# library http://sharpcompress.codeplex.com/
The RAR format is actually kind of nice. It has headers preceding each entry and the compressed data itself does not require random access on the stream of bytes.
To do multi-part files, you'd have to fully extract part 1 first, then continue writing when part 2 is available.
All of this is possible with my RarReader API. Solid archives are also streamable (in fact, they're only streamable: you can't randomly access files in a solid archive, so you pretty much have to extract them all at once).
If you use Save As > JPG in Adobe Photoshop, a path (selection) is stored in the file.
Is it possible to read that path in python, for example to create a composition with PIL?
EDIT
ImageMagick seems to help, example
This code (by /F AKA the effbot, author of PIL and generally wondrous Python contributor) shows how to walk through the 8BIM resource blocks (but it's looking for 0x0404, the IPTC/NAA data, so of course you'll need to edit it).
Per Tom Ruark's post in this thread, paths will have IDs of 2000 to 2999 (the last of which gives the name of the clipping path, so it's different from the others), and the data is a series of 26-byte "point records" (so the resource length is always a multiple of 26).
Read the rest of Tom's post for all the gory details -- it's a pesky and very detailed binary format that will take substantial experimentation (and skill with struct, bitwise manipulation, etc.) to read and interpret just right (not helped by the fact that the fields can be big-endian or little-endian -- little-endian on Windows, if I read the post correctly).
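Walking the resource blocks themselves is the easy half of the job. A sketch with struct, assuming the documented Photoshop image-resource-block layout ('8BIM' signature, big-endian 2-byte ID, Pascal name padded to even length, big-endian 4-byte size, data padded to even length); the function names are mine:

```python
import struct

def iter_8bim_blocks(data):
    """Yield (resource_id, name, payload) for each Photoshop image
    resource block in `data` (the APP13 payload after the
    'Photoshop 3.0\\x00' signature)."""
    pos = 0
    while pos + 12 <= len(data):        # minimum block is 12 bytes
        if data[pos:pos + 4] != b"8BIM":
            break
        res_id = struct.unpack_from(">H", data, pos + 4)[0]
        name_len = data[pos + 6]
        name = data[pos + 7:pos + 7 + name_len].decode("latin-1")
        # Pascal string: length byte + text, padded to an even total.
        pos += 6 + ((name_len + 2) & ~1)
        size = struct.unpack_from(">I", data, pos)[0]
        pos += 4
        payload = data[pos:pos + size]
        pos += size + (size & 1)        # data is padded to even length
        yield res_id, name, payload

def clipping_paths(data):
    # Path resources live in IDs 2000-2998; each payload is a series
    # of 26-byte point records, hence length % 26 == 0.
    return [(rid, p) for rid, _, p in iter_8bim_blocks(data)
            if 2000 <= rid <= 2998]
```

Decoding the 26-byte point records inside each payload (record type, then fixed-point coordinates) is the part that needs the bit-level care the post describes.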
Are you sure the path is stored in the jpg? That seems unlikely. Paths would be stored in the native Photoshop format, but not in the jpg.
Do you know of any other tools that can read the path? Can you try saving the item as a jpg, close photoshop, reopen only the jpg and see if you still have the path? I doubt it'd be there.