I have a FAT (16-bit) filesystem image, and I want to parse the image so that I can extract the files it contains.
As far as reading a FAT filesystem image in Python goes, the Wikipedia page on FAT has all the detail you need to write a read-only implementation.
Construct may be of some use. Looks like they have an example for FAT16 (https://github.com/construct/construct/blob/master/construct/examples/formats/filesystem/fat16.py) which you could try extending.
Actually, I was in a similar situation, where I needed FAT12/16/32 support in Python. Searching the web you can find various implementations (such as maxpat78/FATtools, em-/grasso or hisahi/PyFAT12).
None of those libraries were available via PyPI at the time, or they lacked features I needed, so (full disclosure) I decided to write my own. I'll try to sum it up as objectively as possible:
pyfatfs supports FAT12, FAT16 as well as FAT32 including VFAT (long file names) and can be installed via pip as a pure Python package (no native dependencies such as mtools needed and/or included). It implements functionality for PyFilesystem2, a framework for basic file operations across different filesystem implementations (SSH, AWS S3, OSFS host directory pass-through, …). Aside from that pyfatfs can also be used standalone (without PyFilesystem2) in case you need to make more low-level operations (manipulating directory/file entries, changing disk attributes, formatting disks/images, manipulating FAT, etc.).
For instance, to copy files from a diskette image to your host via PyFilesystem2:
import fs
import fs.copy

fat_fs = fs.open_fs("fat://my_diskette.img")  # Open disk image
host_fs = fs.open_fs("osfs:///tmp")           # Open '/tmp' directory on host
fs.copy.copy_dir(fat_fs, "/", host_fs, "/")   # Copy all files from the disk image to /tmp on the host
I'm using OpenCV to process some video data in a web service. Before calling OpenCV, the video is already loaded to a bytearray buffer, which I would like to pass to VideoCapture object:
# The following raises cv2.error because it can't convert '_io.BytesIO' to 'str' for 'filename'
cap = cv2.VideoCapture(buffer)
Unfortunately, VideoCapture() expects a string filename, not a buffer. For now, I'm saving the bytearray to a temporary file and passing its name to VideoCapture().
Questions:
Is there a way to create named in-memory files in Python, so I can pacify OpenCV?
Alternatively, is there another OpenCV API which does support buffers?
Note: this is POSIX-specific! As you haven't provided an OS tag, I assume that's okay.
According to this answer (and the shm_overview manpage), /dev/shm is always present on the system. It's a tmpfs mapped into a shared memory pool (not the Python process's memory), as suggested here, and the plus is that you don't need to create it yourself, so you don't need workarounds like:
os.system("mount ...") or
Popen(["mount", ...]) wrappers.
Simply use tempfile.NamedTemporaryFile() like this:
from tempfile import NamedTemporaryFile

with NamedTemporaryFile(dir="/dev/shm") as file:
    print(file.name)
    # /dev/shm/tmp2m86e0e0
which you could then feed into OpenCV's API wrapper. Alternatively, utilize pyfilesystem as a more extensive wrapper around that device/FS.
Also, multiprocessing.heap.Arena uses it too, so if it didn't work, there'd be much more trouble present. For Windows check this implementation which uses winapi.
For the size of /dev/shm:
this is one of the size "specifications" I found,
shm.h, shm_add_rss_swap(), newseg() from Linux source code may hold more details
Judging by sudo ipcs it's most likely the way you want to utilize when sharing stuff between processes if you don't use sockets, pipes or disk.
As it's POSIX, it should work on POSIX-compliant systems (though not on macOS, which doesn't provide /dev/shm; Solaris should qualify, but I have no means to try it).
Partially to answer the question: there is no way I know of in Python to create named file-like objects which point to memory; that's something for the operating system to do. There is a very easy way to do something very like creating a named memory-mapped file on most modern *nixes: save the file to /tmp. These days /tmp is almost always a ramdisk. But of course it might be zram (basically a compressed ramdisk), and you'll likely want to check that first. At any rate it's better than thrashing your disk or depending on OS caching.
Incidentally making a dedicated ramdisk is as easy as mount -t tmpfs -o size=1G tmpfs /path/to/tmpfs or similarly with ramfs.
Looking into it I don't think you're going to have much luck with alternative apis either: the use of filenames goes right down to cap.cpp, where we have things like:
VideoCapture::VideoCapture(const String& filename, int apiPreference) : throwOnFail(false)
{
    CV_TRACE_FUNCTION();
    open(filename, apiPreference);
}
It seems the python bindings are just a thin layer on top of this. But I'm willing to be proven wrong!
References
https://github.com/opencv/opencv/blob/master/modules/videoio/src/cap.cpp#L72
If VideoCapture was a regular Python object, and it accepted "file-like objects" in addition to paths, you could feed it a "file-like object", and it could read from that.
Python's StringIO and BytesIO are file-like objects in memory. Something useful to remember ;)
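To make that concrete, here is a minimal stdlib-only sketch of BytesIO acting like a binary file in memory (the sample bytes are made up for illustration):

```python
import io

# An in-memory binary "file": supports read/seek/write like a real file object
buf = io.BytesIO(b"\x00\x01\x02\x03")
buf.seek(2)
assert buf.read() == b"\x02\x03"

# But it has no name on the filesystem, which is why APIs that demand a
# path string (like cv2.VideoCapture) cannot consume it directly.
```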
OpenCV specifically expects a file system path there, so that's out of the question.
OpenCV is a library for computer vision. It's not a library for handling video files.
You should look into PyAV. It's a (proper!) wrapper for ffmpeg's libraries. You can feed data directly in there and it will decode. Here are some examples and here are its tests that demonstrate further functionality. Its documentation is thin because most usage is (or should have been...) documented by ffmpeg itself.
You might be able to get away with a named pipe. You can use os.mkfifo to create one, then use the multiprocess module to spawn a background process that feeds the video file into it. Note that mkfifo is not supported on Windows.
The most important limitation is that a pipe does not support seeking, so your video won't be seekable or rewindable either. And whether it actually works might depend on the video format and on the backend (gstreamer, v4l2, ...) that OpenCV is using.
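To sketch the named-pipe idea (POSIX only; the path and data here are invented for illustration, and a background thread stands in for the multiprocess feeder):

```python
import os
import tempfile
import threading

def feed_fifo(fifo_path, data):
    # Opening a FIFO for writing blocks until a reader opens it,
    # so the writer runs in a background thread
    with open(fifo_path, "wb") as fifo:
        fifo.write(data)

# Hypothetical video bytes; in practice this would be your bytearray buffer
video_bytes = b"fake video data"

fifo_path = os.path.join(tempfile.mkdtemp(), "video.fifo")
os.mkfifo(fifo_path)  # POSIX only; not available on Windows

writer = threading.Thread(target=feed_fifo, args=(fifo_path, video_bytes))
writer.start()

# A consumer such as cv2.VideoCapture(fifo_path) could now open the pipe;
# here we just read it back to demonstrate the mechanism
with open(fifo_path, "rb") as fifo:
    assert fifo.read() == video_bytes

writer.join()
```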
We have a tool which is designed to allow vendors to deliver files to a company and update their database. These files (generally of predetermined types) use our web-based transport system, a new record is created in the db for each one, and the files are moved into a new structure when delivered.
We have a new request from a client to use this tool to be able to pass through entire directories without parsing every record. Imagine if the client made digital cars then this tool allows the delivery of the digital nuts and bolts and tracks each part, but they want to also deliver a directory with all of the assets which went into creating a digital bolt without adding each asset as a new record.
The issue is that the original code doesn't have a nice way to handle these passthrough folders and would require a lot of rewriting to make it work. We'd obviously need to create a new function around the time of the directory walk that pulls out each folder matching the passthrough pattern and handles it separately. The problem is that all the tools which do the transport, db entry, and delivery expect files, not folders.
My thinking: what if we could treat that entire folder as a file? That way the current file-level tools wouldn't need to be modified; we'd just need to add the "conversion" step. After generating the manifest, what if we used a library to turn the folder into a "file", sent that, and then turned it back into a "folder" after ingest? The most obvious way to do that is ZIP files, and the current delivery tool does handle ZIPs, but that is slow, and some of these deliveries are very large, which means that if something goes wrong during transport the entire ZIP transfer fails.
Is there a method which we can use which doesn't necessarily compress the files but just somehow otherwise can treat a directory and all its contents like a file, so the rest of the code doesn't need to be rewritten? Or something else I'm missing entirely?
Thanks!
You could use tar files. Python has great support for them, and it is customary in *nix environments to use them as backup files. For compression you could use gzip (also supported by the standard library and great for streaming).
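A minimal sketch of packing a directory into a single tar file and unpacking it on the other side (the directory and file names here are invented for illustration):

```python
import os
import tarfile
import tempfile

# Build a sample directory to "deliver"
src = tempfile.mkdtemp()
with open(os.path.join(src, "bolt_spec.txt"), "w") as f:
    f.write("asset data")

# Pack the whole directory into a single uncompressed tar "file"
tar_path = os.path.join(tempfile.mkdtemp(), "bolt_assets.tar")
with tarfile.open(tar_path, "w") as tar:  # use "w:gz" for gzip compression
    tar.add(src, arcname="bolt_assets")

# After transport, unpack it back into a directory on the receiving side
dest = tempfile.mkdtemp()
with tarfile.open(tar_path, "r") as tar:
    tar.extractall(dest)

assert os.path.exists(os.path.join(dest, "bolt_assets", "bolt_spec.txt"))
```

Because tar is a simple concatenation format, it can be streamed and, unlike a compressed ZIP, an uncompressed archive adds almost no processing overhead.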
I'm working on a web application that manages VRML files. I also want to let users see the uploaded files, without requiring a specific plug-in or player. X3DOM allows viewing X3D files without plug-ins on most browsers, so I'd like to use it.
Alas, it works on X3D files, and not VRML files. I need to convert VRML files to the X3D format.
The same people behind X3DOM released a package called InstantReality that has a utility that converts VRML to X3D. However, I'd much rather not use an external utility (I'm not even sure I'm allowed to use it on a commercial environment, I couldn't find its terms of use) but call a conversion routine from my application code.
MeshLab! There's an open-source project called MeshLab that does all sorts of processing on 3D meshes. It also has a command-line tool called meshlabserver.
Running meshlabserver.exe -i <wrl file> -o <x3d file> performs the conversion (very quickly). Since it's open-source, I don't have any licensing issues.
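If you'd rather call it from your application code than from the shell, a hedged sketch using subprocess (the helper name is mine; it only executes meshlabserver when the binary is actually on the PATH):

```python
import shutil
import subprocess

def convert_wrl_to_x3d(wrl_path, x3d_path, exe="meshlabserver", run=True):
    """Build (and optionally run) the meshlabserver conversion command."""
    cmd = [exe, "-i", wrl_path, "-o", x3d_path]
    if run and shutil.which(exe):  # skip execution if meshlabserver isn't installed
        subprocess.run(cmd, check=True)
    return cmd  # return the argument list so callers can log or inspect it

# The command line this produces mirrors the one described above:
# meshlabserver -i model.wrl -o model.x3d
print(convert_wrl_to_x3d("model.wrl", "model.x3d", run=False))
```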
Are you talking about this online converter?
http://doc.instantreality.org/tools/x3d_encoding_converter/
You could probably build some scripting to convert the VRML to X3D/X3DOM and then store and/or display it.
Blender, aopt and others should also be able to convert VRML to X3D on the command line; depending on your server's OS this could be batched/scripted as well.
I'm in a rush to get some other work done, but I hope this helps.
Let me know if you need more info or examples and I'll see what I can do.
I also needed to convert VRML .wrl to .x3d; I tried MeshLab (meshlabserver), but unfortunately, the version I have (.deb 2016.12~trusty2 on Ubuntu 14.04) compacts everything to a single mesh, and loses color in the process.
I found that view3dscene can do conversion from the command line, where the materials/colors are preserved in .x3d, as they were in .wrl:
view3dscene mymodel.wrl --write --write-encoding xml > mymodel.x3d
Since view3dscene functions as a viewer for both .wrl and .x3d files, it can also be used immediately, to check if the converted (or the original) file has colors or not.
OK, so I think this is the full solution for you:
1) The user uploads a VRML file.
2) That file gets saved (to a file or a DB).
3) Upon confirmation that the VRML file has been saved (and possibly validated as correct VRML syntax), it gets converted and saved as X3D (again, as a file or in the DB). With aopt this would be accomplished by aopt -i input.wrl -o output.x3d.
FYI: aopt is available for Linux, Windows and Mac.
Since you use Python, you could also do this with Blender; although there is no full example of VRML-to-X3D conversion, this link should get you started:
http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/Wavefront_OBJ#Command_Line_Converting
4) Display the X3D via X3DOM.
Since the ClassicVRML X3D Encoding is a direct successor of the VRML97 standard, in most cases you can copy the file, rename the .wrl file extension to a .x3dv file extension, and change the scene header from
#VRML V2.0 utf8
to
#VRML V3.3 utf8
PROFILE Immersive
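Since the change is just the first line plus an added PROFILE line, the copy-and-rename can be scripted; a minimal sketch (the helper name is mine, and it assumes the file starts with the standard '#VRML V2.0 utf8' header):

```python
import os
import tempfile

def vrml_to_classic_x3d(wrl_path, x3dv_path):
    """Copy a VRML97 file to a .x3dv file, swapping the scene header.

    Assumes the first line is the '#VRML V2.0 utf8' header; everything
    else is copied through unchanged.
    """
    with open(wrl_path, "r") as src, open(x3dv_path, "w") as dst:
        header = src.readline()
        if header.startswith("#VRML V2.0"):
            dst.write("#VRML V3.3 utf8\n")
            dst.write("PROFILE Immersive\n")
        else:
            dst.write(header)  # leave non-VRML97 headers untouched
        dst.writelines(src)

# Example: convert a minimal VRML file
tmp = tempfile.mkdtemp()
wrl = os.path.join(tmp, "scene.wrl")
with open(wrl, "w") as f:
    f.write("#VRML V2.0 utf8\nShape { }\n")

x3dv = os.path.join(tmp, "scene.x3dv")
vrml_to_classic_x3d(wrl, x3dv)
# The first two lines of scene.x3dv are now the ClassicVRML X3D header
```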
Many converters exist, both commercial and open source. Several are integrated with X3D-Edit. A full list is maintained at
X3D Resources: Conversions and Translation Tools
http://www.web3d.org/x3d/content/examples/X3dResources.html#Conversions
Personal favorite:
Castle Game Engine: Convert everything to X3D
https://castle-engine.io/convert.php
If you simply want to convert X3D XML-encoded files to VRML Classic-encoded files, you can use Titania, http://titania.create3000.de/. Open your .x3d file and save it as .x3dv or .wrl.
Titania also comes with a command-line utility, »x3dtidy«, that can do the conversion too.
You can use this tool (a java jar that can be run from the command line) to convert VRML to X3D:
http://www.deem7.com/vrmlmerge/howto.php
java -jar VrmlMerge-[version].jar -convert inputfile.wrl [outputfile.x3d]
The license:
VrmlMerge is free for non-commercial use. If you somehow make money out of VrmlMerge then I'd like you to contact me to agree on some terms of use. VrmlMerge is provided "as is" and I don't take any responsibility for any damage it can make to you, your computer, files, data, wife, brain etc..
I'm writing a script to make backups of various different files. What I'd like to do is store meta information about the backup. Currently I'm using the file name, so for example:
backups/cool_file_bkp_c20120119_104955_d20120102
Where c represents the file creation datetime, and d represents the "data time", which describes what cool_file actually contains. The reason I currently use "data time" is that a later backup may be made of the same file, in which case I know I can safely replace the previous backup with the same "data time" without losing any information.
It seems like an awful way to do things, but it does seem to have the benefit of being non-os dependent. Is there a better way?
FYI: I am using Python to script my backup creation, and currently need to have this working on Windows XP, 2003, and Redhat Linux.
EDIT: Solution:
From the answers below, I've inferred that metadata on files is not widely supported in a standard way. Given my goal was to tightly couple the metadata with the file, it seems that archiving the file alongside a metadata textfile is the way to go.
I'd take one of two approaches there:
Create a standalone file in the backup dir that contains the desired metadata; this could be something in human-readable form, just to make life easier, such as a JSON data structure or an "ini"-like file.
The other is to archive the copied files, possibly using "zip", and bundle along with them a text file with the desired metadata.
The idea of creating zip archives to group files that you want together is used in several places, like Java .jar files, the Open Document Format (office files created by several office suites), Office Open XML (Microsoft-specific office files), and even Python's own eggs.
The zipfile module in Python's standard library has all the tools necessary to accomplish this; you can just bundle a dictionary's representation in a file alongside the original one to carry as much metadata as you need.
In either of these approaches you will also need a helper script to let you see and filter the metadata on the files, of course.
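A minimal sketch of the archive approach, bundling the file with a metadata.json in one zip via the standard-library zipfile module (the field names and contents are illustrative):

```python
import io
import json
import zipfile

# Hypothetical metadata for a backup; field names are illustrative
metadata = {
    "creation_time": "2012-01-19T10:49:55",
    "data_time": "2012-01-02",
    "source": "cool_file",
}

# Bundle the original file and a metadata.json side by side in one zip
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("cool_file", b"original file contents")
    zf.writestr("metadata.json", json.dumps(metadata, indent=2))

# Reading the metadata back out later:
with zipfile.ZipFile(buf) as zf:
    restored = json.loads(zf.read("metadata.json"))
assert restored["data_time"] == "2012-01-02"
```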
Different file systems (not different operating systems) have different capabilities for storing metadata. NTFS has plenty of possibilities, while FAT is very limited, and the ext* family is somewhere in between. None of the widespread (subjective term, yes) filesystems support custom tags which you could use. Consequently there exists no standard way to work with such tags.
On Windows there was an attempt to introduce Extended Attributes, but these were implemented in such a tricky way that were almost unusable.
So putting whatever you can into the filename remains the only working approach. Remember that filesystems have limitations on file name and file path length, and with this approach you can exceed the limit, so be careful.
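If you do stick with the filename approach, it pays to centralize the encoding and parsing in one place; a sketch using the naming scheme from the question (the helper names are mine):

```python
import re
from datetime import datetime

# Helpers mirroring the naming scheme above:
#   <name>_bkp_c<creation datetime>_d<data date>

def make_backup_name(name, created, data_date):
    return "{}_bkp_c{}_d{}".format(
        name, created.strftime("%Y%m%d_%H%M%S"), data_date.strftime("%Y%m%d"))

def parse_backup_name(filename):
    m = re.match(r"(?P<name>.+)_bkp_c(?P<c>\d{8}_\d{6})_d(?P<d>\d{8})$", filename)
    if not m:
        raise ValueError("not a backup filename: %s" % filename)
    return (m.group("name"),
            datetime.strptime(m.group("c"), "%Y%m%d_%H%M%S"),
            datetime.strptime(m.group("d"), "%Y%m%d"))

name = make_backup_name("cool_file", datetime(2012, 1, 19, 10, 49, 55),
                        datetime(2012, 1, 2))
print(name)  # cool_file_bkp_c20120119_104955_d20120102
assert parse_backup_name(name)[2] == datetime(2012, 1, 2)
```

Round-tripping through one pair of functions like this keeps the scheme consistent and makes it easy to spot names that would exceed filesystem path-length limits.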
The ZODB blobstorage directory contains a .layout file holding the string 'lawn' or 'bushy'.
What is the difference between the various blob storage directory formats?
It is explained here: https://github.com/zopefoundation/ZODB/blob/master/src/ZODB/tests/blob_layout.txt
FTA:
======================
Blob directory layouts
The internal structure of the blob directories is governed by so called
layouts. The current default layout is called bushy.
The original blob implementation used a layout that we now call lawn and
which is still available for backwards compatibility.
Layouts implement two methods: one for computing a relative path for an
OID and one for turning a relative path back into an OID.
Our terminology is roughly the same as used in DirectoryStorage.
It also explains the formats in detail.
You generally don't need to worry about the layout; lawn is there only for backwards compatibility.
If you do have a lawn layout blobstorage (you'll get a warning in the log if you do) and want to migrate to a bushy layout, use the migrateblobs script; here is a buildout part to create the script:
[migrateblobs]
recipe = zc.recipe.egg
eggs = ZODB3
entry-points = migrateblobs=ZODB.scripts.migrateblobs:main
Shut down any instances and ZEO servers, back up your blob storage and run the script on your blobstorage directory:
$ mv var/blobstorage var/blobstorage-lawn
$ bin/migrateblobs var/blobstorage-lawn/ var/blobstorage
var/blobstorage will then contain the migrated blobs using the bushy layout.