Is there any way to save a file to a dictionary in Python?
(To be clear, I am not asking how to export dictionaries to files.)
Maybe a file could be pickled or transformed into a Python object
and then saved.
Is this generally advisable?
Or should I only save the file's path to the dictionary?
How would I retrieve the file later on?
The background of my question is that I use dictionaries as
databases. Specifically, I use the handy little module sqliteshelf as a form of persistent dictionary: https://github.com/shish/sqliteshelf
Each dataset includes a unique config file (~500 kB) which is retrieved from an application. When the respective dataset is opened, the config file is copied into, and later back out of, the application's working directory. I could instead use a folder where I save the config files. Yet it strikes me as more elegant to save them together with the other data.
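For illustration, one workable approach is to read the file's raw bytes and store them as a value: a shelf like sqliteshelf pickles its values, and a bytes object pickles fine. A minimal sketch (the helper names here are made up, and `db` can be a plain dict or any dict-like store):

```python
def store_file(db, key, path):
    """Read a file's raw bytes and keep them under `key` in a dict-like db."""
    with open(path, "rb") as f:
        db[key] = f.read()


def restore_file(db, key, path):
    """Write the stored bytes back out to `path`."""
    with open(path, "wb") as f:
        f.write(db[key])
```

For a ~500 kB config file per dataset the overhead is modest; for much larger files, storing only the path is usually the better trade-off.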
I am working on a Python app and I am trying to plan out how saving/loading files will work.
In this app, multiple data sheets will be loaded, and they will all be used in some capacity. The problem I am having is that I want users to be able to save the changes to the files they have imported.
I was thinking more of a save file that holds the content of all the files they have loaded into the app. But I have no idea how to structure this! I've done some research and heard that parquet/feather are good formats for saving files but I don't know if they support saving multiple data frames to the same file.
The most important part is that the files need to be loadable by pandas (or another library) so that a user can save the changes they've made and then load them back up later if they were so inclined.
Any advice is appreciated!
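For what it's worth: Parquet and Feather each store a single table per file, so on their own they won't hold several data frames in one file. One simple option, sketched below on the assumption that the loaded sheets are pandas DataFrames, is to pickle a dict of frames as the app's save file (all names here are invented for illustration):

```python
import os
import tempfile

import pandas as pd

# Hypothetical in-app state: every imported sheet, keyed by name.
frames = {
    "sheet1": pd.DataFrame({"a": [1, 2, 3]}),
    "sheet2": pd.DataFrame({"b": [4.0, 5.0]}),
}

# One save file holding all the frames at once.
save_path = os.path.join(tempfile.mkdtemp(), "session.pkl")
pd.to_pickle(frames, save_path)

# Round-trips back to the same dict of DataFrames.
loaded = pd.read_pickle(save_path)
```

If you would rather avoid pickle, pandas' HDFStore can also hold multiple frames under separate keys in a single .h5 file.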
Essentially, I would want to be able to go through a folder with text files, jpg files, csv files, png files, any kind of file, and be able to load it into memory as some kind of object. When necessary, I would then like to be able to save it and create an instance on disk. This would need to work for any kind of file type.
I would create a class that would contain the file data itself as well as metadata, but that is not necessary for my question.
Is this possible, and if so, how can I do it?
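As a sketch of the idea (the class and attribute names here are invented): since .jpg, .png, .csv and .txt files are all just bytes on disk, a class that keeps the raw bytes plus some metadata round-trips any file type:

```python
import os


class LoadedFile:
    """Holds one file's raw bytes in memory, plus simple metadata."""

    def __init__(self, path):
        self.name = os.path.basename(path)
        self.size = os.path.getsize(path)
        with open(path, "rb") as f:
            self.data = f.read()

    def save(self, directory):
        """Write the bytes back out, recreating the file on disk."""
        target = os.path.join(directory, self.name)
        with open(target, "wb") as f:
            f.write(self.data)
        return target
```

Parsing the bytes into richer objects (e.g. a DataFrame for a csv) can then be layered on top per file type when needed.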
Is there a way to extract a file generated by pickle when you don't even have the classes from the original project that pickled it?
I'm trying to read a pickled file generated in an older project. I don't have the source code of the old project. But I would like to retrieve the data of the file, even as plain dictionaries. Is there another solution or should I just use a binary editor?
Thanks,
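One technique that often works (a sketch, not guaranteed for every pickle): subclass pickle.Unpickler and have find_class substitute a stub type whenever the original class can't be imported, so the object graph still loads and its attribute data can be inspected:

```python
import pickle


class Stub:
    """Stand-in for a class we can no longer import; just captures state."""

    def __init__(self, *args, **kwargs):
        self.args, self.kwargs = args, kwargs

    def __setstate__(self, state):
        # Most pickled objects restore via a plain attribute dict.
        self.state = state


class TolerantUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        try:
            return super().find_class(module, name)
        except (ImportError, AttributeError):
            # Unknown class: build a Stub subclass with the original name.
            return type(name, (Stub,), {"__module__": module})
```

Separately, pickletools.dis(data) from the standard library dumps the raw pickle opcodes without executing anything, which at least reveals the strings and numbers inside the file.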
I'm using the Psychopy 1.82.01 Coder and its iohub functionality (on Ubuntu 14.04 LTS). It is working, but I was wondering if there is a way to dynamically rename the hdf5 file it produces during an experiment (so that in the end I know which participant it belongs to, and two participants will get two files without one overwriting the other).
It seems to me that the filename is determined in this file: https://github.com/psychopy/psychopy/blob/df68d434973817f92e5df78786da313b35322ae8/psychopy/iohub/default_config.yaml
But is there a way to change this dynamically?
If you want to create a different hdf5 file for each experiment run, the options depend on how you are starting the ioHub process. Assuming you are using the psychopy.iohub.launchHubServer() function to start ioHub, you can pass the 'experiment_code' kwarg to the function, and it will be used as the hdf5 file name.
For example, if you created a script with the following code and ran it:
import psychopy.iohub as iohub
io = iohub.launchHubServer(experiment_code="exp_sess_1")
# your experiment code here ....
# ...
io.quit()
An ioHub hdf5 file called 'exp_sess_1.hdf5' will be created in the same folder as the script file.
As a side note, you do not have to save each experiment session's data into a separate hdf5 file. The ioHub hdf5 file structure is designed to save multiple participants' / sessions' data in a single file. Each time the experiment is run, a unique session code is required, and the data from each run is saved in the hdf5 file with a session id associated with that session code.
I'm working on a webapp that uses SCORM so it can be included in our clients' learning management systems. This works by building a zip file that contains several files. Two of the files depend on the particular resource they want to include and the client themselves. I'd therefore like to generate these zip files automatically, on demand.
So imagine I have a "template" version of the ZIP, extracted to a directory:
/zipdir/fileA.html
/zipdir/fileB.xml
/zipdir/static-file.jpg
Let's imagine I use Django's template syntax in fileA and fileB. I know how to run a file through the template loader and render it, but how do I add that file to a ZIP file?
Could I create a base zip file (that doesn't have fileA and fileB in) and add the two renders to it? Otherwise, how would you go about cloning the zipdir to a temporary location and then rendering those two files to it before zipping it?
Using zipfile with an in-memory buffer (io.BytesIO in Python 3, StringIO in Python 2) will allow you to create a zip file in memory that you can later send to the client.
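A sketch along those lines, assuming Python 3 (io.BytesIO) and that the two templates have already been rendered to strings (e.g. with Django's render_to_string); the file names and directory layout are the ones from the question, and the function name is invented:

```python
import io
import os
import zipfile


def build_scorm_zip(rendered, template_dir):
    """Zip rendered template strings plus the static files from template_dir."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Rendered files go in straight from memory; no temp files needed.
        for name, content in rendered.items():
            zf.writestr(name, content)
        # Copy everything else from the template directory in unchanged.
        for fname in os.listdir(template_dir):
            if fname not in rendered:
                zf.write(os.path.join(template_dir, fname), fname)
    return buf.getvalue()
```

In a Django view you could return the resulting bytes as an HttpResponse with content_type='application/zip'; no base zip and no temporary clone of /zipdir is needed.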