I'm writing a script that processes files in directories, but I want the directories' timestamps to remain the same. So I want to get the timestamp before the operation, then set it back afterwards.
Getting the timestamp is easy enough (os.path.getmtime()), but I can't seem to find an equivalent set method.
Any suggestions?
Use os.utime().
Should work fine.
Save the timestamps in variables beforehand so the call that sets them back doesn't get cluttered.
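A minimal sketch of that save-and-restore pattern (process_files here is a hypothetical stand-in for whatever work you do in the directory):

import os

path = "some/directory"

# Save both timestamps before touching anything
atime = os.path.getatime(path)
mtime = os.path.getmtime(path)

process_files(path)  # hypothetical: whatever processing modifies the directory

# Restore the original access and modification times
os.utime(path, (atime, mtime))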
I need to access the date and time in Ubuntu from a program. This program may not call any external commands to do this, so making a call to date is not an option.
Is there a file or files which hold this information?
Where can it be found?
No, read time(7). There are some system calls (listed in syscalls(2)...) to query the time (since Unix Epoch); in particular time(2) and clock_gettime(2).
You then need to convert that time into a string, probably using localtime(3) and then strftime(3). That conversion uses some files, notably /etc/timezone (and some under /usr/share/zoneinfo/ ...), according to the TZ variable (see environ(7) and locale(7)).
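For illustration only, Python's time module wraps these same library calls, so the query-then-format sequence looks like this (the format string is just an example):

import time

now = time.time()            # seconds since the Unix Epoch
local = time.localtime(now)  # applies the timezone rules (TZ, zoneinfo)
print(time.strftime("%Y-%m-%d %H:%M:%S", local))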
BTW, date is free software (so you could study its source code). And you could strace(1) it.
See also vdso(7) and this.
I have a Blender file called Assets.blend containing over 100 objects for a game I'm developing in Unity.
Whenever I make modifications, I run a script that exports each root object as a separate fbx file.
However, I have no way of detecting which ones have been updated, so every time I have to re-export every single object even though I've only created or modified one.
The time it takes to run the script is about 10 seconds, but then Unity detects the changes and spends over 30 seconds processing mostly unchanged prefabs.
How can I improve my script so that it knows which objects have been altered since the last export?
There does not appear to be any date_modified variable for objects or meshes.
Blender does not record a timestamp of object modifications. My first suggestion would be to keep each object in its own blend file, or maybe smaller groups of items in each file.
Another approach would be to change your export script so that, instead of exporting every object, it exports only the selected objects. After you have changed an item or two, select the ones you changed and then export just those items:
import bpy

for obj in bpy.context.selected_objects:
    bpy.ops.export_scene.fbx(filepath=obj.name + '.fbx', use_selection=True)
Another approach is to compute a CRC-like signature on meaningful values (mesh geometry, materials, whatever it is you change often) and store that somewhere (in each object as a custom property, for instance).
Then you can easily skip objects whose signatures did not change since last export.
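A minimal sketch of that idea, assuming a custom property named export_signature and hashing only vertex coordinates and material names (extend the hash with whatever you edit often):

import bpy
import hashlib

def mesh_signature(obj):
    # Hash vertex coordinates and material names into one hex digest
    h = hashlib.sha1()
    for v in obj.data.vertices:
        h.update(repr(tuple(v.co)).encode())
    for mat in obj.data.materials:
        if mat:
            h.update(mat.name.encode())
    return h.hexdigest()

changed = []
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    sig = mesh_signature(obj)
    if obj.get("export_signature") != sig:  # custom property saved at the last export
        changed.append(obj)
        obj["export_signature"] = sig       # remember the new signature

# 'changed' now holds the objects worth re-exporting

Since custom properties are saved with the .blend file, the comparison survives across sessions once you save.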
I am trying to automate the report creation in Geomagics, using the create_report() function.
However, we have several sets of results, which need to be reviewed by a human operator (within the Geomagics interface) before the various reports can be created if the results are considered acceptable. Since create_report() works on the current ResultObject, I'd like to be able to set this to all my results in a loop.
(Alternatively, there might be a way to write a report for a specific object, not just the current result?)
Can you break the problem down further?
How should the operator see the results, as a spreadsheet or some other way?
For example, could you script outside of GeoMagic to fetch the result sets and display them to the operator, write the approved results back to another dataset,
and then at the end create the report within GeoMagic from the "approved" dataset?
I have successfully used the bulkloader with my project before, but I recently added a new field to timestamp when the record was modified. This new field is giving me trouble, though, because it's defaulting to null. Short of manually inserting the timestamp in the csv before importing it, is there a way to have the correct data inserted for me? I assume I need to look toward the import_transform line, but I know nothing of Python (my app is in Java).
Ideally, I'd like to insert the current timestamp (milliseconds since epoch) automatically. If that's non-trivial, maybe set the value statically in the transform statement before running the import. Thanks.
Defining a custom conversion function, as you did, is the correct method. You don't have to modify transform.py, though: put the function in a file in your own app and import it in the yaml file's python_preamble.
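For example (the module and property names here are hypothetical), a conversion function that stamps the field with the time of the import could look like:

# my_transforms.py -- a file in your app directory
import datetime

def current_datetime(value):
    # Ignore whatever is (or isn't) in the CSV column and use the import time
    return datetime.datetime.now()

Then add "- import: my_transforms" under python_preamble in bulkloader.yaml and set import_transform: my_transforms.current_datetime on the property. If the field is stored as milliseconds since the epoch rather than a datetime, return int(time.time() * 1000) from the function instead.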
I would like to write a small script that does the following (and that I can then run using my crontab):
Look into a directory that contains directories whose names are in some date format, e.g. 30-10-09.
Convert the directory name to the date it represents (of course, I could instead put this information as a string into a file in these directories; that doesn't matter to me).
Compare each date with the current system time and find the one(s) within a specific time difference of the current system date, e.g. less than two days.
Then, do something with the files in that directory (e.g., paste them together and send an email).
I know a little bash scripting, but I don't know whether bash can itself handle this. I think I could do this in R, but the server where this needs to run doesn't have R.
I'm curious anyway to learn a little bit of either Python or Ruby (both of which are on the server).
Can someone point me in the right direction as to the best way to do this?
I would suggest using Python. You'll need the following functions:
os.listdir gives you the directory contents, as a list of strings
time.strptime(name, "%d-%m-%y") will try to parse such a string, and return a time tuple. You get a ValueError exception if parsing fails.
time.mktime will convert a time tuple into seconds since the epoch.
time.time returns seconds since the epoch
the smtplib module can send emails, assuming you know what SMTP server to use. Alternatively, you can run /usr/lib/sendmail through the subprocess module (assuming /usr/lib/sendmail is correctly configured)
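Putting those together, a rough sketch (the base path, age threshold, and recipient address are assumptions you would adapt):

import os
import time
import subprocess

BASE = "/path/to/dated/dirs"   # assumed location of the dated directories
MAX_AGE = 2 * 24 * 60 * 60     # "less than two days", in seconds

now = time.time()
for name in os.listdir(BASE):
    try:
        when = time.mktime(time.strptime(name, "%d-%m-%y"))  # e.g. "30-10-09"
    except ValueError:
        continue                                             # not a dated directory
    if now - when < MAX_AGE:
        path = os.path.join(BASE, name)
        # Paste the directory's files together into one message body
        body = ""
        for fname in sorted(os.listdir(path)):
            with open(os.path.join(path, fname)) as f:
                body += f.read()
        msg = "Subject: files from %s\n\n%s" % (name, body)
        # Assumes a configured local sendmail; smtplib would work just as well
        subprocess.run(["/usr/lib/sendmail", "you@example.com"], input=msg.encode())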