I'm writing some C code that needs to embed the current time in its (binary) output file. Later, this file will be read by some other C code (possibly compiled for a different architecture) and/or some Python code. In both cases calculations may be required on the time.
What I'd like to know is:
How do I get current UTC time in C? Is time() the right call?
What format should I write this to file in? ASN.1? ISO 8601?
How do I convert to that format?
How do I read that format in C and Python and convert it into something useful?
You could use the RFC 3339 datetime format (a profile of ISO 8601). It avoids many pitfalls of unconstrained ISO 8601 timestamps.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
    char buf[21];                 /* "2014-12-20T11:08:44Z" is 20 chars plus NUL */
    time_t ts = time(NULL);
    struct tm *tp = gmtime(&ts);  /* broken-down UTC time */
    /* guard against years that won't fit in four digits */
    if (tp == NULL || tp->tm_year > 8099 || tp->tm_year < 0) {
        perror("gmtime");
        exit(EXIT_FAILURE);
    }
    if (strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", tp) == 0) {
        fprintf(stderr, "strftime returned 0\n");
        exit(EXIT_FAILURE);
    }
    exit(puts(buf) != EOF ? EXIT_SUCCESS : EXIT_FAILURE);
}
Output
2014-12-20T11:08:44Z
To read it in Python:
>>> from datetime import datetime, timezone
>>> dt = datetime.strptime('2014-12-20T11:08:44Z', '%Y-%m-%dT%H:%M:%SZ')
>>> dt = dt.replace(tzinfo=timezone.utc)
>>> print(dt)
2014-12-20 11:08:44+00:00
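Since the question mentions doing calculations on the time, note that the parsed timezone-aware datetime supports arithmetic directly; a small sketch:
>>> from datetime import datetime, timedelta, timezone
>>> dt = datetime.strptime('2014-12-20T11:08:44Z', '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)
>>> dt + timedelta(hours=1)
datetime.datetime(2014, 12, 20, 12, 8, 44, tzinfo=datetime.timezone.utc)
>>> dt.timestamp()
1419073724.0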
Use the following C code to get a suitable date output:
#include <time.h>  /* declares time(), gmtime(), strftime() */

time_t rawtime;
struct tm *now;
char timestamp[80];

time(&rawtime);
now = gmtime(&rawtime);
strftime(timestamp, sizeof(timestamp), "%Y%m%d%H%M%S", now);
Then use the following Python to read it:
import datetime

start_time = datetime.datetime.strptime(data, "%Y%m%d%H%M%S")
Variations on the format work, as long as it's consistent.
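For instance, a sketch of the same round trip with a different (but consistent) format:
import datetime

fmt = "%Y-%m-%d %H:%M:%S"  # any fixed format works, as long as writer and reader agree
s = datetime.datetime(2014, 12, 20, 11, 8, 44).strftime(fmt)
print(datetime.datetime.strptime(s, fmt))  # 2014-12-20 11:08:44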
This is my example source code:
C
#include <stdio.h>
#include <stdlib.h>
__declspec(dllexport)
char* sys_open(char* file_name)
{
char *file_path_var = (char *) malloc(100*sizeof(char));
FILE *wrt = fopen(file_name, "r");
fscanf(wrt, "%s", file_path_var);
fclose(wrt);
return file_path_var;
}
Test.txt
test
Python
from ctypes import *
libcdll = CDLL("c.dll")
taken_var = libcdll.sys_open("test.txt")
print("VAR: ", taken_var)
Result
VAR: 4561325
So I'm just getting a random number. What should I do?
I'm not a C developer, but isn't sys_open returning a pointer? Last time I checked, pointers are word-sized memory addresses, so it might make sense that Python sees a numerical value and prints it as a decimal. Maybe what you want to return from your C function is &file_path_var.
I found the real problem.
The Python file was wrong; it must be:
from ctypes import *
libcdll = CDLL("c.dll")
taken_var = libcdll.sys_open("test.txt")
# ctypes assumes an int return type by default; re-cast the value to a char pointer
print("VAR: ", c_char_p(taken_var).value)
I was trying a little experiment to get the timestamps of RTP packets using the VideoCapture class from OpenCV's source code in Python; I also had to modify FFmpeg to accommodate the changes in OpenCV.
I had read about the RTP packet format and wanted to fiddle around and see if I could find a way to get the NTP timestamps. I was unable to find any reliable help on getting RTP timestamps, so I tried out this little hack.
Credits to ryantheseer on github for the modified code.
Version of FFmpeg: 3.2.3
Version of Opencv: 3.2.0
In Opencv source code:
modules/videoio/include/opencv2/videoio.hpp:
Added two getters for the RTP timestamp:
.....
/** @brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
*/
CV_WRAP virtual int64 getRTPTimeStampSeconds() const;
/** @brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
*/
CV_WRAP virtual int64 getRTPTimeStampFraction() const;
.....
modules/videoio/src/cap.cpp:
Added an import and added the implementation of the timestamp getter:
....
#include <cstdint>
....
....
static inline uint64_t icvGetRTPTimeStamp(const CvCapture* capture)
{
    return capture ? capture->getRTPTimeStamp() : 0;
}
...
Added the C++ timestamp getters in the VideoCapture class:
....
/** @brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
*/
int64 VideoCapture::getRTPTimeStampSeconds() const
{
    int64 seconds = 0;
    uint64_t timestamp = 0;
    // Get the time stamp from the capture object
    if (!icap.empty())
        timestamp = icap->getRTPTimeStamp();
    else
        timestamp = icvGetRTPTimeStamp(cap);
    // Take the top 32 bits of the time stamp
    seconds = (int64)((timestamp & 0xFFFFFFFF00000000) / 0x100000000);
    return seconds;
}
/** @brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
*/
int64 VideoCapture::getRTPTimeStampFraction() const
{
    int64 fraction = 0;
    uint64_t timestamp = 0;
    // Get the time stamp from the capture object
    if (!icap.empty())
        timestamp = icap->getRTPTimeStamp();
    else
        timestamp = icvGetRTPTimeStamp(cap);
    // Take the bottom 32 bits of the time stamp
    fraction = (int64)(timestamp & 0xFFFFFFFF);
    return fraction;
}
...
modules/videoio/src/cap_ffmpeg.cpp:
Added an import:
...
#include <cstdint>
...
Added a method reference definition:
...
static CvGetRTPTimeStamp_Plugin icvGetRTPTimeStamp_FFMPEG_p = 0;
...
Added the method to the module initializer method:
...
if( icvFFOpenCV )
...
...
icvGetRTPTimeStamp_FFMPEG_p =
(CvGetRTPTimeStamp_Plugin)GetProcAddress(icvFFOpenCV, "cvGetRTPTimeStamp_FFMPEG");
...
...
icvWriteFrame_FFMPEG_p != 0 &&
icvGetRTPTimeStamp_FFMPEG_p != 0)
...
icvGetRTPTimeStamp_FFMPEG_p = (CvGetRTPTimeStamp_Plugin)cvGetRTPTimeStamp_FFMPEG;
Implemented the getter interface:
...
virtual uint64_t getRTPTimeStamp() const
{
    return ffmpegCapture ? icvGetRTPTimeStamp_FFMPEG_p(ffmpegCapture) : 0;
}
...
In FFmpeg's source code:
libavcodec/avcodec.h:
Added the NTP timestamp definition to the AVPacket struct:
typedef struct AVPacket {
    ...
    ...
    uint64_t rtp_ntp_time_stamp;
} AVPacket;
libavformat/rtpdec.c:
Store the NTP time stamp in the struct in the finalize_packet method:
static void finalize_packet(RTPDemuxContext *s, AVPacket *pkt, uint32_t timestamp)
{
    uint64_t offsetTime = 0;
    uint64_t rtp_ntp_time_stamp = timestamp;
    ...
    ...
    /* RM: Sets the RTP time stamp in the AVPacket */
    if (!s->last_rtcp_ntp_time || !s->last_rtcp_timestamp)
        offsetTime = 0;
    else
        offsetTime = s->last_rtcp_ntp_time - ((uint64_t)(s->last_rtcp_timestamp) * 65536);
    rtp_ntp_time_stamp = ((uint64_t)(timestamp) * 65536) + offsetTime;
    pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp;
libavformat/utils.c:
Copy the NTP time stamp from the packet to the frame in the read_frame_internal method:
static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
{
    ...
    uint64_t rtp_ntp_time_stamp = 0;
    ...
    while (!got_packet && !s->internal->parse_queue) {
        ...
        // COPY OVER the RTP time stamp  TODO: just create a local copy
        rtp_ntp_time_stamp = cur_pkt.rtp_ntp_time_stamp;
        ...
#if FF_API_LAVF_AVCTX
    update_stream_avctx(s);
#endif
    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG,
               "read_frame_internal stream=%d, pts=%s, dts=%s, "
               "size=%d, duration=%"PRId64", flags=%d\n",
               pkt->stream_index,
               av_ts2str(pkt->pts),
               av_ts2str(pkt->dts),
               pkt->size, pkt->duration, pkt->flags);
    pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp; // just added this line in the if statement
    return ret;
My Python code to utilise these changes:
import cv2

uri = 'rtsp://admin:password@192.168.1.67:554'
cap = cv2.VideoCapture(uri)
while True:
    frame_exists, curr_frame = cap.read()
    # if frame_exists:
    k = cap.getRTPTimeStampSeconds()
    l = cap.getRTPTimeStampFraction()
    time_shift = 0x100000000
    # because in getRTPTimeStampSeconds() the
    # timestamp was divided by 0x100000000
    seconds = time_shift * k
    m = (time_shift * k) + l
    print("Imagetimestamp: %i" % m)
cap.release()
What I am getting as my output:
Imagetimestamp: 0
Imagetimestamp: 212041451700224
Imagetimestamp: 212041687629824
Imagetimestamp: 212041923559424
Imagetimestamp: 212042159489024
Imagetimestamp: 212042395418624
Imagetimestamp: 212042631348224
...
What astounded me the most was that when I powered off the IP camera and powered it back on, the timestamp would start from 0 and then quickly increment. I read that the NTP time format is relative to January 1, 1900 00:00. Even when I tried calculating the offset between now and 1900-01-01, I still ended up getting a crazy high number for the date.
I don't know if I calculated it wrong. I have a feeling it's very off, or what I am getting is not the timestamp.
As I see it, you receive a timestamp of type uint64 which contains two uint32 values in the high and low bits. I see that in a part of the code you use:
seconds = (int64)((timestamp & 0xFFFFFFFF00000000) / 0x100000000);
This basically removes the lower bits and shifts the high bits down into the lower positions, then casts the result to int64. Two remarks: the value should be unsigned, since seconds since an epoch are never negative, and uint32 is enough, since you are guaranteed to keep only 32 bits. The same thing can be achieved (probably faster) with a bit shift, like this:
auto seconds = static_cast<uint32_t>(timestamp >> 32);
Another error I spotted was in this part:
time_shift = 0x100000000
seconds = time_shift * k
m = (time_shift * k) + l
Here you are basically reconstructing the 64-bit timestamp instead of producing a timestamp usable in other contexts: you shift the seconds into the high bits and add the fraction as the low bits, which yields a really big number that is rarely useful on its own. You can still use it for comparisons, but then all the conversions done on the C++ side are unnecessary. A more conventional timestamp, usable with Python's datetime, would look like this:
from datetime import datetime

timestamp = k + l / 2**32  # the NTP fraction field is a binary fraction: lower 32 bits / 2^32
date = datetime.fromtimestamp(timestamp)  # assumes the value counts from the Unix epoch
If you don't care about the fractional part, you can just use the seconds directly.
Another thing to consider: the timestamp in RTP depends on the camera/server. It may use a wall-clock timestamp or some other clock, such as the start of the stream or the start of the system, so it may or may not count from an epoch.
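If the camera really does send NTP wall-clock time, as the asker expects, converting it for Python's datetime also needs the 1900-to-1970 epoch shift; a minimal sketch, assuming k and l are the two 32-bit halves from the getters above:
from datetime import datetime, timezone

NTP_TO_UNIX = 2208988800  # seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)

def ntp_to_datetime(seconds, fraction):
    unix_seconds = seconds - NTP_TO_UNIX + fraction / 2**32
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

# e.g. ntp_to_datetime(k, l)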
datetime.datetime.strptime seems to force directive matching regardless of the actual input string length. With a shorter string, the directives still consume "something" from the string, regardless of where each field should actually end.
This is the correct behavior when there is enough input to fill the directives:
>>> datetime.datetime.strptime('20180822163014', '%Y%m%d%H%M%S')
datetime.datetime(2018, 8, 22, 16, 30, 14)
Adding a directive, however, changes the previous parsing:
>>> datetime.datetime.strptime('20180822163014', '%Y%m%d%H%M%S%f')
datetime.datetime(2018, 8, 22, 16, 30, 1, 400000)
Is there any way to drop the rightmost directives when the input string is not long enough, instead of cannibalizing the left ones?
I've tagged C and ubuntu because the documentation says:
"The full set of format codes supported varies across platforms,
because Python calls the platform C library’s strftime() function, and
platform variations are common. To see the full set of format codes
supported on your platform, consult the strftime(3) documentation."
EDIT:
man ctime shows the following structure as output. Interestingly, microsecond (%f) precision doesn't seem to be supported there.
struct tm {
    int tm_sec;    /* Seconds (0-60) */
    int tm_min;    /* Minutes (0-59) */
    int tm_hour;   /* Hours (0-23) */
    int tm_mday;   /* Day of the month (1-31) */
    int tm_mon;    /* Month (0-11) */
    int tm_year;   /* Year - 1900 */
    int tm_wday;   /* Day of the week (0-6, Sunday = 0) */
    int tm_yday;   /* Day in the year (0-365, 1 Jan = 0) */
    int tm_isdst;  /* Daylight saving time */
};
Well, I guess you have to do it yourself, which doesn't seem too hard because you know the pattern.
Something like this should do the job:
pattern = ""
if len(s) == 0: raise Exception "empty time string"
if len(s) <= 4: pattern += "%Y"
... # as many if as you need here
datetime.datetime.strptime(s, pattern)
This is painful to write if you have a long date pattern, but I doubt there is a function doing it already in the datetime module.
You could try writing something more generic and ask whether it could be added to the datetime module.
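A slightly more generic sketch of the same idea (the parse_prefix helper and its WIDTHS table are hypothetical, and only fixed-width directives are handled):
import datetime

# fixed-width directives, in the order they appear in the input
WIDTHS = [("%Y", 4), ("%m", 2), ("%d", 2), ("%H", 2), ("%M", 2), ("%S", 2)]

def parse_prefix(s):
    if not s:
        raise ValueError("empty time string")
    pattern, remaining = "", len(s)
    for directive, width in WIDTHS:
        if remaining < width:
            break                 # input exhausted: drop the rightmost directives
        pattern += directive
        remaining -= width
    return datetime.datetime.strptime(s, pattern)

# parse_prefix("20180822")       -> datetime.datetime(2018, 8, 22, 0, 0)
# parse_prefix("20180822163014") -> datetime.datetime(2018, 8, 22, 16, 30, 14)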
I have code in both Python and C that needs to communicate through a pipe created by Popen. I have a test struct in C that needs to be passed back to Python, but I can't seem to reconstruct that struct on the Python side. This is part of a much more complicated project; the struct I created below is just an example to get the code working, and I can figure out the more advanced things later. I am not an expert in C, pointers, or piping, and I do not have a clear understanding of them. Most of the C code below is just from my readings.
Python:
import struct
from subprocess import Popen, PIPE

testStruct = struct.Struct('< i')
cProg = Popen("./cProg.out", stdin=PIPE, stdout=PIPE)
data = ""
dataRead = cProg.stdout.read(1)
while dataRead != "\n":
    data += dataRead
    dataRead = cProg.stdout.read(1)
myStruct = testStruct.unpack(data)
print myStruct.i
C:
#include <stdio.h>   /* fileno */
#include <string.h>  /* memcpy */
#include <unistd.h>  /* write */

typedef struct{
    int i;
} TestStruct;

int main(void)
{
    int wfd = fileno(stdout);
    TestStruct t;
    t.i = 5;
    char sendBack[sizeof(t)];
    memcpy(sendBack, &t, sizeof(t));
    write(wfd, sendBack, sizeof(sendBack));
    write(wfd, "\n", 1);
}
But when I run the Python code I get the error:
unpack requires a string argument of length 4
Like I said, I do not really understand structs and C. Any suggestions for refining this code, or better yet another way of passing a C struct back to Python to unpack and grab the data from, would be welcome. I can read and write through the pipe; the code I have posted is just a snippet from my actual code. I know the issue has to do with sending the struct back to Python through stdout.
Here's an example of reading data in Python from a C program through a pipe.
C Program
#include <stdio.h>
typedef struct{
    int i;
    int j;
} TestStruct;

int main() {
    TestStruct ts = {11111, 22222};
    fwrite(&ts, sizeof ts, 1, stdout);
    return 0;
}
Python 2.7 Program
from subprocess import Popen, PIPE
from struct import calcsize, unpack

cprog = Popen("cprog", stdout=PIPE)
fmt = "@ii"  # native byte order and alignment, matching the C compiler
str = cprog.stdout.read(calcsize(fmt))
cprog.stdout.close()
(i, j) = unpack(fmt, str)
print i, j
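For reference, a Python 3 sketch of the same reader (read() returns bytes there, which unpack accepts directly):
from subprocess import Popen, PIPE
from struct import calcsize, unpack

cprog = Popen("cprog", stdout=PIPE)
fmt = "@ii"                              # must match the writer's layout
data = cprog.stdout.read(calcsize(fmt))  # a bytes object in Python 3
cprog.stdout.close()
i, j = unpack(fmt, data)
print(i, j)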
EDIT2: This question is assuming a POSIX-ish platform with Python
linked against Glibc.
On my system, a round-trip conversion with the %z formatting directive of Python's time library fails to parse the offset part of ISO 8601 formatted timestamps. This snippet:
import time
time.daylight = 0
fmt = "%Y-%m-%dT%H:%M:%SZ%z"
a=time.gmtime()
b=time.strftime(fmt, a)
c=time.strptime(b, fmt)
d=time.strftime(fmt, c)
print ("»»»»", a == c, b == d)
print ("»»»»", a.tm_zone, b)
print ("»»»»", c.tm_zone, d)
outputs:
»»»» False False
»»»» GMT 2018-02-16T09:26:34Z+0000
»»»» None 2018-02-16T09:26:34Z
whereas the expected output would be
»»»» True True
»»»» GMT 2018-02-16T09:26:34Z+0000
»»»» GMT 2018-02-16T09:26:34Z+0000
How do I get %z to respect that offset?
Python 3.3.2 and 3.6.4
[Glibc 2.17 and 2.25 ⇒ see below!]
EDIT: Glibc can be acquitted, as proven by this C analogue:
#define _XOPEN_SOURCE
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
/* 2018-02-16T09:59:21Z+0000 */
#define ISO8601_FMT "%Y-%m-%dT%H:%M:%SZ%z"
int main () {
    const time_t t0 = time (NULL);
    struct tm a;
    char b [27];
    struct tm c;
    char d [27];

    (void)setenv ("TZ", "UTC", 1);
    tzset ();
    daylight = 0;

    (void)gmtime_r (&t0, &a);                        /* a=time.gmtime () */
    (void)strftime (b, sizeof(b), ISO8601_FMT, &a);  /* b=time.strftime (fmt, a) */
    (void)strptime (b, ISO8601_FMT, &c);             /* c=time.strptime (b, fmt) */
    (void)strftime (d, sizeof(d), ISO8601_FMT, &c);  /* d=time.strftime (fmt, c) */

    printf ("»»»» b ?= d %s\n", strcmp (b, d) == 0 ? "yep" : "hell, no");
    printf ("»»»» %d <%s> %s\n", a.tm_isdst, a.tm_zone, b);
    printf ("»»»» %d <%s> %s\n", c.tm_isdst, c.tm_zone, d);
}
Which outputs
»»»» b ?= d yep
»»»» 0 <GMT> 2018-02-16T10:28:18Z+0000
»»»» 0 <(null)> 2018-02-16T10:28:18Z+0000
With the "time.gmtime()" naturally you are getting the UTC time, so the offset will be always +0000, therefore an output string "2018-02-16T09:26:34Z" is correct for the ISO8601. If you want absolutely the "+0000" add it manually because it will be alway the same:
d = time.strftime(fmt, c) + '+0000'
I don't pretend to have a solution that generates the proper hour shift according to the time zone, but I can explain what happens here.
As hinted in the answers to Python timezone '%z' directive for datetime.strptime() not available:
strptime is implemented in pure Python, so it has consistent behaviour;
strftime depends on the platform / C library Python was linked against.
On my system (Windows, Python 3.4), %z returns the same thing as %Z ("Paris, Madrid"). So when strptime tries to parse that back as digits, it fails. Your code gives me:
ValueError: time data '2018-02-16T10:00:49ZParis, Madrid' does not match format '%Y-%m-%dT%H:%M:%SZ%z'
Generation is system dependent; parsing is not.
This asymmetry explains the weird behaviour.
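A sketch of a workaround: the datetime module's strptime/strftime pair handles %z consistently on timezone-aware objects, so the round trip closes there (assuming Python 3.3+, and the datetime module rather than the time module the question uses):
from datetime import datetime, timezone

fmt = "%Y-%m-%dT%H:%M:%SZ%z"
b = datetime.now(timezone.utc).strftime(fmt)  # e.g. 2018-02-16T09:26:34Z+0000
c = datetime.strptime(b, fmt)                 # %z is parsed by the pure-Python strptime
d = c.strftime(fmt)
print(b == d)  # True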