Python strftime("%s") errors

The Python program below raises this error:
File "C:\Python\PyCharmProject\FaceBookCrawl\group_download.py", line 31, in getFeed
params += "&since=" + SINCE.strftime("%s")
ValueError: Invalid format string
It seems SINCE.strftime("%s") is the problem - how do I fix it?
SINCE = datetime.datetime.now() - datetime.timedelta(DAYS)
params = "?fields=permalink_url,from,story,type,message,link,created_time,updated_time,likes.limit(0).summary(total_count),comments.limit(0).summary(total_count)"
#Default paging limit
params += "&amp&limit=" + DEFAULT_LIMIT
#Time-based limit
params += "&since=" + SINCE.strftime("%s")
graph_url = GRAPH_URL_PREFIX + group + "/feed" + params

Actually, it should be a capital S:
params += "&since=" + SINCE.strftime("%S")

ValueError: Invalid format string
You're using the wrong formatter, i.e. it has to be an upper-case 'S' - here's datetime's strftime reference.
UnicodeEncodeError: 'locale' codec can't encode character '\uff33' in
position 1: Illegal byte sequence
The \uff33 is basically the full-width Latin letter 'S' - the one you edited in to get rid of the previous ValueError.
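As a side note (not part of the original answer), the unicodedata module makes it easy to check what the offending character actually is:
>>> import unicodedata
>>> unicodedata.name(u'\uff33')
'FULLWIDTH LATIN CAPITAL LETTER S'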
Solutions/workarounds:
1. Use a raw string, i.e. prefix your string with an 'r':
params = r"?fields=permalink_url,from,story,type,message,link,created_time,updated_time,likes.limit(0).summary(total_count),comments.limit(0).summary(total_count)"
2. If you're using str() to convert from Unicode to encoded text/bytes, use .encode() to encode the string instead. A helpful SO thread.

For anybody coming here after using "%s" to generate an epoch timestamp: note that strftime("%s") is platform dependent - it works on Linux (using your local timezone) but does not work on Windows. You can just use timestamp() instead:
int(datetime.datetime.utcnow().timestamp())
You can read more here: Convert python datetime to epoch with strftime.
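Applied to the original question, a minimal sketch (assuming Python 3; DAYS and DEFAULT_LIMIT come from the question's surrounding code) that builds the since parameter without strftime:
import datetime

DAYS = 7  # hypothetical value - the question defines DAYS elsewhere
SINCE = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(DAYS)

# timestamp() returns epoch seconds on every platform, unlike the "%s" directive
params = "&since=" + str(int(SINCE.timestamp()))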


What if a file has multiple encodings at the same time? [duplicate]

I have a file which is mostly UTF-8, but some Windows-1252 characters have also found their way in.
I created a table to map from the Windows-1252 (cp1252) characters to their Unicode counterparts, and would like to use it to fix the mis-encoded characters, e.g.
cp1252_to_unicode = {
    "\x85": u'\u2026', # …
    "\x91": u'\u2018', # ‘
    "\x92": u'\u2019', # ’
    "\x93": u'\u201c', # “
    "\x94": u'\u201d', # ”
    "\x97": u'\u2014'  # —
}

for l in open('file.txt'):
    for c, u in cp1252_to_unicode.items():
        l = l.replace(c, u)
But attempting to do the replace this way results in a UnicodeDecodeError being raised, e.g.:
"\x85".replace("\x85", u'\u2026')
UnicodeDecodeError: 'ascii' codec can't decode byte 0x85 in position 0: ordinal not in range(128)
Any ideas for how to deal with this?
If you try to decode this string as UTF-8, as you already know, you will get a UnicodeDecodeError, since these spurious cp1252 characters are invalid UTF-8.
However, Python's codecs module lets you register a callback to handle encoding/decoding errors, via the codecs.register_error function - it receives the UnicodeDecodeError as a parameter. You can write such a handler that attempts to decode the offending data as "cp1252" and then continues decoding the rest of the string as UTF-8.
In my utf-8 terminal, I can build a mixed incorrect string like this:
>>> a = u"maçã ".encode("utf-8") + u"maçã ".encode("cp1252")
>>> print a
maçã ma��
>>> a.decode("utf-8")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 9-11: invalid data
I wrote said callback function and found a catch: even if you increment the position from which to decode the string by 1, so that it would start at the next character, the error is still raised at the first byte outside range(128) if the next character is also not valid UTF-8 - that is, the decoding "walks back" when consecutive non-ASCII, non-UTF-8 bytes are found.
The workaround is to keep a state variable in the error handler that detects this "walking back" and resumes decoding from the position of the last call. In this short example I implemented it as a global variable (it has to be manually reset to -1 before each call to the decoder):
import codecs

last_position = -1

def mixed_decoder(unicode_error):
    global last_position
    string = unicode_error[1]
    position = unicode_error.start
    if position <= last_position:
        position = last_position + 1
    last_position = position
    new_char = string[position].decode("cp1252")
    # new_char = u"_"
    return new_char, position + 1

codecs.register_error("mixed", mixed_decoder)
And on the console:
>>> a = u"maçã ".encode("utf-8") + u"maçã ".encode("cp1252")
>>> last_position = -1
>>> print a.decode("utf-8", "mixed")
maçã maçã
With thanks to jsbueno and a whack of other Google searches and pounding, I solved it this way.
#The following works very well but it does not allow for any attempts to FIX the data.
xmlText = unicode(xmlText, errors='replace').replace(u"\uFFFD", "?")
This version allows for a limited opportunity to repair invalid characters. Unknown characters are replaced with a safe value.
import codecs

replacement = {
    '85' : '...', # u'\u2026' ... character.
    '96' : '-',   # u'\u2013' en-dash
    '97' : '-',   # u'\u2014' em-dash
    '91' : "'",   # u'\u2018' left single quote
    '92' : "'",   # u'\u2019' right single quote
    '93' : '"',   # u'\u201C' left double quote
    '94' : '"',   # u'\u201D' right double quote
    '95' : "*"    # u'\u2022' bullet
}

# This is more complex but allows the data to be fixed.
def mixed_decoder(unicodeError):
    errStr = unicodeError[1]
    errLen = unicodeError.end - unicodeError.start
    nextPosition = unicodeError.start + errLen
    errHex = errStr[unicodeError.start:unicodeError.end].encode('hex')
    if errHex in replacement:
        return u'%s' % replacement[errHex], nextPosition
    return u'%s' % errHex, nextPosition  # Comment this line out to get a question mark
    return u'?', nextPosition

codecs.register_error("mixed", mixed_decoder)

xmlText = xmlText.decode("utf-8", "mixed")
Basically I attempt to turn it into UTF-8. For any characters that fail, I just convert them to hex so I can display them or look them up in a table of my own.
This is not pretty, but it does allow me to make sense of messed-up data.
jsbueno's solution is good, but there is no need for the global variable last_position; see:
import codecs

def mixed_decoder(error: UnicodeError) -> (str, int):
    bs: bytes = error.object[error.start: error.end]
    return bs.decode("cp1252"), error.start + 1

codecs.register_error("mixed", mixed_decoder)

a = "maçã".encode("utf-8") + "maçã".encode("cp1252")
# a = b"ma\xc3\xa7\xc3\xa3ma\xe7\xe3"
s = a.decode("utf-8", "mixed")
# s = "maçãmaçã"
This is usually called Mojibake.
There's a nice Python library called ftfy that might solve these issues for you.
Example:
>>> from ftfy import fix_text
>>> fix_text("Ð¨ÐµÐ¿Ð¾Ñ (напоминалки)")
'Шепот (напоминалки)'
Just ran into this today, so here is my problem and my own solution:
# Raw string so the literal \xNN escape sequences are preserved in the text
original_string = r'Notifica\xe7\xe3o de Emiss\xe3o de Nota Fiscal Eletr\xf4nica.'

def mixed_decoding(s):
    output = ''
    ii = 0
    for c in s:
        if ii <= len(s) - 1:
            if s[ii] == '\\' and s[ii+1] == 'x':
                # Turn the 4-character escape sequence (e.g. \xe7) into the real character
                b = s[ii:ii+4].encode('ascii').decode('unicode-escape')
                output = output + b
                ii += 3
            else:
                output = output + s[ii]
        ii += 1
    print(output)
    return output

decoded_string = mixed_decoding(original_string)
Now it prints:
>>> Notificação de Emissão de Nota Fiscal Eletrônica.

Error: time data "b'YYYY/MM/DD" does not match format '%Y/%m/%d' but it does

I'm trying to parse dates from a text file, but executing the script throws an "incorrect data format" error even though the format is correct.
The file is a .txt file with the following structure
2018/02/15 05:00:13 - somestring - anotherstring
2018/02/15 05:00:14 - somestring - anotherstring
2018/02/15 05:00:15 - somestring - anotherstring
... etc
The script splits the file into lines, and each line into fields, one of which is a date and time. I split the date and the time into two separate fields; the time gets converted fine, so the problem is in the date.
This is what I get on execution:
ValueError: time data "b'2018/02/15" does not match format '%Y/%m/%d'
I noticed it prints the string with a "b" in front of it, which, if I'm not mistaken, means it's a bytes literal. I've tried using decode("utf-8") on it, but it throws an exception because str has no decode method.
# The file arrives as one long string, as I get it from a 'cat' bash command via ssh
file = str(stdout.read())    # reads the cat output into a long string
strings = file.split("\\n")  # splits the string into lines
for string in strings:
    fields = string.split(" - ")
    if len(fields) >= 3:
        #dates.append(datetime.strptime(campos[0],"%Y/%m/%d %H:%M:%S")) #Wrong format
        datentime = fields[0].split()
        dates.append(datetime.strptime(datentime[0],"%Y/%m/%d")) #Wrong format
        print(datentime[1])
        dates.append(datetime.strptime(datentime[1],"%H:%M:%S")) #WORKS
I can't figure out why that is happening with the code you gave, so I can't offer a fix for it, but I tested around and this worked for me:
datetime.strptime(str(datentime[0])[2:-1], "%Y/%m/%d")
It removes the b and the quotes from the string. If you still have problems with it, please post how you got that string; maybe something went wrong along the way.
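A different angle, not from the original answers: if stdout here is a file-like object returning bytes (e.g. from an SSH library's exec_command), decoding those bytes avoids the b'...' repr string entirely, so no slicing is needed. A sketch under that assumption:
from datetime import datetime

raw = stdout.read()           # bytes from the remote 'cat'
text = raw.decode("utf-8")    # a real str, no b'...' wrapper
dates = []
for line in text.split("\n"):
    fields = line.split(" - ")
    if len(fields) >= 3:
        datentime = fields[0].split()
        dates.append(datetime.strptime(datentime[0], "%Y/%m/%d"))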
Use try and except:
import datetime

def convertDate(d):
    strptime = datetime.datetime.strptime
    try:
        return strptime(d, "%Y/%m/%d")
    except TypeError:
        return strptime(d.decode("utf-8"), "%Y/%m/%d")

print(convertDate(b'2018/02/15'))
print(convertDate('2018/02/15'))

Python adds extra to crypt result

I'm trying to create a token-based API for communication between a Raspberry Pi and a web server. Right now I'm trying to generate a token with Python.
from Crypto.Cipher import AES
import base64
import os
import time
import datetime
import requests
BLOCK_SIZE = 32
BLOCK_SZ = 14
#!/usr/bin/python
salt = "123456789123" # Zorg dat de salt altijd even lang is! (12 Chars)
iv = "1234567891234567" # Zorg dat de salt altijd even lang is! (16 Chars)
currentDate = time.strftime("%d%m%Y")
currentTime = time.strftime("%H%M")
PADDING = '{'
pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)
secret = salt + currentTime
cipher=AES.new(key=secret,mode=AES.MODE_CBC,IV=iv)
encode = currentDate
encoded = EncodeAES(cipher, encode)
print (encoded)
The problem is that the script's output adds an extra b' to the start of every encoded string, and a ' at the end.
C:\Python36-32>python.exe encrypt.py
b'Qge6lbC+SulFgTk/7TZ0TKHUP0SFS8G+nd5un4iv9iI='
C:\Python36-32>python.exe encrypt.py
b'DTcotcaU98QkRxCzRR01hh4yqqyC92u4oAuf0bSrQZQ='
Hopefully someone can explain what went wrong.
FIXED!
I was able to fix it by decoding it as UTF-8:
sendtoken = encoded.decode('utf-8')
You are running Python 3.6, which uses Unicode (UTF-8) for string literals. I expect that the EncodeAES() function returns an ASCII string, which Python is indicating is a bytestring rather than a Unicode string by prepending the b to the string literal it prints.
You could strip the b out of the output post-Python, or you could print(str(encoded)), which should give you the same characters, since ASCII is valid UTF-8.
EDIT:
What you need to do is decode the bytestring from UTF-8, as mentioned in the answer and in a comment above. I was wrong about str() doing the conversion for you; you need to call decode('UTF-8') on the bytestring you wish to print. That decodes the bytes into a str, which then prints correctly.
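A minimal sketch of the bytes-versus-str distinction described above (illustrative only, not the original token code):
>>> import base64
>>> encoded = base64.b64encode(b"some ciphertext")  # bytes: prints with the b'...' wrapper
>>> encoded
b'c29tZSBjaXBoZXJ0ZXh0'
>>> encoded.decode('utf-8')                         # str: prints without it
'c29tZSBjaXBoZXJ0ZXh0'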

How to print out 0xfb in python

I'm falling into Unicode hell.
My environment is Unix, Python 2.7.3:
LC_CTYPE=zh_TW.UTF-8
LANG=en_US.UTF-8
I'm trying to dump hex-encoded data in a human-readable format; here is the simplified code:
#! /usr/bin/env python
# encoding:utf-8
import sys
s=u"readable\n" # previous result keep in unicode string
s2="fb is not \xfb" # data read from binary file
s += s2
print s # method 1
print s.encode('utf-8') # method 2
print s.encode('utf-8','ignore') # method 3
print s.decode('iso8859-1') # method 4
# method 1-4 display following error message
#UnicodeDecodeError: 'ascii' codec can't decode byte 0xfb
# in position 0: ordinal not in range(128)
f = open('out.txt','wb')
f.write(s)
I just want to print out the 0xfb.
I should describe more here. The key is 's += s2'.
s keeps my previously decoded string, and s2 is the next string that should be appended to s.
If I modify it as follows, the error occurs when writing the file:
s=u"readable\n"
s2="fb is not \xfb"
s += s2.decode('cp437')
print s
f=open('out.txt','wb')
f.write(s)
# UnicodeEncodeError: 'ascii' codec can't encode character
# u'\u221a' in position 1: ordinal not in range(128)
I wish the result of out.txt is
readable
fb is not \xfb
or
readable
fb is not 0xfb
[Solution]
#! /usr/bin/env python
# encoding:utf-8
import sys
import binascii
def fmtstr(s):
    r = ''
    for c in s:
        if ord(c) > 128:
            r = ''.join([r, "\\x" + binascii.hexlify(c)])
        else:
            r = ''.join([r, c])
    return r
s=u"readable"
s2="fb is not \xfb"
s += fmtstr(s2)
print s
f=open('out.txt','wb')
f.write(s)
I strongly suspect that your code is actually erroring out on the previous line: the s += s2 one. s2 is just a series of bytes, which can't be arbitrarily tacked on to a unicode object (which is instead a series of code points).
If you had intended the '\xfb' to represent U+FB, LATIN SMALL LETTER U WITH CIRCUMFLEX, it would have been better to assign it like this instead:
s2 = u"\u00fb"
But you said that you just want to print out \xHH codes for control characters. If you just want it to be something humans can understand which still makes it apparent that special characters are in a string, then repr may be enough. First, don't have s be a unicode object, because you're treating your strings here as a series of bytes, not a series of code points.
s = s.encode('utf-8')
s += s2
print repr(s)
Finally, if you don't want the extra quotes on the outside that repr adds, for nice pretty printing or whatever, there's not a simple builtin way to do that in Python (that I know of). I've used something like this before:
import re

# \x00-\x1f and \x7f-\xff cover the control and non-ASCII bytes we want escaped
controlchars_re = re.compile(r'[\x00-\x1f\x7f-\xff]')

def _show_control_chars(match):
    txt = repr(match.group(0))
    return txt[1:-1]

def escape_special_characters(s):
    return controlchars_re.sub(_show_control_chars, s.replace('\\', '\\\\'))
You can pretty easily tweak the controlchars_re regex to define which characters you care about escaping.
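For example, a quick illustrative run under Python 2 (not from the original answer) renders the stray byte from the question as its escape sequence:
>>> print escape_special_characters("fb is not \xfb")
fb is not \xfb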

Using a Unicode format for Python's `time.strftime()`

I am trying to call Python's time.strftime() function using a Unicode format string:
u'%d\u200f/%m\u200f/%Y %H:%M:%S'
(\u200f is the "Right-To-Left Mark" (RLM).)
However, I am getting an exception that the RLM character cannot be encoded into ASCII:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u200f' in position 2: ordinal not in range(128)
I have tried searching for an alternative but could not find a reasonable one. Is there an alternative to this function, or a way to make it work with Unicode characters?
Many standard library functions still don't support Unicode the way they should. You can use this workaround:
import time
my_format = u'%d\u200f/%m\u200f/%Y %H:%M:%S'
my_time = time.localtime()
time.strftime(my_format.encode('utf-8'), my_time).decode('utf-8')
You can encode the format string as UTF-8 and decode the result:
time.strftime(u'%d\u200f/%m\u200f/%Y %H:%M:%S'.encode('utf-8'), t).decode('utf-8')
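For what it's worth, and going beyond the original answers: under Python 3, str is Unicode throughout, so on a UTF-8 locale the format string can typically be passed to time.strftime() directly:
import time

# Python 3: the RLM character can sit directly in the format string
print(time.strftime('%d\u200f/%m\u200f/%Y %H:%M:%S', time.localtime()))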
You should read from the file as Unicode and then convert it to a datetime.
from datetime import datetime
f = open(LogFilePath, 'r', encoding='utf-8')
# Read first line of log file and remove '\n' from end of it
Log_DateTime = f.readline()[:-1]
You can define Date-time format like this:
fmt = "%Y-%m-%d %H:%M:%S.%f"
But some programming languages like C# don't support it easily, so you can change it to:
fmt = "%Y-%m-%d %H:%M:%S"
Or you can use the following approach (to satisfy the .%f):
Log_DateTime = Log_DateTime + '.000000'
If you have an unrecognized symbol (a Unicode symbol) then you should remove it too.
# Remove an unrecognized symbol at the start of the line (first character)
Log_DateTime = Log_DateTime[1:] + '.000000'
At the end, you should convert the date-time string into a real datetime:
Log_DateTime = datetime.strptime(Log_DateTime, fmt)
Current_Datetime = datetime.now()  # Default format is '%Y-%m-%d %H:%M:%S.%f'
# Calculate the difference between the two datetimes and take suitable action
Current_Log_Diff = (Current_Datetime - Log_DateTime).total_seconds()
