How to simulate a const variable in Python

Hello, I am trying to create a const in Python using the example from Creating constant in Python (the first answer at that link), which uses an instance as a module.
The first file const.py has
# Put in const.py...:
class _const:
    class ConstError(TypeError): pass
    def __setattr__(self, name, value):
        if self.__dict__ in (name):
            raise self.ConstError("Can't rebind const(%s)"%name)
        self.__dict__[name] = value
import sys
sys.modules[__name__]=_const()
The rest goes into test.py, for example:
# that's all -- now any client-code can
import const
# and bind an attribute ONCE:
const.magic = 23
# but NOT re-bind it:
const.magic = 88 # raises const.ConstError
# you may also want to add the obvious __delattr__
Although I have made two changes because I am using Python 3, I still get errors:
Traceback (most recent call last):
File "E:\Const_in_python\test.py", line 4, in <module>
const.magic = 23
File "E:\Const_in_python\const.py", line 5, in __setattr__
if self.__dict__ in (name):
TypeError: 'in <string>' requires string as left operand, not dict
I don't understand what the error at line 5 means. Can anyone explain? Correcting the example would also be nice. Thanks in advance.

This looks weird (where did it come from?)
if self.__dict__ in (name):
shouldn't it be
if name in self.__dict__:
That fixes your example
Python 3.2.3 (default, May 3 2012, 15:51:42)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import const
>>> const.magic = 23
>>> const.magic = 88
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "const.py", line 6, in __setattr__
raise self.ConstError("Can't rebind const(%s)"%name)
const.ConstError: Can't rebind const(magic)
Do you really need this const hack? Lots of Python code seems to manage without it.
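For what it's worth, a lighter-weight convention (my own aside, not part of the original answer) is to rely on naming plus typing.Final, which a type checker such as mypy will enforce even though nothing stops reassignment at runtime:
# constants.py -- a minimal sketch; only a type checker enforces this
from typing import Final

MAGIC: Final[int] = 23  # mypy reports an error on any later reassignment of MAGIC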

This line:
if self.__dict__ in (name):
should be
if name in self.__dict__:
... you want to know if the attribute is in the dict, not if the dict is in the attribute name (which doesn't work, because strings contain strings, not dictionaries).
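For completeness, here is a corrected Python 3 version of const.py with the comparison reversed and the __delattr__ hinted at in the recipe added (a sketch of the same recipe, not a drop-in library):
# const.py -- corrected for Python 3
import sys

class _const:
    class ConstError(TypeError):
        pass

    def __setattr__(self, name, value):
        if name in self.__dict__:
            raise self.ConstError("Can't rebind const(%s)" % name)
        self.__dict__[name] = value

    def __delattr__(self, name):
        if name in self.__dict__:
            raise self.ConstError("Can't unbind const(%s)" % name)
        raise AttributeError(name)

sys.modules[__name__] = _const()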

Maybe kkconst (on PyPI) is what you are looking for. It supports str, int, float, and datetime, and a const field instance keeps the behavior of its base type. Like an ORM model definition, BaseConst is a constant helper that manages const fields.
For example:
from __future__ import print_function
from kkconst import (
    BaseConst,
    ConstFloatField,
)

class MathConst(BaseConst):
    PI = ConstFloatField(3.1415926, verbose_name=u"Pi")
    E = ConstFloatField(2.7182818284, verbose_name=u"mathematical constant")  # Euler's number
    GOLDEN_RATIO = ConstFloatField(0.6180339887, verbose_name=u"Golden Ratio")

magic_num = MathConst.GOLDEN_RATIO
assert isinstance(magic_num, ConstFloatField)
assert isinstance(magic_num, float)

print(magic_num)               # 0.6180339887
print(magic_num.verbose_name)  # Golden Ratio

# MathConst.GOLDEN_RATIO = 1024  # raises an error, because assignment is allowed only once
For more detailed usage, see the PyPI or GitHub pages. The same answer is also posted at Creating constant in Python.


How does one comply with the mypy type 'SupportsWrite[str]'?

I have a 'smart' open function that opens a variety of files and returns an IO-ish object type:
def sopen(anything_at_all: str, mode: str) -> FileIO:
    ...
And I use it in a print statement like:
with sopen('footxt.gz', mode = 'w+') as fout:
    print("hello, world!", file=fout)
Then, when analyzing this code with mypy 0.812, I get the following mystery error:
Argument "fout" to "print" has incompatible type "FileIO"; expected "Optional[SupportsWrite[str]]"
OK, great: SupportsWrite is definitely better than FileIO. There is only one problem: when I adapt my code to use _typeshed.SupportsWrite, nothing gets better...
def sopen(anything_at_all: str, mode: str) \
        -> Union[SupportsWrite[str], SupportsRead[str]]:
    ...
mypy wants exactly Optional[SupportsWrite]:
Argument "fout" to "print" has incompatible type "Union[SupportsWrite[str], SupportsRead[str]]"; expected "Optional[SupportsWrite[str]]"
Next I try casting and building some sort of type enforcement, but while trying out my caster in the interpreter to see what errors fall out, I hit this:
>>> from _typeshed import SupportsRead, SupportsWrite
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named '_typeshed'
And now the fundamental problem is: how does one comply, in this situation, with mypy's wishes?
TL;DR Use typing.IO instead of FileIO. typing.IO supports all the return types that the built-in open might return.
print itself annotates its file argument as Optional[SupportsWrite[str]], so mypy is correct.
To fix the missing _typeshed module error (also correct: the module is only available while type checking, not when the interpreter is executing code), you can use the if TYPE_CHECKING trick [1] and then use string annotations.
The below almost satisfies mypy:
from typing import Optional, TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import SupportsWrite

def sopen(anything_at_all: str, mode: str) -> 'Optional[SupportsWrite[str]]':
    ...

with sopen('footxt.gz', mode = 'w+') as fout:
    print("hello, world!", file=fout)
Feeding this to mypy results with
test.py:9: error: Item "SupportsWrite[str]" of "Optional[SupportsWrite[str]]" has no attribute "__enter__"
test.py:9: error: Item "None" of "Optional[SupportsWrite[str]]" has no attribute "__enter__"
test.py:9: error: Item "SupportsWrite[str]" of "Optional[SupportsWrite[str]]" has no attribute "__exit__"
test.py:9: error: Item "None" of "Optional[SupportsWrite[str]]" has no attribute "__exit__"
Enter typing.IO.
Instead of messing with SupportsWrite directly, you can simply use typing.IO (which also happens to match open's return types). The following fully satisfies mypy:
from typing import IO

def sopen(anything_at_all: str, mode: str) -> IO:
    ...

with sopen('footxt.gz', mode = 'w+') as fout:
    print("hello, world!", file=fout)
[1] TYPE_CHECKING is a constant which is False at runtime and is only treated as True by mypy and other type-analysis tools.
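If you know sopen always returns a text-mode handle, you can go one step further and parameterize the annotation; the IO[str] variant below is a sketch of my own, not part of the original answer:
from typing import IO

def sopen(anything_at_all: str, mode: str) -> IO[str]:
    # Placeholder body; the real function would dispatch on the file type.
    return open(anything_at_all, mode)

with sopen('footxt.gz', mode = 'w+') as fout:
    print("hello, world!", file=fout)  # IO[str] satisfies SupportsWrite[str]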

What is the type of Python's code object?

Is there a way I can compare the type of a code object constructed by compile, or a function's __code__, to the actual code object type?
This works fine:
>>> code_obj = compile("print('foo')", '<string>', 'exec')
>>> code_obj
<code object <module> at 0x7fb038c1ab70, file "<string>", line 1>
>>> print(type(code_obj))
code
>>> def foo(): return None
>>> type(foo.__code__) == type(code_obj)
True
But I can't do this:
>>> type(foo.__code__) == code
NameError: name 'code' is not defined
but where do I import code from?
It doesn't seem to be from code.py. It's defined in the CPython C file but I couldn't find the Python interface type for it.
You're after CodeType which can be found in types.
>>> from types import CodeType
>>> def foo(): pass
...
>>> type(foo.__code__) == CodeType
True
Note that there's nothing special about it; it's just the result of calling type on a function's __code__.
Since it's in the standard library, you can be sure it will keep working even if the way code objects are exposed changes.
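A quick check, not from the original answer, that this also covers code objects produced by compile:
import types

code_obj = compile("print('foo')", '<string>', 'exec')

# Both compile() results and function __code__ attributes are types.CodeType instances
assert isinstance(code_obj, types.CodeType)

def foo():
    return None

assert type(foo.__code__) is types.CodeType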

mypy typing leads to unexpected traceback

I'm trying to use a work-around for the problems described in this GitHub issue ("Class with function fields incorrectly thinks the first argument is self").
from dataclasses import dataclass
from typing import TypeVar, Generic, Any, Iterable, List

T = TypeVar("T")

# See https://github.com/python/mypy/issues/5485
@dataclass
class Box(Generic[T]):
    inner: T

    @property
    def unboxed(self) -> T:
        return self.inner
However, upon merely importing the code above from another module, I run into a traceback like this:
(py) sugarline:~/src/oss/ormsnack write-compile-eval
Traceback (most recent call last):
[.....]
File "/Users/jacob/src/oss/ormsnack/ormsnack/ng_desc.py", line 8, in <module>
from kingston.kind import Box # type: ignore
File "kingston/kind.py", line 12, in <module>
AttributeError: attribute '__dict__' of 'type' objects is not writable
Googling only lands me on old bugs that seem to have gone stale (https://bugs.python.org/issue38099, https://github.com/man-group/arctic/issues/17, etc.).
Is anyone able to figure out a work-around?

Saving and loading objects and using pickle

I'm trying to save and load objects using the pickle module.
First I declare my objects:
>>> class Fruits:pass
...
>>> banana = Fruits()
>>> banana.color = 'yellow'
>>> banana.value = 30
After that I open a file called 'Fruits.obj' (previously I created a new .txt file and renamed it to 'Fruits.obj'):
>>> import pickle
>>> filehandler = open(b"Fruits.obj","wb")
>>> pickle.dump(banana,filehandler)
After doing this, I close my session, begin a new one, and enter the following (trying to access the object that is supposed to have been saved):
file = open("Fruits.obj",'r')
object_file = pickle.load(file)
But I have this message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\pickle.py", line 1365, in load
encoding=encoding, errors=errors).load()
ValueError: read() from the underlying stream did not return bytes
I don't know what to do because I don't understand this message.
Does anyone know how I can load my object 'banana'?
Thank you!
EDIT:
As some of you suggested, I put:
>>> import pickle
>>> file = open("Fruits.obj",'rb')
There was no problem, but the next thing I put was:
>>> object_file = pickle.load(file)
And I got this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\pickle.py", line 1365, in load
encoding=encoding, errors=errors).load()
EOFError
As for your second problem:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\pickle.py", line 1365, in load
encoding=encoding, errors=errors).load()
EOFError
After you have read the contents of the file, the file pointer will be at the end of the file - there will be no further data to read. You have to rewind the file so that it will be read from the beginning again:
file.seek(0)
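As a small illustration of the rewind (my own sketch, assuming 'Fruits.obj' already contains one pickled object):
import pickle

with open("Fruits.obj", "rb") as f:
    first = pickle.load(f)   # reads the pickled object, leaving the file pointer at EOF
    f.seek(0)                # rewind to the beginning of the file
    second = pickle.load(f)  # a second load now succeeds instead of raising EOFError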
What you usually want to do, though, is to use a context manager to open the file and read data from it. This way the file will be closed automatically after the block finishes executing, which also helps you organize your file operations into meaningful chunks.
Finally, cPickle is a faster implementation of the pickle module in C. So:
In [1]: import _pickle as cPickle
In [2]: d = {"a": 1, "b": 2}
In [4]: with open(r"someobject.pickle", "wb") as output_file:
   ...:     cPickle.dump(d, output_file)
   ...:
# output_file will be closed at this point, preventing you from accessing it any further
In [5]: with open(r"someobject.pickle", "rb") as input_file:
   ...:     e = cPickle.load(input_file)
   ...:
In [7]: print e
------> print(e)
{'a': 1, 'b': 2}
The following works for me:
class Fruits: pass
banana = Fruits()
banana.color = 'yellow'
banana.value = 30
import pickle
filehandler = open("Fruits.obj","wb")
pickle.dump(banana,filehandler)
filehandler.close()
file = open("Fruits.obj",'rb')
object_file = pickle.load(file)
file.close()
print(object_file.color, object_file.value, sep=', ')
# yellow, 30
You're forgetting to read it as binary too.
In your write part you have:
open(b"Fruits.obj","wb") # Note the wb part (Write Binary)
In the read part you have:
file = open("Fruits.obj",'r') # Note the r part, there should be a b too
So replace it with:
file = open("Fruits.obj",'rb')
And it will work :)
As for your second error, it is most likely caused by not closing/syncing the file properly.
Try this bit of code to write:
>>> import pickle
>>> filehandler = open(b"Fruits.obj","wb")
>>> pickle.dump(banana,filehandler)
>>> filehandler.close()
And this (unchanged) to read:
>>> import pickle
>>> file = open("Fruits.obj",'rb')
>>> object_file = pickle.load(file)
A neater version would be using the with statement.
For writing:
>>> import pickle
>>> with open('Fruits.obj', 'wb') as fp:
...     pickle.dump(banana, fp)
For reading:
>>> import pickle
>>> with open('Fruits.obj', 'rb') as fp:
...     banana = pickle.load(fp)
Always open in binary mode, in this case
file = open("Fruits.obj",'rb')
You can use anycache to do the job for you. Assuming you have a function myfunc which creates the instance:
from anycache import anycache

class Fruits: pass

@anycache(cachedir='/path/to/your/cache')
def myfunc():
    banana = Fruits()
    banana.color = 'yellow'
    banana.value = 30
    return banana
Anycache calls myfunc the first time and pickles the result to a file in cachedir, using a unique identifier (derived from the function name and its arguments) as the filename.
On any subsequent run the pickled object is loaded.
If the cachedir is preserved between Python runs, the pickled object from the previous Python run is used.
The function arguments are also taken into account.
A refactored implementation works likewise:
from anycache import anycache

class Fruits: pass

@anycache(cachedir='/path/to/your/cache')
def myfunc(color, value):
    fruit = Fruits()
    fruit.color = color
    fruit.value = value
    return fruit
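A usage sketch for the refactored version (the cache directory above is just a placeholder path): the first call computes and pickles the result, and later calls with the same arguments load it from the cache:
# First call with these arguments: myfunc runs and the result is pickled into cachedir.
banana = myfunc('yellow', 30)

# Later calls with the same arguments (even in a new Python session, as long as
# cachedir is preserved) load the pickled Fruits instance instead of re-running myfunc.
banana_again = myfunc('yellow', 30)
print(banana_again.color, banana_again.value)  # yellow 30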
You didn't open the file in binary mode.
open("Fruits.obj",'rb')
Should work.
For your second error, the file is most likely empty, which means you inadvertently emptied it, used the wrong filename, or something similar.
(This is assuming you really did close your session. If not, then it's because you didn't close the file between the write and the read).
I tested your code, and it works.
It seems you want to save your class instances across sessions, and using pickle is a decent way to do this. However, there's a package called klepto that abstracts the saving of objects behind a dictionary interface, so you can choose to pickle objects and save them to a file (as shown below), pickle them and save them to a database, use json instead of pickle, or pick from many other options. The nice thing about klepto is that, by abstracting everything behind a common interface, it saves you from having to remember the low-level details of saving via pickling to a file or otherwise.
Note that it works for dynamically added class attributes, which pickle cannot handle...
dude@hilbert>$ python
Python 2.7.6 (default, Nov 12 2013, 13:26:39)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from klepto.archives import file_archive
>>> db = file_archive('fruits.txt')
>>> class Fruits: pass
...
>>> banana = Fruits()
>>> banana.color = 'yellow'
>>> banana.value = 30
>>>
>>> db['banana'] = banana
>>> db.dump()
>>>
Then we restart…
dude@hilbert>$ python
Python 2.7.6 (default, Nov 12 2013, 13:26:39)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from klepto.archives import file_archive
>>> db = file_archive('fruits.txt')
>>> db.load()
>>>
>>> db['banana'].color
'yellow'
>>>
Klepto works on Python 2 and Python 3.
Get the code here:
https://github.com/uqfoundation

Django '<object> matching query does not exist' when I can see it in the database

My model looks like this:
class Staff(models.Model):
    StaffNumber = models.CharField(max_length=20, primary_key=True)
    NameFirst = models.CharField(max_length=30, blank=True, null=True)
    NameLast = models.CharField(max_length=30)
    SchoolID = models.CharField(max_length=10, blank=True, null=True)
    AutocompleteName = models.CharField(max_length=100, blank=True, null=True)
I'm using MySQL, in case that matters.
From the manage.py shell:
root#django:/var/www/django-sites/apps# python manage.py shell
Python 2.5.2 (r252:60911, Jan 20 2010, 21:48:48)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from disciplineform.models import Staff
>>> s = Staff.objects.all()
>>> len(s)
406
So I know there are 406 "Staff" objects in there. I can also see them in the database. I check one of the values:
>>> s[0].NameFirst
u'"ANDREA"'
That also matches what I see in the database. Now I try to 'get' this object.
>>> a = Staff.objects.get(NameFirst='ANDREA')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/var/lib/python-support/python2.5/django/db/models/manager.py", line 93, in get
return self.get_query_set().get(*args, **kwargs)
File "/var/lib/python-support/python2.5/django/db/models/query.py", line 309, in get
% self.model._meta.object_name)
DoesNotExist: Staff matching query does not exist.
Huh? This is happening for all the values of all the columns I've tested. I'm getting the same result in my view.py code.
I'm obviously doing something dumb. What is it?
Try
a = Staff.objects.get(NameFirst=u'"ANDREA"')
The u tells Python/Django it's a Unicode string, not a plain old str, and in your s[0].NameFirst sample, it's showing the value as containing double quotes.
I ran into the same issue, here's the solution:
from django.db import reset_queries, close_connection
close_connection()
reset_queries()
The name was stored in the database with extra, redundant double quotes. So, if you want to fetch that record, the correct code is:
a = Staff.objects.get(NameFirst='"ANDREA"')
…instead of:
a = Staff.objects.get(NameFirst='ANDREA')
If you can't be sure that the query will return a result, you must add exception handling, something like this:
try:
    a = Staff.objects.get(NameFirst='"ANDREA"')
except Exception:
    a = None
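A slightly narrower variant (my suggestion, not part of the original answer) catches only the exception Django actually raises here, which the traceback in the question shows as DoesNotExist:
try:
    a = Staff.objects.get(NameFirst='"ANDREA"')
except Staff.DoesNotExist:
    # Only the "matching query does not exist" case is handled;
    # other database errors still propagate.
    a = None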
I have run into similar issues before.
I'm not entirely sure why, but the raw 'get' tends to give me problems. So I usually end up using 'filter' instead, then grabbing the first result.
a = Staff.objects.filter(NameFirst='ANDREA')
result = a[0]
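As a side note of mine (not from the original answer), newer Django versions (1.6+) also offer QuerySet.first(), which returns None instead of raising when nothing matches:
# Returns the first matching Staff row, or None if the queryset is empty.
result = Staff.objects.filter(NameFirst='ANDREA').first()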
A similar thing happened to me. I was able to solve it by putting something in the db for the query to return.
This isn't exactly like yours, but it was the first hit on Google, so I thought I'd share.
