Say I want to ask many users to give me their ID number and their name, then save it.
Then I can call any ID and get the name. Can someone tell me how I can do that by making a class and using the __init__ method?
The "asking" part, as @Zonda's answer says, could use raw_input (or Python 3's input) at a terminal ("command window" in Windows); but it could also use a web application, or a GUI application -- you don't really tell us enough about where the users will be (on the same machine you're using to run your code, or at a browser while your code runs on a server?) and whether GUI or textual interfaces are preferred, so it's impossible to give more precise advice.
For storing and retrieving the data, a SQL engine as mentioned in @aaron's answer is a possibility (though some might consider it overkill if this is all you want to save), but his suggested alternative of using pickle directly makes little sense -- I would instead recommend the shelve module, which offers (just about) the equivalent of a dictionary persisted to disk. (Keys, however, can only be strings -- but even if your IDs are integers instead, that's no problem: just use str(someid) as the key both to store and to retrieve.)
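For example, here is a minimal sketch of the shelve approach; the filename is just a placeholder:

```python
import shelve

# Store a name under a stringified ID, then read it back.
# "users.shelf" is a hypothetical filename.
with shelve.open("users.shelf") as db:
    db[str(42)] = "Alice"

with shelve.open("users.shelf") as db:
    retrieved = db[str(42)]

print(retrieved)  # Alice
```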
In a truly weird comment I see you ask:
is there any way to do it by making a class? and using the __init__ method?
Of course there is a way to do "in a class, using the __init__ method" most anything you can do in a function -- at worst, you write all the code that would (in a sensible program) be in the function, in the __init__ method instead (in lieu of return, you stash the result in self.result and then get the .result attribute of the weirdly useless instance you have thus created).
But it makes sense to use a class only when you need special methods, or want to associate state and behavior, and you don't at all explain why either condition should apply here, which is why I call your request "weird" -- you provide absolutely no context to explain why you would at all want that in lieu of functions.
If you can clarify your motivations (ideally by editing your question, or, even better, asking a separate one, but not by extending your question in sundry comments!-) maybe it's possible to help you further.
To get data from a user, use this code (Python 3):
ID = input("Enter your id: ")
In Python 2, replace input with raw_input.
The same can be done to get the user's name.
This will save it to a variable, which can be used later in the program. If you want to save it to a file, use the following code:
w = open('/path/to/file.txt', 'w')
w.write("%s,%s\n" % (ID, name))
w.close()
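To get a name back by ID later, you can read the file line by line -- a sketch assuming each line stores `id,name` separated by a comma (the filename and sample data are made up):

```python
def lookup(path, wanted_id):
    # Scan an "id,name"-per-line file for the wanted ID.
    with open(path) as f:
        for line in f:
            user_id, name = line.rstrip("\n").split(",", 1)
            if user_id == wanted_id:
                return name
    return None

# hypothetical data file
with open("users.txt", "w") as w:
    w.write("123,Alice\n")
    w.write("456,Bob\n")

print(lookup("users.txt", "456"))  # Bob
```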
If you're not concerned with security, you can use the pickle module to pickle a dictionary.
import pickle

data = {}
# whatever you do to collect the data
data[id] = name
with open(filename, 'wb') as f:
    pickle.dump(data, f)

with open(filename, 'rb') as f:
    new_data = pickle.load(f)
new_name = new_data[id]
# new_name == name
Otherwise, use the sqlite3 module:
import sqlite3

conn = sqlite3.connect(filename)
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS names (id INTEGER, name TEXT)')
# do whatever you do to get the data
cur.execute('INSERT INTO names VALUES (?,?)', (id, name))
conn.commit()
# to get the name later by id you would do...
cur.execute('SELECT name FROM names WHERE id = ?', (id,))
name = cur.fetchone()[0]
I'm teaching myself how to write a basic game in Python (text based -- not using pygame). (Note: I haven't actually gotten to the "game" part per se, because I wanted to make sure I have the basic core structure figured out first.)
I'm at the point where I'm trying to figure out how I might implement a save/load scenario so a game session could persist beyond a single running of the program. I did a bit of searching and everything seems to point to pickling or shelving as the best solutions.
My test scenario is for saving and loading a single instance of a class. Specifically, I have a class called Characters(), and (for testing's sake) a single instance of that class assigned to a variable called pc. Instances of the Characters class have an attribute called name which is originally set to "DEFAULT", but will be updated based on user input at the initial setup of a new game. For example:
class Characters(object):
    def __init__(self):
        self.name = "DEFAULT"

pc = Characters()
pc.name = "Bob"
I also have (or will have) a large number of functions that refer to various instances using the variables they are assigned to. For example, a made-up one as a simplified example might be:
def print_name(character):
    print character.name

def run():
    print_name(pc)

run()
I plan to have a save function that will pack up the pc instance (among other info) with their current info (ex: with the updated name). I also will have a load function that would allow a user to play a saved game instead of starting a new one. From what I read, the load could work something like this:
*assuming info was saved to a file called "save1"
*assuming the pc instance was shelved with "pc" as the key
import shelve
mysave = shelve.open("save1")
pc = mysave["pc"]
My question is, is there a way for the shelve load to "remember" the variable name associated with the instance, and automatically do that << pc = mysave["pc"] >> step? Or a way for me to store that variable name as a string (e.g. as the key) and somehow use that string to create the variable with the correct name (pc)?
I will need to "save" a LOT of instances, and can automate that process with a loop, but I don't know how to automate the unloading to specific variable names. Do I really have to re-assign each one individually and explicitly? I need to assign the instances back to the appropriate variable names because I have a bunch of core functions that refer to specific instances using variable names (like the example I gave above).
Ideas? Is this possible, or is there an entirely different solution that I'm not seeing?
Thanks!
~ribs
Sure, it's possible to do something like that. Since a shelf itself is like a dictionary, just save all the character instances in a real dictionary instance inside it using their variable's name as the key. For example:
class Character(object):
    def __init__(self, name="DEFAULT"):
        self.name = name

pc = Character("Bob")

def print_name(character):
    print character.name

def run():
    print_name(pc)

run()

import shelve
mysave = shelve.open("save1")
# save all Character instances without the default name
mysave["all characters"] = {varname: value for varname, value in
                            globals().iteritems() if
                            isinstance(value, Character) and
                            value.name != "DEFAULT"}
mysave.close()

del pc

mysave = shelve.open("save1")
globals().update(mysave["all characters"])
mysave.close()

run()
There's something I'm struggling to understand with SQLAlchemy from its documentation and tutorials.
I see how to autoload classes from a DB table, and I see how to design a class and create from it (declaratively or using the mapper()) a table that is added to the DB.
My question is how does one write code that both creates the table (e.g. on first run) and then reuses it?
I don't want to have to create the database with one tool or one piece of code and have separate code to use the database.
Thanks in advance,
Peter
create_all() does not do anything if a table exists already, so just call it as soon as you set up your engine or connection.
(Note that if you change your table schema, create_all() will not update it! So you still need "another program" to do that.)
This is the usual pattern:
def createEngine(metadata, dsn, **args):
    engine = create_engine(dsn, **args)
    metadata.create_all(engine)
    return engine

def doStuff(engine):
    res = engine.execute('select * from mytable')
    # etc etc

def main():
    engine = createEngine(metadata, 'sqlite:///:memory:')
    doStuff(engine)

if __name__ == '__main__':
    main()
I think you're perhaps over-thinking the situation. If you want to create the database afresh, you normally just call Base.metadata.create_all() or equivalent, and if you don't want to do that, you don't call it.
You could try calling it every time and handling the exception if it goes wrong, assuming that the database is already set up.
Or you could try querying for a certain table and if that fails, call create_all() to put everything in place.
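The check-then-create idea can be sketched with the stdlib sqlite3 module rather than SQLAlchemy itself (the table and column names here are made up for illustration):

```python
import sqlite3

def ensure_schema(conn):
    # Query the catalog and create the table only if it is absent.
    cur = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name='mytable'")
    if cur.fetchone() is None:
        conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, label TEXT)")

conn = sqlite3.connect(":memory:")
ensure_schema(conn)  # first run: creates the table
ensure_schema(conn)  # second run: no-op, table already exists
conn.execute("INSERT INTO mytable (label) VALUES ('hello')")
rows = conn.execute("SELECT label FROM mytable").fetchall()
print(rows)  # [('hello',)]
```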
Every other part of your app should work in the same way whether you perform the db creation or not.
I am writing a script that requires interacting with several databases (not concurrently). In order to facilitate this, I am maintaining the db-related information (connections etc.) in a dictionary. As an aside, I am using SQLAlchemy for all interaction with the db. I don't know whether that is relevant to this question or not.
I have a function to set up the pool. It looks somewhat like this:
def setupPool():
    global pooled_objects
    for name in NAMES:
        engine = create_engine("postgresql+psycopg2://postgres:pwd@localhost/%s" % name)
        metadata = MetaData(engine)
        conn = engine.connect()
        tbl = Table('my_table', metadata, autoload=True)
        info = {'db_connection': conn, 'table': tbl}
        pooled_objects[name] = info
I am not sure if there are any gotchas in the code above, since I am using the same variable names, and it's not clear (to me at least) how the underlying references to the resources (connections) are being handled. For example, will creating another engine (to a different db) and assigning it to the engine variable cause the previous instance to be 'harvested' by the GC (since no code is using that reference yet -- the pool is still being set up)?
In short, is the code above OK?, and if not, why not - i.e. how may I fix it with respect to the issues mentioned above?
The code you have is perfectly good.
Just because you use the same variable name does not mean you are overriding (or freeing) another object that was assigned to that variable. In fact, you can look at the names as temporary labels to your objects.
Now, you store the final objects in the global dictionary pooled_objects, which means that until your program is done or you delete data from there explicitly, the GC is not going to free them.
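A small sketch of why this is safe, using sqlite3 connections as stand-ins for the engines and connections in the question:

```python
import sqlite3

pooled_objects = {}
for name in ("alpha", "beta"):  # hypothetical database names
    conn = sqlite3.connect(":memory:")  # stand-in for create_engine(...)
    pooled_objects[name] = conn  # the dict keeps each object alive

# Rebinding `conn` on each iteration did not free the earlier connection;
# both remain distinct, live objects reachable through the dictionary.
print(pooled_objects["alpha"] is pooled_objects["beta"])  # False
```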
I want to know if db.run_in_transaction() acts as a lock for datastore operations and helps in the case of concurrent access to the same entity.
In the following code, is it guaranteed that concurrent access will not cause a race, that is, that it will never overwrite an existing entity instead of creating a new one?
Is db.run_in_transaction() the correct/best way to do so?
In the following code I'm trying to create a new unique entity:
def txn(charmer=None):
    new = None
    key = my_magic() + random_part()
    sk = Snake.get_by_name(key)
    if not sk:
        new = Snake(key_name=key, charmer=charmer)
        new.put()
    return new

db.run_in_transaction(txn, charmer)
That is a safe method. Should the same name get generated twice, only one entity would be created.
It sounds like you have already looked at the transactions documentation. There is also a more detailed description.
Check out the docs (specifically the equivalent code) on Model.get_or_insert; it answers exactly the question you are asking:
The get and subsequent (possible) put are wrapped in a transaction to ensure atomicity. This means that get_or_insert() will never overwrite an existing entity, and will insert a new entity if and only if no entity with the given kind and name exists.
What you've done is right and sort of duplicates Model.get_or_insert, as Robert already explained.
I don't know if this can be called a 'lock'... the way this works is optimistic concurrency - the operation will execute assuming that no one else is trying to do the same thing at the same time, and if someone is, it will give you an exception. You'll need to figure out what you want to do in that case. Maybe ask the user to choose a new name?
I have a method in my Python code that returns a tuple - a row from a SQL query. Let's say it has three fields: (jobId, label, username)
For ease of passing it around between functions, I've been passing the entire tuple as a variable called 'job'. Eventually, however, I want to get at the bits, so I've been using code like this:
(jobId, label, username) = job
I've realised, however, that this is a maintenance nightmare, because now I can never add new fields to the result set without breaking all of my existing code. How should I have written this?
Here are my two best guesses:
(jobId, label, username) = (job[0], job[1], job[2])
...but that doesn't scale nicely when you have 15-20 fields
or to convert the results from the SQL query to a dictionary straight away and pass that around (I don't have control over the fact that it starts life as a tuple, that's fixed for me)
@Staale
There is a better way:
job = dict(zip(keys, values))
I'd say that a dictionary is definitely the best way to do it. It's easily extensible, allows you to give each value a sensible name, and Python has a lot of built-in language features for using and manipulating dictionaries. If you need to add more fields later, all you need to change is the code that converts the tuple to a dictionary and the code that actually makes use of the new values.
For example:
job={}
job['jobid'], job['label'], job['username']=<querycode>
This is an old question, but...
I'd suggest using a named tuple in this situation: collections.namedtuple
This is the part, in particular, that you'd find useful:
Subclassing is not useful for adding new, stored fields. Instead, simply create a new named tuple type from the _fields attribute.
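A short sketch of how that looks, with field names taken from the question:

```python
from collections import namedtuple

Job = namedtuple("Job", ["jobid", "label", "username"])
job = Job(1, "backup", "alice")

print(job.username)            # attribute access: alice
jobid, label, username = job   # still unpacks like a plain tuple

# Adding fields later: build a new type from _fields instead of subclassing.
JobV2 = namedtuple("JobV2", Job._fields + ("priority",))
```

Existing code that unpacks the tuple keeps working, while new code can use the field names.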
Perhaps this is overkill for your case, but I would be tempted to create a "Job" class that takes the tuple as its constructor argument and has respective properties on it. I'd then pass instances of this class around instead.
I would use a dictionary. You can convert the tuple to a dictionary this way:
values = <querycode>
keys = ["jobid", "label", "username"]
job = dict([[keys[i], values[i]] for i in xrange(len(values))])
This will first create an array [["jobid", val1], ["label", val2], ["username", val3]] and then convert that to a dictionary. If the result order or count changes, you just need to change the list of keys to match the new result.
PS: I'm still fresh on Python myself, so there might be better ways of doing this.
An old question, but since no one mentioned it I'll add this from the Python Cookbook:
Recipe 81252: Using dtuple for Flexible Query Result Access
This recipe is specifically designed for dealing with database results, and the dtuple solution allows you to access the results by name OR index number. This avoids having to access everything by subscript which is very difficult to maintain, as noted in your question.
With a tuple it will always be a hassle to add or change fields. You're right that a dictionary will be much better.
If you want something with slightly friendlier syntax you might want to take a look at the answers this question about a simple 'struct-like' object. That way you can pass around an object, say job, and access its fields even more easily than a tuple or dict:
job.jobId, job.username = jobId, username
If you're using the MySQLdb package, you can set up your cursor objects to return dicts instead of tuples.
import MySQLdb, MySQLdb.cursors
conn = MySQLdb.connect(..., cursorclass=MySQLdb.cursors.DictCursor)
cur = conn.cursor() # a DictCursor
cur2 = conn.cursor(cursorclass=MySQLdb.cursors.Cursor) # a "normal" tuple cursor
How about this:
class TypedTuple:
    def __init__(self, fieldlist, items):
        self.fieldlist = fieldlist
        self.items = items

    def __getattr__(self, field):
        return self.items[self.fieldlist.index(field)]
You could then do:
j = TypedTuple(["jobid", "label", "username"], job)
print j.jobid
It should be easy to swap self.fieldlist.index(field) with a dictionary lookup later on... just edit your __init__ method! Something like @Staale's answer does.
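That swap might look like this sketch, which precomputes a name-to-position dictionary in __init__ instead of calling list.index() on every attribute access:

```python
class TypedTuple:
    def __init__(self, fieldlist, items):
        # Precompute a name -> position mapping once.
        self._index = {name: i for i, name in enumerate(fieldlist)}
        self.items = items

    def __getattr__(self, field):
        try:
            return self.items[self._index[field]]
        except KeyError:
            raise AttributeError(field)

j = TypedTuple(["jobid", "label", "username"], (1, "backup", "alice"))
print(j.username)  # alice
```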