The behavior of Django sessions changes between "standard" view code and test code, making it unclear how to write test code that uses sessions. Googling this yields two relevant discussions about the issue:
Easier manipulation of sessions by test client
test.Client.session.save() raises error for anonymous users
I'm confused because the two tickets take different approaches to the problem, yet both were Accepted. I assume this means they were patched and the behavior is now different. I also don't know which versions these patches apply to.
If I'm writing a unit test in Django 1.0, how would I set up my session store for sessions to work as they do in the browser?
I don't quite understand what you mean by saying the behavior changes between "standard" view code and test code; maybe you could elaborate on that.
But regarding how to test sessions, there are a few approaches.
You have to understand how Django sessions work: read the unit tests for the session backend you use in your application. That covers the server side.
You probably also need to capture a few exchanges between browser and server (using Firebug, for example).
The issue for you looks like you are not passing the session ID you get when you log in back to the server on subsequent requests (it normally travels in a cookie, or sometimes in GET/POST parameters).
The important thing is to understand how sessions work over HTTP; once you get that, you will have a clear idea of what is happening and can reason about it accordingly.
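For the concrete "how do I set it up in a test" part, this is roughly the pattern once a user is logged in. It works in later Django versions; the anonymous-user case is exactly what the second ticket above is about, so in 1.0 you may still need its workaround, and the usernames here are made up:

from django.contrib.auth.models import User
from django.test import TestCase

class SessionValueTest(TestCase):
    def test_value_set_in_session_is_visible_later(self):
        # 'alice'/'secret' are made-up credentials for this sketch
        User.objects.create_user('alice', 'alice@example.com', 'secret')
        self.client.login(username='alice', password='secret')
        # Once logged in, the test client carries a session cookie, so
        # client.session is a real session store we can edit and save.
        session = self.client.session
        session['wizard_step'] = 2
        session.save()
        # Subsequent requests from this client send the same cookie, so
        # views (and re-reading client.session) see the saved value.
        self.assertEqual(self.client.session['wizard_step'], 2)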
I am working on my scraping project, which requires logging in to the page, and two questions just appeared in front of me. OK, maybe two blocks of questions. (Please have mercy on me, as I am a beginner learning new things and have probably already designed the whole project poorly.)
The design is: as the page requires login, I wanted to use Python's requests.Session() object and keep it inside a "with" context manager to stay logged in.
So my first question is:
1. If I want to have, let's say, a main.php file which will call a Python file with the login function (the one with the context manager), do I have to keep the rest of the functions in that same Python file (all the functions for the specific scraping), more precisely all inside this "with" block? What if I want to have each important function in a separate Python file (one for login.py, another for update.py, table.py, etc.)? If I call functions other than the login function, how will the session still be valid so that I won't get logged out? Or should I just call the functions with the "s" object somehow?
Second question:
2. It's related to the sessions. I went through some questions and am not sure whether I found the correct answer. When I logged into the page, I saw a session ID in the Network tab in Chrome. But when I checked the session ID via what_in_session = s.cookies.get_dict(),
the ID was different. When I moved around the page, the session ID in the Network tab stayed the same; however, the one from Python was different every time.
Am I doing something wrong, or is this correct behavior? I found this, but I am not sure: Why are the IDs different in each request?
I guess that once the "with requests.Session()" block is used, I don't need to care about the rest? Then, back to the first question, how do I manage these Python functions?
I hope you'll understand the questions (if not, I can try to rephrase them).
Many thanks.
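Regarding the first question: you don't have to keep everything in one file, or even inside the "with" block of a single function. As long as every function receives the same Session object, the login cookies travel with it. A rough sketch, where all URLs, field names, and module names are made up:

# login.py (hypothetical)
def do_login(s, username, password):
    # the cookies set by the login response are stored on the session object s
    s.post('https://example.com/login.php',
           data={'user': username, 'pass': password})

# update.py (hypothetical)
def fetch_updates(s):
    # same session object -> the login cookies are sent automatically
    return s.get('https://example.com/update.php').text

# main.py (hypothetical)
import requests
import login
import update

with requests.Session() as s:
    login.do_login(s, 'someuser', 'somepassword')
    html = update.fetch_updates(s)

You stay logged in for as long as the server keeps that session cookie valid; crossing file boundaries doesn't log you out.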
This is a bit of a strange question, I know, but bear with me. We've developed a RESTful platform using Python for one of our iPhone apps. The webapp version has been built using Django, which makes use of this API as well. We were thinking it would be a great idea to use Django's built-in control panel capabilities to help manage the data.
This itself isn't the issue. The problem is that everyone has decided it would be best if the admin center were essentially a client that sits on top of the RESTful platform.
So, my question is: is there a way to manipulate the model layer of Django to access our API directly, rather than communicating directly with the database? The model layer would act as the client, passing requests and responses to and from the admin center.
I'm sure this is possible, but I'm not so sure as to where I would start. Any input?
I remember I once thought about doing such a thing. At the time, I created a custom Manager using a custom QuerySet, and I overrode some methods such as _filter_or_exclude(), count(), exists(), select_related(), ... and added some properties. It took less than a week to become a total mess that probably had no chance of ever working. So I immediately stopped everything and found a more suitable solution.
If I had to do it once again, I would take a long time to consider alternatives. And if it really sounds like the best thing to do, I'd probably create a custom database backend. This backend would, rather than converting Django ORM queries to SQL queries, convert them to HTTP requests.
To do so, I think the best starting point would be to get familiar with the Django source code concerning database backends.
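To give a feel for the simpler (and ultimately messy) Manager route mentioned above, here is a deliberately tiny sketch; the endpoint, model, fields, and JSON shape are all hypothetical. It can list objects fetched over HTTP, but it hands back plain instances rather than a real QuerySet, which is exactly what the admin and the rest of the ORM expect, hence the custom-backend idea:

import requests
from django.db import models

API_ROOT = 'https://api.example.com'  # hypothetical REST endpoint

class ArticleAPIManager(models.Manager):
    def from_api(self):
        # Build unsaved model instances from the REST API, bypassing the DB.
        response = requests.get(API_ROOT + '/articles/')
        response.raise_for_status()
        return [Article(id=item['id'], title=item['title'])
                for item in response.json()]

class Article(models.Model):
    title = models.CharField(max_length=200)

    objects = ArticleAPIManager()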
I also think there are some important things to consider before starting such development:
Is the API able to handle any Django ORM request? Put another way: Will any Django ORM query be translatable to an API request?
If not, can "untranslatable" queries be safely ignored? For instance, an ORDER BY clause might be safe to ignore, while a GROUP BY clause is very unlikely to be safely dismissed.
If some queries can be neither translated nor ignored, can they be reasonably emulated? For instance, if your API does not support a COUNT() operation, you could emulate it by fetching all the data and counting it in Python with len(), but is that reasonable?
If there are still some queries you won't be able to handle (which is more than likely): are all "common" queries (in this case, all queries potentially used by the Django admin) covered, and will it be possible to upgrade the API if an uncovered case is discovered later or is introduced in a future version of Django?
Depending on the use case, there are probably tons of other considerations to take into account, such as:
the integrity of the data
support of transactions
the latency of a query, which will probably be much higher than that of querying a local (or even remote) database.
In my multiple-choice quiz show project on Google App Engine, multiple users can use the webapp simultaneously once they are logged in. But for some reason they are interfering with each other's instances.
Scenario example: suppose user A wants to run the quiz show for 10 questions, and at the same time user B wants to run the quiz show for 10 questions on another machine. But since they are using the application at the same time, they are only getting 5 questions each and their results are getting messed up.
Does anybody know how to avoid this? I am not using any sessions or cookies so far. Is that the solution, or something else?
Thanks
#views.py
import random

from django.shortcuts import render_to_response

# the import path of the models is assumed here
from quiz.models import Questions, Random_model, User_answer


def display(request):
    skipped_questions = []
    question_number = []
    user_answer_list = []
    answer_list = []
    all_questions = []
    if request.method == 'POST':
        initial_value = 1
        id_list = []
        result = Questions.objects.all()
        for i in result:
            id_value = i.id
            id_list.append(id_value)
        data = request.POST.copy()
        total_question = data['number_of_question']
        mytime = data['time']
        seconds = 59
        minutes = int(mytime) - 1
        # NOTE: these two deletes wipe the tables for every user at once,
        # which is why concurrent players interfere with each other
        already_questions = Random_model.objects.all().delete()
        already_answers = User_answer.objects.all().delete()
        random_questions_list = random.sample(id_list, int(total_question))
        for i in random_questions_list:
            random_model = Random_model()
            random_model.list_id = i
            random_model.initial_value = int(initial_value)
            random_model.save()
            initial_value += 1
        question_list = 1
        a = Random_model.objects.get(initial_value=question_list)
        new_question = Questions.objects.get(id=a.list_id)
        template_value = {'output': new_question, 'minutes': minutes,
                          'seconds': seconds, 'question_list': question_list}
        return render_to_response("quiz.html", template_value)
    # GET requests fall through and return nothing here (left as in the original)
Follow-up #Adam: Hi, I have removed the global variables, and the program again works fine when I am the only one using it on my laptop. But when I ask my colleague to try from his end, we both get the same questions and interfere with each other's sessions, because of which the end output gets messed up. I started using gae-sessions and am able to use request.session, but how should I use gae-sessions in this scenario?
Let me know if I am not clear.
Without some concrete details about what kind of data your application stores to make one session different from any other, it is impossible to give you anything really useful, but one approach would be to store it in memcache keyed off of the user's user_id.
Completely hypothetical for-example code:
def get_session_data():
    import pickle
    from google.appengine.api import users
    found_session = None
    user = users.get_current_user()
    if user:
        from google.appengine.api import memcache
        serialized = memcache.get(user.user_id())
        if serialized is not None:
            # unpickle whatever was stored for this user
            found_session = pickle.loads(serialized)
    return found_session

def save_session_data(session_object):
    import pickle
    from google.appengine.api import users
    from google.appengine.api import memcache
    # pickle the per-user session object and key it off the user's id
    memcache.set(users.get_current_user().user_id(),
                 pickle.dumps(session_object))
Now, before you go cutting and pasting, there are a lot of caveats to this approach, and it is meant only as a suggested starting point. Memcache is not guaranteed to hold items in memory, and there are plenty of other competing implementations that would be more reliable in some respects.
Fundamentally, I'd suggest using cookies to store the session data, but AppEngine doesn't have native support for cookies, so you'd have to go find an implementation of them and include it in your code. There are a number of fine implementations that are available on Google Code.
Here are some libraries to pick from that provide cookie support. There are even more.
gae-utilities
gae-sessions
app-engine-oil
FOLLOWUP, based on the sample code that you just added:
I don't want to put too fine of a point on it, but what you're doing just ain't gonna work.
Using global variables is generally a bad idea, but it is specifically an unworkable idea in a piece of code that is going to be called by many different users in an overlapping fashion. The best advice I can give you is to take all of the painful global variables (which are really specific to a particular user) and store them in a dictionary that is specific to a particular user. The pickling/unpickling code that I posted above is a workable approach, but seriously, until you get rid of those globals, your code isn't going to work.
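To make that concrete, here is a tiny sketch of keeping quiz state per user instead of in globals or in shared tables; the key prefix and the shape of the state dictionary are just assumptions:

from google.appengine.api import memcache, users

def load_quiz_state():
    user = users.get_current_user()
    if user is None:
        return None
    # Each user gets their own entry, so two players can't clobber each other.
    return memcache.get('quiz-state:' + user.user_id()) or {}

def save_quiz_state(state):
    user = users.get_current_user()
    if user is not None:
        # memcache pickles the dict for us; remember entries can be evicted.
        memcache.set('quiz-state:' + user.user_id(), state)

In your view, you would then read the user's own question list out of that dictionary instead of deleting and re-creating shared Random_model rows.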
When exactly is the database transaction committed? Is it, for example, at the end of generating every response?
To explain the question: I need to develop a somewhat more sophisticated application where I have to control DB transactions more or less manually. Especially, I have to be able to design a set of forms with some complex logic behind them (a kind of 'wizard'), but the database operations must not be committed until the last form and the confirmation.
Of course I could put everything in the session without making any DB change, but that's not a solution; the changes are quite complex and really have to be performed. So the only way is to keep them uncommitted.
Now back to the question: if I understand how this works in web2py, it will be easier for me to decide whether it's a good framework for me. I am a Java and PHP programmer; I know Python, but I don't know web2py yet ...
If you know of any web page where this is explained, I would also appreciate it.
Thanks!
You can call db.commit() and db.rollback() pretty much anywhere. If you do not, and the action does not raise an exception, web2py commits before returning a response to the client. If the action raises an exception that is not explicitly caught, it rolls back.
Have you checked out the official documentation? It explains commit policies and distributed transactions pretty well.
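As a rough sketch of what that looks like in a controller action (the 'draft' table and the confirmation check are made-up placeholders, not part of web2py):

def wizard_final_step():
    # changes are only pending until commit() or the end of the action
    db.draft.insert(payload=request.vars.payload)
    if request.vars.confirmed == 'yes':
        db.commit()    # make every pending change of this request permanent
    else:
        db.rollback()  # throw away everything done earlier in this request
    return dict(confirmed=request.vars.confirmed == 'yes')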
In my website, users have the possibility to store links.
While typing the internet address into the designated field, I would like to display a suggest/autocomplete box similar to Google Suggest or the Chrome Omnibar.
Example:
The user is typing a URL:
http://www.sta
Suggestions which would be displayed:
http://www.staples.com
http://www.starbucks.com
http://www.stackoverflow.com
How can I achieve this while not reinventing the wheel? :)
You could try with
http://google.com/complete/search?output=toolbar&q=keyword
and then parse the XML result.
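A quick sketch of fetching and parsing it in Python; the exact element and attribute names in the toolbar XML are an assumption, so check one real response in your browser first:

import requests
import xml.etree.ElementTree as ET

def google_suggest(keyword):
    resp = requests.get('http://google.com/complete/search',
                        params={'output': 'toolbar', 'q': keyword})
    root = ET.fromstring(resp.content)
    # assumed shape: <CompleteSuggestion><suggestion data="..."/></CompleteSuggestion>
    return [node.get('data') for node in root.iter('suggestion')]

print(google_suggest('www.sta'))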
I did this once before on a Django server. There are two parts: client-side and server-side.
Client side you will have to send out XmlHttpRequests to the server as the user is typing, and then when the information comes back, display it. This part will require a decent amount of javascript, including some tricky parts like callbacks and keypress handlers.
Server side you will have to handle the XmlHttpRequests which will be something that contains what the user has typed so far. Like a url of
www.yoursite.com/suggest?typed=www.sta
and then respond with the suggestions encoded in some way (I'd recommend JSON-encoding them). You also have to actually get the suggestions from your database; this could be just a simple SQL call or something else, depending on your framework.
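For the server side, a minimal Django sketch; the Link model and its url field are hypothetical stand-ins for however you store the saved links:

import json

from django.http import HttpResponse
from myapp.models import Link  # hypothetical app/model

def suggest(request):
    typed = request.GET.get('typed', '')
    urls = []
    if typed:
        # simple prefix match; swap in real full-text search if you need it
        urls = list(Link.objects.filter(url__istartswith=typed)
                                .values_list('url', flat=True)[:10])
    return HttpResponse(json.dumps(urls), content_type='application/json')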
But the server-side part is pretty simple. The client-side part is trickier, I think. I found this article helpful.
He's writing in PHP, but the client-side work is pretty much the same. In particular, you might find his CSS helpful.
Yahoo has a good autocomplete control.
They have a sample here.
Obviously this does nothing to help you out in getting the data, but it looks like you have your own source and aren't actually looking to get data from Google.
If you want the auto-complete to use data from your own database, you'll need to do the search yourself and update the suggestions using AJAX as users type. For the search part, you might want to look at Lucene.
That control is often called a word wheel. MSDN has a recent walkthrough on writing one with LINQ. There are two critical aspects: deferred execution and lazy evaluation. The article has source code too.