In my new Django project, I want visitors to confirm the terms and conditions before viewing a specific page, but this should only happen the first time they visit. They don't have to be logged in, so I think I will have to do this with cookies. Any ideas?
If you google for django cookie law you'll find several different options.
One of these options is django-cookie-law.
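If you would rather roll your own instead of using a package, a minimal sketch could look like the following. The view names, cookie name, and templates here are all hypothetical, not part of django-cookie-law:

```python
# views.py -- minimal sketch of a cookie-based one-time terms prompt
# (all names here are illustrative; django-cookie-law provides its own machinery)
from django.shortcuts import redirect, render

TERMS_COOKIE = "terms_accepted"  # hypothetical cookie name

def restricted_page(request):
    # Redirect first-time visitors to the terms page
    if request.COOKIES.get(TERMS_COOKIE) != "yes":
        return redirect("accept_terms")  # assumes a URL pattern named "accept_terms"
    return render(request, "restricted.html")

def accept_terms(request):
    if request.method == "POST":
        response = redirect("restricted_page")  # assumes a URL pattern with this name
        # Remember the acceptance for a year so the prompt only appears once
        response.set_cookie(TERMS_COOKIE, "yes", max_age=365 * 24 * 60 * 60)
        return response
    return render(request, "accept_terms.html")
```

Since the cookie lives in the visitor's browser, this works for anonymous users; clearing cookies will make the prompt reappear.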
I am working on a Django project and have finished it. To test it, I opened the project on two pages in Chrome against my localhost:8000 server. I logged in on the first page as one user, let's say user1, and logged in on the other page with another username, let's say user2. But when I refresh the first page, user1 is no longer logged in and I am logged in as user2. I want to log in multiple users at the same time so I can interact with the page. Kindly help me.
When a user logs in, Django stores session information in the browser, which is why this happens: both pages in the same browser share the same session cookie. What you can do is open up a private window and have the second user log in there. You can also use another browser to test out a third user. There might be a more efficient method, but this is a quick and simple solution.
UPDATE
After a bit of checking I found a much better solution to the question. Ghost Browser was made for this specific situation. Since, as of now, it does not work on Linux, you can also use extensions in Firefox, like Temporary Containers; there are probably similar extensions for other browsers.
I've found several posts about this but can't seem to get a straight answer for my case.
I'm using django allauth to make users sign up before posting anything. However, all the pages are visible to the public and don't require login to view.
How do I allow google to crawl the pages without having to go through the login?
Would a simple sitemap work?
I looked into First Click Free, but from what I read it only crawls the first page after the login? Or should I whitelist the IPs?
Any direction would be great, thanks.
I recently deployed a web2py app, and am going through the debugging phase. Part of the app includes an auth.wiki, which mostly works great. Last night I added several pages to the wiki with no problems.
However, today, whenever I navigate to the wiki or try to edit a page, I'm immediately logged out.
Any suggestions? I can't interact with the wiki if I'm not logged in...
EDIT: It's not just the wiki, I keep getting logged out of the whole site. Other users do not have this problem. It continues even when I select "remember me for 30 days" on login.
Your comment hints at the answer: once you log into the admin, refreshing your website accesses it through the admin session, which has no client user logged in.
One solution is to use one browser for the admin and a different browser for the client site.
I actually have some questions (really basic ones).
1) If I know that a website has been developed using Django, can I determine from the HTML source code (by right-clicking and choosing "View page source") that it has been developed using Django?
2) If I have a website written in plain HTML, and I just want to present it as-is using Django, how can I serve this HTML code with Django?
3) What kinds of websites should Django be used for? I mean pure static pages, blogs, or something simple like Google.
Thanks in advance
Here are a few things you could use to determine if a web app was written in Django. None of these are foolproof by any means, but they could be indicators.
Try http://site.com/admin/ and see if it says "Django site admin" at the top.
Inspect the HTML source of every form you can find on the site, and see if any contain an input tag with name='csrfmiddlewaretoken'. csrfmiddlewaretoken is Django's CSRF token identifier. Other web frameworks may use this same name, but Django is the predominant user of it (see the sketch after this list).
Find information about the site owner and/or developer from a "Contact" page, and Google their usernames/emails together with the word "Django", such as "emailname@gmail.com django". If you see posts or questions about Django, this could mean they use it often.
If all else fails, simply contact the site owner and ask them.
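As a rough illustration of the CSRF check from the list above, here is a small sketch using only the Python standard library; the URL is a placeholder and, as noted, a hit is only an indicator, not proof:

```python
# Sketch: fetch a page and look for Django's CSRF token field in the HTML
import urllib.request

def looks_like_django(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    # Django renders its CSRF protection as a hidden input named csrfmiddlewaretoken
    return "csrfmiddlewaretoken" in html

print(looks_like_django("https://example.com/"))  # placeholder URL
```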
No, the source depends completely on the person who developed it, and there are no necessary "hints" that it was written in Django.
You should at least try the Django tutorial at https://www.djangoproject.com/, you'll learn the basics of setting up a Django application, and you'll answer your own question.
Django is pretty general purpose, though a bit overkill for static pages. Anything else can be done in Django, the same way it can be done in Ruby on Rails or other web frameworks.
Well, what you should do is test the website's behaviour in an unusual situation, for example by forcing it to return a 404 or 500 error page, which developers often forget to customize.
If, for example, you go to http://www.galaxyzoo.org/ and then try to determine the backend just by looking at the HTML, you'll fail.
If, however, you try to access a page that doesn't exist, e.g. http://www.galaxyzoo.org/blablablabla, then you'll see a 404 message. If you paste the entire text into Google, you'll most likely get hits for Ruby on Rails... :)
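A quick sketch of that probe, using the URL above; the exact wording of the error page varies by framework and configuration, so the output is meant for manual inspection (or pasting into Google):

```python
# Sketch: request a made-up path and capture the framework's default 404 page
import urllib.error
import urllib.request

try:
    urllib.request.urlopen("http://www.galaxyzoo.org/blablablabla")
except urllib.error.HTTPError as e:
    print(e.code)  # expect 404
    print(e.read().decode("utf-8", errors="ignore")[:500])  # first part of the error page
```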
Django leaves no trace in the HTML source unless you specifically put one there. If you only want a static site, Django is overkill. Though if you really want to use it, have a look at Django's flatpages app.
You could also try www.domainname.com/admin. Some people leave their admin at that URL, and you can see the login page.
If they left the login page as default, the title tag will say Login | Django site admin or something like that.
For example: http://www.snowbird.com/admin/ (no affiliation)
No.
Yes. See direct_to_template (a sketch follows below).
See djangosites.org
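For question 2, a minimal sketch: direct_to_template was the old-style generic view (removed after Django 1.4); on modern Django the equivalent is TemplateView. The template name index.html is a placeholder:

```python
# urls.py -- serving a plain HTML template with the modern equivalent, TemplateView
# (direct_to_template from django.views.generic.simple did the same job in Django <= 1.4)
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    path("", TemplateView.as_view(template_name="index.html")),  # placeholder template
]
```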
I have been Googling for some time, but I guess I am using the wrong set of keywords. Does anyone know the URI that lets me request permission from Facebook to crawl their network? Last time, when I was using Python to do this, someone suggested that I look at it, but I couldn't find that post either.
Amazingly enough, that's given in their robots.txt.
The link you're looking for is this one:
http://www.facebook.com/apps/site_scraping_tos.php
If you're not a huge organization already, don't expect to be explicitly whitelisted there. If you're not explicitly whitelisted, you're not allowed to crawl at all, according to the robots.txt and the TOS. You must use the API instead.
Don't even think about pretending to be one of the whitelisted crawlers. Facebook filters by whitelisted IP for each crawler and anything else that looks at all like crawling gets an instant perma-ban. For a while users who simply clicked too fast could occasionally run into this.
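To see the whitelist for yourself, here is a trivial sketch; as an assumption, a browser-like User-Agent is set because Facebook may reject the default urllib one:

```python
# Sketch: fetch Facebook's robots.txt and print it
import urllib.request

req = urllib.request.Request(
    "https://www.facebook.com/robots.txt",
    headers={"User-Agent": "Mozilla/5.0"},  # precaution; the default urllib UA may be blocked
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```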
Since this is a community behind a login and password, I am not sure how much of it is legally crawlable. If you look, even Google indexes just the user profile pages, but not their wall posts, photos, etc.
I would suggest you post this question in the Facebook forum. But you can check it out here:
Facebook Developers
Facebook Developers Documentation
Facebook Developers Forum