Python/Django: How to Prepend a # onto All URLs

I am building a mobile web app with Django and jQuery Mobile. My problem is that jQuery Mobile likes for all links to be prepended with a # so it can accurately keep track of browsing history.
Example: http://www.fest.com/#/foo/1/
I would like to know how to automatically redirect all URLs that point from /foo/1/ to /#/foo/1/.
If I don't do that and someone goes directly to /foo/1/, then clicks a link pointing to /bar/2/, they'll end up with a URL path like this:
/foo/1/#/bar/2/
I would very much like to prevent that from happening because it causes lots of problems. What's the best way to do this?

You have misunderstood what the # does.
The # in a URL is the "fragment" separator. Nothing after it is sent to the server, so there is no such URL as "foo.com#/foo": as far as the server is concerned, it's just "foo.com". That means you can't do any server-side redirection.
If your JS library is using fragments to simulate navigation, you'll need to handle this with JavaScript.
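To see concretely why the server never gets a chance, here is a minimal sketch (my own illustration, not the asker's code) of how the example URL breaks down in Python:

from urllib.parse import urlsplit

# The fragment is split off client-side and is never part of the HTTP request,
# so a Django view handling this URL would only ever see request.path == '/'.
parts = urlsplit('http://www.fest.com/#/foo/1/')
print(parts.path)      # prints '/'
print(parts.fragment)  # prints '/foo/1/'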

This is jQuery Mobile, so the answer is a bit different. jQuery Mobile uses #something for history when working with AJAX, and the AJAX call is introduced for every <a href=...
So you just link to a page like <a href="some.html?var1=foo">, and jQuery Mobile loads it via AJAX without reloading the page AND stores the item in the DOM so it isn't loaded again. The URL is updated to end with #some.html, and that's how the history is managed.
<a href="#something"> WILL NOT work as it would in a normal page, because jQuery Mobile takes over.
Read here to get all info on links in jquery mobile: http://jquerymobile.com/demos/1.0a2/#docs/pages/link-formats.html

Related

Python crawler in an ajax website (modem-router settings)

How can I create a Python crawler for an AJAX website that has the same URL all the time?
Is it possible?
Should I go step by step from the index page to the page that I want and hope for the best that everything works? (Or are there other ways to heaven too?)
edit:
The url is http://192.168.1.1/
I actually want to access my router settings.
The short answer is "Yes, it's possible."
You can open your browser's console (Network tab) and look for the API URL that the frontend scripts call and that contains the data you need.
Responses are usually JSON/XML, so you can parse them easily.
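For example, here is a minimal sketch with the requests library; the /api/status endpoint and the credentials are assumptions you would replace with whatever the Network tab actually shows for your router:

import requests

# Hypothetical endpoint discovered in the browser's Network tab; many routers
# also require authentication (HTTP Basic auth or a session cookie) first.
url = 'http://192.168.1.1/api/status'
response = requests.get(url, auth=('admin', 'password'), timeout=5)
response.raise_for_status()

data = response.json()  # parse the JSON payload the frontend scripts consume
print(data)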

Flask: change a Jinja2 variable's content in a template and display the new content

I don't know how dumb my problem is, but I really couldn't figure out how to solve it.
So, I have an HTML page that is rendered by Flask. The page contains a variable {{ log }} (a string) that is initially empty.
@app.route('/some-route/<id>')
def function(id, log='some string'):
    return render_template('webpage.html', log=log)
When the page is rendered initially, everything works fine ('some string' is displayed in the UI).
At some point of my execution, I would like to change the content of {{ log }} and display the new content to the user.
I tried doing:
logs = 'new content'
render_template('webpage.html', log=logs)
but nothing happened, literally: the string value doesn't change, the webpage doesn't refresh, and I don't get any errors...
Please help, what am I doing wrong?
Jinja2 templates cannot change a page without the browser refreshing; they are simply a way for you to place dynamic content on a page before sending it to the browser. To change your page while it is in the client's browser, you'll need to use JavaScript.
You can perform an AJAX call from the browser back to your server, and the server can send back the updated log variable. However, you'll then need to use JavaScript again to update that text in the DOM.
The best library for this is jQuery. You can also use frameworks like AngularJS and EmberJS that help you with data binding (among many other things).
You'll have to branch out into JavaScript for this kind of interactivity. Unfortunately, Python does not run natively in the browser.
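On the server side, a minimal sketch of such a setup in Flask might look like this (the /log route and the get_current_log() helper are made-up names for illustration):

from flask import Flask, jsonify, render_template

app = Flask(__name__)

def get_current_log():
    # Hypothetical helper returning whatever the current log text is.
    return 'new content'

@app.route('/webpage')
def webpage():
    # Initial render: Jinja2 fills in {{ log }} once, before the page is sent.
    return render_template('webpage.html', log='some string')

@app.route('/log')
def log():
    # Endpoint the browser can call via AJAX to fetch the latest value.
    return jsonify(log=get_current_log())

The page itself would then use JavaScript (for example jQuery's $.getJSON('/log', ...)) to fetch that value and write it into the DOM.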

How to merge a string into a URL to open a webpage

I want that every time people go to my website, it automatically redirects to another website, say:
https://www.redirect.com/link?q=NEWSTRING
It sounds weird, but what I actually want is this: when people load my page, it triggers some script that extracts "NEWSTRING" from some website, then puts it into the link above and redirects my website there.
So what I have is a first part of the link, which is fixed:
https://www.redirect.com/link?q=
then
NEWSTRING
is obtained from my script. Now, how do I write something that will redirect the webpage?
What I can think of is: write something in python.py, rename it to .cgi, and upload it to my server at:
/home/xxxxx/public_html/cgi-bin/python.cgi
Will this make my website automatically load and redirect there? Thanks a lot!
Why not just use straight HTML to redirect to the website you want? I might not understand your question correctly, but for example
<meta http-equiv="refresh" content="0; url=http://example.com/">
would redirect the user to http://example.com when loading your page.
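If you do want to do it from the Python CGI script you describe, a minimal sketch would be to send the redirect yourself; NEWSTRING here stands in for whatever value your script actually extracts:

#!/usr/bin/env python
# python.cgi: redirect the visitor by sending an HTTP Location header.
newstring = 'NEWSTRING'  # placeholder for the value your script obtains

print('Status: 302 Found')
print('Location: https://www.redirect.com/link?q=' + newstring)
print()  # a blank line ends the CGI headers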
What you need is an AJAX request.
When people load your web page, the page sends an AJAX request to some other site, and then renders your page after getting the result.

Scrapy, hash tag on URLs

I'm in the middle of a scraping project using Scrapy.
I realized that Scrapy strips everything from the hash tag to the end of the URL.
Here's the output from the shell:
[s] request <GET http://www.domain.com/b?ie=UTF8&node=3006339011&ref_=pe_112320_20310580%5C#/ref=sr_nr_p_8_0?rh=n%3A165796011%2Cn%3A%212334086011%2Cn%3A%212334148011%2Cn%3A3006339011%2Cp_8%3A2229010011&bbn=3006339011&ie=UTF8&qid=1309631658&rnid=598357011>
[s] response <200 http://www.domain.com/b?ie=UTF8&node=3006339011&ref_=pe_112320_20310580%5C>
This really affects my scraping because, after a couple of hours trying to find out why some item was not being selected, I realized that the HTML provided by the long URL differs from the one provided by the short one. Besides, after some observation, the content changes in some critical parts.
Is there a way to modify this behavior so Scrapy keeps the whole URL?
Thanks for your feedback and suggestions.
This isn't something Scrapy itself can change: the portion following the hash in the URL is the fragment identifier, which is used by the client (Scrapy here, usually a browser) rather than the server.
What probably happens when you fetch the page in a browser is that the page includes some JavaScript that looks at the fragment identifier and loads some additional data via AJAX and updates the page. You'll need to look at what the browser does and see if you can emulate it--developer tools like Firebug or the Chrome or Safari inspector make this easy.
For example, if you navigate to http://twitter.com/also, you are redirected to http://twitter.com/#!/also. The actual URL loaded by the browser here is just http://twitter.com/, but that page then loads data (http://twitter.com/users/show_for_profile.json?screen_name=also) which is used to generate the page, and is, in this case, just JSON data you could parse yourself. You can see this happen using the Network Inspector in Chrome.
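A minimal sketch of emulating that in Scrapy (using the current Scrapy API; the JSON URL is just the Twitter example above, and you would substitute whatever endpoint the inspector shows for your site):

import json
import scrapy

class ProfileSpider(scrapy.Spider):
    name = 'profile'
    # Request the AJAX endpoint the page would have loaded, not the fragment URL.
    start_urls = ['http://twitter.com/users/show_for_profile.json?screen_name=also']

    def parse(self, response):
        data = json.loads(response.text)  # the endpoint returns plain JSON
        yield {'name': data.get('name'), 'screen_name': data.get('screen_name')}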
Looks like it's not possible. The problem is not in the response; it's in the request, which chops off the URL fragment.
"It is retrievable from JavaScript as window.location.hash. From there you could send it to the server with Ajax, for example, or encode it and put it into URLs which can then be passed through to the server side."
(From: Can I read the hash portion of the URL on my server-side application (PHP, Ruby, Python, etc.)?)
Why do you need the part that is stripped if the server doesn't receive it from the browser anyway?
If you are working with Amazon, I haven't seen any problems with such URLs.
Actually, when you enter that URL in a web browser, it also only sends the part before the hash tag to the web server. If the content is different, it's probably because there is some JavaScript on the page that, based on the content of the hash tag part, changes the content of the page after it has been loaded (most likely an XmlHttpRequest is made that loads additional content).

Serving up snippets of HTML and using urlfetch

I'm trying to "modularize" a section of an App Engine website where a profile is requested as a small chunk of pre-rendered HTML.
Sending a request to /userInfo?id=4992 sends down some HTML like:
<div>
(image of john) John
Information about this user
</div>
So, from my google appengine code, I need to be able to repeatedly fetch results from this URL when displaying a group of people.
The only way I can do it now is to send down a collection of <iframe>s like
<iframe src="/userInfo?id=4992"></iframe>
<iframe src="/userInfo?id=4993"></iframe>
<iframe src="/userInfo?id=4994"></iframe>
The iframes work to request the data.
I tried using urlfetch.fetch() but it keeps timing out on me.
Am I doing this right? I thought this would be handy (a URL that serves up a snippet of HTML), but it's turning out to look like a design error.
You're currently serializing the urlfetch requests, which ends up summing their wait times and may easily push you beyond your latency deadline. I'm afraid you'll need to switch to asynchronous urlfetch requests, an advanced technique which may suit your architecture better.
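A minimal sketch of that asynchronous pattern with the classic App Engine urlfetch API; the host name and ids are placeholders:

from google.appengine.api import urlfetch

user_ids = [4992, 4993, 4994]
rpcs = []

# Start all fetches first so they run concurrently...
for uid in user_ids:
    rpc = urlfetch.create_rpc(deadline=10)
    urlfetch.make_fetch_call(rpc, 'http://your-app.appspot.com/userInfo?id=%d' % uid)
    rpcs.append(rpc)

# ...then collect the results: the total wait is roughly the slowest single
# call, not the sum of all of them.
snippets = []
for rpc in rpcs:
    result = rpc.get_result()
    if result.status_code == 200:
        snippets.append(result.content)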
