Using a Flask page inside of the HTML-iframe block - python

I've created a Flask app. It works fine if I use a direct link to the Flask pages: HTTP://flask_host:flask_port/.
If I use the same link inside HTML iframe tags, it returns an empty block (the page is not shown):
<html>
<body>
<iframe src="HTTP://flask_host:flask_port/" name="iframe_a"></iframe>
</body>
</html>
What am I doing wrong?

The problem was that I used HTTP instead of HTTPS. I saw the error in the browser developer console. I had configured Flask to use the simplest ad-hoc HTTPS.
After switching the iframe src to HTTPS, I'm able to see the page inside of the iframe.
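For reference, a minimal sketch of serving Flask over ad-hoc HTTPS (the route, host and port below are placeholders; 'adhoc' requires the cryptography package to be installed):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Flask inside an iframe"

if __name__ == "__main__":
    # 'adhoc' generates a temporary self-signed certificate at startup
    app.run(host="0.0.0.0", port=5000, ssl_context="adhoc")

The iframe src then has to use https://flask_host:flask_port/ to match.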

Related

How to display the actual HTML page from HTMLResponse in Swagger UI using FastAPI?

I have a FastAPI app that returns an HTMLResponse. The code is as simple and straightforward as the examples in FastAPI's documentation. The response works fine, but Swagger UI displays the raw HTML content. Is there a way to display the actual HTML page?
FastAPI app:
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get("/items/")
async def read_items():
    html_content = """
    <html>
        <head>
            <title>Some HTML in here</title>
        </head>
        <body>
            <h1>Look ma! HTML!</h1>
        </body>
    </html>
    """
    return HTMLResponse(content=html_content, status_code=200)
Response: (screenshot omitted; Swagger UI shows the raw HTML instead of a rendered page)
This is the expected behaviour by Swagger UI (see here as well). Swagger UI correctly displays the response body, not how that response would be interpreted by a user-agent; more specifically, a web browser. That being said, if you return an image using a FileResponse (including the correct media_type, which is automatically added by FastAPI if left unset, using the file's extension to infer it), you will see that Swagger UI actually displays the image (instead of the image bytes as text). However, this is not the case when it comes to HTML content.
There was a discussion around this topic, but the idea was rejected due to security risks. Someone suggested having a Show Preview button, which would preview the HTML content returned in a response and allow the user to interact with it; however, this has not been officially implemented yet.
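A minimal sketch of the FileResponse case described above (the route and file name are made up for illustration):

from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/logo/")
async def get_logo():
    # media_type is inferred from the .png extension when left unset
    return FileResponse("logo.png")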
I should also mention that OpenAPI supports markdown elements, as well as standard HTML tags, which you can use in the description property to display images, links, etc. Have a look at this answer.

Incomplete html from Selenium

Hi, I was wondering why, if I have a certain page's URL and use Selenium like this:
webdriver.get(url)
webdriver.page_source
the source code given by Selenium lacks elements that are there when I inspect the page in the browser?
Is it some way the website protects itself from scraping?
Try adding some delay between webdriver.get(url) and webdriver.page_source to let the page load completely.
Generally it should give you the entire page source with all the tags and attributes, but this is only applicable to static web pages.
For dynamic web pages, webdriver.page_source will only give you whatever is available in the DOM at that point in time, because the DOM is updated based on user interaction with the page.
Note that iframe contents are not included in page_source either way.
If the site you are scraping is a dynamic website, it takes some time to load: the JavaScript has to run, do some DOM manipulation, etc., and only after that do you get the source code of the page.
So it is better to add a time delay between your get request and getting the page source.
import time
webdriver.get(url)
# pauses execution for x seconds.
time.sleep(x)
webdriver.page_source
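If a fixed sleep feels too brittle, an explicit wait is a common alternative. This is only a sketch, assuming the element you need has a known locator (the ID "content" and the URL are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# wait up to 10 seconds for the element to be present in the DOM
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "content"))
)
html = driver.page_source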
The page source might contain just a single link to a JavaScript file, and many of the controls you see on the page are generated on your side, in your browser, by running that JS code.
For example, the page source is:
<script>
[1,2,3,4,5].map(i => document.write(`<p id="${i}">${i}</p>`))
</script>
The resulting DOM (as rendered in the browser) is:
<p id="1">1</p>
<p id="2">2</p>
<p id="3">3</p>
<p id="4">4</p>
<p id="5">5</p>
To get the rendered DOM's HTML:
document.querySelector('html').innerHTML
<script>
[1,2,3,4,5].map(i => document.write(`<p id="${i}">${i}</p>`))
console.log(document.querySelector('body').innerHTML)
</script>
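To do the same from Python, here is a sketch using Selenium's execute_script (the URL is a placeholder):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# ask the browser for the document as it currently stands,
# after any JavaScript has modified it
rendered_html = driver.execute_script(
    "return document.documentElement.outerHTML;"
)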

Link in AngularJS not calling Flask endpoint

Code snippets:
app.config
$locationProvider.html5Mode({
enabled: true,
requireBase: false
});
$urlRouterProvider
.when('logout', '/logout')
.otherwise('/');
Relevant HTML
<li><a href="/logout">Logout</a></li>
Flask endpoint
from flask import redirect, url_for
from flask_login import login_required, logout_user

@app.route("/logout")
@login_required
def logout():
    logout_user()
    return redirect(url_for("login"))
I have also set <base href="/"> in my HTML's head. Clicking on the link, however, does not result in anything happening (literally nothing happens).
What gives?
Because the request to logout never reaches your server. You need to create a LogoutController associated with the logout route that actually makes a request to your Flask endpoint.
Solved this by reading the AngularJS documentation twice and finding this little gem there. Providing the answer here so that it helps others who are beginning to develop with Flask and Angular.
To quote:
Html link rewriting
When you use HTML5 history API mode, you will not need special hashbang links. All you have to do is specify regular URL links, such as: <a href="/some?foo=bar">link</a>
When a user clicks on this link:
In a legacy browser, the URL changes to /index.html#!/some?foo=bar
In a modern browser, the URL changes to /some?foo=bar
In cases like the following, links are not rewritten; instead, the browser will perform a full page reload to the original link.
Links that contain a target attribute. Example: <a href="/ext/link?a=b" target="_self">link</a>
Absolute links that go to a different domain. Example: <a href="http://angularjs.org/">link</a>
Links starting with '/' that lead to a different base path. Example: <a href="/not-my-base/link">link</a>
Basically, changing this:
<li><a href="/logout">Logout</a></li>
to this:
<li><a href="/logout" target="_self">Logout</a></li>
solved the issue. The target="_self" attribute stops AngularJS from rewriting the link, so the browser performs a full page reload and the request actually reaches the Flask /logout endpoint.

Django AJAX no refresh: Django view without redirecting or refreshing a page

I'd like to execute some Python code when a button is pressed on a web page. The thing is, I need the page not to redirect or refresh or anything.
Just use jQuery AJAX; it is easy to do.
In Django:
views.py
from django.http import HttpResponse

def fun1(request):
    user_input = request.GET.get('value')
    # put your code here
    return HttpResponse('what you want to output to web')
urls.py
url(r'^link-to-fun1/$', views.fun1),
html
<html>
<head>
<title>Your title</title>
<script type="text/javascript" src="/media/js/jquery-1.10.2.min.js"></script>
<script>
function myFun() {
    $.get('link-to-fun1/', {"value": "get_the_value_from_web"}, function(ret) {
        // handle the returned value ret here
    });
}
</script>
</head>
<body>
<button onclick="myFun()">Run</button>
</body>
</html>
For more, see the jQuery $.get method.
There's a lot of info on the internet about Django and AJAX; just google it.
http://www.youtube.com/watch?v=lllVAFbRGfI
Or, as Mingyu said, use Dajax.
You may want to try the Ajax library for Django project:
http://www.dajaxproject.com/
In case you've never heard of Ajax, here's the definition from Wikipedia:
Ajax (an acronym for asynchronous JavaScript and XML) is a group of interrelated web development techniques used on the client-side to create asynchronous web applications.
With Ajax, web applications can send data to, and retrieve data from, a server asynchronously (in the background) without interfering with the display and behavior of the existing page.

Cannot fetch a web site with python urllib.urlopen() or any web browser other than Shiretoko

Here is the URL of the site I want to fetch
https://salami.parc.com/spartag/GetRepository?friend=jmankoff&keywords=antibiotic&option=jmankoff%27s+tags
When I fetch the site with the following code and display the contents:
import urllib
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 on Python 2; with bs4: from bs4 import BeautifulSoup

sock = urllib.urlopen("https://salami.parc.com/spartag/GetRepository?friend=jmankoff&keywords=antibiotic&option=jmankoff's+tags")
html = sock.read()
sock.close()
soup = BeautifulSoup(html)
print soup.prettify()
I get the following output:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>
Error message
</title>
</head>
<body>
<h2>
Invalid input data
</h2>
</body>
</html>
I get the same result with urllib2 as well. Now, interestingly, this URL works only in the Shiretoko web browser, v3.5.7 (when I say it works, I mean that it brings me the right page). When I feed this URL into Firefox 3.0.15 or Konqueror v4.2.2, I get exactly the same error page (with "Invalid input data"). I don't have any idea what causes this difference or how I can fetch this page using Python. Any ideas?
Thanks
If you look at the urllib2 docs, they say:
urllib2.build_opener([handler, ...])
...
If the Python installation has SSL support (i.e., if the ssl module can be imported), HTTPSHandler will also be added.
...
You can try using urllib2 together with the ssl module. Alternatively, you can use httplib.
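A sketch of what that could look like with urllib2 (Python 2). The extra User-Agent header is an assumption on my part, not something the answer states; some servers return an error page to clients whose headers they don't recognize:

import urllib2

url = ("https://salami.parc.com/spartag/GetRepository?"
       "friend=jmankoff&keywords=antibiotic&option=jmankoff%27s+tags")
opener = urllib2.build_opener(urllib2.HTTPSHandler())
# urllib2 identifies itself as "Python-urllib" by default;
# pretending to be a browser is only a guess at the server's behaviour
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
html = opener.open(url).read()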
That's exactly what you get when you click on the link in a web browser. Maybe you are supposed to be logged in or have a cookie set or something.
I get the same message with Firefox 3.5.8 (Shiretoko) on Linux.
