How to call a Node.js API from Python?

I am automating functionality that is written in Node.js and gives a graphical view when the web page is called. I need to retrieve the contents of the web page into a file. All of this code will be written in Python. How can I call the web page's API from Python so that I get all the contents into a file?

You can use the Python Requests module:
import requests
response = requests.get('https://example.com')
print(response.text)
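Since the goal is to get the contents into a file, here is a minimal sketch of the same request that also writes the response to disk (the filename page.html is just an example):
import requests

response = requests.get('https://example.com')
# Save the page contents to a file; use response.content for binary data
with open('page.html', 'w', encoding='utf-8') as f:
    f.write(response.text)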
To learn more, see the Requests documentation.

Related

I need to set up HTML code to receive a requests.post HTTP request from Python

This is my first question I've posted here, so let me know if I need to add more information. I have set up Python code which uses requests.post to send an HTTP request to a website (the code is shown below). I am trying to post the data that is sent from Python to the Weebly website I have created. I believe the easiest option for this would be to embed HTML code into the website; however, I have never used HTML before and cannot find a good source to learn it.
Python code:
import requests

DataSent = {"somekey": "somevalue"}
url = "http://www.greeniethegenie123.weebly.com"
r = requests.post(url, data=DataSent)
print(r.text)
Edit: The question is how I can set up HTML code to receive the request and post it on the website. Or, if there is any other way to send the data, that would work too. I just have a sensor recording numbers that I would like to post to the Weebly website.
Edit: It looks like HTML alone cannot do this. Does anyone have other advice for how to send data from a Raspberry Pi to a website? The main problem is that the website needs to update the data every minute to be useful for what I am trying to do.
You would have to use JavaScript instead of HTML to accomplish this.
HTML defines the structure of a web page, while JavaScript can be used for requests, updating content, and lots of other things.
Here are some links to help you out with HTML and JavaScript:
HTML Intro
JavaScript Intro
For requests with JavaScript, I would recommend using Axios:
Axios NPM
Here's a link explaining how to use Axios as well:
Axios Tutorial

Python webbrowser not functioning with GIS server

I am trying to write code that will download all the data from a server which holds .rar files describing imaginary cadastral parcels for student projects. What I have so far is a query for the server which only needs a specific parcel number as input and can be accessed as a URL to download the .rar file.
url = 'http://www.pg.geof.unizg.hr/geoserver/wfs?request=getfeature&version=1.0.0&service=wfs&&propertyname=broj,naziv_ko,kc_geom&outputformat=SHAPE-ZIP&typename=gf:katastarska_cestica&filter=<Filter+xmlns="http://www.opengis.net/ogc"><And><PropertyIsEqualTo><PropertyName>broj</PropertyName><Literal>1900/1</Literal></PropertyIsEqualTo><PropertyIsEqualTo><PropertyName>naziv_ko</PropertyName><Literal>Suma Striborova Stara (9997)</Literal></PropertyIsEqualTo></And></Filter>'
This is the "url" I want to open with the webbrowser module for parcel "1900/1", but this way I get an error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
When I manually input this URL, it downloads the file without a problem.
How can I make this Python web application work?
I used webbrowser.open_new(url), which does not work.
You're using the wrong tool. webbrowser is for controlling a native web browser. If you just want to download a file, use the requests module (or urllib.request if you can't install Requests).
import requests
r = requests.get('http://www.pg.geof.unizg.hr/geoserver/wfs', params={
    'request': 'getfeature',
    ...
    'filter': '<Filter xmlns=...>'
})
print(r.content)  # or write it to a file, or whatever
Note that requests will handle encoding GET parameters for you, so you don't need to worry about escaping the request yourself.
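For completeness, here is a minimal sketch that fills in the remaining parameters from the query URL in the question and writes the returned SHAPE-ZIP to a file (the output filename is just an example):
import requests

# Parameters taken from the query URL in the question
params = {
    'request': 'getfeature',
    'version': '1.0.0',
    'service': 'wfs',
    'propertyname': 'broj,naziv_ko,kc_geom',
    'outputformat': 'SHAPE-ZIP',
    'typename': 'gf:katastarska_cestica',
    'filter': ('<Filter xmlns="http://www.opengis.net/ogc"><And>'
               '<PropertyIsEqualTo><PropertyName>broj</PropertyName>'
               '<Literal>1900/1</Literal></PropertyIsEqualTo>'
               '<PropertyIsEqualTo><PropertyName>naziv_ko</PropertyName>'
               '<Literal>Suma Striborova Stara (9997)</Literal></PropertyIsEqualTo>'
               '</And></Filter>'),
}

r = requests.get('http://www.pg.geof.unizg.hr/geoserver/wfs', params=params)
# The response body is the ZIP archive itself, so write the raw bytes
with open('parcel_1900_1.zip', 'wb') as f:
    f.write(r.content)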

Accessing Indeed through Python

My goal for this Python code is to create a way to collect job information into a folder. The first step is being unsuccessful. When running the code, I want the URL to print https://www.indeed.com/. Instead, the code returns https://secure.indeed.com/account/login. I am open to using urllib or cookielib to resolve this ongoing issue.
import requests
import urllib

data = {
    'action': 'Login',
    '__email': 'email@gmail.com',
    '__password': 'password',
    'remember': '1',
    'hl': 'en',
    'continue': '/account/view?hl=en',
}
response = requests.get('https://secure.indeed.com/account/login', data=data)
print(response.url)
If you're trying to scrape information from Indeed, you should use the Selenium library for Python.
https://pypi.python.org/pypi/selenium
You can then write your program within the context of a real user browsing the site normally.
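A minimal sketch of that approach (this assumes Selenium 4 and a matching browser driver are installed; the selectors below are hypothetical, so inspect the real login form for the actual element names):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://secure.indeed.com/account/login')

# Hypothetical selectors; replace with the ones used on the real page
driver.find_element(By.NAME, '__email').send_keys('email@gmail.com')
driver.find_element(By.NAME, '__password').send_keys('password')
driver.find_element(By.CSS_SELECTOR, 'button[type="submit"]').click()

# The driver keeps the session cookies, so later page loads stay logged in
print(driver.current_url)
driver.quit()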

How to run a Python script on a web server

I use a web app that can generate a PDF report of some data stored in the app. Getting to that report, however, requires several clicks and monkeying around with the app.
I support a group of users of this app (we use the app, we don't create the app) and I'd like them to be able to generate and view this report with as few clicks as possible. Thankfully, this web app provides a lot of data via a RESTful API. So I did some scripting.
I have a Python script that makes an HTTP GET request, processes the JSON results, and uses the resulting data to dynamically build a URL. Here's a simplified version of my Python code:
#!/usr/bin/env python
import requests
app_id="12345"
secret="67890"
api_url='https://api.webapp.example/some_endpoint'
resp = requests.get(api_url, auth=(app_id,secret))
json_data = resp.json()
# Simplification of the data processing I'm doing
my_data = json_data['attr1']['attr2'] + my_data_processing
# Result of the script is a link to a dynamically generated PDF
pdf_url = 'https://pdf.webapp.example/items/' + my_data
The above is a simplification of the code I actually have, but it shows the relevant points. In my actual script, I continue on by doing another GET with the dynamically built URL. The webapp generates a PDF based on the my_data portion of the URL, and I write that PDF to file. This works very well today.
Currently, this is a Python script that runs on my local machine on demand. However, I'd like to host it somewhere on the web so that when a user hits a URL in their browser it runs and generates the pdf_url, instead of having to install the script on each user's local machine, and so that the PDF can be generated and viewed on a mobile device.
The thought is that the user can open http://example.com/report-shortcut, the python script would run server-side, dynamically build the URL, and redirect the user to that URL, which would then show the PDF in the browser (assuming the user is using a browser that shows PDFs like Chrome, Safari, etc). Alternately, if a redirect is problematic, going to http://example.com/report-shortcut could just show an HTML page with a link to the URL generated by the Python script.
I'm looking for a solution on how to host this Python script and have it run when a user accesses a webpage. I've looked into AWS Lambda and Django, but both seem like overkill for such a simple script (~20 lines of code, plus comments and whitespace). I've also looked at Python CGI scripting, which looks promising, but I have no experience setting up something like that.
Looking for suggestions on how best to host and run this code when a user goes to the example URL.
PS: I thought about just re-implementing this in JavaScript, but I'd rather the API key not be publicly accessible.
I suggest building the script in AWS Lambda and using the API Gateway to invoke it.
You could create the PDF, store it in S3, and generate a pre-signed URL, then return a 302 response to redirect the user to that pre-signed URL. This will display the PDF in their browser.
It is very quick to set up, and using Boto3 to get the PDF into S3 and generate the URL is simple.
It will be much simpler than some of your other suggestions.
See API Gateway
& Boto3
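A minimal sketch of such a Lambda handler (this assumes an API Gateway proxy integration and an existing bucket; the bucket name, key, and build_report_pdf are hypothetical placeholders for the question's own logic):
import boto3

s3 = boto3.client('s3')
BUCKET = 'example-report-bucket'  # hypothetical bucket name

def build_report_pdf():
    # Stand-in for the question's GET + JSON processing; returns PDF bytes
    ...

def lambda_handler(event, context):
    pdf_bytes = build_report_pdf()

    # Upload the generated PDF to S3
    key = 'reports/report.pdf'
    s3.put_object(Bucket=BUCKET, Key=key, Body=pdf_bytes,
                  ContentType='application/pdf')

    # Pre-signed URL valid for five minutes
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=300,
    )

    # 302 redirect so the browser opens the PDF directly
    return {'statusCode': 302, 'headers': {'Location': url}}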

Download a file from GoogleDrive exportlinks

Trying to download a file directly using Python and the Google Drive API exportlinks response.
Suppose I have an export link like this:
a) https://docs.google.com/feeds/download/documents/export/Export?id=xxxx&exportFormat=docx
To download this file, I simply paste it into the browser, and the file automatically downloads to my Downloads folder.
How do I do the same thing in Python?
EX: module.download_file_using_url(https://docs.google.com/feeds/download/documents/export/Export?id=xxxx&exportFormat=docx)
This is a repost of How do I download a file over HTTP using Python?
In Python 2, use urllib2, which comes with the standard library.
import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()
This is the most basic way to use the library, minus any error handling. You can also do more complex stuff such as changing headers. The documentation can be found here.
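On Python 3, the same call lives in urllib.request; here is a minimal sketch that also writes the download to a file (the filename export.docx is just an example, and the export link will only work if the request is authorized to access the document):
import urllib.request

url = 'https://docs.google.com/feeds/download/documents/export/Export?id=xxxx&exportFormat=docx'
with urllib.request.urlopen(url) as response, open('export.docx', 'wb') as f:
    f.write(response.read())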
