So basically I would like to know how to handle a POST request from a Python program and store the data on the web server, so that I can later make a GET request to retrieve that information. I'm hoping you can help me. Currently this is my code:
import requests
url = 'http://mywebsitehere.com'
source_code = "print('Hello World')"
data = {'code': source_code, 'format': 'python'}
r = requests.post(url = url, data = data)
print(r.text)
I'm trying to send some code and its format in the POST request, but I'm not sure how to handle the POST request once it reaches the website so that other programs can access the data with GET requests. I know how to send POST and GET requests from Python; I just don't know how to handle them once they reach the server. From my research, it seems like you have to make a PHP file or something on the server and specify individual fields or variables for the program to enter the information into.
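A minimal sketch of what the server side could look like, assuming a Flask app (the route name and the in-memory store are made up for illustration, not part of any existing site):

# Hypothetical Flask app: accepts the POSTed fields and serves them back on GET.
from flask import Flask, request, jsonify

app = Flask(__name__)
stored = {}  # in-memory store for illustration; a real app would use a database

@app.route('/code', methods=['POST'])
def receive_code():
    # request.form holds the url-encoded fields sent by requests.post(..., data=...)
    stored['code'] = request.form.get('code')
    stored['format'] = request.form.get('format')
    return jsonify({'status': 'stored'})

@app.route('/code', methods=['GET'])
def send_code():
    # Other programs can now retrieve the stored data with a GET request.
    return jsonify(stored)

if __name__ == '__main__':
    app.run()

With something like this running, the script above would POST to http://yourhost/code, and other programs could GET the same URL.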
I know it's a really noob question, but I'm just starting to get into more advanced stuff with Python and its modules.
I'm also going to learn more about general web development so that, instead of just barely understanding it, I can get a good grasp of POST requests and develop my website into something custom, rather than copying and pasting other people's work without completely understanding it.
...also, I'm not sure how to mark a post as "answered" or something, but yeah.
This is my first question posted here, so let me know if I need to add more information. I have set up Python code that uses requests.post to send an HTTP request to the website (shown below). I am trying to post the data sent from Python to the Weebly website I have created. I believe the easiest option for this would be to embed HTML code into the website; however, I have never used HTML before and cannot find a good source to learn it.
Python code:
import requests
DataSent = {"somekey":"somevalue"}
url = "http://www.greeniethegenie123.weebly.com"
r = requests.post(url, data = DataSent)
print(r.text)
Edit: The question is how I can set up HTML code to receive the request and display the data on the website. If there is any other way to send the data, that would work too. I just have a sensor recording numbers that I would like to post to the Weebly website.
Edit: It looks like HTML alone cannot do this. Does anyone have other advice for how to send data from a Raspberry Pi to a website? The main problem is that the website needs to update the data every minute to be useful for what I am trying to do.
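On the sending side (separate from how the website displays it), here is a hedged sketch of what the Raspberry Pi loop could look like, assuming you stand up some endpoint that accepts POSTs; the URL and read_sensor below are placeholders:

import time
import requests

URL = "http://example.com/api/readings"  # placeholder endpoint you would host

def read_sensor():
    # placeholder for however the Pi actually reads its sensor
    return 42.0

while True:
    value = read_sensor()
    try:
        r = requests.post(URL, data={"value": value}, timeout=10)
        print(r.status_code)
    except requests.RequestException as exc:
        print("send failed:", exc)
    time.sleep(60)  # the site needs fresh data every minute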
You would have to use JavaScript instead of HTML to accomplish this.
HTML defines the structure of a webpage, while JavaScript can make requests, update content, and do lots of other things.
Here are some links to help you get started with HTML and JavaScript:
HTML Intro
JavaScript Intro
For requests with JavaScript, I would recommend using Axios:
Axios NPM
Here's a link explaining how to use Axios as well:
Axios Tutorial
I want to build an API that accepts a string and returns HTML code.
Here is my scraping code that I want to turn into a web service.
Code
from selenium import webdriver
import bs4
import requests
import time
url = "https://www.pnrconverter.com/"
browser = webdriver.Firefox()
browser.get(url)
string = "3 PS 232 M 03FEB 7 JFKKBP HK2 1230A 420P 03FEB E
PS/JPIX8U"
button =
browser.find_element_by_xpath("//textarea[#class='dataInputChild']")
button.send_keys(string) #accept string
button.submit()
time.sleep(5)
soup = bs4.BeautifulSoup(browser.page_source,'html.parser')
html = soup.find('div',class_="main-content") #returns html
print(html)
Can anyone tell me the best possible way to wrap up my code as an API/web service?
There's no best possible solution in general, because a solution has to fit the problem and the available resources.
Right now it seems like you're trying to wrap someone else's website. If that's the problem you're actually trying to solve, and you want to give credit, you should probably just forward people to their site: have your server return a 302 Redirect with their URL in the Location header of the response.
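For example, a minimal 302 forwarder, assuming Flask (any framework that lets you set the status code and Location header works the same way):

from flask import Flask, redirect

app = Flask(__name__)

@app.route('/pnr')
def forward():
    # 302 sends the client on to the original site, giving them the traffic
    return redirect("https://www.pnrconverter.com/", code=302)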
If what you're trying to do is get the response for the one sample you have hardcoded and make that result available, I would suggest you put it in a static file behind nginx.
If what you're trying to do is use their backend to turn itineraries you have into responses you can return, you can do that by using their backend API, once that becomes available. Read the documentation, use the requests library to hit the API endpoint that you want, and get the JSON result back, and format it to your desires.
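If and when such an API exists, that piece might look roughly like this; the endpoint URL and payload shape below are entirely hypothetical, not a documented interface:

import requests

def convert_pnr(pnr_text):
    # Hypothetical endpoint; check the provider's documentation for the real one.
    resp = requests.post("https://www.pnrconverter.com/api/convert",
                         json={"pnr": pnr_text}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # format the returned JSON however you need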
If you're trying to duplicate their site by making yourself a man-in-the-middle, that may be illegal and you should reconsider what you're doing.
For hosting purposes, you need to figure out how often your API will be hit. You can probably start on Heroku or something similar fairly easily and scale up if you need to. You'll probably want WebObj or Flask or something similar sitting at the website where you intend to host this application; you can use it to turn what I presume will be a simple request into the string you want to hit their API with.
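As a rough sketch of that last point, the existing Selenium code could sit behind a Flask endpoint like this (illustration only: a single shared browser, no queuing, no error handling, and the scraping caveats below still apply):

from flask import Flask, request
from selenium import webdriver
import bs4
import time

app = Flask(__name__)
browser = webdriver.Firefox()  # one shared browser instance, for illustration

@app.route('/convert', methods=['POST'])
def convert():
    pnr = request.form['pnr']  # the string the client wants converted
    browser.get("https://www.pnrconverter.com/")
    box = browser.find_element_by_xpath("//textarea[@class='dataInputChild']")
    box.send_keys(pnr)
    box.submit()
    time.sleep(5)  # crude fixed wait, as in the original script
    soup = bs4.BeautifulSoup(browser.page_source, 'html.parser')
    return str(soup.find('div', class_="main-content"))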
I am the owner of PNR Converter, so I can shed some light on your attempt to scrape content from our site. Unfortunately, scraping PNR Converter is not recommended. We are developing an API which looks like it would suit your needs, and it should be ready in the not too distant future. If you contact us through the site, we would be happy to work with you should you wish to use PNR Converter legitimately. PNR Converter gets at least one complete update per year, and as such we change all the code on a regular basis. We also monitor all requests to our site and will block any that we deem improper usage. Our filter has already picked up your IP address (ends in 250.144) as potential misuse.
Like I said, should you wish to work with PNR Converter legitimately rather than scrape our content, we would be happy to help! Please keep checking https://www.pnrconverter.com/api-introduction for information relating to our API.
We are releasing a backend upgrade this weekend with a different HTML structure and dynamically named elements, which will cause serious issues for web scrapers!
I am trying to fetch some information from Workflowy using the Python Requests library. Basically, I am trying to programmatically get the content under this URL: https://workflowy.com/s/XCL9FCaH1b
The problem is that Workflowy goes through a 'loading phase' before the actual content is displayed when I visit this website, so I end up getting the content of the 'loading' page from my request. Basically, I need a way to defer getting the content so I can bypass the loading phase.
The Requests library seems to talk about this problem here: http://www.python-requests.org/en/latest/user/advanced/#body-content-workflow but I couldn't get that example to work for my purposes.
Here is the super simple block of code that ends up getting the 'loading page':
import requests
path = "https://workflowy.com/s/XCL9FCaH1b"
r = requests.get(path, stream=True)
print(r.content)
Note that I don't have to use Requests; I just picked it because it looked like it might offer a solution to my problem. Also, I'm currently using Python 2.7.
Thanks a lot for your time!
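One possible approach, sketched below: the 'loading phase' suggests the content is rendered client-side by JavaScript after the initial HTML arrives, so Requests alone will never see it, regardless of the body-content workflow. Driving a real browser and waiting explicitly can work; the CSS selector here is a placeholder, since you would need to inspect the page for the element that holds the real content:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Firefox()
browser.get("https://workflowy.com/s/XCL9FCaH1b")

# Wait up to 30 seconds for the real content to replace the loading screen.
# ".content" is a placeholder selector, not Workflowy's actual markup.
WebDriverWait(browser, 30).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, ".content"))
)
print(browser.page_source)
browser.quit()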
I'm writing a script, to help me do some repetitive testing of a bunch of URLs.
I've written a Python method in the script that opens each URL and sends a GET request. I'm using Requests: HTTP for Humans (http://docs.python-requests.org/en/latest/) to handle the HTTP calls.
The response's history attribute returns a list of the redirect responses and their status codes. I need to be able to access the particular redirects in that list of 301s. There doesn't seem to be an obvious way to trace what my URLs are redirecting to; I want to be able to access the redirected URLs (status code 301).
Can anyone offer any advice?
Thanks
Okay, I'm so silly. Here's the answer I was looking for:
r = requests.get("http://someurl")
Each entry in r.history is a Response object, so r.history[1].url returns the URL of that redirect.
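A slightly fuller sketch, since each entry in r.history is a full Response object:

import requests

r = requests.get("http://someurl")  # placeholder URL
for hop in r.history:
    # each hop is the Response for one redirect in the chain
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print("final:", r.status_code, r.url)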
I'm working on a project that involves uploading an image to Tumblr from Python. I've had luck using Tumblr's API (http://www.tumblr.com/docs/en/api) for regular text posts, but image uploads have been giving me trouble. The error messages their server returns have been limited to just telling me there was an "Error Uploading Photo", which has been less than helpful.
Since their API seems to be based on standard HTTP POST operations, I know there has to be a way to do this. Unfortunately, I haven't made any progress in a couple of days, and I've decided to resort to bothering you guys about it.
I have tried using curl and Python's libraries: httplib, urllib, urllib2, and a third-party library called urllib2_file (http://fabien.seisen.org/python/urllib2_file/). I'm frustrated that I haven't gotten them to work, but I'm willing to try whatever else you can come up with.
Each method works fine with simple text posts, but none of them seems to get the photo upload done properly.
Here's my syntax for doing it with urllib2_file. Since urllib2 doesn't support 'multipart/form-data' uploads, I'm using urllib2_file to add that functionality, but I haven't been able to get it to work. The Tumblr API says their servers accept multipart/form-data as well as the 'normal post' method for uploading files; I'd be happy if either worked.
import urllib, urllib2, urllib2_file
url = "http://www.tumblr.com/api/write"
values1 = { 'email':'EMAIL',
'password':'PASSWORD',
'type':'regular',
'title':'Pythons urllib2',
'body':'its pretty nice. Not sure how to make it upload stuff yet, though. Still getting some "error uploading photo" errors... So unhelpful.'}
values2 = { 'email':'EMAIL',
'password':'PASSWORD',
'type':'photo',
'data': open('../data/media/pics/2009/05-14/100_1167.JPG'),
'caption':'Caption'}
data = urllib.urlencode(values2)
print "just before defining the request"
req = urllib2.Request(url,data)
print "just before doing the urlopen."
#response = urllib2.urlopen(req)
try:
    response = urllib2.urlopen(req)
except urllib2.URLError, e:
    print e.code
    print e.read()
print "figure out how to handle .read() properly"
#the_page = response.read()
#print the_page
print "done"
This would be the ideal way if it worked since using dictionaries to define the fields is really easy and I could make it look much cleaner in the future.
Any advice on how to troubleshoot this would be appreciated; at this point I don't know how to find out what's going wrong. I wish I had the attention span for the HTTP RFC.
I've been considering sniffing the packets between my computer and the server, but reverse-engineering HTTP might be overkill.
Thanks!
'data': open('../data/media/pics/2009/05-14/100_1167.JPG'),
Looks like you're just passing in a file object. Add a .read() there.
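Applied to the question's values2 dict, that suggestion would look like this (whether Tumblr's API then accepts the url-encoded bytes is a separate question; multipart/form-data may still be needed):

values2 = { 'email':'EMAIL',
            'password':'PASSWORD',
            'type':'photo',
            'data': open('../data/media/pics/2009/05-14/100_1167.JPG', 'rb').read(),  # read the bytes instead of passing the file object
            'caption':'Caption'}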
Tumblr has an official Python client for its API v2. You can find it on GitHub as PyTumblr.
I have used it to create a terminal-based tool for using Tumblr, called teblr. You can find the source code here: https://github.com/vijaykumarhackr/teblr/