I'm trying to read and print the result from a Google URL on GAE. When I ran the first program, the output was blank. Then I added a print statement before printing the URL result and ran it again; now I get the result.
Why doesn't Program 1 give any output?
Program 1
import urllib

from google.appengine.ext import webapp

class MainHandler(webapp.RequestHandler):
    def get(self):
        url = urllib.urlopen("http://www.google.com/ig/calculator?hl=en&q=100EUR%3D%3FAUD")
        result = url.read()
        print result
Program 2
import urllib

from google.appengine.ext import webapp

class MainHandler(webapp.RequestHandler):
    def get(self):
        # Print something before printing the urllib result
        print "Result -"
        url = urllib.urlopen("http://www.google.com/ig/calculator?hl=en&q=100EUR%3D%3FAUD")
        result = url.read()
        print result
You're using print from inside a WSGI application. Never, ever use print from inside a WSGI application.
What's happening is that your text is being output in the place where the webserver expects to see headers, so your output is not displayed as you expect.
Instead, you should use self.response.out.write() to send output to the user, and logging.info etc for debugging data.
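For reference, here is a minimal sketch of what Program 1 might look like with that change (only the handler body differs from the question's code; the log message text is just illustrative):
import logging
import urllib

from google.appengine.ext import webapp

class MainHandler(webapp.RequestHandler):
    def get(self):
        url = urllib.urlopen("http://www.google.com/ig/calculator?hl=en&q=100EUR%3D%3FAUD")
        result = url.read()
        logging.info("Fetched %d bytes", len(result))  # goes to the app's log, not the response
        self.response.out.write(result)                # goes to the HTTP response body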
I ran into this issue before, but haven't found an exact answer for it yet.
Maybe a caching mechanism causes this issue; I'm not sure.
You need to flush the output to print the data:
import sys
sys.stdout.flush()
or just do it the way you did:
print "*" * 10
print data
I think you'll like logging when you are debugging:
logging.debug('A debug message here')
or
logging.info('The result is: %s', yourResultData)
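If you are running this as a plain script rather than under a framework that configures logging for you, you may need to enable it first; a minimal sketch:
import logging

logging.basicConfig(level=logging.DEBUG)  # send DEBUG and above to stderr
logging.debug('A debug message here')
logging.info('The result is: %s', 'some data')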
Related
Example: suppose I write some code like this (just an example):
# Start
from urllib import request

url = 'http://www.example.com'
response = request.urlopen(url)
result = response.read().decode('utf-8')
print(result)  # End
But let's say I want the script to adapt when something goes wrong, e.g. when the body cannot be decoded as UTF-8; if that happens, the code should change or run some other code that handles it without .decode('utf-8').
So it would be:
response = request.urlopen(url)
result = response.read().decode('utf-8')
print(result)
but the body cannot be decoded with UTF-8,
so the code changes... or some command executes the code below it, and the code re-runs with:
result = response.read()
print(result)  # END
This is an example implementation where one of two functions doesn't work.
def iDontWork():
    # This is a random example, where division by zero will not work.
    return 1/0

def iDoWork():
    # This is a random working method.
    return 1/1

try:
    iDontWork()
except Exception:
    iDoWork()
    print("The first function (iDontWork) didn't work, so the second function (iDoWork) started working.")
I have just learned the basics of Python, and I am trying to make a few projects so that I can increase my knowledge of the programming language.
Since I am rather paranoid, I created a script that uses PycURL to fetch my current IP address every x seconds, for VPN security. Here is my code[EDITED]:
import requests

enterIP = str(input("What is your current IP address?"))

def getIP():
    while True:
        try:
            result = requests.get("http://ipinfo.io/ip")
            print(result.text)
        except KeyboardInterrupt:
            print("\nProccess terminated by user")
        return result.text

def checkIP():
    while True:
        if enterIP == result.text:
            pass
        else:
            print("IP has changed!")

getIP()
checkIP()
Now I would like to expand the idea so that the script asks the user to enter their current IP, saves that address as a string, and then uses a loop to keep checking it against the PycURL function to make sure that their IP hasn't changed. The only problem is that I am completely stumped: I cannot come up with a function that would take the output of PycURL and compare it to a string. How could I achieve that?
As #holdenweb explained, you do not need pycurl for such a simple task, but nevertheless, here is a working example:
import pycurl
import time
from StringIO import StringIO

def get_ip():
    buffer = StringIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, "http://ipinfo.io/ip")
    c.setopt(c.WRITEDATA, buffer)
    c.perform()
    c.close()
    return buffer.getvalue()

def main():
    initial = get_ip()
    print 'Initial IP: %s' % initial
    try:
        while True:
            current = get_ip()
            if current != initial:
                print 'IP has changed to: %s' % current
            time.sleep(300)
    except KeyboardInterrupt:
        print("\nProccess terminated by user")

if __name__ == '__main__':
    main()
As you can see, I moved the logic of getting the IP to a separate function, get_ip, and added a few missing things, like capturing the buffer as a string and returning it. Otherwise it is pretty much the same as the first example in the pycurl quickstart.
The main function is called at the bottom, when the script is run directly (not imported).
First it calls get_ip to get the initial IP and then runs the while loop, which checks whether the IP has changed and lets you know if so.
EDIT:
Since you changed your question, here is your new code in a working example:
import requests

def getIP():
    result = requests.get("http://ipinfo.io/ip")
    return result.text

def checkIP():
    initial = getIP()
    print("Initial IP: {}".format(initial))
    while True:
        current = getIP()
        if initial == current:
            pass
        else:
            print("IP has changed!")

checkIP()
As I mentioned in the comments above, you do not need two loops; one is enough. You don't even need two functions, but it is better to have them: one for getting the data and one for the loop. In the latter, first get the initial value and then run the loop, inside which you check whether the value has changed or not.
It seems, from reading the pycurl documentation, that you would find it easier to solve this problem using the requests library. Curl is more oriented toward file transfer, so the library expects you to provide a file-like object into which it writes the contents. This would greatly complicate your logic.
requests allows you to access the text of the server's response directly:
>>> import requests
>>> result = requests.get("http://ipinfo.io/ip")
>>> result.text
'151.231.192.8\n'
As #PeterWood suggested, a function would be more appropriate than a class for this - or if the script is going to run continuously, just a simple loop as the body of the program.
I am trying to test this demo program from Lynda using Python 3. I am using PyCharm as my IDE. I already added and installed the request package, but when I run the program, it runs cleanly and shows the message "Process finished with exit code 0", but does not show any output from the print statements. Where am I going wrong?
import urllib.request  # instead of urllib2 like in Python 2.7
import json

def printResults(data):
    # Use the json module to load the string data into a dictionary
    theJSON = json.loads(data)

    # now we can access the contents of the JSON like any other Python object
    if "title" in theJSON["metadata"]:
        print(theJSON["metadata"]["title"])

    # output the number of events, plus the magnitude and each event name
    count = theJSON["metadata"]["count"]
    print(str(count) + " events recorded")

    # for each event, print the place where it occurred
    for i in theJSON["features"]:
        print(i["properties"]["place"])

    # print the events that only have a magnitude greater than 4
    for i in theJSON["features"]:
        if i["properties"]["mag"] >= 4.0:
            print("%2.1f" % i["properties"]["mag"], i["properties"]["place"])

    # print only the events where at least 1 person reported feeling something
    print("Events that were felt:")
    for i in theJSON["features"]:
        feltReports = i["properties"]["felt"]
        if feltReports is not None:
            if feltReports > 0:
                print("%2.1f" % i["properties"]["mag"], i["properties"]["place"], " reported " + str(feltReports) + " times")

# Open the URL and read the data
urlData = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"
webUrl = urllib.request.urlopen(urlData)
print(webUrl.getcode())
if webUrl.getcode() == 200:
    data = webUrl.read()
    data = data.decode("utf-8")  # in Python 3.x we need to explicitly decode the response to a string
    # print out our customized results
    printResults(data)
else:
    print("Received an error from server, cannot retrieve results " + str(webUrl.getcode()))
Not sure if you left this out on purpose, but this script isn't actually executing any code beyond the imports and function definition. Assuming you didn't leave it out on purpose, you would need the following at the end of your file.
if __name__ == '__main__':
    data = ""  # your data
    printResults(data)
The check on __name__ equaling "__main__" is just so your code only executes when the file is explicitly run. To always run your printResults(data) function when the file is accessed (like, say, if it's imported into another module), you could just call it at the bottom of your file like so:
data = "" # your data
printResults(data)
I had to restart the IDE after installing the module. I just realized this and tried it now with "Run as Admin". Strangely, it seems to work now. But I'm not sure if it was a temporary error, since even without a restart it was able to detect the module and its methods.
Your comments re: having to restart your IDE make me think that PyCharm might not automatically detect newly installed Python packages. This SO answer seems to offer a solution.
I have file test.py:
import cgi, cgitb # Import modules for CGI handling
form = cgi.FieldStorage()
person_name = form.getvalue('person_name')
print ("Content-type:text/html\n\n")
print ("<html>")
print ("<head>")
print ("</head>")
print ("<body>")
print (" hello world <br/>")
print(person_name)
print ("</body>")
print ("</html>")
When I go to www.myexample.com/test.py?person_name=john, the result I get is:
hello world
None
meaning that I could not get the parameter "person_name" from the URL.
P.S. It works perfectly on my localhost server, but when I upload it to the online web server, it somehow can't parse the parameter from the URL.
How can I fix it?
Use this then:
import os

qs = os.environ.get('QUERY_STRING', '')  # the raw query string from the CGI environment
form_arguments = cgi.FieldStorage(environ={'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs})
for i in form_arguments.keys():
    print form_arguments[i].value
In my previous answer I assumed you had webapp2. I think this will serve your purpose.
Alternatively you can try:
import urlparse
url = 'www.myexample.com/test.py?person_name=john'
par = urlparse.parse_qs(urlparse.urlparse(url).query)
person_name = par['person_name']
And to get the current URL, use this:
import os

url = os.environ['HTTP_HOST']
uri = os.environ['REQUEST_URI']
url = url + uri
par = urlparse.parse_qs(urlparse.urlparse(url).query)
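Note that parse_qs returns a list of values for each parameter, so to get a single string you would typically take the first element:
person_name = par.get('person_name', [''])[0]  # parse_qs values are lists, e.g. ['john']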
I have a very simple pyramid application which serves a simple static page. Let's say its name is mypyramid and uses port 9999.
If I launch mypyramid in another Linux console manually, then I can use the following code to print out the HTML string.
if __name__ == "__main__":
    import urllib2
    print 'trying to download url'
    response = urllib2.urlopen('http://localhost:9999/index.html')
    html = response.read()
    print html
But I want to launch mypyramid in an application automatically.
So in my another application, I used pexpect to launch mypyramid, and then try to get the html string from http://localhost:9999/index.html.
import pexpect

def _start_mypyramid():
    p = pexpect.spawn(command='./mypyramid')
    return p

if __name__ == "__main__":
    p = _start_mypyramid()
    print p
    print 'mypyramid started'

    import urllib2
    print 'trying to download url'
    response = urllib2.urlopen('http://localhost:9999/index.html')
    html = response.read()
    print html
It seems mypyramid has been successfully launched by pexpect, as I can see the process printed and "mypyramid started" is reached.
However, the application just hangs after "trying to download url", and I can't get anything.
What is the solution? I thought pexpect would create another process; if that's true, then why is it stopping the retrieval of the HTML?
My guess would be that the child returned by pexpect.spawn needs to communicate.
It attempts to write but nobody reads, so the app stops. (I am only guessing though).
If you do not have any reason to use pexpect (which you probably don't if you do not communicate with the child process), why wouldn't you just go for the standard subprocess module?
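A minimal sketch of that approach (assuming ./mypyramid starts the server and keeps serving on port 9999; the sleep is just a crude way to wait for it to come up):
import subprocess
import time
import urllib2

if __name__ == "__main__":
    p = subprocess.Popen(['./mypyramid'])  # launch the server as a separate process
    time.sleep(2)  # crude wait until the server is listening

    print 'trying to download url'
    response = urllib2.urlopen('http://localhost:9999/index.html')
    html = response.read()
    print html

    p.terminate()  # stop the server when done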