I'm trying to make a simple GET request using the requests library:
import requests

def main():
    content = requests.get("https://google.com")
    print(content.status_code)

if __name__ == "__main__":
    main()
I'm running this on Linux, version 17.10.
Python version: either 2.7 or 3.6 (tried both).
The code gets stuck while running; it doesn't time out or anything.
After I stop it, based on the call stack, it gets stuck at:
  File "/usr/lib/python2.7/socket.py", line 228, in meth
    return getattr(self._sock,name)(*args)
I just ran your code in the Python console and it returned 200. I am running Python 3.6.7 on Ubuntu 18.04.
It may be that your computer cannot reach google.com for a very long time. You should pass a timeout parameter and wrap the call in try/except.
Use the following code:
import requests

def main():
    success = False
    while not success:
        try:
            content = requests.get("https://google.com", timeout=5)
            success = True
        except requests.exceptions.RequestException:
            pass
    print(content.status_code)

if __name__ == "__main__":
    main()
If there is a temporary problem with your network connection, this snippet keeps retrying until it gets a proper response; consider capping the number of retries so it cannot loop forever.
In the Google IT Automation with Python specialization, course "Using Python to Interact with the Operating System", week 1's Qwiklabs assessment ("Working with Python"), the 3rd module does not work properly.
In the ~/scripts directory, network.py contains:
#!usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    print(localhost)
    if localhost == '127.0.0.1':
        return True
    return False

def check_connectivity():
    request = requests.get("http://www.google.com")
    responses = request.status_code
    print(responses)
    if responses == 200:
        return True
    return False
Using this code I completed the "Create a new Python module" step, but Qwiklabs tells me that I have not written the code properly. What is the problem?
I am responding with this piece because I noticed that a lot of folks taking the course "Using Python to Interact with the Operating System" on Coursera have similar issues writing the Python functions check_localhost and check_connectivity. Please copy these functions to your VM and try again.
To ping the web and check whether the local host is correctly configured, we import the requests and socket modules.
Next, write a function check_localhost, which checks whether the local host is correctly configured. We do this by calling gethostbyname within the function.
localhost = socket.gethostbyname('localhost')
gethostbyname translates a hostname to IPv4 address format. Pass 'localhost' as the parameter; the result should be 127.0.0.1.
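For example, a quick check in the interpreter (the printed value assumes a normally configured hosts file):

```python
import socket

# gethostbyname resolves a hostname to a single IPv4 address string
print(socket.gethostbyname('localhost'))  # usually 127.0.0.1
```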
Edit the function check_localhost so that it returns true if the function returns 127.0.0.1.
import requests
import socket

# Function to check localhost
def check_localhost():
    localhost = socket.gethostbyname('localhost')
    if localhost == "127.0.0.1":
        return True
    else:
        return False
Now, we will write another function called check_connectivity, which checks whether the computer can make successful calls to the internet.
A request is how you ask a website for information, and the requests library is designed for this task. Call its get method, passing http://www.google.com as the parameter.
request = requests.get("http://www.google.com")
This returns a response object; its status_code attribute holds an integer status code. Assign the result to a response variable and check its status_code attribute; it should be 200.
Edit the function check_connectivity so that it returns True if the status_code is 200.
# Function to check connectivity
def check_connectivity():
    request = requests.get("http://www.google.com")
    if request.status_code == 200:
        return True
    else:
        return False
Once you have finished editing the file, press Ctrl-o, Enter, and Ctrl-x to exit.
When you're done, click Check my progress to verify the objective.
I was also using similar code. Although it executed fine in the lab terminal, it was not being verified successfully. I contacted the support team using the support chat and they provided similar but somewhat leaner code that worked:
#!/usr/bin/env python3
import requests
import socket

localhost = socket.gethostbyname('localhost')
request = requests.get("http://www.google.com")

def check_localhost():
    if localhost == "127.0.0.1":
        return True

def check_connectivity():
    if request.status_code == 200:
        return True
I used your exact code to check what the problem was, and it passed the Qwiklabs check.
I think something else is wrong. Did you try ending this lab session and creating a new one, to check whether the problem is on their end?
It's simple: in this case, the shebang line should point at /usr/bin/env python3.
You typed a wrong shebang line:
#!usr/bin/env python3
But you should type:
#!/usr/bin/env python3
Add a shebang line to define where the interpreter is located. It can be written like this:
#!/usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    return localhost == "127.0.0.1"

def check_connectivity():
    request = requests.get("http://www.google.com")
    return request.status_code == 200
This script works even if you are facing the issue in the post:
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    return True  # or return localhost == "127.0.0.1"

def check_connectivity():
    request = requests.get("http://www.google.com")
    return True  # or return request.status_code == 200
What could be wrong with your code? The verification system is picky and doesn't accept:
- a tab instead of 4 spaces for indentation
- stray blank lines between the lines it expects

Your version, with the wrong shebang line:
#!usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    print(localhost)
    if localhost == '127.0.0.1':
        return True

def check_connectivity():
    request = requests.get("http://www.google.com")
    responses = request.status_code
    print(responses)
    if responses == 200:
        return True

Corrected, with the proper shebang line:
#!/usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    if localhost == "127.0.0.1":
        return True

def check_connectivity():
    request = requests.get("http://www.google.com")
    if request.status_code == 200:
        return True
I am trying to create a simple HTTP server using Python's http.server module, with HTTPServer and a BaseHTTPRequestHandler subclass: https://github.com/python/cpython/blob/main/Lib/http/server.py
There are numerous examples of this approach online and I don't believe I am doing anything unusual.
I am simply importing the classes in my code via:
from http.server import HTTPServer, BaseHTTPRequestHandler
My code overrides the do_GET() method to parse the path variable to determine what page to show.
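A stripped-down sketch of this pattern (the routing table and page bodies here are placeholder assumptions, not the actual code):

```python
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse

PAGES = {  # hypothetical pages keyed by path
    "/": b"<html><body>Home</body></html>",
    "/about": b"<html><body>About</body></html>",
}

def route(path):
    """Map a request path to an (HTTP status, body) pair."""
    key = urlparse(path).path  # ignore any ?query=... part
    if key in PAGES:
        return 200, PAGES[key]
    return 404, b"<html><body>Not found</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = route(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("127.0.0.1", 50000), Handler).serve_forever()
```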
However, if I start this server and connect to it locally (ex: http://127.0.0.1:50000) the first page loads fine. If I navigate to another page (via my first page links) that too works fine, however, on occasion (and this is somewhat sporadic), there is a delay and the server log shows a Request timed out: timeout('timed out') error. I have tracked this down to the handle_one_request method in the BaseHTTPServer class:
def handle_one_request(self):
    """Handle a single HTTP request.

    You normally don't need to override this method; see the class
    __doc__ string for information on how to handle specific HTTP
    commands such as GET and POST.

    """
    try:
        self.raw_requestline = self.rfile.readline(65537)
        if len(self.raw_requestline) > 65536:
            self.requestline = ''
            self.request_version = ''
            self.command = ''
            self.send_error(HTTPStatus.REQUEST_URI_TOO_LONG)
            return
        if not self.raw_requestline:
            self.close_connection = True
            return
        if not self.parse_request():
            # An error code has been sent, just exit
            return
        mname = 'do_' + self.command  ## the name of the method is created
        if not hasattr(self, mname):  ## checking that we have that method defined
            self.send_error(
                HTTPStatus.NOT_IMPLEMENTED,
                "Unsupported method (%r)" % self.command)
            return
        method = getattr(self, mname)  ## getting that method
        method()  ## finally calling it
        self.wfile.flush()  # actually send the response if not already done.
    except socket.timeout as e:
        # a read or a write timed out.  Discard this connection
        self.log_error("Request timed out: %r", e)
        self.close_connection = True
        return
You can see where the exception is thrown in the "except socket.timeout as e:" clause.
I have tried overriding this method by including it in my code but it is not clear what is causing the error so I run into dead ends. I've tried creating very basic HTML pages to see if there was something in the page itself, but even "blank" pages cause the same sporadic issue.
What's odd is that sometimes a page loads instantly, and almost randomly, it will then timeout. Sometimes the same page, sometimes a different page.
I've played with the http.timeout setting, but it makes no difference. I suspect it's some underlying socket issue, but am unable to diagnose it further.
This is on a Mac running Big Sur 11.3.1, with Python version 3.9.4.
Any ideas on what might be causing this timeout, and in particular any suggestions on a resolution. Any pointers would be appreciated.
After further investigation, this appears to be an issue with Safari. Running the exact same code with Firefox does not show the same issue.
I have some Raspberry Pis running Python code. Once in a while my devices fail to check in. The rest of the Python code continues to run perfectly, but the code here quits, and I am not sure why. If the devices can't check in they should reboot, but they don't. Other threads in the Python file continue to run correctly.
class reportStatus(Thread):
    def run(self):
        checkInCount = 0
        while True:
            try:
                if checkInCount < 50:
                    payload = {'d': device, 'k': cKey}
                    resp = requests.post(url+'c', json=payload)
                    if resp.status_code == 200:
                        checkInCount = 0
                        time.sleep(1800)  # 30 min
                    else:
                        checkInCount += 1
                        time.sleep(300)  # 5 min
                else:
                    os.system("sudo reboot")
            except:
                try:
                    checkInCount += 1
                    time.sleep(300)
                except:
                    pass
The devices can run for days or weeks, checking in perfectly every 30 minutes, then out of the blue they stop. My Linux machines run a read-only filesystem and otherwise continue to work correctly. The issue is in this thread; I think they might fail to get a response, and this line could be the problem:
resp = requests.post(url+'c', json=payload)
I am not sure how to solve this, any help or suggestions would be greatly appreciated.
Thank you
A bare except:pass is a very bad idea.
A much better approach would be to, at the very minimum, log any exceptions:
import datetime
import time
import traceback

while True:
    try:
        time.sleep(60)
    except:
        with open("exceptions.log", "a") as log:
            log.write("%s: Exception occurred:\n" % datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
            traceback.print_exc(file=log)
Then, when you get an exception, you get a log:
2016-12-20 13:28:55: Exception occurred:
Traceback (most recent call last):
  File "./sleepy.py", line 8, in <module>
    time.sleep(60)
KeyboardInterrupt
It is also possible that your code is hanging on sudo reboot or requests.post. You could add additional logging to troubleshoot which issue you have, although given you've seen it do reboots, I suspect it's requests.post, in which case you need to add a timeout (from the linked answer):
import requests
import eventlet
eventlet.monkey_patch()

# ...
resp = None
with eventlet.Timeout(10):
    resp = requests.post(url+'c', json=payload)
if resp:
    # your code
Your code basically ignores all exceptions. This is considered a bad thing in Python.
The only reason I can think of for the behavior that you're seeing is that after checkInCount reaches 50, the sudo reboot raises an exception which is then ignored by your program, keeping this thread stuck in the infinite loop.
If you want to see what really happens, add print or logging.info statements to all the different branches of your code.
Alternatively, remove the blanket try-except clause or replace it by something specific, e.g. except requests.exceptions.RequestException
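A minimal sketch combining the two suggestions, logging plus a bounded retry count (check_in_loop and post_fn are placeholder names, not part of the original code; in real code the except clause would name requests.exceptions.RequestException):

```python
import logging
import time

logging.basicConfig(filename="checkin.log", level=logging.INFO)

def check_in_loop(post_fn, max_failures=50, retry_delay=300):
    """Call post_fn until it returns 200; give up after max_failures failures.

    post_fn stands in for the real requests.post call: it should return an
    HTTP status code, or raise on a network error.  Returns the number of
    failures seen before success (or max_failures if it never succeeded).
    """
    failures = 0
    while failures < max_failures:
        try:
            if post_fn() == 200:
                return failures  # checked in successfully
            failures += 1  # got a non-200 response
        except Exception:  # real code: except requests.exceptions.RequestException
            failures += 1
            logging.exception("check-in attempt failed")  # logs the traceback
        time.sleep(retry_delay)
    return failures
```

When check_in_loop returns max_failures, the caller can decide whether to reboot, so the decision is no longer hidden inside a silent except block.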
Because of the answers given I was able to come up with a solution. I realized requests has a built-in timeout parameter; no timeout is applied unless one is specified.
here is my solution:
resp = requests.post(url+'c', json=payload, timeout=45)
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely.
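A small sketch of that advice, assuming the requests library is installed (fetch_status is a made-up helper name; the timeout can also be a single number applied to both phases):

```python
import requests

def fetch_status(url):
    """Return the HTTP status code, or None on timeout or network failure."""
    try:
        # timeout=(connect, read): seconds to establish the connection,
        # then seconds to wait between bytes of the response
        resp = requests.get(url, timeout=(3.05, 10))
        return resp.status_code
    except requests.exceptions.Timeout:
        return None  # server too slow to connect or to send data
    except requests.exceptions.RequestException:
        return None  # DNS failure, refused connection, etc.
```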
The answers provided by TemporalWolf and others helped me a lot. Thank you to all who helped.
I am trying to use socketIO_client in Python, and I am pretty successful with it. However, when I let the program below run for a while (about an hour), it crashes, and if I look at system information with the 'top' command I can see the CPU spinning at something like 80 or 90%.
PS: this happens only on my Raspberry Pi, so it might be due to the implementation of the Python socketio module on ARM?
Am I doing anything wrong? Is there any socket I should close? I am not very familiar with sockets...
Here below my code:
from socketIO_client import SocketIO, BaseNamespace

class MainNamespace(BaseNamespace):
    def on_message(self, message):
        try:
            typestr = message["depth"]["type_str"]
            price_int = int(message["depth"]["price_int"])
            total_volume_int = long(message["depth"]["total_volume_int"])
            print "price_int:%s total_volume_int:%s" % (price_int, total_volume_int)
        except:
            pass

if __name__ == "__main__":
    try:
        mainSocket = SocketIO('socketio.mtgox.com', 80)
        chatSocket = mainSocket.connect('/mtgox', MainNamespace)
        mainSocket.wait()
    except Exception, e:
        print e
I rewrote socketIO-client in v0.5 so that it uses coroutines instead of threads to save memory. The external API remains the same.
pip install -U socketIO-client
Does v0.5 fix your issue?
I'm new to Python and have been struggling with this for quite a while. I have a program that logs into a server and then pings it every 10 seconds to see whether the status of the server has changed. In the same script I have a function that sends a message to the server.
a simple example of the send method:
def send(self, message):
    url = ("https://testserver.com/socket?message=%s") % (message)
    req = urllib2.Request(url, None, None)
    response = urllib2.urlopen(req).read()
    print response
Would it be possible for me to call this method from another script while this one is running, using the same session? It seems that when I run a script that calls this function, it creates a new instance instead of using the running script's instance, so it throws my exception saying I am not connected to the server.
Sorry for the noob question. I have tried googling for a while but I cant seem to find the answer. I have read the following but these didn't solve the problem:
Python call function within class
Python code to get current function into a variable?
Hi @nFreeze, thanks for the reply. I have tried to use ZeroRPC, but every time I run the script/example you gave (obviously edited) I run into this error:
Traceback (most recent call last):
  File "C:\Users\dwake\Desktop\Web Projects\test.py", line 1, in <module>
    import zerorpc
  File "C:\Python27\lib\site-packages\zerorpc\__init__.py", line 27, in <module>
    from .context import *
  File "C:\Python27\lib\site-packages\zerorpc\context.py", line 29, in <module>
    import gevent_zmq as zmq
  File "C:\Python27\lib\site-packages\zerorpc\gevent_zmq.py", line 33, in <module>
    import gevent.event
  File "C:\Python27\lib\site-packages\gevent\__init__.py", line 48, in <module>
    from gevent.greenlet import Greenlet, joinall, killall
  File "C:\Python27\lib\site-packages\gevent\greenlet.py", line 6, in <module>
    from gevent.hub import greenlet, getcurrent, get_hub, GreenletExit, Waiter
  File "C:\Python27\lib\site-packages\gevent\hub.py", line 30, in <module>
    greenlet = __import__('greenlet').greenlet
ImportError: No module named greenlet
This happens even though I have installed gevent. I'm not sure how to fix it; I have been googling for a good hour now.
What you're looking for is called an RPC server. It allows external clients to execute exposed functions in your app. Luckily python has many RPC options. ZeroRPC is probably my favorite as it is easy to use and supports node.js. Here is an example of how to expose your send method using ZeroRPC:
In your app (server)
import urllib2
import zerorpc

class HelloRPC(object):
    def send(self, message):
        url = ("https://testserver.com/socket?message=%s") % (message)
        req = urllib2.Request(url, None, None)
        response = urllib2.urlopen(req).read()
        return response

s = zerorpc.Server(HelloRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()
In the other app (client)
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
print c.send("RPC TEST!")
The simplest way is to use UNIX signals; you'll need no third-party libraries.
# your-daemon.py
import signal
from time import sleep

def main():
    while True:
        print "Do some job..."
        sleep(5)

def send():
    print "Send your data"

def onusr1(*args):
    send()

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, onusr1)
    main()
Run in terminal:
$ pgrep -f your-daemon.py | xargs kill -SIGUSR1
Of course, this works only on the local machine. Also, you can't pass any arguments to the send function; if you want many handlers, use RPC as advised in the other answer.