Python: Network calls before network service is up

I have a script that gets launched on boot, and it may be launched before networking is fully up.
The following code fails if it runs before networking is up; if it is called again later, it succeeds.
Even if I increase the tries to 5 minutes, it keeps retrying for the full 5 minutes and then returns False, even though networking comes up probably less than 30 seconds after the script launches.
Rather than just sleeping for 1 minute before making any attempt, is there a way to make the following code work and not die if the Ethernet is not up?
self.TRIES = 60
self.URL = "http://www.somedomain.com"

## Do we have internet?
def isup():
    try:
        urllib2.urlopen(self.URL).close()
        return True
    except urllib2.URLError as e:
        pass
    return False

## Try the lookup
while self.TRIES > 0:
    if isup():
        check()
        break
    self.TRIES = self.TRIES - 1
    time.sleep(1)
Edit
During OS boot (Arch Linux), the adapter (eth0 in this case) and the networking service are initially not running and are started during the boot process.
It appears that urllib2 (and other network-related calls) dies if it is called before the networking service is fully up, and subsequent calls will then always fail.
This is NOT the same as just disconnecting the Ethernet cable: if you just unplug the cable and call the function (class), it will succeed, but if it is called BEFORE the networking service is fully up, it will fail and keep failing.
I can solve this problem by adding a time.sleep(30) to the top of the code; this gives the OS network service enough time to fully start, and the script then works 100% as expected.
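One alternative to a fixed sleep is to wait for the network at the socket level before making any HTTP calls. A minimal sketch, assuming some reachable endpoint (8.8.8.8:53 here is just a placeholder):
import socket
import time

def wait_for_network(host="8.8.8.8", port=53, timeout=3, tries=60):
    # Poll until a TCP connection succeeds, or give up after `tries` attempts
    for _ in range(tries):
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except socket.error:
            time.sleep(1)
    return False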

Use requests and check the status code?
import requests

In [36]: r = requests.get('http://httpbin.org/get')
In [37]: r.status_code == requests.codes.ok
Out[37]: True
In [38]: r.status_code
Out[38]: 200

In [33]: r = requests.get('http://httpbin.org/bad')
In [34]: r.status_code
Out[34]: 404
In [35]: r.status_code == requests.codes.ok
Out[35]: False
def isup():
    try:
        r = requests.get(self.URL)
        return r.status_code == requests.codes.ok
    except Exception as e:
        print e
        return False
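One refinement worth adding, as a hedged sketch: requests.get() accepts a timeout parameter, so the check cannot hang indefinitely while the interface is still coming up (the URL and default are placeholders):
import requests

def isup(url="http://www.somedomain.com", timeout=5):
    # True when `url` answers with an OK status; False on any request error
    try:
        r = requests.get(url, timeout=timeout)
        return r.status_code == requests.codes.ok
    except requests.exceptions.RequestException:
        return False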

You could do it as:
def isup():
    try:
        urllib2.urlopen(self.URL).close()
        return True
    except urllib2.URLError as e:
        pass
    return False
## Try the lookup
while not isup():
    pass  # or replace pass with time.sleep(1) or time.sleep(0.5)
check()

Why don't you use ping?
import os

def isUp():
    ret = os.system("ping -c 1 www.google.com")
    return ret == 0  # ping exits with 0 on success
NOTE: this does not work on Windows as-is, but you get the idea...
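If Windows support matters, a hedged cross-platform variant (assuming the usual ping flags: -n on Windows, -c elsewhere) could look like this:
import platform
import subprocess

def is_up(host="www.google.com"):
    # Ping once; ping exits with 0 if the host replied
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.call(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result == 0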


Why "Using python interact with Operating System"'s week 1 Qwiklabs Assessment: Working with Python code is not correctly working

In the Google IT Automation with Python specialization, the week 1 Qwiklabs Assessment "Working with Python" (the 3rd module of "Using Python to interact with Operating System") is not working properly.
In the ~/scripts directory:
network.py code:
#!usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    print(localhost)
    if localhost == '127.0.0.1':
        return True
    return False

def check_connectivity():
    request = requests.get("http://www.google.com")
    responses = request.status_code
    print(responses)
    if responses == 200:
        return True
    return False
Using this code I completed the "Create a new Python module" task, but Qwiklabs tells me that my code is not written correctly.
What is the problem?
I am responding here because I noticed that a lot of folks taking the course "Using Python to interact with the Operating System" on Coursera have similar issues writing the Python functions check_localhost and check_connectivity. Please copy these functions to your VM and try again.
To ping the web and check whether the localhost is correctly configured, we import the requests module and the socket module.
Next, write a function check_localhost, which checks whether the localhost is correctly configured, by calling gethostbyname within the function.
localhost = socket.gethostbyname('localhost')
gethostbyname translates a hostname to IPv4 address format. Pass the parameter 'localhost' to it; the result should be 127.0.0.1.
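A quick interactive check (assuming a normally configured hosts file):
>>> import socket
>>> socket.gethostbyname('localhost')
'127.0.0.1'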
Edit the function check_localhost so that it returns True if the call returns 127.0.0.1.
import requests
import socket

# Function to check localhost
def check_localhost():
    localhost = socket.gethostbyname('localhost')
    if localhost == "127.0.0.1":
        return True
    else:
        return False
Now, we will write another function called check_connectivity. This checks whether the computer can make successful calls to the internet.
A request is when you ping a website for information. The requests library is designed for this task. You will use the requests module and call its get method, passing http://www.google.com as the parameter.
request = requests.get("http://www.google.com")
This returns the website's status code, which is an integer value. Assign the result to a variable and check its status_code attribute; it should be 200.
Edit the function check_connectivity so that it returns True if the status code is 200.
# Function to check connectivity
def check_connectivity():
    request = requests.get("http://www.google.com")
    if request.status_code == 200:
        return True
    else:
        return False
Once you have finished editing the file, press Ctrl-o, Enter, and Ctrl-x to exit.
When you're done, click Check my progress to verify the objective.
I was also using similar code; although it executed fine in the lab terminal, it was not successfully verified. I contacted the support team via chat, and they provided similar but somewhat more efficient code that worked:
#!/usr/bin/env python3
import requests
import socket

localhost = socket.gethostbyname('localhost')
request = requests.get("http://www.google.com")

def check_localhost():
    if localhost == "127.0.0.1":
        return True

def check_connectivity():
    if request.status_code == 200:
        return True
I used the same code as yours to check what the problem is, but your code passed the Qwiklabs check successfully.
I think there is something wrong on their end. Did you retry by ending this lab session and creating another session, just to check?
It's easy: in this case, the shebang line should be /usr/bin/env python3.
You typed a wrong shebang line:
#!usr/bin/env python3
But you should type:
#!/usr/bin/env python3
Add a shebang line to define where the interpreter is located.
It can be written like this:
#!/usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    return localhost == "127.0.0.1"

def check_connectivity():
    request = requests.get("http://www.google.com")
    return request.status_code == 200
This script works even if you are facing the issue in the post:
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    return True  # or: return localhost == "127.0.0.1"

def check_connectivity():
    request = requests.get("http://www.google.com")
    return True  # or: return request.status_code == 200
What could be wrong with your code? The verification system is faulty and doesn't accept:
- a tab instead of 4 spaces as indentation
- blank lines between the lines of code
#!/usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    print(localhost)
    if localhost == '127.0.0.1':
        return True

def check_connectivity():
    request = requests.get("http://www.google.com")
    responses = request.status_code
    print(responses)
    if responses == 200:
        return True
#!/usr/bin/env python3
import requests
import socket

def check_localhost():
    localhost = socket.gethostbyname('localhost')
    if localhost == "127.0.0.1":
        return True

def check_connectivity():
    request = requests.get("http://www.google.com")
    if request.status_code == 200:
        return True

Python issue with time.sleep in sleekxmpp

I am using sleekxmpp as the XMPP client for Python. Requests come in, and I forward them to other users/agents.
Now the use case is: if a user is not available, we need to check their availability every 10 seconds and transfer the chat when they become available. We need to send a message to the customer only 5 times but keep checking availability for a long time.
I am using time.sleep() to check again in 10 seconds if the user is not available, but the problem is that it blocks the entire thread, and no new requests reach the server.
send_msg_counter = 0
check_status = False
while not check_status:
    check_status = requests.post(transfer_chat_url, data=data)
    if send_msg_counter < 5:
        send_msg("please wait", customer)
        send_msg_counter += 1
    time.sleep(10)
It is true that time.sleep(10) will block your active thread. You may find that Python 3's async/await is the way to go. Sadly, I don't have much experience with those keywords yet, but another route might be to use Python's threading module.
https://docs.python.org/3/library/threading.html
Here might be one way to implement this feature.
import threading
import time

import requests

def poll_counter(customer, transfer_chat_url, data, send_count=5, interval=10):
    send_msg_counter = 0
    check_status = False
    while not check_status:
        check_status = requests.post(transfer_chat_url, data=data)
        if send_msg_counter < send_count:
            send_msg("please wait", customer)
            send_msg_counter += 1
        time.sleep(interval)
    # If we're here, check_status became true
    return None

# ... pre-existing code ...
threading.Thread(target=poll_counter, args=(customer, transfer_chat_url, data)).start()
# ... proceed to handle other tasks while the thread runs in the background.
Now, I won't go into detail, but there are use cases where threading is a major mistake. This shouldn't be one of them, but here is a good read for you to understand those use cases.
https://realpython.com/python-gil/
Also, for more details on asyncio (async/await) here is a good resource.
https://docs.python.org/3/library/asyncio-task.html
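For what it's worth, here is a hedged sketch of the same polling loop using async/await; send_msg comes from the question and is assumed synchronous, and requests.post is pushed into a thread executor because it is blocking:
import asyncio
import requests

async def poll_counter(customer, transfer_chat_url, data, send_count=5, interval=10):
    send_msg_counter = 0
    check_status = False
    loop = asyncio.get_running_loop()
    while not check_status:
        # requests is blocking, so run it in the default thread executor
        check_status = await loop.run_in_executor(
            None, lambda: requests.post(transfer_chat_url, data=data)
        )
        if send_msg_counter < send_count:
            send_msg("please wait", customer)  # from the question; assumed synchronous
            send_msg_counter += 1
        await asyncio.sleep(interval)  # yields control instead of blocking the thread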
Try implementing an interruptible wait along these lines (adapted from the sleekxmpp reconnect logic):
delay = min(self.reconnect_delay * 2, self.reconnect_max_delay)
delay = random.normalvariate(delay, delay * 0.1)
log.debug('Waiting %s seconds before connecting.', delay)
elapsed = 0
try:
    while elapsed < delay and not self.stop.is_set():
        time.sleep(0.1)
        elapsed += 0.1
except KeyboardInterrupt:
    self.set_stop()
    return False
except SystemExit:
    self.set_stop()
    return False

auto execute a web service in falcon

I have a function which registers my web service with a Spring Eureka discovery server, but Eureka automatically de-registers it. To solve this problem, I thought of making a function that executes every few seconds and registers my service again and again.
Please suggest what to do; if you have a better approach to this problem, that would be great.
We can make another program which pings the health-check URL of my web server.
responsePythonAPI = requests.request("GET", "http://10.95.51.8:5050/health", headers=headers)
pythonAPI = True if responsePythonAPI.json()["status"]["value"] == u'200 OK' and responsePythonAPI.json()["status"]["code"] == 200 else False
if pythonAPI == True:
    eureka.registerWebService()
else:
    eureka.deregisterWebService()
This program runs as soon as the application is up, and re-registers the service at a time interval of 100 seconds.
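A hedged sketch of that 100-second loop as a daemon thread; eureka.registerWebService and the health URL come from the answer above, while the wiring around them is an assumption:
import threading
import time

import requests

def keep_registered(health_url, interval=100):
    # Re-check health and re-register every `interval` seconds
    def loop():
        while True:
            try:
                resp = requests.get(health_url, timeout=10)
                if resp.status_code == 200:
                    eureka.registerWebService()  # from the answer above
            except requests.exceptions.RequestException:
                pass  # service not reachable yet; retry next round
            time.sleep(interval)
    threading.Thread(target=loop, daemon=True).start()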

Bug in python thread

I have some Raspberry Pis running Python code. Once in a while my devices fail to check in. The rest of the Python code continues to run perfectly, but the code here quits, and I am not sure why. If the devices can't check in they should reboot, but they don't. Other threads in the Python file continue to run correctly.
class reportStatus(Thread):
    def run(self):
        checkInCount = 0
        while 1:
            try:
                if checkInCount < 50:
                    payload = {'d': device, 'k': cKey}
                    resp = requests.post(url + 'c', json=payload)
                    if resp.status_code == 200:
                        checkInCount = 0
                        time.sleep(1800)  # 30 min
                    else:
                        checkInCount += 1
                        time.sleep(300)  # 5 min
                else:
                    os.system("sudo reboot")
            except:
                try:
                    checkInCount += 1
                    time.sleep(300)
                except:
                    pass
The devices can run for days or weeks and will check in perfectly every 30 minutes, then out of the blue they stop. My Linux computers run from a read-only filesystem and continue to work and run correctly; the issue is in this thread. I think they might fail to get a response, and this line could be the issue:
resp = requests.post(url+'c', json=payload)
I am not sure how to solve this; any help or suggestions would be greatly appreciated.
Thank you
A bare except: pass is a very bad idea.
A much better approach would be to, at the very minimum, log any exceptions:
import datetime
import time
import traceback

while True:
    try:
        time.sleep(60)
    except:
        with open("exceptions.log", "a") as log:
            log.write("%s: Exception occurred:\n" % datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
            traceback.print_exc(file=log)
Then, when you get an exception, you get a log:
2016-12-20 13:28:55: Exception occurred:
Traceback (most recent call last):
  File "./sleepy.py", line 8, in <module>
    time.sleep(60)
KeyboardInterrupt
It is also possible that your code is hanging on sudo reboot or requests.post. You could add additional logging to troubleshoot which issue you have, although given you've seen it do reboots, I suspect it's requests.post, in which case you need to add a timeout (from the linked answer):
import requests
import eventlet
eventlet.monkey_patch()

# ...
resp = None
with eventlet.Timeout(10):
    resp = requests.post(url + 'c', json=payload)
if resp:
    ...  # your code
Your code basically ignores all exceptions. This is considered a bad thing in Python.
The only reason I can think of for the behavior that you're seeing is that after checkInCount reaches 50, the sudo reboot raises an exception which is then ignored by your program, keeping this thread stuck in the infinite loop.
If you want to see what really happens, add print or logging.info statements to all the different branches of your code.
Alternatively, remove the blanket try-except clause or replace it by something specific, e.g. except requests.exceptions.RequestException
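A hedged sketch of that narrower except clause applied to the check-in call (url, payload, and checkInCount come from the question's code):
import logging

import requests

try:
    resp = requests.post(url + 'c', json=payload, timeout=45)
    if resp.status_code == 200:
        checkInCount = 0
except requests.exceptions.RequestException as e:
    # timeout, connection error, DNS failure, etc. -- log it instead of hiding it
    logging.warning("check-in failed: %s", e)
    checkInCount += 1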
Because of the answers given, I was able to come up with a solution. I realized requests has a built-in timeout feature; the timeout will never happen if the timeout parameter is not specified.
Here is my solution:
resp = requests.post(url+'c', json=payload, timeout=45)
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely.
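One hedged caveat to the solution above: when the timeout fires, requests raises requests.exceptions.Timeout, so the caller still needs to handle it, for example:
import requests

try:
    resp = requests.post(url + 'c', json=payload, timeout=45)
except requests.exceptions.Timeout:
    resp = None  # treat a timed-out check-in like a failed one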
The answers provided by TemporalWolf and others helped me a lot. Thank you to all who helped.

celery + eventlet = 100% CPU usage

We are using celery to get flight data from different travel agencies. Every request takes ~20-30 seconds (most agencies require a request sequence: authorize, send request, poll for results). A normal celery task looks like this:
from eventlet.green import urllib2, time

def get_results(attr, **kwargs):
    search, provider, minprice = attr
    data = XXX  # prepared data
    host = urljoin(MAIN_URL, "RPCService/Flights_SearchStart")
    req = urllib2.Request(host, data, {'Content-Type': 'text/xml'})
    try:
        response_stream = urllib2.urlopen(req)
    except urllib2.URLError as e:
        return [search, None]
    response = response_stream.read()
    rsp_host = urljoin(MAIN_URL, "RPCService/FlightSearchResults_Get")
    rsp_req = urllib2.Request(rsp_host, response, {'Content-Type': 'text/xml'})
    ready = False
    sleeptime = 1
    rsp_response = ''
    while not ready:
        time.sleep(sleeptime)
        try:
            rsp_response_stream = urllib2.urlopen(rsp_req)
        except urllib2.URLError as e:
            log.error('go2see: results fetch failed for %s IOError %s' % (search.id, str(e)))
        else:
            rsp_response = rsp_response_stream.read()
            try:
                rsp = parseString(rsp_response)
            except ExpatError as e:
                return [search, None]
            else:
                ready = rsp.getElementsByTagName('SearchResultEx')[0].getElementsByTagName('IsReady')[0].firstChild.data
                ready = (ready == 'true')
        sleeptime += 1
        if sleeptime > 10:
            return [search, None]
    hash = "%032x" % random.getrandbits(128)
    open(RESULT_TMP_FOLDER + hash, 'w+').write(rsp_response)
    # call to parser
    parse_agent_results.apply_async(queue='parsers', args=[__name__, search, provider, hash])
These tasks are run in an eventlet pool with concurrency 300, prefetch_multiplier = 1, broker_limit = 300.
When ~100-200 tasks are fetched from the queue, CPU usage rises to 100% (a whole CPU core is used) and task fetching from the queue is performed with delays.
Could you please point out possible issues: blocking operations (the eventlet ALARM DETECTOR gives no exceptions), wrong architecture, or whatever.
A problem occurs if you fire 200 requests at a server: responses can be delayed, and urllib2.urlopen will therefore hang.
Another thing I noticed: if a URLError is raised, the program stays in the while loop until sleeptime is greater than 10. So a URLError will make this script sleep for up to 55 seconds (1+2+3+...+10).
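A hedged sketch of one way to bail out earlier on repeated failures; consecutive_errors is an invented name, and the rest mirrors the task's polling loop:
consecutive_errors = 0
while not ready:
    time.sleep(sleeptime)
    try:
        rsp_response_stream = urllib2.urlopen(rsp_req)
    except urllib2.URLError:
        consecutive_errors += 1
        if consecutive_errors >= 3:  # give up early instead of sleeping ~55s in total
            return [search, None]
    else:
        consecutive_errors = 0
        # ... parse the response as before ...
    sleeptime += 1
    if sleeptime > 10:
        return [search, None]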
Sorry for the late response.
The first thing I would try in such a situation is to turn off eventlet completely in both celery and your code, and use a process or OS-thread model. 300 threads or even processes is not that much load for the OS scheduler (although you may lack memory to run that many processes). So I would try it and see if CPU load drops dramatically. If it does not, then the problem is in your code, and eventlet can't magically fix it. If it does drop, however, we would need to investigate the issue more closely.
If the bug still persists, please report it via any of these channels:
https://bitbucket.org/which_linden/eventlet/issues/new
https://github.com/eventlet/eventlet/issues/new
email to eventletdev@lists.secondlife.com
