How to check network share response time using a Python script

Trying to write a script to check network share response time using Python.
import os
import time

file_name = f"\\\\fileserver_name\\share_name$\\TestFile-{time.time()}.txt"
try:
    with open(file_name, 'a') as f:
        f.write('hello')
    if os.path.isfile(file_name):
        time.sleep(1)
        os.remove(file_name)
        print("done")
except Exception as e:
    print(e)
I am able to access the share, but I am trying to raise a timeout error if the share does not respond within a given interval.
Looking for better ways to check network share response time using Python.
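One possible approach (a sketch, not a definitive solution): run the write in a worker thread and bound the wait with future.result(timeout=...), since open() on an unresponsive share can block for far longer than any acceptable interval. The local file name below stands in for the UNC share path, which is an assumption for testing.

```python
import concurrent.futures
import os
import time

def timed_write(path, timeout=5.0):
    """Write and delete a small file at path, returning elapsed seconds.

    Raises concurrent.futures.TimeoutError if the write does not finish
    within `timeout`. Note the worker thread may keep blocking in the
    background; we only stop waiting for it.
    """
    def probe():
        start = time.monotonic()
        with open(path, "a") as f:
            f.write("hello")
        os.remove(path)
        return time.monotonic() - start

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(probe)
    try:
        return future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)  # don't block on a hung worker thread

# Local path standing in for the real \\fileserver_name\share_name$ path
elapsed = timed_write(f"TestFile-{time.time()}.txt")
print(f"share responded in {elapsed:.3f}s")
```

The same timed_write() call should work against the real UNC path; concurrent.futures.TimeoutError then signals an unresponsive share.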

Related

Telethon, TelegramClient.connect: how do I set the authentication code programmatically?

I am using telethon to automate some tasks on Telegram.
I am trying to create an API where third-party users can provide a phone number and enter a code through an API. I have the phone number part working: users input their phone number through a web service, it gets written to a file, then I open that file in Python and fetch the phone number as {number}. I then connect to the client as below.
client = TelegramClient(f'{number}', API_ID, API_KEY)
try:
    await client.connect()
except Exception as e:
    print('Failed to connect', e, file=sys.stderr)
    return
Once the code is run, the user enters the verification code (not in the Python app), which gets written to a file.
Python then shows the following prompt:
Please enter the code you received:
I can open the file that contains the verification code as {code}, but how do I use {code} to reply to 'Please enter the code you received:'?
Thanks
I think you receive the code on Telegram, not in the file.
It is possible, but it would be very complex, as the code is sent to another device. You could write a custom Telegram client that sends this code to your program, but that is too complex, and in 99.9% of cases you won't need it.
Edit:
If you already have this code, say in a code variable, you can try using the sign_in() method instead of connect():
try:
    await client.sign_in(number, code)
except Exception as e:
    print('Failed to connect', e, file=sys.stderr)
    return
Reference for sign_in() in docs

Python - OPCDA read from remote server with OpenOPC

I have a huge problem with OPC DA and OpenOPC. I must read a set of tags from a remote server, and I have no access to the machine in any way; I only know the IP and the OPC server name.
Testing OpenOPC locally with the code below, everything works fine. However, after changing the hostname to the remote one, nothing works and I get error 0x800706BA ("The RPC server is unavailable").
import OpenOPC
import time

try:
    opc = OpenOPC.client()
    opc.servers()
    # change localhost to remote
    opc.connect('Matrikon.OPC.Simulation.1', 'localhost')
    srvList = opc.list()
    print(srvList)
    # read the group once and iterate over the results
    tags = opc.read(opc.list('Simulation Items.Random.Int*'), group='myTest')
    for name, value, quality, tagTime in tags:
        print(name, value, quality, tagTime)
except Exception as e:
    print('OPC failed')
    print(str(e))
finally:
    print('END')
Does anyone have any ideas on this?
Not having access to the server (set up with anonymous logon), I have done the DCOM configuration as far as possible.
Does anyone know a procedure for a possible solution?
Thanks!

Python requests causing error on certain urls

For some reason, when I try to fetch and process the following URL with python-requests, I receive an error that causes my program to fail. Other similar URLs seem to work fine.
import requests
test = requests.get('http://t.co/Ilvvq1cKjK')
print test.url, test.status_code
What could be causing this URL to fail instead of just producing a 404 status code?
The requests library has an exception hierarchy, as listed here.
So wrap your GET request in a try/except block:
import requests

try:
    test = requests.get('http://t.co/Ilvvq1cKjK')
    print test.url, test.status_code
except requests.exceptions.ConnectionError as e:
    print e.request.url, "*connection failed*"
That way you end up with behaviour similar to what you have now (you still get the redirected URL), but you handle the failure to connect rather than printing the status code.

Checking if website responds in python using a browser user agent

I am trying to come up with a script that checks whether a domain name resolves to its IP address via DNS, using a Python script I wrote.
I want to be able to do this in a few sequential loops; however, after running the loop once, on the second run the names that previously returned a successful DNS resolution no longer do.
Below is my script:
#! C:\Python27
import socket, time

localtime = time.asctime(time.localtime(time.time()))

def hostres(hostname):
    print "Attempting to resolve " + hostname
    try:
        socket.gethostbyname(hostname)
        print "Resolved Successfully!"
    except socket.error:
        print "Could Not Resolve"

print "*************************************************"
print "Website loop starting.."
print "Local current time :", localtime
print "*************************************************"
print ""

text_file = open("sites.txt", "r")
lines = text_file.readlines()
for line in lines:
    hostres(line)
text_file.close()
The contents of the text file are:
www.google.com
en.wikipedia.org
www.youtube.com
us.gamespot.com
I am thinking these domains' servers recognize the script as a "bot" rather than a legitimate end user; would that be a correct assumption?
If so, how can I still check whether the DNS name resolves (by looking up the website's name or IP, it does not matter) without getting a false "request failed" reading, despite the fact that the site is fully accessible from a browser?
Several problems in this question.
You are not checking whether "a website responds"; you are testing DNS resolution. All your DNS requests go to a single name server: your local DNS resolver. Even if all of them resolve, that says nothing about the status of the websites. Also, since you aren't actually talking to these websites, they have no way of knowing you're a bot. They could only detect that (based on the HTTP User-Agent header) if you made an HTTP request.
Regarding your code problem: you need to trim the newline character before calling socket.gethostbyname(). Replace socket.gethostbyname(hostname) with socket.gethostbyname(hostname.rstrip()) and you'll be fine.
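A minimal sketch of the corrected resolver (Python 3 here, since only the hostname handling matters; no-such-host.invalid is just an example name, and the .invalid TLD is reserved so it never resolves):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for hostname, or None if resolution fails.

    rstrip() removes the trailing newline that readlines() leaves on each
    line -- with the newline still attached, gethostbyname() always fails.
    """
    try:
        return socket.gethostbyname(hostname.rstrip())
    except socket.gaierror:
        return None

# "localhost\n" mimics a line read straight from sites.txt
print(resolve("localhost\n"))
print(resolve("no-such-host.invalid"))
```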

Python: have urllib skip failed connection

Using a Nokia N900, I have a urllib.urlopen statement that I want to be skipped if the server is offline (if it fails to connect, proceed to the next line of code).
How should/could this be done in Python?
According to the urllib documentation, it will raise IOError if the connection can't be made.
try:
    urllib.urlopen(url)
except IOError:
    # exception handling goes here if you want it
    pass
else:
    DoSomethingUseful()
Edit: As unutbu pointed out, urllib2 is more flexible. The Python documentation has a good tutorial on how to use it.
try:
    urllib.urlopen("http://fgsfds.fgsfds")
except IOError:
    pass
If you are using Python3, urllib.request.urlopen has a timeout parameter. You could use it like this:
import urllib.request as request

try:
    response = request.urlopen('http://google.com', timeout=0.001)
    print(response)
except request.URLError as err:
    print('got here')
    # urllib.error.URLError: <urlopen error timed out>
timeout is measured in seconds. The ultra-short value above is just to demonstrate that it works. In real life you'd probably want to set it to a larger value, of course.
urlopen also raises a urllib.error.URLError (which is also accessible as request.URLError) if the url does not exist or if your network is down.
For Python2.6+, equivalent code can be found here.
