How can I make this web3 Python script faster? - python

I want to make a Python script (for BSC) which keeps track of the balance of a particular token in a wallet. I need the script to be very fast. Currently, with the code below, it takes about 6 seconds for the script to detect the token entering the wallet. Is there a faster, more efficient way to do this? (I added the sleep call to act as some kind of buffer; I don't know if that's a good idea, though.)
Edit: removed the sleep function, but it still takes 6s.
from web3 import Web3
import json
import time

bsc = "https://bsc-dataseed.binance.org/"
web3 = Web3(Web3.HTTPProvider(bsc))
print(web3.isConnected())

main_address = "wallet to be tracked"
contract_address = "token contract address"
abi = json.loads('the abi')
contract = web3.eth.contract(address=contract_address, abi=abi)

balanceOfToken = contract.functions.balanceOf(main_address).call()
print(web3.fromWei(balanceOfToken, 'ether'))

x = 0
while True:
    balanceOfToken = contract.functions.balanceOf(main_address).call()
    # toWei, not fromWei: balanceOf returns wei, so compare wei to wei
    if balanceOfToken > web3.toWei(0.5, 'ether'):
        break
    time.sleep(1.1)
    x += 1
    print(f"Still looking {x}")

second_address = "the other wallet address"
main_key = "private key of first wallet"
nonce = web3.eth.getTransactionCount(main_address)
token_tx = contract.functions.transfer(second_address, balanceOfToken).buildTransaction({
    'chainId': 56, 'gas': 90000, 'gasPrice': web3.toWei('5', 'gwei'), 'nonce': nonce
})
signed_tx = web3.eth.account.signTransaction(token_tx, main_key)
web3.eth.sendRawTransaction(signed_tx.rawTransaction)
print(str(contract.functions.balanceOf(main_address).call()) + " " + contract.functions.name().call())

Key to answering your question is: What takes 6 seconds?
Running the code from start to finish?
If I run the code on my laptop - using the same node - it executes in 0.45-0.55s. So perhaps it is not the code itself, but your connection to the node, that is slowing down the calls or the broadcasting of the transaction. If so, trying another node may speed up execution; see Binance's docs for alternatives or check a third-party provider.
It is unlikely, but it could also be a lack of available processing power on your laptop (?)
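To find out which step is slow, you can wrap the individual calls with a timer. This is a generic stdlib sketch; the `timed` helper and the placeholder lambda are illustrative - substitute the actual `balanceOf` call or the transaction broadcast:

```python
import time

def timed(label, func, *args, **kwargs):
    """Run func, print how long it took, and return its result."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print("%s took %.3f s" % (label, elapsed))
    return result

# Placeholder for e.g.:
#   balance = timed("balanceOf", contract.functions.balanceOf(main_address).call)
balance = timed("balanceOf", lambda: 42)
```

If the RPC call dominates the runtime, the node (or the route to it) is the bottleneck rather than the Python code.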
Starting the code until the transaction shows up in the block?
The code takes c. 0.5s to run. Add the 3s target block time on BSC and you are already at 3.5s, assuming there is space in the block (i.e. your fee is sufficient for inclusion) and assuming the transaction is broadcast and picked up immediately. I am unsure what the lower bound should be, but it will take a couple of seconds.
PS. As Mikko Ohtamaa commented (Aug 17 '21): "Instead of polling, you can subscribe to all new blocks and filter out events in the block yourself. (..)" To do this, have a look at filtering in web3.py.
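A sketch of that filtering approach, reusing the `contract` object built in the question above (the helper names here are illustrative, and `createFilter` is the web3.py v5 spelling used throughout the question):

```python
import time

def ether_to_wei(amount):
    """Token amounts with 18 decimals: 0.5 tokens -> 5 * 10**17 wei."""
    return int(amount * 10**18)

def wait_for_transfer(contract, wallet, threshold_wei, poll_interval=0.5):
    """Block until a Transfer of at least threshold_wei arrives at wallet."""
    # Watch only Transfer events whose `to` field is our wallet,
    # starting from the latest block
    event_filter = contract.events.Transfer.createFilter(
        fromBlock="latest",
        argument_filters={"to": wallet},
    )
    while True:
        for event in event_filter.get_new_entries():
            if event["args"]["value"] >= threshold_wei:
                return event
        time.sleep(poll_interval)

# Usage, with the objects from the question:
#   event = wait_for_transfer(contract, main_address, ether_to_wei(0.5))
```

This still polls the node, but for new filter entries rather than repeated `balanceOf` calls, so you learn about the incoming transfer as soon as its block arrives; a websocket provider can reduce latency further.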

You can make it faster by running a node (a BSC full node, in this case) locally. That way you have 100% of the node's capacity and there is no network delay. More information here.

Related

Python-CAN script receiving half of the expected CAN messages

I have written a Python script utilizing the Python-CAN library which records received CAN messages at a 1 second rate for 5 minutes, before logging all the messages into a file and exiting. The computer has a CAN module which is connecting to the CAN bus. (The other device on the bus is an engine) I communicate with it using the SocketCAN interface.
The test engine system that this computer is connected to is sending around 114 messages at what I believe is a 250 kbit/s bitrate. I am expecting to see 114 messages recorded in the file for each 1-second period, but instead I'm seeing about half that count (~65 messages).
Could the engine's ECU be set to a 500 kbit/s bitrate, and is that why I'm not getting the count I expect? I would think there would be no communication at all if the bitrates did not match, but I do not have physical access to the system, because I'm sending the script remotely through an OTA update rather than running it myself. (The device is headless but is set up to run the script on startup.) I just see the log files that are generated.
Here is the python code:
(A note, I have code parsing the received messages into the contained signals, but I did not include this code here because it happens at the end, and it is not relevant)
import datetime
import can

class logging:
    def __init__(self):
        # Dictionary to hold received CAN messages
        self.message_Dict = {}
        # List to hold queued dictionaries
        self.message_Queue = []
        # A "filters" object is also created here, but I did not include it
        # I have verified the filters are correct on my test system

    def main(self):
        # Record the current time
        currentTime = datetime.datetime.now()
        # Record the overall start time
        startTime = datetime.datetime.now()
        # Record the iteration start time
        lastIterationStartTime = currentTime
        # Create the CanBus that will be used to send and receive CAN msgs from the MCU
        canbus = can.interfaces.socketcan.SocketcanBus(channel='can0', bitrate=250000)
        # These filters are set up correctly, because all the messages come through
        # on my test system, but I did not include them here
        canbus.set_filters(self.Filters)
        # Creating Listener filters and notifier
        listener = can.Listener()
        # Main loop
        while 1:
            # Receive a message (blocks until one arrives)
            msg2 = canbus.recv()
            # Record the current time
            currentTime = datetime.datetime.now()
            # If a valid message is detected
            if msg2 is not None and len(msg2.data) > 0:
                try:
                    # Save the message data into a queue (will be processed later)
                    self.message_Dict[msg2.arbitration_id] = msg2.data
                except Exception:
                    print("Error in storing CAN message")
            # If 1 second has passed since the last iteration,
            # add the dictionary to a new spot in the queue
            if (currentTime - lastIterationStartTime) >= datetime.timedelta(seconds=1):
                # Add a copy of the dictionary into the queue for later processing
                messageDict_Copy = self.message_Dict.copy()
                self.message_Queue.append(messageDict_Copy)
                print("Number of messages in dictionary: " + str(len(self.message_Dict))
                      + " Number of reports in queue: " + str(len(self.message_Queue)))
                # Clear the dictionary for new messages for every iteration
                self.message_Dict.clear()
                # Record the reset time
                lastIterationStartTime = datetime.datetime.now()
            # Once 5 minutes of data has been recorded, write to the file
            if (currentTime - startTime) > datetime.timedelta(minutes=5):
                # Here is where I write the data to a file. This is too long to include
                # Clear the queue
                self.message_Queue = []
                # Clear the dictionary for new messages for every iteration
                self.message_Dict.clear()

# When the script is run, execute the main method
if __name__ == '__main__':
    mainClass = logging()
    mainClass.main()
I appreciate any ideas or input you have. Thank you
In my experience, most engine ECUs use 250 kbit/s, but the newest ones use 500 kbit/s. I would suggest you try both.
Also, messages only appear on the bus if they have actually been sent. It may seem silly, but on a truck, for example, if you don't step on the accelerator, the messages referring to the accelerator will not appear. So you may need to check that all components are being exercised as you expect. can-utils includes a CAN sniffer that can also help you here.
I suggest you use can-utils for this; it is a powerful set of tools for CAN analysis.
Did you try looping over the bitrates? That may also help you find the right one.
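If you have shell access to the device, can-utils makes trying another bitrate quick. This is a sketch assuming a SocketCAN interface named `can0` (adjust the interface name and bitrates to your setup; reconfiguring the link requires root):

```shell
# Reconfigure the interface to try a different bitrate
ip link set can0 down
ip link set can0 type can bitrate 500000
ip link set can0 up

# Dump all traffic; at the wrong bitrate you will see nothing
# (or only error frames) instead of the expected messages
candump can0

# Live per-ID view with change highlighting
cansniffer can0
```

Comparing the `candump` output at 250 kbit/s and 500 kbit/s should settle the bitrate question without touching the Python script.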

How to continue my program after internet disconnect-reconnects?

I have a program like this:
import time
import requests
import matplotlib.pyplot as plt

z = []
for i in range(25200):
    time.sleep(1)
    with requests.Session() as s:
        data = {'ContractCode': 'SAFMO98'}
        r = s.post('http://cdn.ime.co.ir/Services/Fut_Live_Loc_Service.asmx/GetContractInfo', json=data).json()
        for key, value in r.items():
            plt.clf()
            last_prices = r[key]['LastTradedPrice']
            z.append(last_prices)
            plt.figure(1)
            plt.plot(z)
Sometimes the server rejects the connection and returns an "exceeds request" message, or sometimes I lose my connection, etc.
Then I must re-run my program, and I lose my plotted graph as well as the data from the time the program was disconnected. What I would like is to add something to my program that keeps it working across interruptions/disconnections: it shouldn't stop when it loses the connection or is rejected by the server, and it should resume its work once reconnected.
How is this possible?
EDIT: I edited my code as follows, but I don't know how good this approach is:
try:
    for i in range(25200):
        time.sleep(1)
        with requests.Session() as s:
            data = {'ContractCode': 'SAFMO98'}
            r = s.post('http://cdn.ime.co.ir/Services/Fut_Live_Loc_Service.asmx/GetContractInfo', json=data).json()
            for key, value in r.items():
                plt.clf()
                last_prices = r[key]['LastTradedPrice']
                z.append(last_prices)
                plt.figure(1)
                plt.plot(z)
except:
    pass
You have at least two connection failure events here, and either might result in an inability to connect for undefined amounts of time. A good option here is exponential backoff.
Basically, you attempt an operation, detect failures you know will require retrying, and wait. Each subsequent time the operation fails (in this case, presumably throwing an exception), you wait a multiple of the previous wait time. The idea is that, if you're being rate limited, you'll wait longer and longer until the API you're connecting to stops rejecting your requests. Also, if you've been physically disconnected, you'll attempt fewer connections over time, rather than spamming requests at a dead adapter.
There's a Python library, backoff, that handles most of the work involved in this for you with a decorator.
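For illustration, here is a minimal hand-rolled version of the idea; the `backoff` library does this more robustly, and the retry parameters and the `flaky` example function here are purely illustrative:

```python
import time
import functools

def exponential_backoff(max_tries=5, base_delay=1.0, factor=2.0, exceptions=(Exception,)):
    """Retry the wrapped function, multiplying the wait after each failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == max_tries:
                        raise  # out of retries; re-raise the last error
                    time.sleep(delay)
                    delay *= factor
        return wrapper
    return decorator

# Example: a flaky operation that succeeds on the third call
calls = {'n': 0}

@exponential_backoff(max_tries=5, base_delay=0.01)
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError("rejected")
    return "ok"

print(flaky())  # retries twice, then prints "ok"
```

In your case you would wrap the body of one loop iteration (the `s.post(...)` call) in such a function, catching `requests.exceptions.RequestException` rather than swallowing everything with a bare `except`.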

How to transfer a value from a function in one script to another script without re-running the function (python)?

I'm really new to programming in general and very inexperienced, and I'm learning python as I think it's more simple than other languages. Anyway, I'm trying to use Flask-Ask with ngrok to program an Alexa skill to check data online (which changes a couple of times per hour). The script takes four different numbers (from a different URL) and organizes it into a dictionary, and uses Selenium and phantomjs to access the data.
Obviously, this exceeds the 8-10 second maximum runtime for an intent before Alexa decides it has taken too long and returns an error message. (I know it's timing out rather than erroring, because ngrok and the Python log would show an actual error, and it invariably fails after 8-10 seconds, at which point the script should only be partway through.) I've read that I could just reprompt, but I don't know how, and that would only buy 8-10 more seconds; the script usually takes about 25 seconds just to get the data from the internet (and then maybe a second to turn it into a dictionary).
I tried putting the getData function right after the intent that runs when the Alexa skill is first invoked, but it only runs when I initialize my local server and just holds the data for every new Alexa session. Because the data changes frequently, I want it to perform the function every time I start a new session for the skill with Alexa.
So, I decided just to outsource the function that actually gets the data to another script, and make that other script run constantly in a loop. Here's the code I used.
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

def getData():
    username = ''  # username hidden for anonymity
    password = ''  # password hidden for anonymity
    browser = webdriver.PhantomJS(executable_path='/usr/local/bin/phantomjs')
    browser.get("https://gradebook.com")  # actual website name changed
    browser.find_element_by_name("username").clear()
    browser.find_element_by_name("username").send_keys(username)
    browser.find_element_by_name("password").clear()
    browser.find_element_by_name("password").send_keys(password)
    browser.find_element_by_name("password").send_keys(Keys.RETURN)
    global currentgrades
    currentgrades = []
    gradeids = ['2018202', '2018185', '2018223', '2018626', '2018473', '2018871', '2018886']
    for x in range(0, len(gradeids)):
        try:
            gradeurl = "https://www.gradebook.com/grades/"
            browser.get(gradeurl)
            grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:3]
            if grade[2] != "%":
                grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:4]
            if grade[1] == "%":
                grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:1]
            currentgrades.append(grade)
        except Exception:
            currentgrades.append('No assignments found')
            continue
    dictionary = {"class1": currentgrades[0], "class2": currentgrades[1], "class3": currentgrades[2], "class4": currentgrades[3], "class5": currentgrades[4], "class6": currentgrades[5], "class7": currentgrades[6]}
    return dictionary

def run():
    dictionary = getData()
    time.sleep(60)
That script runs constantly and does what I want, but then in my other script, I don't know how to just call the dictionary variable. When I use
from getdata.py import dictionary
in the Flask-ask script it just runs the loop and constantly gets the data. I just want the Flask-ask script to take the variable defined in the "run" function and then use it without running any of the actual scripts defined in the getdata script, which have already run and gotten the correct data. If it matters, both scripts are running in Terminal on a MacBook.
Is there any way to do what I'm asking about, or are there any easier workarounds? Any and all help is appreciated!
It sounds like you want to import the function, so you can run it; rather than importing the dictionary.
Try deleting the run function, and then in your other script use:
from getdata import getData
Then each time you write getData() it will run your code and get a new up-to-date dictionary.
Is this what you were asking about?
This issue has been resolved.
As for the original question, I never figured out how to import just the dictionary without first running the function that generates it. Furthermore, I realized there had to be a more practical solution than constantly running a script like that, which even then would not return brand-new data.
My solution was to make the script that gets the data start running at the same time as the launch function. Here was the final script for the first intent (the rest of it remained the same):
@ask.intent("start_skill")
def start_skill():
    welcome_message = 'What is the password?'
    # Fetch the data in a daemon thread so the intent can return immediately
    thread = threading.Thread(target=getData, args=())
    thread.daemon = True
    thread.start()
    return question(welcome_message)

def getData():
    ...  # script to get data here

# other intents and rest of script here
By design, the skill requested a numeric passcode to make sure I was the one using it before it was willing to read the data. (That was probably pointless, but this skill is at least as much for my own educational reasons as for practical ones, so, for the extra practice, I wanted it to have as many features as I could justify.) By the time you can actually ask for the data, the script that fetches it will have finished running (I have tested this, and it seems to work without fail).
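When a background thread writes data that the request handlers later read, it is safer to guard the shared state with a lock. This is a generic stdlib sketch of that pattern; the names `cache`, `refresh`, and the sample grades are illustrative, not from the original skill:

```python
import threading

cache = {}
cache_lock = threading.Lock()

def refresh():
    """Stand-in for the slow getData(); publishes results under the lock."""
    result = {"class1": "95%", "class2": "88%"}  # pretend this took 25 seconds
    with cache_lock:
        cache.clear()
        cache.update(result)

def read_cache():
    """What an intent handler would call: snapshot the latest data."""
    with cache_lock:
        return dict(cache)

# Kick off the refresh in the background, as the answer does
t = threading.Thread(target=refresh, daemon=True)
t.start()
t.join()  # in the real skill you would not join; the passcode step buys the time
print(read_cache())
```

Without the lock, a handler could read the dictionary midway through `clear()`/`update()` and see a half-empty snapshot.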

Have a python function run for an alotted time

I have a Python script that pulls from various internal network sources. With how our systems are set up, we initiate a urllib pull from a network location, and on certain parts of the network it gets hung up waiting forever for a response. I would like my script to check whether the pull has finished within, say, 5 minutes; if not, it should skip that function, attempt to pull from the next address, and record the failure to a "bad" log file (so we can go check which systems get hung up; there are over 20,000 IP addresses we check, some running older scripts that no longer work but still try to run when requested, and they never stop trying).
I'm familiar with having a script pause at a certain point:
import time
time.sleep(300)
What I'm thinking, from a pseudocode perspective (not proper Python, just illustrating the idea):
import time
import urllib2

url_dict = ['http://1', 'http://2', 'http://3', ...]
fail_log_path = 'C:/Temp/fail_log.txt'

for addresses in url_dict:
    clock_value = time.start()
    while clock_value <= 300:
        print str(clock_value)
        res = urllib2.retrieve(url)
        if res != []:
            pass
        else:
            fail_log = open(fail_log_path, 'a')
            fail_log.write("Failed to pull from site location: " + str(url) + "\n")
            fail_log.close()
Update: a specific option for dealing with URL timeouts: timeout for urllib2.urlopen() in pre-2.6 Python versions.
I also found this answer, which is more in line with the overall problem of my question:
kill a function after a certain time in windows
Your code as is doesn't seem to describe what you were saying. It seems you want the if/else check inside your while loop. On top of that, you would want to loop over the ip addresses and not over a time period as your code is currently written (otherwise you will keep requesting the same ip address every time). Instead of keeping track of time yourself, I would suggest reading up on urllib.request.urlopen - specifically the timeout parameter. Once set, that function call will throw a socket.timeout exception once the time limit is reached. Surround that with a try/except block catching that error and then handle it appropriately.
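A sketch of that suggestion using Python 3's `urllib.request` (the log path is a placeholder, and the localhost URL at the end is only there to demonstrate the failure path):

```python
import socket
import urllib.request
import urllib.error

FAIL_LOG_PATH = 'fail_log.txt'  # placeholder path

def fetch(url, timeout=300):
    """Return the response body, or None if the request fails or times out."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except socket.timeout:
        reason = "timed out"
    except urllib.error.URLError as e:
        reason = str(e.reason)
    with open(FAIL_LOG_PATH, 'a') as fail_log:
        fail_log.write("Failed to pull from site location: %s (%s)\n" % (url, reason))
    return None

# Nothing is listening on this port, so the request fails fast and is logged
print(fetch("http://127.0.0.1:9/", timeout=2))  # None
```

Note that `timeout` bounds how long each socket operation may take, not the total download time; for a hard wall-clock cutoff you still need something like the thread-based approach from the linked answer.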

Wait for directory (Windows) to exist to continue code? (2.7)

I am trying to have my code detect when a flash drive is plugged in and then continue. I am currently using os.path.exists. When I start the code with the flash drive already plugged in, it works fine; however, if I start with the flash drive unplugged and plug it in while the code is running, the code never re-checks whether the drive is present and keeps sleeping forever. How can I get the code to work?
import os
import shutil
import time
from datetime import datetime

# Wait for FlashDrive to be connected
if os.path.exists("F:\\"):
    connected = 1
else:
    connected = 0
while connected == 0:
    print "..."
    time.sleep(10)

# Get current date
currentdate = datetime.now().strftime("%m-%d-%Y")
print "Photos saved: " + currentdate

# Copy and rename DCIM
src = "F:/Pictures"
dst = "C:/Users/Josh/Desktop/photos/" + currentdate
shutil.copytree(src, dst)
The code is supposed to be a loop that executes every time an iPhone connects and never stops running, but I cannot get it to work because it does not actually keep checking for the flash drive.
A polling loop with arbitrary sleeps isn't a good idea (at all). It makes your program less responsive to the event, because it can take up to N ms to catch an event fired at the start of an iteration*. It also wastes CPU on a large number of API calls.
Create a window.
Listen to WM_DEVICECHANGE message in your message loop. It will fire every time your device configuration changed, but won't tell you, how.
On such event, ask for current configuration.
You can find a tutorial here. Also, take a look at the similar answer on SO.
(*) Actually, sleep will test on each system tick whether time_passed >= sleep_timeout; if so, it returns to the program. The problem is that the system tick could be 1/18 of a second (56 ms) on an old PC, so you'll never get a 10 ms delay.
Your problem is that you set the connected variable outside the loop, so it's never updated inside it.
Try:
while not os.path.exists('F:\\'):
    print("...")
    time.sleep(10)
--edit---
Then, wait for it to be removed at the end:
while os.path.exists('F:\\'):
    print("...")
    time.sleep(10)
And, finally, wrap the entire thing in a big while True: so that the whole program repeats.
(Again, I do agree this is a 'hackish' and inefficient way to do this task.)
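The polling idea above can be wrapped in a small reusable helper so the wait/timeout logic lives in one place (a stdlib sketch; `wait_for_path` and its parameters are illustrative, and the drive letter would be whatever your system assigns):

```python
import os
import time

def wait_for_path(path, poll_interval=1.0, timeout=None):
    """Block until `path` exists; return True, or False if `timeout` expires."""
    start = time.monotonic()
    while not os.path.exists(path):
        if timeout is not None and time.monotonic() - start >= timeout:
            return False
        time.sleep(poll_interval)
    return True

# The current directory already exists, so this returns immediately
print(wait_for_path(".", poll_interval=0.1, timeout=5))  # True
# A path that never appears times out
print(wait_for_path("definitely-not-a-real-path-12345", poll_interval=0.1, timeout=0.3))  # False
```

In the flash-drive script you would call `wait_for_path("F:\\")` before the copy and `while os.path.exists("F:\\")` after it, inside one big `while True:` loop.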
