Get Avalanche block data by block hash with web3.py - python

How do I get block data by block hash? I'm interested in getting the block timestamp for each new block.
from web3 import Web3

avalanche_url = 'https://api.avax.network/ext/bc/C/rpc'
provider = Web3(Web3.HTTPProvider(avalanche_url))
new_block_filter = provider.eth.filter('latest')

while True:
    block_hashes = new_block_filter.get_new_entries()
    for block_hash in block_hashes:
        block = provider.eth.get_block(block_hash.hex())
        print(block)
This causes an error:
web3.exceptions.ExtraDataLengthError: The field extraData is 80 bytes, but should be 32. It is quite likely that you are connected to a POA chain. Refer to http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority for more details. The full extraData is: HexBytes('0x0000000000000000000000000001edd400000000000000000000000000000000000000000000000000000000002cb3970000000000000000000000000005902b00000000000000000000000000000000')
The same query works on Ethereum.

Adding geth_poa_middleware worked for me:
from web3 import Web3
from web3.middleware import geth_poa_middleware
w3 = Web3(Web3.HTTPProvider('https://api.avax.network/ext/bc/C/rpc'))
w3.middleware_onion.inject(geth_poa_middleware, layer=0)
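Putting the two together, here is a minimal sketch of the original polling loop with the PoA middleware injected (assuming web3.py v5.x, where geth_poa_middleware lives in web3.middleware):
from web3 import Web3
from web3.middleware import geth_poa_middleware

# Connect to the Avalanche C-Chain and inject the PoA middleware
w3 = Web3(Web3.HTTPProvider('https://api.avax.network/ext/bc/C/rpc'))
w3.middleware_onion.inject(geth_poa_middleware, layer=0)

# Poll for new blocks and print their timestamps
new_block_filter = w3.eth.filter('latest')
while True:
    for block_hash in new_block_filter.get_new_entries():
        block = w3.eth.get_block(block_hash)
        print(block.number, block.timestamp)  # extraData now decodes without the error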

Related

How to get all ENS domains for one wallet address with web3.py

I am trying to write code in Python that returns the ENS domain of a given wallet address with web3.py, but I am having problems when the wallet has registered more than one ENS name, and I can't find anything in the documentation about this.
Here is an example code:
from web3 import Web3, HTTPProvider
from ens import ENS

infura_endpoint = f'https://mainnet.infura.io/v3/{infura_api_key}'
w3 = Web3(HTTPProvider(infura_endpoint))
print(w3.isConnected())  # This returns True

ns = ENS.fromWeb3(w3)
domain = ns.name('0xC99c2bdA0BEaA0B4c9774B48B81307C00e19CAde')
print(domain)  # This prints None

# This try/except block prints "Something went wrong" because domain is None
try:
    assert ns.address(domain) == '0xC99c2bdA0BEaA0B4c9774B48B81307C00e19CAde'
except AssertionError:
    print('Something went wrong')

print(ns.address('seeds.eth'))  # "seeds.eth" is one of the domains that should be returned
I would appreciate any help you can give me.
The following code worked for me, connecting to web3 via Alchemy and leveraging the ENS package.
from ens import ENS
from web3 import Web3
ALCHEMY_KEY = "<YOUR_KEY>"
alchemy_url = f"https://eth-mainnet.g.alchemy.com/v2/{ALCHEMY_KEY}"
w3 = Web3(Web3.HTTPProvider(alchemy_url))
ns = ENS.fromWeb3(w3)
addr = "0xD2Af803ad747ea12Acf5Ae468056703aE48785b5"
wallet_name = ns.name(addr)
print(wallet_name)
The address used should resolve to whaleshark.eth.
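As a sanity check, here is a small sketch (reusing ns and addr from above) that forward-resolves the returned name back to the address. Note that ns.name() only returns the wallet's primary reverse-record name, not every ENS name the wallet owns:
wallet_name = ns.name(addr)
if wallet_name is not None:
    # forward-resolve the primary name and compare it with the original address
    assert ns.address(wallet_name).lower() == addr.lower()
    print(f'{addr} reverse-resolves to {wallet_name}')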

Adding nonce to run multiple contract transactions with web3 python

I am trying to create an ERC-1155 minter in Python that can mint hundreds (or more) of NFTs, so I don't want to wait for each transaction to succeed. How can I add a nonce to the contract_instance transact() call? I can't find this in the web3.py docs, or is there a better way to do this? I'm also wondering whether I should expect some transactions to fail and add a verification step. I did try wait_for_transaction_receipt between mints, but it takes about 10 seconds per mint, which may be too slow, and it seems to randomly fail after about 20-30 mints.
With my current code I'm getting this error after about 20+ mints:
ValueError: {'code': -32000, 'message': 'replacement transaction underpriced'}
Appreciate any help you can offer!
Here's my code:
from web3 import Web3
from decouple import config
from eth_account import Account
from eth_account.signers.local import LocalAccount
from web3.auto import w3
from web3.middleware import geth_poa_middleware, construct_sign_and_send_raw_middleware

infura_url = config('INFURA_URL')
print(infura_url)

private_key = Private_Key
account: LocalAccount = Account.from_key(private_key)

w3 = Web3(Web3.HTTPProvider(infura_url))
w3.middleware_onion.add(construct_sign_and_send_raw_middleware(account))
print(f"Your hot wallet address is {account.address}")
w3.eth.defaultAccount = 'MYADDRESS'

# added for testnet
w3.middleware_onion.inject(geth_poa_middleware, layer=0)
res = w3.isConnected()

for x in range(200):
    address = 'CONTRACT_ADDRESS'
    abi = 'ABI_HERE'
    contract_instance = w3.eth.contract(address=address, abi=abi)
    thisvar = contract_instance.functions.symbol().call()
    print(thisvar)
    print(x)
    # generate a random address for simulating a large list of addresses
    mint2acc = Account.create('KEYSMASH FJAFJKLDSKF7JKFDJ 1530')
    print(mint2acc.address)
    thistx = contract_instance.functions.mint(mint2acc.address, 1, 1).transact()
In transact() you can pass parameters like this, using the sending account's nonce:
nonce = w3.eth.get_transaction_count(account.address)
contract_instance.functions.mint(mint2acc.address, 1, 1).transact({"nonce": nonce})
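If the goal is to fire many mints back to back without waiting, one approach (a sketch reusing account and contract_instance from the question, not the only way to do it) is to fetch the sender's nonce once and increment it locally, which also avoids reusing a nonce and triggering "replacement transaction underpriced":
# Track the sender's nonce locally so each mint gets a unique, increasing nonce
nonce = w3.eth.get_transaction_count(account.address, 'pending')  # include pending txs
tx_hashes = []
for x in range(200):
    mint2acc = Account.create('KEYSMASH FJAFJKLDSKF7JKFDJ 1530')
    tx_hash = contract_instance.functions.mint(mint2acc.address, 1, 1).transact({'nonce': nonce})
    tx_hashes.append(tx_hash)
    nonce += 1

# Verify afterwards instead of blocking between mints
for tx_hash in tx_hashes:
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    print(receipt.status)  # 1 = success, 0 = reverted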

How to properly read J1939 messages from .asc file with cantools?

I'm trying to create a CAN log converter from .asc files to .csv files (in human-readable form). I'm somewhat successful. My code works fine with almost any database except j1939.dbc.
The thing is, if I print out the messages read from the DBC file, I can see that the messages from j1939.dbc are loaded into the database, but none of them are found in the processed log file. At the same time I can read the same file with Vector CANalyzer with no issues.
I wonder why this happens and why it only affects j1939.dbc and not the others.
I suspect that the way I convert those messages is wrong, because execution never passes the if msg_id in database: check (and, as mentioned above, those messages are definitely there because Vector CANalyzer handles them fine).
EDIT: I realized that maybe the problem is not cantools but the python-can package; maybe can.ASCReader() doesn't handle J1939 frames and omits them? I'm going to investigate myself, but I hope someone better at coding will help.
import pandas as pd
import can
import cantools
import time as t
from tqdm import tqdm
import re
import os
from binascii import unhexlify

dbcs = [filename.split('.')[0] for filename in os.listdir('./dbc/') if filename.endswith('.dbc')]
files = [filename.split('.')[0] for filename in os.listdir('./asc/') if filename.endswith('.asc')]

start = t.time()
db = cantools.database.Database()
for dbc in dbcs:
    with open(f'./dbc/{dbc}.dbc', 'r') as f:
        db.add_dbc(f)

f_num = 1
for fname in files:
    print(f'[{f_num}/{len(files)}] Parsing data from file: {fname}')
    log = can.ASCReader(f'./asc/{fname}.asc')
    entries = []
    all_msgs = []
    message = {'Time [s]': ''}
    database = list(db._frame_id_to_message.keys())
    print(database)
    lines = sum(1 for line in open(f'./asc/{fname}.asc'))
    msgs = iter(log)
    try:
        for msg, i in zip(msgs, tqdm(range(lines))):
            msg = re.split("\\s+", str(msg))
            timestamp = round(float(msg[1]), 0)
            msg_id = int(msg[3], 16)
            try:
                data = unhexlify(''.join(msg[7:15]))
            except:
                continue
            if msg_id in database:
                if timestamp != message['Time [s]']:
                    entries.append(message.copy())
                    message.update({'Time [s]': timestamp})
                message.update(db.decode_message(msg_id, data))
    except ValueError:
        print('ValueError')
    df = pd.DataFrame(entries[1:])
    duration = t.time() - start
    df.to_csv(f'./csv/{fname}.csv', index=False)
    print(f'DONE IN {int(round(duration, 2)//60)}min{round(duration % 60, 2)}s!\n{len(df.columns)} signals extracted!')
    f_num += 1
From the python-can documentation:
class can.ASCReader(file, base='hex')
    Bases: can.io.generic.BaseIOHandler
    Iterator of CAN messages from an ASC logging file. Meta data (comments, bus statistics, J1939 Transport Protocol messages) is ignored.
Might answer your question...
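Not a definitive diagnosis, but it can also help to work with the can.Message objects that can.ASCReader yields instead of regex-splitting their string form. A sketch, assuming the db and log objects from the question:
# Inspect what ASCReader actually yields and decode via message attributes
known_ids = {m.frame_id for m in db.messages}
for msg in log:
    # msg is a can.Message; use its fields instead of parsing str(msg)
    if msg.arbitration_id in known_ids:
        decoded = db.decode_message(msg.arbitration_id, msg.data)
        print(round(msg.timestamp, 3), decoded)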

Binance API Issue - Python

APIError(code=-2015): Invalid API-key, IP, or permissions for action
I keep getting the above issue.
I am not sure what the issue is.
I am able to call client.get_all_tickers() with no problem, but when I try to place an order or access user_data (both of which require a signature) I get the error:
APIError(code=-2015): Invalid API-key, IP, or permissions for action
I think the issue has something to do with the signature. I checked that I have the relevant permissions enabled, and I do. I also tried creating a new API key, but I still got the same issue.
NOTE: I am using binance.us, not binance.com, because I am located in the US and cannot make an account on binance.com.
Another idea I had was to use a VPN that places me in England so I can make an account through binance.com; maybe that will work.
import time
import datetime
import json
from time import sleep
from binance.client import Client
from binance.enums import *
import sys
import requests, json, time, hashlib
import urllib3
import logging
from urllib3 import PoolManager
from binance.exceptions import BinanceAPIException, BinanceWithdrawException
r = requests.get('https://www.binance.us/en/home')
client = Client(API_key,Secret_key,tld="us")
prices = client.get_all_tickers()
# Helper to get the index of a symbol in the tickers list
def crypto_location(sym):
    count = 0
    for i in prices:
        count += 1
        ticker = i.get('symbol')
        if ticker == sym:
            val = i.get('price')
            count = count - 1
            return count
bitcoin_location = crypto_location('BTCUSDT')
ethereum_location = crypto_location('ETHUSDT')
stable_coin_location = crypto_location('BUSDUSDT')
bitcoin_as_BUSD_location = crypto_location('BTCBUSD')
#%% Where to quickly get bitcoin price
t_min = time.localtime().tm_min
prices = client.get_all_tickers()
bitcoin_price = prices[bitcoin_location].get('price')
print(bitcoin_price)
ethereum_price = prices[ethereum_location].get('price')
print(ethereum_price)
stable_coin_price = prices[stable_coin_location].get('price')
print(stable_coin_price)
bitcoin_as_BUSD = prices[bitcoin_as_BUSD_location].get('price')
print(bitcoin_as_BUSD)
client.session.headers.update({ 'X-MBX-APIKEY': API_key})
client.get_account()
The error occurs at client.get_account().
I had the same problem. Binance API keys without any IP restriction expire every 90 days. I restricted the API key to my IP and it works!
In any case, you'll find the full client documentation here:
https://python-binance.readthedocs.io/en/latest/index.html
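To confirm the key works after restricting it to your IP, here is a small sketch (reusing the client from the question) that calls a signed endpoint and prints the API error code if it still fails:
from binance.exceptions import BinanceAPIException

try:
    info = client.get_account()  # signed endpoint; fails with -2015 if key/IP/permissions are wrong
    print('Signed request OK, balances returned:', len(info['balances']))
except BinanceAPIException as e:
    print(f'APIError(code={e.code}): {e.message}')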

504 Deadline Exceeded error when downloading BQ query results to Python dataframe

I'm using Python to run a query on a BigQuery dataset and then load the results into a pandas dataframe.
The query runs OK; I can see a temporary table created for the results in the BQ dataset, but when downloading the rows to a dataframe it fails with a 504 Deadline Exceeded error:
client = bigquery.Client(credentials=credentials, project=projectID)
dataset = client.dataset('xxx')
table_ref = dataset.table('xxx')
JobConfig = bigquery.QueryJobConfig(destination=table_ref)
client.delete_table(table_ref, not_found_ok=True)
QueryJob = client.query(queryString, location='EU', job_config=JobConfig)
QueryJob.result()
results = client.list_rows(table_ref, timeout=100).to_dataframe()
It all runs fine until the last line. I've added a timeout argument to the list_rows method, but it hasn't helped.
I'm running this on a Windows virtual machine, with Python 3.8 installed.
(I've also tested the same code on my laptop and it worked just fine - don't know what's different.)
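One diagnostic worth trying (a sketch reusing the names from the snippet above; assumes a reasonably recent google-cloud-bigquery where to_dataframe() accepts create_bqstorage_client) is to force the plain REST download path instead of the BigQuery Storage API:
# Skip the BigQuery Storage API client and download rows over REST (slower, but
# helps isolate whether the Storage API client is what times out on Windows)
QueryJob = client.query(queryString, location='EU', job_config=JobConfig)
results = QueryJob.result().to_dataframe(create_bqstorage_client=False)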
Take a look at:
https://github.com/googleapis/python-bigquery-storage/issues/4
it's a known bug on Windows; the "solution" is to monkey-patch a longer timeout:
import google.cloud.bigquery_storage_v1.client
from functools import partialmethod

# Set a two-hour timeout on read_rows
google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows = partialmethod(
    google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows, timeout=3600 * 2
)
Provided that you then use:
from google.cloud import bigquery, bigquery_storage_v1

bqClient = bigquery.Client(credentials=credentials, project=project_id)
bq_storage_client = bigquery_storage_v1.BigQueryReadClient(credentials=credentials)
raw_training_data = bqClient.query(SOME_QUERY).to_arrow(bqstorage_client=bq_storage_client).to_pandas()
If you can use pandas, try this:
import pandas as pd
df = pd.read_gbq("select * from `xxx.xxx`", dialect='standard', use_bqstorage_api=True)
To be able to use use_bqstorage_api you have to enable the BigQuery Storage API in your GCP project. Read more about that in the documentation.
This link helped me: https://googleapis.dev/python/bigquery/latest/usage/pandas.html
My working code is:
import google.auth
from google.cloud import bigquery

credentials, your_project_id = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
bqclient = bigquery.Client(credentials=credentials, project=your_project_id)
query_string = """SELECT..."""
df = bqclient.query(query_string).to_dataframe()
Hope it helps.
