Swap trade in Uniswap fails using uniswap-python standard functions - python

I am trying to do a simple trade using uniswap-python and it doesn't work.
Sample code:
from uniswap import Uniswap
provider = "https://polygon-mainnet.infura.io/v3/"+INFURA_API_KEY
uniswap = Uniswap(address, private_key, version=3, provider=provider)
result = uniswap.make_trade(USDT, WMATIC, 10)
Result:
web3.exceptions.ExtraDataLengthError:
The field extraData is 97 bytes, but should be 32.
It is quite likely that you are connected to a POA chain.
Refer to http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority for more details.
The full extraData is: HexBytes('0xd3...d0')
I've checked PoA docs and tested all options without success. I always get the same message.
There are enough funds in my wallet to do the trade.
Any clues?

This is a problem with the web3 instance used internally by the uniswap-python module. If you're running on Polygon or another Proof of Authority chain (BNB for instance), you need to tell the web3 instance that by injecting the PoA middleware:
from web3.middleware import geth_poa_middleware
uniswap.w3.middleware_onion.inject(geth_poa_middleware, layer=0)
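For context, here is a minimal sketch of the full flow on Polygon, assuming web3.py v5/v6 (which export geth_poa_middleware) and placeholder values for the wallet, key and token addresses:
from uniswap import Uniswap
from web3.middleware import geth_poa_middleware

INFURA_API_KEY = "..."   # your Infura project ID (placeholder)
address = "0x..."        # your wallet address (placeholder)
private_key = "..."      # your private key (placeholder)
USDT = "0x..."           # USDT token contract on Polygon (placeholder)
WMATIC = "0x..."         # WMATIC token contract on Polygon (placeholder)

provider = "https://polygon-mainnet.infura.io/v3/" + INFURA_API_KEY
uniswap = Uniswap(address, private_key, version=3, provider=provider)

# Polygon is a PoA chain, so inject the middleware before any block-reading call
uniswap.w3.middleware_onion.inject(geth_poa_middleware, layer=0)

result = uniswap.make_trade(USDT, WMATIC, 10)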

Related

Script to transact tokens on the Polygon chain?

I'd like to automatically transfer funds from all my MetaMask wallets into one central wallet on the Polygon chain. How exactly do I do this? Currently I don't know how to approach this, as the token I'd like to transact is on the Polygon chain and I've only seen implementations for the Ethereum chain. This is the token: https://polygonscan.com/token/0x3a9A81d576d83FF21f26f325066054540720fC34
I also don't see an ABI there. It is still an ERC20 token, but I don't know how the implementation differs from a regular token on the Ethereum chain. Currently this is my code for just checking the balance, but that doesn't work either, as the contract address is not recognized. The error says: "Could not transact with/call contract function, is contract deployed correctly and chain synced?"
from web3 import Web3, HTTPProvider
from ethtoken.abi import EIP20_ABI

w3 = Web3(HTTPProvider("https://mainnet.infura.io/v3/..."))
contract_address = '0x3a9A81d576d83FF21f26f325066054540720fC34'
contract = w3.eth.contract(address=contract_address, abi=EIP20_ABI)
print(contract.address)
n1 = '0x...'
raw_balance = contract.functions.balanceOf(n1).call()
You are using the wrong RPC URL for Polygon mainnet.
If you are using Infura, it should look like this:
w3 = Web3(HTTPProvider("https://polygon-mainnet.infura.io/v3/YOUR_INFURA_PROJECT_ID"))
Or you can use the public RPC URL:
w3 = Web3(HTTPProvider("https://polygon-rpc.com/"))

web3.py getAmountsOut() reverts with INSUFFICIENT_LIQUIDITY

I'm trying to use the getAmountsOut() function from the Uniswap Router-02, which should tell you how much of the output token you should receive for a given input token amount.
Problem is I am getting the following error:
web3.exceptions.ContractLogicError: execution reverted: UniswapV2Library: INSUFFICIENT_LIQUIDITY.
Now, does this function require you to use some ETH because it has to pay gas fees in order to execute, or is there an error in my code?
def checkSubTrade(exchange, in_token_address, out_token_address, amount):
    # Should be general cross-exchange, but would have to check that each router exposes the same methods
    router_address = w3.toChecksumAddress(router_dict[str(exchange)])
    router_abi = abi_dict[str(exchange)]
    router_contract = w3.eth.contract(address=router_address, abi=router_abi)
    swap_path = [in_token_address, out_token_address]
    output = router_contract.functions.getAmountsOut(amount, swap_path).call()
    return output

output = checkSubTrade('uniswap', token_dict['WETH'], token_dict['UNI'], 100000000)
print(output)
token_dict, router_dict contain the addresses and abi_dict contains the ABI for the DEX.
I think you need to check two things.
router_address: maybe it is different from what you think. When you look at the block explorer for that chain, you can see transaction details with a from and a to; for a swap transaction, the to address is the router address. Search the explorer like that to confirm the router address you should be using.
router_abi: I think you are using the Uniswap V2 library ABI, but that package contains many other ABIs as well, for example the factory ABI, so you need to check that the ABI you load is actually the router's. I highly recommend getting or generating the ABI from the Solidity source and verifying it in a web IDE.
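Also note that getAmountsOut is a read-only call() and costs no gas, so missing ETH is not the cause; INSUFFICIENT_LIQUIDITY comes from the library's check that both reserves are greater than zero, which typically means the pool for your path on that router's factory is empty, or you are pointed at the wrong router. One way to sanity-check the router and path is to query the factory directly. This is a sketch, assuming a Uniswap V2-style router whose ABI includes the standard factory() getter and reusing w3, router_address, router_abi and the token addresses from the question; the minimal ABI fragments are illustrative:
# Minimal, illustrative ABI fragments for a V2-style factory and pair
FACTORY_ABI = [{"name": "getPair", "type": "function", "stateMutability": "view",
                "inputs": [{"name": "", "type": "address"}, {"name": "", "type": "address"}],
                "outputs": [{"name": "", "type": "address"}]}]
PAIR_ABI = [{"name": "getReserves", "type": "function", "stateMutability": "view",
             "inputs": [],
             "outputs": [{"name": "reserve0", "type": "uint112"},
                         {"name": "reserve1", "type": "uint112"},
                         {"name": "blockTimestampLast", "type": "uint32"}]}]

router = w3.eth.contract(address=router_address, abi=router_abi)
factory_address = router.functions.factory().call()  # V2 routers expose their factory
factory = w3.eth.contract(address=factory_address, abi=FACTORY_ABI)

pair_address = factory.functions.getPair(in_token_address, out_token_address).call()
if int(pair_address, 16) == 0:
    print("No pair exists for this path on this router's factory")
else:
    pair = w3.eth.contract(address=pair_address, abi=PAIR_ABI)
    # Zero reserves here are what trigger INSUFFICIENT_LIQUIDITY
    print(pair.functions.getReserves().call())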

How to connect kafka IO from apache beam to a cluster in confluent cloud

I've made a simple pipeline in Python to read from Kafka; the thing is that the Kafka cluster is on Confluent Cloud and I am having some trouble connecting to it.
I'm getting the following log on the Dataflow job:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:820)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:631)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:612)
at org.apache.beam.sdk.io.kafka.KafkaIO$Read$GenerateKafkaSourceDescriptor.processElement(KafkaIO.java:1495)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
So I think I'm missing something when passing the config, since the error mentions something related to it. I'm really new to all of this and I know nothing about Java, so I don't know how to proceed even after reading the JAAS documentation.
The code of the pipeline is the following:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
import os
import json
import logging

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'credentialsOld.json'

with open('cluster.configuration.json') as cluster:
    data = json.load(cluster)

def logger(element):
    logging.info('Something was found')

def main():
    config = {
        "bootstrap.servers": data["bootstrap.servers"],
        "security.protocol": data["security.protocol"],
        "sasl.mechanisms": data["sasl.mechanisms"],
        "sasl.username": data["sasl.username"],
        "sasl.password": data["sasl.password"],
        "session.timeout.ms": data["session.timeout.ms"],
        "auto.offset.reset": "earliest"
    }
    print('======================================================')
    beam_options = PipelineOptions(
        runner='DataflowRunner',
        project='project',
        experiments=['use_runner_v2'],
        streaming=True,
        save_main_session=True,
        job_name='kafka-stream-test')
    with beam.Pipeline(options=beam_options) as p:
        msgs = p | 'ReadKafka' >> ReadFromKafka(
            consumer_config=config,
            topics=['users'],
            expansion_service="localhost:8088")
        msgs | beam.FlatMap(logger)

if __name__ == '__main__':
    main()
I read something about passing a java.security.auth.login.config property in the config dictionary, but since that example is in Java and I'm using Python, I'm really lost as to what I have to pass, or even whether that's the property I have to pass, etc.
By the way, I'm getting the API key and secret from here, and this is what I am passing to sasl.username and sasl.password.
I faced the same error the first time I tried Beam's expansion service. The key sasl.mechanisms that you are supplying is incorrect; try sasl.mechanism instead. You also do not need to supply the username and password separately, since the connection is authenticated via the JAAS config. Basically, a consumer_config like the one below worked for me:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanism": data["sasl.mechanisms"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
I got a partial answer to this question since I fixed this problem but got into another one:
config = {
    "bootstrap.servers": data["bootstrap.servers"],
    "security.protocol": data["security.protocol"],
    "sasl.mechanisms": data["sasl.mechanisms"],
    "sasl.username": data["sasl.username"],
    "sasl.password": data["sasl.password"],
    "session.timeout.ms": data["session.timeout.ms"],
    "group.id": "tto",
    "sasl.jaas.config": f'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="Kafka" username="{data["sasl.username"]}" password="{data["sasl.password"]}";',
    "auto.offset.reset": "earliest"
}
I needed to provide the sasl.jaas.config property with the API key and secret of my cluster and also the service name. However, now I'm facing a different error when running the pipeline on Dataflow:
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
This error shows up after 4-5 minutes of trying to run the job on Dataflow. I have no idea how to fix it, but I think it is related to my broker on Confluent rejecting the connection; it could be related to the execution zone, since the cluster is in a different zone than the job region.
UPDATE:
I tested the code on Linux/Ubuntu and, I don't know why, but the expansion service gets downloaded automatically, so you won't get the unsupported signal error. Still having some issues trying to authenticate to Confluent Kafka, though.

latency with group in pymongo in tests

Good Day.
I have faced the following issue using pymongo==2.1.1 on Python 2.7 with MongoDB 2.4.8.
I have tried to find a solution using Google and Stack Overflow but failed.
What's the issue?
I have following function
from pymongo import Connection
from bson.code import Code

def read(groupped_by=None):
    reducer = Code("""
        function(obj, prev){
            prev.count++;
        }
    """)
    client = Connection('localhost', 27017)
    db = client.urlstats_database
    results = db.http_requests.group(key={k: 1 for k in groupped_by},
                                     condition={},
                                     initial={"count": 0},
                                     reduce=reducer)
    groupped_by = list(groupped_by) + ['count']
    result = [tuple(res[col] for col in groupped_by) for res in results]
    return sorted(result)
Then I am trying to write a test for this function:
class UrlstatsViewsTestCase(TestCase):
    test_data = {'data%s' % i: 'data%s' % i for i in range(6)}

    def test_one_criterium(self):
        client = Connection('localhost', 27017)
        db = client.urlstats_database
        for column in self.test_data:
            db.http_requests.remove()
            db.http_requests.insert(self.test_data)
            response = read([column])
            self.assertEqual(response, [(self.test_data[column], 1)])
This test sometimes fails, as I understand it, because of latency: as far as I can see, the response contains data that has not been cleaned up yet.
If I add a delay after the remove, the test passes every time.
Is there any proper way to test such functionality?
Thanks in advance.
A few questions regarding your environment / code:
What version of pymongo are you using?
If you are using any of the newer versions that have MongoClient, is there any specific reason you are using Connection instead of MongoClient?
The reason I ask the second question is that Connection provides fire-and-forget behaviour for the operations you are doing, while MongoClient works in safe (acknowledged) mode by default and has been the preferred approach since MongoDB 2.2+.
The behaviour that you see is very consistent with using Connection instead of MongoClient. With Connection, your remove is sent to the server, and the moment it is sent from the client side, your program execution moves on to the next step, which is to add new entries. Depending on latency and how long the remove operation takes to complete, these are going to conflict, as you have already noticed in your test case.
Can you change to MongoClient and see if that helps with your test code?
Additional Ref: pymongo: MongoClient or Connection
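A minimal sketch of the suggested switch (assuming pymongo 2.4+, where MongoClient is available; database and collection names are taken from the question, the inserted document is a stand-in for test_data):
from pymongo import MongoClient

client = MongoClient('localhost', 27017)     # acknowledged writes by default
db = client.urlstats_database
db.http_requests.remove()                    # returns only after the server acknowledges
db.http_requests.insert({'data0': 'data0'})  # sample document standing in for test_data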
Thanks all.
There is no MongoClient class in the version of pymongo I use, so I was forced to find out what exactly differs.
As soon as I upgrade to 2.2+ I will test whether everything is OK with MongoClient. But as for the Connection class, one can use a write concern to control this latency.
In older versions one should create the connection with the corresponding arguments.
I have tried these two: journal=True, safe=True (the journal write concern can't be used in non-safe mode).
j or journal: Block until write operations have been committed to the journal. Ignored if the server is running without journaling. Implies safe=True.
I think this makes performance worse, but for automated tests this should be OK.
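With the older Connection API, the same idea can also be applied per operation (a sketch assuming the pymongo 2.x API, where remove() and insert() accept safe and journal-related keywords; the document is a stand-in for test_data):
from pymongo import Connection

client = Connection('localhost', 27017)
db = client.urlstats_database
# safe=True blocks on getLastError, so the remove has completed
# before the insert and the subsequent group() run
db.http_requests.remove(safe=True)
db.http_requests.insert({'data0': 'data0'}, safe=True, j=True)  # j=True additionally waits for the journal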

authGSSServerInit looks for wrong entry from keytab

I am attempting to initialize a context for GSSAPI server-side authentication, using python-kerberos (1.0.90-3.el6). My problem is that myserver.localdomain gets converted to myserver - a part of my given principal gets chopped off somewhere. Why does this happen?
Example failure:
>>> import kerberos
>>> kerberos.authGSSServerInit("HTTP@myserver.localdomain")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
kerberos.GSSError: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Unknown error', 0))
>>>
With the help of KRB5_TRACE I get the reason:
[1257] 1346344556.406343: Retrieving HTTP/myserver@LOCALDOMAIN from WRFILE:/etc/krb5.keytab (vno 0, enctype 0) with result: -1765328203/No key table entry found for HTTP/myserver@LOCALDOMAIN
I cannot generate a keytab for plain HTTP/myserver@LOCALDOMAIN because it would also force the users to access the server with such an address. I need to get the function to work with the proper FQDN. As far as I can see, authGSSServerInit is supposed to work with the FQDN without mutilating it.
I think the python-kerberos method calls the following functions provided by krb5-libs (1.9-33.el6); the problem might also be in those:
maj_stat = gss_import_name(&min_stat, &name_token, GSS_C_NT_HOSTBASED_SERVICE, &state->server_name);
maj_stat = gss_acquire_cred(&min_stat, state->server_name,GSS_C_INDEFINITE,GSS_C_NO_OID_SET, GSS_C_ACCEPT, &state->server_creds, NULL, NULL);
Kerberos is properly configured on this host and confirmed to work. I can, for instance, kinit as a user and perform authentication with the tickets. It is just authGSSServerInit that fails to function properly.
Some of the documentation is misleading:
def authGSSServerInit(service):
    """
    Initializes a context for GSSAPI server-side authentication with the given service principal.
    authGSSServerClean must be called after this function returns an OK result to dispose of
    the context once all GSSAPI operations are complete.

    @param service: a string containing the service principal in the form 'type@fqdn'
        (e.g. 'imap@mail.apple.com').
    @return: a tuple of (result, context) where result is the result code (see above) and
        context is an opaque value that will need to be passed to subsequent functions.
    """
In fact the API expects only the type, for instance "HTTP". The rest of the principal gets generated with the help of resolver(3). Although the rest of the Kerberos stack is happy using short names, the resolver generates the FQDN, but only if dnsdomainname is properly set.
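So in practice the call looks something like this (a sketch; it assumes the keytab contains HTTP/<fqdn>@REALM, where <fqdn> is what the resolver returns for the local host):
import kerberos

# Pass only the service type; the host part is filled in via the resolver
result, context = kerberos.authGSSServerInit("HTTP")

if result == kerberos.AUTH_GSS_COMPLETE:
    # ... call kerberos.authGSSServerStep(context, client_token) per round trip ...
    kerberos.authGSSServerClean(context)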
A bit more info for completeness: include the following variables in the python command.
This is optional -> KRB5_TRACE=/path-to-log/file.log
Usually this path -> KRB5_CONFIG=/etc/krb5.conf
Usually this path -> KTNAME=/etc/security/keytabs/foo.keytab
For example:
KRB5_TRACE=/path-to-log/file.log KRB5_CONFIG='/etc/krb5.conf' KTNAME=/etc/security/keytabs/foo.keytab /opt/anaconda3.5/bin/python3.6
In python run:
import kerberos
kerberos.authGSSServerInit("user")
Considerations:
In your keytab the principal must be user/host@REALM
Both "user" values (the one in the keytab and the one passed to authGSSServerInit) must be identical
The full principal will be composed by your Kerberos client config
If the return code is 0 you are done! Congratz!
If not, go to the log file and enjoy debugging :P
