web3.py getAmountsOut() reverts with INSUFFICIENT_LIQUIDITY - python

I'm trying to use the getAmountsOut() function from the Uniswap Router-02 contract, which should tell you how much of the output token you would receive for a given input token amount.
The problem is that I am getting the following error:
web3.exceptions.ContractLogicError: execution reverted: UniswapV2Library: INSUFFICIENT_LIQUIDITY.
Does this function require some ETH to pay gas fees in order to execute, or is there an error in my code?
def checkSubTrade(exchange, in_token_address, out_token_address, amount):
    # Should be general cross-exchange, but would have to check that each router exposes the same methods
    router_address = w3.toChecksumAddress(router_dict[str(exchange)])
    router_abi = abi_dict[str(exchange)]
    router_contract = w3.eth.contract(address=router_address, abi=router_abi)
    swap_path = [in_token_address, out_token_address]
    output = router_contract.functions.getAmountsOut(amount, swap_path).call()
    return output

output = checkSubTrade('uniswap', token_dict['WETH'], token_dict['UNI'], 100000000)
print(output)  # output is already a list of amounts, not a callable
token_dict and router_dict contain the addresses, and abi_dict contains the ABI for each DEX.

I think you need to check two things.
1. router_address: it may be different from what you think. If you open the block explorer for that chain, you can see the transaction details, including the from and to fields; for a swap transaction, the to address is the router address. So search the explorer like that to confirm the router address you are using.
2. router_abi: I think you are using the UniswapV2 library, but that library contains many other ABIs, for example the factory ABI. So you need to check whether your ABI is the correct one. I highly recommend getting or building the Solidity source for the contract and regenerating the ABI in a web IDE.
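If the router and ABI both check out, the revert usually means the pool for that exact pair is empty or missing on the network you are connected to. Here is a minimal sketch for checking that, assuming the w3 instance and token_dict from the question, an Ethereum mainnet connection, and the canonical Uniswap V2 factory address; the two minimal ABIs are hand-written to cover just these calls:

FACTORY = w3.toChecksumAddress('0x5C69bEe701ef814a2B6a3EDD4B1652CB9cc5aA6f')  # Uniswap V2 factory on mainnet

factory_abi = [{
    "name": "getPair", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenA", "type": "address"},
               {"name": "tokenB", "type": "address"}],
    "outputs": [{"name": "pair", "type": "address"}],
}]
pair_abi = [{
    "name": "getReserves", "type": "function", "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "reserve0", "type": "uint112"},
                {"name": "reserve1", "type": "uint112"},
                {"name": "blockTimestampLast", "type": "uint32"}],
}]

factory = w3.eth.contract(address=FACTORY, abi=factory_abi)
pair_address = factory.functions.getPair(token_dict['WETH'], token_dict['UNI']).call()

if int(pair_address, 16) == 0:
    # The zero address means the pair was never created on this network.
    print('pair does not exist on this network')
else:
    pair = w3.eth.contract(address=pair_address, abi=pair_abi)
    reserve0, reserve1, _ = pair.functions.getReserves().call()
    # If either reserve is 0, getAmountsOut() reverts with INSUFFICIENT_LIQUIDITY.
    print('reserves:', reserve0, reserve1)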

Related

Script to transact tokens on the Polygon chain?

I'd like to automatically transfer funds from all my MetaMask wallets into one central wallet on the Polygon chain. How exactly do I do this? Currently I don't know exactly how to approach this, as the token I'd like to transact is on the Polygon chain and I've only seen implementations for the Ethereum chain. This is the token: https://polygonscan.com/token/0x3a9A81d576d83FF21f26f325066054540720fC34
I also don't see an ABI there. It is still an ERC20 token, but I don't know how the implementation differs from a regular token on the Ethereum chain. Currently this is my code for just checking the balance, but that doesn't work either, as the contract address is not recognized. The error says: "Could not transact with/call contract function, is contract deployed correctly and chain synced?"
from web3 import Web3, HTTPProvider
from ethtoken.abi import EIP20_ABI

w3 = Web3(HTTPProvider("https://mainnet.infura.io/v3/..."))
contract_address = '0x3a9A81d576d83FF21f26f325066054540720fC34'
contract = w3.eth.contract(address=contract_address, abi=EIP20_ABI)
print(contract.address)

n1 = '0x...'
raw_balance = contract.functions.balanceOf(n1).call()
You are using the wrong RPC URL for Polygon Mainnet.
If you are using Infura, it should look like this:
w3 = Web3(HTTPProvider("https://polygon.infura.io/v3/YOUR_INFURA_PROJECT_ID"))
Or you can use the public RPC URL:
w3 = Web3(HTTPProvider("https://polygon-rpc.com/"))

Product system type error using olca API for OpenLCA

I'm copying the standard code from the olca package on the openLCA GitHub.
It describes using Python to calculate product systems.
When I run the code I get the error: 'type' object is not subscriptable.
A copy of the code is below.
import olca

client = olca.Client(8080)

# create the calculation setup
setup = olca.CalculationSetup()

# define the calculation type here
# see http://greendelta.github.io/olca-schema/html/CalculationType.html
setup.calculation_type = olca.CalculationType.CONTRIBUTION_ANALYSIS

# select the product system and LCIA method
setup.impact_method = client.find(olca.ImpactMethod, 'TRACI 2.1')
setup.product_system = client.find(olca.ProductSystem, 'compost plant, open')

# amount is the amount of the functional unit (fu) of the system that
# should be used in the calculation; unit, flow property, etc. of the fu
# can be also defined; by default openLCA will take the settings of the
# reference flow of the product system
setup.amount = 1.0

# calculate the result and export it to an Excel file
result = client.calculate(setup)
client.excel_export(result, 'result.xlsx')
I've updated Python and downloaded both versions of olca, from GitHub and from pip.
I believe the issue is with the type of the ProductSystem, but I can't be sure. Any help would be greatly appreciated.

Python Neuroglancer Getting Input

Below is a snippet of code from Google's publicly available Neuroglancer. It is from an example on their GitHub. Could someone explain what exactly this code does and how it does it? I am having trouble understanding it and don't know what exactly the variable s is. Thank you for the help.
def my_action(s):
    # s here is the action state: a snapshot of the viewer at the moment the event fired
    print('Got my-action')
    print('  Mouse position: %s' % (s.mouse_voxel_coordinates,))
    print('  Layer selected values: %s' % (s.selected_values,))

viewer.actions.add('my-action', my_action)
with viewer.config_state.txn() as s:
    # s here is the mutable config state inside the transaction
    s.input_event_bindings.viewer['keyt'] = 'my-action'
    s.status_messages['hello'] = 'Welcome to this example'
This example adds a key binding to the viewer and adds a status message. When you press the t key, the my_action function will run. my_action receives the current state of the action and grabs the mouse coordinates and the selected values in the layer.
The .txn() method performs a state-modification transaction on the ConfigState object; by state-modification, I mean it changes the config. There are several default actions in the ConfigState object (defined in part here), and you are modifying that config by adding your own action.
The mouse_voxel_coordinates and selected_values objects are defined in Python here, and link to the TypeScript implementation here. The example also sets a status message on the config state, and that is implemented here.
It might be useful to first point to the source code for the various functions involved. The example is available on GitHub. viewer.config_state is a "trackable" version of neuroglancer.viewer_config_state.ConfigState, and viewer.config_state.txn() starts a state-modification transaction on it, applying your changes when the with block exits.
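For context, a minimal sketch of how this snippet is usually wired up; the import, the Viewer() construction, and the final print are assumptions not present in the original snippet:

import neuroglancer

viewer = neuroglancer.Viewer()

def my_action(s):
    # s is the action state passed to the callback when the binding fires
    print('Got my-action')
    print('  Mouse position: %s' % (s.mouse_voxel_coordinates,))
    print('  Layer selected values: %s' % (s.selected_values,))

viewer.actions.add('my-action', my_action)
with viewer.config_state.txn() as s:
    s.input_event_bindings.viewer['keyt'] = 'my-action'
    s.status_messages['hello'] = 'Welcome to this example'

# Printing the viewer shows the URL of the locally served viewer page;
# open it in a browser and press "t" to trigger the action.
print(viewer)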

How to use IP_FILTER with python libtorrent

The question I have is: how can I use an ip_filter in libtorrent from Python?
The goal I am trying to achieve is: block all IP addresses (incoming and outgoing traffic) using a libtorrent ip_filter, except for the ones I allow. The code snippet below is where I try to achieve my goal...
class Session:
    def __init__(self):
        self.session = libtorrent.session({'listen_interfaces': '0.0.0.0:6881'})
        self.ip_filter = None
        # ...more...

    def set_access_rules(self):
        self.ip_filter = libtorrent.ip_filter()
        self.ip_filter.add_rule('0.0.0.0', '255.255.255.255', 1)  # I assume '1' means blocking
        self.ip_filter.add_rule('172.16.100.36', '172.16.100.36', 0)  # I assume '0' means allow, prob. wrong...
        self.session.set_ip_filter(self.ip_filter)
The (C++ source) documentation says:
// Adds a rule to the filter. first and last defines a range of
// ip addresses that will be marked with the given flags. The flags
// can currently be 0, which means allowed, or ip_filter::blocked, which
// means disallowed.
ip_filter::blocked <- this is where I get stuck: how do I use/write that in Python?
The thing is that if I call handle.get_peer_info() I expect to see only 172.16.100.36, but I see all sorts of public addresses... Note: my torrent has no trackers and I have configured no trackers elsewhere. Can you maybe give me an example in Python of how to achieve my goal?
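For reference, a minimal sketch of the pattern the quoted documentation describes. The assumption here, worth verifying against your libtorrent version, is that the Python bindings expose no named constant for ip_filter::blocked, whose value in the C++ enum is 1, so the literal is passed instead:

import libtorrent

session = libtorrent.session({'listen_interfaces': '0.0.0.0:6881'})

ip_filter = libtorrent.ip_filter()
# flags=1: assumed to correspond to ip_filter::blocked in the C++ enum
ip_filter.add_rule('0.0.0.0', '255.255.255.255', 1)
# flags=0 means allowed, per the documentation quoted above
ip_filter.add_rule('172.16.100.36', '172.16.100.36', 0)
session.set_ip_filter(ip_filter)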

Using core.Token to pass a String Parameter as a number

I raised a feature request on the CDK GitHub account recently and was pointed in the direction of core.Token as being pretty much the exact functionality I was looking for. I'm now having some issues implementing it and getting similar errors; here's the feature request I raised previously: https://github.com/aws/aws-cdk/issues/3800
So my current code looks something like this:
fargate_service = ecs_patterns.LoadBalancedFargateService(
    self, "Fargate",
    cluster=cluster,
    memory_limit_mib=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='template-service-memory_limit')),
    execution_role=fargate_iam_role,
    container_port=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='port')),
    cpu=core.Token.as_number(ssm.StringParameter.value_from_lookup(self, parameter_name='template-service-container_cpu')),
    image=ecs.ContainerImage.from_registry(ecrRepo)
)
When I try to synthesise this code I get the following error:
jsii.errors.JavaScriptError:
Error: Resolution error: Supplied properties not correct for "CfnSecurityGroupEgressProps"
fromPort: "dummy-value-for-template-service-container_port" should be a number
toPort: "dummy-value-for-template-service-container_port" should be a number.
Object creation stack:
To me it seems to be getting past the validation requiring a number to be passed into the FargateService validation, but when it tries to create the resources after that ("CfnSecurityGroupEgressProps") it can't resolve the dummy string as a number. I'd appreciate any help solving this, or alternative suggestions for passing in values from AWS Systems Manager parameters (I thought it might be possible to parse the values in here via a file pulled from S3 during the build pipeline or something along those lines, but that seems hacky).
With some help, I think we've cracked this!
The problem was that I was passing ssm.StringParameter.value_from_lookup; the solution is to provide the token with ssm.StringParameter.value_for_string_parameter. When this is synthesised it stores a token, and upon deployment the value stored in Systems Manager Parameter Store is substituted.
(We also came up with another approach for achieving something similar, which we're probably going to use over the SSM approach; I've detailed it below the code snippet if you're interested.)
See the complete code below:
from aws_cdk import (
    aws_ec2 as ec2,
    aws_ssm as ssm,
    aws_iam as iam,
    aws_ecs as ecs,
    aws_ecs_patterns as ecs_patterns,
    core,
)

class GenericFargateService(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        containerPort = core.Token.as_number(ssm.StringParameter.value_for_string_parameter(
            self, 'template-service-container_port'))

        vpc = ec2.Vpc(
            self, "cdk-test-vpc",
            max_azs=2
        )

        cluster = ecs.Cluster(
            self, 'cluster',
            vpc=vpc
        )

        fargate_iam_role = iam.Role(
            self, "execution_role",
            assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEC2ContainerRegistryFullAccess")]
        )

        fargate_service = ecs_patterns.LoadBalancedFargateService(
            self, "Fargate",
            cluster=cluster,
            memory_limit_mib=1024,
            execution_role=fargate_iam_role,
            container_port=containerPort,
            cpu=512,
            image=ecs.ContainerImage.from_registry("000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr")
        )

        fargate_service.target_group.configure_health_check(path=self.node.try_get_context("health_check_path"), port="9000")

app = core.App()
GenericFargateService(app, "generic-fargate-service", env={'account': '000000000000', 'region': 'eu-west-1'})
app.synth()
Solutions to problems are like buses, apparently you spend ages waiting for one and then two arrive together. And I think this new bus is the option we're probably going to run with.
The plan is to have developers provide an override for the cdk.json file within their code repos, which can then be parsed into the CDK pipeline where the generic code will be synthesised. This file will contain some "context"; the context will then be used within the CDK to set our variables for the LoadBalancedFargate service.
I've included some code snippets below for setting up the cdk.json file and then using its values within code.
Example CDK.json:
{
    "app": "python3 app.py",
    "context": {
        "container_name": "template-service",
        "memory_limit": 1024,
        "container_cpu": 512,
        "health_check_path": "/gb/template/v1/status",
        "ecr_repo": "000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr"
    }
}
Python example for assigning context to variables:
memoryLimitMib = self.node.try_get_context("memory_limit")
I believe we could also assign some default values if they are not provided by the developer in their cdk.json file; in Python, node.try_get_context simply returns None for a missing key rather than raising, so a simple fallback works, as shown below.
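A minimal sketch of that fallback, to be placed inside the stack's __init__; the default numbers here are illustrative, not values from our setup:

# node.try_get_context returns None when the key is missing from cdk.json,
# so an "or" expression provides the default without any try/except
memory_limit = self.node.try_get_context("memory_limit") or 1024    # default is illustrative
container_cpu = self.node.try_get_context("container_cpu") or 512   # default is illustrative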
I hope this post has provided some useful information to those looking for ways to create a generic template for deploying CDK code! I don't know if we're doing the right thing here, but this tool is so new that it feels like some common patterns don't exist yet.
