I want to extract all confirmed transactions from the Bitcoin blockchain. I know there are repos out there (e.g. https://github.com/znort987/blockparser), but I want to write something myself to better understand it.
I have tried the following code after downloading far more than 42 blocks, while running bitcoind (minimal example):
from bitcoin.rpc import RawProxy

proxy = RawProxy()
blockheight = 42
block = proxy.getblock(proxy.getblockhash(blockheight))
tx_list = block['tx']
for tx_id in tx_list:
    raw_tx = proxy.getrawtransaction(tx_id)
This yields the following error:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/donkeykong/.local/lib/python3.7/site-packages/bitcoin/rpc.py", line 315, in <lambda>
f = lambda *args: self._call(name, *args)
File "/home/donkeykong/.local/lib/python3.7/site-packages/bitcoin/rpc.py", line 239, in _call
'message': err.get('message', 'error message not specified')})
bitcoin.rpc.InvalidAddressOrKeyError: {'code': -5, 'message': 'No such mempool transaction. Use -txindex or provide a block hash to enable blockchain transaction queries. Use gettransaction for wallet transactions.'}
Could anyone elucidate what I am misunderstanding?
For reproduction:
python version: Python 3.7.3
I installed the bitcoin.rpc via: pip3 install python-bitcoinlib
bitcoin-cli version: Bitcoin Core RPC client version v0.20.1
Running the client as bitcoind -txindex solves the problem, as it maintains the full transaction index. I should have paid more attention to the error message...
Excerpt from bitcoind --help:
-txindex
Maintain a full transaction index, used by the getrawtransaction rpc
call (default: 0)
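As the error message itself says, the alternative to -txindex is to pass the containing block's hash to getrawtransaction. A minimal sketch of that variant (the block hash is the third argument; Bitcoin Core has accepted it since v0.17, so a 0.20.1 node should support it):

from bitcoin.rpc import RawProxy

proxy = RawProxy()
block_hash = proxy.getblockhash(42)
block = proxy.getblock(block_hash)
for tx_id in block['tx']:
    # No -txindex needed: the node looks the tx up inside this specific block
    raw_tx = proxy.getrawtransaction(tx_id, False, block_hash)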
I am trying to check my Spot account balance using the KuCoin API's get_accounts(). (GitHub: Link)
Although the GitHub README and the API documentation both say get_accounts() takes two optional arguments, I get an error whenever I pass any args (client.get_accounts(currency='USDT') does not work either).
Am I doing something wrong? / Is there a better way to check my spot wallet balance for a specific coin?
from kucoin.client import Client
import config_ku
client = Client(config_ku.API_KEY, config_ku.API_SECRET, config_ku.API_PASSPHRASE)
results = client.get_accounts('USDT')
print(results)
Error:
Traceback (most recent call last):
File "c:\Users\Ajitesh Singh Thakur\Documents\Visual Studio Code\kucoin\test2.py", line 5, in <module>
results = client.get_accounts('USDT')
TypeError: get_accounts() takes 1 positional argument but 2 were given
I uninstalled the package and installed it again, and now it works. Thanks, all.
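As for a better way to check one coin's spot balance: the endpoint returns every account, so one option is to filter client-side. A sketch, assuming (as the KuCoin /api/v1/accounts docs describe) that get_accounts() returns a list of dicts with 'currency', 'type', 'balance', and 'available' keys:

from kucoin.client import Client
import config_ku

client = Client(config_ku.API_KEY, config_ku.API_SECRET, config_ku.API_PASSPHRASE)

# Keep only the spot ('trade') account(s) holding USDT
usdt_spot = [a for a in client.get_accounts()
             if a['currency'] == 'USDT' and a['type'] == 'trade']
for account in usdt_spot:
    print(account['balance'], account['available'])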
I am using pysolr to add a document to Solr with Python 3.9, and I get the HTTP 400 error below even with only one or two fields.
The fields I'm using here are dynamic fields.
There is no issue connecting to Solr.
#!/usr/bin/python
import pysolr

solr = pysolr.Solr('http://localhost:8080/solr/', always_commit=True)
if solr.ping():
    print('connection successful')

docs = [{'id': '123c', 's_chan_name': 'TV-201'}]
solr.add(docs)
res = solr.search('123c')
print(res)
I get the error below:
connection successful
Traceback (most recent call last):
File "/tmp/test-solr.py", line 8, in <module>
solr.add(docs)
File "/usr/local/lib/python3.9/site-packages/pysolr.py", line 1042, in add
return self._update( File "/usr/local/lib/python3.9/site-packages/pysolr.py", line 568, in
_update
return self._send_request( File "/usr/local/lib/python3.9/site-packages/pysolr.py", line 463, in
_send_request
raise SolrError(error_message % (resp.status_code, solr_message))
pysolr.SolrError: Solr responded with an error (HTTP 400): [Reason: None]
<html><head><meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 Unexpected character '[' (code 91) in prolog;
expected '<' at [row,col {unknown-source}]: [1,1]</title>
</head><body><h2>HTTP ERROR 400</h2><p>Problem accessing /solr/update/.
Reason:<pre> Unexpected character '[' (code 91) in prolog;
expected '<' at [row,col {unknown-source}]: [1,1]</pre>
</p><hr><a href="http://eclipse.org/jetty">
Powered by Jetty://9.4.15.v20190215</a><hr/></body></html>
I’d recommend looking at the debug logs (which might require you to use logging.basicConfig() to enable them) and testing the URLs it’s using yourself. Depending on your Solr configuration, it might be the case that your Solr URL needs to have the core at the end (e.g. http://localhost:8983/solr/mycore).
In general, most reports like this which we get turn out to be oddities in how someone configured Solr. In addition to the debug logging, I highly recommend looking through the test suite since that does everything needed to run Solr and can be a handy point for comparison:
https://github.com/django-haystack/pysolr
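For example, a quick sanity check along those lines might look like this (a sketch; 'mycore' is a placeholder for whatever core you actually created):

import logging
import pysolr

# Surface pysolr's request/response debug logs
logging.basicConfig(level=logging.DEBUG)

# Point the client at a specific core rather than the bare /solr/ root
solr = pysolr.Solr('http://localhost:8080/solr/mycore', always_commit=True)
solr.add([{'id': '123c', 's_chan_name': 'TV-201'}])
print(solr.search('id:123c'))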
The solr.add method supports only a single document:

docs = {'id': '123c', 's_chan_name': 'TV-201'}
solr.add(docs)

It is not suitable for an iterable. For a list of documents, use:

docs = [{'id': '123c', 's_chan_name': 'TV-201'}]
solr.add_many(docs)

That will work.
I am currently working on authenticating my users. I checked out this blog - RESTful Authentication with Flask - and followed its steps to produce a piece of code I believed would serve my purpose. I wanted to use the TimedJSONWebSignatureSerializer class for my particular use case. Below, I am creating an object of that class, generating a token, and trying to load the data with it.
from itsdangerous import TimedJSONWebSignatureSerializer

user_id = 'fake1'
# parser_app is the Flask app, defined elsewhere; its config holds SECRET_KEY
s = TimedJSONWebSignatureSerializer(parser_app.config['SECRET_KEY'], expires_in=3600)
token = s.dumps({'user_id': user_id})
print(token)
print (s.loads(token))
I get the following traceback (with the printed token above it):
eyJhbGciOiJIUzI1NiIsImV4cCI6MTQ2ODI3MjU3MSwiaWF0IjoxNDY4MjY4OTcxfQ.eyJ1c2VyX2lkIjoiZmFrZTEifQ.Ch8y6BDMIIBdIGM0lmjdAimINvP3PnUmBpOp-jDW18w
Traceback (most recent call last):
File "C:/Users/vaibhav/PycharmProjects/Coding/Coding.py", line 6, in <module>
print (s.loads(token))
File "C:\Users\vaibhav\Anaconda\lib\site-packages\itsdangerous.py", line 798, in loads
self, s, salt, return_header=True)
File "C:\Users\vaibhav\Anaconda\lib\site-packages\itsdangerous.py", line 752, in loads
self.make_signer(salt, self.algorithm).unsign(want_bytes(s)),
File "C:\Users\vaibhav\Anaconda\lib\site-packages\itsdangerous.py", line 377, in unsign
payload=value)
itsdangerous.BadSignature: Signature 'Ch8y6BDMIIBdIGM0lmjdAimINvP3PnUmBpOp-jDW18w' does not match
I have just created the token with an expiry of an hour and it gives me a BadSignature which indicates that the token does not match. The desired output would be:
{"user_id" : "fake1"}
Please help me out.
I ended up having to uninstall and reinstall the itsdangerous package with pip. The commands used were:
pip uninstall itsdangerous
followed by:
pip install itsdangerous
Apparently, the file had been corrupted somehow causing it not to work properly.
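With a working install, the round trip from the question behaves as expected. A minimal sketch (a literal secret key stands in for parser_app.config['SECRET_KEY']), handling the two documented failure modes:

from itsdangerous import (TimedJSONWebSignatureSerializer,
                          SignatureExpired, BadSignature)

s = TimedJSONWebSignatureSerializer('my-secret-key', expires_in=3600)
token = s.dumps({'user_id': 'fake1'})

try:
    print(s.loads(token))  # {'user_id': 'fake1'}
except SignatureExpired:
    print('token is valid but older than expires_in')
except BadSignature:
    print('token does not match the key it was signed with')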
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas DataFrame
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder in Windows it works great. All the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run it from a Linux server (different from the Neo4j server) I get the following error at tx.commit(). Note that I have the same versions of Python and py2neo on both machines.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that it is an httplib error. I am not sure how to handle it, since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux or what the IncompleteRead error message means?
UPDATE :
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
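A quick-and-dirty version of that httplib check might look like this (a sketch; the host and the trivial RETURN 1 statement are placeholders, and /db/data/transaction/commit is the endpoint already visible in your log):

import httplib
import json

conn = httplib.HTTPConnection('my_neo4j_server.com.mine', 7474)
payload = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request('POST', '/db/data/transaction/commit', payload,
             {'Content-Type': 'application/json', 'Accept': 'application/json'})
resp = conn.getresponse()
print resp.status, resp.read()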
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
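Besides avoiding quoting problems (e.g. a name containing an apostrophe would break the string-substituted version), parameters let the server reuse a single cached query plan instead of compiling a new statement for every row.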
I keep getting this exception from TweetStream 1.1.1: AuthenticationError("Access denied") (raised when exception.code == 404). It worked last week and now it doesn't. I have tried different usernames and passwords. I can log into Twitter with my account information. I even deleted and reinstalled the module. What gives? Thanks for the help!
I try running this...
import tweetstream

stream = tweetstream.SampleStream("MY_USERNAME", "MY_PASSWORD")
for tweet in stream:
    print tweet
The error actually looks like this:
Traceback (most recent call last):
File "<pyshell#28>", line 1, in <module>
for tweet in stream:
File "C:\Python27\lib\site-packages\tweetstream-1.1.1-py2.7.egg\tweetstream\streamclasses.py", line 165, in __iter__
self._init_conn()
File "C:\Python27\lib\site-packages\tweetstream-1.1.1-py2.7.egg\tweetstream\streamclasses.py", line 103, in _init_conn
raise AuthenticationError("Access denied")
AuthenticationError: Access denied
Twitter released the next version of its API (1.1), and tweetstream doesn't support it yet. See the relevant issue on the tweetstream project issue tracker.
I had the same problem here, and I could not get the patched version mentioned on the project issue tracker (linked by @alecxe) to work either.
Twitter provides a list of libraries that should work with the newer API, at https://dev.twitter.com/docs/twitter-libraries
It lists many, including several for Python.
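For instance, a minimal streaming sketch with tweepy, one of the Python libraries on that list, which handles the OAuth that API 1.1 requires (the four credential placeholders come from an app registered at dev.twitter.com):

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

class PrintListener(tweepy.StreamListener):
    def on_status(self, status):
        print status.text

# Rough equivalent of tweetstream.SampleStream, but over OAuth and API 1.1
stream = tweepy.Stream(auth, PrintListener())
stream.sample()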