"transaction underpriced" error python using brownie deploying contract - python

I've deployed the contract to Rinkeby, and now I am trying to deploy it to Mumbai with no success.
from brownie import Bbum, accounts, config, network

def main():
    dev = accounts.add(config["wallets"]["from_key"])
    print(dev)
    print(network.show_active())
    deployed_contract = Bbum.deploy({"from": dev})
This is the code I'm using to deploy; I just changed the network from rinkeby to polygon-test.
The error:
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\_cli\run.py", line 51, in main
return_value, frame = run(
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\project\scripts.py", line 103, in run
return_value = f_locals[method_name](*args, **kwargs)
File ".\scripts\deployERC.py", line 8, in main
deployed_contract = Bbum.deploy({"from": dev})
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\network\contract.py", line 531, in __call__
return tx["from"].deploy(
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\network\account.py", line 510, in deploy
receipt, exc = self._make_transaction(
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\network\account.py", line 752, in _make_transaction
exc = VirtualMachineError(e)
File "C:\Users\Omer\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_brownie-1.17.2-py3.10.egg\brownie\exceptions.py", line 96, in __init__
raise ValueError(exc["message"]) from None
ValueError: transaction underpriced

After some digging I found that the problem was solved by adding a "priority_fee" next to the account address when deploying the contract:
deployed_contract = Bbum.deploy({"from": dev, "priority_fee": 35000000000})
Yet I just picked a random fee. How can I get the minimal fee required for the transaction?
See the same problem explained here.
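As for getting the fee dynamically: one option is to ask the connected node for a suggested tip instead of hard-coding one. A minimal sketch, assuming the active network supports EIP-1559 and the installed web3 version exposes web3.eth.max_priority_fee (which calls eth_maxPriorityFeePerGas); the suggested value may still need a safety margin:

from brownie import Bbum, accounts, config, web3

def main():
    dev = accounts.add(config["wallets"]["from_key"])
    # Ask the connected node for a suggested priority fee (in wei).
    suggested_tip = web3.eth.max_priority_fee
    print(f"node-suggested priority fee: {suggested_tip} wei")
    deployed_contract = Bbum.deploy({"from": dev, "priority_fee": suggested_tip})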

Related

"[ERROR] OSError: [Errno 38] Function not implemented" - Accessing trend deepsecurity.ComputersApi via Lambda

I have written a Python script that successfully queries the Trend Deep Security API when run locally on my machine.
I've been tasked with running the script in an AWS Lambda so that it is automated and can be scheduled.
The script follows the examples in the API reference and calls the legacy API successfully. However, when I attempt to query using the Computers API it blows up on the line: computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
def get_computer_status_api():
    # Include computer status information in the returned Computer objects
    # expand = deepsecurity.Expand(deepsecurity.Expand.computer_status)
    expand = deepsecurity.Expand()
    expand.add(deepsecurity.Expand.security_updates)
    expand.add(deepsecurity.Expand.computer_status)
    expand.add(deepsecurity.Expand.anti_malware)

    # Set Any Required Values
    computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
    try:
        computers = computers_api.list_computers(api_version, expand=expand.list(), overrides=False)
        print("Querying ComputersApi...")
        api_response_str = str(computers)
        computer_count = len(computers.computers)
        print(str(computer_count) + " Computers listed in Trend")
        ...
The error I get is:
[ERROR] OSError: [Errno 38] Function not implemented
Traceback (most recent call last):
File "/var/task/handler.py", line 782, in main
get_computer_status_api()
File "/var/task/handler.py", line 307, in get_computer_status_api
computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
File "/var/task/deepsecurity/api_client.py", line 69, in __init__
self.pool = ThreadPool()
File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 925, in __init__
Pool.__init__(self, processes, initializer, initargs)
File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 196, in __init__
self._change_notifier = self._ctx.SimpleQueue()
File "/var/lang/lib/python3.8/multiprocessing/context.py", line 113, in SimpleQueue
return SimpleQueue(ctx=self.get_context())
File "/var/lang/lib/python3.8/multiprocessing/queues.py", line 336, in __init__
self._rlock = ctx.Lock()
File "/var/lang/lib/python3.8/multiprocessing/context.py", line 68, in Lock
return Lock(ctx=self.get_context())
File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
sl = self._semlock = _multiprocessing.SemLock(
Searching for this error suggests that I can't use the deepsecurity API in a Lambda because Lambdas don't support multiprocessing.
I'm looking for either confirmation that this is the case or suggestions for what I can change to get this working.
A Trend support ticket suggested posting here.
Resolved the issue by changing the Python version from 3.8 to 3.7 in the Lambda; the script now runs successfully. The traceback hints at why: on the 3.8 runtime, Pool.__init__ creates a SimpleQueue change notifier that needs a SemLock, which Lambda's environment cannot provide, while the 3.7 ThreadPool apparently does not hit that path.
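For reference, a minimal sketch (standard library only, hypothetical helper name) that checks whether the runtime can create the semaphore the ThreadPool needs, which is the primitive that fails on Lambda:

import multiprocessing

def semlock_supported() -> bool:
    # On AWS Lambda there is no POSIX semaphore support, so creating any
    # multiprocessing lock raises OSError: [Errno 38] Function not implemented.
    try:
        multiprocessing.Lock()
        return True
    except OSError:
        return False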

Random NoTransaction in Pyramid

I'm having trouble identifying the source of transaction.interfaces.NoTransaction errors within my Pyramid app. I don't see any pattern to when the error happens, so to me it's quite random.
The app is a (semi-)RESTful API and uses SQLAlchemy and MySQL. I'm currently running it within a Docker container that connects to an external (bare-metal) MySQL instance on the same host OS.
Here's the stack trace for a login attempt within the app. This error happened right after another login attempt that was actually successful.
2020-06-15 03:57:18,982 DEBUG [txn.140501728405248:108][waitress-1] new transaction
2020-06-15 03:57:18,984 INFO [sqlalchemy.engine.base.Engine:730][waitress-1] BEGIN (implicit)
2020-06-15 03:57:18,984 DEBUG [txn.140501728405248:576][waitress-1] abort
2020-06-15 03:57:18,985 ERROR [waitress:357][waitress-1] Exception while serving /auth
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/waitress/channel.py", line 350, in service
task.service()
File "/usr/local/lib/python3.8/site-packages/waitress/task.py", line 171, in service
self.execute()
File "/usr/local/lib/python3.8/site-packages/waitress/task.py", line 441, in execute
app_iter = self.channel.server.application(environ, start_response)
File "/usr/local/lib/python3.8/site-packages/pyramid/router.py", line 270, in __call__
response = self.execution_policy(environ, self)
File "/usr/local/lib/python3.8/site-packages/pyramid_retry/__init__.py", line 127, in retry_policy
response = router.invoke_request(request)
File "/usr/local/lib/python3.8/site-packages/pyramid/router.py", line 249, in invoke_request
response = handle_request(request)
File "/usr/local/lib/python3.8/site-packages/pyramid_tm/__init__.py", line 178, in tm_tween
reraise(*exc_info)
File "/usr/local/lib/python3.8/site-packages/pyramid_tm/compat.py", line 36, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/pyramid_tm/__init__.py", line 135, in tm_tween
userid = request.authenticated_userid
File "/usr/local/lib/python3.8/site-packages/pyramid/security.py", line 381, in authenticated_userid
return policy.authenticated_userid(self)
File "/opt/REDACTED-api/REDACTED_api/auth/policy.py", line 208, in authenticated_userid
result = self._authenticate(request)
File "/opt/REDACTED-api/REDACTED_api/auth/policy.py", line 199, in _authenticate
session = self._get_session_from_token(token)
File "/opt/REDACTED-api/REDACTED_api/auth/policy.py", line 320, in _get_session_from_token
session = service.get(session_id)
File "/opt/REDACTED-api/REDACTED_api/service/__init__.py", line 122, in get
entity = self.queryset.filter(self.Meta.model.id == entity_id).first()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3375, in first
ret = list(self[0:1])
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3149, in __getitem__
return list(res)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3481, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3502, in _execute_and_instances
conn = self._get_bind_args(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3517, in _get_bind_args
return fn(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3496, in _connection_from_session
conn = self.session.connection(**kw)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1138, in connection
return self._connection_for_bind(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1146, in _connection_for_bind
return self.transaction._connection_for_bind(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 458, in _connection_for_bind
self.session.dispatch.after_begin(self.session, self, conn)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/event/attr.py", line 322, in __call__
fn(*args, **kw)
File "/usr/local/lib/python3.8/site-packages/zope/sqlalchemy/datamanager.py", line 268, in after_begin
join_transaction(
File "/usr/local/lib/python3.8/site-packages/zope/sqlalchemy/datamanager.py", line 233, in join_transaction
DataManager(
File "/usr/local/lib/python3.8/site-packages/zope/sqlalchemy/datamanager.py", line 89, in __init__
transaction_manager.get().join(self)
File "/usr/local/lib/python3.8/site-packages/transaction/_manager.py", line 91, in get
raise NoTransaction()
transaction.interfaces.NoTransaction
The trace shows that execution eventually reaches my project, but only my custom authentication policy, and it fails right where the database should be queried for the user.
What intrigues me here is the third line of the pasted output: it seems Waitress somehow aborted the transaction it created? Any clue why?
EDIT: Here's the code where that happens: policy.py:320
def _get_session_from_token(self, token) -> UserSession:
    try:
        session_id, session_secret = self.parse_token(token)
    except InvalidToken as e:
        raise SessionNotFound(e)
    service = AuthService(self.dbsession, None)
    try:
        session = service.get(session_id)  # <---- Service class called here
    except NoResultsFound:
        raise SessionNotFound("Invalid session found in request headers. "
                              "Session id: {}".format(session_id))
    if not service.check_session(session, session_secret):
        raise SessionNotFound("Session signature does not match")
    now = datetime.now(tz=pytz.UTC)
    if session.validity < now:
        raise SessionNotFound(
            "Current session ID {session_id} is expired".format(
                session_id=session.id
            )
        )
    return session
And here is a view of that service class method:
class AuthService(ModelService):
    class Meta:
        model = UserSession
        queryset = Query(UserSession)
        search_fields = []
        order_fields = [UserSession.created_at.desc()]

    # These below are from the generic ModelService parent class
    def __init__(self, dbsession: Session, user_id: str):
        self.user_id = user_id
        self.dbsession = dbsession
        self.Meta.queryset = self.Meta.queryset.with_session(dbsession)
        self.logger = logging.getLogger("REDACTED")

    @property
    def queryset(self):
        return self.Meta.queryset

    def get(self, entity_id) -> Base:
        entity = self.queryset.filter(self.Meta.model.id == entity_id).first()
        if not entity:
            raise NoResultsFound(f"Could not find requested ID {entity_id}")
        return entity
As you can see, there's already some exception handling. I really don't see what other exception I could try to catch in AuthService.get.
I found the solution to be much simpler than tinkering inside Pyramid or SQLAlchemy.
Debugging my authentication policy closely, I found out that it was keeping a sticky reference to the dbsession. It was stored on the first request that ever used it and never released.
The first request works as expected and the following one fails: my understanding is that the policy object stays in memory while the app is running, even after the initial transaction is closed. The second request gets a new connection and a new transaction, but the object in memory still points to the previous one, and using it ultimately causes this error.
What I don't understand is why the exception only happened sometimes. As I mentioned initially, it was seemingly random.
Another thing I struggled with was writing a test case to expose the issue. In my tests the issue never happens because I have (and I've never seen it done differently) a single connection and a single transaction throughout the entire testing session, as opposed to a new connection/transaction per request, so I have found no way to actually reproduce it.
Please let me know if that makes sense, and if you can shed some light on how to expose the bug in a test case.
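For illustration, a minimal sketch of the shape of the fix, assuming the policy previously cached the dbsession on first use; the names TokenAuthenticationPolicy and request.dbsession are placeholders, not the redacted implementation:

class TokenAuthenticationPolicy:
    def _get_session_from_token(self, request, token):
        session_id, session_secret = self.parse_token(token)
        # Build the service from the *current* request's dbsession instead of a
        # reference captured on the first request, so the query joins the
        # transaction that pyramid_tm opened for this request.
        service = AuthService(request.dbsession, None)
        return service.get(session_id)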

Tornado authenticating to MongoDB using Motor

I made a Tornado app that is supposed to run some queries against a MongoDB instance running on another machine. For this purpose I have set up MongoDB with authentication and users. I have checked that everything works and that I can authenticate to MongoDB using the Robo 3T app and a small synchronous script that uses pymongo.
This is how I initialize my tornado application:
class API(tornado.web.Application):
    def __init__(self):
        settings = dict(
            autoreload=True,
            compiled_template_cache=False,
            static_hash_cache=False,
            serve_traceback=True,
            cookie_secret="secret",
            xsrf_cookies=True,
            static_path=os.path.join(os.path.dirname(__file__), "media"),
            template_loader=tornado.template.Loader('./templates')
        )
        mongohost = os.environ.get('MONGOHOST', 'localhost')
        mongoport = os.environ.get('MONGOPORT', 27017)
        mongouser = os.environ.get('MONGOUSER')
        mongopass = os.environ.get('MONGOPASS')
        mongodb = os.environ.get('MONGODB')
        mongouri = f'mongodb://{mongouser}:{mongopass}@{mongohost}:{mongoport}/{mongodb}'
        self.client = motor.motor_tornado.MotorClient(mongouri)
        logging.info('connected to mongodb')
        self.db = self.client.get_default_database()
        logging.info('got mongodb database')
        tornado.web.Application.__init__(self, url_patterns, **settings)

def main():
    port = 8888
    FORMAT = ('%(asctime)s %(levelname) -10s %(name) -30s %(funcName) -35s %(lineno) -5d: %(message)s')
    logging.basicConfig(level=logging.DEBUG, format=FORMAT)
    tornado.ioloop.IOLoop.configure(TornadoUvloop)
    app = API()
    app.listen(port)
    signal.signal(signal.SIGINT, sig_exit)
    logging.info('Tornado server started on port %s' % port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
Everything appears to run until one of my handlers that actually performs queries against the database is hit. Code from the handler looks like this:
cursor = self.application.db['events'].find(
    find,
    projection
).limit(perpage).skip(page*perpage)
buffsize = 0
try:
    while (yield cursor.fetch_next):
        message = cursor.next_object()
        self.write(json.dumps(message, default=json_util.default))
        buffsize += 1
        if buffsize >= 10:
            buffsize = 0
            yield self.flush()
    yield self.flush()
except Exception:
    logging.error('Could not connect to mongodb', exc_info=True)
This code worked just fine before I tried to use authentication, but now it raises exceptions and has stopped working:
2017-12-12 13:00:20,718 INFO root __init__ 37 : connected to mongodb
2017-12-12 13:00:20,718 INFO root __init__ 39 : got mongodb database
2017-12-12 13:00:20,723 INFO root main 67 : Tornado server started on port 8888
2017-12-12 13:00:25,226 INFO tornado.general _check_file 198 : /Users/liviu/Documents/Work/api_v2/src/api_tmp/handlers/event_list.py modified; restarting server
2017-12-12 13:00:25,469 INFO root __init__ 37 : connected to mongodb
2017-12-12 13:00:25,469 INFO root __init__ 39 : got mongodb database
2017-12-12 13:00:25,472 INFO root main 67 : Tornado server started on port 8888
2017-12-12 13:00:28,152 INFO root get 266 : now querying database
2017-12-12 13:00:28,214 ERROR root get 355 : Could not connect to mongodb
Traceback (most recent call last):
File "/Users/liviu/Documents/Work/api_v2/src/api_tmp/handlers/event_list.py", line 346, in get
while (yield cursor.fetch_next):
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/tornado/gen.py", line 1055, in run
value = future.result()
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "<string>", line 4, in raise_exc_info
File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/cursor.py", line 1055, in _refresh
self.__collation))
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/cursor.py", line 892, in __send_message
**kwargs)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/mongo_client.py", line 950, in _send_message_with_response
exhaust)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/mongo_client.py", line 961, in _reset_on_error
return func(*args, **kwargs)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/server.py", line 99, in send_message_with_response
with self.get_socket(all_credentials, exhaust) as sock_info:
File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/server.py", line 168, in get_socket
with self.pool.get_socket(all_credentials, checkout) as sock_info:
File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/pool.py", line 852, in get_socket
sock_info.check_auth(all_credentials)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/pool.py", line 570, in check_auth
auth.authenticate(credentials, self)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/auth.py", line 486, in authenticate
auth_func(credentials, sock_info)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/auth.py", line 466, in _authenticate_default
return _authenticate_scram_sha1(credentials, sock_info)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/auth.py", line 209, in _authenticate_scram_sha1
res = sock_info.command(source, cmd)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/pool.py", line 477, in command
collation=collation)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/network.py", line 116, in command
parse_write_concern_error=parse_write_concern_error)
File "/Users/liviu/.venv/api/lib/python3.6/site-packages/pymongo/helpers.py", line 210, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Authentication failed.
2017-12-12 13:00:28,220 INFO tornado.access log_request 2063: 200 GET /event/list/web?starttime=2017-01-01&endtime=2017-02-03T14:00:00&minlatitude=10&maxlatitude=20&minlongitude=10&maxlongitude=20&minmagnitude=2&maxmagnitude=5&mindepth=10&maxdepth=100 (::1) 67.88ms
I was able to find some simple examples on how to log in and authenticate to MongoDB and as far as I can figure, that is exactly how I am also doing it (https://github.com/mongodb/motor/blob/master/doc/examples/authentication.rst).
Can anybody shed some light on what is going on and how to properly authenticate to MongoDB from an actual working tornado application using Motor?
P.S. I am using Python 3.6, tornado 4.5.2 and motor 1.1.
P.P.S. In the meantime I have discovered that using this as the uri makes it work properly:
mongouri = f'mongodb://{mongouser}:{mongopass}@{mongohost}:{mongoport}/admin'
In the above, I replaced {mongodb} with the "admin" db. After the client connects and authenticates against the admin database, I can proceed to get_database(mongodb) and it works properly. If anyone cares to articulate more clearly what is going on, I will accept the answer.
From the MongoDB docs:
authSource
Specify the database name associated with the user’s credentials.
authSource defaults to the database specified in the connection
string.
That explains why it works if you specify "admin" as target database.
So in your case you should use:
mongouri = f'mongodb://{mongouser}:{mongopass}@{mongohost}:{mongoport}/{mongodb}?authSource=admin'
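A minimal sketch putting the pieces together, assuming the user was created in the admin database; the path component stays the application database, and authSource only tells the driver where to authenticate:

mongouri = (
    f'mongodb://{mongouser}:{mongopass}@{mongohost}:{mongoport}/'
    f'{mongodb}?authSource=admin'
)
client = motor.motor_tornado.MotorClient(mongouri)
db = client.get_default_database()  # still the application database, not "admin"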

google-api-python-client broken because of OAuth2?

I am trying to check whether a certain dataset exists in BigQuery using the Google API Client in Python. It always worked until the last update, where I got this strange error I don't know how to fix:
Traceback (most recent call last):
File "/root/miniconda/lib/python2.7/site-packages/dsUtils/bq_utils.py", line 106, in _get
resp = bq_service.datasets().get(projectId=self.project_id, datasetId=self.id).execute(num_retries=2)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/root/miniconda/lib/python2.7/site-packages/googleapiclient/http.py", line 755, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/root/miniconda/lib/python2.7/site-packages/googleapiclient/http.py", line 93, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 598, in new_request
self._refresh(request_orig)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 864, in _refresh
self._do_refresh_request(http_request)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 891, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 1597, in _generate_refresh_req
uest_body
assertion = self._generate_assertion()
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/service_account.py", line 263, in _generate_ass
ertion
key_id=self._private_key_id)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/crypt.py", line 97, in make_signed_jwt
signature = signer.sign(signing_input)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/_pycrypto_crypt.py", line 101, in sign
return PKCS1_v1_5.new(self._key).sign(SHA256.new(message))
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Signature/PKCS1_v1_5.py", line 112, in sign
m = self._key.decrypt(em)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 174, in decrypt
return pubkey.pubkey.decrypt(self, ciphertext)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/pubkey.py", line 93, in decrypt
plaintext=self._decrypt(ciphertext)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 235, in _decrypt
r = getRandomRange(1, self.key.n-1, randfunc=self._randfunc)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Util/number.py", line 123, in getRandomRange
value = getRandomInteger(bits, randfunc)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Util/number.py", line 104, in getRandomInteger
S = randfunc(N>>3)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 202, in read
return self._singleton.read(bytes)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 178, in read
return _UserFriendlyRNG.read(self, bytes)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 137, in read
self._check_pid()
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 153, in _check_pid
raise AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
AssertionError: PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()
Does anyone understand what is happening?
Note that I also get this error with other components like GCStorage.
Note also that I use the following code to load my Google credentials:
from oauth2client.client import GoogleCredentials

def get_credentials(credentials_path):  # my json credentials path
    logger.info('Getting credentials...')
    try:
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
        credentials = GoogleCredentials.get_application_default()
        return credentials
    except Exception as e:
        raise e
So if anyone knows a better way to load my Google credentials from my JSON service account file, one which would avoid the error, please tell me.
It looks like the error is in the PyCrypto module, which appears to be used under the hood by Google's OAuth2 implementation. If your code is calling os.fork() at some point, you may need to call Crypto.Random.atfork() afterward in both the parent and child process in order to update the module's internal state.
See here for PyCrypto docs; search for "atfork" for more info:
https://github.com/dlitz/pycrypto
This question and answer might also be relevant:
PyCrypto : AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
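A minimal sketch of that workaround, assuming the application forks explicitly with os.fork(); frameworks that fork for you (multiprocessing, prefork web servers) would need the same call in a post-fork hook:

import os
from Crypto import Random

pid = os.fork()
# Re-seed PyCrypto's RNG in both the parent and the child so its internal
# PID check passes after the fork.
Random.atfork()
if pid == 0:
    # Child process: safe to use the credentials / signing code here.
    pass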

Duplicity backup to onedrive client error

I'm trying to back up files from my computer to OneDrive with duplicity.
I have installed all the dependencies. When running duplicity, an auth link is generated which I must open in a browser, and then, after granting permissions to the app, paste the return URL back into duplicity.
I do all these steps, but duplicity returns:
Traceback (most recent call last):
File "/usr/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1364, in main
action = commandline.ProcessCommandLine(sys.argv[1:])
File "/usr/lib/python2.7/site-packages/duplicity/commandline.py", line 1116, in ProcessCommandLine
backup, local_pathname = set_backend(args[0], args[1])
File "/usr/lib/python2.7/site-packages/duplicity/commandline.py", line 1005, in set_backend
globals.backend = backend.get_backend(bend)
File "/usr/lib/python2.7/site-packages/duplicity/backend.py", line 223, in get_backend
obj = get_backend_object(url_string)
File "/usr/lib/python2.7/site-packages/duplicity/backend.py", line 209, in get_backend_object
return factory(pu)
File "/usr/lib/python2.7/site-packages/duplicity/backends/onedrivebackend.py", line 90, in __init__
self.initialize_oauth2_session()
File "/usr/lib/python2.7/site-packages/duplicity/backends/onedrivebackend.py", line 153, in initialize_oauth2_session
authorization_response=redirected_to)
File "/usr/lib/python2.7/site-packages/requests_oauthlib/oauth2_session.py", line 232, in fetch_token
self._client.parse_request_body_response(r.text, scope=self.scope)
File "/usr/lib/python2.7/site-packages/oauthlib/oauth2/rfc6749/clients/base.py", line 409, in parse_request_body_response
self.token = parse_token_response(body, scope=scope)
File "/usr/lib/python2.7/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 376, in parse_token_response
validate_token_parameters(params)
File "/usr/lib/python2.7/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 383, in validate_token_parameters
raise_from_error(params.get('error'), params)
File "/usr/lib/python2.7/site-packages/oauthlib/oauth2/rfc6749/errors.py", line 271, in raise_from_error
raise cls(**kwargs)
InvalidClientError: (invalid_client) The client does not exist. If you are the application developer, configure a new application through the application management site at https://manage.dev.live.com/.
It looks like there is no app with the ID that duplicity generates the auth link with.
But when I go to the link provided by duplicity, I see that "Duplicity is asking for permissions".
So should I add my own app and somehow provide its ID to duplicity (I searched for how to do this but without result), or is it a duplicity bug?
All programmatic interaction with Windows Live requires a client ID,
which uniquely identifies your application to Windows Live. Your
application must include the client ID in every request that it sends
to the Messenger Connect API Service.
You have to register your application as shown in this official Windows Live tutorial:
https://msdn.microsoft.com/en-us/library/ff751474.aspx
Then pass your ID to the application so it can authenticate to Windows Live at run time when calling the API.
You can use the code in
https://github.com/fkalis/bash-onedrive-upload
which also provides full support for uploading files larger than 100 MB.
