Google Pub/Sub Python Subscriber gets ALREADY_EXISTS after 10 minutes

I deployed a Python subscriber on Sunday morning. The subscriber restarts every day. Starting today (three days later), it hits an ALREADY_EXISTS error 10-20 minutes after starting.
I've restarted it several times now. Every time, it starts fine, retrieves the earlier messages, and processes them correctly. Then, about 10-20 minutes later, it dies again. At the moment it dies it has received no new messages, and nothing significant has happened.
Subscriber code
from google.cloud import pubsub

project_id = '***'
subscription = 'pop'
subscription_id = 'projects/{}/subscriptions/{}'.format(project_id, subscription)

def subscribe(callback):
    subscriber = pubsub.SubscriberClient()
    subscription = subscriber.subscribe(subscription_id)
    future = subscription.open(callback)
    future.result()  # blocks until the consumer dies
Error
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 54, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib64/python3.6/site-packages/grpc/_channel.py", line 341, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.ABORTED, The operation was aborted.)>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "../monitors/bautopops.py", line 45, in <module>
main()
File "../monitors/bautopops.py", line 41, in main
gmail.start_pubsub_subscription(callback)
File "/home/ec2-user/bautopops/gmail.py", line 270, in start_pubsub_subscription
pubsub.subscribe(pubsub_callback)
File "/home/ec2-user/bautopops/pubsub.py", line 19, in subscribe
future.result()
File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/futures.py", line 103, in result
raise err
File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_consumer.py", line 336, in _blocking_consume
for response in responses:
File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_consumer.py", line 462, in _pausable_iterator
yield next(iterator)
File "/usr/local/lib64/python3.6/site-packages/grpc/_channel.py", line 347, in __next__
return self._next()
File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 56, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 2, in raise_from
google.api_core.exceptions.Aborted: 409 The operation was aborted.
Guest worker-005 exited with error A process has ended with a probable error condition: process ended with exit code 1.
I found in the Google Pub/Sub Reference that error 409 means ALREADY_EXISTS:
The topic or subscription already exists. This is an error on creation operations.
This really doesn't help, since I am not creating a subscription; I am only subscribing to one that already exists, as described in the Google Cloud docs. (The traceback also shows that the exception actually raised is google.api_core.exceptions.Aborted, which apparently shares the 409 code.)
Lastly, I found this GitHub issue reporting the same error, but under totally different circumstances; it does not seem to be related to my problem.
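In the meantime, here is a minimal sketch of the kind of restart wrapper I could put around the blocking result() call, assuming the Aborted error is transient (the function name and the 10-second back-off are mine, not from any docs):

import time

from google.api_core import exceptions
from google.cloud import pubsub

def subscribe_with_retry(callback):
    # Re-open the streaming pull whenever it dies with Aborted (409).
    while True:
        subscriber = pubsub.SubscriberClient()
        future = subscriber.subscribe(subscription_id).open(callback)
        try:
            future.result()  # blocks until the consumer dies
        except exceptions.Aborted:
            time.sleep(10)  # arbitrary back-off before re-subscribing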
Please help.

Related

Faust-Streaming Crashes with error "Partition is not assigned"

We have recently switched to faust-streaming (0.6.9) from faust 1.10.4. Since then we have seen the applications crash with the exception below. The application has multiple layers, with aggregation and filtering of data at each stage. At each stage the processor sends the message to a Kafka topic and the corresponding faust app agent consumes it (a sketch of one stage follows the configuration below). We have kept the partition count of the Kafka topic the same at each layer.
Cluster size = 12
Topic & table partition count = 36
faust-streaming version = 0.6.9
kafka-python version = 2.0.2
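A hypothetical minimal sketch of one such stage (the app id, broker address, and topic names are placeholders, not our real configuration):

import faust

app = faust.App('stage-app', broker='kafka://localhost:9092')
source = app.topic('stage-in', partitions=36)
sink = app.topic('stage-out', partitions=36)

@app.agent(source)
async def process(stream):
    # Filter/aggregate events from this layer, then forward the result
    # to the next layer's topic.
    async for event in stream:
        await sink.send(value=event)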
[2021-07-29 10:05:23,761] [18808] [ERROR] [^---Fetcher]: Crashed reason=AssertionError('Partition is not assigned')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 802, in _execute_task
await task
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 176, in _fetcher
await consumer._drain_messages(self)
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 1104, in _drain_messages
async for tp, message in ait:
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 714, in getmany
highwater_mark = self.highwater(tp)
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 1367, in highwater
return self._thread.highwater(tp)
File "/usr/local/lib/python3.8/site-packages/faust/transport/drivers/aiokafka.py", line 923, in highwater
return self._ensure_consumer().highwater(tp)
File "/usr/local/lib/python3.8/site-packages/aiokafka/consumer/consumer.py", line 673, in highwater
assert self._subscription.is_assigned(partition), \
AssertionError: Partition is not assigned
[2021-07-29 10:05:23,764] [18808] [INFO] [^Worker]: Stopping...
[2021-07-29 10:05:23,765] [18808] [INFO] [^-App]: Stopping...
Please help us here.

Google Cloud Logging not working when used from Python

Python code:
import google.cloud.logging

client = google.cloud.logging.Client.from_service_account_json("file.config")
client.setup_logging()

import logging
logging.info("error")
Traceback:
Traceback (most recent call last):
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/grpc/_channel.py", line 923, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/grpc/_channel.py", line 826, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "The caller does not have permission"
debug_error_string = "{"created":"#1612798593.245379000","description":"Error received from peer ipv4:142.250.71.42:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"The caller does not have permission","grpc_status":7}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/handlers/transports/background_thread.py", line 123, in _safely_commit_batch
batch.commit()
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/logger.py", line 383, in commit
client.logging_api.write_entries(entries, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/_gapic.py", line 121, in write_entries
self._gapic_api.write_log_entries(
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging_v2/gapic/logging_service_v2_client.py", line 476, in write_log_entries
return self._inner_api_calls["write_log_entries"](
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/retry.py", line 281, in retry_wrapped_func
return retry_target(
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 The caller does not have permission
Here I am trying to use Google Cloud Logging, but I am getting the above error. Please take a look.
I am doing this in Python. Could there be an issue with how the service account was created?
As John said, have you checked whether the service account has the proper role assigned? As the official documentation says: "Using Cloud Logging library for Python requires the IAM Logs Writer role on Google Cloud. Most Google Cloud environments provide this role by default".
On the other hand, I am a bit curious that you used the "google-cloud-storage" tag, yet you do not mention anything related to it. Have you checked whether the issue is that you do not have enough permissions (such as Storage Admin) to access your bucket?
I was using:
client = google.cloud.logging.Client()
(not ".from_service_account_json") and I was getting the exact same error message, even though I had the proper role, roles/logging.logWriter. I am using google-cloud-logging 2.6.0.
I found that when I ran the code above in a Vertex AI training job, if I didn't create the client with my project set explicitly, like this:
client = google.cloud.logging.Client(project=<user project number>)
then google.cloud.logging used some kind of dummy project_id, i.e. "i285ca2410679d8f1p-tp", which doesn't have the necessary access. Once I put in my project_id, all the error messages were gone.
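Putting it together, a minimal sketch of the working setup (the project id below is a placeholder for your own):

import logging

import google.cloud.logging

# Passing the project explicitly prevents the fallback to a
# metadata-derived dummy project id.
client = google.cloud.logging.Client(project="my-project-id")
client.setup_logging()

logging.info("hello from Cloud Logging")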

Is it possible to deploy a daml smart contract with bots to Hyperledger Fabric?

I deployed the quickstart tutorial based on the "daml-on-fabric" example (https://github.com/hacera/daml-on-fabric), and after that I tried to deploy the ping-pong example from dazl (https://github.com/digital-asset/dazl-client/tree/master/samples/ping-pong). The bots from the example work fine on the DAML ledger. However, when I try to deploy the example on Fabric, the bots are unable to send transactions. Everything works fine following the README at https://github.com/hacera/daml-on-fabric/blob/master/README.md, and the smart contract appears to be deployed on Fabric. The error occurs when I try to use the bots from the ping-pong Python files (https://github.com/digital-asset/dazl-client/blob/master/samples/ping-pong/README.md).
I receive this error:
[ ERROR] 2020-03-10 15:40:57,475 | dazl | A command submission failed!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
>
[ ERROR] 2020-03-10 15:40:57,476 | dazl | An event handler in a bot has thrown an exception!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/bots.py", line 157, in _handle_event
await handler.callback(new_event)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
From the error message it looks like the parties defined in the quickstart example have not been allocated on the ledger, hence the "Party not known on ledger" error.
You can follow the steps in https://docs.daml.com/deploy/index.html using daml deploy --host=<HOST> --port=<PORT>, which will both upload the DARs and allocate the parties on the ledger.
You can also run just the party-allocation step, daml ledger allocate-parties, which allocates the parties defined in your daml.yaml.
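The parties are read from the parties: section of the project's daml.yaml; a hypothetical example (the party names are placeholders):

parties:
  - Alice
  - Bob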

How to fix LDAPSocketReceiveError: error receiving data: The read operation timed out while using LDAP_MATCHING_RULE_IN_CHAIN/1.2.840.113556.1.4.1941?

I am trying to fetch a user's groups recursively.
For example: if user A is a member of G1, and G1 is a member of G2, I should get both G1 and G2 as the output for A.
My code is below.
import ssl

import ldap3

ldap_query_find_all_groups_with_our_user_as_member = "(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=nn\, rr,OU=tt,OU=uu,OU=mm,OU=ss,OU=bb,OU=ss,OU=ll,DC=aa,DC=ss,DC=com))"
tls = ldap3.Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLS)
server = ldap3.Server(<<domaincontroller>>, get_info=ldap3.ALL, mode=ldap3.IP_V4_PREFERRED, tls=tls, use_ssl=True)
with ldap3.Connection(server=server, authentication=ldap3.NTLM, auto_bind=True, password=domain.password, read_only=True, receive_timeout=self.config.ldap_timeout, user=domain.user) as ldap_connection:
    search_parameters = {'search_base': domain.base_dn, 'search_filter': ldap_query_find_all_groups_with_our_user_as_member, 'attributes': ['*']}
    ldap_connection.search(**search_parameters)
    print(ldap_connection.entries)
It works fine without the :1.2.840.113556.1.4.1941: part, but with it I get the error below.
Note:
There can also be duplicates, where a parent has a group as its child and that child has the same group as its own child again.
Also, although I don't know for sure, two groups could be members of each other and form a cycle. I am not sure whether LDAP_MATCHING_RULE_IN_CHAIN handles such situations.
Traceback (most recent call last):
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/strategy/sync.py", line 82, in receiving
data = self.connection.socket.recv(self.socket_size)
File "/usr/local/lib/python3.6/ssl.py", line 994, in recv
return self.read(buflen)
File "/usr/local/lib/python3.6/ssl.py", line 871, in read
return self._sslobj.read(len, buffer)
File "/usr/local/lib/python3.6/ssl.py", line 633, in read
v = self._sslobj.read(len)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/myapp/venv/lib/python3.6/site-packages/app/core.py", line 283, in smita
ldap_connection.search(**search_parameters)
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/core/connection.py", line 789, in search
response = self.post_send_search(self.send('searchRequest', request, controls))
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/strategy/sync.py", line 139, in post_send_search
responses, result = self.get_response(message_id)
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/strategy/base.py", line 324, in get_response
responses = self._get_response(message_id)
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/strategy/sync.py", line 157, in _get_response
responses = self.receiving()
File "/opt/myapp/venv/lib/python3.6/site-packages/ldap3/strategy/sync.py", line 92, in receiving
raise communication_exception_factory(LDAPSocketReceiveError, type(e)(str(e)))(self.connection.last_error)
ldap3.core.exceptions.LDAPSocketReceiveError: error receiving data: The read operation timed out
A timeout, in general, means that the server did not respond in the expected amount of time, so the client gave up waiting. This can be a time-consuming query; try increasing receive_timeout to allow more time for the results to come back.
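A minimal sketch of that change, reusing the connection arguments from the question (the 120-second value is an arbitrary example):

with ldap3.Connection(server=server,
                      authentication=ldap3.NTLM,
                      auto_bind=True,
                      password=domain.password,
                      read_only=True,
                      receive_timeout=120,  # seconds; raise until the slow chained query fits
                      user=domain.user) as ldap_connection:
    ldap_connection.search(**search_parameters)
    print(ldap_connection.entries)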

Pymongo AssertionError: ids don't match

I use:
MongoDB 1.6.5
Pymongo 1.9
Python 2.6.6
I have three types of daemons: the first loads data from the web, the second analyzes it and saves the result, and the third groups the results. All of them work with MongoDB.
At some point the third daemon throws many exceptions like the one below (mostly when there is a large amount of data in the DB):
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/gevent-0.13.1-py2.6-linux-x86_64.egg/gevent/greenlet.py", line 405, in run
result = self._run(*self.args, **self.kwargs)
File "/data/www/spider/daemon/scripts/mainconverter.py", line 72, in work
for item in res:
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 601, in next
if len(self.__data) or self._refresh():
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 564, in _refresh
self.__query_spec(), self.__fields))
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 521, in __send_message
**kwargs)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 743, in _send_message_with_response
return self.__send_and_receive(message, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 724, in __send_and_receive
return self.__receive_message_on_socket(1, request_id, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 714, in __receive_message_on_socket
struct.unpack("<i", header[8:12])[0])
AssertionError: ids don't match -561338340 0
<Greenlet at 0x2baa628: <bound method Worker.work of <scripts.mainconverter.Worker object at 0x2ba8450>>> failed with AssertionError
Can anyone tell me what causes this exception and how to fix it?
Thanks.
This is likely a threading problem related to how you are using worker threads with gevent coroutines. It seems like the pymongo connection object is reading a response for a request it didn't make.
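A minimal sketch of one way to avoid that, assuming each greenlet can open its own connection instead of sharing one (the database and collection names are hypothetical, loosely based on the paths in the traceback):

from pymongo import Connection  # pymongo 1.x API, as used in the question

def work():
    # One connection per greenlet, so concurrent greenlets cannot
    # interleave request/response pairs on a shared socket.
    db = Connection('localhost', 27017).spider
    for item in db.results.find():
        handle(item)  # hypothetical per-item handler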
