I'm facing an issue when exporting traces to Grafana Tempo via the OpenTelemetry Collector. This is the error I get. Any help would be appreciated.
Transient error StatusCode.UNAVAILABLE encountered while exporting traces, retrying in Nones.
Exception while exporting Span batch.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py", line 305, in _export
self._client.Export(
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Socket operation on non-socket"
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.20.115.161:4317 {created_time:"2022-10-10T09:24:15.873984845+00:00", grpc_status:14, grpc_message:"Socket operation on non-socket"}"
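StatusCode.UNAVAILABLE (grpc_status 14) means the gRPC channel could not reach, or lost, the collector endpoint, so the exporter's endpoint and port are the first things to verify. For reference, a minimal sketch of how the OTLP gRPC span exporter is typically wired up; the collector hostname below is an assumption:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# 4317 is the collector's default OTLP/gRPC port; "otel-collector" is a
# placeholder for the collector's actual service name or address.
exporter = OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)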
I installed the Python libraries on my local PC and added my GA4 Property ID. When I attempt to run the script, I get the following error. Note that the error actually makes sense, because the failing IP address doesn't have a valid SSL certificate:
Traceback (most recent call last):
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "\\usalbodd01\bod_Share\BODS_Tools\grpc\_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "\\usalbodd01\bod_Share\BODS_Tools\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED {grpc_status:14, created_time:"2022-11-28T18:41:22.060505311+00:00"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "\\usalbodd01\bod_Share\BODS_Tools\GoogleQuickstart.py", line 51, in <module>
sample_run_report("")
File "\\usalbodd01\bod_Share\BODS_Tools\GoogleQuickstart.py", line 43, in sample_run_report
response = client.run_report(request)
File "\\usalbodd01\bod_Share\BODS_Tools\google\analytics\data_v1beta\services\beta_analytics_data\client.py", line 511, in run_report
response = rpc(
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\gapic_v1\method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "\\usalbodd01\bod_Share\BODS_Tools\google\api_core\grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses; last error: UNKNOWN: ipv4:142.250.190.10:443: Ssl handshake failed: SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
Have any of you run into this when attempting to run the quickstart.py script locally?
Regards,
Greg
I attempted to run the script after following the "TODOs". I installed the Google certificates locally. I then tested the IP address the script is failing on; the address is not secured:
Your connection isn't private
Attackers might be trying to steal your information from 142.250.190.10 (for example, passwords, messages, or credit cards).
NET::ERR_CERT_AUTHORITY_INVALID
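If the machine sits behind a corporate proxy or firewall that re-signs TLS traffic, which is what the NET::ERR_CERT_AUTHORITY_INVALID page suggests, then gRPC has to be told to trust that proxy's root CA; it does not use the Windows certificate store. A minimal sketch; the certificate path is an assumption:

import os

# Must be set before gRPC opens its first secure channel; the path to
# the corporate root CA bundle (PEM) is an assumption.
os.environ["GRPC_DEFAULT_SSL_ROOTS_FILE_PATH"] = r"C:\certs\corporate-root-ca.pem"

from google.analytics.data_v1beta import BetaAnalyticsDataClient

client = BetaAnalyticsDataClient()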
Need help solving an issue with Superset (version 1.0.1) running on Docker.
Sending reports (dashboards) by email is configured. Small dashboards are sent fine.
But sending several reports ends with this error:
Report Schedule execution failed when generating a screenshot.
[05/Oct/2021:16:00:06 +0300] "GET /static/assets/5d82d1b53c008164c101.chunk.js HTTP/1.1" 200 308964 "http://192.168.90.132:8088/superset/dashboard/49/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
Message: Failed to convert data to an object
[2021-10-05 16:00:08,431: ERROR/ForkPoolWorker-7] Message: Failed to convert data to an object
[2021-10-05 16:00:13,752: WARNING/ForkPoolWorker-7] Message: Failed to decode response from marionette
, retrying in 0 seconds...
[2021-10-05 16:00:13,754: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
[2021-10-05 16:00:13,755: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
[2021-10-05 16:00:13,756: WARNING/ForkPoolWorker-7] Message: Tried to run command without establishing a connection
, retrying in 0 seconds...
Failed at generating thumbnail 'WebDriver' object has no attribute 'screenshot'
[2021-10-05 16:00:13,758: ERROR/ForkPoolWorker-7] Failed at generating thumbnail 'WebDriver' object has no attribute 'screenshot'
Report Schedule execution failed when generating a screenshot.
Traceback (most recent call last):
File "/app/superset/utils/celery.py", line 50, in session_scope
yield session
File "/app/superset/reports/commands/execute.py", line 374, in run
raise ex
File "/app/superset/reports/commands/execute.py", line 371, in run
session, self._model, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 344, in run
self._session, self._report_schedule, self._scheduled_dttm
File "/app/superset/reports/commands/execute.py", line 261, in next
raise ex
File "/app/superset/reports/commands/execute.py", line 257, in next
self.send()
File "/app/superset/reports/commands/execute.py", line 195, in send
notification_content = self._get_notification_content()
File "/app/superset/reports/commands/execute.py", line 175, in _get_notification_content
screenshot_data = self._get_screenshot()
File "/app/superset/reports/commands/execute.py", line 167, in _get_screenshot
raise ReportScheduleScreenshotFailedError()
superset.reports.commands.exceptions.ReportScheduleScreenshotFailedError: Report Schedule execution failed when generating a screenshot.
Report state: Report Schedule execution failed when generating a screenshot.
[2021-10-05 16:00:13,776: INFO/ForkPoolWorker-7] Report state: Report Schedule execution failed when generating a screenshot.
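"Failed to decode response from marionette" means the headless Firefox that renders the dashboard died mid-screenshot, which for large dashboards is commonly a render timeout or memory limit in the worker container. A sketch of the superset_config.py settings the report screenshotter uses (the key names come from Superset's config; the values here are illustrative assumptions, and some keys depend on the Superset version):

# superset_config.py
WEBDRIVER_TYPE = "firefox"                   # driver the report worker launches
WEBDRIVER_BASEURL = "http://superset:8088/"  # how the worker reaches the web app
# Give large dashboards more time to render before the screenshot is taken.
SCREENSHOT_LOCATE_WAIT = 100
SCREENSHOT_LOAD_WAIT = 600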
I am running a process on Apache Airflow that has a loop in which it reads data from an MSSQL database, adds two columns, and writes the data to another MSSQL database. I am using MsSqlHook to connect to both databases.
The process usually runs fine with a loop that reads and loads the data, but sometimes, after some successful data writes, I get the following error message:
ERROR - (20009, b'DB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist (SOURCE_DB.database.windows.net:PORT)\nNet-Lib error during Connection timed out (110)\nDB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist (SOURCE_DB.database.windows.net:PORT)\nNet-Lib error during Connection timed out (110)\n')
Traceback (most recent call last):
File "src/pymssql.pyx", line 636, in pymssql.connect
File "src/_mssql.pyx", line 1957, in _mssql.connect
File "src/_mssql.pyx", line 676, in _mssql.MSSQLConnection.__init__
File "src/_mssql.pyx", line 1683, in _mssql.maybe_raise_MSSQLDatabaseException
_mssql.MSSQLDatabaseException: (20009, b'DB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist (SOURCE_DB.database.windows.net:PORT)\nNet-Lib error during Connection timed out (110)\nDB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist (SOURCE_DB.database.windows.net:PORT)\nNet-Lib error during Connection timed out (110)\n')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python_operator.py", line 113, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python_operator.py", line 118, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/local/airflow/dags/DAG_NAME.py", line 156, in readWriteData
df = readFromSource(query)
File "/usr/local/airflow/dags/MX_CENT_SAMS_EXIT_APP_ITMS_MIGRATION.py", line 112, in readFromSource
df = mssql_hook.get_pandas_df(sql=query)
File "/usr/local/lib/python3.7/site-packages/airflow/hooks/dbapi_hook.py", line 99, in get_pandas_df
with closing(self.get_conn()) as conn:
File "/usr/local/lib/python3.7/site-packages/airflow/hooks/mssql_hook.py", line 48, in get_conn
port=conn.port)
File "src/pymssql.pyx", line 642, in pymssql.connect
I am guessing this is because the connection to the source database is unstable, and whenever it is interrupted the process can't reestablish it. Is there a way to pause or make the process wait if the source connection becomes unavailable?
This is my current code:
def readFromSource(query):
    """
    Args: query --> Query to be executed
    Returns: Dataframe with source tables data
    """
    print("Executing readFromSource()")
    mssql_hook = MsSqlHook(mssql_conn_id=SRC_CONN)
    mssql_hook.autocommit = True
    df = mssql_hook.get_pandas_df(sql=query)
    print(f"Source rows: {df.shape[0]}")
    print("readFromSource() execution completed")
    return df

def writeToTarget(df):
    print("Executing writeToTarget()")
    try:
        fast_sql_conn = FastMSSQLConnection(TGT_CONN)
        tgt_conn = fast_sql_conn.getConnection()
        with closing(tgt_conn) as conn:
            df.to_sql(
                name=TGT_TABLE,
                schema='dbo',
                con=conn,
                chunksize=CHUNK_SIZE,
                method='multi',
                index=False,
                if_exists='append'
            )
    except Exception as e:
        print("Error while loading data to target: " + str(e))
    print("writeToTarget() execution completed")

def readWriteData(*op_args, **context):
    """Loads info to target table"""
    print("Executing readWriteData()")
    partition_column_list = context['ti'].xcom_pull(
        task_ids='getPartitionColumnList')
    parallelProcParams = context['ti'].xcom_pull(
        task_ids='setParallelProcessingParams')
    range_start = parallelProcParams['i'][op_args[0]][0]
    range_len = parallelProcParams['i'][op_args[0]][1]
    for i in range(range_start, range_start + range_len):
        filter_ = partition_column_list[i]
        print(f"Executing for audititemid: {filter_}")
        query = SRC_QUERY + ' and audititemid = ' + str(filter_).replace("[", "").replace("]", "")  # a exit app
        df = readFromSource(query)
        df = df.rename(columns={"createdate": "CREAT_DATE", "scannedqty": "SCANNED_QTY", "audititemid": "AUDT_ITM_ID", "auditid": "AUDT_ID", "upc": "UPC", "itemnbr": "ITM_NBR", "txqty": "TXNS_QTY", "displayname": "DSPLY_NAME", "unitprice": "UNIT_PRICE", "cancelled": "CNCL"})
        df['LOADG_CHNNL'] = 'Airflow Exit App DB'
        df['LOADG_DATE'] = datetime.now()
        writeToTarget(df)
    print("readWriteData() execution completed")
You could split the task in two:
Read from DB and persist
Read persisted data and write to DB
The first task will read the data, transform it, and persist it (e.g., on the local disk). The second one will read the persisted data and write it to the DB using a transaction. For the second task, set the number of retries as needed (see the sketch below).
Now, if the connection times out, the second task will fail, the changes to the DB will be rolled back, and Airflow will retry the task as many times as you set.
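A minimal sketch of that split, reusing the readFromSource() and writeToTarget() helpers from the question; the parquet path, task ids, and retry settings are illustrative assumptions, and dag is the DAG object defined elsewhere in the file. Note that writeToTarget() currently catches every exception and only prints it, so for Airflow's retries to kick in, the load step must let the exception propagate:

from datetime import timedelta
import pandas as pd
from airflow.operators.python_operator import PythonOperator

def extractAndPersist(**context):
    # Task 1: read and transform, then persist to local disk
    # (pandas needs pyarrow or fastparquet installed for parquet I/O).
    df = readFromSource(SRC_QUERY)
    df.to_parquet('/tmp/extract.parquet', index=False)

def loadPersisted(**context):
    # Task 2: read persisted data and write it to the target DB.
    # Let any exception propagate so the task fails and gets retried.
    df = pd.read_parquet('/tmp/extract.parquet')
    writeToTarget(df)

extract = PythonOperator(
    task_id='extractAndPersist',
    python_callable=extractAndPersist,
    provide_context=True,
    dag=dag)

load = PythonOperator(
    task_id='loadPersisted',
    python_callable=loadPersisted,
    provide_context=True,
    retries=5,                         # retry the write as needed
    retry_delay=timedelta(minutes=2),
    dag=dag)

extract >> load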
I am trying to use Neo4j from the Python driver. I implemented a program that frequently exchanges data between Neo4j and Python, and each iteration of the program is independent. The program runs perfectly when no parallel processing is used. I then tried to parallelize these independent iterations using Python's multiprocessing. I have a 24-core machine, so I can run quite a few processes. Even in parallel execution the program runs fine with up to 5 processes; for any number greater than that, 90% of the time I get the following error.
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 828, in close
self.sync()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 793, in sync
self.session.sync()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 538, in sync
detail_count, _ = self._connection.sync()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 526, in sync
self.send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 388, in send
self._send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 408, in _send
self.socket.sendall(data)
File "/usr/local/lib/python3.7/ssl.py", line 1015, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 984, in send
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-4-616f793afd51>", line 9, in func
d2=run_query(streaming_query)
File "<ipython-input-2-01a2f4205218>", line 6, in run_query
result = session.read_transaction(lambda tx: tx.run(streaming_query))
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 710, in read_transaction
return self._run_transaction(READ_ACCESS, unit_of_work, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 686, in _run_transaction
tx.close()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 835, in close
self.session.commit_transaction()
File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 630, in commit_transaction
self._connection.sync()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 526, in sync
self.send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 388, in send
self._send()
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 408, in _send
self.socket.sendall(data)
File "/usr/local/lib/python3.7/ssl.py", line 1015, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 984, in send
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
"""
The above exception was the direct cause of the following exception:
BrokenPipeError Traceback (most recent call last)
<ipython-input-5-da15b33c8ad4> in <module>
7 pool = multiprocessing.Pool(processes=num_processes)
8 start = time.time()
----> 9 result = pool.map(func, chunks)
10 end = time.time()
11 print(end-start)
/usr/local/lib/python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
288 in a list that is returned.
289 '''
--> 290 return self._map_async(func, iterable, mapstar, chunksize).get()
291
292 def starmap(self, func, iterable, chunksize=None):
/usr/local/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
681 return self._value
682 else:
--> 683 raise self._value
684
685 def _set(self, i, obj):
BrokenPipeError: [Errno 32] Broken pipe
Also, I am receiving the following warnings:
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to read from defunct connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687))
Failed to write data to connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687)); ("32; 'Broken pipe'")
Transaction failed and will be retried in 1.1551515321361832s (Failed to write to closed connection Address(host='127.0.0.1', port=7687) (Address(host='127.0.0.1', port=7687)))
The Neo4j server debug log is as follows:
2020-04-09 13:07:26.033+0000 INFO [o.n.l.i.StoreLogService] Rotated internal log file
2020-04-09 13:08:16.724+0000 ERROR [o.n.b.t.p.HouseKeeper] Fatal error occurred when handling a client connection: [id: 0xdb5b2521, L:/127.0.0.1:7687 ! R:/127.0.0.1:58086] javax.net.ssl.SSLException: bad record MAC
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: bad record MAC
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLException: bad record MAC
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1709)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:970)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:896)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:766)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:295)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1301)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1203)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1247)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
... 17 more
Caused by: javax.crypto.BadPaddingException: bad record MAC
at sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:238)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:963)
... 26 more
A few points I would like to mention:
I tried to use a single driver for the whole program, setting up sessions as and when required.
I tried to use one driver per process, and I am still facing the issue.
This sounds odd, but I also tried to set up a driver whenever a DB call was required and close it immediately after fetching the data. With this approach I do not hit the broken pipe error, but connections start being refused once the connection limit is reached.
I want to know the ideal way to set up a driver. Please also help me fix these issues.
Thanks
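One pattern worth noting: a broken pipe on the client and "bad record MAC" on the server are classic symptoms of a TLS socket being shared across forked processes, and multiprocessing.Pool forks the parent, so a driver (and its pooled connections) created before the pool is inherited by every worker. A minimal sketch that instead creates one driver per worker process via a pool initializer; the URI, credentials, query, and work items are placeholder assumptions:

import multiprocessing
from neo4j import GraphDatabase

streaming_query = "MATCH (n) RETURN count(n)"  # placeholder query
chunks = list(range(8))                        # placeholder work items

worker_driver = None

def init_worker():
    # Runs once inside each worker process, after the fork, so no
    # Bolt/TLS socket is ever inherited from the parent process.
    global worker_driver
    worker_driver = GraphDatabase.driver(
        "bolt://localhost:7687", auth=("neo4j", "password"))

def func(chunk):
    with worker_driver.session() as session:
        # .data() materializes results before the transaction closes
        return session.read_transaction(lambda tx: tx.run(streaming_query).data())

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=8, initializer=init_worker)
    result = pool.map(func, chunks)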
I deployed the quickstart tutorial based on the "daml-on-fabric" example (https://github.com/hacera/daml-on-fabric) and then tried to deploy the ping-pong example from dazl (https://github.com/digital-asset/dazl-client/tree/master/samples/ping-pong). The bots from the example work fine on a DAML ledger. However, when I deploy the example on Fabric, the bots are unable to send transactions. Everything works fine when following the README at https://github.com/hacera/daml-on-fabric/blob/master/README.md, and the smart contract appears to be deployed on Fabric. The error occurs when I try to use the bots from the ping-pong Python files (https://github.com/digital-asset/dazl-client/blob/master/samples/ping-pong/README.md).
I receive this error:
[ ERROR] 2020-03-10 15:40:57,475 | dazl | A command submission failed!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
>
[ ERROR] 2020-03-10 15:40:57,476 | dazl | An event handler in a bot has thrown an exception!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/bots.py", line 157, in _handle_event
await handler.callback(new_event)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
From the error message it looks like the parties defined in the quickstart example have not been allocated on the ledger, hence the "Party not known on ledger" error.
You can follow the steps in https://docs.daml.com/deploy/index.html using daml deploy --host <host> --port <port>, which will both upload the DARs and allocate the parties on the ledger.
You can also run just the party-allocation command, daml ledger allocate-parties, which will allocate the parties defined in your daml.yaml.
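For example, assuming the ledger's gRPC API is reachable on localhost:6865 (the host and port are assumptions):

daml deploy --host localhost --port 6865
daml ledger allocate-parties --host localhost --port 6865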