Python: paramiko SCPException connection timeout error

From a Windows server I have a Python script that connects to a remote Linux host and transfers some data via SSH/SCP. The script is scheduled to execute every morning via the Windows Task Scheduler of the local server.
The problem I have is that sometimes (not always, strangely enough, and in recent days this happens more often) the execution never completes, as I get a connection timeout error. From the log of the script:
Traceback (most recent call last):
  File "D:\App\Anaconda3\lib\site-packages\paramiko\channel.py", line 665, in recv
    out = self.in_buffer.read(nbytes, self.timeout)
  File "D:\App\Anaconda3\lib\site-packages\paramiko\buffered_pipe.py", line 160, in read
    raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 314, in _recv_confirm
    msg = self.channel.recv(512)
  File "D:\App\Anaconda3\lib\site-packages\paramiko\channel.py", line 667, in recv
    raise socket.timeout()
socket.timeout

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "script.py", line 235, in <module>
    copy_file_to_remote(LOCAL_FOLDER, file_path, DESTINATION_FOLDER, ssh)
  File "script.py", line 184, in copy_file_to_remote
    scp.put(win_path, linux_path)
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 154, in put
    self._send_files(files)
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 255, in _send_files
    self._recv_confirm()
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 316, in _recv_confirm
    raise SCPException('Timeout waiting for scp response')
scp.SCPException: Timeout waiting for scp response
My question is whether it is possible to raise the connection timeout limit in the SSH/SCP functions used in the script, or, in general, how I can make the script re-establish the connection and keep it open with keepalives or something similar.
It would also be nice if there were a way to know on which side of the connection the problem lies, the local server or the remote machine. This would help a lot with troubleshooting. Any ideas/help very much appreciated!
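Two knobs that may help, sketched below under the assumption that the script uses paramiko's SSHClient together with scp.py's SCPClient (which is what the traceback suggests); the host name, user name, and paths are placeholders:

import paramiko
from scp import SCPClient

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('remote-linux-host', username='user', timeout=30)  # placeholder host/user

# Ask paramiko to send an SSH keepalive every 30 seconds, so idle
# firewall/NAT entries between the two machines are not dropped mid-transfer.
ssh.get_transport().set_keepalive(30)

# scp.py's socket_timeout (default 10 s) is what ultimately raises the
# SCPException above; a larger value gives a slow link more time to
# acknowledge each file.
with SCPClient(ssh.get_transport(), socket_timeout=60.0) as scp:
    scp.put('C:/data/file.csv', '/remote/folder/')  # placeholder paths

As for telling which side is at fault: enabling paramiko's transport log with paramiko.util.log_to_file('ssh.log') before connecting shows whether the whole SSH transport drops or only the scp confirmation stalls, and keepalives that stop being answered point at the remote machine or the network in between.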

Related

How to integrate and mock Redshift and S3 locally using redshift-fake-driver

I would like to run Redshift and S3 locally and use them for tasks run from Airflow, tools, etc., to reduce CI/CD code when deploying to dev, and also to avoid conflicts over resources, files, and so on.
I can currently use LocalStack's S3, but for Redshift I am still looking for solutions and have only found the combination of redshift-fake-driver with the Python package JayDeBeApi, and it does not seem to work properly:
import jpype  # JPype1==1.4.1
import jaydebeapi  # JayDeBeApi==1.2.3

jars = "/Users/trancongminh/Downloads/jars/*"
jpype.startJVM(classpath=jars)
driverName = "jp.ne.opt.redshiftfake.postgres.FakePostgresqlDriver"
print(jpype.JClass(driverName))

# as I spin up a Docker container for PostgreSQL
connectionString = "jdbc:postgresqlredshift://localhost:5432/docker"
uid = "docker"
pwd = "docker"
driverFileName = "/Users/trancongminh/Downloads/jars/redshift-fake-driver_2.12-1.0.15.jar"
conn = jaydebeapi.connect(
    jclassname=driverName,
    url=connectionString,
    driver_args={'user': uid, 'password': pwd},
    jars=driverFileName,
)
curs = conn.cursor()
curs.execute("SELECT * FROM pg_catalog.pg_tables limit 10;")
curs.fetchall()
curs.execute("copy db_table_name_v2 from 'http://localhost:4566/events-streaming/traveller/v2/ym_202210/d_04/hm_131901.parquet' CREDENTIALS 'aws_access_key_id=test;aws_secret_access_key=test' ")
But I get errors like "No such file or directory", or something like this:
Traceback (most recent call last):
  File "FakeConnection.scala", line 31, in jp.ne.opt.redshiftfake.FakeConnection.prepareStatement
Exception: Java Exception

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 531, in execute
    self._prep = self._connection.jconn.prepareStatement(operation)
java.lang.NoSuchMethodError: java.lang.NoSuchMethodError: 'void scala.util.parsing.combinator.Parsers.$init$(scala.util.parsing.combinator.Parsers)'
or maybe like this:
Traceback (most recent call last):
  File "FakePreparedStatement.scala", line 138, in jp.ne.opt.redshiftfake.FakePreparedStatement$FakeAsIsPreparedStatement.execute
Exception: Java Exception

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 534, in execute
    is_rs = self._prep.execute()
org.postgresql.util.PSQLException: org.postgresql.util.PSQLException: ERROR: could not open file "s3://events-streaming/traveller/v2/ym_202210/d_04/hm_131901.parquet" for reading: No such file or directory
  Hint: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 536, in execute
    _handle_sql_exception()
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 165, in _handle_sql_exception_jpype
    reraise(exc_type, exc_info[1], exc_info[2])
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 57, in reraise
    raise value.with_traceback(tb)
  File "/Users/trancongminh/Pelago/pelago-ds-env/lib/python3.9/site-packages/jaydebeapi/__init__.py", line 534, in execute
    is_rs = self._prep.execute()
jaydebeapi.DatabaseError: org.postgresql.util.PSQLException: ERROR: could not open file "s3://events-streaming/traveller/v2/ym_202210/d_04/hm_131901.parquet" for reading: No such file or directory
  Hint: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy
If anybody has experience with this pattern, please help, thanks.
Solutions or keywords that are helpful for further investigation are also welcome.
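One direction worth checking (an assumption on my part, not something confirmed here): a NoSuchMethodError on scala.util.parsing.combinator.Parsers.$init$ is the classic symptom of mixed Scala binary versions on the JVM classpath; redshift-fake-driver_2.12-1.0.15.jar needs scala-library and scala-parser-combinators jars built for Scala 2.12 next to it. A quick way to eyeball what the JVM actually receives, reusing the jars directory from the snippet above:

import glob
import os

# List every jar passed to startJVM; a _2.11 (or missing) scala-parser-combinators
# next to the _2.12 fake driver would explain the Parsers.$init$ error.
for jar in sorted(glob.glob("/Users/trancongminh/Downloads/jars/*.jar")):
    print(os.path.basename(jar))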

Can't connect to osquery daemon using Python

I am trying to use osquery's evented tables from Python, but I am getting an exception. How can I use evented tables?
import osquery

if __name__ == "__main__":
    # raw string so the backslashes in the pipe path are taken literally
    instance = osquery.ExtensionClient(r'\\.\pipe\osquery.em')
    instance.open()
    while True:
        client = instance.extension_client()
        results = client.query("SELECT * FROM ntfs_journal_events;")
        if results.response:
            print(results.response)
            break
    instance.connection = None
The error I am getting is:
Traceback (most recent call last):
  File "C:\Users\Yash\OneDrive - Incrux Technologies Private Limited\Desktop\Incrux\osquery3.py", line 11, in <module>
    results=client.query("SELECT * FROM ntfs_journal_events;")
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\extensions\ExtensionManager.py", line 181, in query
    self.send_query(sql)
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\extensions\ExtensionManager.py", line 190, in send_query
    self._oprot.trans.flush()
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\thrift\transport\TTransport.py", line 179, in flush
    self.__trans.write(out)
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\TPipe.py", line 126, in write
    raise TTransportException(
thrift.transport.TTransport.TTransportException: Called read on non-open pipe
"Called read on non-open pipe" sounds like osquery isn't listening on that pipe. Is osquery running? Are you sure that's the socket path?
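If the daemon may not be running at all, one way to take the pipe path out of the equation is to spawn a dedicated instance through the osquery Python bindings; a minimal sketch, assuming the osqueryd binary is installed and on PATH:

import osquery

# Spawn and manage a standalone osqueryd process instead of attaching to an
# existing daemon's extension pipe.
instance = osquery.SpawnInstance()
instance.open()  # starts osqueryd and connects to its extension socket
results = instance.client.query("SELECT * FROM time;")
print(results.response)

Note that evented tables such as ntfs_journal_events only fill up while a daemon runs with eventing enabled, so a freshly spawned instance mainly helps to verify connectivity.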

How to fix the error 'TypeError: can't pickle time objects'?

I am using the OpenOPC library to read data from an OPC server ('Matrikon OPC Simulation Server'); when I try to read the data, it gives me the following error:
TypeError: can't pickle time objects
The code I use is the following, I run it from the python console.
CODE:
import OpenOPC
opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation')
opc.read('Random.Int4')
The error appears when I run the line opc.read('Random.Int4').
This is the complete error:
Traceback (most recent call last):
  File "C:\Python27\Lib\multiprocessing\queues.py", line 264, in _feed
    send(obj)
TypeError: can't pickle time objects

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Users\User\PycharmProjects\OPC2\venv\lib\site-packages\OpenOPC.py", line 625, in read
    return list(results)
  File "C:\Users\User\PycharmProjects\OPC2\venv\lib\site-packages\OpenOPC.py", line 543, in iread
    raise TimeoutError('Callback: Timeout waiting for data')
TimeoutError: Callback: Timeout waiting for data
I solved this issue by adding sync=True when calling opc.read(); the synchronous read path apparently avoids the callback mechanism whose data fails to pickle across the multiprocessing queue in the first traceback.
CODE:
import OpenOPC
opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation')
opc.read('Random.Int4', sync=True)
Reference: mkwiatkowski/openopc

D-Bus - 'ServiceUnknown' exception encountered while calling a remote procedure

I'm trying to call the remote procedure DisplayFolderAndSelect() of the Thunar file manager from my own program:
import dbus
bus = dbus.SessionBus()
obj = bus.get_object('org.xfce.Thunar', '/org/xfce/FileManager')
iface = dbus.Interface(obj, 'org.xfce.FileManager')
_thunar_display_folder_and_select = iface.get_dbus_method('DisplayFolderAndSelect')
_thunar_display_folder_and_select('~/Downloads/', 'doc.pdf', '', '')
However I've encountered the following exception at runtime:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 70, in __call__
    return self._proxy_method(*args, **keywords)
  File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 145, in __call__
    **keywords)
  File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name :1.576 was not provided by any .service files
I'm unable to understand what this exception means or what's causing it. Any thoughts?
I think it is an OS-related issue; restarting the D-Bus service solved the problem.
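If it happens again, a quick check from the calling side can tell whether the name is owned at all before the method call; a sketch, assuming the same session bus as above (the ServiceUnknown message naming a unique name like :1.576 suggests the proxy was bound to a Thunar instance that has since exited):

import dbus

bus = dbus.SessionBus()
# Ask the bus whether the well-known name currently has an owner; calling
# a method on a proxy whose owner is gone raises ServiceUnknown.
if bus.name_has_owner('org.xfce.Thunar'):
    obj = bus.get_object('org.xfce.Thunar', '/org/xfce/FileManager')
    iface = dbus.Interface(obj, 'org.xfce.FileManager')
else:
    print('Thunar is not on the session bus; start it or let D-Bus activate it.')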

Dask Distributed: Getting some errors after computations

I am running Dask Distributed on Linux CentOS 7, with a Python 3.6.2 installation. My computation seems to be going fine (I am still improving my code, but I am able to get some results), but I keep getting some Python errors apparently linked to the tornado module. I am only launching a one-node standalone Dask distributed cluster.
Here is the most common example:
Exception in thread Client loop:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/site-packages/tornado/ioloop.py", line 832, in start
    self._run_callback(self._callbacks.popleft())
AttributeError: 'NoneType' object has no attribute 'popleft'
And here is another one:
tornado.application - ERROR - Exception in callback <bound method WorkStealing.balance of <distributed.stealing.WorkStealing object at 0x7f752ce6d6a0>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tornado/ioloop.py", line 1026, in _run
    return self.callback()
  File "/usr/local/lib/python3.6/site-packages/distributed/stealing.py", line 248, in balance
    sat = s.rprocessing[key]
KeyError: 'read-block-9024000000-e3fefd2110094168cc0505db69b326e0'
Do you have any idea why? Should I close some connections or stop the standalone cluster?
Yes, if you don't close down the Tornado IOLoop before exiting the process, it can die in an unpleasant way. Fortunately, this shouldn't affect your application, except by looking unpleasant.
You might submit a bug report about this; it's still something that we should fix.
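A minimal sketch of the clean-shutdown pattern implied above, assuming dask.distributed's Client in a one-node standalone setup as in the question:

from dask.distributed import Client

client = Client()  # local one-node cluster
try:
    # ... submit work and gather results here ...
    pass
finally:
    # Closing the client stops its IOLoop thread in an orderly way, so the
    # interpreter does not tear tornado down mid-callback at exit.
    client.close()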
