Can't connect to osquery daemon using Python - python

I am trying to use osquery's evented tables from Python, but I am getting an exception. How can I use evented tables?
import osquery

if __name__ == "__main__":
    instance = osquery.ExtensionClient('\\.\pipe\osquery.em')
    instance.open()
    while True:
        client = instance.extension_client()
        results = client.query("SELECT * FROM ntfs_journal_events;")
        if results.response:
            print(results.response)
            break
    instance.connection = None
The error I am getting is:
Traceback (most recent call last):
  File "C:\Users\Yash\OneDrive - Incrux Technologies Private Limited\Desktop\Incrux\osquery3.py", line 11, in <module>
    results = client.query("SELECT * FROM ntfs_journal_events;")
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\extensions\ExtensionManager.py", line 181, in query
    self.send_query(sql)
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\extensions\ExtensionManager.py", line 190, in send_query
    self._oprot.trans.flush()
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\thrift\transport\TTransport.py", line 179, in flush
    self.__trans.write(out)
  File "C:\Users\Yash\AppData\Local\Programs\Python\Python310\lib\site-packages\osquery\TPipe.py", line 126, in write
    raise TTransportException(
thrift.transport.TTransport.TTransportException: Called read on non-open pipe

"Called read on non-open pipe" sounds like osquery isn't listening on that pipe. Is osquery running? Are you sure that's the socket path?
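Two things are worth checking on top of that. In a regular Python string, '\\.\pipe\osquery.em' collapses the leading '\\' into a single backslash, so the client dials the wrong pipe name; a raw string avoids that. Also, evented tables such as ntfs_journal_events only return rows when osqueryd runs with events enabled (--disable_events=false). Below is a minimal retry sketch under those assumptions; wait_for_daemon is a hypothetical helper, and whether open() raises immediately on a dead pipe may depend on the transport:

import time
import osquery
from thrift.transport.TTransport import TTransportException

# Raw string: without the r prefix, '\\.\pipe\osquery.em' loses one of the
# two leading backslashes and no longer names the pipe.
PIPE_PATH = r'\\.\pipe\osquery.em'

def wait_for_daemon(path, attempts=5, delay=2.0):
    # Hypothetical helper: retry attaching to osqueryd's extension socket.
    for _ in range(attempts):
        instance = osquery.ExtensionClient(path)
        try:
            instance.open()
            return instance
        except TTransportException:
            time.sleep(delay)
    raise RuntimeError("osqueryd not reachable on %r; check that the daemon "
                       "is running and --extensions_socket matches" % path)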

Related

Gunicorn and shared dictionary on REST API: "Ran out of input" error on high load

I am using a manager.dict to synchronize some data between multiple workers of an API served with Gunicorn (with Meinheld workers). While this works fine for a few concurrent queries, it breaks when I fire about 100 queries simultaneously at the API, and I am shown the following stack trace:
2020-07-16 12:35:38,972-app.api.my_resource-ERROR-140298393573184-on_post-175-Ran out of input
Traceback (most recent call last):
  File "/app/api/my_resource.py", line 163, in on_post
    results = self.do_something(a, b, c, **d)
  File "/app/user_data/data_lookup.py", line 39, in lookup_something
    return (a in self._shared_dict
  File "<string>", line 2, in __contains__
  File "/usr/local/lib/python3.6/multiprocessing/managers.py", line 757, in _callmethod
    kind, result = conn.recv()
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
EOFError: Ran out of input
2020-07-16 12:35:38,972-app.api.my_resource-ERROR-140298393573184-on_post-175-unpickling stack underflow
Traceback (most recent call last):
  File "/app/api/my_resource.py", line 163, in on_post
    results = self.do_something(a, b, c, **d)
  File "/app/user_data/data_lookup.py", line 39, in lookup_something
    return (a in self._shared_dict
  File "<string>", line 2, in __contains__
  File "/usr/local/lib/python3.6/multiprocessing/managers.py", line 757, in _callmethod
    kind, result = conn.recv()
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
_pickle.UnpicklingError: unpickling stack underflow
My API framework is Falcon. I have a dictionary containing user data that can be updated via POST requests. The architecture should be simple, so I chose Manager.dict() from the multiprocessing package to store the data. When handling other queries, some input is checked against the contents of this dictionary (if a in self._shared_dict: ...). This is where the above-mentioned errors occur.
Why is this happening? It seems to be tied to the manager.dict. Also, when I debug in PyCharm, the debugger sometimes fails to evaluate any variables and just hangs indefinitely somewhere in multiprocessing code, waiting for data.
It seems to have something to do with the Meinheld workers. When I configure Gunicorn to use the default sync worker class, this error no longer occurs. Hence, Python multiprocessing and the Meinheld package seem not to work well together in my setting.
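For what it's worth, every operation on a Manager().dict() proxy is a pickled request/response over a connection to the manager process, which may be why a worker model that switches greenlets mid-read can leave a half-consumed stream and surface exactly these EOFError / UnpicklingError failures. A minimal sketch of the pattern described (names are placeholders, not the actual app code):

from multiprocessing import Manager

manager = Manager()           # spawns a separate server process owning the data
shared_dict = manager.dict()  # proxy object, safe to hand to workers

def update(key, value):
    # Proxied call: the request is pickled, sent to the manager process,
    # and the reply is unpickled from the same connection.
    shared_dict[key] = value

def lookup(key):
    # Each membership test is also a full send/recv round-trip; interleaved
    # reads on one connection would corrupt the pickle stream.
    return key in shared_dict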

Python: paramiko SCPException connection timeout error

From a Windows server I have a Python script that connects to a remote Linux host and transfers some data via SSH/SCP. The script is scheduled to execute every morning via the Windows Task Scheduler of the local server.
The problem I have is that sometimes (not always, strangely enough, and in recent days it has happened more often) the execution never completes, as I get a connection timeout error. From the log of the script:
Traceback (most recent call last):
  File "D:\App\Anaconda3\lib\site-packages\paramiko\channel.py", line 665, in recv
    out = self.in_buffer.read(nbytes, self.timeout)
  File "D:\App\Anaconda3\lib\site-packages\paramiko\buffered_pipe.py", line 160, in read
    raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 314, in _recv_confirm
    msg = self.channel.recv(512)
  File "D:\App\Anaconda3\lib\site-packages\paramiko\channel.py", line 667, in recv
    raise socket.timeout()
socket.timeout

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "script.py", line 235, in <module>
    copy_file_to_remote(LOCAL_FOLDER, file_path, DESTINATION_FOLDER, ssh)
  File "script.py", line 184, in copy_file_to_remote
    scp.put(win_path, linux_path)
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 154, in put
    self._send_files(files)
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 255, in _send_files
    self._recv_confirm()
  File "D:\App\Anaconda3\lib\site-packages\scp.py", line 316, in _recv_confirm
    raise SCPException('Timout waiting for scp response')
scp.SCPException: Timout waiting for scp response
My question is whether it is possible to increase the connection timeout limit in the ssh/scp functions used in the script, or in general how I can make my script re-establish the connection and keep it open with keepalives or something similar.
It would also be nice if there were a way to know on which side of the connection the problem lies, the local server or the remote machine. This would help a lot with troubleshooting. Any ideas/help very much appreciated!
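Both knobs exist in paramiko and scp.py. A minimal sketch, assuming scp.py's socket_timeout (10 seconds by default, and the timeout behind the PipeTimeout in the trace above) is what is expiring; hostname, credentials, and paths are placeholders:

import paramiko
from scp import SCPClient

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('linux-host.example.com', username='user', password='secret',
            timeout=30,          # TCP connect timeout
            banner_timeout=30)   # SSH banner exchange timeout

# Send keepalive packets every 30 s so an idle connection isn't dropped
# by a firewall or NAT between the two machines.
ssh.get_transport().set_keepalive(30)

# Raise the SCP response timeout well above the 10 s default.
with SCPClient(ssh.get_transport(), socket_timeout=120.0) as scp:
    scp.put('C:/data/report.csv', '/remote/folder/report.csv')

As for which side is at fault, a keepalive that stops being answered usually points at the network or the remote host; wrapping the put in a retry loop with logging on both machines is the pragmatic way to narrow it down.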

Getting "TypeError: 'NoneType' object is not iterable" while doing parallel ssh

I am trying to do parallel SSH on servers, but while doing this I am getting "TypeError: 'NoneType' object is not iterable". Kindly help.
My script is below
from pssh import ParallelSSHClient
from pssh.exceptions import AuthenticationException, UnknownHostException, ConnectionErrorException

def parallelsshjob():
    client = ParallelSSHClient(['10.84.226.72', '10.84.226.74'], user='root', password='XXX')
    try:
        output = client.run_command('racadm getsvctag', sudo=True)
        print output
    except (AuthenticationException, UnknownHostException, ConnectionErrorException):
        pass
    #print output

if __name__ == '__main__':
    parallelsshjob()
And the Traceback is below
Traceback (most recent call last):
  File "parallelssh.py", line 17, in <module>
    parallelsshjob()
  File "parallelssh.py", line 10, in parallelsshjob
    output = client.run_command('racadm getsvctag', sudo=True)
  File "/Library/Python/2.7/site-packages/pssh/pssh_client.py", line 520, in run_command
    raise ex
TypeError: 'NoneType' object is not iterable
Help me with the solution, and also suggest how to use ssh-agent in this same script. Thanks in advance.
From reading the code and debugging a bit on my laptop, I believe the issue is that you don't have a file called ~/.ssh/config. It seems that parallel-ssh has a dependency on OpenSSH configuration, and this is the error you get when that file is missing.
read_openssh_config returns None here: https://github.com/pkittenis/parallel-ssh/blob/master/pssh/utils.py#L79
In turn, SSHClient.__init__ blows up when trying to unpack the values it expects to receive: https://github.com/pkittenis/parallel-ssh/blob/master/pssh/ssh_client.py#L97.
The fix is presumably to get some sort of OpenSSH config file in place, but I'm sorry to say I know nothing about that.
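If the missing-file theory is right, even a near-empty file may be enough to satisfy read_openssh_config. Something like the following minimal ~/.ssh/config might do (an untested guess, not a recommendation from the library docs):

Host *
    User root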
EDIT
After cleaning up some of parallel-ssh's exception handling, here's a better stack trace for the error:
Traceback (most recent call last):
  File "test.py", line 11, in <module>
    parallelsshjob()
  File "test.py", line 7, in parallelsshjob
    output = client.run_command('racadm getsvctag', sudo=True)
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 517, in run_command
    self.get_output(cmd, output)
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 601, in get_output
    (channel, host, stdout, stderr, stdin) = cmd.get()
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 480, in get
    self._raise_exception()
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 171, in _raise_exception
    reraise(*self.exc_info)
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 534, in run
    result = self._run(*self.args, **self.kwargs)
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 559, in _exec_command
    channel_timeout=self.channel_timeout)
  File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/ssh_client.py", line 98, in __init__
    host, config_file=_openssh_config_file)
TypeError: 'NoneType' object is not iterable
This was seemingly a regression in the 0.92.0 version of the library, which is now resolved in 0.92.1. Previous versions also work. OpenSSH config should not be a dependency.
To answer your SSH agent question: if there is one running and enabled in the user session, it gets used automatically. If you would prefer to provide a private key programmatically, you can do the following:
from pssh import ParallelSSHClient
from pssh.utils import load_private_key
pkey = load_private_key('my_private_key')
client = ParallelSSHClient(hosts, pkey=pkey)
You can also provide an agent with multiple keys programmatically, per below:
from pssh import ParallelSSHClient
from pssh.utils import load_private_key
from pssh.agent import SSHAgent
pkey = load_private_key('my_private_key')
agent = SSHAgent()
agent.add_key(pkey)
client = ParallelSSHClient(hosts, agent=agent)
See documentation for more examples.

kafka-python raises kafka.errors.ConsumerFetchSizeTooSmall

I'm writing a simple script in Python 2.7 that consumes messages from an Apache Kafka topic. The code is the following:
from kafka import SimpleConsumer, KafkaClient

group = "my_group_test"
client = KafkaClient('localhost:9092')
cons = SimpleConsumer(client, group, "my_topic")
messages = cons.get_messages(count=1000, block=False)
But it is raising this exception:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/simple.py", line 285, in get_messages
    update_offset=False)
  File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/simple.py", line 320, in _get_message
    self._fetch()
  File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/simple.py", line 425, in _fetch
    raise ConsumerFetchSizeTooSmall()
kafka.errors.ConsumerFetchSizeTooSmall
How could I modify this parameter (the consumer fetch size) in order to make this code work?
I found a solution: use the max_buffer_size parameter of SimpleConsumer.
The working code is:
# the size fits my need but is totally arbitrary
cons = SimpleConsumer(client, group, "my_topic", max_buffer_size=9932768)
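If I read the kafka-python source correctly, the old SimpleConsumer starts at buffer_size (4096 bytes by default) and doubles it whenever a fetch comes back too small, raising ConsumerFetchSizeTooSmall only once the cap at max_buffer_size is reached. So instead of guessing a large cap, one can also pass max_buffer_size=None to let the buffer grow without limit. A variant sketch with illustrative values:

from kafka import SimpleConsumer, KafkaClient

client = KafkaClient('localhost:9092')
# Start with a 1 MiB buffer and remove the growth cap entirely; the
# consumer doubles buffer_size as needed when max_buffer_size is None.
cons = SimpleConsumer(client, "my_group_test", "my_topic",
                      buffer_size=1024 * 1024,
                      max_buffer_size=None)
messages = cons.get_messages(count=1000, block=False)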

Trying to connect to a CoAP resource with a Python library

So I'm trying to connect to a CoAP resource using this Python library: https://github.com/chrysn/aiocoap . The library uses Python 3.4, and I have 3.4 installed and set as the interpreter to use with it (I'm on Windows 7, btw). Still, I'm getting this error message when executing the clientGET.py file. Same for the server file.
C:\Python34\python.exe C:/Learning/PyCoap/aiocoap/clientGET.py
Traceback (most recent call last):
  File "C:/Learning/PyCoap/aiocoap/clientGET.py", line 34, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "C:\Python34\lib\asyncio\base_events.py", line 268, in run_until_complete
    return future.result()
  File "C:\Python34\lib\asyncio\futures.py", line 277, in result
    raise self._exception
  File "C:\Python34\lib\asyncio\tasks.py", line 236, in _step
    result = next(coro)
  File "C:/Learning/PyCoap/aiocoap/clientGET.py", line 20, in main
    protocol = yield from Context.create_client_context()
  File "C:\Learning\PyCoap\aiocoap\aiocoap\protocol.py", line 510, in create_client_context
    transport, protocol = yield from loop.create_datagram_endpoint(protofact, family=socket.AF_INET6)
  File "C:\Python34\lib\asyncio\base_events.py", line 675, in create_datagram_endpoint
    waiter)
  File "C:\Python34\lib\asyncio\selector_events.py", line 68, in _make_datagram_transport
    address, waiter, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 911, in __init__
    super().__init__(loop, sock, protocol, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 452, in __init__
    self._extra['sockname'] = sock.getsockname()
OSError: [WinError 10022] Ein ungültiges Argument wurde angegeben ("an invalid argument was supplied")
Process finished with exit code 1
I didn't explore this in a real Python session, as I don't have a Windows machine with Python 3.4 handy, but it seems to me that this could be a bug in asyncio: its UDP socket creation probably simply doesn't work on Windows. Do some experimenting at the lower level, looking at what aiocoap is doing, and try to prove me wrong.
It's supposed to work; the documentation only mentions ProactorEventLoop as not supporting UDP.
The error condition is described in "Socket.error: Invalid Argument supplied".
aiocoap.protocol.Context.create_client_context() seems to be doing the right thing according to the asyncio documentation, but _SelectorTransport.__init__() will always call sock.getsockname() before any packets are sent, at which point the socket will not be bound to an address (according to the linked SO question) and getsockname() will fail on Windows.
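To test that theory without aiocoap in the way, calling getsockname() on a fresh, unbound UDP socket should be enough: if the analysis above is right, this one call reproduces WinError 10022 on Windows, while on Linux it simply returns ('::', 0, 0, 0). A sketch (untested on an actual Windows box):

import socket

# An unbound datagram socket, exactly what create_datagram_endpoint makes
# before any address is bound or packets are sent.
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
print(s.getsockname())  # expected to raise OSError [WinError 10022] on Windows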
You might want to retry with a current version of Python and aiocoap (the current development version, after 0.4a1). Windows used not to be supported in aiocoap at all, and it still does not support all of CoAP, but it now uses a socket implementation that is aware of some limitations in the Windows socket API.
