For starters, I would like to say if anyone can help here, you are incredible.
General Question
My Python program needs to interact with MSMQ. Basically, I want to peek at a queue, specifying a timeout if there's nothing in the queue.
However, despite my best efforts, I cannot get Peek() to wait out the timeout interval when the queue is empty. Can you please point out what is missing from this code?
My Current Code
Here is my code right now:
from socket import gethostname
import win32com.client
import pythoncom
import clr
clr.AddReference("System")
clr.AddReference("System.Messaging")
from System import TimeSpan
from System.Messaging import MessageQueue
# Source: [1]
# [1] https://learn.microsoft.com/en-us/previous-versions/windows/desktop/msmq/ms707027%28v%3dvs.85%29
MQ_DENY_NONE = 0x0
MQ_PEEK_ACCESS = 0x1
MQ_SEND_ACCESS = 0x2
# Set up queue
pythoncom.CoInitialize()
qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
qinfo.FormatName = f"direct=os:{gethostname()}\\PRIVATE$\\MyQueue"
queue = qinfo.Open(MQ_PEEK_ACCESS, MQ_DENY_NONE)
# Receive a value
timeout_sec = 1.0
timespan = TimeSpan.FromSeconds(timeout_sec)
label, body = "", ""
# TODO: the timeout value does not appear to work; Peek never waits
# when there's no message in the queue
if queue.Peek(pythoncom.Empty, pythoncom.Empty, timespan):
    msg = queue.Receive()  # blocking receive --> removes msg from the queue
    if msg is not None:
        label = msg.Label
        body = msg.Body
I ran inspect.getfullargspec(queue.Peek) and got:
FullArgSpec(args=['self', 'WantDestinationQueue', 'WantBody', 'ReceiveTimeout', 'WantConnectorType'], varargs=None, varkw=None, defaults=(<PyOleMissing object at 0x00000147F5D43BD0>, <PyOleMissing object at 0x00000147F5D43BD0>, <PyOleMissing object at 0x00000147F5D43BD0>, <PyOleMissing object at 0x00000147F5D43BD0>), kwonlyargs=[], kwonlydefaults=None, annotations={})
Things I've Tried
This question: passing ReceiveTimeout=timespan doesn't solve my problem.
Replacing pythoncom.Empty with pythoncom.Missing doesn't work either.
This unanswered question seems very similar to mine
I found this article: sending-msmq-messages-python
The article shows you how to send and receive a message using MSMQ. I don't understand why you can't just use standard socket-connection syntax: if I haven't received a packet/connection within the timeout, close the connection:
import select
mysocket.setblocking(0)
ready = select.select([mysocket], [], [], timeout_in_seconds)
if ready[0]:
    data = mysocket.recv(4096)
So something like that shouldn't be too difficult. Worst case, create a thread that checks every timeout_time whether a message counter is zero: if it's zero, close the queue because nothing was received; if it's greater than zero, reset it to zero and keep waiting for more messages. A rough sketch of that idea is below.
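To make that watchdog idea concrete, here is a rough, untested sketch; the counter name, the interval, and the Close() call are assumptions, and it reuses the queue object opened in the question:
import threading
import time

# Whatever loop calls queue.Receive() should bump messages_seen; if a whole
# interval passes with nothing received, close the queue and give up.
messages_seen = 0
timeout_time = 1.0  # seconds between checks

def watchdog():
    global messages_seen
    while True:
        time.sleep(timeout_time)
        if messages_seen == 0:
            queue.Close()   # nothing arrived during the interval
            break
        messages_seen = 0   # something arrived; reset and keep waiting

threading.Thread(target=watchdog, daemon=True).start()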
I also found a GitHub project about asynchronous MSMQ for Python logging:
asynchronous msmq
This one just calls Receive() in a while True loop:
dlopes7 msmq
import time
t_end = time.time() + 60 * 15
messages = 0
while time.time() < t_end or messages > 0:
    msg = queue.Receive()
    messages += 1
    if time.time() > t_end and messages > 0:
        messages = 0
        t_end = time.time() + 60 * 15
    print(f'Got Message from {queue_name}: {msg.Label} - {msg.Body}')
Not the answer you wanted but one that will work.
In the original question's comments, @PeterBrittain suggested trying just:
an integer (in milliseconds) for their timeout
I got around to trying that out, and actually, it worked! I found float values to work as well. Here is some sample Python code:
timeout_sec = 1.0
queue.Peek(pythoncom.Empty, pythoncom.Empty, timeout_sec * 1000)
Thank you, @PeterBrittain!
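For completeness, here is a minimal sketch of the full working pattern, reusing the queue opened in the question and assuming (as the truth test in the original code does) that Peek returns None once the timeout expires:
timeout_ms = 1000  # ReceiveTimeout is interpreted in milliseconds here
msg = queue.Peek(pythoncom.Empty, pythoncom.Empty, timeout_ms)
if msg is not None:
    msg = queue.Receive()  # blocking receive removes the message from the queue
    label, body = msg.Label, msg.Body
else:
    label, body = "", ""   # nothing arrived within the timeout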
Preface
I have an Akai LPD8 mk2 which I would like to interface from Python.
There are two projects that (I believe) provide interfaces for the predecessor (mk1):
https://github.com/zetof/LPD8
https://github.com/DrLuke/python-lpd8
I found I needed to make a few small changes to get the reading to work with these libs (*example below, not sure how relevant this is and it could easily be because I misunderstand something).
However, no matter what I tried, I could not get the sending to the device to work. Specifically, changing the state (on/off) of the pads of the device from Python. This state is indicated by the color of the pads' lighting.
Question
A minimal example of what I thought should work is the following:
With the very nice mido library, I would like to read a few state changes of the pads. Then send them back with a small delay (similar idea here: https://stackoverflow.com/a/29501455/655404):
from time import sleep
import mido
def get_ioport_name(search):
    # see footnote **
    names = mido.get_ioport_names()
    names = set(n for n in names if search in n)
    assert len(names) == 1
    return names.pop()

def get_messages(port):
    return list(port.iter_pending())
name = get_ioport_name("LPD8")
port = mido.open_ioport(name) # see footnote ***
# ensure that there are no messages in the queue
get_messages(port)
#pause
input("Press a few pads, then press Enter to continue...")
msgs = get_messages(port)
print(f"Recorded these messages:\n{msgs}\n")
print("Echoing these messages:")
for m in msgs:
    print(m)
    port.send(m)
    sleep(1)
This runs and gives no error messages. However, the expected color changes on the pads do not happen.
What am I doing wrong?
I have tried with different backends for mido (rtmidi, portmidi, pygame) as well as using rtmidi directly, all with the same result.
Footnotes
*Example for changes needed:
I had to change the constants block here https://github.com/zetof/LPD8/blob/master/lpd8/lpd8.py#L16 to:
NOTE_ON = 144+9
NOTE_OFF = 128+9
CTRL = 176
PGM_CHG = 192+9
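Those offsets are the status bytes for MIDI channel 10, i.e. channel=9 in mido's zero-indexed terms. Purely as an illustration (the pad note number and the use of velocity here are guesses, not taken from the mk2 documentation):
import mido

# 144 + 9 == 153 == 0x99: a note_on with channel=9 (MIDI channel 10) in mido terms
msg_on = mido.Message("note_on", channel=9, note=36, velocity=127)   # note 36 is a guess for pad 1
msg_off = mido.Message("note_off", channel=9, note=36, velocity=0)
print(msg_on.bytes())  # [153, 36, 127] -> status byte 153 == 144 + 9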
**Port names
names = mido.get_ioport_names() gives:
['Midi Through:Midi Through Port-0 14:0',
'LPD8 mk2:LPD8 mk2 MIDI 1 20:0',
'Midi Through:Midi Through Port-0 14:0',
'LPD8 mk2:LPD8 mk2 MIDI 1 20:0']
and set(names) gives:
{'LPD8 mk2:LPD8 mk2 MIDI 1 20:0', 'Midi Through:Midi Through Port-0 14:0'}
The result is also the same for mido.get_input_names() and mido.get_output_names().
***Ports in mido
From what I understand mido has three port classes:
open_input() # only receive
open_output() # only send
open_ioport() # both (and what I used above)
If I change from ioport to using an input/output pair, the result is the same:
port_in = mido.open_input(name)
port_out = mido.open_output(name)
# ensure that there are no messages in the queue
get_messages(port_in)
#pause
input("Press a few pads, then press Enter to continue...")
msgs = get_messages(port_in)
print(f"Recorded these messages:\n{msgs}\n")
print("Echoing these messages:")
for m in msgs:
    print(m)
    port_out.send(m)
    sleep(1)
We have a Python program that is designed to open an Excel template document, run RefreshAll() twice (data and pivots), then SaveAs() a new filename in an output directory.
When I log in to the server and run the program, everything works as required: both RefreshAll() steps update and the new file is saved with the updated data. This is the case no matter how the Visible flag is set (True or False).
When we set it as a step in an MS SQL job, the output file is created, but the updates are not done. I've been all over Stack Overflow and the internet in general, and have found no answers to solve this.
Using the combrowse.py script to look at the COM objects, we can see that Excel is in the "Running Objects" list when run while logged in, but it is NOT in the list when running via the MS SQL job (reduced to a single step in the job).
I looked at stdout to see if that might be an issue. My theory is that there is no console in which to operate when run via the job, and so Excel does not start (and RefreshAll() cannot run). I found that stdout uses UTF-8 encoding when run locally and cp1252 encoding when run via the job. I couldn't get anything more useful than that.
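A minimal sketch of that kind of check (the output path is only an illustration) is to dump the std-stream info to a file, so the interactive run can be compared with the job run, where stdout may not even exist:
import sys

with open(r'C:\temp\stdout_check.txt', 'a') as f:
    for name in ('stdin', 'stdout', 'stderr'):
        stream = getattr(sys, name, None)
        f.write('{}: {!r} encoding={!r}\n'.format(name, stream, getattr(stream, 'encoding', None)))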
Code snippet:
from AE import logging, encrypt
import os, sys, shutil, datetime, gc
import time
import win32com.client
script_name = 'PerSta'
log_id = logging.Start_script(script_name)
try:
    logging.Log_script_message(id = log_id, message = 'Opening excel')
    ExcelConn = win32com.client.DispatchEx("Excel.Application")
    logging.Log_script_message(id = log_id, message = 'ExcelConn is:')
    logging.Log_script_message(id = log_id, message = repr(ExcelConn))
    logging.Log_script_message(id = log_id, message = 'Opening {}'.format(script_name))
    PS = ExcelConn.Workbooks.Open(datadict.get('path') + datadict.get('filename'))
    ExcelConn.Interactive = False
    ExcelConn.Visible = False
    ExcelConn.DisplayAlerts = False
    ExcelConn.EnableEvents = False
    logging.Log_script_message(id = log_id, message = 'Refreshing excel first time')
    PS.RefreshAll()
    ExcelConn.CalculateUntilAsyncQueriesDone()
    time.sleep(pause_for_refresh)
    logging.Log_script_message(id = log_id, message = 'Refreshing excel second time')
    PS.RefreshAll()  # Refresh again to update any pivots
    ExcelConn.CalculateUntilAsyncQueriesDone()
    time.sleep(pause_for_refresh)
    logging.Log_script_message(id = log_id, message = 'Saving workbook')
    PS.SaveAs(Report)
    time.sleep(pause_for_refresh)
    logging.Log_script_message(id = log_id, message = 'Closing workbook')
    PS.Close(SaveChanges = True)
    time.sleep(pause_for_refresh)
    PS = None
    logging.Log_script_message(id = log_id, message = 'Closing filehandle')
    ExcelConn.Quit()
    ExcelConn = None
except:
    logging.Log_script_message(id = log_id, message = 'Refreshed failed, closing filehandle')
    PS.Close(SaveChanges = False)
    PS = None
    ExcelConn.Quit()
    ExcelConn = None
I believe the issue lies in Excel not having a screen/desktop to work with, but I have not been able to prove that. We get NO ERRORS at all, either way it is run. I would expect an error in the job scenario, since the program doesn't do what the log says it is doing, but that is not the case.
Any help would be much appreciated!!
--MIKE--
Edit: the Interactive, Visible, DisplayAlerts, and EnableEvents settings were put in as a test to see whether we could use them to fix the issue. They did not work, no matter how they were set, but I left them in in case they come up in discussion.
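One way to probe the "no screen/session" theory might be to log a few standard Excel Application properties (Version, Hwnd, Ready) right after the DispatchEx call, reusing the same logging helpers; this is only a sketch, not something we have run yet:
# Log whether the COM server reports itself as started under the job's session
logging.Log_script_message(id = log_id, message = 'Excel version: {}'.format(ExcelConn.Version))
logging.Log_script_message(id = log_id, message = 'Excel hwnd: {}'.format(ExcelConn.Hwnd))
logging.Log_script_message(id = log_id, message = 'Excel ready: {}'.format(ExcelConn.Ready))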
I'm trying to construct a man-in-the-middle attack on a webpage (i.e. HTTP traffic). I'm doing this by using a Linux machine attached to Ethernet and a client attached to the Linux box via its WiFi hotspot.
What I've done so far is use NFQueue from within the iptables Linux firewall to route all TCP packets on the FORWARD chain to an NFQueue queue, which a Python script then picks up and processes. I'm able to read the data off of the HTTP response packets, but whenever I try to modify them and pass them back (accept the packets), I get an error regarding the strings:
Exception AttributeError: "'str' object has no attribute 'build_padding'" in 'netfilterqueue.global_callback' ignored
My code is here, which includes things that I've tried that didn't work. Notably, I'm using a third-party extension for scapy called scapy_http that may be interfering with things, and I'm using a webpage that is not being compressed by gzip because that was messing with things as well. The test webpage that I'm using is here.
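For context, the iptables side of that FORWARD-chain setup is roughly the following sketch; the exact command isn't shown here, but the queue number has to match the nfqueue.bind(1, ...) call in the script:
import subprocess

# Hand all forwarded TCP packets to NFQUEUE number 1, i.e. roughly:
#   iptables -I FORWARD -p tcp -j NFQUEUE --queue-num 1
subprocess.run(
    ['iptables', '-I', 'FORWARD', '-p', 'tcp', '-j', 'NFQUEUE', '--queue-num', '1'],
    check=True,
)
With that in place, the script itself is: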
#scapy
from scapy.all import *
#nfqueue import
from netfilterqueue import NetfilterQueue
#scapy http extension, not really needed
import scapy_http.http
#failed gzip decoding, also tried some other stuff
#import gzip
def print_and_accept(packet):
    # convert the nfqueue datatype to a scapy-compatible packet
    pkt = IP(packet.get_payload())
    # is this an HTTP response?
    if pkt[TCP].sport == 80:
        # legacy trial that doesn't work
        # data = packet.get_data()
        print('HTTP Packet Found')
        # check what's in the payload
        stringLoad = str(pkt[TCP].payload)
        # deleted because printing stuff out clogs output
        # print(stringLoad)
        # we only want to modify a specific packet:
        if "<title>Acids and Bases: Use of the pKa Table</title>" in stringLoad:
            print('Target Found')
            # strings kind of don't work, I think this is a me problem
            # stringLoad.replace('>Acids and Bases: Use of the pK<sub>a</sub>', 'This page has been modified: a random ')
            # pkt[TCP].payload = stringLoad
            # https://stackoverflow.com/questions/27293924/change-tcp-payload-with-nfqueue-scapy
            payload_before = len(pkt[TCP].payload)
            # I suspect this line is a problem: the string assigns,
            # but maybe under the hood scapy doesn't like that very much
            pkt[TCP].payload = str(pkt[TCP].payload).replace("Discussion", "This page has been modified")
            # recalculate length
            payload_after = len(pkt[TCP].payload)
            payload_dif = payload_after - payload_before
            pkt[IP].len = pkt[IP].len + payload_dif
            # recalculate checksums
            del pkt[TCP].chksum
            del pkt[IP].chksum
            del pkt.chksum
            print('Packet Modified')
            # redundant
            # print(stringLoad)
            # this throws an error (I think)
            print(str(pkt[TCP].payload))
            # no clue if this works or not yet
            # goal here is to reassign the modified packet to the original parameter
            packet.set_payload(str(pkt))
            # this was also throwing the error, so tried to move away from it
            # print(pkt.show2())
        # bunch of legacy code that didn't work
        # print(GET_print(pkt))
        # print(pkt.show())
        # decompressed_data = zlib.decompress(str(pkt[TCP].payload), 16 + zlib.MAX_WBITS)
        # print(decompressed_data)
        # print(str(gzip.decompress(pkt[TCP].payload)))
        # print(pkt.getlayer(Raw).load)
        # print('HTTP Contents Shown')
    packet.accept()
def GET_print(packet1):
    ret = "***************************************GET PACKET****************************************************\n"
    ret += "\n".join(packet1.sprintf("{Raw:%Raw.load%}\n").split(r"\r\n"))
    ret += "*****************************************************************************************************\n"
    return ret
print('Test: Modify a very specific target')
print('Program Starting')
nfqueue = NetfilterQueue()
nfqueue.bind(1, print_and_accept)
try:
    print('Packet Interface Starting')
    nfqueue.run()
except KeyboardInterrupt:
    print('\nProgram Ending')
    nfqueue.unbind()
Apologies in advance if this is hard to read or badly formatted code; Python isn't a language that I write in often. Any help is greatly appreciated!
After trying to break down code from GitHub and looking for any YouTube videos that talk about this, I'm starting to give up, so I'm hoping one of you can please help me. All I want to be able to do is monitor a game's memory address value. For example, let's say that in the game Minecraft the health value and its memory address are:
Address: 001F6498
Value: 20
How do I turn this value into a variable in Python?
Code Thought Process:
import pywin32  # pseudocode: not sure which module/function is actually needed
pid = 5601
address = 0x001F6498
Value = ReadProcessMemory(pid, address)
print(Value)
# In this example I would like it to print 20
You need to get a handle to the process first. Here is some code that does so, using FindWindow() and GetWindowThreadProcessId() to find the process ID and OpenProcess() to get the handle to the process. Also included is a little function to read a variable of the correct size and unpack it into the right Python type. This method can be used to read pointers as well, utilizing "i" to denote an integer type.
import ctypes
import struct
from ctypes import windll, byref

import win32gui
import win32process
import pymem

PROCESS_ALL_ACCESS = 0x1F0FFF
ReadProcessMemory = windll.kernel32.ReadProcessMemory

def read_memory(handle, address, fmt):
    # fmt is a struct format string, e.g. "i" for a 4-byte signed int
    size = struct.calcsize(fmt)
    buffer = (ctypes.c_byte * size)()
    bytes_read = ctypes.c_ulonglong(0)
    ReadProcessMemory(handle, address, buffer, size, byref(bytes_read))
    return struct.unpack(fmt, buffer)[0]

hWnd = win32gui.FindWindow(None, "WINDOW NAME HERE")
pid = win32process.GetWindowThreadProcessId(hWnd)  # returns (thread_id, process_id)

handle = pymem.Pymem()
handle.open_process_from_id(pid[1])
procBaseAddress = handle.process_base

hProc = windll.kernel32.OpenProcess(PROCESS_ALL_ACCESS, 0, pid[1])
value = read_memory(hProc, ADDRESS_OF_VARIABLE_TO_READ, "i")
print(value)
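As an illustration of the pointer reading mentioned above, a hedged sketch (the addresses and offset are invented, and "i" assumes a 32-bit target; a 64-bit process would need "q" for the pointer step):
# Hypothetical: read a 4-byte pointer at POINTER_ADDRESS, then the int it points to
POINTER_ADDRESS = 0x001F6498
ptr = read_memory(hProc, POINTER_ADDRESS, "i")
health = read_memory(hProc, ptr + 0x10, "i")
print(health)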
Credits to a friend, puppetmaster, who taught me how to do this
I have a pretty annoying issue at the moment. When I make an httplib2 request for a page that is way too large, I would like to be able to stop it cleanly.
For example:
from httplib2 import Http
url = 'http://media.blubrry.com/podacademy/p/content.blubrry.com/podacademy/Neuroscience_and_Society_1.mp3'
h = Http(timeout=5)
h.request(url, 'GET')
In this example, the url is a podcast and it will keep being downloaded forever. My main process will hang indefinitely in this situation.
I have tried running the request in a separate thread using the code below and then deleting the thread object straight away.
import Queue
from threading import Thread

def http_worker(url, q):
    h = Http()
    print 'Http worker getting %s' % url
    q.put(h.request(url, 'GET'))

def process(url):
    q = Queue.Queue()
    t = Thread(target=http_worker, args=(url, q))
    t.start()
    tid = t.ident
    t.join(3)
    if t.isAlive():
        try:
            del t
            print 'deleting t'
        except:
            print 'error deleting t'
    else:
        print q.get()
    check_thread(tid)
process(url)
Unfortunately, the thread is still active and will continue to consume cpu / memory.
def check_thread(tid):
    import sys
    print 'Thread id %s is still active ? %s' % (tid, tid in sys._current_frames().keys())
Thank you.
OK, I found a hack to deal with this issue.
The best solution so far is to set a maximum amount of data to read and then stop reading from the socket. The data is read in the _safe_read method of the httplib module. In order to override this method, I used this lib: http://blog.rabidgeek.com/?tag=wraptools
And voila :
import httplib
from httplib import HTTPResponse, IncompleteRead, MAXAMOUNT
from wraptools import wraps

@wraps(httplib.HTTPResponse._safe_read)
def _safe_read(original_method, self, amt):
    """Read the number of bytes requested, compensating for partial reads.

    Normally, we have a blocking socket, but a read() can be interrupted
    by a signal (resulting in a partial read).

    Note that we cannot distinguish between EOF and an interrupt when zero
    bytes have been read. IncompleteRead() will be raised in this
    situation.

    This function should be used when <amt> bytes "should" be present for
    reading. If the bytes are truly not available (due to EOF), then the
    IncompleteRead exception can be used to detect the problem.
    """
    # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never
    # return less than x bytes unless EOF is encountered. It now handles
    # signal interruptions (socket.error EINTR) internally. This code
    # never caught that exception anyways. It seems largely pointless.
    # self.fp.read(amt) will work fine.
    s = []
    total = 0
    MAX_FILE_SIZE = 3 * 10 ** 6
    while amt > 0 and total < MAX_FILE_SIZE:
        chunk = self.fp.read(min(amt, httplib.MAXAMOUNT))
        if not chunk:
            raise IncompleteRead(''.join(s), amt)
        total = total + len(chunk)
        s.append(chunk)
        amt -= len(chunk)
    return ''.join(s)
In this case, MAX_FILE_SIZE is set to 3 MB.
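Once the override above has been applied (it just needs to be defined before the request is made), the original request from the question returns instead of hanging indefinitely; roughly:
from httplib2 import Http

h = Http(timeout=5)
response, content = h.request(url, 'GET')  # url as defined at the top of the question
print 'received %d bytes' % len(content)   # capped at roughly MAX_FILE_SIZE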
Hopefully, this will help others.