I have an extremely strange problem that has stumped me for about the past day.
import wx, gui, threading, time, pyodbc
from OrderNumEntry import entryID, plyID, stationID
from main import strPlyDB
from Op1Login import eID1
global eID1

class MainFrame1(gui.MainFrame1Op):
    def __init__(self, parent):
        gui.MainFrame1Op.__init__(self, parent)
        self.m_textCtrl3.SetValue(eID1)
        # print eID1

    def checkClick1(self, event):
        if threading.activeCount() < 2:
            t1 = threading.Thread(target=self.checkAdvance1)
            t1.setDaemon(True)
            t1.start()
        self.m_checkBox110.SetValue(True)

    def checkAdvance1(self):
        strODBC_conn = 'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=%s' % strPlyDB
        conn = pyodbc.connect(strODBC_conn)
        cursor = conn.cursor()
        # eID1 = '00839530'
        print 'test1'
        print eID1
        print 'test2'
        SQL = 'INSERT INTO tblRecord(EntryID, PlyID, StationID, EmployeeNum, BuyoffTime) values(?,?,?,?,?)'
        cursor.execute(SQL, (entryID, plyID, stationID, eID1, time.strftime("%c")))
        cursor.commit()
        cursor.close()
        conn.close()
There's a primary Python file, main.py, that initializes my MainFrame1 class, which references gui.py for all of its wxPython code to bring up the window. The file above is simply the event handler. checkClick1 runs when I click a check box in my window (creative name, I know), which in turn starts a thread for checkAdvance1.
Everything in the program works fine except for one part. My eID1 variable works just fine in __init__, and textCtrl3 displays it. However, when I try to use it again in checkAdvance1, the program hangs.
To test it, I surrounded the print statement for eID1 with two test print statements. Only test1 is printed, and no error is raised; the program simply hangs. To make matters even more confusing, when I put a print eID1 statement in the __init__ function (it's commented out right now), the program runs just fine!
I'm not sure how a print statement could affect how a variable behaves, but that's what's happening. However, putting a print statement in to fix it seems like a shoddy workaround that ignores the actual problem. Also, the commented-out eID1 assignment is just an example to show you what the content of the variable looks like when the program is running.
EDIT:
I have come up with a workaround that I'm not entirely satisfied with, but it's better than adding a print. If I create another variable and set it to the string conversion of eID1, like so: empID1 = str(eID1), I can then reference empID1 later in place of eID1. However, this still doesn't sit well with me: according to everything I know and have researched, my original code SHOULD work. It makes me feel like something bad is going on behind the scenes that could come back to bite me later.
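For what it's worth, a minimal sketch of that idea: capture the value on the main thread and hand it to the worker explicitly, rather than reading the module-level global from inside the thread. The wx and database details are omitted, and the join() is only there to make the example deterministic:

```python
import threading

class MainFrame1:
    def __init__(self, eID1):
        # Capture a copy of the value eagerly, while still on the main thread.
        self.empID1 = str(eID1)

    def checkClick1(self):
        if threading.active_count() < 2:
            # Hand the captured copy to the worker instead of having the
            # thread read the module-level global itself.
            t1 = threading.Thread(target=self.checkAdvance1, args=(self.empID1,))
            t1.daemon = True
            t1.start()
            t1.join()

    def checkAdvance1(self, emp_id):
        # Stand-in for the database insert in the real code.
        self.seen = emp_id

frame = MainFrame1('00839530')
frame.checkClick1()
print(frame.seen)
```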
So I have 2 files, fish_life_simulator.py and menu.py. fish_life_simulator.py is the main file and executes other files like menu.py depending on what happens. So here is the code and how it should work:
import os
os.chdir(os.path.dirname(__file__))
result = exec(open(r'menu.py', encoding='utf-8').read())
print(result)
When the code reaches result = exec(open(r'menu.py', encoding='utf-8').read()), it executes menu.py and all is fine, but menu.py can stop for several reasons:
The player exits the game
The player enters settings
The player presses play
So what I decided to do is have menu.py return a value when it stops running, like 1, 2 or 3, and I tried several methods that are included here:
Best way to return a value from a python script
like using return or sys.exit("some value here"). But even though I did the part inside menu.py, neither of them worked: when I tried return, result from result = exec(open(r'menu.py', encoding='utf-8').read()) was always None for some reason, and when I tried sys.exit(1), for example, result didn't get printed at all. So I was just wondering if there is something I'm missing inside fish_life_simulator.py, because the part sending the value should be fine; it's the part receiving it that is problematic.
Just define a function in menu.py:
def do_stuff_in_menu():
    ...
    return result
and in fish_life_simulator.py you just call that function:
import menu
result = menu.do_stuff_in_menu()
print(result)
I agree with everyone who says exec() is not the best way to do this, however, since that's not your question, here's an answer for you.
The exec() function always returns None (see docs). If you need the return code, you could use os.system() or one of the various methods from the subprocess library. Unlike exec(), however, both of these alternatives would create a child process.
That said, I personally would not use any of those methods, but would instead modify menu.py to allow you to import it. It's much more natural and direct.
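For completeness, a minimal sketch of the subprocess route mentioned above. The throwaway child script is a hypothetical stand-in for menu.py; it reports its result through the process exit code:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for menu.py: it signals a result via sys.exit().
child = tempfile.NamedTemporaryFile('w', suffix='.py', delete=False)
child.write("import sys\nsys.exit(2)\n")
child.close()

# Unlike exec(), this runs the script in a child process, so the value
# passed to sys.exit() comes back as the process return code.
proc = subprocess.run([sys.executable, child.name])
print(proc.returncode)

os.unlink(child.name)
```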
When I run the following code (using "sudo python servers.py"), the process seems to finish immediately after just printing "test".
Why don't the proxy_server functions run? Or maybe they do and I just don't realize it (because the first line in the proxy function doesn't print anything).
This is a minimal code sample; I left out unnecessary content, yet it still demonstrates my problem:
import os,sys,thread,socket,select,struct,time

HTTP_PORT = 80
FTP_PORT = 21
FTP_DATA_PORT = 20
IP_IN = '10.0.1.3'
IP_OUT = '10.0.3.3'
sys_http = 'http_proxy'
sys_ftp = 'ftp_proxy'
sys_ftp_data = 'ftp_data_proxy'

def main():
    try:
        thread.start_new_thread(proxy_server, (HTTP_PORT, IP_IN, sys_http, http_handler))
        thread.start_new_thread(proxy_server, (FTP_PORT, IP_IN, sys_ftp, http_handler))
        thread.start_new_thread(proxy_server, (FTP_DATA_PORT, IP_OUT, sys_ftp_data, http_handler))
        print "test"
    except e:
        print 'Error!'
        sys.exit(1)

def proxy_server(host, port, fileName, handler):
    print "Proxy Server Running on ", host, ":", port

def http_handler(src, sock):
    return ''

if __name__ == '__main__':
    main()
What am I missing or doing wrong?
First, you have indentation problems from mixing tabs and spaces for indentation. While they didn't cause your code to misbehave in this particular case, they will cause you problems later if you don't stick consistently to one or the other.
Second, instead of the low-level thread module, you should be using threading. Your problem is occurring because, as documented in the thread module documentation,
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
threading threads let you explicitly define whether other threads should survive the death of the main thread, and default to surviving. In general, threading is much easier to use correctly.
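As a rough sketch of the suggested conversion (the body of proxy_server here is a placeholder; the real version would open listening sockets):

```python
import threading

results = []

def proxy_server(port, host, file_name, handler):
    # Placeholder body standing in for the real proxy loop.
    results.append((host, port, file_name))

threads = []
for port, host, name in [(80, '10.0.1.3', 'http_proxy'),
                         (21, '10.0.1.3', 'ftp_proxy')]:
    t = threading.Thread(target=proxy_server, args=(port, host, name, None))
    t.start()
    threads.append(t)

# join() keeps the main thread alive until the workers finish --
# something thread.start_new_thread() gave you no easy way to do.
for t in threads:
    t.join()

print(len(results))
```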
I've written a couple of twitter scrapers in python, and am writing another script to keep them running even if they suffer a timeout, disconnection, etc.
My current solution is as follows:
Each scraper file has a doScrape/1 function in it, which will start up a scraper and run it once, eg:
def doScrape(logger):
    try:
        with DBWriter(logger=logger) as db:
            logger.log_info("starting", __name__)
            s = PastScraper(db.getKeywords(), TwitterAuth(), db, logger)
            s.run()
    finally:
        logger.log_info("Done", __name__)
Where run is a near-infinite loop, which won't break unless there is an exception.
In order to run one of each kind of scraper at once, I'm using this code (with a few extra imports):
from threading import Thread

class ScraperThread(Thread):
    def __init__(self, module, logger):
        super(ScraperThread, self).__init__()
        self.module = module  # Module should contain a doScrape(logger) function
        self.logger = logger

    def run(self):
        while True:
            try:
                print "Starting!"
                print self.module.doScrape
                self.module.doScrape(self.logger)
            except:  # if for any reason we get disconnected, reconnect
                self.logger.log_debug("Restarting scraper", __name__)

if __name__ == "__main__":
    with Logger(level="all", handle=open(sys.argv[1], "a")) as l:
        past = ScraperThread(PastScraper, l)
        stream = ScraperThread(StreamScraper, l)
        past.start()
        stream.start()
        past.join()
        stream.join()
However, the call to doScrape above appears to return immediately, so "Starting!" is printed to the console repeatedly, and the "Done" message in the finally block is never written to the log, whereas when run individually like so:
if __name__ == "__main__":
    # Example instantiation
    from Scrapers.Logging import Logger
    with Logger(level="all", handle=open(sys.argv[1], "a")) as l:
        doScrape(l)
The code runs forever, as expected. I'm a bit stumped.
Is there anything silly that I might have missed?
Get rid of the "diaper pattern" in your run() method; that is, get rid of the catch-all exception handler. You'll probably see the error printed then. I suspect something is wrong in DBWriter or other code you're calling from your doScrape function. Perhaps it is not thread-safe. That would explain why running it from the main program directly works, but calling it from a thread fails.
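A small sketch of why the catch-all handler hides the bug. Both do_scrape and the ValueError below are hypothetical stand-ins for the failing scraper code:

```python
def do_scrape():
    # Stand-in for scraper code that fails for a non-obvious reason.
    raise ValueError("not thread-safe resource state")

# Diaper pattern: the real error is swallowed, so the loop just spins.
seen = []
for _ in range(3):
    try:
        do_scrape()
    except:
        seen.append("restarting")

print(seen)

# Narrow the handler (or at least log the exception) and the cause surfaces.
try:
    do_scrape()
except ValueError as exc:
    print(exc)
```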
Aha, solved it! I hadn't realised that a default argument (here in TwitterAuth()) is evaluated at definition time. TwitterAuth reads the API key settings from a file handle, and the default argument opens the default config file. Since this file handle is created at definition time, both threads shared the same handle, and once one had read it, the other tried to read from the end of the file, throwing an exception. This is remedied by resetting the file before use and using a mutex.
Cheers to Irmen de Jong for pointing me in the right direction.
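The definition-time behaviour described above can be illustrated with a small, hypothetical stand-in for TwitterAuth's default argument:

```python
calls = []

def make_auth():
    # Stand-in for TwitterAuth(): note this runs exactly once, when the
    # `def scrape` statement is executed, not on every call.
    calls.append(1)
    return object()

def scrape(auth=make_auth()):
    return auth

# Both calls receive the single object created at definition time,
# which is why two threads ended up sharing one file handle.
a = scrape()
b = scrape()
print(a is b, len(calls))
```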
I currently have a process running that should call a method every 10 seconds. I can see that it calls the method at that interval, but it seems not to execute something in the code. The weird thing is that when I cancel the loop and start it again, it does work the first time. Then, as it keeps running, it doesn't do anything.
def main():
    try:
        while True:
            read()
            time.sleep(10)
    except KeyboardInterrupt:
        pass
Above is the loop. Below is the beginning of the method being called; I found that it does not actually get results into results, even though the file has changed. In this case it gets its data from a .json file.
def read():
    message = Query()
    results = DB.search(message.pushed == False)
Am I overlooking something?
Solved. I had the DB declared globally, and that did not go so well. It is fixed by declaring it just before the search statement.
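A minimal sketch of that fix, using a plain .json file as a stand-in for the database (the query API from the question is not reproduced here):

```python
import json
import os
import tempfile

path = tempfile.mkstemp(suffix='.json')[1]

def read():
    # Open the data source on each call, just before the search, so
    # every poll sees the file's current contents -- rather than relying
    # on a handle or DB object created once at module level.
    with open(path) as fh:
        records = json.load(fh)
    return [r for r in records if not r['pushed']]

with open(path, 'w') as fh:
    json.dump([{'id': 1, 'pushed': False}, {'id': 2, 'pushed': True}], fh)
first = read()

# The file changes between polls; the freshly opened handle sees it.
with open(path, 'w') as fh:
    json.dump([{'id': 1, 'pushed': True}, {'id': 2, 'pushed': True}], fh)
second = read()

print(len(first), len(second))
os.unlink(path)
```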
I'd like to monitor OpenERP users' activity by installing the audittrail module.
After creating some rules (defining which user, which object and which activity (create, update, ...) will be monitored), I updated a product to see whether it works.
When I tried to update a product, I got a system error. Looking at the log, I see:
[2010-08-31 12:53:35,042] Cursor not closed explicitly
[2010-08-31 12:53:35,043] Cursor was created at /home/pilgrim/working/sources/addons/audittrail/audittrail.py:204
Here is the line that causes the error:
cr = pooler.get_db(db).cursor()
Looking at sql_db.py, I found this comment:
def __del__(self):
    if not self.__closed:
        # Oops. 'self' has not been closed explicitly.
        # The cursor will be deleted by the garbage collector,
        # but the database connection is not put back into the connection
        # pool, preventing some operation on the database like dropping it.
        # This can also lead to a server overload.
        msg = "Cursor not closed explicitly\n" \
              "Cursor was created at %s:%s" % self.__caller
        log(msg, netsvc.LOG_WARNING)
        self.close()
Since I'm new to Python, I don't know how to overcome this issue. Any hint to get past it?
Thanks.
It would be important to see the source code to understand what's going on.
But from what you have posted, it looks like the previous cursor was not closed explicitly.
cr = sqldb.db_connect(dbname).cursor()
.........
cr.close()
cr = None
I would suggest that you hack audittrail.py to find wherever you create cursors and wherever you close them. A typical issue arises from incorrect handling of exceptions, causing code to jump over the normal closure.
Try placing try, except and finally clauses around the questionable cursor operations. That should help you get around the problem.
I think I found the answer.
See this example:
def a():
    try:
        print 'before return '
        return 1
    finally:
        print 'in finally'

Calling a() gives:

before return 
in finally
1

That's normal and expected.
Now try another example (code extracted from audittrail.py, simplified to pseudocode):
def do_something_with_db(db):
    # open a cursor again
    cr = db.cursor()
    # do something
    # close this cursor internally
    cr.close()

def execute(db):
    # 1. open the connection and open a cursor
    cr = db.cursor()
    try:
        # 2. do something with db; note that this call opens a second cursor
        return do_something_with_db(db)
    finally:
        cr.close()
Note that do_something_with_db opens a second cursor (you could say connection) while the current one has not yet been closed explicitly.
So the solution is simple: pass the current cr around.
Before:
do_something_with_db(db)
After:
do_something_with_db(cr)
Now the error's gone.
@Don Kirkby: Yes, we should experiment with try...finally.
Can you run OpenERP in a debugger like the PyDev plug-in for Eclipse? I find that's the most effective way to track down problems. I haven't used the audit trail module, but I took a quick look at the source code, and it appears that the cursor is opened near the beginning of log_fct(). (I would have expected it to report line 207; which version are you running?) Here's what I think is the relevant code:
def log_fct(self, db, uid, passwd, object, method, fct_src, *args):
    logged_uids = []
    pool = pooler.get_pool(db)
    cr = pooler.get_db(db).cursor()  # line 207 in version 5.0.12
    # ...
    if method in ('create'):
        # ...
        cr.close()
        return res_id
    # ...
    cr.close()
It looks like there are several return statements in the method, but each one seems to call cr.close() first, so I don't see any obvious problems. Try running it in the debugger with a breakpoint in this method. If that's not possible, you can try writing to the log with something like this:
logger = netsvc.Logger()
logger.notifyChannel('audittrail', netsvc.LOG_INFO, 'something happened')
Update:
You commented that this happens under heavy load. Perhaps an exception is being thrown and the cursor is not being closed. You could use a try ... finally statement to make sure that the cursor is always closed. Here's how the sample above would look after converting it:
def log_fct(self, db, uid, passwd, object, method, fct_src, *args):
    logged_uids = []
    pool = pooler.get_pool(db)
    cr = pooler.get_db(db).cursor()  # line 207 in version 5.0.12
    try:
        # ...
        if method in ('create'):
            # ...
            return res_id
        # ...
    finally:
        cr.close()
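An equivalent, slightly more compact alternative to try ... finally is contextlib.closing from the standard library, which calls close() automatically even if an exception escapes the block. FakeCursor below is just a stand-in for the real database cursor:

```python
from contextlib import closing

class FakeCursor:
    # Stand-in for the OpenERP database cursor.
    def __init__(self):
        self.closed = False

    def execute(self, sql):
        pass

    def close(self):
        self.closed = True

cr = FakeCursor()
try:
    # closing() guarantees cr.close() runs when the block exits,
    # whether normally or via an exception.
    with closing(cr) as cur:
        cur.execute("SELECT 1")
        raise RuntimeError("simulated failure under heavy load")
except RuntimeError:
    pass

print(cr.closed)
```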