Perforce (P4) Python API complains about too many locks - python

I wrote an application that opens several subprocesses, which initiate connections individually to a Perforce server. After a while I get this error message in almost all of these child-processes:
Traceback (most recent call last):
File "/Users/peter/Desktop/test_app/main.py", line 76, in p4_execute
p4.run_login()
File "/usr/local/lib/python3.7/site-packages/P4.py", line 665, in run_login
return self.run("login", *args, **kargs)
File "/usr/local/lib/python3.7/site-packages/P4.py", line 611, in run
raise e
File "/usr/local/lib/python3.7/site-packages/P4.py", line 605, in run
result = P4API.P4Adapter.run(self, *flatArgs)
P4.P4Exception: [P4#run] Errors during command execution( "p4 login" )
[Error]: "Fatal client error; disconnecting!
Operation 'client-SetPassword' failed.
Too many trys to get lock /Users/peter/.p4tickets.lck."
Does anyone have any idea what could cause this? I open my connections properly and have double-checked at every call site that I disconnect from the server via disconnect().
Deleting .p4tickets.lck manually only helps until the error comes back a few seconds later.

The relevant code is here:
https://swarm.workshop.perforce.com/projects/perforce_software-p4/files/2018-1/support/ticket.cc#200
https://swarm.workshop.perforce.com/projects/perforce_software-p4/files/2018-1/sys/filetmp.cc#147
I can't see that there's any code path where the ticket.lck file would fail to get cleaned up without throwing some other error.
Is there anything unusual about the home directory where the tickets file lives? Like, say, it's on a network filer with some latency and some kind of backup process? Or maybe one that doesn't properly enforce file locks between all these subprocesses you're spawning?
How often are your scripts running "p4 login" to refresh and re-write the ticket? Many times a second? If you change them to not do that (e.g. only login if there's not already a ticket) does the problem persist?
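The "only log in when needed" suggestion above could be sketched like this. `p4 login -s` is a real Perforce command that reports ticket status, but the surrounding helper is an assumption about how your code is structured, and the object is duck-typed so it isn't tied to the p4python package:

```python
def ensure_login(p4):
    """Log in only when there is no valid ticket yet, so the
    ~/.p4tickets file is not rewritten on every call.

    `p4` is a connected P4() instance in real use (duck-typed here:
    anything with run() and run_login() works).
    """
    try:
        # "p4 login -s" reports ticket status; it raises P4Exception
        # (any exception, for this sketch) when not logged in.
        p4.run("login", "-s")
        return False  # ticket still valid; nothing is written
    except Exception:
        p4.run_login()  # touch ~/.p4tickets only in this branch
        return True
```

With hundreds of subprocesses this turns most login calls into read-only status checks, so the lock file is contended far less often.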

Related

apscheduler: returned more than one DjangoJobExecution -- it returned 2

In my project, the scheduler raises this error when it executes the job. Please help me.
This is the error in the console when the program runs:
Error notifying listener
Traceback (most recent call last):
File "C:\Users\angel\project\venv\lib\site-packages\apscheduler\schedulers\base.py", line 836, in _dispatch_event
cb(event)
File "C:\Users\angel\project\venv\lib\site-packages\django_apscheduler\jobstores.py", line 53, in handle_submission_event
DjangoJobExecution.SENT,
File "C:\Users\angel\project\venv\lib\site-packages\django_apscheduler\models.py", line 157, in atomic_update_or_create
job_id=job_id, run_time=run_time
File "C:\Users\angel\project\venv\lib\site-packages\django\db\models\query.py", line 412, in get
(self.model._meta.object_name, num)
django_apscheduler.models.DjangoJobExecution.MultipleObjectsReturned: get() returned more than one DjangoJobExecution -- it returned 2!
This is my code:
class Command(BaseCommand):
    help = "Runs apscheduler."

    scheduler = BackgroundScheduler(timezone=settings.TIME_ZONE, daemon=True)
    scheduler.add_jobstore(DjangoJobStore(), "default")

    def handle(self, *args, **options):
        self.scheduler.add_job(
            delete_old_job_executions,
            'interval', seconds=5,
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True
        )
        try:
            logger.info("Starting scheduler...")
            self.scheduler.start()
        except KeyboardInterrupt:
            logger.info("Stopping scheduler...")
            self.scheduler.shutdown()
            logger.info("Scheduler shut down successfully!")
Not sure if you're still having this issue. I had the same error and found your question. It turned out this happens only in the dev environment.
Because python3 manage.py runserver starts two processes by default, the code
seems to register two job records and then find both at the next run time.
With the --noreload option it starts only one scheduler thread and works well. As the name implies, it won't reload your code changes automatically, though.
python3 manage.py runserver --noreload
Not sure if you're still having this issue. Another approach is to use a socket as a cross-process lock, so that only one of the runserver processes starts the scheduler.
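That socket trick could be sketched as follows. The port number is an arbitrary choice, and the bound socket must be kept alive for the life of the process:

```python
import socket

_lock_socket = None  # module-level so the bind survives for the process lifetime

def scheduler_already_running(port=54917):
    """Return True if another process already holds the lock port.

    Only one process on the machine can bind 127.0.0.1:port, so the
    first runserver process starts the scheduler and the reloader's
    second process skips it.
    """
    global _lock_socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
    except OSError:
        sock.close()
        return True  # someone else owns the port; don't start the scheduler
    _lock_socket = sock  # keep a reference so the port stays bound
    return False
```

In handle(), you would then guard the start with something like `if not scheduler_already_running(): self.scheduler.start()`.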

Python tftp handling error: "No options found in OACK"

I am using Python 3's tftpy module to handle TFTP-style downloading of a file. However, when I run the application I get the following error:
\Python38\site-packages\tftpy\TftpStates.py", line 53, in handleOACK
raise TftpException("No options found in OACK")
tftpy.TftpShared.TftpException: No options found in OACK
How do I get my Python project to ignore the OACK, or to send a new request packet that doesn't include options?
Disclaimer: This is my first time attempting to work with TFTP packets so I am fairly new. If the question I posed isn't the appropriate way to handle it, what should I be doing?
MORE DATA ON THE PROBLEM:
I am using an external chip that is programmed to ignore OACK packet options.
When I used C# and the TFTP.Net package the transfer worked, so I don't believe it is an issue with my TFTP server. However, as our main application is based in python I want to be able to handle this communication via python 3.
I am running python 3.8.5
On my server side it is saying it receives a packet with error code 8.
python Script:
import tftpy
client = tftpy.TftpClient('192.168.0.42', 69)
client.download('triplog.txt', 'faultlog.txt', packethook=None, timeout=5)
full traceback:
Failed to negotiate options: No options found in OACK
Traceback (most recent call last):
File "C:\Users\selena\Documents\PythonScripts\TFTP\TFTPTestScript.py", line 23, in <module>
client.download('triplog.txt', 'faultlog.txt', packethook=None, timeout=5)
File "C:\Users\selena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tftpy\TftpClient.py", line 58, in download
self.context.start()
File "C:\Users\selena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tftpy\TftpContexts.py", line 402, in start
self.cycle()
File "C:\Users\selena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tftpy\TftpContexts.py", line 202, in cycle
self.state = self.state.handle(recvpkt, raddress, rport)
File "C:\Users\selena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tftpy\TftpStates.py", line 566, in handle
self.handleOACK(pkt)
File "C:\Users\selena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tftpy\TftpStates.py", line 53, in handleOACK
raise TftpException("No options found in OACK")
tftpy.TftpShared.TftpException: No options found in OACK
[Finished in 0.7s with exit code 1]
Credit to @ewong for this workaround.
The code worked after adding options when initializing the client, even though I didn't need them. I'll file an issue at https://github.com/msoulier/tftpy to see whether this is a bug that needs to be addressed or a deliberate choice.
Solution code:
import tftpy
client = tftpy.TftpClient('192.168.0.42', 69, options={'blksize': 8})
client.download('triplog.txt', 'faultlog.txt', packethook=None, timeout=5)

Why does the iMessage program throw a database error?

I'm a beginner and I've tried creating an iMessage program that sends a song's lyrics one word at a time until it reaches the end. Unfortunately, I have encountered an error that I don't understand at all. I have tried researching the bug, but I only find posts about actual SQL, which I haven't used before.
It sends one message and then throws this error:
Traceback (most recent call last):
File "/Users/levs/OneDrive/python/python-projects/14/messagebot.py", line 21, in <module>
sendMessage("+44 7871579435", getLyrics())
File "/Users/levs/OneDrive/python/python-projects/14/messagebot.py", line 18, in sendMessage
imessage.send(phone, message)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/py_imessage/imessage.py", line 16, in send
db_conn.open()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/py_imessage/db_conn.py", line 21, in open
db = sqlite3.connect(db_path, uri=True)
sqlite3.OperationalError: unable to open database file
And this is the code:
from time import sleep
from py_imessage import imessage

def getLyrics():
    with open('lyrics.txt') as file:
        return ' '.join([str(line.strip()) for line in file])

def getWords(lyrics):
    return lyrics.split()

def sendMessage(phone, lyrics):
    for message in getWords(lyrics):
        imessage.send(phone, message)

sendMessage("+44 1122334455", getLyrics())
I would appreciate any kind of help! :)
I did a bit of research on the py_imessage package. It works by accessing the iMessage database on your Mac. The error message you are seeing,
sqlite3.OperationalError: unable to open database file
basically means that macOS is not allowing the program to access that database. To fix this, boot your Mac into Recovery Mode by holding Command-R while restarting. Once it boots into Recovery Mode, go to Utilities > Terminal, run csrutil disable there, then type reboot to boot back into normal macOS.
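Before disabling SIP, it may be worth a quick preflight check that the process can actually open the Messages database read-only. This is a sketch, not part of py_imessage; the chat.db path below is the usual macOS location:

```python
import os
import sqlite3

CHAT_DB = os.path.expanduser("~/Library/Messages/chat.db")  # usual macOS location

def can_read_chat_db(path=CHAT_DB):
    """Return True if this process may open the Messages database read-only.

    A False here (on a Mac where the file exists) points at macOS
    access protections rather than a bug in your script.
    """
    try:
        # mode=ro fails immediately if the file is missing or unreadable
        conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
        conn.close()
        return True
    except sqlite3.OperationalError:
        return False
```

Running this before the send loop gives a clearer failure message than the traceback inside py_imessage.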

JRC JJ1000 + dronekit >> ERROR:dronekit.mavlink:Exception in MAVLink input loop

I am trying to connect JRC JJ1000 drone using dronekit + python.
when executing the connect command:
dronekit.connect('com3', baud=115200, heartbeat_timeout=30)
I am getting the following error:
ERROR:dronekit.mavlink:Exception in MAVLink input loop
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\dronekit\mavlink.py", line 211, in mavlink_thread_in
fn(self)
File "C:\Python37\lib\site-packages\dronekit\__init__.py", line 1371, in listener
self._heartbeat_error)
dronekit.APIException: No heartbeat in 5 seconds, aborting.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\dronekit\__init__.py", line 3166, in connect
vehicle.initialize(rate=rate, heartbeat_timeout=heartbeat_timeout)
File "C:\Python37\lib\site-packages\dronekit\__init__.py", line 2275, in initialize
raise APIException('Timeout in initializing connection.')
dronekit.APIException: Timeout in initializing connection.
I left no stone unturned but made no progress. I also tried both Python 2.7 and 3.7 with the same result.
I have been getting the same error. I am using some custom code in a docker container to run simulations with dronekit and ArduPilot. The error is intermittent. So far it seems like the only way to get the error to stop is to:
Close all docker containers.
Open windows task manager and wait for vmmem to lower memory usage (5-10m).
Try again.
Maybe the problems are related somehow. To me it seems like the connection might still be held by a previous instance that was not properly closed, since waiting for vmmem to free up resources appears to fix it. I would prefer a better solution if anyone finds one!
We are using Python code like this to connect:
import time
from dronekit import connect
...
connected = False
fails = 0

# try to connect 5 times
while connected == False and fails < 5:
    try:
        vehicle = connect(connection_string, wait_ready=True)
    except:
        fails += 1
        time.sleep(3)
        print("Failed to connect to local mavlink, sleeping for 3 seconds")
    else:
        connected = True
Where the connection_string is of the form:
"tcp:host:port"
Also, the documentation states "If the baud rate is not set correctly, connect may fail with a timeout error. It is best to set the baud rate explicitly." Are you sure that you have the correct baud rate?
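One way to act on that advice is to scan the common telemetry baud rates explicitly. This is a sketch: `connect_fn` stands in for `dronekit.connect` so the pattern can be exercised without hardware, and the rate list is just the usual candidates:

```python
def connect_with_baud_scan(connect_fn, port,
                           rates=(57600, 115200, 921600),
                           heartbeat_timeout=30):
    """Try each candidate baud rate until one produces a heartbeat.

    In real use pass connect_fn=dronekit.connect; a wrong rate shows
    up as the same heartbeat-timeout exception seen in the question.
    """
    last_exc = None
    for baud in rates:
        try:
            return connect_fn(port, baud=baud,
                              heartbeat_timeout=heartbeat_timeout)
        except Exception as exc:  # dronekit raises APIException on timeout
            last_exc = exc
    raise last_exc  # every rate timed out; surface the last error
```

Usage would be `vehicle = connect_with_baud_scan(dronekit.connect, 'com3')`. Note that each failed rate costs a full heartbeat timeout, so keep the rate list short.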

Sequentially running an external independent process using Tkinter and python

BACKGROUND :
*I'm creating a batch simulation job chooser + scheduler using Tkinter (Portable PYscripter, python v2.7.3)
*This program will function as a front end, to a commercial solver program
*The program needs to allow the user to choose a bunch of files to simulate, sequentially, one after the other.
*It also needs to have the facility to modify (Add/delete) jobs from an existing/running job list.
*Each simulation will definitely run for several hours.
*The output of the simulation will be viewed on separate programs and I do not need any pipe to the output. The external viewer will be called from the GUI, when desired.
***I have a main GUI window, which allows the user to :
choose job files, submit jobs, view the submission log, stop running jobs(one by one)
The above works well.
PROBLEMS :
*If I use subprocess.Popen("command"): all the simulation input files are launched at the same time. It MUST be sequential (due to license and memory limitations).
*If I use subprocess.call(" ") or the wait() method, the GUI hangs and there is no way to stop/add/modify the job list. Even if the "job submit" command is on an independent window, both parent windows hang until the job completes.
QUESTION 1 :
*How do I launch the simulation jobs sequentially (like subprocess.call) AND allow the main GUI window to function for the purpose of job list modification or stopping a job ?
The jobs are in a list, taken using "askopenfilenames" and then run using a For loop.
Relevant parts of the Code :
cfx5solvepath = r"c:\XXXX"

def file_chooser_default():
    global flist1
    flist1 = askopenfilename(parent=root2, filetypes=[('.def', '*.def'), ('All', '*.*'), ('.res', '*.res')], title="Select Simulation run files...", multiple=True)[1:-1].split('} {')

def ext_process():
    o = list(flist1)
    p = list(flist1)
    q = list(flist1)
    i = 0
    while i < len(flist1):
        p[i] = '"%s" -def "%s"' % (cfx5solvepath, flist1[i])
        i += 1
    i = 0
    while i < len(p):
        q[i] = subprocess.call(p[i])
        i += 1

root2 = Tk()
root2.minsize(300, 300)
root2.geometry("500x300")
root2.title("NEW WINDOW")
# pack() returns None, so create the widget first and keep the reference
frame21 = Frame(root2, borderwidth=3, relief="solid")
frame21.pack()
w21 = Button(root2, fg="blue", text="Choose files to submit", command=file_chooser_default)
w21.pack()
w2a1 = Button(root2, fg="white", text='Display chosen file names and order', command=lambda: print_var(flist1))
w2a1.pack()
w2b1 = Button(root2, fg="white", bg="red", text="S U B M I T", command=ext_process)
w2b1.pack()
root2.mainloop()
Please let me know if you require anything else. Look forward to your help.
*EDIT *
On incorporating the changes suggested by @Tim, the GUI is left free. Since there is a specific sub-program associated with the main solver program for stopping a job, I am able to stop the job with the right command.
Once the currently running job is stopped, the next job on the list starts up, automatically, as I was hoping.
This is the code used for stopping the job :
def stop_select():  # Choose the currently running files which are to be stopped
    global flist3
    flist3 = askdirectory().split('} {')

def sim_stop():  # STOP the chosen simulation
    st = list(flist3)
    os.chdir("%s" % flist3[0])
    st = subprocess.call('"%s" -directory "%s"' % (defcfx5stoppath, flist3[0]))
    ret1 = tkMessageBox.showinfo("INFO", "Chosen simulation stopped successfully")
    os.chdir("%s" % currentwd)
QUESTION 2 :
*Once the above jobs are completed using start_new_thread, the GUI doesn't respond. The GUI works while the jobs are running in the background, but the start_new_thread documentation says the thread is supposed to exit silently when the function returns.
*Additionally, I have an HTML log file that is written to/updated as each job completes. When I use start_new_thread, the log file content is visible only AFTER all the jobs complete. The contents, along with the time stamps, are nevertheless correct. Without start_new_thread, I was able to refresh the HTML file to get the updated submission log.
***After exiting the GUI program via the Task Manager several times, I am suddenly unable to use the start_new_thread function! I have tried reinstalling PyScripter and restarting the computer. I can't figure out anything sensible from the traceback, which is:
Traceback (most recent call last):
File "<string>", line 532, in write
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\protocol.py", line 439, in _async_request
seq = self._send_request(handler, args)
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\protocol.py", line 229, in _send_request
self._send(consts.MSG_REQUEST, seq, (handler, self._box(args)))
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\protocol.py", line 244, in _box
if brine.dumpable(obj):
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\brine.py", line 369, in dumpable
return all(dumpable(item) for item in obj)
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\brine.py", line 369, in <genexpr>
return all(dumpable(item) for item in obj)
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\brine.py", line 369, in dumpable
return all(dumpable(item) for item in obj)
File "C:\Portable Python 2.7.3.1\App\lib\site-packages\rpyc\core\brine.py", line 369, in <genexpr>
return all(dumpable(item) for item in obj)
File "C:\Portable Python 2.7.3.1\App\Python_Working_folder\v350.py", line 138, in ext_process
q[i]=subprocess.call(p[i])
File "C:\Portable Python 2.7.3.1\App\lib\subprocess.py", line 493, in call
return Popen(*popenargs, **kwargs).wait()
File "C:\Portable Python 2.7.3.1\App\lib\subprocess.py", line 679, in __init__
errread, errwrite)
File "C:\Portable Python 2.7.3.1\App\lib\subprocess.py", line 896, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
I'd suggest using a separate thread for the job launching. The simplest way would be to use the start_new_thread method from the thread module.
Change the submit button's command to command=lambda:thread.start_new_thread(ext_process, ())
You will probably want to disable the button when it's clicked and enable it when the launching is complete. This can be done inside ext_process.
It becomes more complicated if you want to allow the user to cancel jobs. This solution won't handle that.
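In outline, the suggestion above could look like this. It is a sketch using the higher-level threading module rather than thread; `button` is any widget with a config(state=...) method, such as the SUBMIT Button from the question:

```python
import threading

def run_in_background(job_fn, button):
    """Disable `button`, run job_fn on a worker thread so the Tk
    mainloop stays responsive, and re-enable the button when done."""
    button.config(state="disabled")

    def worker():
        try:
            job_fn()  # e.g. the sequential subprocess.call loop
        finally:
            button.config(state="normal")  # re-enable when all jobs finish

    t = threading.Thread(target=worker)
    t.daemon = True  # don't keep the app alive for unfinished jobs
    t.start()
    return t
```

Strictly speaking, Tk widgets should only be touched from the main thread; a more careful version would schedule the re-enable via root.after() instead of calling config() from the worker.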
