Updating files with a Perforce trigger before submit - python

I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.
Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.
My thoughts are as follows: create a Perforce trigger (which runs on the server side) that scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the number the changelist will have once submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files.
I have already determined that the change-content trigger, which
"fire[s] after changelist creation and file transfer, but prior to committing the submit to the database",
is the way to go (assuming this is possible at all). At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or run sed to replace the placeholder with the intended value? The online Perforce documentation I have found so far is not explicit about whether this is possible or how the mechanics of a submission work at this stage.
EDIT
Basically what I am looking for is RCS-like functionality, but without the unsightly special character sequences which accompany it. After more digging, what I am asking is the same as this question. However, I believe that this must be possible, because the trigger runs on the server side and the files have already been transferred to the server. They must therefore be accessible to the script.
EXAMPLE
Consider the following snippet from a release notes document:
[#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####] Added a cool new feature. Early retirement is in sight.
[52702] Fixed a really annoying bug. Many lives saved.
[52686] Fixed an annoying bug.
This is what the user submits. I then want the trigger to intercept this file during the submission process (as mentioned, at the change-content stage) and alter it so that what is eventually stored within Perforce looks like this:
[52738] Added a cool new feature. Early retirement is in sight.
[52702] Fixed a really annoying bug. Many lives saved.
[52686] Fixed an annoying bug.
Where 52738 is the final changelist number of what the user submitted. (As mentioned, I can already determine this number, so please do not dwell on this point.) That is, what the user sees on the Perforce client console is:
Changelist 52733 renamed 52738.
Submitted change 52738.
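(For reference, the substitution step itself is easy once the file content is reachable; a minimal Python sketch follows, in which the path argument is hypothetical, since finding the server-side location at change-content time is exactly the open question:)

PLACEHOLDER = "#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####"

def stamp_changelist(path, change_number):
    # path is hypothetical: where the file lives on the server at
    # change-content time is what this question is asking.
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace(PLACEHOLDER, str(change_number)))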

Are you trying to replace the content of pending changelist files that were edited on a different client workspace (and different user)?
What type of information are you trying to replace in the documentation files? For example,
is it a date or username, as with RCS keyword expansion? http://www.perforce.com/perforce/doc.current/manuals/p4guide/appendix.filetypes.html#DB5-18921
I want to get better clarification on what you are trying to accomplish in case there is another way to do what you want.
Depending on what you are trying to do, you may want to consider shelving ( http://www.perforce.com/perforce/doc.current/manuals/p4guide/chapter.files.html#d0e5537 )
Also, there is an existing Perforce enhancement request I can add your information to,
regarding client-side triggers that modify files on the client prior to submit. If it is implemented, you will be notified by email.

99w,
I have also added you to an existing enhancement request for Customizable RCS keywords, along
with the example you provided.
Short of using a post-command trigger to edit the archive content directly and then update the checksum in the database, there is currently no way to update the file content with the custom-edited final changelist number.

One of the things I learned very early on in programming was to stay out of interrupt level as much as possible, and especially not to do things at interrupt level that require resources that can hang the system. I totally get that you want to resolve the internal labeling in sequence, but a better way may be to just set up the edit during the trigger so that a post-trigger tool can perform the file modification.
Correct me if I'm looking at this wrong, but there seems to be a bit of irony, or perhaps recursion, in trying to make a file change during the course of submitting a file change. It might be better to have a second changelist that is reserved for the log; you always know where that file is, in your local file space. That said, ktext files and $-keyword fields may be able to help.
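(For illustration of that last point: if the file is given a +k filetype, e.g. p4 edit -t text+k notes.txt, Perforce's standard keyword expansion will rewrite a $Change$ field on submit, so $Change$ becomes $Change: 52738 $, though only in that bracketed form rather than as a bare number.)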

Related

How can I be aware of a code change in the return value when using unpacking

Recently I had a problem when a coworker made a change to the signature of a function's return value. We have clients that call the function in this way:
example = function()
But as I was depending on his changes, he unintentionally changed it to this:
example, other_stuff = function()
I was not aware of this change; I did the merge and everything seemed OK, but then the error happened: I was expecting one value, but now it was trying to unpack two.
So my question is: knowing Python is not a statically typed language, is there a way to detect this and prevent this behavior (a tool or something)? Sadly, it was not until a runtime error was raised that I noticed it. How should we have handled this?
Sounds like a process error. An API shouldn't change its signature without considering its users. Within a project it's easy: just search. Externally, the API version number should be bumped and the change should be in the change notes.
The API should have unit tests which include return value tests. "he unintentionally changed" issues should all be caught there. Since this didn't happen, a bug report against the tests should be written.
Of course, the coworker could just change those tests. But all of that should be in a code-reviewed changeset in your source repository. The coworker should have to justify the change and explain how to mitigate breakage. Since this API appears to have external clients, it should be very difficult to get an API signature change through, as all clients will need to be notified.
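One concrete tool for this (my addition, not something the answer above mandates): annotate the return type and run a static checker such as mypy, which flags the stale call site before the merge lands:

from typing import Tuple

# After the coworker's change; previously this would have been "-> int".
def function() -> Tuple[int, str]:
    return 42, "other_stuff"

# Old call site: mypy reports an incompatible-assignment error here,
# because a Tuple[int, str] is being bound to an int.
example: int = function()

# Updated call site type-checks cleanly.
value, other_stuff = function()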

Checkmarx OS_Access_Violation on Python os.environ, parse_args, and main

I'm getting an OS_Access_Violation in several places in source code across different python projects. It shows up in areas like this:
if __name__ == '__main__':
    main(sys.argv[1:])
often coupled with something like:
os.makedirs(args.output_dir, exist_ok=True)
as well as
elif args.backend == "beefygoodness":
os.environ["MMMMM_TACOS"] = "beefygoodness"
and
args = parser.parse_args()
There's no description associated with this finding, so I'm unsure what it means and what the proper remediation is. I'm also not sure if it's referring to an access violation in the developer sense (aka, program crash) or if it's a reference to data that shouldn't be accessible, or what exactly.
Google is no help on this either, unfortunately.
So does anyone know what this cryptic high-priority finding is referring to, and what the proper fix is? Thanks!
I haven't tested the query yet, but the result makes sense.
You are using input that came from the user, so you have no certainty about its integrity, and it can be hostile.
For example:
sys.argv[1:]: you expect 4 arguments, but the user can give you more and unexpectedly affect the system.
Now, if I understand your question correctly, you said the vulnerable flow starts at the main call and ends in one of the os calls.
The point is that unprotected input from the user is used as an argument to os methods.
What if the user sets /root/pwd as the directory input?
Or what if the user sets malicious text in the environment variable?
I think a better solution is to store the arguments/environment variables in files (and fetch credentials from a secret store/vault) and consume those at runtime.
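One possible mitigation (a sketch under my own assumptions, not Checkmarx's official remediation) is to canonicalize and validate the user-supplied path against a trusted base directory before it reaches the os calls:

import argparse
import os

# Assumption: the application only ever writes under this directory.
TRUSTED_BASE = "/var/app/output"

def safe_output_dir(user_path):
    # Resolve symlinks and ".." components, then check containment.
    resolved = os.path.realpath(os.path.join(TRUSTED_BASE, user_path))
    if resolved != TRUSTED_BASE and not resolved.startswith(TRUSTED_BASE + os.sep):
        raise ValueError("output_dir escapes the trusted base: %r" % user_path)
    return resolved

parser = argparse.ArgumentParser()
parser.add_argument("--output-dir", default=".")
args = parser.parse_args()

os.makedirs(safe_output_dir(args.output_dir), exist_ok=True)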

Django - Best way to create snapshots of objects

I am currently working on a Django 2+ project involving a blockchain, and I want to make copies of some of my object's states into that blockchain.
Basically, I have a model (say "contract") that has a list of several "signature" objects.
I want to make a snapshot of that contract, with the signatures. What I am basically doing is taking the contract at some point in time (when it's created for example) and building a JSON from it.
My problem is: I want to update that snapshot anytime a signature is added/updated/deleted, and each time the contract is modified.
The intuitive solution would be to override each "delete", "create", and "update" method of every model involved in that snapshot, and pray that all of them are implemented right and that I didn't forget any. But I think this is not scalable at all, and hard to debug and maintain.
I have thought of a solution that might be more centralized: using a periodic job to get the last update date of my object, compare it to the date of my snapshot, and update the snapshot if necessary.
However with that solution, I can identify changes when objects are modified or created, but not when they are deleted.
So, this is my big question mark: how, with Django, can you identify deletions in relationships, without any prior context, just by looking at the current database state? Is there a Django module to record deleted objects? What are your thoughts on my issue?
All right?
As I understand your problem, you need something like Django signals, which listen for events in your application, such as model saves and deletes, and, when one is identified (and if all the desired conditions are met), execute the code you attach to them (which can in turn act on the database).
This is the most recent documentation:
https://docs.djangoproject.com/en/3.1/topics/signals/
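A minimal sketch of that approach (assuming hypothetical Contract/Signature models and an update_snapshot() method that rebuilds the JSON), placed in your app's signals module:

from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

from .models import Signature  # hypothetical: Signature has a FK to Contract

@receiver(post_save, sender=Signature)
@receiver(post_delete, sender=Signature)
def refresh_contract_snapshot(sender, instance, **kwargs):
    # post_delete fires once per deleted object, so deletions are visible
    # here, which the periodic date-comparison approach cannot see.
    instance.contract.update_snapshot()  # hypothetical snapshot rebuild

One caveat: QuerySet.update() and raw SQL bypass these signals, so bulk edits still need separate handling.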

Is it possible to examine the inner statements of a function?

Working from the command line, I wrote a function called go(). When called, it prompts the user for a directory address in the format drive:\directory. No need for extra slashes or quotes or r literal qualifiers or what have you. Once you've provided a directory, it lists all the non-hidden files and directories under it.
I want to update the function now with a statement that stores this location in a variable, so that I can start browsing my hierarchy without specifying the full address every time.
Unfortunately I don't remember what statements I put in the function in the first place to make it work as it does. I know it's simple and I could just look it up and rebuild it from scratch with not too much effort, but that isn't the point.
As someone who is trying to learn the language, I try to stay at the command line as much as possible, only visiting the browser when I need to learn something NEW. Having to refer to obscure findings attached to vaguely related questions to rediscover how to do things I've already done is very cumbersome.
So my question is, can I see the contents of functions I have written, and how?
Unfortunately no; Python does not retain the source of functions defined interactively. The best you can do is see the compiled bytecode.
The inspect module details what information is available at runtime: https://docs.python.org/3.5/library/inspect.html
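For example (using a stand-in body, since the original go() is lost): dis shows the bytecode of a live function object, while inspect.getsource() fails for functions typed at the plain interactive prompt because there is no source file to read:

import dis
import inspect

def go():
    # Stand-in body; the real go() was defined interactively and is lost.
    print("listing files...")

dis.dis(go)  # the bytecode is always recoverable from the code object
try:
    print(inspect.getsource(go))  # needs the source file on disk
except OSError:  # raised for functions defined at the plain REPL
    print("no source available")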

How can I write to the previous line in a log file using Python's Logging module?

long-time lurker here, finally emerging from the woodwork.
Essentially, what I'm trying to do is have my logger write data like this to the logfile:
Connecting to database . . . Done.
I'd like the 'Connecting to database . . . ' to be written when the function is called, and the 'Done' written after the function has successfully executed.
I'm using Python 2.6 and the logging module. Also, I'd really like to avoid using decorators for this. Any help would be most appreciated!
Writing to a log is, and must be, an atomic action -- this is crucial, and a key feature of any logging package (including the one in Python's standard library) that distinguishes logging from the simple appending of information to files (where bits of things being written by different processes and threads might well "interleave" -- one of them writing some part of a line but not the line-end, just as you desire, and then maybe another one interposing something right afterwards, before the first task writes what it thinks will be the last part of the line but actually ends up on another line... utter confusion often results;-).
It's not inevitable that the atomic unit be "a line" (logs can be recorded elsewhere than to a text file, of course, and some of the things that are acceptable "sinks" for logs won't even have the concept of "a line"!), but, for logging, atomic units there must be. If you want something entirely non-atomic then you don't really want logging but simple appends to a file or other stream (and, watch out for the likely confusion mentioned in the first paragraph;-).
For transient updates about what your task is doing (in the middle of X, X done, starting Y, etc), you could think of a specialized log-handler that (for example) interprets such streams of updates by taking the first word as a subtask-identifier (incrementally building up and displaying somewhere the composite message about the "current subtask", recognizing when the subtask identifier changes that the previous subtask is finished or taking an explicit "subtask finished" message, and only writing persistent log entries on subtask-finished events).
It's a pretty specialized requirement so you're not likely to find a pre-made tool for this, but rather you'll have to roll your own. To help you with that, it's crucial to understand exactly what you're trying to accomplish (why would you want non-atomic logging entries, if such a concept even made any sense -- what deployment or system administration task are you trying to ameliorate by using such a hypothetical tool?) so that the specialized subsystem can be tailored to your actual needs. So, can you please expand on this?
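A rough sketch of such a handler (my own construction under the assumptions above, not a ready-made facility): it takes the first word of each message as the subtask identifier and forwards a persistent record only when the subtask changes:

import logging

class SubtaskHandler(logging.Handler):
    # Buffers updates per subtask (first word of the message) and forwards a
    # persistent record to a real handler only when the subtask changes.
    def __init__(self, target):
        logging.Handler.__init__(self)
        self.target = target      # e.g. a FileHandler that persists records
        self.current = None       # (subtask_id, latest record) in progress

    def emit(self, record):
        words = record.getMessage().split(None, 1)
        subtask = words[0] if words else ""
        if self.current is not None and self.current[0] != subtask:
            self.target.handle(self.current[1])  # previous subtask finished
        self.current = (subtask, record)

    def close(self):
        if self.current is not None:
            self.target.handle(self.current[1])
        logging.Handler.close(self)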
I don't believe Python's logger supports that.
However, would it not be better to agree on a log format so that the log file can be easily parsed when you want to analyse the data, where ; is any delimiter you want:
DateTime;LogType;string
That could be parsed easily by a script, and then you could do analysis on the logs.
If you use:
Connecting to database . . . Done.
then you won't be able to analyse how long the transaction took.
I don't think you should go down this route. A logging methodology with an entry record:
Time;functionName()->
and an exit record:
Time;functionName()<-
is more useful for troubleshooting.
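A minimal sketch of that entry/exit style with the stdlib logging module (the function and file names are mine; no decorators involved):

import logging

logging.basicConfig(filename="app.log", level=logging.INFO,
                    format="%(asctime)s;%(message)s")
log = logging.getLogger(__name__)

def connect_to_database():
    log.info("connect_to_database()->")  # entry record
    # ... open the connection here ...
    log.info("connect_to_database()<-")  # exit record; the timestamp pair
                                         # yields the call's duration

connect_to_database()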
