ServiceNow GlideRecord sysparm_query Python

Has anyone used the GlideRecord library for Python? I can't seem to get it to perform some fairly basic functionality: I want to add a few sysparm_query parameters. This is just a code snippet; I had to edit it manually for security purposes, so hopefully I didn't introduce any typos.
for i in (glide1, glide2):
    i.set_credentials('xxxx', 'xxxx')
    i.set_server("https://<instance>.service-now.com/")
    i.addQuery("active", "true")

def getIncidents(glide1):
    group = "mygroup"
    glide1.addQuery('assignment_group', group)
    print(glide1.query_data['sysparm_query'] + '\n')
    print(glide1.getQuery()[50:])  # just to avoid too much output

getIncidents(glide1)
This gives me the output:
active=true^assignment_group=mygroup
displayvalue=true&JSONv2&sysparm_record_count=100&sysparm_action=getRecords&sysparm_query=
I cannot get the query data to append. Perhaps I should look at doing the queries manually? Here is a link to the GlideRecord git:
https://github.com/bazizi/ServiceNow_GlideRecord_API/blob/master/GlideRecord/init.py
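If I do end up doing it manually, I imagine it would look roughly like this; a sketch with requests (untested), reusing the parameters visible in the output above, with the instance URL, table, and credentials as placeholders:

import requests

url = "https://<instance>.service-now.com/incident.do"
params = {
    "JSONv2": "",
    "displayvalue": "true",
    "sysparm_action": "getRecords",
    "sysparm_record_count": "100",
    "sysparm_query": "active=true^assignment_group=mygroup",
}
resp = requests.get(url, params=params, auth=("xxxx", "xxxx"))
for record in resp.json().get("records", []):
    print(record.get("number"))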
Cheers, Arthur

I just realized that the getQuery() member function I had defined only returned the base query URL (not including the query itself). I had initially added this function for testing purposes and mistakenly included it in the documentation.
I just fixed this issue and committed to the GitHub repository. Please pull from the repository again or, if you installed using pip, run the following commands to re-install it from scratch:
pip uninstall GlideRecord
pip install GlideRecord
In terms of setting the assignment group by name, however, I still need to find out how ServiceNow hashes the assignment_group, or whether there is another way this query can be added; that is, I have no fix for now.
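One untested possibility: assignment_group is a reference field that stores a sys_id rather than a hash, and ServiceNow encoded queries support dot-walking, so querying on the group's display name might work; a sketch (unverified against this library, and it assumes addQuery() passes the field name through to sysparm_query verbatim):

# Hypothetical dot-walked query on the reference field's display name.
glide1.addQuery('assignment_group.name', 'mygroup')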
Thanks
Behnam

Python Pip automatically increment version number based on SCM

Similar questions have been raised many times, but I was not able to find a solution for my specific problem.
I was playing around with setuptools_scm recently and at first thought it was exactly what I needed. I have it configured like this:
pyproject.toml
[build-system]
requires = ["setuptools_scm"]
build-backend = "setuptools.build_meta"

[project]
...
dynamic = ["version"]

[tool.setuptools_scm]
write_to = "src/hello_python/_version.py"
version_scheme = "python-simplified-semver"
and my __init__.py:
from ._version import __version__
from ._version import __version_tuple__
Relevant features it covers for me:
I can use semantic versioning
it is able to use *.*.*.devN version strings
it increments minor version in case of feature-branches
it increments patch/micro version in case of fix-branches
This is all cool. As long as I am on my feature-branch I am able to get the correct version strings.
What I like particularly is that the dev version string contains the commit hash and is thus unique across multiple branches.
My workflow now looks like this:
create feature or fix branch
commit, (push, ) publish
merge PR to develop-branch
As soon as I am on my feature-branch I am able to run python -m build, which generates a new _version.py with the correct version string according to the latest git tag found. If I add new commits, that is fine, as the devN part of the version string changes due to the commit hash. I would even be able to run python -m twine upload dist/* now: my package is built with the correct version, so I simply publish it. This works perfectly fine locally and on CI, for fix and feature branches alike.
The problem I am facing now is that I need a slightly different behavior for my merged pull requests.
As soon as I merge, e.g. 0.0.1.dev####, I want to run my Jenkins job not on the feature-branch anymore, but on the develop-branch instead. The important part is, I want to:
get develop-branch (done by CI)
update the version string to the same as on the branch, but without devN, so: 0.0.1
build and publish
In fact, setuptools_scm is changing the version to 0.0.2.dev### now, and I would like to have 0.0.1.
I was tinkering a bit with creating git tags before running setuptools_scm or build, but I was not able to derive the correct version string to put into the tag. At this point I am stuck.
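The closest I've come is asking setuptools_scm directly what version it would derive and stripping the dev suffix myself; a sketch (untested in CI, and the .dev-stripping is my own workaround, not a setuptools_scm feature):

from setuptools_scm import get_version

# Version setuptools_scm would derive for the current checkout,
# e.g. "0.0.1.dev3+g1a2b3c".
dev_version = get_version(version_scheme="python-simplified-semver")

# Strip the ".devN+g<hash>" suffix to get the clean release version to tag.
clean_version = dev_version.partition(".dev")[0]
print("git tag v" + clean_version)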
Is anyone aware of a solution that achieves the following?
minor increment on feature-branches + add .devN
patch/micro increment on fix-branches + add .devN
no increment on develop-branch and version string only containing major.minor.patch of merged branch
TLDR: turning off writing the version number to a file every time setuptools_scm runs could solve your problem; alternatively, add the version file to .gitignore.
Explanation:
I also just started using setuptools_scm, so I am not very confident in using it yet.
But, as far as I understand, the version number is derived and incremented according to the state of your repository (the detailed logic is documented here: https://github.com/pypa/setuptools_scm/#default-versioning-scheme).
If I am not mistaken, the tool does exactly what it is expected to: it does NOT set a version derived only from the tag, but also adds a devSomething, because in your case the tag you've set is not referencing the most recent commit on the develop branch head.
I also had the problem that letting setuptools_scm generate a version and configuring it to write that version to a file would lead to another change since the last commit, again generating a dev version number.
To get a "clean" (e.g. v0.0.1) version number I had to do the tagging after merging (with a merge commit), since the merge commit is also taken into account by the version numbering logic.
Still, my setup is currently less complex than yours: just feature and fix branches, and just a main branch without develop, so fewer merge commits (I chose to do merge commits, so no linear history). Now, after merging with a commit, I create a tag manually and formulate its name myself.
And this also only works for me if I opt out of writing the version number into a file. I have done this by inserting the following into pyproject.toml:
[tool.setuptools_scm]
# intentionally empty: the write_to option leads to an unclean workspace
# during build, which in turn leads setuptools_scm to produce wheels with
# unclean (dev) version numbers
# write_to = "version.txt"
Since setuptools_scm runs during build, a new version file is generated, which pollutes your worktree. Since your worktree will never be clean this way, you always get a dev version number. To still have a version file and have it ignored during build, add the file to your .gitignore.
My approach is not perfect and involves some manual steps, but for now it works for me.
It is certainly not 100% applicable in your CI scenario, but maybe you could change the order of doing merges and tags. I hope this helps somehow.

aws python cdk: how to solve constructs version conflict (trying to create httpapi)

Goal (quite simple, or at least I thought so):
Returning HTML-Content from a lambda for a web browser.
Defining it with python aws cdk
As far as I understand it, I've got to put my lambda into an HttpLambdaIntegration, which I can then use in a route added to my HttpApi.
Here is the example:
from aws_cdk.aws_apigatewayv2_integrations import HttpLambdaIntegration

# books_default_fn: lambda.Function
books_integration = HttpLambdaIntegration("BooksIntegration", books_default_fn)

http_api = apigwv2.HttpApi(self, "HttpApi")
http_api.add_routes(
    path="/books",
    methods=[apigwv2.HttpMethod.GET],
    integration=books_integration
)
https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigatewayv2_integrations/README.html
So far I found:
HttpLambdaIntegration here: https://pypi.org/project/aws-cdk.aws-apigatewayv2-integrations/
HttpApi here: https://pypi.org/project/aws-cdk.aws-apigatewayv2-alpha/#defining-http-apis
But I can't install both because of a version clash:
The conflict is caused by:
    aws-cdk-lib 2.56.1 depends on constructs<11.0.0 and >=10.0.0
    aws-cdk-aws-apigatewayv2-alpha 2.56.1a0 depends on constructs<11.0.0 and >=10.0.0
    aws-cdk-aws-apigatewayv2-integrations 1.184.1 depends on constructs<4.0.0 and >=3.3.69
Am I using the wrong packages? Or am I on the wrong track entirely? The whole version mash-up is quite confusing...
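My current suspicion is that I'm mixing generations: aws-cdk.aws-apigatewayv2-integrations 1.184.1 is a CDK v1 package, while aws-cdk-lib 2.56.1 is CDK v2. If that's right, the v2-compatible integrations should come from the matching alpha package instead, pinned to the same version. A sketch of what I think the install and imports would look like (unverified; the package and module names are my assumption):

# Assumed install:
#   pip install aws-cdk-lib==2.56.1 \
#       aws-cdk.aws-apigatewayv2-alpha==2.56.1a0 \
#       aws-cdk.aws-apigatewayv2-integrations-alpha==2.56.1a0
import aws_cdk.aws_apigatewayv2_alpha as apigwv2
from aws_cdk.aws_apigatewayv2_integrations_alpha import HttpLambdaIntegration

# ...then exactly as in the docs example above:
# books_integration = HttpLambdaIntegration("BooksIntegration", books_default_fn)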
UPDATE (A few days later)
OK, good news: I got it running ... but I switched to TypeScript, following this example: https://bobbyhadz.com/blog/aws-cdk-http-api-apigateway-v2-example
And I still had to install using npm with --force, which always makes me somewhat unhappy, but at least I could, and it works.
I've now come to the conclusion that I can't really recommend the Python AWS CDK. This is at least the third time I've tried it, and at least the third time I've run into trouble ... There are just a lot more TypeScript examples!

Batch call Dependency id requirements?

I have a script which does the following:
Create campaign
Create AdSet (requires campaign_id)
Create AdCreative (requires adset_id)
Create Ad (requires creative_id and adset_id)
I am trying to lump all of them into a batch request. However, I realized that none of these gets created, except for my campaign (step 1), if I use remote_create(batch=my_batch). This is probably due to the dependencies on the ids needed by each of the subsequent steps.
I read the documentation, and it mentions that one can specify "dependencies between operations in the request" (https://developers.facebook.com/docs/graph-api/making-multiple-requests) between calls via {result=(parent operation name):(JSONPath expression)}.
Is this possible with the Python API?
Can this be achieved with the way I am using remote_create?
Unfortunately, the Python SDK doesn't currently support this. There is a GitHub issue for it: https://github.com/facebook/facebook-python-ads-sdk/issues/256.
I have encountered this issue as well, and have described my workaround in the comments on the issue:
"I found a decent workaround for getting this behaviour without too much trouble. Basically, I set the id fields that have dependencies to values like "{result=<operation name>:$.id}", and prior to calling execute() on the batch object I iterate over ._batch and add the corresponding 'name' entry to each operation. When I run execute, sure enough, it works perfectly. Obviously this solution does have its limitations, such as where you make multiple calls to the same endpoint that need to be fed into other endpoints: you would end up with duplicated resource names and would need to customize the names further to string them together.
Anyways, hope this helps someone!"
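A sketch of what that workaround might look like (hypothetical object and operation names, based on the old facebook-python-ads-sdk, whose batch entries are plain dicts on the private ._batch list):

from facebookads.api import FacebookAdsApi
from facebookads.adobjects.campaign import Campaign
from facebookads.adobjects.adset import AdSet

# Assumes FacebookAdsApi.init(...) was called earlier.
api = FacebookAdsApi.get_default_api()
batch = api.new_batch()

# Queue the campaign create, then reference its id from the adset create
# via the Graph API's {result=<name>:<JSONPath>} dependency syntax.
campaign = Campaign(parent_id='act_<AD_ACCOUNT_ID>')
campaign[Campaign.Field.name] = 'My Campaign'
campaign.remote_create(batch=batch)

adset = AdSet(parent_id='act_<AD_ACCOUNT_ID>')
adset[AdSet.Field.campaign_id] = '{result=create_campaign:$.id}'
adset.remote_create(batch=batch)

# The SDK doesn't expose operation names, so set them on the raw batch
# entries before executing (this relies on the private ._batch attribute).
for entry, name in zip(batch._batch, ['create_campaign', 'create_adset']):
    entry['name'] = name

batch.execute()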

How to get a list of Ostream or Oinfo in a variable from a repository path in gitpython?

I currently have a valid git database with no packfile, but due to a bug in git-pack-objects (the process crashes with a stack dump file) I'm unable to perform the git repack command.
I took a look at the error, and it's linked to the C nature of the official git project (fixing it would require changing core struct definitions), so this will take a lot of time to fix.
The only alternative I found which doesn't use C is gitdb (part of gitpython). However, I wasn't able to find out how to use the write_pack() function.
Or more exactly, I have no idea how to build the object_iter parameter from the database path of the loose objects.
I don't even know the exact class type used in the object_iter list.
So, how do I use gitdb for that purpose?
Solved!
#!/usr/bin/python
# Usage: repack.py /path/to/repository
import sys, zlib
from gitdb.db import LooseObjectDB
from gitdb.pack import PackEntity

# object_iter is just a generator of loose-object streams.
ldb = LooseObjectDB(sys.argv[1] + '/.git/objects')
PackEntity.create(
    (ldb.stream(sha) for sha in ldb.sha_iter()),
    sys.argv[1] + '/.git/objects/pack',
    object_count=ldb.size(),
    zlib_compression=zlib.Z_BEST_COMPRESSION,
)

XMP image tagging and Python

If I were to tag a bunch of images via XMP, in Python, what would be the best way? I've used Perl's Image::ExifTool and I am very much used to its reliability; the thing never bricked on tens of thousands of images.
I found this, backed by some heavy-hitters like the European Space Agency, but it's clearly marked as unstable.
Now, assuming I am comfortable with C++, how easy is it to, say, use the Adobe XMP toolkit directly, in Python? Having never done this before, I am not sure what I'd sign up for.
Update: I tried some of the libraries out there, including the aforementioned toolkit, and they are still pretty immature with glaring problems. I resorted to writing a Perl-based server that accepts XML requests to read and write metadata, using the combat-tested Image::EXIF. The amount of code is actually very light, and it definitely beats torturing yourself trying to get the Python libraries to work. The server solution is also language-agnostic, so it's a twofer.
Well, the website says that python-xmp-toolkit uses Exempi, which is based on the Adobe XMP Toolkit, via ctypes. What I'm trying to say is that you're not likely to create a better wrapping of the C++ code yourself. If it's unstable (i.e. buggy), it's most likely still cheaper for you to contribute patches than to do it yourself from scratch.
However, in your particular situation, it depends on how much functionality you need. If you just need a single function, then wrapping the C++ code in a small C extension library, or with Cython, is feasible. If you need all the functionality and flexibility, you have to create wrappers manually or with SWIG, basically repeating work already done by other people.
I struggled for several hours with python-xmp-toolkit, and eventually gave up and just wrapped calls to ExifTool.
There is a Ruby library that wraps ExifTool as well (albeit much better than what I created); I feel it would be worth porting to Python for a simple way of dealing with XMP.
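In the meantime, shelling out to the exiftool binary directly is only a few lines; a minimal sketch, assuming exiftool is on the PATH and using its list-type XMP:Subject tag for keywords:

import subprocess

def set_xmp_subject(path, keywords):
    # -overwrite_original avoids leaving "_original" backup files behind;
    # repeated -XMP:Subject= assignments accumulate into the keyword list.
    args = ["exiftool", "-overwrite_original"]
    args += ["-XMP:Subject=" + kw for kw in keywords]
    subprocess.run(args + [path], check=True)

set_xmp_subject("photo.jpg", ["holiday", "beach"])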
For Python 3.x there's py3exiv2, which supports editing XMP metadata.
With py3exiv2 you can read and write all standard metadata, create your own XMP namespace, or extract the thumbnail embedded in an image file.
One thing I like about py3exiv2 is that it's built on the (C++) exiv2 library, which seems well-maintained.
I did encounter a problem, though, when installing it on my system (Ubuntu 16.04). To get it working I first had to install the latest version of libexiv2-dev (sudo apt-get install libexiv2-dev), and only after this install py3exiv2 (sudo -H pip3 install py3exiv2).
Here's how I've used py3exiv2 to write a new tag:
import pyexiv2
metadata = pyexiv2.ImageMetadata("file_name.jpg")
metadata.read()
key = "Xmp.xmp.CustomTagKey"
value = "CustomTagValue"
metadata[key] = pyexiv2.XmpTag(key, value)
metadata.write()
(There's also a tutorial in the documentation)
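Reading the tag back is symmetric (same file and key as in the snippet above):

import pyexiv2

metadata = pyexiv2.ImageMetadata("file_name.jpg")
metadata.read()
print(metadata["Xmp.xmp.CustomTagKey"].value)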
For people finding this thread in the future, I would like to share my solution. I put a package up on the Python Package Index (PyPI) called imgtag. It lets you do basic XMP subject-field tag editing using python-xmp-toolkit, but abstracts all of the frustrating nonsense of actually using python-xmp-toolkit away into one-line commands.
Install exempi for your platform, then run:
python3 -m pip install imgtag
Now you can use it as such:
from imgtag import ImgTag

# Open image for tag editing
test = ImgTag(
    filename="test.jpg",   # The image file
    force_case="lower",    # Converts the case of all tags;
                           # can be None, "lower", "upper" (default: None)
    strip=True,            # Strips whitespace from the ends of all tags (default: True)
    no_duplicates=True     # Removes all duplicate tags, case sensitive (default: True)
)

# Print existing tags
print("Current tags:")
for tag in test.get_tags():
    print("  Tag:", tag)

# Add tags
test.add_tags(["sleepy", "happy"])

# Remove tags
test.remove_tags(["cute"])

# Set tags, removing all existing tags
test.set_tags(["dog", "good boy"])

# Save changes and close file
test.close()

# Re-open for tag editing
test.open()

# Remove all tags
test.clear_tags()

# Delete the ImgTag object, automatically saving and closing the file
del test
I haven't yet added methods for the other XMP fields like description, date, creator, etc. Maybe someday I will, but if you look at how the existing functions work in the source code, you can probably figure out how to add a method yourself. If you do add more methods, please make a pull request. :)
You can use ImageMagick's convert; IIRC there's a Python module for it as well.
