CAS support for aerospike python client

I am using Aerospike as the caching layer and need to implement CAS (compare-and-set/swap). While I can find support for this in the PHP client (http://www.aerospike.com/docs/client/php/usage/kvs/write.html), the same is not available for the Python client. Does anyone know whether CAS is supported by the Python client as well, and if so, whether there is any documentation for it?
Thanks!

When you get a record, the metadata contains the generation:
http://www.aerospike.com/apidocs/python/client.html#aerospike-record-tuple
You then need to supply a gen policy to put: http://www.aerospike.com/apidocs/python/client.html#write-policies
Then, on the put call, the meta dict must contain the expected generation. There is an example here: http://www.aerospike.com/apidocs/python/client.html#aerospike.Client.put
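As a rough sketch of that read-modify-write flow (the host, namespace, set, key, and bin names are placeholders):

import aerospike
from aerospike import exception as ex

# Connect to a local server.
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test', 'demo', 'my-key')

# Read the record; the meta dict carries the current generation.
(key, meta, bins) = client.get(key)

# Write back only if the generation is unchanged (CAS semantics).
bins['counter'] = bins.get('counter', 0) + 1
try:
    client.put(key, bins,
               meta={'gen': meta['gen']},
               policy={'gen': aerospike.POLICY_GEN_EQ})
except ex.RecordGenerationError:
    # Another writer updated the record first; retry or give up.
    pass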

Related

Cannot find optimize function for indices in elasticsearch python client

I am using elasticsearch python client 6.4.0
I want to use the optimize API
https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-optimize.html
But I could not find anything about it in the elasticsearch python api doc
https://elasticsearch-py.readthedocs.io/en/master/api.html
I tried using es.indices.optimize(...) but that function does not exist.
I would prefer to use the Python client rather than calling the REST API directly.
You are looking at docs for a very old version of Elasticsearch (1.7). The Optimize API was renamed to Force Merge, and it is available under that name in the Python client (docs).
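A quick sketch, assuming a local node and a placeholder index name:

from elasticsearch import Elasticsearch

# forcemerge is the renamed optimize API in elasticsearch-py.
es = Elasticsearch(['http://localhost:9200'])
es.indices.forcemerge(index='my-index', max_num_segments=1)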

Getting proper api version for azure python delete_by_id method

I'm working on automated deletion of Azure resources based on tags attached to these resources.
I'm using Azure SDK for python (https://github.com/Azure/azure-sdk-for-python) - I found how to get a list of my resources and that I can delete them with ResourceManagementClient with resources.delete_by_id method.
However, this method requires two arguments: the resource id (which I have from the resources listed by ResourceManagementClient) and the API version (which is different for every resource type).
How can I determine which API version should be passed to the method?
I've tried to find something in docs and code of SDK, but I couldn't come up with a proper solution.
API version can be even hardcoded, but it needs to work for all resource types.
When using some api versions (e.g. 2018-05-01) I get error for some of resource types:
Azure Error: NoRegisteredProviderFound
Message: No registered resource provider found for location 'westeurope' and API version '['2018-05-01']' for type 'virtualMachines'. The supported api-versions are '2015-05-01-preview, 2015-06-15, 2016-03-30, 2016-04-30-preview, 2016-08-30, 2017-03-30, 2017-12-01, 2018-04-01, 2018-06-01, 2018-10-01, 2019-03-01'. The supported locations are 'eastus, eastus2, westus, centralus, northcentralus, southcentralus, northeurope, westeurope, eastasia, southeastasia, japaneast, japanwest, australiaeast, australiasoutheast, brazilsouth, southindia, centralindia, westindia, canadacentral, canadaeast, westus2, westcentralus, uksouth, ukwest, koreacentral, koreasouth, francecentral, southafricanorth'.
ERROR: 'CloudError' object has no attribute '__traceback__'
I would recommend the same approach as the CLI implementation: do an initial call to ARM to get the possible mappings from resource provider / resource type to API version(s), and use that to inject the correct API version into your call.
Getting this mapping is a list providers call.
Here is a management sample repo; look for ResourceManagementClient:
https://github.com/Azure-Samples/azure-samples-python-management
Edit: I work at MS in the Python SDK team.
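A rough sketch of that approach, using the SDK names from the question's era; the credentials and ids are placeholders, and newer SDK versions rename some of these (see comments):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

# Placeholder credentials; newer SDKs use azure.identity.DefaultAzureCredential
# and rename delete_by_id to begin_delete_by_id.
credentials = ServicePrincipalCredentials(client_id='...', secret='...',
                                          tenant='...')
client = ResourceManagementClient(credentials, 'subscription-id')

def resolve_api_version(client, resource_id):
    # An id looks like /subscriptions/.../providers/<namespace>/<type>/<name>;
    # this only handles top-level (non-nested) resource types.
    parts = resource_id.split('/')
    idx = parts.index('providers')
    namespace, resource_type = parts[idx + 1], parts[idx + 2]
    provider = client.providers.get(namespace)
    for t in provider.resource_types:
        if t.resource_type.lower() == resource_type.lower():
            # Prefer a stable api version over previews when available.
            stable = [v for v in t.api_versions if 'preview' not in v]
            return (stable or t.api_versions)[0]
    raise ValueError('unknown resource type: %s' % resource_type)

# Delete everything carrying an illustrative tag.
for resource in client.resources.list(filter="tagName eq 'expendable'"):
    client.resources.delete_by_id(resource.id,
                                  resolve_api_version(client, resource.id))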
If I am not mistaken, resources.delete_by_id is a wrapper over the Delete By Id REST API method. Currently the latest API version for that operation is 2018-05-01; you can use that in your method call.

How to interface with opentsdb from Python

I'd like to interface with an opentsdb data store with Python. I only see a java client library for it. How would I go about this?
Unless you want a standalone client (in which case the Twisted Python OpenTSDB Client looks great), the easiest way is to run tcollector and simply drop your Python script under /usr/local/tcollector/collectors/0. Your script is expected to never return and to print one data point per line in this format: metric timestamp value tag1=value1 tag2=value2 ....
tcollector takes care of connecting to OpenTSDB, pushing your data points out, etc. So you can focus on collecting the data you want to collect and write your data collection script in Python or any other scripting language you may like.
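For instance, a minimal collector sketch; the metric name, tag, and interval are illustrative:

#!/usr/bin/env python
import sys
import time

# Loop forever, emitting one data point per line in the
# "metric timestamp value tags" format that tcollector expects.
while True:
    with open('/proc/loadavg') as f:
        load1 = f.read().split()[0]
    print('sys.loadavg.1min %d %s host=myhost' % (int(time.time()), load1))
    sys.stdout.flush()
    time.sleep(15)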
You can also use the Python requests module with the OpenTSDB HTTP API.
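A small sketch posting one data point to the /api/put endpoint (the host, metric, and tag values are placeholders):

import requests

point = {
    'metric': 'sys.loadavg.1min',
    'timestamp': 1546300800,
    'value': 0.42,
    'tags': {'host': 'myhost'},
}
# OpenTSDB's HTTP API listens on port 4242 by default.
resp = requests.post('http://opentsdb.example.com:4242/api/put', json=point)
resp.raise_for_status()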
Give the Twisted Python OpenTSDB Client a try:
http://code.google.com/p/totsdb/source/browse/tostdb.py

How to expose data to zabbix

Here is my goal: I would like to be able to report various metrics to zabbix so that we can display the graphs on a web page.
These metrics include:
latency per soap service submission
various query results from one or more databases.
What things do I need to write and/or expose? Or is the zabbix server going to go and get it from an exposed service somewhere?
I've been advised that a script that returns a single value will work, but I'm wondering if that's the right way.
I can offer 2 suggestions to get the metrics into Zabbix:
Use the zabbix_sender binary to feed the data from your script directly to the Zabbix server. This lets your script run on its own interval and set all the parameters it needs; you really only need to know the location of the zabbix_sender binary. Inside the Zabbix server interface, you would create items with the type Zabbix trapper. This is the item type that receives values sent from zabbix_sender. You make up the key name, and the key your script sends has to match it.
The second way you could do this is to specify a key name and a script/binary inside the zabbix_agentd.conf file. Every time the Zabbix server requests this item, the script is called and the data from the script recorded. This lets you set the intervals in the Zabbix item configuration rather than forcing you to run your script on its own intervals. However, you would need to add this extra bit of information to your zabbix_agentd.conf file on every host.
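Such an entry might look like this (the key name and script path are illustrative):

# In zabbix_agentd.conf: map a custom item key to a local script.
UserParameter=soap.latency,/usr/local/bin/check_soap_latency.py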
There may be other ways to do this directly from Python (zabbix_sender bindings for Python, maybe?), but these are the two ways I have used before, and they work well. This isn't really Python specific, but you should be able to call zabbix_sender from your Python scripts. Hope this information helps!
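For the first option, a minimal sketch shelling out to zabbix_sender from Python; the server, host, and key names are placeholders, and the key must match your trapper item:

import subprocess

subprocess.check_call([
    '/usr/bin/zabbix_sender',
    '-z', 'zabbix.example.com',  # Zabbix server
    '-s', 'myhost',              # host name as configured in Zabbix
    '-k', 'soap.latency',        # trapper item key
    '-o', '123.4',               # value to send
])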
Update: I also remembered that Zabbix has been working on an API (JSON-RPC style), but the documentation site is down at the moment and I am not sure whether the API supports submitting item data. Here is the wiki page on the API: http://www.zabbix.com/wiki/doc/api
And a project for Python API: https://github.com/gescheit/scripts/tree/master/zabbix/
There seems to be little documentation on the API, as it is new as of Zabbix version 1.8.
Actually, there is a Python binding for zabbix_sender: http://pypi.python.org/pypi/zbxsend

How do I create a D-Bus service that dynamically creates multiple objects?

I'm new to D-Bus (and to Python, double whammy!) and I am trying to figure out the best way to do something that was discussed in the tutorial.
However, a text editor application could as easily own multiple bus names (for example, org.kde.KWrite in addition to generic TextEditor), have multiple objects (maybe /org/kde/documents/4352 where the number changes according to the document), and each object could implement multiple interfaces, such as org.freedesktop.DBus.Introspectable, org.freedesktop.BasicTextField, org.kde.RichTextDocument.
For example, say I want to create a wrapper around flickrapi such that the service can expose a handful of Flickr API methods (say, urls_lookupGroup()). This is relatively straightforward if I want to assume that the service will always be specifying the same API key and that the auth information will be the same for everyone using the service.
Especially in the latter case, I cannot really assume this will be true.
Based on the documentation quoted above, I am assuming there should be something like this:
# Get the connection proxy object.
flickrConnectionService = bus.get_object("com.example.FlickrService",
                                         "/Connection")

# Ask the connection object to connect; the return value might be
# something like "/Connection/5512" ...
flickrObjectPath = flickrConnectionService.connect("MY_APP_API_KEY",
                                                   "MY_APP_API_SECRET",
                                                   flickrUsername)

# Get the service proxy object.
flickrService = bus.get_object("com.example.FlickrService",
                               flickrObjectPath)

# Ask the flickr service object to get group information.
groupInfo = flickrService.getFlickrGroupInfo('s3a-belltown')
So, my questions:
1) Is this how this should be handled?
2) If so, how will the service know when the client is done? Is there a way to detect whether the current client has broken the connection, so that the service can clean up its dynamically created objects? Also, how would I create the individual objects in the first place?
3) If this is not how this should be handled, what are some other suggestions for accomplishing something similar?
I've read through a number of D-Bus tutorials and various documentation, and the closest I've come to what I am looking for is what I quoted above. However, none of the examples actually do anything like this, so I am not sure how to proceed.
1) Mostly yes, I would only change one thing in the connect method as I explain in 2).
2) D-Bus connections are not persistent; everything is done with request/response messages, and no connection state is stored unless you implement it in separate objects, as you do with your flickrObject. The D-Bus objects in the Python bindings are mostly proxies that abstract the remote objects as if you were "connected" to them, but what they really do is build messages based on the information you give at D-Bus object instantiation (object path, interface, and so on). So the service cannot know when the client is done unless the client announces it with an explicit call.
To handle unexpected client termination, you can create a D-Bus object in the client and send its object path to the service when connecting; change your connect method to also accept an object path parameter. The service can listen to the NameOwnerChanged signal to know when a client has died.
To create the individual objects, you only have to instantiate objects in the same service, as you do with your "/Connection", but you have to be sure you are using a path that is not already taken. You could have a "/Connection/Manager" and various "/Connection/1", "/Connection/2", and so on.
3) If you need to store connection state, you have to do something like that; a sketch follows below.
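Putting 2) and 3) together, here is a minimal sketch using dbus-python and a GLib main loop; the service and interface names are the placeholders from the question, and all actual Flickr logic is omitted:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

class Connection(dbus.service.Object):
    # Per-client object; the real Flickr methods would live here.
    def __init__(self, bus, path):
        dbus.service.Object.__init__(self, bus, path)

class ConnectionManager(dbus.service.Object):
    def __init__(self, bus):
        dbus.service.Object.__init__(self, bus, '/Connection')
        self.bus = bus
        self.connections = {}  # client's unique bus name -> Connection
        self.counter = 0
        # NameOwnerChanged fires when a bus name changes owner; an empty
        # new_owner means that client has disconnected.
        bus.add_signal_receiver(self.on_name_owner_changed,
                                signal_name='NameOwnerChanged',
                                dbus_interface='org.freedesktop.DBus')

    @dbus.service.method('com.example.FlickrService',
                         in_signature='sss', out_signature='o',
                         sender_keyword='sender')
    def connect(self, api_key, api_secret, username, sender=None):
        # Mint a fresh object path so each client gets its own object.
        self.counter += 1
        path = '/Connection/%d' % self.counter
        self.connections[sender] = Connection(self.bus, path)
        return dbus.ObjectPath(path)

    def on_name_owner_changed(self, name, old_owner, new_owner):
        if new_owner == '' and name in self.connections:
            # The client went away: drop its object from the bus.
            self.connections.pop(name).remove_from_connection()

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()
name = dbus.service.BusName('com.example.FlickrService', bus)
manager = ConnectionManager(bus)
GLib.MainLoop().run()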
