I am looking to create a CI/CD pipeline for my Azure SQL Database. I have read about the state-based and migration-based approaches described here:
https://devblogs.microsoft.com/azure-sql/devops-for-azure-sql/
However, I want to know whether there is an approach I can use to do this in Python. I am looking to deploy both schema and data changes to the other environment through my pipeline. Ideally, I would also like a way to deploy only selected data, for example by filtering on a stage column so that only production records are moved.
What kind of approach can I take to accomplish this?
It does not matter if I need to trigger this CI/CD pipeline manually through an API call or something similar; I believe that is also possible in Azure Pipelines.
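To make the data part concrete, this rough sketch (using pyodbc; the table name, columns, and connection strings are just placeholders of mine) is the kind of selective copy I have in mind:

import pyodbc

# Placeholder connection strings for the source and target Azure SQL databases.
SOURCE = "Driver={ODBC Driver 18 for SQL Server};Server=tcp:source.database.windows.net;Database=app;Uid=deploy;Pwd=<password>"
TARGET = "Driver={ODBC Driver 18 for SQL Server};Server=tcp:target.database.windows.net;Database=app;Uid=deploy;Pwd=<password>"

src = pyodbc.connect(SOURCE)
tgt = pyodbc.connect(TARGET)

# Pull only the rows flagged for production via the hypothetical "stage" column.
src_cur = src.cursor()
src_cur.execute("SELECT id, name, stage FROM dbo.MyTable WHERE stage = ?", "production")

tgt_cur = tgt.cursor()
for row in src_cur.fetchall():
    # Plain inserts for illustration; a MERGE statement would be more robust
    # for repeated deployments.
    tgt_cur.execute(
        "INSERT INTO dbo.MyTable (id, name, stage) VALUES (?, ?, ?)",
        row.id, row.name, row.stage,
    )
tgt.commit()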
What you can do is deploy the Azure SQL database normally and then apply your changes by adding a separate task to the pipeline .yaml file that executes them.
For this you can either use a DACPAC or run a SQL script directly.
In both cases you have to create the SQL artifact (the DACPAC or the script) before deploying.
For a DACPAC, add a task of the following type:
- task: SqlAzureDacpacDeployment@1
  displayName: 'Execute Azure SQL : DacpacTask'
  inputs:
    azureSubscription: '<Azure service connection>'
    ServerName: '<Database server name>'
    DatabaseName: '<Database name>'
    SqlUsername: '<SQL user name>'
    SqlPassword: '<SQL user password>'
    DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'
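If you specifically want to drive the DACPAC deployment from Python rather than from a YAML task, one option is to shell out to the SqlPackage command-line tool. This is only a sketch: it assumes sqlpackage is installed and on the PATH, and the DACPAC path and connection string are placeholders.

import subprocess

# Publish a previously built DACPAC to the target Azure SQL database.
subprocess.run(
    [
        "sqlpackage",
        "/Action:Publish",
        "/SourceFile:bin/Release/MyDatabase.dacpac",
        "/TargetConnectionString:Server=tcp:target.database.windows.net;"
        "Database=MyDatabase;User ID=deploy;Password=<password>;Encrypt=True;",
    ],
    check=True,  # raise if sqlpackage returns a non-zero exit code
)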
For a SQL script, add a task of the following type (note that AzureMysqlDeployment@1 targets Azure Database for MySQL; for Azure SQL Database, the SqlAzureDacpacDeployment@1 task above can also run plain SQL scripts):
- task: AzureMysqlDeployment@1
  inputs:
    ConnectedServiceName: # Or alias azureSubscription
    ServerName:
    #DatabaseName: # Optional
    SqlUsername:
    SqlPassword:
    #TaskNameSelector: 'SqlTaskFile' # Optional. Options: SqlTaskFile, InlineSqlTask
    #SqlFile: # Required when taskNameSelector == SqlTaskFile
    #SqlInline: # Required when taskNameSelector == InlineSqlTask
    #SqlAdditionalArguments: # Optional
    #IpDetectionMethod: 'AutoDetect' # Options: AutoDetect, IPAddressRange
    #StartIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #EndIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #DeleteFirewallRule: true # Optional
For a detailed explanation, please refer to the documentation for the DACPAC task and to the documentation for the SQL script task.
As far as I can tell, there is currently no API to start or stop a pre-existing pipeline; I consulted this documentation for this.
I am trying to accomplish the following:
If you are using the Fargate launch type for your tasks, all you need to do to turn on the awslogs log driver is add the required logConfiguration parameters to your task definition.
I am using the CDK to generate the Fargate task definition:
task_definition = _ecs.FargateTaskDefinition(self, "TaskDefinition",
    cpu=2048,
    memory_limit_mib=4096,
    execution_role=ecs_role,
    task_role=ecs_role,
)
task_definition.add_container("getFileTask",
    memory_limit_mib=4096,
    cpu=2048,
    image=_ecs.ContainerImage.from_asset(directory="assets", file="Dockerfile-ecs-file-download"),
)
I looked through the documentation and did not find any attribute called logConfiguration.
What am I missing?
I am not able to send logs from the container running on ECS/Fargate to CloudWatch; what I need is to enable this logConfiguration option in the task definition.
Thank you for your help.
Regards
Finally figured out that the logging option of add_container is the one to use.
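For anyone else landing here, this is roughly what that looks like, based on the snippet from the question (the stream prefix is just an example name):

from aws_cdk import aws_ecs as _ecs

task_definition.add_container("getFileTask",
    memory_limit_mib=4096,
    cpu=2048,
    image=_ecs.ContainerImage.from_asset(directory="assets", file="Dockerfile-ecs-file-download"),
    # "logging" is the CDK-level equivalent of the raw logConfiguration block;
    # LogDrivers.aws_logs sends the container output to CloudWatch Logs.
    logging=_ecs.LogDrivers.aws_logs(stream_prefix="getFileTask"),
)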
I have a requirement to enable a failover/secondary database for a DB2 database hosted on a Linux server, for a Python application using the ibm_db package.
With a JDBC driver, you can easily add the following parameters to the connection string:
clientRerouteAlternatePortNumber=port#
clientRerouteAlternateServerName=servername
enableSeamlessFailover=1
Since the ibm_db package uses the CLI driver, these parameters are not the same. Through the IBM documentation I found the following parameters:
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.embed.doc/doc/c0060428.html
enableAlternateServerListFirstConnect
alternateserverlist
maxAcrRetries
However, according to the instructions in the link below, it seems like it is only possible to set them in the driver configuration file db2dsdriver.cfg:
https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.cli.doc/doc/c0056196.html
I know a lot of driver parameters are configurable in the connection string, and I wanted to know whether these particular ones can be included there as well. Is there any documentation/verification that something like this can work:
import ibm_db_dbi
connect = ibm_db_dbi.connect("DATABASE=whatever; \
HOSTNAME=whatever; \
PORT=whatever; \
PROTOCOL=TCPIP; \
UID=whatever; \
PWD=whatever; \
CURRENTSCHEMA=whatever;\
AUTHENTICATION=SERVER_ENCRYPT;\
ClientEncAlg=2;\
enableAlternateServerListFirstConnect=True;\
alternateserverlist=server1,port1,server2,port2;\
maxAcrRetries=2", "", "")
Thank you for any help.
A helpful page is this one.
Note the different keyword/parameter names between JDBC/SQLJ and the CLI driver.
The idea is that with the CLI driver, if the Db2-LUW instance is properly configured, the driver will get the ACR (automatic client reroute) details from the instance automatically and useful defaults will apply. So you might not need to add extra keywords to the connection string unless you are tuning the behaviour.
The HA-related keyword parameters for CLI are below:
acrRetryInterval
alternateserverlist
detectReadonlyTxn
enableAcr
enableAlternateGroupSeamlessACR
enableAlternateServerListFirstConnect
enableSeamlessAcr
maxAcrRetries
More details here. Note that if enableAcr=true (the default), then enableSeamlessAcr=true (also the default).
Although the docs mention db2dsdriver.cfg, most CLI parameters/keywords are also settable in the connection string unless specifically excluded, so do your own testing to verify.
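For example, a quick test along these lines (a sketch only: the host names, port, and credentials are placeholders, and the alternateserverlist value format is taken from the question's attempt, so verify it against the documentation for your driver level):

import ibm_db_dbi

# Connection string including the CLI automatic client reroute keywords.
conn_str = (
    "DATABASE=mydb;"
    "HOSTNAME=primary.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=dbuser;"
    "PWD=secret;"
    "enableAcr=true;"
    "maxAcrRetries=2;"
    "alternateserverlist=standby.example.com,50000;"  # value format as guessed in the question
)

try:
    conn = ibm_db_dbi.connect(conn_str, "", "")
    # A successful connect means the keywords were at least not rejected;
    # whether ACR actually engages still needs a real failover test.
    print("Connected")
    conn.close()
except Exception as exc:
    print("Connection failed:", exc)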
I have the command line below. It gives me the namespaces and ingressroute names (see the example below).
kubectl --context mlf-eks-dev get --all-namespaces ingressroutes
Example
NAMESPACE NAME AGE
aberdeentotalyes aberdeentotalgeyes-yesgepi 98d
I"m trying to replicate the kubetl command line above through my python code, using the python client for kubernetes but I miss on the option, on how to get the ingressroutes.
# My python code
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s" % (i.metadata.namespace, i.metadata.name))
The results are below
wwwpaulolo ci-job-backup-2kt6f
wwwpaulolo wwwpaulolo-maowordpress-7dddccf44b-ftpbq
wwwvhugo ci-job-backup-dz5hs
wwwvhugo wwwvhua-maowordpress-5f7bdfdccd-v7kcx
I have the namespaces but not the ingressroutes, as I don't know how to get them.
The question is: what is the option to get the ingressroutes, please?
Any help is more than welcome.
Cheers
After a careful reading of the documentation of the existing APIs, I found none that can be used for ingressroutes; they are still in beta, and IngressRoute is not a native Kubernetes component.
I also asked on the Kubernetes Slack, in the Kubernetes client channel.
Alan C on the Kubernetes Slack gave me this answer:
There's no class for ingressroutes. You can use the dynamic client to
support arbitrary resource types (see link)
My solution was to use the Python subprocess library to run the kubectl command line, as sketched below.
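A minimal sketch, assuming kubectl is on the PATH and the mlf-eks-dev context exists:

import json
import subprocess

# Shell out to kubectl and ask for JSON so the output is easy to parse.
result = subprocess.run(
    ["kubectl", "--context", "mlf-eks-dev", "get",
     "--all-namespaces", "ingressroutes", "-o", "json"],
    capture_output=True, text=True, check=True,
)

for item in json.loads(result.stdout)["items"]:
    print(item["metadata"]["namespace"], item["metadata"]["name"])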
Did you try CustomObjectsApi?
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
dir(api)  # check for list_cluster_custom_object

# The example below will list all issuers in the cluster.
# Issuer is a custom resource of cert-manager.io.
api.list_cluster_custom_object(group="cert-manager.io",
                               version="v1", plural="issuers")
I'm trying to use ipython-cypher to run Neo4j Cypher queries (and return a Pandas dataframe) in a Python program. I have no trouble forming a connection and running a query when using IPython Notebook, but when I try to run the same query outside of IPython, as per the documentation:
http://ipython-cypher.readthedocs.org/en/latest/introduction.html#usage-out-of-ipython
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
"http://XXX.XXX.X.XXX:xxxx")
I get the following error:
neo4jrestclient.exceptions.StatusException: Code [401]: Unauthorized. No permission -- see authorization schemes.
Authorization Required
and
Format: (http|https)://username:password@hostname:port/db/name, or one of dict_keys([])
Now, I was just guessing that that was how I should enter a Connection object as the last parameter, because I couldn't find any additional documentation explaining how to connect to a remote host using Python, and in IPython, I am able to do:
%load_ext cypher
results = %cypher http://XXX.XXX.X.XXX:xxxx MATCH (n)--(m) RETURN n.username,
count(m) as neighbors
Any insight would be greatly appreciated. Thank you.
The documentation has a section on the API. When using it outside of IPython and needing to connect to a different host, just using the conn parameter and passing a string should work.
import cypher
results = cypher.run("MATCH (n)--(m) RETURN n.username, count(m) as neighbors",
conn="http://XXX.XXX.X.XXX:xxxx")
But also consider that, with the new authentication support in Neo4j 2.2, you need to set the new password before connecting from ipython-cypher. I will fix this as soon as I implement the forced password change mechanism in neo4jrestclient, the library underneath.
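Once the password has been set, the error message in the question suggests that credentials can go directly into the connection string. A sketch, where neo4j/s3cr3t are placeholder credentials and /db/data is the default REST endpoint path in Neo4j 2.x (get_dataframe() needs pandas installed):

import cypher

# Credentials embedded in the URI, following the format reported by the error:
# (http|https)://username:password@hostname:port/db/name
results = cypher.run(
    "MATCH (n)--(m) RETURN n.username, count(m) AS neighbors",
    conn="http://neo4j:s3cr3t@XXX.XXX.X.XXX:xxxx/db/data",
)
df = results.get_dataframe()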
I just got bit by my functional tests not using the same settings as my dev_appserver. I currently run my dev_appserver (non-rel) with require_indexes.
How do I force my testbed to use the same settings?
I have tried using SetupIndexes, but it did not "require" that they be defined in my index.yaml. Because I did not have the setting right, I can run any query I want.
i.e.
clz.testbed = Testbed()
clz.testbed.activate()
clz.testbed.init_memcache_stub()
clz.testbed.init_taskqueue_stub()
clz.testbed.init_urlfetch_stub()
clz.testbed.init_datastore_v3_stub(use_sqlite=True, datastore_file=somepath)
SetupIndexes('','')
model.objects().filter(x=1, y=2.....) #will work regardless of index defined.
but when the query executes on the server I get the
NeedIndexError: This query requires a composite index that is not defined. You must update the index.yaml file in your application root.
The following index is the minimum index required:
Try passing require_indexes=True as a keyword argument to init_datastore_v3_stub().
You can look through (and step through) the SDK code to see how that parameter is eventually passed into the datastore stub.
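Applied to the setup from the question, that would look something like this (require_indexes is the keyword the datastore stub accepts; the root_path argument is an assumption on my part, pointing the stub at the directory containing index.yaml, and may or may not be needed depending on your SDK version):

clz.testbed = Testbed()
clz.testbed.activate()
clz.testbed.init_memcache_stub()
clz.testbed.init_taskqueue_stub()
clz.testbed.init_urlfetch_stub()
clz.testbed.init_datastore_v3_stub(
    use_sqlite=True,
    datastore_file=somepath,
    require_indexes=True,            # fail queries whose index is missing from index.yaml
    root_path="/path/to/app/root",   # assumed: directory that holds index.yaml
)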