Create stored procedure activity from Python for ADF V2

Getting the following error for code that creates stored procedure activity for ADF V2.
Unable to build a model: Unable to deserialize response data. Data: {DateToImportFor :{value:2017-01-04,type:str}}, {StoredProcedureParameter}, AttributeError: 'str' object has no attribute 'items', DeserializationError: Unable to deserialize response data. Data: {DateToImportFor :{value:2017-01-04,type:str}}, {StoredProcedureParameter}, AttributeError: 'str' object has no attribute 'items'
My code:
spActivity_Name = "AzureSQLDWStoredProcedureActivity"
storedproc_name = "DailyImport"
linkedService = LinkedServiceReference(lsdw)
p = "{%s :{value:%s,type:str}}" % ('DateToImportFor', '2017-01-04')
lsp_activity = SqlServerStoredProcedureActivity(name=spActivity_Name,
                                                stored_procedure_name=storedproc_name,
                                                stored_procedure_parameters=p,
                                                linked_service_name=linkedService)
Since I am very new to Python, I am not sure how to construct the parameters object.
# stored_procedure_parameters is expected in the following format:
:param stored_procedure_parameters: Value and type setting for stored
procedure parameters. Example: "{Parameter1: {value: "1", type: "int"}}".
:type stored_procedure_parameters: dict[str,
~azure.mgmt.datafactory.models.StoredProcedureParameter]
"""
p_name = 'StoredProcPipeLine'
params_for_pipeline = {}
p_obj = PipelineResource(activities=[lsp_activity], parameters=params_for_pipeline)
p = adf_client.pipelines.create_or_update(rg_name, df_name, p_name, p_obj)

The example in the documentation is a little inaccurate, since it shows stored_procedure_parameters as a string.
Try setting p as a dictionary:
from azure.mgmt.datafactory.models import StoredProcedureParameter
p = {'DateToImportFor': StoredProcedureParameter(value='2017-01-04', type='String')}
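Putting that together with the rest of the snippet from the question, a minimal sketch (assuming lsdw, adf_client, rg_name and df_name are already defined as in the original code; exact constructor keywords can vary slightly between SDK versions):
from azure.mgmt.datafactory.models import (LinkedServiceReference, PipelineResource,
                                           SqlServerStoredProcedureActivity,
                                           StoredProcedureParameter)

# Parameters go in as a dict of StoredProcedureParameter objects, not a formatted string
params = {'DateToImportFor': StoredProcedureParameter(value='2017-01-04', type='String')}

sp_activity = SqlServerStoredProcedureActivity(
    name='AzureSQLDWStoredProcedureActivity',
    stored_procedure_name='DailyImport',
    stored_procedure_parameters=params,
    linked_service_name=LinkedServiceReference(reference_name=lsdw))  # keyword may differ by version

p_obj = PipelineResource(activities=[sp_activity], parameters={})
adf_client.pipelines.create_or_update(rg_name, df_name, 'StoredProcPipeLine', p_obj)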

Related

Python + JSON Formatting ('list' object has no attribute 'values')

I'm getting the following error. I'm confident the error is due to my JSON formatting. How should I be formatting my JSON file?
Exception has occurred: AttributeError
'list' object has no attribute 'values'
The error is occurring on the following line
total_sites = len(custom_sites.values())
The function it's trying to execute
def get_available_websites():
    string = []
    with open('settings.json') as available_file:
        for sites in json.load(available_file)['custom_sites']:
            string.append(sites + ", ")
    return (''.join(string))[:-2]
The JSON file
{
    "custom_sites": [
        "https://github.com",
        "https://test.com"
    ]
}
I've tried various changes in the JSON file, alternating between [] and {}.
custom_sites will be a list, not a dict, so it won't have a values attribute. You can just check the length of the list itself.
import json
json_str = """{
    "custom_sites": [
        "https://github.com",
        "https://test.com"
    ]
}"""
custom_sites = json.loads(json_str)["custom_sites"]
total_sites = len(custom_sites)
print(f"{total_sites=}")
OUTPUT
total_sites=2
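Applied to the settings.json from the question, a minimal sketch (assuming the file sits next to the script):
import json

with open('settings.json') as available_file:
    custom_sites = json.load(available_file)['custom_sites']  # a list of URLs

total_sites = len(custom_sites)          # count the entries directly
available = ", ".join(custom_sites)      # same joined string the original function built
print(total_sites, available)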

Azure Data Factory Python SDK - Convert PipelineResource to JSON

I am creating Azure Data Factory pipeline using Python SDK (azure.mgmt.datafactory.models.PipelineResource). I need to convert PipelineResource object to JSON file. Is it possible anyhow?
I tried json.loads(pipeline_object) and json.dumps(pipeline_object), but no luck.
I need to convert PipelineResource object to JSON file. Is it possible anyhow?
You can try the following code snippet, as suggested by mccoyp:
You can pass a default argument to json.dumps to convert objects that are not JSON serializable into a dict:
import json
from azure.mgmt.datafactory.models import Activity, PipelineResource
activity = Activity(name="activity-name")
resource = PipelineResource(activities=[activity])
json_dict = json.dumps(resource, default=lambda obj: obj.__dict__)
print(json_dict)
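Since the question asks for a JSON file rather than just a printed string, the same trick can write to disk; a sketch (the file name is just an example):
import json
from azure.mgmt.datafactory.models import Activity, PipelineResource

activity = Activity(name="activity-name")
resource = PipelineResource(activities=[activity])

# Fall back to each object's __dict__ for anything json can't serialize natively
with open("pipeline.json", "w") as f:
    json.dump(resource, f, default=lambda obj: obj.__dict__, indent=2)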
You can use this:
# Assumes the datasets, linked services and adf_client from the ADF quickstart are already set up
from azure.mgmt.datafactory.models import (BlobSource, BlobSink, DatasetReference,
                                           CopyActivity, PipelineResource)

# Create a copy activity
act_name = 'copyBlobtoBlob'
blob_source = BlobSource()
blob_sink = BlobSink()
dsin_ref = DatasetReference(reference_name=ds_name)
dsOut_ref = DatasetReference(reference_name=dsOut_name)
copy_activity = CopyActivity(name=act_name,inputs=[dsin_ref], outputs=[dsOut_ref], source=blob_source, sink=blob_sink)
#Create a pipeline with the copy activity
#Note1: To pass parameters to the pipeline, add them to the json string params_for_pipeline shown below in the format { “ParameterName1” : “ParameterValue1” } for each of the parameters needed in the pipeline.
#Note2: To pass parameters to a dataflow, create a pipeline parameter to hold the parameter name/value, and then consume the pipeline parameter in the dataflow parameter in the format #pipeline().parameters.parametername.
p_name = 'copyPipeline'
params_for_pipeline = {}
p_obj = PipelineResource(activities=[copy_activity], parameters=params_for_pipeline)
p = adf_client.pipelines.create_or_update(rg_name, df_name, p_name, p_obj)
print_item(p)
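To get JSON out of the pipeline this snippet creates, the same json.dumps approach from the first answer should apply; a sketch (p is the PipelineResource returned by create_or_update above):
import json

print(json.dumps(p, default=lambda obj: obj.__dict__, indent=2))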

How to convert api.Response to data frame or json in python?

I have data like this and I want to convert it to a data frame.
t = cmc.globalmetrics_quotes_latest()
(cmc is the CoinMarketCap API client.)
type(t) = coinmarketcapapi.response
"""
RESPONSE: 820ms OK: {'active_cryptocurrencies': 7336, 'total_cryptocurrencies': 14027, 'active_market_pairs': 48398, 'active_exchanges': 431, 'total_exchanges': 1527, 'eth_dominance': 19.65826035511, 'btc_dominance': 43.062294678908, 'eth_dominance_yesterday': 19.3097174, 'btc_dominance_yesterday': 43.49544924, 'eth_dominance_24h_percentage_change': 0.34854295511, 'btc_dominance_24h_percentage_change': -0.433154561092, 'defi_volume_24h': 27083851925.97366, 'defi_volume_24h_reported': 27083851925.97366, 'defi_market_cap': 170702982573.4028, 'defi_24h_percentage_change': 10.226566098235, 'stablecoin_volume_24h': 135386869618.86761, 'stablecoin_volume_24h_reported': 135386869618.86761, 'stablecoin_market_cap': 136340827788.9902, 'stablecoin_24h_percentage_change': 25.498668079553, 'derivatives_volume_24h': 292224255894.04266, 'derivatives_volume_24h_reported': 292224255894.04266, 'derivatives_24h_percentage_change': 34.750223263748, 'quote': {'USD': {'total_market_cap': 2837427293667.6865, 'total_volume_24h': 172324325231.62, 'total_volume_24h_reported': 172324325231.62, 'altcoin_volume_24h': 125164009565.61545, 'altcoin_volume_24h_reported': 125164009565.61545, 'altcoin_market_cap': 1615565991168.745, 'defi_volume_24h': 27083851925.97366, 'defi_volume_24h_reported': 27083851925.97366, 'defi_24h_percentage_change': 10.226566098235, 'defi_market_cap': 170702982573.4028, 'stablecoin_volume_24h': 135386869618.86761, 'stablecoin_volume_24h_reported': 135386869618.86761, 'stablecoin_24h_percentage_change': 25.498668079553, 'stablecoin_market_cap': 136340827788.9902, 'derivatives_volume_24h': 292224255894.04266, 'derivatives_volume_24h_reported': 292224255894.04266, 'derivatives_24h_percentage_change': 34.750223263748, 'last_updated': '2021-11-11T15:57:10.999Z', 'total_market_cap_yesterday': 2968337016970.539, 'total_volume_24h_yesterday': 141372403925.96, 'total_market_cap_yesterday_percentage_change': -4.410204183501307, 'total_volume_24h_yesterday_percentage_change': 21.89389190967583}}, 'last_updated': '2021-11-11T15:57:10.999Z'}"""
I tried:
1)
w = pd.DataFrame.from_dict(pd.json_normalize(t), orient='columns')
TypeError: 'Response' object is not iterable
2)
crypto_map = pd.DataFrame(t)
ValueError: DataFrame constructor not properly called!
3)
t.json()
ValueError: name 'load' is not defined
I also want to read a single field from the data, but it gives me this error:
t['active_exchanges']
TypeError: 'Response' object is not subscriptable
I've been struggling with this for the last couple of days and couldn't find a way to solve it.
Use t.data to extract the data from the response.
From the documentation:
All endpoints return data in JSON format with the results of your query under data if the call is successful.
So, you can create your dataframe like this:
df = pd.DataFrame(t.data)
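If you also need the nested 'quote' section as columns, pd.json_normalize can flatten it; a sketch, assuming t is the response object from the question and t.data is the dict shown above:
import pandas as pd

data = t.data                     # plain dict, as printed in the question
print(data['active_exchanges'])   # single-field access now works, e.g. 431

df = pd.json_normalize(data)      # one-row frame; nested keys become columns like 'quote.USD.total_market_cap'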

AttributeError: 'str' object has no attribute 'json'

I wrote a little script in Python 3.7 to get the current browser version.
Here it is:
import json

def json_open():
    file_json = open('/Users/user/PycharmProjects/Test/configuration.json')
    return json.load(file_json)

def get_last_version(browser_name):
    f = json_open()
    res = (f['global']['link_to_latest_browser_version'])
    last_version = repr(res.json()['latest']['client'][browser_name]['version'])
    #print(last_version[1:-1])
    return last_version[1:-1]
Also, the JSON file exists, but that does not matter here.
Received:
AttributeError: 'str' object has no attribute 'json'.
On the line:
last_version = repr(res.json()['latest']['client'][browser_name]['version'])
Please tell me, what is my mistake?
If you are trying to parse res as a JSON object, try json.loads(res) instead of res.json().
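For illustration, a minimal sketch of that suggestion, using a hypothetical JSON string in place of the real value:
import json

res = '{"latest": {"client": {"firefox": {"version": "91.0"}}}}'  # hypothetical stand-in value
data = json.loads(res)  # parse the string into a dict
print(data['latest']['client']['firefox']['version'])  # 91.0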
Try this:
import json

FILEJSON = '/Users/user/PycharmProjects/Test/configuration.json'

def get_last_version(browser_name):
    with open(FILEJSON, 'r') as fson:
        res = json.load(fson)
    last_version = res['global']['link_to_latest_browser_version']\
        ['latest']['client'][browser_name]['version'][1:-1]
    return last_version
I think that the json_open function is unnecessary. Also take into account that the behavior of the json.load() method depends on the type of file you are reading.
Ok, the problem is here:
last_version = repr(res.json()['latest']['client'][browser_name]['version'])
A JSON object is basically a dictionary. So when you do json['key'] it returns the content, not a json object.
Here res is a string, not a json object and thus does not have the .json() attribute.
Edit:
If you want a string to be return in your situation:
res = json.loads(f['global']['link_to_latest_browser_version'])
last_version = res['latest']['client'][browser_name]['version']
return last_version
Your "res" variable is of type string.
Strings do not have an attribute called json.
So res.json() is invalid.

Why am I getting a Runtime.MarshalError when using this code in Zapier?

The following code is giving me:
Runtime.MarshalError: Unable to marshal response: {'Yes'} is not JSON serializable
from calendar import monthrange

def time_remaining_less_than_fourteen(year, month, day):
    a_year = int(input['year'])
    b_month = int(input['month'])
    c_day = int(input['day'])
    days_in_month = monthrange(int(a_year), int(b_month))[1]
    time_remaining = ""
    if (days_in_month - c_day) < 14:
        time_remaining = "No"
        return time_remaining
    else:
        time_remaining = "Yes"
        return time_remaining

output = {time_remaining_less_than_fourteen((input['year']), (input['month']), (input['day']))}
#print(output)
When I remove {...} it then throws: 'unicode' object has no attribute 'copy'
I encountered this issue when working with the lambda transformation blueprint kinesis-firehose-process-record-python for Kinesis Firehose, which led me here. So I will post a solution for anyone who also finds this question when having issues with that lambda.
The blueprint is:
from __future__ import print_function

import base64

print('Loading function')

def lambda_handler(event, context):
    output = []

    for record in event['records']:
        print(record['recordId'])
        payload = base64.b64decode(record['data'])

        # Do custom processing on the payload here

        output_record = {
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(payload)
        }
        output.append(output_record)

    print('Successfully processed {} records.'.format(len(event['records'])))

    return {'records': output}
The thing to note is that the Firehose lambda blueprints for Python provided by AWS are written for Python 2.7, and they don't work with Python 3. The reason is that in Python 3, strings and byte arrays are different.
The key change to make it work with lambda powered by Python 3.x runtime was:
changing
'data': base64.b64encode(payload)
into
'data': base64.b64encode(payload).decode("utf-8")
Otherwise, the lambda errored out because the byte string returned by base64.b64encode cannot be serialized to JSON.
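To see why the .decode is needed: in Python 3, base64.b64encode returns bytes, which json cannot serialize; a quick illustration:
import base64
import json

payload = b"hello"
encoded = base64.b64encode(payload)                    # b'aGVsbG8=' -- bytes, not JSON serializable
# json.dumps({'data': encoded})                        # would raise a TypeError
print(json.dumps({'data': encoded.decode("utf-8")}))   # {"data": "aGVsbG8="}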
David here, from the Zapier Platform team.
Per the docs:
output: A dictionary or list of dictionaries that will be the "return value" of this code. You can explicitly return early if you like. This must be JSON serializable!
In your case, output is a set:
>>> output = {'Yes'}
>>> type(output)
<class 'set'>
>>> json.dumps(output)
TypeError: Object of type set is not JSON serializable
To be serializable, you need a dict (which has keys and values). Change your last line to include a key and it'll work like you expect:
# \ here /
output = {'result': time_remaining_less_than_fourteen((input['year']), (input['month']), (input['day']))}
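With that key in place, the output is a plain dict and serializes fine:
>>> output = {'result': 'Yes'}
>>> json.dumps(output)
'{"result": "Yes"}'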
