I have an iMac 2017 with Monterey 12.6.
I am forced to call the API in Python to get a correctly structured JSON result.
For reference, this is the URL I call in Python (CoinGecko limits the result to the first 100 entries):
https://api.coingecko.com/api/v3/coins/markets?ids=acala,alpaca-finance,altair,astar,avalanche-2,baanx,bbeefy-finance,bifrost-native-coin,binancecoin,binance-eth,binance-usd,bitcoin,cardano,chainlink,chainx,chronicle,coin-capsule,colony,cosmos,crabada,crypto-com-chain,cumrocket,curve-dao-token,dappradar,dogecoin,elrond-erd-2,ergo,ethereum,evmos,exeedme,fantom,ftx-token,fuse-network-token,genshiro,green-satoshi-token,havven,hooked-protocol,integritee,kadena,karura,kintsugi,kucoin-shares,kusama,kuswap,matic-network,metagame-arena,metagods,metavault,mina-protocol,moonbeam,moonpot,moonriver,nafty,near,osmosis,pancakeswap-token,paraswap,platypus-finance,pluton,polkadot,safemoon,shiba-inu,kryll,kucoin-shares,kusama,kuswap,lido-dao,matic-network,maze-token,memepad,metagame-arena,metagods,metis-token,mina-protocol,moonbeam,moonpot,moonriver,movn,nafter,nafty,near,nexo,nftlaunch,orion-protocol,osmosis,paid-network,pancakeswap-token,paraswap,platypus-finance,polkadot,polkamon,polkamarkets,polkastarter,polycat-finance,polychain-monsters,polygonfarm-finance,presearch,safemoon,safepal,shiba-inu,shiden,solana,staked-ether,staked-olympus,stasis-eurs,stepn,sushi,swissborg,switch,tether,terrausd,the-graph,the-sandbox,unifarm,uniswap,usd-coin,valkyrie-protocol,wizarre-scroll,zelcash&vs_currency=EUR
The result in Python looks like this, but with many other cryptos:
[{'id': 'bitcoin', 'symbol': 'btc', 'name': 'Bitcoin', 'image': 'https://assets.coingecko.com/coins/images/1/large/bitcoin.png?1547033579', 'current_price': 15765.21, 'market_cap': 303279048538, 'market_cap_rank': 1, 'fully_diluted_valuation': 330971307514, 'total_volume': 9519061248, 'high_24h': 15812.98, 'low_24h': 15758.06, 'price_change_24h': -18.11016048005331, 'price_change_percentage_24h': -0.11474, 'market_cap_change_24h': -207406972.0255127, 'market_cap_change_percentage_24h': -0.06834, 'circulating_supply': 19242937.0, 'total_supply': 21000000.0, 'max_supply': 21000000.0, 'ath': 59717, 'ath_change_percentage': -73.60052, 'ath_date': '2021-11-10T14:24:11.849Z', 'atl': 51.3, 'atl_change_percentage': 30631.82104, 'atl_date': '2013-07-05T00:00:00.000Z', 'roi': None, 'last_updated': '2022-12-25T14:21:15.974Z'}, {'id': 'ethereum', 'symbol': 'eth', 'name': 'Ethereum', 'image': 'https://assets.coingecko.com/coins/images/279/large/ethereum.png?1595348880', 'current_price': 1142.38, 'market_cap': 137652680087, 'market_cap_rank': 2, 'fully_diluted_valuation': 137652680087, 'total_volume': 2165257956, 'high_24h': 1149.42, 'low_24h': 1141.8, 'price_change_24h': -1.0190823115187868, 'price_change_percentage_24h': -0.08913, 'market_cap_change_24h': -73228021.97866821, 'market_cap_change_percentage_24h': -0.05317, 'circulating_supply': 120523982.078808, 'total_supply': 120523982.078808, 'max_supply': None, 'ath': 4228.93, 'ath_change_percentage': -72.99273, 'ath_date': '2021-12-01T08:38:24.623Z', 'atl': 0.381455, 'atl_change_percentage': 299310.75605, 'atl_date': '2015-10-20T00:00:00.000Z', 'roi': {'times': 95.89474372830291, 'currency': 'btc', 'percentage': 9589.474372830291}, 'last_updated': '2022-12-25T14:20:55.699Z'}]
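For context, the contents of crypto.py aren't shown; a minimal sketch of what such a script might look like is below (the id list is shortened to two coins for the example; the real call would use the full URL above):
import json
import urllib.request

# Shortened id list for illustration; the real script presumably uses the full URL above.
URL = ("https://api.coingecko.com/api/v3/coins/markets"
       "?ids=bitcoin,ethereum&vs_currency=EUR")

with urllib.request.urlopen(URL) as response:
    coins = json.load(response)

# Print the JSON on stdout so AppleScript's `do shell script` captures it.
print(json.dumps(coins))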
Content of my AppleScript:
set desktop_folder to "$HOME/PycharmProjects/crypto/"
set valReturned to do shell script "python3 " & desktop_folder & "crypto.py"
set coins to (every item in valReturned) as list
repeat with n from 1 to count of coins
    set coin to item n of coins
end repeat
The AppleScript process uses 99% of my processor.
If I comment out the loop part, I get a huge JSON string (it's just the result for 100 different cryptocurrencies). I can't post it here because, when I copy-paste its contents, it exceeds 81,000 characters.
Why does this crash AppleScript?
I've already posted on Stack Overflow here, but I didn't approach it the same way.
I could decrease the number of cryptos in my request, but it's still strange that it crashes this way. I have no problem with Excel and Power Query, for example.
I feel like the best way to produce a Numbers file would be to generate a CSV and then use AppleScript to import that CSV into Numbers and then apply formatting. Any suggestions?
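For what it's worth, a minimal sketch of the CSV route (assuming a Python script along these lines; the id list is again shortened and the column choice is just an example):
import csv
import json
import urllib.request

URL = ("https://api.coingecko.com/api/v3/coins/markets"
       "?ids=bitcoin,ethereum&vs_currency=EUR")  # shortened for the example

with urllib.request.urlopen(URL) as response:
    coins = json.load(response)

# Write only the columns needed in Numbers; AppleScript then just opens the CSV.
with open("crypto.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "current_price", "market_cap"])
    for coin in coins:
        writer.writerow([coin["id"], coin["name"], coin["current_price"], coin["market_cap"]])
With that, AppleScript never has to touch the JSON; it only tells Numbers to open crypto.csv and applies formatting.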
Can only speculate, as I don't have/use any Apple-anything:
There are no line breaks in the JSON. If you load/process it with a tool/language that relies (internally) on strings, it may exceed a maximum character limit for a string.
In both cases, your valReturned is a variable in AppleScript. It may come with some size limitations, and you're also filling/setting it via the textual characters that come from the output of echo or python3. This can lead to delays and size limitations (crashes?). Especially with echo, printing text on the terminal is often limited by the terminal's output/drawing/rendering speed, not by how fast the computer can process the characters.
Not sure how AppleScript handles JSON - does it parse the JSON and keep it all in memory? Also, does it try to be clever and unserialize/demarshal the JSON into its own data types, creating many objects etc. for later programmatic access? You seem to then also have two copies of it, one in a list structure in coins and then the valReturned (as a string I assume).
There are potentially more reasons along those lines...
If you're already able to call echo and python3, maybe you do not need AppleScript to iterate over the JSON entries at all? And since you seem to be able to write the JSON from the network into a file directly, maybe you do not need to have AppleScript load/keep the JSON data either? I wonder whether avoiding AppleScript's exposure to the JSON, or to the output of the external programs, would mitigate the performance and crash problems.
AppleScript will perform the shell script without any issues, but note that the result will be a string. Your sample script is just going through the (81,000+) characters of that string, which takes a little time. Many languages (such as Python) provide direct support for JSON strings - for AppleScript, Cocoa (via AppleScriptObjC) can be used.
The JSON string from the shell script can be parsed to get a list of records (the result can also be left as Cocoa objects for use with those methods). The individual list items can then be stepped through using a repeat statement and the various record key values used as desired, for example:
use framework "Foundation"
use scripting additions
set theURL to "https://api.coingecko.com/api/v3/coins/markets?ids=acala,alpaca-finance,altair,astar,avalanche-2,baanx,bbeefy-finance,bifrost-native-coin,binancecoin,binance-eth,binance-usd,bitcoin,cardano,chainlink,chainx,chronicle,coin-capsule,colony,cosmos,crabada,crypto-com-chain,cumrocket,curve-dao-token,dappradar,dogecoin,elrond-erd-2,ergo,ethereum,evmos,exeedme,fantom,ftx-token,fuse-network-token,genshiro,green-satoshi-token,havven,hooked-protocol,integritee,kadena,karura,kintsugi,kucoin-shares,kusama,kuswap,matic-network,metagame-arena,metagods,metavault,mina-protocol,moonbeam,moonpot,moonriver,nafty,near,osmosis,pancakeswap-token,paraswap,platypus-finance,pluton,polkadot,safemoon,shiba-inu,kryll,kucoin-shares,kusama,kuswap,lido-dao,matic-network,maze-token,memepad,metagame-arena,metagods,metis-token,mina-protocol,moonbeam,moonpot,moonriver,movn,nafter,nafty,near,nexo,nftlaunch,orion-protocol,osmosis,paid-network,pancakeswap-token,paraswap,platypus-finance,polkadot,polkamon,polkamarkets,polkastarter,polycat-finance,polychain-monsters,polygonfarm-finance,presearch,safemoon,safepal,shiba-inu,shiden,solana,staked-ether,staked-olympus,stasis-eurs,stepn,sushi,swissborg,switch,tether,terrausd,the-graph,the-sandbox,unifarm,uniswap,usd-coin,valkyrie-protocol,wizarre-scroll,zelcash&vs_currency=EUR"
set valReturned to (do shell script "/usr/bin/curl " & quoted form of theURL) -- string
set theResult to parseJSON from valReturned -- list of records
set output to "Current Prices:" & return -- header for the example
repeat with anItem in theResult -- the individual list items
    -- do whatever with the record key values
    set theName to |name| of anItem -- "name" is a keyword
    set theValue to current_price of anItem
    set output to output & theName & ": " & theValue & return
end repeat
return output
# Parse a JSON string into a data structure.
to parseJSON from sourceString given coercion:coerce : true
    if class of sourceString is not in {string} then error "parseJSON error: source is not a string"
    set theString to current application's NSString's stringWithString:sourceString
    set theData to theString's dataUsingEncoding:(current application's NSUTF8StringEncoding)
    set {theObject, theError} to current application's NSJSONSerialization's JSONObjectWithData:theData options:0 |error|:(reference)
    if theObject is missing value then error "parseJSON error: " & (theError's userInfo's objectForKey:"NSDebugDescription")
    if coerce then
        if (theObject's isKindOfClass:(current application's NSArray)) as boolean then
            return theObject as list
        else
            return theObject as record
        end if
    end if
    return theObject -- leave as NSArray or NSDictionary
end parseJSON
How can I run a flow several times using different parameters? I'm trying to use create_flow_run.map like this:
dump_to_gcs_flow = create_flow_run.map(
    flow_name=unmapped(utils_constants.FLOW_DUMP_TO_GCS_NAME.value),
    project_name=unmapped(constants.PREFECT_DEFAULT_PROJECT.value),
    parameters=tables_to_zip,
    labels=unmapped(current_flow_labels),
    run_name=unmapped("Dump to GCS"),
)
Except for parameters, all the other arguments are constants, so I use unmapped on them. tables_to_zip is a list of dictionaries containing the parameter values for each table to be zipped. However, this didn't work. I'm currently getting the (somewhat cryptic) error:
prefect.exceptions.ClientError: [{'message': 'parsing UUID failed, expected String, but encountered Array', 'locations': [{'line': 2, 'column': 5}], 'path': ['flow_run'], 'extensions': {'path': '$.selectionSet.flow_run.args.where.id._eq', 'code': 'parse-failed', 'exception': {'message': 'parsing UUID failed, expected String, but encountered Array'}}}]
What am I doing wrong here?
It needs to be a list of dictionaries whose keys are the parameter names - this Discourse topic discusses the problem in much more depth: https://discourse.prefect.io/t/how-to-map-over-flows-with-various-parameter-values/365/2
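A minimal sketch of that shape, assuming Prefect 1.x and a child flow that declares parameters named table_id and dataset_id (placeholder names - substitute whatever parameters your flow actually takes; the constants are the ones from your snippet):
from prefect import unmapped
from prefect.tasks.prefect import create_flow_run

# Each element is the complete `parameters` dict for one child flow run,
# keyed by parameter name (table_id/dataset_id are placeholder names).
tables_to_zip = [
    {"table_id": "table_a", "dataset_id": "my_dataset"},
    {"table_id": "table_b", "dataset_id": "my_dataset"},
]

dump_to_gcs_flow = create_flow_run.map(
    flow_name=unmapped(utils_constants.FLOW_DUMP_TO_GCS_NAME.value),
    project_name=unmapped(constants.PREFECT_DEFAULT_PROJECT.value),
    parameters=tables_to_zip,  # mapped: one dict per child run
    labels=unmapped(current_flow_labels),
    run_name=unmapped("Dump to GCS"),
)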
Let me know if you still have issues implementing this pattern after reading that.
(btw Prefect 2 is the default version now and makes this much easier: https://docs.prefect.io)
I am trying to implement the pull model to query the change feed using the Azure Cosmos Python SDK. I found that, to parallelise the querying process, the official documentation mentions obtaining a FeedRange value and creating a FeedIterator to iterate through each range of partition key values obtained from the FeedRange.
Currently my code snippet to query the change feed looks like this, and it is pretty straightforward:
# function to get items from change feed based on a condition
def get_response(container_client, condition):
    # Historical data read
    if condition:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=True,
            # partition_key_range_id=0
        )
    # reading from a checkpoint (last_continuation_token is saved elsewhere)
    else:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=False,
            continuation=last_continuation_token
        )
    return response
The problem with this approach is efficiency when getting all the items from the beginning (historical data read). I tried this method with a pretty small dataset of 500 items and the response took around 60 seconds. When dealing with millions or even billions of items, the response might take far too long to return.
Would querying the change feed in parallel for each partition key range save time?
If yes, how do I get the PartitionKeyRangeId in the Python SDK?
Are there any problems I need to consider when implementing this?
I hope I make sense!
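To illustrate the idea being asked about, here is a hedged sketch of a per-partition-key-range fan-out. It assumes the range IDs (e.g. "0", "1", ...) have already been obtained somehow - that part is exactly what the question is asking - and it only reuses the partition_key_range_id argument already shown in the snippet above:
from concurrent.futures import ThreadPoolExecutor

def read_range_from_beginning(container_client, pk_range_id):
    # One change-feed reader per partition key range, reading from the beginning.
    return list(container_client.query_items_change_feed(
        is_start_from_beginning=True,
        partition_key_range_id=pk_range_id,
    ))

def read_all_ranges(container_client, pk_range_ids):
    # pk_range_ids is a hypothetical input: the list of range IDs to fan out over.
    with ThreadPoolExecutor(max_workers=len(pk_range_ids)) as pool:
        chunks = pool.map(
            lambda rid: read_range_from_beginning(container_client, rid),
            pk_range_ids,
        )
    return [item for chunk in chunks for item in chunk]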
Currently, I am working on a boot sequence in Python for a larger project. For this specific part of the sequence, I need to access a .JSON file (specs.json) and load it as a dictionary in the main program. I then need to take a value from the .JSON file and add 1 to it, using its key to find the value. Once that's done, I need to push the changes back to the .JSON file. Yet every time I run the code below, I get the error:
bootNum = spcInfDat['boot_num']
KeyError: 'boot_num'
Here's the code I currently have:
(Note: I'm using the Python json library, and have imported dumps, dump, and load.)
# Opening of the JSON files
spcInf = open('mki/data/json/specs.json',) # .JSON file that contains the current system's specifications. Not quite needed, but it may make a nice reference?
spcInfDat = load(spcInf)
This code is later followed by the part below, where I attempt to assign the value to a variable by using its dictionary key (the for statement was a debug statement, so I could visibly see the key):
for i in spcInfDat['spec']:
    print(CBL + str(i) + CEN)

# Locating and increasing the value of bootNum.
bootNum = spcInfDat['boot_num']
print(str(bootNum))
bootNum = bootNum + 1
(Another Note: CBL and CEN are just variables I use to colour text I send to the terminal.)
This is the interior of specs.json:
{
    "spec": [
        {
            "os":"name",
            "os_type":"getwindowsversion",
            "lang":"en",
            "cpu_amt":"cpu_count",
            "storage_amt":"unk",
            "boot_num":1
        }
    ]
}
I'm relatively new to .JSON files, as well as to the Python json library; I only have experience with them through some GeeksforGeeks tutorials I found. There is a rather good chance that I just don't know how .JSON files work in conjunction with the library, but I figured it would still be worth checking here. The GeeksforGeeks tutorial had no documentation about this, and I know very little about how it works, so I'm lost. I've tried searching here and have found nothing.
Issue Number 2
Now, the prior part works. But when I attempt to run the following code:
# Changing the values of specDict.
print(CBL + "Changing values of specDict... 50%" + CEN)
specDict = {
    "os":name,
    "os_type":ost,
    "lang":"en",
    "cpu_amt":cr,
    "storage_amt":"unk",
    "boot_num":bootNum
}
# Writing the product of makeSpec to `specs.json`.
print(CBL + "Writing makeSpec() result to `specs.json`... 75%" + CEN)
jsonobj = dumps(specDict, indent = 4)
with open('mki/data/json/specs.json', "w") as outfile:
    dump(jsonobj, outfile)
I get the error:
TypeError: Object of type builtin_function_or_method is not JSON serializable.
Is there a chance that I set up my dictionary incorrectly, or am I using the dump function incorrectly?
You can show the data using:
print(spcInfDat)
This shows it to be a dictionary whose single entry 'spec' holds an array, whose zeroth element is a sub-dictionary, whose 'boot_num' entry is an integer.
{'spec': [{'os': 'name', 'os_type': 'getwindowsversion', 'lang': 'en', 'cpu_amt': 'cpu_count', 'storage_amt': 'unk', 'boot_num': 1}]}
So what you are looking for is
boot_num = spcInfDat['spec'][0]['boot_num']
and note that the value obtained this way is already an integer; str() is not necessary.
It's also good practice to guard against file format errors so the program handles them gracefully.
try:
    boot_num = spcInfDat['spec'][0]['boot_num']
except (KeyError, IndexError):
    print('Database is corrupt')
Issue Number 2
"Not serializable" means there is something somewhere in your data structure that is not an accepted type and can't be converted to a JSON string.
json.dump() only processes certain types such as strings, dictionaries, and integers. That includes all of the objects that are nested within sub-dictionaries, sub-arrays, etc. See documentation for json.JSONEncoder for a complete list of allowable types.
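As a small illustration of what trips this error (assuming one of the values - e.g. cr - ended up holding a function object such as os.cpu_count instead of the value it returns):
import json
import os

try:
    json.dumps({"cpu_amt": os.cpu_count})    # the function object itself is not serializable
except TypeError as exc:
    print(exc)  # Object of type builtin_function_or_method is not JSON serializable

print(json.dumps({"cpu_amt": os.cpu_count()}))  # calling it yields an int, which serializes fine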
What I'm trying to do isn't a huge problem in PHP, but I can't find much assistance for Python.
In simple terms, I have a list which produces output as follows:
{"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765}...
All I want to do is add an extra field, containing the 'runner name' from the data set below, into each of the 'runners' sub-lists of the initial data set, matching on selection_id == selectionId.
So initially I iterate through the full dataset, and then create a separate list to get the runner name from the runner id (I should point out that runnerId === selectionId === selection_id; no idea why multiple names are used for the same thing). This works fine and the code is shown below:
for market_book in market_books:
    market_catalogues = trading.betting.list_market_catalogue(
        market_projection=["RUNNER_DESCRIPTION", "RUNNER_METADATA", "COMPETITION", "EVENT", "EVENT_TYPE", "MARKET_DESCRIPTION", "MARKET_START_TIME"],
        filter=betfairlightweight.filters.market_filter(
            market_ids=[market_book.market_id],
        ),
        max_results=100)

    data = []
    for market_catalogue in market_catalogues:
        for runner in market_catalogue.runners:
            data.append(
                (runner.selection_id, runner.runner_name)
            )
So as you can see I have the data in data[], but what I need to do is add it to the initial data set, based on the selection_id.
I'm more comfortable with PHP or JavaScript, so apologies if this seems a bit simplistic, but the code snippets I've found online only seem to cover very simple Python lists and nothing 'nested' (to me the structure seems similar to a nested array).
As per the request below, here is the full list:
{"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":20,"size":3},{"price":19.5,"size":26.36},{"price":19,"size":2}],"availableToLay":[{"price":21,"size":13},{"price":22,"size":2},{"price":23,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832767},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":11,"size":9.75},{"price":10.5,"size":3},{"price":10,"size":28.18}],"availableToLay":[{"price":11.5,"size":12},{"price":13.5,"size":2},{"price":14,"size":7.75}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832766},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":48,"size":2},{"price":46,"size":5},{"price":42,"size":5}],"availableToLay":[{"price":60,"size":7},{"price":70,"size":5},{"price":75,"size":10}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832769},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":18.5,"size":28.94},{"price":18,"size":5},{"price":17.5,"size":3}],"availableToLay":[{"price":21,"size":20},{"price":23,"size":2},{"price":24,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832768},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":4.3,"size":9},{"price":4.2,"size":257.98},{"price":4.1,"size":51.1}],"availableToLay":[{"price":4.4,"size":20.97},{"price":4.5,"size":30},{"price":4.6,"size":16}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832771},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":24,"size":6.75},{"price":23,"size":2},{"price":22,"size":2}],"availableToLay":[{"price":26,"size":2},{"price":27,"size":2},{"price":28,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832770},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"pri
ce":5.7,"size":149.33},{"price":5.5,"size":29.41},{"price":5.4,"size":5}],"availableToLay":[{"price":6,"size":85},{"price":6.6,"size":5},{"price":6.8,"size":5}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":10064909}],"publishTime":1551612312125,"priceLadderDefinition":{"type":"CLASSIC"},"keyLineDescription":null,"marketDefinition":{"bspMarket":false,"turnInPlayEnabled":false,"persistenceEnabled":false,"marketBaseRate":5,"eventId":"28180290","eventTypeId":"2378961","numberOfWinners":1,"bettingType":"ODDS","marketType":"NONSPORT","marketTime":"2019-03-29T00:00:00.000Z","suspendTime":"2019-03-29T00:00:00.000Z","bspReconciled":false,"complete":true,"inPlay":false,"crossMatching":false,"runnersVoidable":false,"numberOfActiveRunners":8,"betDelay":0,"status":"OPEN","runners":[{"status":"ACTIVE","sortPriority":1,"id":10064909},{"status":"ACTIVE","sortPriority":2,"id":12832765},{"status":"ACTIVE","sortPriority":3,"id":12832766},{"status":"ACTIVE","sortPriority":4,"id":12832767},{"status":"ACTIVE","sortPriority":5,"id":12832768},{"status":"ACTIVE","sortPriority":6,"id":12832770},{"status":"ACTIVE","sortPriority":7,"id":12832769},{"status":"ACTIVE","sortPriority":8,"id":12832771},{"status":"LOSER","sortPriority":9,"id":10317013},{"status":"LOSER","sortPriority":10,"id":10317010}],"regulators":["MR_INT"],"countryCode":"GB","discountAllowed":true,"timezone":"Europe\/London","openDate":"2019-03-29T00:00:00.000Z","version":2576584033,"priceLadderDefinition":{"type":"CLASSIC"}}}
I think I understand what you are trying to do now.
First, hold your data as a Python object (you gave us a JSON string):
import json

my_data = json.loads(my_json_string)
for item in my_data['runners']:
    item['selectionId'] = [item['selectionId'], my_name_here]
The thing is that my_data['runners'][i]['selectionId'] is a single value; unless you want to concatenate the name and the id together, you should turn it into a list or even a dictionary.
Each item is a dictionary, so you can always add new keys to it:
item['new_key'] = my_value
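Putting that together with the (selection_id, runner_name) pairs collected in data[] earlier, one way it might look (a sketch; my_json_string stands in for the market book JSON shown above):
import json

names_by_id = dict(data)  # {selection_id: runner_name}, built from the earlier loop

market = json.loads(my_json_string)
for runner in market['runners']:
    # add a new key rather than overwriting selectionId
    runner['runner_name'] = names_by_id.get(runner['selectionId'])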
So, essentially this works, with one exception: I can see from the print(...) in the loop that the attribute is updated, but what I can't seem to do is see this update outside the loop.
mkt_runners = []
for market_catalogue in market_catalogues:
    for r in market_catalogue.runners:
        mkt_runners.append((r.selection_id, r.runner_name))

for market_book in market_books:
    for runner in market_book.runners:
        for x in mkt_runners:
            if runner.selection_id in x:
                setattr(runner, 'x', x[1])
                print(market_book.market_id, runner.x, runner.selection_id)
print(market_book.json())
So the print(market_book.market_id, ...) line displays as expected, but when I print the whole list it shows the un-updated version. I can't seem to find an obvious solution, which is odd, as it seems like a really simple thing. (I tried messing around with indents in case that was the problem, but it doesn't seem to be; it's as if the market_book list isn't refreshed after the update of the runners sub-list.)