Select Substring from Larger String and Append to List - python
I'm currently doing some API work with Tenable.io, and I'm having some trouble selecting substrings. I'm sending requests for scan histories, and the API responds with a continuous string of all scans in JSON format. The response I get is a very large continuous string of data, and I need to select some substrings (a few values), and copy that to a list (just for now). Getting data into a list isn't where I'm stuck - I require some serious assistance with selecting the substrings I need. Each scan has the following attributes:
id
status
is_archived
targets
scan_uuid
reindexing
time_start (unix format)
time_end (unix format)
Each of these is followed by a value or boolean (see below). I need a way to extract the values following "id":, "scan_uuid":, and "time_start": from the string (and put them in a list just for now).
I'd like to do this without string.index, as that may break the script if the response length changes. There is also a new scan every day, so the overall length of the response will change. Due to the nature of the data, I'd imagine the ideal solution would be to specify a condition that selects x characters after "id":, "scan_uuid":, and "time_start":, and appends them to a list, with the output looking something like:
scan_id_10_response = ["12345678", "15b6e7cd-447b-84ab-84d3-48a62b18fe6c", "1639111111", etc, etc]
String is below - I've only included the data for 4 scans for simplicity's sake. I've also changed the values for security reasons, but the length & format of the values are the same.
scan_id_10_response = '{"pagination":{"offset":0,"total":119,"sort":[{"order":"DESC","name":"start_date"}],"limit":100},"history":[\
{"id":12345678,"status":"completed","is_archived":false,"targets":{"custom":false,"default":null},"visibility":"public","scan_uuid":"15b6e7cd-447b-84ab-84d3-48a62b18fe6c","reindexing":null,"time_start":1639111111,"time_end":1639111166},\
{"id":23456789,"status":"completed","is_archived":false,"targets":{"custom":false,"default":null},"visibility":"public","scan_uuid":"8a468cff-c64f-668a-3015-101c218b68ae","reindexing":null,"time_start":1632222222,"time_end":1632222255},\
{"id":34567890,"status":"completed","is_archived":false,"targets":{"custom":false,"default":null},"visibility":"public","scan_uuid":"84ea995a-584a-cc48-e352-8742a38c12ff","reindexing":null,"time_start":1639333333,"time_end":1639333344},\
{"id":45678901,"status":"completed","is_archived":false,"targets":{"custom":false,"default":null},"visibility":"public","scan_uuid":"48a95366-48a5-e468-a444-a4486cdd61a2","reindexing":null,"time_start":1639444444,"time_end":1639444455}\
]}'
You can use the standard json module to parse the JSON string. With the snippet below you obtain a dict you can then work with.
import json
c = json.loads(scan_id_10_response)
Now you can, for example, create a list of lists with the desired attributes:
extracted_data = [[d['id'], d['scan_uuid'], d['time_start']] for d in c['history']]
This returns for this particular example:
[[12345678, '15b6e7cd-447b-84ab-84d3-48a62b18fe6c', 1639111111],
[23456789, '8a468cff-c64f-668a-3015-101c218b68ae', 1632222222],
[34567890, '84ea995a-584a-cc48-e352-8742a38c12ff', 1639333333],
[45678901, '48a95366-48a5-e468-a444-a4486cdd61a2', 1639444444]]
If you only want one result at a time, use a generator or iterate over the list:
gen_extracted = ([d['id'], d['scan_uuid'], d['time_start']] for d in c['history'])
If you don't want to work with a dict, I would recommend looking into regular expressions.
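For reference, a regex sketch of that alternative (a sketch only, assuming the exact key order shown in the sample response; json.loads remains the more robust option):

```python
import re

# One-record excerpt of the sample response (same key order as the real data)
scan_id_10_response = ('{"history":[{"id":12345678,'
                       '"scan_uuid":"15b6e7cd-447b-84ab-84d3-48a62b18fe6c",'
                       '"time_start":1639111111}]}')

# Capture the value after each key; the uuid is quoted, the others are bare numbers
pattern = re.compile(r'"id":(\d+).*?"scan_uuid":"([0-9a-f-]+)".*?"time_start":(\d+)')
extracted = [list(m.groups()) for m in pattern.finditer(scan_id_10_response)]
print(extracted)  # [['12345678', '15b6e7cd-447b-84ab-84d3-48a62b18fe6c', '1639111111']]
```

Note that this yields strings rather than ints, and silently skips records where the keys appear in a different order, which is exactly why the json approach is preferred.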
Related
Is there an R or Python function for separating information in non-delimited strings, where the information varies?
I am currently cleaning up a messy data sheet in which information is given in one Excel cell, where the different characteristics are not delimited (no commas, spaces are random). Thus, my problem is to separate the different pieces of information without a delimiter I could use in my code (I can't use a split command). I assume that I need to include some characteristics of each part of the information, such that the corresponding characteristic is recognized. However, I don't have a clue how to do that, since I am quite new to Python and I have only worked with R in the framework of regression models and other statistical analysis.

Short data example:
INPUT:
"WMIN CBOND12/05/2022 23554132121"
or "WalMaInCBND 12/05/2022-23554132121"
or "WalmartI CorpBond12/05/2022|23554132121"
EXPECTED OUTPUT:
"Walmart Inc.", "Corporate Bond", "12/05/2022", "23554132121"

So each of the "x" should be classified into a new column with the corresponding header (Company, Security, Maturity, Account Number). As you can see, the input varies randomly, but I want to get the same output for each of the three inputs given above (I have over 200k data points with different companies, securities, etc.). The first problem is how to separate the information effectively without being able to use a systematic pattern. The second problem (lower priority) is how to identify the company without setting up a dictionary with 50 different inputs for 50k companies. Thanks for your help!
I recommend first introducing useful separators where possible and constructing a dictionary of replacements for processing with regular expressions.

import re

s = 'WMIN CBOND12/05/2022 23554132121'

# CAREFUL: this is not a real date regex, it should just
# illustrate the principle of regex
# see https://stackoverflow.com/a/15504877/5665958 for
# a good US date regex
date_re = re.compile('([0-9]{2}/[0-9]{2}/[0-9]{4})')

# prepend a whitespace before the date:
# this is achieved by searching the date within the string
# and replacing it with itself with a prepended whitespace
# \1 means "insert the first capture group", which in our
# case is the date
s = re.sub(date_re, r' \1', s)

# split by one or more whitespaces and insert
# a separator (';') to make working with the string easier
s = ';'.join(s.split())

# build a dictionary of replacements
replacements = {
    'WMIN': 'Walmart Inc.',
    'CBOND': 'Corporate Bond',
}

# for each replacement, apply the substitution
# a better, but more involved, solution for this is given here:
# https://stackoverflow.com/a/15175239/5665958
for pattern, r in replacements.items():
    s = re.sub(pattern, r, s)

# use our custom separator to split out the parts
out = s.split(';')
print(out)
Using Python and regular expressions:

import re

def make_filter(pattern):
    pattern = re.compile(pattern)
    def filter(s):
        filtered = pattern.match(s)
        return filtered.group(1), filtered.group(2), filtered.group(3), filtered.group(4)
    return filter

filter = make_filter(r"^([a-zA-Z]+)\s([a-zA-Z]+)(\d+/\d+/\d+)\s(\d+)$")
filter("WMIN CBOND12/05/2022 23554132121")

The make_filter function is just a utility to let you modify the pattern. It returns a function that filters the output according to that pattern. I use it with the r"^([a-zA-Z]+)\s([a-zA-Z]+)(\d+/\d+/\d+)\s(\d+)$" pattern, which matches some text, a space, some text, a date, a space, and a number. If you want to modify this pattern, provide more info about it. The output will be ("WMIN", "CBOND", "12/05/2022", "23554132121").
Welcome! Yeah, we would definitely need to see more examples, and regex seems to be the way to go... but since there seems to be no structure, I think it's better to treat this as separate steps. We KNOW there's a date which is (X)X/(X)X/XXXX (i.e. one- or two-digit day, one- or two-digit month, four-digit year, maybe with or without the slashes, right?), and after that there are numbers. So solve that part first, leaving only the first two categories. That's actually the easy part :) but don't lose heart! If these two categories might not have ANY delimiter (for example WMINCBOND 12/05/202223554132121), or delimiters are not always delimiters (for example IMAGINARY COMPANY X CBOND), then you're in deep trouble. :) BUT this is what we can do: gather a list of all the codes (hopefully you have that), then use str_detect() on each code and see if you can recognize the exact string anywhere in the dataset (if you do have the codes, lemme know and I'll write the code to do this part). What's left after identifying the code will be the CBOND, whatever that is... so do that part last... what's left of the string will be that. Alternatively, you can use the same str_detect() if you have a list of whatever the CBOND stuff is. ONLY AFTER YOU'VE IDENTIFIED EVERYTHING can you then replace the codes with what they stand for. If you have the code list, let me know and I'll post the code.
Edit:

s = c("WMIN CBOND12/05/2022 23554132121",
      "WalMaInCBND 12/05/2022-23554132121",
      "WalmartI CorpBond12/05/2022|23554132121")

ID   = gsub("([a-zA-Z]+).*", "\\1", s)
ID2  = gsub(".* ([a-zA-Z]+).*", "\\1", s)
date = gsub("[a-zA-Z ]+(\\d+\\/\\d+\\/\\d+).*", "\\1", s)
num  = gsub("^.*[^0-9](.*$)", "\\1", s)
data.frame(ID=ID, ID2=ID2, date=date, num=num, stringsAsFactors=FALSE)

           ID                                ID2       date         num
1        WMIN                              CBOND 12/05/2022 23554132121
2 WalMaInCBND WalMaInCBND 12/05/2022-23554132121 12/05/2022 23554132121
3    WalmartI                           CorpBond 12/05/2022 23554132121

Works for cases 1 and 3, but I haven't figured out a logic for the second case: how can we know where to split the string containing the company and security if they are not separated?
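Since the question also asked for Python, a rough Python translation of the R sketch above (re.sub taking the place of gsub; same patterns, same caveat that case 2 is not solved):

```python
import re

s = ["WMIN CBOND12/05/2022 23554132121",
     "WalMaInCBND 12/05/2022-23554132121",
     "WalmartI CorpBond12/05/2022|23554132121"]

# leading letters = first identifier
ids   = [re.sub(r"([a-zA-Z]+).*", r"\1", x) for x in s]
# letters/spaces, then the date, then drop the rest
dates = [re.sub(r"[a-zA-Z ]+(\d+/\d+/\d+).*", r"\1", x) for x in s]
# everything after the last non-digit = account number
nums  = [re.sub(r"^.*[^0-9](.*$)", r"\1", x) for x in s]

print(ids)    # ['WMIN', 'WalMaInCBND', 'WalmartI']
print(dates)  # ['12/05/2022', '12/05/2022', '12/05/2022']
print(nums)   # ['23554132121', '23554132121', '23554132121']
```

As in the R version, the second input cannot be split into company and security without extra knowledge, because there is no delimiter between them.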
Python- Insert new values into 'nested' list?
What I'm trying to do isn't a huge problem in PHP, but I can't find much assistance for Python. In simple terms, I have a list which produces output as follows:

{"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765}...

All I want to do is add an extra field, containing the 'runner name' from the data set below, into each of the 'runners' sub-lists of the initial data set, based on selection_id = selectionId.
So initially I iterate through the full dataset, and then create a separate list to get the runner name from the runner id (I should point out that runnerId === selectionId === selection_id; no idea why multiple names are used). This works fine, and the code is shown below:

for market_book in market_books:
    market_catalogues = trading.betting.list_market_catalogue(
        market_projection=["RUNNER_DESCRIPTION", "RUNNER_METADATA", "COMPETITION",
                           "EVENT", "EVENT_TYPE", "MARKET_DESCRIPTION", "MARKET_START_TIME"],
        filter=betfairlightweight.filters.market_filter(
            market_ids=[market_book.market_id],
        ),
        max_results=100)

    data = []
    for market_catalogue in market_catalogues:
        for runner in market_catalogue.runners:
            data.append((runner.selection_id, runner.runner_name))

So as you can see I have the data in data[], but what I need to do is add it to the initial data set, based on the selection_id. I'm more comfortable with PHP or JavaScript, so apologies if this seems a bit simplistic, but the code snippets I've found online only seem to assist with very simple Python lists and nothing 'nested' (to me the structure seems similar to a nested array).
As per the request below, here is the full list: {"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":20,"size":3},{"price":19.5,"size":26.36},{"price":19,"size":2}],"availableToLay":[{"price":21,"size":13},{"price":22,"size":2},{"price":23,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832767},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":11,"size":9.75},{"price":10.5,"size":3},{"price":10,"size":28.18}],"availableToLay":[{"price":11.5,"size":12},{"price":13.5,"size":2},{"price":14,"size":7.75}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832766},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":48,"size":2},{"price":46,"size":5},{"price":42,"size":5}],"availableToLay":[{"price":60,"size":7},{"price":70,"size":5},{"price":75,"size":10}]},"sp":{"nearPrice":null,"farP
rice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832769},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":18.5,"size":28.94},{"price":18,"size":5},{"price":17.5,"size":3}],"availableToLay":[{"price":21,"size":20},{"price":23,"size":2},{"price":24,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832768},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":4.3,"size":9},{"price":4.2,"size":257.98},{"price":4.1,"size":51.1}],"availableToLay":[{"price":4.4,"size":20.97},{"price":4.5,"size":30},{"price":4.6,"size":16}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832771},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":24,"size":6.75},{"price":23,"size":2},{"price":22,"size":2}],"availableToLay":[{"price":26,"size":2},{"price":27,"size":2},{"price":28,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832770},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":5.7,"size":149.33},{"price":5.5,"size":29.41},{"price":5.4,"size":5}],"availableToLay":[{"price":6,"size":85},{"price":6.6,"size":5},{"price":6.8,"size":5}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":10064909}],"publishTim
e":1551612312125,"priceLadderDefinition":{"type":"CLASSIC"},"keyLineDescription":null,"marketDefinition":{"bspMarket":false,"turnInPlayEnabled":false,"persistenceEnabled":false,"marketBaseRate":5,"eventId":"28180290","eventTypeId":"2378961","numberOfWinners":1,"bettingType":"ODDS","marketType":"NONSPORT","marketTime":"2019-03-29T00:00:00.000Z","suspendTime":"2019-03-29T00:00:00.000Z","bspReconciled":false,"complete":true,"inPlay":false,"crossMatching":false,"runnersVoidable":false,"numberOfActiveRunners":8,"betDelay":0,"status":"OPEN","runners":[{"status":"ACTIVE","sortPriority":1,"id":10064909},{"status":"ACTIVE","sortPriority":2,"id":12832765},{"status":"ACTIVE","sortPriority":3,"id":12832766},{"status":"ACTIVE","sortPriority":4,"id":12832767},{"status":"ACTIVE","sortPriority":5,"id":12832768},{"status":"ACTIVE","sortPriority":6,"id":12832770},{"status":"ACTIVE","sortPriority":7,"id":12832769},{"status":"ACTIVE","sortPriority":8,"id":12832771},{"status":"LOSER","sortPriority":9,"id":10317013},{"status":"LOSER","sortPriority":10,"id":10317010}],"regulators":["MR_INT"],"countryCode":"GB","discountAllowed":true,"timezone":"Europe\/London","openDate":"2019-03-29T00:00:00.000Z","version":2576584033,"priceLadderDefinition":{"type":"CLASSIC"}}}
I think I understand what you are trying to do now. First, hold your data as a Python object (you gave us a JSON object):

import json

my_data = json.loads(my_json_string)

for item in my_data['runners']:
    item['selectionId'] = [item['selectionId'], my_name_here]

The thing is that my_data['runners'][i]['selectionId'] is a string, so unless you want to concat the name and the id together, you should turn it into a list or even a dictionary. Each item is a dictionary, so you can always add new keys to it:

item['new_key'] = my_value
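A sketch of the join being described: build a lookup dict from the (selection_id, runner_name) pairs, then add a new key to each runner. The runner names and the trimmed-down JSON here are placeholders for illustration, not real Betfair data:

```python
import json

# Trimmed stand-in for the market book JSON (only the relevant key kept)
market_json = '{"runners":[{"selectionId":12832765},{"selectionId":12832767}]}'
# Stand-in for the (selection_id, runner_name) pairs collected in data[]
data = [(12832765, "Runner A"), (12832767, "Runner B")]

names = dict(data)  # selection_id -> runner_name lookup
market = json.loads(market_json)
for runner in market["runners"]:
    # each runner is a dict, so adding a new key is enough
    runner["runnerName"] = names.get(runner["selectionId"])

print(market["runners"][0]["runnerName"])  # Runner A
```

Using a dict for the lookup avoids scanning the whole pair list once per runner, which matters once a market has many runners.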
So, essentially this works... with one exception... I can see from the print(...) in the loop that the attribute is updated, however what I can't seem to do is then see this update outside the loop.

mkt_runners = []
for market_catalogue in market_catalogues:
    for r in market_catalogue.runners:
        mkt_runners.append((r.selection_id, r.runner_name))

for market_book in market_books:
    for runner in market_book.runners:
        for x in mkt_runners:
            if runner.selection_id in x:
                setattr(runner, 'x', x[1])
                print(market_book.market_id, runner.x, runner.selection_id)
    print(market_book.json())

So the print(market_book.market_id...) displays as expected, but when I print the whole list it shows the un-updated version. I can't seem to find an obvious solution, which is odd, as it seems like a really simple thing (I tried messing around with indents, in case that was the problem, but it doesn't seem to be; it's like it's not refreshing the market_book list after the update of the runners sub-list)!
Using an IF THEN loop with nested JSON files in Python
I am currently writing a program which uses the CompaniesHouse API to return a JSON file containing information about a certain company. I am able to retrieve the data easily using the following commands:

r = requests.get('https://api.companieshouse.gov.uk/company/COMPANY-NO/filing-history', auth=('API-KEY', ''))
data = r.json()

With that information I can do an awful lot, however I've run into a problem which I was hoping you guys could possibly help me with. What I aim to do is go through every nested entry in the JSON file and check if the values of certain keys match certain criteria; if the values of 2 keys match, then other code is executed. One of the keys is the date of an entry, and I would like to ignore results that are older than a certain date. I have attempted to do this with the following:

date_threshold = datetime.date.today() - datetime.timedelta(days=30)

for each in data["items"]:
    date = ['date']
    type = ['type']
    if date < date_threshold and type is "RM01":
        print("wwwwww")

In case it isn't clear, what I'm attempting to do (albeit very badly) is assign each of the entries to a variable, which then gets tested against certain criteria. Although this doesn't work, Python spits out a type mismatch error:

TypeError: unorderable types: list() < datetime.date()

Which makes me think the date is being stored as a string, so I can't compare it to the datetime value set earlier; but when I check the API documentation (https://developer.companieshouse.gov.uk/api/docs/company/company_number/filing-history/filingHistoryItem-resource.html), it says clearly that the 'date' entry is returned as a date type. What am I doing wrong? It's very clear that I'm extremely new to Python given what I presume is the atrocity of my code, but in my head it seems to make at least a little sense.
In case none of this is clear, I basically want to go through all the entries in the JSON file, and if the date and type match a certain description, then other code can be executed (in this case I have just used random text). Any help is greatly appreciated! Let me know if you need anything cleared up. :)

EDIT: After tweaking my code to the below:

for each in data["items"]:
    date = each['date']
    type = each['type']
    if date is '2016-09-15' and type is "RM01":
        print("wwwwww")

The code executes without any errors, but the words aren't printed, even though I know there is an entry in the JSON file with that exact date and that exact type. Any thoughts?

SOLUTION: Thanks to everyone for helping me out; I had made a couple of very basic errors. The code that works as expected is below:

for each in data["items"]:
    date = each['date']
    typevariable = each['type']
    if date == '2016-09-15' and typevariable == "RM01":
        print("wwwwww")

This prints the word "wwwwww" 3 times, which is correct, seeing as there are 3 entries in the JSON that fulfil those criteria.
You are comparing a list-type variable (date) with a datetime-type variable (date_threshold). You need to first pull the value out of each entry, then convert your date variable to a datetime type using datetime.strptime().
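A minimal sketch of that conversion, using the ISO date format the question's EDIT shows ("2016-09-15"):

```python
import datetime

date_threshold = datetime.date.today() - datetime.timedelta(days=30)

# The filing-history items carry dates as ISO strings, e.g. "2016-09-15";
# strptime parses the string, .date() drops the time part
date = datetime.datetime.strptime("2016-09-15", "%Y-%m-%d").date()

if date < date_threshold:
    print("older than 30 days")  # prints: 2016 is well past the threshold
```

With both sides as datetime.date objects, the < comparison is well defined and the TypeError goes away.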
Parsing multiple occurrences of an item into a dictionary
Attempting to parse several separate image links from JSON data through Python, but having some issues drilling down to the right level, due to what I believe is a list of strings. For the majority of the items, I've had success with the below example, pulling back everything I need. Outside of this instance, everything is a 1:1 ratio of keys to values, but for this one, there are multiple values associated with one key.

resultsdict['item_name'] = item['attribute_key']

I've been adding it all to a resultsdict = {}, but am only able to get to the below sample string when I print.

INPUT:

for item in data['Item']:
    resultsdict['images'] = item['Variations']['Pictures']

OUTPUT (only relevant section):

'images': [{u'VariationSpecificPictureSet': [{u'PictureURL': [u'http//imagelink1'], u'VariationSpecificValue': u'color1'}, {u'PictureURL': [u'http//imagelink2'], u'VariationSpecificValue': u'color2'}, {u'PictureURL': [u'http//imagelink3'], u'VariationSpecificValue': u'color3'}, {u'PictureURL': [u'http//imagelink4'], u'VariationSpecificValue': u'color4'}]

I feel like I could add ['VariationPictureSet']['PictureURL'] at the end of my initial input, but that throws an error because the indices are strings, not integers. Ideally, I would like to see the output as a simple comma-separated list of just the URLs, as follows:

OUTPUT:

'images': http//imagelink1, http//imagelink2, http//imagelink3, http//imagelink4
An answer to your comment that required a bit of code. When using

for item in data['Item']:
    resultsdict['images'] = item['Variations']['Pictures']

you get a list with one element, so I recommend using this:

for item in data['Item']:
    resultsdict['images'] = item['Variations']['Pictures'][0]

Now you can use:

for image in resultsdict['images']['VariationSpecificPictureSet']:
    print(image['PictureURL'])
Thanks for the help, @uzzee, it's appreciated. I kept tinkering with it and was able to pull the continuous string of all the image URLs with the following code:

resultsdict['images'] = sum([x['PictureURL'] for x in item['Variations']['Pictures'][0]['VariationSpecificPictureSet']], [])

Without the sum() it looks like this and pulls in the whole list of lists:

resultsdict['images'] = [x['PictureURL'] for x in item['Variations']['Pictures'][0]['VariationSpecificPictureSet']]
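As an alternative to the sum() trick, itertools.chain.from_iterable flattens the list of single-URL lists without repeatedly concatenating lists (structure below reduced to just the relevant keys from the question's output sample):

```python
from itertools import chain

# Two-entry stand-in for item['Variations']['Pictures']
pictures = [{"VariationSpecificPictureSet": [
    {"PictureURL": ["http//imagelink1"], "VariationSpecificValue": "color1"},
    {"PictureURL": ["http//imagelink2"], "VariationSpecificValue": "color2"},
]}]

# Each PictureURL is itself a one-element list, so flatten one level
urls = list(chain.from_iterable(
    entry["PictureURL"] for entry in pictures[0]["VariationSpecificPictureSet"]
))
print(", ".join(urls))  # http//imagelink1, http//imagelink2
```

", ".join(urls) then produces exactly the comma-separated string the question asked for.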
Search a single column for a particular value in a CSV file and return an entire row
Issue: The code does not correctly identify the input (item). It simply dumps to my failure message even if such a value exists in the CSV file. Can anyone help me determine what I am doing wrong?

Background: I am working on a small program that asks for user input (function not given here), searches a specific column in a CSV file (Item) and returns the entire row. The CSV data format is shown below. I have shortened the data from the actual amount (49 field names, 18000+ rows).

Code:

import csv
from collections import namedtuple
from contextlib import closing

def search():
    item = 1000001
    raw_data = 'active_sanitized.csv'
    failure = 'No matching item could be found with that item code. Please try again.'
    check = False
    with closing(open(raw_data, newline='')) as open_data:
        read_data = csv.DictReader(open_data, delimiter=';')
        item_data = namedtuple('item_data', read_data.fieldnames)
        while check == False:
            for row in map(item_data._make, read_data):
                if row.Item == item:
                    return row
                else:
                    return failure

CSV structure (active_sanitized.csv):

Item;Name;Cost;Qty;Price;Description
1000001;Name here:1;1001;1;11;Item description here:1
1000002;Name here:2;1002;2;22;Item description here:2
1000003;Name here:3;1003;3;33;Item description here:3
1000004;Name here:4;1004;4;44;Item description here:4
1000005;Name here:5;1005;5;55;Item description here:5
1000006;Name here:6;1006;6;66;Item description here:6
1000007;Name here:7;1007;7;77;Item description here:7
1000008;Name here:8;1008;8;88;Item description here:8
1000009;Name here:9;1009;9;99;Item description here:9

Notes: My experience with Python is relatively little, but I thought this would be a good problem to start with in order to learn more. I determined the methods to open (and wrap in a close function) the CSV file, read the data via DictReader (to get the field names), and then create a namedtuple to be able to quickly select the desired columns for the output (Item, Cost, Price, Name).
Column order is important, hence the use of DictReader and namedtuple. While there is the possibility of hard-coding each of the field names, I felt that if the program can read them on file open, it would be much more helpful when working on similar files that have the same column names but a different column organization.

Research:
CSV header and namedtuple: What is the pythonic way to read CSV file data as rows of namedtuples?
Converting CSV data to tuple: How to split a CSV row so row[0] is the name and any remaining items are a tuple?
There were additional links of research, but I cannot post more than two.
You have three problems with this:

1. You return on the first failure, so it will never get past the first line.
2. You are reading strings from the file and comparing them to an int.
3. _make iterates over the dictionary keys, not the values, producing the wrong result (item_data(Item='Name', Name='Price', Cost='Qty', Qty='Item', Price='Cost', Description='Description')).

for row in (item_data(**data) for data in read_data):
    if row.Item == str(item):
        return row
return failure

This fixes the issues at hand: we check against a string, and we only return if none of the items matched (although you might want to convert the strings to ints when reading the data rather than use this hackish fix for the string/int issue). I have also changed the way you are looping; a generator expression with the normal keyword-argument construction of the namedtuple from a dict makes for a more natural syntax, and is cleaner and more readable than using _make and map(). It also fixes problem 3.
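Putting the fix together, a minimal runnable sketch (CSV cut to two rows, and io.StringIO standing in for the real file so the example is self-contained):

```python
import csv
import io
from collections import namedtuple

# Two-row stand-in for active_sanitized.csv (same header as the question)
raw = """Item;Name;Cost;Qty;Price;Description
1000001;Name here:1;1001;1;11;Item description here:1
1000002;Name here:2;1002;2;22;Item description here:2"""

def search(item, open_data):
    read_data = csv.DictReader(open_data, delimiter=';')
    item_data = namedtuple('item_data', read_data.fieldnames)
    # construct each namedtuple from the row dict; compare as strings,
    # since csv fields are always read as strings
    for row in (item_data(**data) for data in read_data):
        if row.Item == str(item):
            return row
    # only reached after every row has been checked
    return 'No matching item could be found with that item code. Please try again.'

print(search(1000002, io.StringIO(raw)).Name)  # Name here:2
```

Passing the open file object in as a parameter also makes the function easy to test without touching the filesystem.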