Python: Search for string from dictionary/CSV file and display matching rows - python

I have a program that allows users to choose a category (pulled from the file); it then prints the university data as dictionaries.
What I want to do next is let users search for a specific string in that file and display all of the matching records. The search can be a whole word or part of a string from the file.
I need help searching for a given string (or part of one) and displaying the matching categories (NameID, StudentName, University, Phone, State).
Example:
search: on
output (note that this is in dictionary format):
{'NameID': 'JSNOW', ' StudentName': ' Jon Snow', ' University': ' UofWinterfell', ' Phone': ' 324234423', ' State': 'Westeros'}
{'NameID': 'JJONS', ' StudentName': ' Joe Jonson', ' University': ' NYU', ' Phone': ' 123432333', ' State': 'New York'}
My text file looks like this:
NameID, StudentName, University, Phone, State
JJONS, Joe Jonson, NYU, 123432333, New York
SROGE, Steve Rogers, UofI, 324324423, New York
JSNOW, Jon Snow, UofWinterfell, 324234423, Westeros
DTARG, Daenerys Targaryen, Dragonstone, 345345, NULL
This is what I have so far:
import csv

def load_data(file_name):
    university_data = []
    with open(file_name, mode='r') as csv_file:
        csv_reader = csv.DictReader(csv_file, skipinitialspace=True)
        for row in csv_reader:
            university_data.append(dict(row))
    print(university_data)
    return university_data

# def search_file():
#     for l in data:
#         no idea what to do here
def main():
    filename = 'file.csv'
    university_data = load_data(filename)
    print('[1] University\n[2] Student Name\n[3] Exit\n[4] Search')
    while True:
        choice = input('Enter choice 1/2/3/4? ')
        if choice == '1':
            for university in university_data:
                print(university['University'])
        elif choice == '2':
            for university in university_data:
                print(university['StudentName'])
        elif choice == '3':
            print('Thank You')
            break
        elif choice == '4':
            search_file()
        else:
            print('Invalid selection')

main()
I need choice 4 to work. Choices 1 and 2 can be ignored; they just display names and aren't in dictionary format.

You have to figure out which field you are searching by and then iterate over the list of dicts.
def search_file(field, query):
    for l in data:
        if l.get(field, None) == query:
            return l
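The snippet above matches one field exactly. For the substring-across-all-fields behaviour the question asks for, a sketch along these lines could work (assuming the list of dicts returned by load_data is passed in):

```python
import csv

def load_data(file_name):
    # Read the CSV into a list of dicts; skipinitialspace strips the
    # blanks that follow each comma so keys and values come out clean.
    with open(file_name, mode='r', newline='') as csv_file:
        return list(csv.DictReader(csv_file, skipinitialspace=True))

def search_file(data, query):
    # Case-insensitive substring match against every field of every row;
    # returns the full row dicts, as in the question's expected output.
    query = query.lower()
    return [row for row in data
            if any(query in str(value).lower() for value in row.values())]
```

Note this returns every row where any field contains the query, so with the sample data a search for "on" would also match "Dragonstone".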

Related

How to format list data and write to csv file in selenium python?

I'm getting data from a website and storing it in several list variables. Now I need to send these data to a CSV file.
The data retrieved from the website is printed below:
['Company Name: PATRY PLC', 'Contact Name: Jony Deff', 'Company ID: 234567', 'CS ID: 236789', 'MI/MC:', 'Road Code:']
['Mailing Address:', 'Street: 19700 I-45 Spring, TX 77373', 'City: SPRING', 'State: TX', 'Postal Code: 77388', 'Country: US']
['Physical Address:', 'Street: 1500-1798 Runyan Ave Houston, TX 77039, USA', 'City: HOUSTON', 'State: TX', 'Postal Code: 77039', 'Country: US']
['Registration Period', 'Registration Date/Time', 'Registration ID', 'Status']
['2020-2025', 'MAY-10-2020 15:54:12', '26787856889l', 'Active']
I'm using a for loop to collect the data with the code below:
listdata6 = []
for c6 in cells6:
    listdata6.append(c6.text)
Now I have all data inside the 5 list variables. How can I write these data into CSV file like the below format?
You seem to want two header rows.
But I'm afraid your CSV interpreter (which seems to be MS Excel) won't be able to merge cells as shown in your screenshot.
Based on the structure of your data (five lists where keys and values are mixed), it looks like you'll have to construct both headers semi-manually.
Here is the code:
company_info = ['Company Name: PATRY PLC', 'Contact Name: Jony Deff', 'Company ID: 234567', 'CS ID: 236789', 'MI/MC:', 'Road Code:']
mailaddr_info = ['Mailing Address:', 'Street: 19700 I-45 Spring, TX 77373', 'City: SPRING', 'State: TX', 'Postal Code: 77388', 'Country: US']
physaddr_info = ['Physical Address:', 'Street: 1500-1798 Runyan Ave Houston, TX 77039, USA', 'City: HOUSTON', 'State: TX', 'Postal Code: 77039', 'Country: US']
reg_data = ['Registration Period', 'Registration Date/Time', 'Registration ID', 'Status']
status_data = ['2020-2025', 'MAY-10-2020 15:54:12', '26787856889l', 'Active']
# composing 1st header's row
header1 = ''.join(',' for i in range(len(company_info))) # add commas
header1 += mailaddr_info[0].strip(':') # adds 1st item which is header of that data
header1 += ''.join(',' for i in range(1, len(mailaddr_info)))
header1 += physaddr_info[0].strip(':') # adds 1st item which is header of that data
header1 += ''.join(',' for i in range(1, len(physaddr_info)))
header1 += ''.join(',' for i in range(len(reg_data))) # add commas
# composing 2nd header's row
header2 = ','.join( item.split(':')[0].strip(' ') for item in company_info) + ','
header2 += ','.join( item.split(':')[0].strip(' ') for item in mailaddr_info[1:]) + ','
header2 += ','.join( item.split(':')[0].strip(' ') for item in physaddr_info[1:]) + ','
header2 += ','.join( item.split(':')[0].strip(' ') for item in reg_data)
# finally, the data row. Note we replace comma with empty char because some items contain comma.
# You can further elaborate by encapsulating comma-containing items with quotes "" which
# is treated as text by CSV interpreters.
data_row = ','.join( item.split(':')[-1].strip(' ') for item in company_info)
data_row += ','.join( item.split(':')[-1].strip(' ').replace(',','') for item in mailaddr_info)
data_row += ','.join( item.split(':')[-1].strip(' ').replace(',','') for item in physaddr_info)+ ','
data_row += ','.join( item for item in status_data)
# writing the data to CSV file
with open("test_file.csv", "w") as f:
    f.write(header1 + '\n')
    f.write(header2 + '\n')
    f.write(data_row + '\n')
If you import that file into MS Excel and set 'Comma' as the separator in the text import wizard, you will get something like this:
You can wrap it into a helper class which takes these five lists and exposes write_csv() method to the outside world.
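An alternative worth noting: the csv module's writer quotes comma-containing values for you, so the manual replace(',', '') isn't needed. A minimal sketch of the same key/value splitting (shown with two of the lists above; the helper name split_pairs is made up here):

```python
import csv

def split_pairs(items):
    # "Key: value" -> (key, value); split only on the first colon so
    # values that themselves contain colons survive intact.
    return [tuple(part.strip() for part in item.split(':', 1)) for item in items]

company_info = ['Company Name: PATRY PLC', 'Contact Name: Jony Deff']
mailaddr_info = ['Street: 19700 I-45 Spring, TX 77373', 'City: SPRING']

pairs = split_pairs(company_info) + split_pairs(mailaddr_info)
with open('test_file.csv', 'w', newline='') as f:
    writer = csv.writer(f)                      # quotes fields containing commas
    writer.writerow([key for key, _ in pairs])  # header row
    writer.writerow([val for _, val in pairs])  # data row
```

The street address keeps its comma because csv.writer wraps that field in quotes automatically.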

How do I transform a non-CSV text file into a CSV using Python/Pandas?

I have a text file that looks like this:
Id Number: 12345678
Location: 1234561791234567090-8.9
Street: 999 Street AVE
Buyer: john doe
Id Number: 12345688
Location: 3582561791254567090-8.9
Street: 123 Street AVE
Buyer: Jane doe # buyer % LLC
Id Number: 12345689
Location: 8542561791254567090-8.9
Street: 854 Street AVE
Buyer: Jake and Bob: Owner%LLC: Inc
I'd like the file to look like this:
Id Number, Location, Street, Buyer
12345678, 1234561791234567090-8.9, 999 Street AVE, john doe
12345688, 3582561791254567090-8.9, 123 Street AVE, Jane doe # buyer % LLC
12345689, 8542561791254567090-8.9, 854 Street AVE, Jake and Bob: Owner%LLC: Inc
I have tried the following:
# 1 Read text file and ignore bad lines (lines with extra colons thus reading as extra fields).
tr = pd.read_csv('C:\\File Path\\test.txt', sep=':', header=None, error_bad_lines=False)
# 2 Convert into a dataframe/pivot table.
ndf = pd.DataFrame(tr.pivot(index=None, columns=0, values=1))
# 3 Clean up the pivot table to remove NaNs and reset the index (line by line).
nf2 = ndf.apply(lambda x: x.dropna().reset_index(drop=True))
Here is where got the last line (#3): https://stackoverflow.com/a/62481057/10448224
When I do the above and export to CSV, the headers are arranged like this: (index), Street, Buyer, Id Number, Location.
The data is filled in nicely, but at some point the Buyer field becomes inaccurate, while the rest of the fields stay accurate through the entire DF.
My guesses:
When I run #1 part of my script I get the following errors 507 times:
b'Skipping line 500: expected 2 fields, saw 3\nSkipping line 728: expected 2 fields, saw 3\
At the tail end of the new DF I am missing exactly 507 entries for the Buyer field. So I think when I drop my bad lines, the field is pushing my data up.
Pain Points:
The Buyer field will sometimes have extra colons and other odd characters. So when I try to use a colon as a delimiter I run into problems.
I am new to Python and I am very new to using functions. I primarily use Pandas to manipulate data at a somewhat basic level. So in the words of the great Michael Scott: "Explain it to me like I'm five." Many many thanks to anyone willing to help.
Here's what I meant by reading in and using split. Very similar to other answers. Untested, and I don't recall whether inputline includes the eol, so I stripped it too.
with open('myfile.txt') as f:
    data = []    # holds database
    record = {}  # holds built-up record
    for inputline in f:
        key, value = inputline.strip().split(':', 1)
        if key == "Id Number":  # new record starting
            if len(record):
                data.append(record)  # write previous record
            record = {}
        record.update({key: value})
    if len(record):
        data.append(record)  # write final record
df = pd.DataFrame(data)
This is a minimal example that demonstrates the basics:
cat split_test.txt
Id Number: 12345678
Location: 1234561791234567090-8.9
Street: 999 Street AVE
Buyer: john doe
Id Number: 12345688
Location: 3582561791254567090-8.9
Street: 123 Street AVE
Buyer: Jane doe # buyer % LLC
Id Number: 12345689
Location: 8542561791254567090-8.9
Street: 854 Street AVE
Buyer: Jake and Bob: Owner%LLC: Inc
import csv

with open("split_test.txt", "r") as f:
    id_val = "Id Number"
    list_var = []
    for line in f:
        split_line = line.strip().split(':')
        print(split_line)
        if split_line[0] == id_val:
            d = {}
            d[split_line[0]] = split_line[1]
            list_var.append(d)
        else:
            d.update({split_line[0]: split_line[1]})
list_var
[{'Id Number': ' 12345689',
'Location': ' 8542561791254567090-8.9',
'Street': ' 854 Street AVE',
'Buyer': ' Jake and Bob'},
{'Id Number': ' 12345678',
'Location': ' 1234561791234567090-8.9',
'Street': ' 999 Street AVE',
'Buyer': ' john doe'},
{'Id Number': ' 12345688',
'Location': ' 3582561791254567090-8.9',
'Street': ' 123 Street AVE',
'Buyer': ' Jane doe # buyer % LLC'}]
with open("split_ex.csv", "w") as csv_file:
    field_names = list_var[0].keys()
    csv_writer = csv.DictWriter(csv_file, fieldnames=field_names)
    csv_writer.writeheader()
    for row in list_var:
        csv_writer.writerow(row)
I would try reading the file line by line, splitting the key-value pairs into a list of dicts to look something like:
data = [
    {
        "Id Number": 12345678,
        "Location": 1234561791234567090-8.9,
        ...
    },
    {
        "Id Number": ...
    }
]
# easy to create the dataframe from here
your_df = pd.DataFrame(data)
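A minimal sketch of that line-by-line approach which also survives the extra colons in the Buyer field (by splitting on the first colon only; the helper name parse_records is made up here):

```python
def parse_records(lines):
    # Split each "Key: value" line on the FIRST colon only, so values
    # like "Jake and Bob: Owner%LLC: Inc" keep their embedded colons.
    data, record = [], {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        key, value = line.split(':', 1)
        key, value = key.strip(), value.strip()
        if key == 'Id Number' and record:  # a new record starts
            data.append(record)
            record = {}
        record[key] = value
    if record:
        data.append(record)
    return data

lines = """Id Number: 12345678
Location: 1234561791234567090-8.9
Street: 999 Street AVE
Buyer: john doe
Id Number: 12345689
Location: 8542561791254567090-8.9
Street: 854 Street AVE
Buyer: Jake and Bob: Owner%LLC: Inc""".splitlines()
```

Passing the result to pd.DataFrame, as above, then gives the desired four-column table.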

comparing two lists and searching by a field, Python

I have two files I wish to compare and then produce a specific output:
1) Below are the contents of the username text file (this stores the latest films viewed by the user)
Sci-Fi,Out of the Silent Planet
Sci-Fi,Solaris
Romance, When Harry met Sally
2) Below are the contents of the films.txt file which stores all the films in the program that are available to the user
0,Genre, Title, Rating, Likes
1,Sci-Fi,Out of the Silent Planet, PG,3
2,Sci-Fi,Solaris, PG,0
3,Sci-Fi,Star Trek, PG,0
4,Sci-Fi,Cosmos, PG,0
5,Drama, The English Patient, 15,0
6,Drama, Benhur, PG,0
7,Drama, The Pursuit of Happiness, 12, 0
8,Drama, The Thin Red Line, 18,0
9,Romance, When Harry met Sally, 12, 0
10,Romance, You've got mail, 12, 0
11,Romance, Last Tango in Paris, 18, 0
12,Romance, Casablanca, 12, 0
An example of the output I require: the user has currently viewed two Sci-Fi films and one Romance film. The output should therefore SEARCH the films text file by genre (identifying Sci-Fi and Romance) and list the films in films.txt which have NOT yet been viewed by the user. In this case:
3,Sci-Fi,Star Trek, PG,0
4,Sci-Fi,Cosmos, PG,0
10,Romance, You've got mail, 12, 0
11,Romance, Last Tango in Paris, 18, 0
12,Romance, Casablanca, 12, 0
I have the following code which attempts to do the above, but the output it produces is incorrect:
def viewrecs(username):
    #set the username variable to the text file - to use it in the next bit
    username = (username + ".txt")
    #open the username file that stores latest viewings
    with open(username, "r") as f:
        #open the csv file reader for the username file
        fReader = csv.reader(f)
        #for each row in the fReader
        for row in fReader:
            #row[0] is the genre (column 1 in the username file)
            genre = row[0]
            #next, open the films file
            with open("films.txt", "r") as films:
                #open the csv reader for this file (filmsReader as opposed to fReader)
                filmsReader = csv.reader(films)
                #for each row in the films file
                for row in filmsReader:
                    #and for each field in the row
                    for field in row:
                        #print(field)
                        #print(genre)
                        #print(field[0])
                        if genre in field and row[2] not in fReader:
                            print(row)
Output (undesired):
['1', 'Sci-Fi', 'Out of the Silent Planet', ' PG', '3']
['2', 'Sci-Fi', 'Solaris', ' PG', '0']
['3', 'Sci-Fi', 'Star Trek', ' PG', '0']
['4', 'Sci-Fi', 'Cosmos', ' PG', '0']
I don't want a rewrite or a new solution but, preferably, a fix to the above solution that follows its logical progression ...
#gipsy - your solution appears to have nearly worked. I used:
def viewrecs(username):
    #set the username variable to the text file - to use it in the next bit
    username = (username + ".txt")
    #open the username file that stores latest viewings
    lookup_set = set()
    with open(username, "r") as f:
        #open the csv file reader for the username file
        fReader = csv.reader(f)
        #for each row in the fReader
        for row in fReader:
            genre = row[1]
            name = row[2]
            lookup_set.add('%s-%s' % (genre, name))
    with open("films.txt", "r") as films:
        filmsReader = csv.reader(films)
        #for each row in the films file
        for row in filmsReader:
            genre = row[1]
            name = row[2]
            lookup_key = '%s-%s' % (genre, name)
            if lookup_key not in lookup_set:
                print(row)
The output is below. It is printing ALL the lines in films.txt that are not in the first set, rather than just the ones matching a GENRE in the first set:
['0', 'Genre', ' Title', ' Rating', ' Likes']
['3', 'Sci-Fi', 'Star Trek', ' PG', ' 0']
['4', 'Sci-Fi', 'Cosmos', ' PG', ' 0']
['5', 'Drama', ' The English Patient', ' 15', ' 0']
['6', 'Drama', ' Benhur', ' PG', ' 0']
['7', 'Drama', ' The Pursuit of Happiness', ' 12', ' 0']
['8', 'Drama', ' The Thin Red Line', ' 18', ' 0']
['10', 'Romance', " You've got mail", ' 12', ' 0']
['11', 'Romance', ' Last Tango in Paris', ' 18', ' 0']
['12', 'Romance', ' Casablanca', ' 12', ' 0']
NOTE: I changed the format of the first set to be the same, for simplicity, of the all films entries:
1,Sci-Fi,Out of the Silent Planet, PG
2,Sci-Fi,Solaris, PG
How about using sets and separate lists to filter movies in appropriate genres that were not seen? We can even abuse the dictionaries' keys and values for this purpose:
def parse_file(file):
    return map(lambda x: [w.strip() for w in x.split(',')], open(file).read().split('\n'))

def movies_to_see():
    seen = {film[0]: film[1] for film in parse_file('seen.txt')}
    films = parse_file('films.txt')
    to_see = []
    for film in films:
        if film[1] in seen.keys() and film[2] not in seen.values():
            to_see.append(film)
    return to_see
A solution using the str.split() and str.join() functions:
# change file paths with your actual ones
with open('./text_files/user.txt', 'r') as userfile:
    viewed = userfile.read().split('\n')
    viewed_genders = set(g.split(',')[0] for g in viewed)
with open('./text_files/films.txt', 'r') as filmsfile:
    films = filmsfile.read().split('\n')
not_viewed = [f for f in films
              if f.split(',')[1] in viewed_genders
              and ','.join(f.split(',')[1:3]) not in viewed]
print('\n'.join(not_viewed))
The output:
3,Sci-Fi,Star Trek, PG,0
4,Sci-Fi,Cosmos, PG,0
10,Romance, You've got mail, 12, 0
11,Romance, Last Tango in Paris, 18, 0
12,Romance, Casablanca, 12, 0
Okay: build a set while going through the first file, with genre + name as each entry.
Now iterate over the second file and look up genre + name in the set you built above; if there is no entry, print that row out.
Once I am home I can type some code.
As promised my code for this is below:
def viewrecs(username):
    #set the username variable to the text file - to use it in the next bit
    username = (username + ".txt")
    # In this set we will collect the unique combinations of genre and name
    genre_name_lookup_set = set()
    # In this set we will collect the unique genres
    genre_lookup_set = set()
    with open(username, "r") as f:
        #open the csv file reader for the username file
        fReader = csv.reader(f)
        #for each row in the fReader
        for row in fReader:
            genre = row[0]
            name = row[1]
            # Add the genre-name combination to this set; duplicates are handled
            # automatically since a set won't allow dupes
            genre_name_lookup_set.add('%s-%s' % (genre, name))
            # Add genre to this set
            genre_lookup_set.add(genre)
    with open("films.txt", "r") as films:
        filmsReader = csv.reader(films)
        #for each row in the films file
        for row in filmsReader:
            genre = row[1]
            name = row[2]
            # Build a lookup key using genre and name, example: Sci-Fi-Solaris
            lookup_key = '%s-%s' % (genre, name)
            if lookup_key not in genre_name_lookup_set and genre in genre_lookup_set:
                print(row)

Append items to dictionary Python

I am trying to write a function in Python that opens a file and parses it into a dictionary. I want the first item of each block in the list to be a key in the dictionary data, and the value to be the rest of that block, less the first item. For some reason, when I run the following function, it parses incorrectly. I have provided the output below. How can I parse it as stated above? Any help would be greatly appreciated.
Function:
def parseData():
    filename = "testdata.txt"
    file = open(filename, "r+")
    block = []
    for line in file:
        block.append(line)
        if line in ('\n', '\r\n'):
            album = block.pop(1)
            data[block[1]] = album
            block = []
    print data
Input:
Bob Dylan
1966 Blonde on Blonde
-Rainy Day Women #12 & 35
-Pledging My Time
-Visions of Johanna
-One of Us Must Know (Sooner or Later)
-I Want You
-Stuck Inside of Mobile with the Memphis Blues Again
-Leopard-Skin Pill-Box Hat
-Just Like a Woman
-Most Likely You Go Your Way (And I'll Go Mine)
-Temporary Like Achilles
-Absolutely Sweet Marie
-4th Time Around
-Obviously 5 Believers
-Sad Eyed Lady of the Lowlands
Output:
{'-Rainy Day Women #12 & 35\n': '1966 Blonde on Blonde\n',
'-Whole Lotta Love\n': '1969 II\n', '-In the Evening\n': '1979 In Through the Outdoor\n'}
You can use groupby to group the data using the empty lines as delimiters, and a defaultdict for repeated keys, extending with the remaining values from each val returned by groupby after extracting the key/first element.
from itertools import groupby
from collections import defaultdict

d = defaultdict(list)
with open("file.txt") as f:
    for k, val in groupby(f, lambda x: x.strip() != ""):
        # if k is True we have a section
        if k:
            # get key "k" which is the first line from each section,
            # val will be the remaining lines
            k, *v = val
            # add or add to the existing key/value pairing
            d[k].extend(map(str.rstrip, v))

from pprint import pprint as pp
pp(d)
Output:
{'Bob Dylan\n': ['1966 Blonde on Blonde',
'-Rainy Day Women #12 & 35',
'-Pledging My Time',
'-Visions of Johanna',
'-One of Us Must Know (Sooner or Later)',
'-I Want You',
'-Stuck Inside of Mobile with the Memphis Blues Again',
'-Leopard-Skin Pill-Box Hat',
'-Just Like a Woman',
"-Most Likely You Go Your Way (And I'll Go Mine)",
'-Temporary Like Achilles',
'-Absolutely Sweet Marie',
'-4th Time Around',
'-Obviously 5 Believers',
'-Sad Eyed Lady of the Lowlands'],
'Led Zeppelin\n': ['1979 In Through the Outdoor',
'-In the Evening',
'-South Bound Saurez',
'-Fool in the Rain',
'-Hot Dog',
'-Carouselambra',
'-All My Love',
"-I'm Gonna Crawl",
'1969 II',
'-Whole Lotta Love',
'-What Is and What Should Never Be',
'-The Lemon Song',
'-Thank You',
'-Heartbreaker',
"-Living Loving Maid (She's Just a Woman)",
'-Ramble On',
'-Moby Dick',
'-Bring It on Home']}
For python2 the unpack syntax is slightly different:
with open("file.txt") as f:
    for k, val in groupby(f, lambda x: x.strip() != ""):
        if k:
            k, v = next(val), val
            d[k].extend(map(str.rstrip, v))
If you want to keep the newlines, remove the map(str.rstrip...).
If you want the album and songs separately for each artist:
from itertools import groupby
from collections import defaultdict

d = defaultdict(lambda: defaultdict(list))
with open("file.txt") as f:
    for k, val in groupby(f, lambda x: x.strip() != ""):
        if k:
            k, alb, songs = next(val), next(val), val
            d[k.rstrip()][alb.rstrip()] = list(map(str.rstrip, songs))

from pprint import pprint as pp
pp(d)
{'Bob Dylan': {'1966 Blonde on Blonde': ['-Rainy Day Women #12 & 35',
'-Pledging My Time',
'-Visions of Johanna',
'-One of Us Must Know (Sooner or '
'Later)',
'-I Want You',
'-Stuck Inside of Mobile with the '
'Memphis Blues Again',
'-Leopard-Skin Pill-Box Hat',
'-Just Like a Woman',
'-Most Likely You Go Your Way '
"(And I'll Go Mine)",
'-Temporary Like Achilles',
'-Absolutely Sweet Marie',
'-4th Time Around',
'-Obviously 5 Believers',
'-Sad Eyed Lady of the Lowlands']},
'Led Zeppelin': {'1969 II': ['-Whole Lotta Love',
'-What Is and What Should Never Be',
'-The Lemon Song',
'-Thank You',
'-Heartbreaker',
"-Living Loving Maid (She's Just a Woman)",
'-Ramble On',
'-Moby Dick',
'-Bring It on Home'],
'1979 In Through the Outdoor': ['-In the Evening',
'-South Bound Saurez',
'-Fool in the Rain',
'-Hot Dog',
'-Carouselambra',
'-All My Love',
"-I'm Gonna Crawl"]}}
I guess this is what you want?
Even if this is not the format you wanted, there are a few things you might learn from the answer:
use with for file handling
nice to have:
PEP8-compliant code, see http://pep8online.com/
a shebang
numpydoc
if __name__ == '__main__'
And SE does not like a list being continued by code...
#!/usr/bin/env python
"""Parse text files with songs, grouped by album and artist."""


def add_to_data(data, block):
    """
    Parameters
    ----------
    data : dict
    block : list

    Returns
    -------
    dict
    """
    artist = block[0]
    album = block[1]
    songs = block[2:]
    if artist in data:
        data[artist][album] = songs
    else:
        data[artist] = {album: songs}
    return data


def parseData(filename='testdata.txt'):
    """
    Parameters
    ----------
    filename : string
        Path to a text file.

    Returns
    -------
    dict
    """
    data = {}
    with open(filename) as f:
        block = []
        for line in f:
            line = line.strip()
            if line == '':
                data = add_to_data(data, block)
                block = []
            else:
                block.append(line)
        data = add_to_data(data, block)
    return data


if __name__ == '__main__':
    data = parseData()
    import pprint
    pp = pprint.PrettyPrinter(indent=4)
    pp.pprint(data)
which gives:
{ 'Bob Dylan': { '1966 Blonde on Blonde': [ '-Rainy Day Women #12 & 35',
'-Pledging My Time',
'-Visions of Johanna',
'-One of Us Must Know (Sooner or Later)',
'-I Want You',
'-Stuck Inside of Mobile with the Memphis Blues Again',
'-Leopard-Skin Pill-Box Hat',
'-Just Like a Woman',
"-Most Likely You Go Your Way (And I'll Go Mine)",
'-Temporary Like Achilles',
'-Absolutely Sweet Marie',
'-4th Time Around',
'-Obviously 5 Believers',
'-Sad Eyed Lady of the Lowlands']},
'Led Zeppelin': { '1969 II': [ '-Whole Lotta Love',
'-What Is and What Should Never Be',
'-The Lemon Song',
'-Thank You',
'-Heartbreaker',
"-Living Loving Maid (She's Just a Woman)",
'-Ramble On',
'-Moby Dick',
'-Bring It on Home'],
'1979 In Through the Outdoor': [ '-In the Evening',
'-South Bound Saurez',
'-Fool in the Rain',
'-Hot Dog',
'-Carouselambra',
'-All My Love',
"-I'm Gonna Crawl"]}}

how to get values from the keys in dictionary

I have some code for car registration for parking. I have created a dictionary with the car registration number as keys and the rest of the information as values. I am trying to get the details (values) for each registration by entering its registration number, but even when the ID is in the dictionary, it shows the "not in dictionary" message.
# global variable
data_dict = {}
def createNameDict(filename):
    path = "C:\Users\user\Desktop"
    basename = "ParkingData_Part2.txt"
    filename = path + "\\" + basename
    file = open(filename)
    contents = file.read()
    print contents, "\n"
    data_list = [lines.split(",", 1) for lines in contents.split("\n")]
    #data_list.sort()
    #print data_list
    #dict_list = []
    for line in data_list:
        keys = line[0]
        values = line[1]
        data_dict[keys] = values
    print data_dict, "\n"
    print data_dict.keys(), "\n"
    print data_dict.values(), "\n"
    print data_list

def detailForRegistrationNumber(regNumber):
    regNumber == "keys"
    if regNumber in data_dict:
        print data_dict[regNumber]
    else:
        print regNumber, "Not in dictionary"
The error message I am getting is:
======= Loading Progam =======
>>> detailForRegistrationNumber('EDF768')
EDF768 Not in dictionary
But the dictionary has the above registration number:
{'HUY768': ' Wilbur Matty, 8912, Creche_Parking', 'GH7682': ' Clara Hill, 7689, AgHort_Parking', 'GEF123': ' Jill Black, 3456, Creche_Parking', 'WER546': ' Olga Grey, 9898, Creche_Parking', 'TY5678': ' Jane Miller, 8987, AgHort_Parking', 'ABC234': ' Fred Greenside, 2345, AgHort_Parking', 'KLOI98': ' Martha Miller, 4563, Vet_Parking', **'EDF768'**: ' Bill Meyer, 2456, Vet_Parking', 'JU9807': ' Jacky Blair, 7867, Vet_Parking', 'DF7800': ' Jacko Frizzle, 4532, Creche_Parking', 'ADF645': ' Cloe Freckle, 6789, Vet_Parking'}
I think your problem is that your function def createNameDict(filename): doesn't return anything, so the data_dict inside it is just a local variable!
Make the last line of the function return data_dict and then use it like data_dict = createNameDict(filename). There is no need for the global variable part, so just remove that.
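A minimal sketch of that fix (written in Python 3 syntax here; the comma layout is taken from the question's data, and the values are stripped of surrounding whitespace, which the original did not do):

```python
def createNameDict(filename):
    # Build the dict locally and RETURN it, instead of relying on a global.
    data_dict = {}
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # registration number before the first comma, details after it
            key, value = line.split(',', 1)
            data_dict[key.strip()] = value.strip()
    return data_dict

def detailForRegistrationNumber(data_dict, regNumber):
    # Look the registration up in the dict returned above.
    if regNumber in data_dict:
        print(data_dict[regNumber])
    else:
        print(regNumber, 'Not in dictionary')
```

Usage would then be data_dict = createNameDict(filename) followed by detailForRegistrationNumber(data_dict, 'EDF768').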
