Python pandas in a Person class

I have a Person class that has 2 methods, admin sign-in and log-in.
def admin_sign_in(self):  # this method makes a csv file of admin usernames and passwords
    info = {'user_name': [self.username], 'password': [self.password]}
    self.admin_df = pd.read_csv('admin_file.csv', sep=',')
    self.admin_df = pd.DataFrame(info)
    c = self.admin_df.index.values[-1]
    self.admin_df.loc[c + 1, ['user_name', 'password']]
    x = self.admin_df.to_csv('admin_file.csv', header=True, index=False, mode='a')
    return x
But in the csv file, every object I make is saved with the header and a 0 index.
Do you have any suggestions for how to manage this?

If you write with append mode then you should pass header=False.
You could create the file (manually) with only the headers, and later append new rows without headers:
import pandas as pd

username = 'james_bond'
password = '007'

info = {
    'username': [username],
    'password': [password],
}

df = pd.DataFrame(info)
df.to_csv('admin_file.csv', header=False, index=False, mode='a')
Append mode doesn't need to read the previous content and use loc[c+1, ...] to add a row at the end.
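If you don't want to create the header file by hand, a minimal sketch (assuming the admin_file.csv name from the question; the helper name is made up) is to write the header row only when the file doesn't exist yet:

import os
import pandas as pd

def append_admin(username, password, path='admin_file.csv'):
    # one-row frame for the new admin
    df = pd.DataFrame({'username': [username], 'password': [password]})
    # write the header only the first time, when the file doesn't exist yet
    write_header = not os.path.exists(path)
    df.to_csv(path, header=write_header, index=False, mode='a')

append_admin('james_bond', '007')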
Alternatively, you could write everything without headers and add them when you read the file back:
df = pd.read_csv('admin_file.csv', names=['username', 'password'])
But it could be better to read the previous content first and check that the username doesn't already exist.
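A minimal sketch of that check, assuming the same two-column layout as above (the helper name is made up, and the file is assumed to have a header row so read_csv can find the username column):

import os
import pandas as pd

def add_admin_if_new(username, password, path='admin_file.csv'):
    # read the existing admins, or start empty if the file is missing
    if os.path.exists(path):
        existing = pd.read_csv(path)
    else:
        existing = pd.DataFrame(columns=['username', 'password'])
    # refuse to add a duplicate username
    if username in existing['username'].values:
        return False
    new_row = pd.DataFrame({'username': [username], 'password': [password]})
    pd.concat([existing, new_row]).to_csv(path, index=False)
    return True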

Related

Comparing two data frames to check if the username and password are correct against data saved in my Excel file, using pandas only

I am trying to make a login page with Tkinter and pandas only, storing all the data in an Excel file, and am having trouble with reading the Excel file.
import pandas as pd

def USPchecker():  # method name
    obt_Username = UsrInp.get()  # storing input from user for username
    obt_Password = PassInp.get()  # storing password from user
    # print(File)
    for row in File.iterrows():
        df2 = pd.DataFrame(row)
        df2.sort_index(inplace=True)
        print(df2)
    dataFUN = {'USERNAME': obt_Username, 'PASSWORD': obt_Password}
    df1 = pd.DataFrame(dataFUN, index=[NONE])
    df1.sort_index(inplace=True)
    print(df1)
    if df1.reset_index(drop=True, inplace=True) == df2.reset_index(drop=True, inplace=True):
        report_window()
    else:
        messagebox.showerr("DOXC", "wrong username or password")
Not sure why you are storing the obt data in a dataframe.
The function below will return True or False for the obt fields:
def check_login(df2, obt_Username, obt_Password):
    df = df2[(df2['username'] == obt_Username) & (df2['password'] == obt_Password)]
    if len(df) > 0:
        return True
    else:
        return False
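As a sketch of how this could be wired up, assuming the credentials are stored in an Excel file called users.xlsx with username and password columns (both the file name and the column names are assumptions):

import pandas as pd

# read the stored credentials once; pd.read_excel needs openpyxl for .xlsx files
users = pd.read_excel('users.xlsx')

obt_Username = 'james_bond'   # would normally come from UsrInp.get()
obt_Password = '007'          # would normally come from PassInp.get()

if check_login(users, obt_Username, obt_Password):
    print('login ok')         # the question would call report_window() here
else:
    print('wrong username or password')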

How to loop through json data with multiple objects

My json file data.json looks like this
[
{"host" : "192.168.0.25", "username":"server2", "path":"/home/server/.ssh/01_id"},
{"host" : "192.168.0.26", "username":"server3", "path":"/home/server/.ssh/01_id"}
]
I want the loop to happen in this way only (let's ignore the remote variable):
for remotes,host,username in zip(remote , data["host"] ,data["username"]):
This is the error I am getting:
for remotes,host,username in list(zip(remote , data["host"] ,data["username"])):
TypeError: list indices must be integers or slices, not str
You need to iterate the data to extract the host and username values so that you can zip them to the remote list:
data = [
    {"host": "192.168.0.25", "username": "server2", "path": "/home/server/.ssh/01_id"},
    {"host": "192.168.0.26", "username": "server3", "path": "/home/server/.ssh/01_id"}
]

hosts_users = [(d['host'], d['username']) for d in data]

remote = [1, 2]
for remotes, (host, username) in zip(remote, hosts_users):
    print(remotes, host, username)
Output:
1 192.168.0.25 server2
2 192.168.0.26 server3
If you have a json file, first you need to read it; after that, you can manipulate the data as a Python object:
import json

with open("data.json") as json_file:
    data = json.load(json_file)

for d in data:
    host = d['host']
    username = d['username']
    path = d['path']
    print(host, username, path)
You can do it by using map with zip, like this:
# uncomment the following code if the data resides in a json file
# import json
# file = open('path_of_your_json')
# data = json.load(file)

data = [
    {"host": "192.168.0.25", "username": "server2", "path": "/home/server/.ssh/01_id"},
    {"host": "192.168.0.26", "username": "server3", "path": "/home/server/.ssh/01_id"}
]

for (host, username, path) in zip(*zip(*map(lambda x: x.values(), data))):
    print(host, username, path)
    # whatever you want
The expression zip(*zip(*map(lambda x: x.values(), data))) yields the data one record at a time: the inner zip(*...) transposes the per-dict values into columns, and the outer zip(*...) transposes those columns back into (host, username, path) tuples.
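To see what each step produces, a small illustration using the same data list as above:

# per-dict values, zipped into columns
columns = list(zip(*map(lambda x: x.values(), data)))
print(columns)
# [('192.168.0.25', '192.168.0.26'), ('server2', 'server3'),
#  ('/home/server/.ssh/01_id', '/home/server/.ssh/01_id')]

# transposing again restores one tuple per record
rows = list(zip(*columns))
print(rows)
# [('192.168.0.25', 'server2', '/home/server/.ssh/01_id'),
#  ('192.168.0.26', 'server3', '/home/server/.ssh/01_id')]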
Since you mentioned specifically that you would like to iterate through the data column-wise using zip, here is how you can do that.
Say the json file name is SO.json
Load the json object in the variable data.
import json
f = open(r'C:\Users\YYName\Desktop\Temp\SO.json')
data = json.load(f)
Now load the json data into a pandas dataframe; you can then iterate through the columns using zip.
import pandas as pd

df = pd.DataFrame(data)
for host, username in zip(df["host"], df["username"]):
    print(host, username)
Assuming remote has the same length as the number of rows in your json, you can now do:
for remotes, host, username in zip(remote, df["host"], df["username"]):
    print(remotes, host, username)

How can I alter the value in a specific column of a certain row in python without the use of pandas?

I was playing around with the code provided here: https://www.geeksforgeeks.org/update-column-value-of-csv-in-python/ and couldn't seem to figure out how to change the value in a specific column of the row without it bringing up an error.
Say I wanted to change the status of the row belonging to the name Molly Singh, how would I go about it? I've tried the following below, only to get an error and the CSV file turning out empty. I'd also prefer the solution be without the use of pandas, thanks.
For example the row in the csv file will originally be
Sno  Registration Number  Name         RollNo     Status
1    11913907             Molly Singh  RK19TSA01  P
What I want the outcome to be
Sno  Registration Number  Name         RollNo     Status
1    11913907             Molly Singh  RK19TSA01  N
One more question: if I were to alter the value in the Sno column by doing addition/subtraction etc., how would I go about that as well? Thanks!
The error I get, as you can see, is that the Name column is changed to True then False etc.
import csv

op = open("AllDetails.csv", "r")
dt = csv.DictReader(op)
print(dt)
up_dt = []
for r in dt:
    print(r)
    row = {'Sno': r['Sno'],
           'Registration Number': r['Registration Number'],
           'Name' == "Molly Singh": r['Name'],
           'RollNo': r['RollNo'],
           'Status': 'P'}
    up_dt.append(row)
    print(up_dt)
op.close()

op = open("AllDetails.csv", "w", newline='')
headers = ['Sno', 'Registration Number', 'Name', 'RollNo', 'Status']
data = csv.DictWriter(op, delimiter=',', fieldnames=headers)
data.writerow(dict((heads, heads) for heads in headers))
data.writerows(up_dt)
op.close()
Issues
Your error is because the field name in the input file is misspelled as Regristation rather than Registration.
The correction is to just read the names from the input file and propagate them to the output file, as below.
Alternatively, you can change your code to:
headers = ['Sno', 'Regristation Number', 'Name', 'RollNo', 'Status']
"One more question if I were to alter the value in column snow by doing addition/substraction etc how would I go about that as well"
I'm not sure what is meant by this. In the code below you would just have:
r['Sno'] = (some compute value)
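For example, a small sketch of that arithmetic (DictReader returns every field as a string, so it has to be converted first; the +10 offset is just an illustration):

# DictReader yields strings, so convert before doing arithmetic
r = {'Sno': '1', 'Name': 'Molly Singh'}
r['Sno'] = str(int(r['Sno']) + 10)   # '1' -> '11'
print(r['Sno'])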
Code
import csv

with open("AllDetails.csv", "r") as op:
    dt = csv.DictReader(op)
    headers = None
    up_dt = []
    for r in dt:
        # get header of input file
        if headers is None:
            headers = r
        # Change status of 'Molly Singh' record
        if r['Name'] == 'Molly Singh':
            r['Status'] = 'N'
        up_dt.append(r)

with open("AllDetails.csv", "w", newline='') as op:
    # Use headers from input file above
    data = csv.DictWriter(op, delimiter=',', fieldnames=headers)
    data.writerow(dict((heads, heads) for heads in headers))
    data.writerows(up_dt)

Selecting values from a JSON file in Python

I am getting JIRA data using the following Python code.
How do I store the response for more than one key (my example shows only one KEY, but in general I get a lot of data) and print only the values corresponding to total, key, customfield_12830, and summary?
import requests
import json
import logging
import datetime
import base64
import urllib
serverURL = 'https://jira-stability-tools.company.com/jira'
user = 'username'
password = 'password'
query = 'project = PROJECTNAME AND "Build Info" ~ BUILDNAME AND assignee=ASSIGNEENAME'
jql = '/rest/api/2/search?jql=%s' % urllib.quote(query)
response = requests.get(serverURL + jql,verify=False,auth=(user, password))
print response.json()
response.json() OUTPUT:-
http://pastebin.com/h8R4QMgB
From the link you pasted to pastebin and from the json that I saw, the response contains an issues list, each entry containing key, fields (which holds the custom fields), self, id, and expand.
You can simply iterate through this response and extract values for the keys you want. You can go like this:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = {
        'key': issue['key'],
        'customfield': issue['fields']['customfield_12830'],
        'total': issue['fields']['progress']['total']
    }
    x.append(temp)
print(x)
x is a list of dictionaries containing the data for the fields you mentioned. Let me know if I have been unclear somewhere or if what I have given is not what you are looking for.
PS: It is always advisable to use dict.get('keyname', None) to get values, as you can supply a default value if the key is not found. For this solution I didn't do it, as I just wanted to show the approach.
Update: In the comments you (OP) mentioned that it gives an AttributeError. Try this code:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = dict()
    key = issue.get('key', None)
    if key:
        temp['key'] = key
    fields = issue.get('fields', None)
    if fields:
        customfield = fields.get('customfield_12830', None)
        temp['customfield'] = customfield
        progress = fields.get('progress', None)
        if progress:
            total = progress.get('total', None)
            temp['total'] = total
    x.append(temp)
print(x)

export list to csv and present to user via browser

Want to prompt browser to save csv
^^ Working off the above question: the file is exporting correctly but the data is not displaying correctly.
@view_config(route_name='csvfile', renderer='csv')
def csv(self):
    name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
    header = ['name']
    rows = []
    for item in name:
        rows = [item.id]
        return {
            'header': header,
            'rows': rows
        }
I'm getting _csv.Error: sequence expected, but if I change writer.writerows(value['rows']) to writer.writerow(value['rows']) in my renderer, the file will download via the browser just fine. The problem is, it's not displaying the data in each row. The entire result/dataset is in one row, so each entry is in its own column rather than its own row.
First, I wonder if having a return statement inside your for loop isn't also causing problems; from the linked example it looks like their loop was in the prior statement.
I think what it's doing is building a collection of rows based on "table" having columns with the same names as the headers. What are the fields in your table table?
name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
This is going to give you back essentially a collection of rows from table, as if you did a SELECT query on it.
Something like
name = DBSession.query(table).join(othertable).filter(othertable.id == 9701).all()
header = ['name']
rows = []
for item in name:
    rows.append(item.name)
return {
    'header': header,
    'rows': rows
}
Figured it out. I kept getting Error: sequence expected, so I looked at the output and decided to try putting each result inside another list.
@view_config(route_name='csv', renderer='csv')
def csv(self):
    d = datetime.now()
    query = DBSession.query(table, othertable).join(othertable).join(thirdtable).filter(
        thirdtable.sid == 9701)
    header = ['First Name', 'Last Name']
    rows = []
    filename = "csvreport" + d.strftime(" %m/%d").replace(' 0', '')
    for i in query:
        items = [i.table.first_name, i.table.last_name,
                 i.othertable.login_time.strftime("%m/%d/%Y")]
        rows.append(items)
    return {
        'header': header,
        'rows': rows,
        'filename': filename
    }
This accomplishes 3 things: it fills out the header, fills the rows, and passes through a filename.
The renderer should look like this:
class CSVRenderer(object):
    def __init__(self, info):
        pass

    def __call__(self, value, system):
        fout = StringIO.StringIO()
        writer = csv.writer(fout, delimiter=',', quotechar=',', quoting=csv.QUOTE_MINIMAL)
        writer.writerow(value['header'])
        writer.writerows(value['rows'])
        resp = system['request'].response
        resp.content_type = 'text/csv'
        resp.content_disposition = 'attachment;filename=' + value['filename'] + '.csv'
        return fout.getvalue()
This way, you can use the same csv renderer anywhere else and be able to pass through your own filename. It's also the only way I could figure out how to get the data from one column in the database to iterate through one column in the renderer. It feels a bit hacky but it works and works well.
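For completeness, a minimal sketch of how a renderer like this is usually registered with Pyramid so that renderer='csv' resolves to it (where exactly this lives depends on your app setup, so treat the surrounding main() as an assumption):

from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)
    # make renderer='csv' in @view_config resolve to the CSVRenderer class above
    config.add_renderer('csv', CSVRenderer)
    config.scan()
    return config.make_wsgi_app()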
