I am trying to perform entity analysis on text, and I want to put the results in a dataframe. Currently the results are only printed; they are not stored in a dictionary or in a DataFrame. The results are extracted with two functions.
df:

ID   title    cur_working  pos_arg          neg_arg                             date
132  leave    yes          good coffee      management, leadership and salary   13-04-2018
145  love it  yes          nice colleagues  long days                           14-04-2018
I have the following code:
result = entity_analysis(df, 'neg_arg', 'ID')

# This code loops through the rows and calls the function entities_text()
def entity_analysis(df, col, idcol):
    temp_dict = {}
    for index, row in df.iterrows():
        id = row[idcol]
        x = row[col]
        entities = entities_text(x, id)
        #temp_dict.append(entities)
    #final = pd.DataFrame(columns = ['id', 'name', 'type', 'salience'])
    return print(entities)
def entities_text(text, id):
    """Detects entities in the text."""
    client = language.LanguageServiceClient()
    ent_df = {}

    if isinstance(text, six.binary_type):
        text = text.decode('utf-8')

    # Instantiates a plain text document.
    document = types.Document(
        content=text,
        type=enums.Document.Type.PLAIN_TEXT)

    # Detects entities in the document.
    entities = client.analyze_entities(document).entities

    # entity types from enums.Entity.Type
    entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION',
                   'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER')

    for entity in entities:
        ent_df[id] = {
            'name': [entity.name],
            'type': [entity_type[entity.type]],
            'salience': [entity.salience]
        }
    return print(ent_df)
This code gives the following outcome:
{'132': {'name': ['management'], 'type': ['OTHER'], 'salience': [0.16079013049602509]}}
{'132': {'name': ['leadership'], 'type': ['OTHER'], 'salience': [0.05074194446206093]}}
{'132': {'name': ['salary'], 'type': ['OTHER'], 'salience': [0.27505040168762207]}}
{'145': {'name': ['days'], 'type': ['OTHER'], 'salience': [0.004272154998034239]}}
I have created temp_dict and a final dataframe in the function entity_analysis(). This thread explained that appending to a dataframe in a loop is not efficient, but I don't know how to populate the dataframe efficiently. These threads are related to my question, but they explain how to populate a DataFrame from existing data. When I try to use temp_dict.update(entities) and return temp_dict, I get an error:
in entity_analysis
temp_dict.update(entities)
TypeError: 'NoneType' object is not iterable
I want the output to be like this:
ID   name        type   salience
132  management  OTHER  0.16079013049602509
132  leadership  OTHER  0.05074194446206093
132  salary      OTHER  0.27505040168762207
145  days        OTHER  0.004272154998034239
First, about the TypeError: entities_text() ends with return print(ent_df), and print() returns None, so temp_dict.update(entities) is effectively temp_dict.update(None). Return the data itself instead of printing it. One solution is then to build a list of lists from your entities iterable and feed that list of lists to pd.DataFrame:
LoL = []
for entity in entities:
    LoL.append([id, entity.name, entity_type[entity.type], entity.salience])

df = pd.DataFrame(LoL, columns=['ID', 'name', 'type', 'salience'])
If you also need the dictionary in the format you currently produce, you can add your current logic to the for loop. First, however, check whether you really need two structures storing identical data.
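To show how this wires together end to end, here is a minimal sketch (assuming the same language, types, enums, and six imports the question already uses): entities_text() returns rows instead of printing them, and entity_analysis() builds the DataFrame once, after the loop:

import pandas as pd

def entities_text(text, id):
    """Detects entities in the text; returns one row per entity."""
    client = language.LanguageServiceClient()
    if isinstance(text, six.binary_type):
        text = text.decode('utf-8')
    document = types.Document(
        content=text,
        type=enums.Document.Type.PLAIN_TEXT)
    entities = client.analyze_entities(document).entities
    entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION',
                   'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER')
    # return the data instead of printing it
    return [[id, e.name, entity_type[e.type], e.salience] for e in entities]

def entity_analysis(df, col, idcol):
    rows = []
    for _, row in df.iterrows():
        rows.extend(entities_text(row[col], row[idcol]))
    # build the DataFrame once, after the loop
    return pd.DataFrame(rows, columns=['ID', 'name', 'type', 'salience'])

result = entity_analysis(df, 'neg_arg', 'ID')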
Related
I have a Python script that fetches data from the Meraki dashboard through its API. The data is stored in a dataframe, which then needs to be pushed to a Smartsheet using the Smartsheet API. I've searched the Smartsheet API documentation but couldn't find a solution to the problem. Has anyone worked on this kind of use case before, or does anyone know a script to push a simple dataframe to Smartsheet?
The code is something like this:
for device in list_of_devices:
    try:
        dict1 = {'Name': [device['name']],
                 'Serial_No': [device['serial']],
                 'MAC': [device['mac']],
                 'Network_Id': [device['networkId']],
                 'Product_Type': [device['productType']],
                 'Model': [device['model']],
                 'Tags': [device['tags']],
                 'Lan_Ip': [device['lanIp']],
                 'Configuration_Updated_At': [device['configurationUpdatedAt']],
                 'Firmware': [device['firmware']],
                 'URL': [device['url']]
                 }
    except KeyError:
        dict1['Lan_Ip'] = "NA"
    temp = pd.DataFrame.from_dict(dict1)
    alldata = alldata.append(temp)
alldata.reset_index(drop=True, inplace=True)
The dataframe("alldata") looks something like this:
Name Serial_No MAC \
0 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
1 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
2 xxxxxxxxxxxxxxxx xxxxxxxxxxxxxx xxxxxxxxxxxxxxxxx
The dataframe has around 1,000 rows and 11 columns.
I've tried pushing this dataframe using code similar to what was mentioned in the comments, but I'm getting a "Bad Request" error.
smart = smartsheet.Smartsheet(access_token='xxxxxxxx')
sheet_id = xxxxxxxxxxxxx
sheet = smart.Sheets.get_sheet(sheet_id)

column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

data_dict = alldata.to_dict('index')

rowsToAdd = []
for _, row in data_dict.items():
    new_row = smart.models.Row()
    new_row.to_top = True
    for k, v in row.items():
        new_cell = smart.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        new_row.cells.append(new_cell)
    rowsToAdd.append(new_row)

result = smart.Sheets.add_rows(sheet_id, rowsToAdd)
{"response": {"statusCode": 400, "reason": "Bad Request", "content": {"detail": {"index": 0}, "errorCode": 1012, "message": "Required object attribute(s) are missing from your request: cell.value.", "refId": "1ob56acvz5nzv"}}}
[Screenshot: the Smartsheet the data must be pushed to]
The following code adds data from a dataframe to a sheet in Smartsheet -- this should be enough to at least get you started. If you still can't get the desired result using this code, please update your original post to include the code you're using, the outcome you want, and a detailed description of the issue you encountered. (Add a comment to this answer if you update your original post, so I'll be notified and know to look.)
# target sheet
sheet_id = 3932034054809476
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

df = pd.DataFrame({'item_id': [111111, 222222],
                   'item_color': ['red', 'yellow'],
                   'item_location': ['office', 'kitchen']})

data_dict = df.to_dict('index')

rowsToAdd = []
# each object in data_dict represents 1 row of data
for _, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key/value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)

# add the collection of rows to the sheet in Smartsheet
result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
UPDATE #1 - re Bad Request error
The error you've described in your first comment below seems to be caused by some of the cells in your dataframe not having a value. When you add a new row using the Smartsheet API, each cell that's specified for the row must specify a value -- otherwise you'll get the Bad Request error you've described. Try adding an if statement inside the for loop to skip adding the cell when the value of v is None:
for k, v in row.items():
    # skip adding this cell if there's no value
    if v is None:
        continue
    ...
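One caveat worth checking against your data: once values have passed through a DataFrame, pandas typically stores missing values as NaN rather than None, and an is None test won't catch NaN. A sketch of a more robust inner loop using pd.isna(), which covers both:

for k, v in row.items():
    # skip adding this cell if the value is missing (None or NaN)
    if pd.isna(v):
        continue
    new_cell = smartsheet_client.models.Cell()
    new_cell.column_id = column_map[k]
    new_cell.value = v
    new_row.cells.append(new_cell)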
UPDATE #2 - re further troubleshooting
In response to your second comment below: you'll need to debug further using the data in your dataframe, as I'm unable to repro the issue you describe using other data.
To simplify things, I'd suggest starting by trying to debug with just one item in the dataframe. You can do so by adding a break statement at the end of the for loop that builds the dict -- that way, only the first device is added.
for device in list_of_devices:
    try:
        ...
    except KeyError:
        dict1['Lan_Ip'] = "NA"
    temp = pd.DataFrame.from_dict(dict1)
    alldata = alldata.append(temp)
    # break out of the loop after one item is added
    break

alldata.reset_index(drop=True, inplace=True)
# print dataframe contents
print(alldata)
If you get the same error when testing with just one item, and can't see what it is about that data (or the way it's stored in your dataframe) that's causing the Smartsheet error, then add a print(alldata) statement after the for loop (as shown in the snippet above) and update your original post again to include the output of that statement (masking any sensitive data values, of course) -- then I can try to repro and troubleshoot using that data.
UPDATE #3 - repro'd issue
Okay, so I've reproduced the error you've described -- by specifying None as the value of a field in the dict.
The following code successfully inserts two new rows into Smartsheet -- because every field in each dict it builds contains a (non-None) value. (For simplicity, I'm manually constructing two dicts in the same manner as you do in your for loop.)
# target sheet
sheet_id = 37558492129156
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

#----
# start: repro SO question's building of dataframe
#----
alldata = pd.DataFrame()

dict1 = {'Name': ['name1'],
         'Serial_No': ['serial_no1'],
         'MAC': ['mac1'],
         'Network_Id': ['networkId1'],
         'Product_Type': ['productType1'],
         'Model': ['model1'],
         'Tags': ['tags1'],
         'Lan_Ip': ['lanIp1'],
         'Configuration_Updated_At': ['configurationUpdatedAt1'],
         'Firmware': ['firmware1'],
         'URL': ['url1']
         }
temp = pd.DataFrame.from_dict(dict1)
alldata = alldata.append(temp)

dict2 = {'Name': ['name2'],
         'Serial_No': ['serial_no2'],
         'MAC': ['mac2'],
         'Network_Id': ['networkId2'],
         'Product_Type': ['productType2'],
         'Model': ['model2'],
         'Tags': ['tags2'],
         'Lan_Ip': ['lanIp2'],
         'Configuration_Updated_At': ['configurationUpdatedAt2'],
         'Firmware': ['firmware2'],
         'URL': ['URL2']
         }
temp = pd.DataFrame.from_dict(dict2)
alldata = alldata.append(temp)

alldata.reset_index(drop=True, inplace=True)
#----
# end: repro SO question's building of dataframe
#----

data_dict = alldata.to_dict('index')

rowsToAdd = []
# each object in data_dict represents 1 row of data
for _, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key/value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)

result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
However, running the following code (where the value of the URL field in the second dict is set to None) results in the same error you've described:
{"response": {"statusCode": 400, "reason": "Bad Request", "content": {"detail": {"index": 1}, "errorCode": 1012, "message": "Required object attribute(s) are missing from your request: cell.value.", "refId": "dw1id3oj1bv0"}}}
Code that causes this error (identical to the successful code above except that the value of the URL field in the second dict is None):
# target sheet
sheet_id = 37558492129156
sheet = smartsheet_client.Sheets.get_sheet(sheet_id)

# translate column names to column ids
column_map = {}
for column in sheet.columns:
    column_map[column.title] = column.id

#----
# start: repro SO question's building of dataframe
#----
alldata = pd.DataFrame()

dict1 = {'Name': ['name1'],
         'Serial_No': ['serial_no1'],
         'MAC': ['mac1'],
         'Network_Id': ['networkId1'],
         'Product_Type': ['productType1'],
         'Model': ['model1'],
         'Tags': ['tags1'],
         'Lan_Ip': ['lanIp1'],
         'Configuration_Updated_At': ['configurationUpdatedAt1'],
         'Firmware': ['firmware1'],
         'URL': ['url1']
         }
temp = pd.DataFrame.from_dict(dict1)
alldata = alldata.append(temp)

dict2 = {'Name': ['name2'],
         'Serial_No': ['serial_no2'],
         'MAC': ['mac2'],
         'Network_Id': ['networkId2'],
         'Product_Type': ['productType2'],
         'Model': ['model2'],
         'Tags': ['tags2'],
         'Lan_Ip': ['lanIp2'],
         'Configuration_Updated_At': ['configurationUpdatedAt2'],
         'Firmware': ['firmware2'],
         'URL': [None]
         }
temp = pd.DataFrame.from_dict(dict2)
alldata = alldata.append(temp)

alldata.reset_index(drop=True, inplace=True)
#----
# end: repro SO question's building of dataframe
#----

data_dict = alldata.to_dict('index')

rowsToAdd = []
# each object in data_dict represents 1 row of data
for _, row in data_dict.items():
    # create a new row object
    new_row = smartsheet_client.models.Row()
    new_row.to_top = True
    # for each key/value pair, create & add a cell to the row object
    for k, v in row.items():
        # create the cell object and populate with value
        new_cell = smartsheet_client.models.Cell()
        new_cell.column_id = column_map[k]
        new_cell.value = v
        # add the cell object to the row object
        new_row.cells.append(new_cell)
    # add the row object to the collection of rows
    rowsToAdd.append(new_row)

result = smartsheet_client.Sheets.add_rows(sheet_id, rowsToAdd)
Finally, note that the error message I received contains {"index": 1} -- the value of index indicates the (zero-based) index of the problematic row. The fact that your error message contains {"index": 0} implies there's a problem with the data in the first row you're trying to add to Smartsheet (i.e., the first item in the dataframe). Following the troubleshooting guidance in Update #2 above should therefore let you closely examine the data for that first item/row and hopefully spot where the value is missing.
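As a quick sketch for examining that first row, plus a per-column count of missing values:

# inspect every column value of the first dataframe row
print(alldata.iloc[0])
# count missing values per column across the whole dataframe
print(alldata.isna().sum())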
I've extracted the data from the API response and created a dictionary-building function:
def data_from_api(a):
    dictionary = dict(
        data=a['number'],
        created_by=a['opened_by'],
        assigned_to=a['assigned'],
        closed_by=a['closed']
    )
    return dictionary
and then collected the records into a list for the df (around 1k records):
raw_data = []
for k in data['resultsData']:
    records = data_from_api(k)
    raw_data.append(records)
I would like a function that extracts the nested {display_value} fields from those columns in the dataframe; I need only the names, like John Snow, etc. How do I create a function that extracts the display values for those fields? I've tried something like:
df = pd.DataFrame.from_records(raw_data)

def get_nested_fields(nested):
    if isinstance(nested, dict):
        return nested['display_value']
    else:
        return ''

df['created_by'] = df['opened_by'].apply(get_nested_fields)
df['assigned_to'] = df['assigned'].apply(get_nested_fields)
df['closed_by'] = df['closed'].apply(get_nested_fields)
but I'm getting an error:
KeyError: 'created_by'
Could you please help me?
You can use the .str accessor with get(), as below. If the key isn't there, it writes None.
df = pd.DataFrame({'data': [1234, 5678, 5656],
                   'created_by': [{'display_value': 'John Snow', 'link': 'a.com'},
                                  {'display_value': 'John Dow'},
                                  {'my_value': 'Jane Doe'}]})
df['author'] = df['created_by'].str.get('display_value')
Output:
data created_by author
0 1234 {'display_value': 'John Snow', 'link': 'a.com'} John Snow
1 5678 {'display_value': 'John Dow'} John Dow
2 5656 {'my_value': 'Jane Doe'} None
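If the goal is to apply this to the question's dataframe, note that data_from_api() already renamed the columns (opened_by became created_by, and so on), so indexing the original names will raise a KeyError. As a sketch, the extraction would therefore target the renamed columns in place:

# the dataframe built from data_from_api() uses the renamed columns,
# so extract the display values in place
df['created_by'] = df['created_by'].str.get('display_value')
df['assigned_to'] = df['assigned_to'].str.get('display_value')
df['closed_by'] = df['closed_by'].str.get('display_value')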
I'm writing a Python program to save and retrieve customer data in Cloud Datastore. My entity looks like this:
entity.update({
    'customerId': args['customerId'],
    'name': args['name'],
    'email': args['email'],
    'city': args['city'],
    'mobile': args['mobile']
})
datastore_client.put(entity)
I'm successfully saving the data. Now I want to retrieve the email id from a random record. I have written the code below:
def get_customer():
    query = datastore_client.query(kind='CustomerKind')
    results = list(query.fetch())
    chosen_customer = random.choice(results)
    print(chosen_customer)
But instead of getting just one random email id, I'm getting the entire entity, like this:
<Entity('CustomerKind', 6206716152643584) {'customerId': '103', 'city': 'bhubaneswar', 'name': 'Amit', 'email': 'amit#gmail.com', 'mobile': '7879546732'}>
Can anyone suggest how I can get only 'email': 'amit#gmail.com'? I'm new to Datastore.
When using
query = datastore_client.query(kind='CustomerKind')
results = list(query.fetch())
you are retrieving all the properties from all the entities that will be returned.
Instead, you can use a projection query, which allows you to retrieve only the specified properties from the entities:
query = datastore_client.query(kind="CustomerKind")
query.projection = ["email"]
results = list(query.fetch())
Using projection queries is recommended for cases like this, in which you only need some properties as they reduce cost and latency.
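Putting that together with the original get_customer() -- a minimal sketch, assuming datastore_client is constructed as in the question (Datastore entities support dict-style access):

import random

def get_random_email():
    # projection query: entities come back with only the 'email' property
    query = datastore_client.query(kind='CustomerKind')
    query.projection = ['email']
    results = list(query.fetch())
    # pick one entity at random and read the property dict-style
    chosen_customer = random.choice(results)
    return chosen_customer['email']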
I have a set of nested JSON, and this is what I am doing thus far:
r = session.get(search_url, auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL), verify=False)
json_data = json.loads(r.content)
flattened_data = json_normalize(json_data['documents'])
print(list(flattened_data))
This outputs the following results:
['affected_users', 'aggregatedLabels', 'aliases', 'assignedFolder', 'assigneeIdentity', 'attachments', 'authorizations', 'autoUpgrade.workingHours', 'conversation', 'createDate', 'dedupes', 'deleted', 'description', 'descriptionContentType', 'editCount', 'engagementList', 'extensions.backlog.priority', 'extensions.effort.effortEstimatedLocal.effort', 'extensions.effort.effortEstimatedLocal.unit', 'extensions.effort.effortEstimatedRecursiveSum.effort', 'extensions.effort.effortEstimatedRecursiveSum.unit', 'extensions.effort.effortRemainingLocalSum.effort', 'extensions.effort.effortRemainingLocalSum.unit', 'extensions.effort.effortRemainingRecursiveSum.effort', 'extensions.effort.effortRemainingRecursiveSum.unit', 'extensions.effort.effortSpentLocalSum.effort', 'extensions.effort.effortSpentLocalSum.unit', 'extensions.effort.effortSpentRecursiveSum.effort', 'extensions.effort.effortSpentRecursiveSum.unit', 'extensions.tt.assignedGroup', 'extensions.tt.building', 'extensions.tt.caseType', 'extensions.tt.category', 'extensions.tt.city', 'extensions.tt.endCode', 'extensions.tt.ecd', 'extensions.tt.impact', 'extensions.tt.item', 'extensions.tt.justification', 'extensions.tt.migrationStatus', 'extensions.tt.minImpact', 'extensions.tt.resolution', 'extensions.tt.rootCause', 'extensions.tt.rootCauseDetails', 'extensions.tt.status', 'extensions.tt.type', 'frames', 'id', 'identityTimestamped', 'inheritedLabels', 'isTicket', 'labels', 'lastAssignedDate', 'lastResolvedByIdentity', 'lastResolvedDate', 'lastUpdatedActualDate', 'lastUpdatedConversationDate', 'lastUpdatedDate', 'lastUpdatedIdentity', 'next_step.action', 'next_step.exceptions', 'next_step.owner', 'parentTasks', 'requesterIdentity', 'rootCauses', 'rulesReceipt', 'schedule.estimatedCompletionDate', 'schedule.estimatedStartDate', 'schedule.needByDate', 'schema', 'slaReceipts', 'status', 'stickyThreadId', 'submitterIdentity', 'subtasks', 'tags', 'threads', 'title', 'watchers']
From this list I am trying to get only certain keys and their values into the data frame:
print(flattened_data['assigneeIdentity',
                     # 'createDate',
                     # 'description',
                     # 'extensions.tt.assignedGroup',
                     # 'extensions.tt.category',
                     # 'extensions.tt.endCode',
                     # 'extensions.tt.ecd',
                     # 'extensions.tt.impact',
                     # 'extensions.tt.item',
                     # 'extensions.tt.justification',
                     # 'extensions.tt.resolution',
                     # 'extensions.tt.rootCause',
                     # 'extensions.tt.rootCauseDetails',
                     # 'extensions.tt.status',
                     # 'extensions.tt.type',
                     # 'id',
                     # 'labels',
                     # 'lastAssignedDate',
                     # 'lastResolvedByIdentity',
                     # 'lastResolvedDate',
                     # 'lastUpdatedActualDate',
                     # 'lastUpdatedConversationDate',
                     # 'lastUpdatedDate',
                     # 'lastUpdatedIdentity',
                     # 'requesterIdentity',
                     # 'submitterIdentity',
                     # 'title',
                     # 'watchers'])
When I do this I get a KeyError. The base JSON that comes in is structured as follows for the fields listed above, giving an idea of the nesting level of each one; each item is an integer key under the documents element, with the more nested elements I need inside it:
documents:
  0:
    extensions:
      tt:
        category:
        type:
        item:
        assignedGroup:
        impact:
        justification:
        endCode:
        rootCause:
        rootCauseDetails:
        status:
    id:
    title:
    lastAssignedDate:
    createDate:
    lastUpdatedActualDate:
    lastResolvedDate:
    lastResolvedByIdentity:
    lastUpdatedIdentity:
    assigneeIdentity:
    submitterIdentity:
    requesterIdentity:
    identityTimestamped:
    lastUpdatedConversationDate:
    lastUpdatedDate:
  1:
    extensions:
      tt:
        category:
        type:
        item:
        assignedGroup:
        impact:
        justification:
        endCode:
        rootCause:
        rootCauseDetails:
        status:
    id:
    title:
    lastAssignedDate:
    createDate:
    lastUpdatedActualDate:
    lastResolvedDate:
    lastResolvedByIdentity:
    lastUpdatedIdentity:
    assigneeIdentity:
    submitterIdentity:
    requesterIdentity:
    identityTimestamped:
    lastUpdatedConversationDate:
    lastUpdatedDate:
How do I get these fields and their values into a dataframe?
flattened_data should already be a valid DataFrame. The error appears to be that you're trying to print flattened_data["key1", "key2", ...], which looks for a single column named ("key1", "key2", ...) in flattened_data. In essence, you are telling the DataFrame "get the column whose name is this tuple".
To select a list of columns from a DataFrame, you should instead try flattened_data[["key1", "key2", ...]], which says "get all of the columns whose names are in this list".
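A tiny illustration of the difference, using a made-up frame:

import pandas as pd

df = pd.DataFrame({'a': [1], 'b': [2]})
print(df[['a', 'b']])  # list of names -> selects both columns
# df['a', 'b'] would raise KeyError: it looks for one column named ('a', 'b')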
What could also be happening here is that you have a DataFrame with columns ["0.id", "0.title", ..., "1.id", "1.title", ...], with just one row: the values assigned to each of those paths in the JSON object.
However, pandas.io.json.json_normalize() can take a list of dictionaries as an argument, so instead of using flattened_data = json_normalize(json_data['documents']), passing a list of the sub-dictionaries in json_data['documents'] (for example, json_data['documents'].values()) should return the correct DataFrame:
records = list(json_data['documents'].values())
flattened_data = json_normalize(records)
Then, you could retrieve the columns you want with:
print(flattened_data[['assigneeIdentity', 'createDate', 'description', 'extensions.tt.assignedGroup', ...]])
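For illustration, here is a self-contained sketch with a made-up two-document payload shaped like the outline above; normalizing the list of sub-dictionaries yields the dot-separated column names:

import pandas as pd
from pandas.io.json import json_normalize  # pandas >= 1.0: from pandas import json_normalize

# hypothetical payload: 'documents' maps integer keys to nested documents
json_data = {
    'documents': {
        0: {'id': 'T-1', 'title': 'first ticket',
            'extensions': {'tt': {'category': 'hardware', 'status': 'open'}}},
        1: {'id': 'T-2', 'title': 'second ticket',
            'extensions': {'tt': {'category': 'software', 'status': 'closed'}}},
    }
}

records = list(json_data['documents'].values())
flattened_data = json_normalize(records)
# nested dicts become dot-separated column names
print(flattened_data[['id', 'title', 'extensions.tt.category', 'extensions.tt.status']])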
Citing something from a fantastic response that I just commented on today. Maybe this will help:
import pandas as pd
r = session.get(search_url, auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL), verify=False)
data = r.json()
df = pd.DataFrame(data)
mask = df['assigneeIdentity'].apply(lambda x: '<your value to filter here>' in x)
df1 = df[mask] # The mask will return values that are True (i.e. - what you want)
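One caveat with the lambda: if the column contains missing values (NaN), the expression '<your value to filter here>' in x raises a TypeError. A slightly more defensive sketch stringifies the column first:

# stringify first so dicts and missing values can't break the substring test
mask = df['assigneeIdentity'].astype(str).str.contains('<your value to filter here>', regex=False)
df1 = df[mask]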
I have a list with multiple dictionaries inside it (as JSON). I have a list of values, and based on each value I want to fetch the JSON object for that particular value. For example:
[{'content_type': 'Press Release',
  'content_id': '1',
  'Author': 'John'},
 {'content_type': 'editorial',
  'content_id': '2',
  'Author': 'Harry'},
 {'content_type': 'Article',
  'content_id': '3',
  'Author': 'Paul'}]
I want to fetch the complete object where the author is Paul.
This is the code I have made so far.
import json

newJson = "testJsonNewInput.json"
ListForNewJson = []

def testComparision(newJson, oldJson):
    with open(newJson, mode='r') as fp_n:
        json_data_new = json.load(fp_n)
    for jData_new in json_data_new:
        ListForNewJson.append(jData_new['author'])
If any other information is required, please ask.
Case 1
One-time access
It is perfectly alright to read your data and iterate over it, returning the first match found.
def access(file, author):
    with open(file) as f:
        data = json.load(f)
    for d in data:
        if d['Author'] == author:
            return d
    else:
        return 'Not Found'
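For example, with the file from the question (hypothetical usage):

match = access('testJsonNewInput.json', 'Paul')
print(match)  # -> {'content_type': 'Article', 'content_id': '3', 'Author': 'Paul'}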
Case 2
Repeated access
In this instance, it would be wise to reshape your data in such a way that accessing objects by author names is much faster (think dictionaries!).
For example, one possible option would be:
with open(file) as f:
    data = json.load(f)

newData = {}
for d in data:
    newData[d['Author']] = d
Now, define a function and pass your pre-loaded data along with a list of author names.
def access(myData, author_list):
    for a in author_list:
        yield myData.get(a)
The function is called like this:
for i in access(newData, ['Paul', 'John', ...]):
    print(i)
Alternatively, store the results in a list r. The list(...) call is necessary because access() returns a generator object, which you must exhaust by iterating over it.
r = list(access(newData, [...]))
Why not do something like this? It should be fast, and you will not have to load the authors that won't be searched.
alreadyknown = {}

list_of_obj = [{'content_type': 'Press Release',
                'content_id': '1',
                'Author': 'John'},
               {'content_type': 'editorial',
                'content_id': '2',
                'Author': 'Harry'},
               {'content_type': 'Article',
                'content_id': '3',
                'Author': 'Paul'}]

def func(author):
    if author not in alreadyknown:
        obj = get_obj(author)
        alreadyknown[author] = obj
    return alreadyknown[author]

def get_obj(auth):
    # compare with == rather than "is": identity checks on strings are unreliable
    return [obj for obj in list_of_obj if obj['Author'] == auth]

print(func('Paul'))