I'm trying to parse some data as follows:
subject_data
{"72744387":{"retired":null,"Filename":"2021-07-18 23-16-26 frontlow.jpg"}}
{"72744485":{"retired":null,"Filename":"2021-07-21 07-39-57 frontlow.jpg"}}
{"72744339":{"retired":null,"Filename":"2021-07-17 04-55-03 frontlow.jpg"}}
I'd like to get the file name from all of this data, but without using that first number, since those numbers are randomized and there are a lot of them. So far I have:
classifications['subject_data_json'] = [json.loads(q) for q in classifications.subject_data]
data = classifications['subject_data_json']
print(data[3])
This prints {'72744471': {'retired': None, 'Filename': '2021-07-21 04-11-45 frontlow.jpg'}}
But I'd like to print just the Filename for each of the data sets. print(data[3]['Filename']) fails, and I'm not sure how to get the information without using the number.
I'd go with a nested comprehension:
print([v['Filename'] for d in data for v in d.values()])
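Applied to the three sample rows shown above, this would print:
['2021-07-18 23-16-26 frontlow.jpg', '2021-07-21 07-39-57 frontlow.jpg', '2021-07-17 04-55-03 frontlow.jpg']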
I'm working on a minimax algorithm project and I am trying to find a way to save board values in a text file so they don't need to be calculated over and over again each time the program is tested. I have the board stored as a nested dictionary.
rows = {
    4: {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0},
    3: {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0},
    2: {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0},
    1: {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0},
}
I tried doing this, which gives the desired result, but it is not at all optimized and I'm sure there is a better way to do it:
e = []
for key in rows:
    e.append(list(rows[key].values()))
e = str(e)
e = e.replace("[", "")
e = e.replace("]", "")
e = e.replace(" ", "")
e = e.replace(",", "")
print(e)
You could make use of str.join(); map is used to convert the integers to strings:
res = ''.join(''.join(map(str, r.values())) for r in rows.values())
print(res)
Out:
00000000000000000000000000000000
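Since the point is to reload these values later without recalculating them, here is a minimal sketch of the reverse step. It assumes the layout above (4 rows of 8 single-digit values); the parse_board helper is hypothetical, not part of the original answer:

# Rebuild the nested dict from the flat digit string.
# Assumes single-digit cell values and the row order produced above (4 down to 1).
def parse_board(s, n_rows=4, n_cols=8):
    return {
        n_rows - r: {c + 1: int(s[r * n_cols + c]) for c in range(n_cols)}
        for r in range(n_rows)
    }

print(parse_board(res) == rows)  # True for the all-zero board above

If the cell values can ever grow past one digit, a delimiter (or simply json.dump/json.load) would be a safer choice.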
Hi, I'm not an expert and this problem has kept me stuck for a long time; I hope someone here can help me.
I would like to extract the value "interestExpense" from the following JSON file:
{'incomeBeforeTax': 17780000000,
'minorityInterest': 103000000,
'netIncome': 17937000000,
'sellingGeneralAdministrative': 5918000000,
'grossProfit': 16507000000,
'ebit': 10589000000,
'endDate': 1640908800,
'operatingIncome': 10589000000,
'interestExpense': -1803000000,
'incomeTaxExpense': -130000000,
'totalRevenue': 136341000000,
'totalOperatingExpenses': 125752000000,
'costOfRevenue': 119834000000,
'totalOtherIncomeExpenseNet': 7191000000,
'netIncomeFromContinuingOps': 17910000000,
'netIncomeApplicableToCommonShares': 17937000000}
In this case the result should be -130000000 as a string, but I'm trying to find a way to create a list (or an array) with all those values so that I can decide which one to pick; I have no idea how to manipulate this kind of data (JSON).
For example
print(list[0])
should return 17780000000 (the value associated with incomeBeforeTax).
Is this actually possible?
The output is generated from this code:
import json
import re
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}  # 'headers' was used but never defined; minimal placeholder
annual_is_stms = []
url_financials = 'https://finance.yahoo.com/quote/{}/financials?p={}'
stock = 'F'
response = requests.get(url_financials.format(stock, stock), headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
pattern = re.compile(r'\s--\sData\s--\s')
script_data = soup.find('script', text=pattern).contents[0]
# script_data[:500] and script_data[-500:] were just REPL peeks at the blob
start = script_data.find("context") - 2
json_data = json.loads(script_data[start:-12])
json_data['context']['dispatcher']['stores']['QuoteSummaryStore'].keys()
# all data relative to financials
annual_is = json_data['context']['dispatcher']['stores']['QuoteSummaryStore']['incomeStatementHistory']['incomeStatementHistory']
for s in annual_is:
    statement = {}
    for key, val in s.items():
        try:
            statement[key] = val['raw']
        except (TypeError, KeyError):
            continue
    annual_is_stms.append(statement)
print(annual_is_stms[0])
If you are using Python, you need to import the json module and parse the string into an object:
import json
# some JSON:
x = '{ "name":"John", "age":30, "city":"New York"}'
# parse x:
y = json.loads(x)
# the result is a Python dictionary:
print(y["age"])
Ok, so the output snippet you posted comes from this line:
print(annual_is_stms[0])
If you now want the: -1803000000 you should do:
print(annual_is_stms[0]['interestExpense'])
If you want the: -130000000 you should do:
print(annual_is_stms[0]['incomeTaxExpense'])
and if you want the: 17780000000 you should do:
print(annual_is_stms[0]['incomeBeforeTax'])
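And if you want a given field for every year in the income-statement history, loop over the whole list (a small sketch; .get just avoids a KeyError for years missing the field):

for stmt in annual_is_stms:
    print(stmt.get('interestExpense'))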
Copy and paste this into Python.
data = {'incomeBeforeTax': 17780000000,
'minorityInterest': 103000000,
'netIncome': 17937000000,
'sellingGeneralAdministrative': 5918000000,
'grossProfit': 16507000000,
'ebit': 10589000000,
'endDate': 1640908800,
'operatingIncome': 10589000000,
'interestExpense': -1803000000,
'incomeTaxExpense': -130000000,
'totalRevenue': 136341000000,
'totalOperatingExpenses': 125752000000,
'costOfRevenue': 119834000000,
'totalOtherIncomeExpenseNet': 7191000000,
'netIncomeFromContinuingOps': 17910000000,
'netIncomeApplicableToCommonShares': 17937000000}
print(data['interestExpense'])
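If you also want positional access like list[0], as asked, you can materialize the dict's keys and values into real lists (dicts preserve insertion order in Python 3.7+; the names values and keys below are just illustrative):

values = list(data.values())
print(values[0])  # 17780000000, the value associated with incomeBeforeTax
keys = list(data.keys())
print(keys[0])    # 'incomeBeforeTax'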
My script cleans arrays of unwanted strings like "##$!" and other junk.
The script works as intended, but it is extremely slow when the Excel row count is large.
I tried to use numpy to see if it could speed things up, but I'm not too familiar with it, so I might be using it incorrectly.
xls = pd.ExcelFile(path)
df = xls.parse("Sheet2")
TeleNum = np.array(df['telephone'].values)

def replace(orignstr):  # removes the unwanted strings from numbers
    for elem in badstr:  # badstr: predefined list of unwanted substrings
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    return orignstr

for UncleanNum in tqdm(TeleNum):
    newnum = replace(str(UncleanNum))  # calling replace function
    df['telephone'] = df['telephone'].replace(UncleanNum, newnum)  # store string back in data frame
I also tried removing the method to see if that would help, just placing it as one block of code, but the speed remained the same.
for UncleanNum in tqdm(TeleNum):
    orignstr = str(UncleanNum)
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    print(orignstr)
    df['telephone'] = df['telephone'].replace(UncleanNum, orignstr)
TeleNum = np.array(df['telephone'].values)
The current speed of the script on an Excel file of 200,000 rows is around 70 it/s, and it takes around an hour to finish, which is not great since this is just one function of many.
I'm not too advanced in Python; I'm just learning as I script, so any pointers would be appreciated.
Edit:
Most of the array elements I'm dealing with are numbers, but some have strings in them. I'm trying to remove all of the string (non-digit) characters from the array elements.
Ex.
FD3459002912
*345*9002912$
If you are trying to clear everything that isn't a digit from the strings, you can use re.sub directly, like this:
import re
string = "FD3459002912"
regex_result = re.sub(r"\D", "", string)  # \D matches any non-digit character
print(regex_result)  # 3459002912
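Since the real bottleneck in the question is the per-row Python loop, here is a minimal sketch of the same cleanup done with pandas' vectorized string methods instead (assuming a reasonably recent pandas; path and the sheet name are taken from the question):

import pandas as pd

df = pd.read_excel(path, sheet_name="Sheet2")
# Strip every non-digit character from the whole column in one vectorized pass
df['telephone'] = df['telephone'].astype(str).str.replace(r'\D', '', regex=True)

This avoids calling .replace on the column once per row, which scans the whole column each time and is effectively quadratic in the number of rows.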
I use an API which gives me a JSON file structured like this:
{
  "offset": 0,
  "results": [
    {
      "source_link": "http://www.example.com/1",
      "source_link/_title": "Title example 1",
      "source_link/_source": "/1",
      "source_link/_text": "Title example 1"
    },
    {
      "source_link": "http://www.example.com/2",
      "source_link/_title": "Title example 2",
      "source_link/_source": "/2",
      "source_link/_text": "Title example 2"
    },
    ...
And I use this code in Python to extract the data I need:
import json
import urllib2
u = urllib2.urlopen('myapiurl')
z = json.load(u)
u.close()
link = z['results'][1]['source_link']
title = z['results'][1]['source_link/_title']
The problem is that to use it I have to know the number of the element from which I'm extracting the data. My results can have a different length every time, so what I want to do is count the number of elements in results first, so I can set up a loop to extract data from each element.
To check the length of the results key:
len(z["results"])
But if you're just looping over them, a for loop is perfect:
for result in z["results"]:
    print(result["source_link"])
You don't need to know the length of the result; you are fine with a for loop:
for result in z['results']:
    # process the results here
Anyway, if you want to know the length of 'results': len(z['results'])
If you want to get the length, you can try:
len(z['results'])
But in Python, what we usually do is:
for i in z['results']:
    # do whatever you like with `i`
Hope this helps.
You don't need, or likely want, to count them in order to loop over them. You could do:
import json
import urllib2
u = urllib2.urlopen('myapiurl')
z = json.load(u)
u.close()
for result in z['results']:
    link = result['source_link']
    title = result['source_link/_title']
    # do something with link/title
Or you could do:
u = urllib2.urlopen('myapiurl')
z = json.load(u)
u.close()
links = [result['source_link'] for result in z['results']]
titles = [result['source_link/_title'] for result in z['results']]
# do something with the links/titles lists
A few pointers:
No need to know the length of results to iterate it. You can use for result in z['results'].
Lists start from 0.
If you do need the index, take a look at enumerate (see the sketch below).
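For example, a minimal sketch pairing each result with its index via enumerate:

for i, result in enumerate(z['results']):
    print(i, result['source_link'])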
Use this command to print the number of results on the terminal:
print(len(z['results']))
I am looking at an xml file similar to the below:
<pinnacle_line_feed>
  <PinnacleFeedTime>1418929691920</PinnacleFeedTime>
  <lastContest>28962804</lastContest>
  <lastGame>162995589</lastGame>
  <events>
    <event>
      <event_datetimeGMT>2014-12-19 11:15</event_datetimeGMT>
      <gamenumber>422739932</gamenumber>
      <sporttype>Alpine Skiing</sporttype>
      <league>DH 145</league>
      <IsLive>No</IsLive>
      <participants>
        <participant>
          <participant_name>Kjetil Jansrud (NOR)</participant_name>
          <contestantnum>2001</contestantnum>
          <rotnum>2001</rotnum>
          <visiting_home_draw>Visiting</visiting_home_draw>
        </participant>
        <participant>
          <participant_name>The Field</participant_name>
          <contestantnum>2002</contestantnum>
          <rotnum>2002</rotnum>
          <visiting_home_draw>Home</visiting_home_draw>
        </participant>
      </participants>
      <periods>
        <period>
          <period_number>0</period_number>
          <period_description>Matchups</period_description>
          <periodcutoff_datetimeGMT>2014-12-19 11:15</periodcutoff_datetimeGMT>
          <period_status>I</period_status>
          <period_update>open</period_update>
          <spread_maximum>200</spread_maximum>
          <moneyline_maximum>100</moneyline_maximum>
          <total_maximum>200</total_maximum>
          <moneyline>
            <moneyline_visiting>116</moneyline_visiting>
            <moneyline_home>-136</moneyline_home>
          </moneyline>
        </period>
      </periods>
      <PinnacleFeedTime>1418929691920</PinnacleFeedTime>
    </event>
  </events>
</pinnacle_line_feed>
I have parsed the file with the code below:
import urllib
import xml.etree.ElementTree as ET

pinny_url = 'http://xml.pinnaclesports.com/pinnacleFeed.aspx?sportType=Basketball'
tree = ET.parse(urllib.urlopen(pinny_url))
root = tree.getroot()
list = []  # note: shadows the built-in list
for event in root.iter('event'):
    event_datetimeGMT = event.find('event_datetimeGMT').text
    gamenumber = event.find('gamenumber').text
    sporttype = event.find('sporttype').text
    league = event.find('league').text
    IsLive = event.find('IsLive').text
    for participants in event.iter('participants'):
        for participant in participants.iter('participant'):
            p1_name = participant.find('participant_name').text
            contestantnum = participant.find('contestantnum').text
            rotnum = participant.find('rotnum').text
            vhd = participant.find('visiting_home_draw').text
    for periods in event.iter('periods'):
        for period in periods.iter('period'):
            period_number = period.find('period_number').text
            desc = period.find('period_description').text
            pdatetime = period.find('periodcutoff_datetimeGMT')
            status = period.find('period_status').text
            update = period.find('period_update').text
            max = period.find('spread_maximum').text  # note: shadows the built-in max
            mlmax = period.find('moneyline_maximum').text
            tot_max = period.find('total_maximum').text
            for moneyline in period.iter('moneyline'):
                ml_vis = moneyline.find('moneyline_visiting').text
                ml_home = moneyline.find('moneyline_home').text
However, I am hoping to get the nodes separated by event, similar to a 2D table (as in a pandas dataframe). The full xml file has multiple "event" children, and some events do not share the same nodes as above. I am struggling mightily with taking each event node and simply creating a 2D table where the tag acts as the column name and the text acts as the value.
Up to this point, I have done the above to gauge how I might put that information into a dictionary and subsequently put a number of dictionaries into a list, from which I can create a dataframe using pandas. That has not worked out, as all attempts have required me to find and replace text to create the dictionaries, and Python has not responded well to that when attempting to subsequently create a dataframe. I have also used a simple:
for elt in tree.iter():
    list.append("'%s': '%s'" % (elt.tag, elt.text.strip()))
which worked quite well for simply pulling out every single tag and the corresponding text, but I was unable to make anything of that, because my attempts at finding and replacing the text to create dictionaries were no good.
Any assistance would be greatly appreciated.
Thank you.
Here's an easy way to get your XML into a pandas dataframe. This utilizes the awesome requests library (which you can switch for urllib if you'd like), as well as the always helpful xmltodict library, available on PyPI. (NOTE: a reverse library is also available, known as dicttoxml.)
import json
import pandas
import requests
import xmltodict
web_request = requests.get(u'http://xml.pinnaclesports.com/pinnacleFeed.aspx?sportType=Basketball')
# Make that unwieldy XML doc look like a native Dictionary!
result = xmltodict.parse(web_request.text)
# Next, convert the nested OrderedDict to a real dict, which isn't strictly necessary, but helps you
# visualize what the structure of the data looks like
normal_dict = json.loads(json.dumps(result.get('pinnacle_line_feed', {}).get(u'events', {}).get(u'event', [])))
# Now, make that dictionary into a dataframe
df = pandas.DataFrame.from_dict(normal_dict)
To get some idea of what this is starting to look like, here's the first couple of lines of the CSV:
>>> from StringIO import StringIO
>>> foo = StringIO() # A fake file to write to
>>> df.to_csv(foo) # Output the df to a CSV file
>>> foo.seek(0) # And rewind the file to the beginning
>>> print ''.join(foo.readlines()[:3])
,IsLive,event_datetimeGMT,gamenumber,league,participants,periods,sporttype
0,No,2015-01-10 23:00,426688683,Argentinian,"{u'participant': [{u'contestantnum': u'1071', u'rotnum': u'1071', u'visiting_home_draw': u'Home', u'participant_name': u'Obras Sanitarias'}, {u'contestantnum': u'1072', u'rotnum': u'1072', u'visiting_home_draw': u'Visiting', u'participant_name': u'Libertad'}]}",,Basketball
1,No,2015-01-06 23:00,426686588,Argentinian,"{u'participant': [{u'contestantnum': u'1079', u'rotnum': u'1079', u'visiting_home_draw': u'Home', u'participant_name': u'Boca Juniors'}, {u'contestantnum': u'1080', u'rotnum': u'1080', u'visiting_home_draw': u'Visiting', u'participant_name': u'Penarol'}]}","{u'period': {u'total_maximum': u'450', u'total': {u'total_points': u'152.5', u'under_adjust': u'-107', u'over_adjust': u'-103'}, u'spread_maximum': u'450', u'period_description': u'Game', u'moneyline_maximum': u'450', u'period_number': u'0', u'period_status': u'I', u'spread': {u'spread_visiting': u'3', u'spread_adjust_visiting': u'-102', u'spread_home': u'-3', u'spread_adjust_home': u'-108'}, u'periodcutoff_datetimeGMT': u'2015-01-06 23:00', u'moneyline': {u'moneyline_visiting': u'136', u'moneyline_home': u'-150'}, u'period_update': u'open'}}",Basketball
Notice that the participants and periods columns are still their native Python dictionaries. You'll either need to remove them from the columns list, or do some additional mangling to get them to flatten out:
# Remove the offending columns in this example by selecting particular columns to show
>>> from StringIO import StringIO
>>> foo = StringIO() # A fake file to write to
>>> df.to_csv(foo, cols=['IsLive', 'event_datetimeGMT', 'gamenumber', 'league', 'sporttype'])
>>> foo.seek(0) # And rewind the file to the beginning
>>> print ''.join(foo.readlines()[:3])
,IsLive,event_datetimeGMT,gamenumber,league,sporttype
0,No,2015-01-10 23:00,426688683,Argentinian,Basketball
1,No,2015-01-06 23:00,426686588,Argentinian,Basketball
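If you do want the nested participants flattened rather than dropped, here's a minimal sketch using pandas' json_normalize (an assumption on versions: in modern pandas it's pandas.json_normalize, in older releases pandas.io.json.json_normalize; it also assumes every event carries the participants structure shown above):

import pandas
# One row per participant, with selected parent-event fields repeated alongside
participants_df = pandas.json_normalize(
    normal_dict,
    record_path=['participants', 'participant'],
    meta=['gamenumber', 'league', 'sporttype'],
)
print(participants_df.head())

Events missing any of the meta fields would need errors='ignore'; events missing participants entirely would need to be filtered out first.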