I am working with XML data for a homework assignment, but I'm not sure how to tell whether the XML is valid. This is my code so far. I think I'm supposed to parse the XML, but I'm confused about how to validate it.
# XML data sets:
# http://aiweb.cs.washington.edu/research/projects/xmltk/xmldata/www/repository.html

# standard includes
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
%matplotlib inline

# import the lxml parser
from lxml import etree

dtd = etree.DTD('ebay.dtd')
ebay = etree.parse('ebay.xml')
lxml raises an lxml.etree.XMLSyntaxError exception when it's asked to parse malformed XML. So you could handle this like:
xml_fname = 'ebay.xml'
try:
    ebay = etree.parse(xml_fname)
except etree.XMLSyntaxError:
    print("Failed to parse invalid XML: " + xml_fname)
If you want to validate against a DTD:
xml = etree.XML(etree.tostring(ebay))
print(dtd.validate(xml))
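Putting the two checks together, a minimal sketch (assuming ebay.xml and ebay.dtd sit next to the notebook) could look like this; note the distinction between well-formedness (parsing succeeds) and validity (the DTD check passes):

from lxml import etree

xml_fname = 'ebay.xml'
dtd_fname = 'ebay.dtd'

try:
    # well-formedness: parse() raises XMLSyntaxError on malformed XML
    tree = etree.parse(xml_fname)
except etree.XMLSyntaxError as err:
    print("Not well-formed: " + str(err))
else:
    # validity: check the parsed tree against the DTD
    dtd = etree.DTD(dtd_fname)
    if dtd.validate(tree.getroot()):
        print(xml_fname + " is valid against " + dtd_fname)
    else:
        # the error log lists every DTD violation that was found
        print(dtd.error_log.filter_from_errors())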
import streamlit as st
import pandas as pd
import numpy as np
import requests
import tweepy
import config
import psycopg2
import psycopg2.extras
import plotly.graph_objects as go
auth = tweepy.OAuthHandler(config.TWITTER_CONSUMER_KEY,
                           config.TWITTER_CONSUMER_SECRET)
auth.set_access_token(config.TWITTER_ACCESS_TOKEN,
                      config.TWITTER_ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
The problem with my code: it doesn't run with the Twitter keys. The config module has no such attributes.
The config module that you are importing and reading from is not what you want. TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET are not constants that ship with a module; they are values you must supply yourself. Since your code reads them as attributes (config.TWITTER_CONSUMER_KEY), you need a config.py file next to your script that defines them as module-level constants, something like:

# config.py
TWITTER_CONSUMER_KEY = 'ENTER YOUR TWITTER CONSUMER KEY'
TWITTER_CONSUMER_SECRET = 'ENTER YOUR TWITTER CONSUMER SECRET'
TWITTER_ACCESS_TOKEN = 'ENTER YOUR TWITTER ACCESS TOKEN'
TWITTER_ACCESS_TOKEN_SECRET = 'ENTER YOUR TWITTER ACCESS TOKEN SECRET'
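If the keys are filled in, a quick sanity check (a sketch, assuming real credentials in config.py) is to ask Twitter who you are authenticated as:

# sanity check: verify_credentials() fails loudly if the four values are wrong
user = api.verify_credentials()
print("Authenticated as @" + user.screen_name)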
Take a look at this article for more help. Good luck!
I'm trying to get some stats from the NBA stats page, following the idea in this tutorial:
https://towardsdatascience.com/using-python-pandas-and-plotly-to-generate-nba-shot-charts-e28f873a99cb
The basic idea is to put the data into a CSV file. So I tried this code to get the data from the NBA site, fetching the JSON file and then converting it to CSV:
import requests
import json
import pandas as pd
from pandas import DataFrame as df
import urllib.request
shot_data_url_start="https://stats.nba.com/events/?flag=3&CFID=33&CFPARAMS=2017-18&PlayerID="
player_id="202695"
shot_data_url_end="&ContextMeasure=FGA&Season=2017-18&section=player&sct=plot"
def shoy_chart(player_id):
    full_url = shot_data_url_start + str(player_id) + shot_data_url_end
    json = requests.get(full_url, headers=headers).json()
    return(json)

data = json['resultSets'][0]['rowSets']
columns = json['resultSets'][0]['headers']

df = pd.DataFrame.from_records(data, columns=columns)
And this is the error the notebook shows me:
TypeError Traceback (most recent call last)
<ipython-input-42-a3452c3a4fc8> in <module>
18
19
---> 20 data = json['resultSets'][0]['rowSets']
21 columns = json['resultSets'][0]['headers']
22
TypeError: 'module' object is not subscriptable
Can anyone help me, or does anyone know another way to get the data into a .csv or Excel file?
When imported with import json, the name json refers to the JSON module of the Python standard library, so you cannot use it as a regular variable name. If you rename your variable to something else, such as response_json, this part of your code will work.
Regarding the rest of the code: the page https://stats.nba.com/events/ doesn't return any JSON text; it is a regular web page with images, menus, a video player, etc. If you want the API that returns the shots in JSON format, you will have to use https://stats.nba.com/stats/shotchartdetail (with the right query string). This API endpoint is mentioned in the tutorial, in the "Chrome XHR tab and resulting json linked by url" image.
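A minimal reproduction of the first problem makes the shadowing concrete:

import json

# subscripting the module itself raises the exact error from the traceback:
json['resultSets']   # TypeError: 'module' object is not subscriptable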
Ok I've changed the code like this:
import requests
import json
import pandas as pd
from pandas import DataFrame as df
import urllib.request
def shot_chart(player_id):
    full_url = "https://stats.nba.com/stats/shotchartdetail?AheadBehind=&CFID=33&CFPARAMS=2017-18&ClutchTime=&Conference=&ContextFilter=&ContextMeasure=FGA&DateFrom=&DateTo=&Division=&EndPeriod=10&EndRange=28800&GROUP_ID=&GameEventID=&GameID=&GameSegment=&GroupID=&GroupMode=&GroupQuantity=5&LastNGames=0&LeagueID=00&Location=&Month=0&OnOff=&OpponentTeamID=0&Outcome=&PORound=0&Period=0&PlayerID=202695&PlayerID1=&PlayerID2=&PlayerID3=&PlayerID4=&PlayerID5=&PlayerPosition=&PointDiff=&Position=&RangeType=0&RookieYear=&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StartPeriod=1&StartRange=0&StarterBench=&TeamID=0&VsConference=&VsDivision=&VsPlayerID1=&VsPlayerID2=&VsPlayerID3=&VsPlayerID4=&VsPlayerID5=&VsTeamID="
    response_json = requests.get(full_url, headers=headers)
    return(response_json)

data = response_json['resultSets'][0]['rowSets']
columns = response_json['resultSets'][0]['headers']

df = pd.DataFrame.from_records(data, columns=columns)
import requests
import json
import pandas as pd
from pandas import DataFrame as df
import urllib.request
shot_data_url_start="https://stats.nba.com/stats/shotchartdetail?AheadBehind=&CFID=33&CFPARAMS=2019-20&ClutchTime=&Conference=&ContextFilter=&ContextMeasure=FGA&DateFrom=&DateTo=&Division=&EndPeriod=10&EndRange=28800&GROUP_ID=&GameEventID=&GameID=&GameSegment=&GroupID=&GroupMode=&GroupQuantity=5&LastNGames=0&LeagueID=00&Location=&Month=0&OnOff=&OpponentTeamID=0&Outcome=&PORound=0&Period=0&PlayerID="
player_id="202330"
shot_data_url_end="&PlayerID1=&PlayerID2=&PlayerID3=&PlayerID4=&PlayerID5=&PlayerPosition=&PointDiff=&Position=&RangeType=0&RookieYear=&Season=2019-20&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StartPeriod=1&StartRange=0&StarterBench=&TeamID=0&VsConference=&VsDivision=&VsPlayerID1=&VsPlayerID2=&VsPlayerID3=&VsPlayerID4=&VsPlayerID5=&VsTeamID="
def shot_chart(player_id):
    full_url = shot_data_url_start + str(player_id) + shot_data_url_end
    response_json = requests.get(full_url).json()
    return(response_json)

data = response_json['resultSets'][0]['rowSets']
columns = response_json['resultSets'][0]['headers']

df = pd.DataFrame.from_records(data, columns=columns)

shot_chart("202330")
What is going on now? The notebook just gets stuck.
Try this out:
import requests
import pandas as pd

shot_data_url_start = "https://stats.nba.com/stats/shotchartdetail?AheadBehind=&CFID=33&CFPARAMS=2017-18&ClutchTime=&Conference=&ContextFilter=&ContextMeasure=FGA&DateFrom=&DateTo=&Division=&EndPeriod=10&EndRange=28800&GROUP_ID=&GameEventID=&GameID=&GameSegment=&GroupID=&GroupMode=&GroupQuantity=5&LastNGames=0&LeagueID=00&Location=&Month=0&OnOff=&OpponentTeamID=0&Outcome=&PORound=0&Period=0&PlayerID="
player_id = "204001"
shot_data_url_end = "&PlayerID1=&PlayerID2=&PlayerID3=&PlayerID4=&PlayerID5=&PlayerPosition=&PointDiff=&Position=&RangeType=0&RookieYear=&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StartPeriod=1&StartRange=0&StarterBench=&TeamID=0&VsConference=&VsDivision=&VsPlayerID1=&VsPlayerID2=&VsPlayerID3=&VsPlayerID4=&VsPlayerID5=&VsTeamID="

def get_shot_data(player_id):
    full_url = shot_data_url_start + player_id + shot_data_url_end
    # stats.nba.com won't answer without a User-Agent header (see edit below)
    data = requests.get(
        full_url,
        headers={
            "User-Agent": "PostmanRuntime/7.4.0"
        }
    )
    return data.json()

shot_results = get_shot_data(player_id)
result_sets = shot_results['resultSets']
first_result_set = result_sets[0]
row_set = first_result_set['rowSet']
set_headers = first_result_set['headers']

df = pd.DataFrame.from_records(row_set, columns=set_headers)
I see how you got confused by that Medium post. You were missing the headers, and the URL for the NBA API wasn't right; that's what @pierre was saying in his answer. If you reread the post you were following, you'll see the author had to dig into the browser dev tools to find the actual URL to use to grab the JSON.
Edit: Forgot to mention that when I didn't pass a User-Agent in the headers, the request would time out. If you don't pass it in, you won't get a successful response.
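From there, writing the data to the .csv file the question originally asked about is a single pandas call (shots.csv is just an arbitrary file name):

# export the shot chart rows; df.to_excel('shots.xlsx') works too if openpyxl is installed
df.to_csv('shots.csv', index=False)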
I have a script that pulls a JSON file from the City of Chicago and publishes it to Pub/Sub. Once the data is in Pub/Sub, a Dataflow template pulls it into Google BigQuery. The final move into BQ is failing, and when I print the output in the script I get a u' in front of all the fields, which I believe is breaking the field match. Has anyone else had this issue? Does anyone know what is wrong with my code and how to remove the u'? I have tried multiple fixes, but none of them have worked. A sample output is listed below:
(u'_last_updt', u'2010-07-21 14:50:53.0'), (u'_length', u'0.69'), (u'_lif_lat', u'41.985032613'),
My code is listed below:
from __future__ import unicode_literals
from sodapy import Socrata
import json
from io import StringIO
from google.oauth2 import service_account
from oauth2client.client import GoogleCredentials
from google.cloud import pubsub_v1
import time
import datetime
import urllib
import urllib.request
import argparse
import base64
credentials = GoogleCredentials.get_application_default()

# change project to your Project ID
project = "xxxx"
# change topic to your PubSub topic name
topic = "xxxx"

res = urllib.request.urlopen('https://data.cityofchicago.org/resource/8v9j-bter.json')
res_body = res.read()
traffic = json.loads(res_body)

# one client is enough; topic_path() builds the 'projects/<project>/topics/<topic>' string
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project, topic)

for record in traffic:
    publisher.publish(topic_path, str.encode(str(record)))
    print(record.items())
You are describing normal Python behavior: in Python 2, Unicode strings are displayed with a u' prefix.
See:
What's the u prefix in a Python string?
For example, building a list containing a "normal" string and a Unicode one:
>>> [u'a', 'a']
[u'a', 'a']
Don't worry too much about it, they are identical strings:
>>> u'a' == 'a'
True
Now when you say "I am getting a u' in front of all the fields which I believe is messing up the field match." Where do you see this? Is this part of the Python code, or do you see these u' show up on the BigQuery web UI too?
Looking at the code you posted, it seems to be forcing all strings to be Unicode with from __future__ import unicode_literals:
>>> "a"
'a'
>>> from __future__ import unicode_literals
>>> "a"
u'a'
https://python-future.org/unicode_literals.html
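If the u' repr is actually reaching BigQuery, the likely culprit is str(record) in the publish loop: calling str() on a dict produces a Python repr (u'' prefixes included), not JSON. A sketch of a safer loop, reusing the traffic, publisher, and topic_path names from your script, serializes each record with json.dumps() instead:

import json

for record in traffic:
    # json.dumps() emits valid JSON, with no Python repr artifacts like u''
    payload = json.dumps(record).encode('utf-8')
    publisher.publish(topic_path, payload)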
I am running this query from Python, and I get the error "Invalid string literal" in the regular-expression part. I know what the regex does; I'm just not sure what syntax is missing here. Any help would be appreciated.
import pandas as pd
import numpy as np
import datetime
from time import gmtime, strftime
import smtplib
import sys
from IPython.core.display import HTML
PROJECT = 'server'
queryString = '''
SELECT
mn as mName,
dt as DateTime,
ip as LocalIPAddress,
REGEXP_EXTRACT(path, 'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName,
FROM Agent.Logs
'''
You're missing an r:
REGEXP_EXTRACT(path, r'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName
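For context: the r prefix here is BigQuery's raw-string literal syntax, which stops the SQL parser from treating the backslashes in \s, \S, and so on as escape sequences; that is what the "Invalid string literal" error is pointing at. It also doesn't hurt to make the enclosing Python string raw, so Python passes the backslashes through untouched. A sketch of the corrected query string:

queryString = r'''
SELECT
  mn AS mName,
  dt AS DateTime,
  ip AS LocalIPAddress,
  REGEXP_EXTRACT(path, r'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName
FROM Agent.Logs
'''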
In Python/Django, I need to parse and objectify an .xml file according to a given XML Schema made of three .xsd files that refer to each other in this way:
schema3.xsd (referring schema1.xsd)
schema2.xsd (referring schema1.xsd)
schema1.xsd (referring schema2.xsd)
(diagram: the XML schema import relationships)
For this I'm using the following piece of code, which I've already tested successfully with a couple of xml/xsd files (where the .xsd is "standalone", not referring to other .xsd files):
import os.path
from django.shortcuts import render
from lxml import etree, objectify
from lxml.etree import XMLSyntaxError

def xml_validator(request):
    # define paths of files
    path_file_xml = '../myxmlfile.xml'
    path_file_xsd = '../schema3.xsd'

    # get XML file
    with open(path_file_xml, 'r') as xml_file:
        xml_string = xml_file.read()

    # get XML Schema
    doc = etree.parse(path_file_xsd)
    schema = etree.XMLSchema(doc)

    # define parser
    parser = objectify.makeparser(schema=schema)

    # transform XML file
    root = objectify.fromstring(xml_string, parser)
    test1 = root.tag

    return render(request, 'risultati.html', {'risultato': test1})
Unfortunately, I'm stuck with the following error, which I get with the multiple .xsd files described above:
complex type 'ObjectType': The content model is not determinist.

Request Method: GET
Request URL: http://127.0.0.1:8000/xml_validator
Django Version: 1.9.1
Exception Type: XMLSchemaParseError
Exception Value: complex type 'ObjectType': The content model is not determinist., line 80
Any idea about that? Thanks a lot in advance for any suggestions or useful tips for approaching this problem. Cheers!
Update 23/03/2016
Here (and in the answers that follow the post, because it exceeds the maximum number of characters for a single post) is a sample of the files, to illustrate the problem:
sample files on GitHub
My best guess would be that your XSD model does not obey the Unique Particle Attribution rule: while validating, the processor must always be able to decide which element declaration matches the next incoming element without looking ahead, so a content model such as an optional element a followed by another particle that can also start with a is ambiguous and gets rejected (libxml2 reports this as "not determinist"). I would rule that out before looking at anything else.
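If that's the cause, you don't need Django in the loop to see it; a small sketch that loads the schema directly surfaces libxml2's full complaint, including the .xsd file and line of the offending construct:

from lxml import etree

try:
    schema_doc = etree.parse('../schema3.xsd')   # same path as in the question
    schema = etree.XMLSchema(schema_doc)
except etree.XMLSchemaParseError as exc:
    # the error log names the file and line of each violation
    print(exc.error_log)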