I'm using Python 3.x and the requests library.
Is there a way to get a specific value by passing the full path within a JSON file, without first downloading the whole JSON?
I have the following code to get weather data:
import requests

data = requests.get('http://weatherdata.com/example.json')
current_wind = data.json()['features'][48]['properties']['value']
Is there a way to request the current wind value directly, somehow like the following? That could reduce the traffic caused by the request.
import requests

# invented syntax, just to illustrate what I'm after
current_wind = requests.get('http://weatherdata.com/example.json', ['features'][48]['properties']['value'])
I was looking for this specific question for quite a while... almost gave up.
Thanks for your answers.
Please be more precise in the question and/or give examples.
But I think you mean something like this:
a = requests.get('http://weatherdata.com/example.json')
current_wind = a.json()['features'][48]['properties']['value']
You have to assign the request's response to a variable first, and then convert it to JSON.
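As for avoiding the full download: a plain HTTP GET always transfers the entire resource, so the indexing above happens client-side and does not save any traffic. Fetching only one value works only if the API itself supports server-side filtering. A minimal sketch, assuming a hypothetical filter query parameter (the parameter name and URL are made up; check the API's documentation for what it actually supports):

import requests

# hypothetical: only works if the weather API supports server-side filtering
response = requests.get(
    'http://weatherdata.com/example.json',
    params={'filter': 'features.48.properties.value'},  # made-up parameter
)
current_wind = response.json()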
I'm trying to use vcrpy to record and replay requests.
The issue is that those requests happen in a different process, and it looks like vcrpy doesn't work across processes.
The code looks something like this:
import subprocess

import vcr

with vcr.use_cassette("path/to/cassette"):
    A = subprocess.Popen("etc etc - a request happens here")
    A.wait()
When I run it, the cassette does not get created.
Is there a way for me to use vcrpy to record those requests and save them?
Is there perhaps a different approach I can use for this?
Thanks
I want to query Elasticsearch and print all results for the query. The default maximum is 10,000 hits, but I'd like to raise that limit much higher. I'm working with Python.
I'm using elasticsearch.helpers.scan. It seems to work, but then in the middle of printing the results I get this error:
elasticsearch.helpers.ScanError: Scroll request has only succeeded on 66 shards out of 80.
I'm not sure what this means at all; could someone please explain it and provide a solution?
Also, if there's a better/easier module/API to use than elasticsearch.helpers.scan, please let me know!
Thanks!
Pass raise_on_error=False to the scan function:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch()

# 'query' is your query body dict
res = scan(es, query=query, scroll='50m', size=1000, raise_on_error=False)

This fixed it for me. Keep in mind it only suppresses the error: the scroll still failed on some shards, so hits from those shards may be missing from the results.
What might indeed help you find out more about the reason for the exception is quite simple: turn on DEBUG logging for the Elasticsearch Python modules you're using:

import logging

from elasticsearch import logger as elasticsearch_logger

# without a handler the DEBUG records go nowhere, hence basicConfig()
logging.basicConfig(level=logging.DEBUG)
elasticsearch_logger.setLevel(logging.DEBUG)

and then check the logs around your scan() call.
I've been scouring the internet for hours now and I'm stumped. Pandas has a method dumps (accessible via pandas.json.dumps) that can encode any arbitrary object to a JSON string, whereas the builtin json.dumps would normally just throw an exception.
I've been looking through the source code trying to find the implementation of this function, but I can't find it. Does anyone know where the implementation is, or have an idea of how this works?
A search through the pandas GitHub repository shows that pandas.json.dumps appears to be implemented by the objToJSON function defined in pandas/src/ujson/python/objToJSON.c.
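To see the difference in behaviour, here's a small comparison. Note that pandas.json was never a documented API (newer pandas releases have removed it), but the same ujson-based C encoder backs the public DataFrame.to_json:

import json

import pandas as pd

df = pd.DataFrame({'when': pd.to_datetime(['2020-01-01'])})

# the stdlib encoder rejects types it does not know about
try:
    json.dumps({'when': df['when'][0]})
except TypeError as exc:
    print('json.dumps failed:', exc)

# the ujson-based C encoder behind to_json() handles pandas types natively,
# serializing the timestamp as epoch milliseconds by default
print(df.to_json())  # {"when":{"0":1577836800000}}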
import feedparser

# 'd' on the right-hand side must come from a previous parse call
d = feedparser.parse('http://rss.cnn.com/rss/edition.rss', etag=d.etag)
I am new to Python and can't get my head around the parameter etag=d.etag.
I don't understand its data type, which matters because I am trying to build this parameter as a string dynamically, and it does not work. I printed type(d.etag); the result is unicode. So I tried the unicode() function to build my string; still no luck.

Sorry, I realise this is very basic, I just can't get it. I know getting the etag to work is easy if you follow the examples from the feedparser site, where you make your first call without the parameter, then use etag=d.etag on each subsequent call. I am mainly learning on my iPad and am using Pythonista, so I am running my program over and over. I also know I could write the feed out to a file and parse the file instead, but I really want to understand why I can't dynamically create this parameter. I am sure I will hit the same problem with another module sooner or later.
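For reference, a minimal sketch of the workflow described in the feedparser docs: the etag is just a string the server sent with a previous response, so it can be stored and rebuilt like any other string:

import feedparser

# first call: no etag yet, so the full feed is downloaded
d = feedparser.parse('http://rss.cnn.com/rss/edition.rss')
saved_etag = d.etag  # a plain (unicode) string, present only if the server sent one

# later call: pass the saved string back; the server replies 304 if nothing changed
d2 = feedparser.parse('http://rss.cnn.com/rss/edition.rss', etag=saved_etag)
print(d2.status)  # 304 means "not modified"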
I'm trying to parse this XML.
I want to get a list of all of the mechanisms, so I'm trying to use XPath to get them (please suggest if there's an easier way)...
Here is my code:
import libxml2

# 'doc' holds the XML string shown above
parseMessage = libxml2.parseDoc(doc)
xpathcon = parseMessage.xpathNewContext()
xpathcon.xpathRegisterNs('urn', 'http://etherx.jabber.org/streams')
nodes = xpathcon.xpathEval("//urn:text()")
print(nodes)
And here is the error I'm getting...
Entity: line 1: parser error : Premature end of data in tag stream line 1
h"/><register xmlns="http://jabber.org/features/iq-register"/></stream:features>
I know that my code doesn't yet extract all the mechanisms, but first I'd just like to get past the issue at hand. Is there any way to turn this into correct XML that can be parsed? Do I need to add or remove a header, or do something else?
It looks like you're trying to build an XMPP library. Why not use an existing library, such as SleekXMPP?
If you really need to build your own XMPP library, you'll need to use a streaming parser, such as Expat.
Please use one of the existing XMPP libraries.
Next: you're not going to be successful with XMPP if you think of it as a document. You'll be able to hack around it for a few days, convincing yourself that you're on to something, and then you'll realize that there is no way to tell when the server is done sending you data, so there's no way to know when to call what you have a document.
Instead, use a stream-based parser. SleekXMPP uses xml.etree.cElementTree.iterparse with a wrapper around the socket to make it smell like a file. There are likely other ways, such as using xml.parsers.expat directly.
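To make the streaming idea concrete, here is a minimal sketch using the stdlib pull parser (a close cousin of the iterparse approach above). The fragment below is a made-up features stanza; in a real client you would feed bytes straight off the socket as they arrive:

import xml.etree.ElementTree as ET

parser = ET.XMLPullParser(events=('end',))
parser.feed(
    "<stream:stream xmlns:stream='http://etherx.jabber.org/streams'>"
    "<stream:features>"
    "<mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'>"
    "<mechanism>PLAIN</mechanism>"
    "<mechanism>DIGEST-MD5</mechanism>"
    "</mechanisms>"
    "</stream:features>"
)

# 'end' events fire as each element closes, so the mechanisms are usable
# even though the outer <stream:stream> element never closes
for event, elem in parser.read_events():
    if elem.tag == '{urn:ietf:params:xml:ns:xmpp-sasl}mechanism':
        print(elem.text)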