To keep things simple, here is exactly what I did to reproduce the issue.
When you go to your own GitHub repository's traffic insights URL, say https://github.com/yourusername/yournicerepo/graphs/traffic, the page fetches JSON-formatted historical traffic data, like the example below, via a GET request to /yourusername/yournicerepo/graphs/traffic-data. I found this by inspecting the Network tab in Chrome's dev tools. Since I can get the data with the dev tools, I don't see why I can't capture the same data with mitmproxy. So I learned about mitmproxy, wrote a script _script.py (shown below), and ran it with ./mitmproxy --listen-port 8085 --scripts _script.py.
The script should print the JSON, but it actually printed b'', and the HTTP GET entry in mitmproxy's log also showed [no content].
What is going on? Am I missing something?
(example json response)
{"counts":[{"bucket":1625529600,"total":0,"unique":0},{"bucket":1625443200,"total":1,"unique":1},{"bucket":1625356800,"total":6,"unique":4}, ..
(_script.py)
def response(flow):
    if "graphs/traffic-data" in flow.request.url:
        print("flow.response.content")
        print(flow.response.content)
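In case it matters, a diagnostic variant of the script that also logs the status code and headers (a sketch; status_code, headers, and get_text() are from mitmproxy's flow API, and I haven't confirmed this explains the empty body) would look like this:

def response(flow):
    if "graphs/traffic-data" in flow.request.url:
        # A 304 or a zero Content-Length would explain an empty body.
        print(flow.response.status_code)
        print(dict(flow.response.headers))
        # get_text() decodes the body according to its Content-Encoding, if any.
        print(flow.response.get_text())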
I have been trying to download past broadcasts for a streamer on Twitch using Python. I found this Python code online:
https://gist.github.com/baderj/8340312
However, when I try to call the functions I get errors with a status 400 message.
I'm not sure whether this is the right code for downloading the video (as an MP4), or how to use it properly.
By video I mean something like this, as an example: www(dot)twitch.tv/imaqtpie/v/108909385 (note: I can't put more than 3 links since I don't have 10 reputation).
Any tips on how I should go about doing this?
Here's an example of running it in cmd:
python twitch_past_broadcast_downloader.py 108909385
After running it, it gave me this:
Exception API returned 400
This is where I got the information on running it:
https://www.johannesbader.ch/2014/01/find-video-url-of-twitch-tv-live-streams-or-past-broadcasts/
Huh, it's not as easy as it seems... The code you found in this gist is quite old and Twitch has completely changed its API. You now need a Client ID to download videos, so that Twitch can limit how much video you're downloading.
If you want to fix this gist, here are the steps:
Register an application: everything is explained here! Register your app and keep your client ID close.
Change the API route: it is no longer '{base}/api/videos/a{id_}' but '{base}/kraken/videos/{id_}' (not sure about the latter). You will need to change it in the Python code. The docs are here.
Add the client ID to the request: as the docs say, you need to send a header with every request you make, so add a Client-ID: <client_id> header (sketched just below).
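A minimal sketch of steps 2 and 3 using the requests library (the route follows step 2, which I'm not 100% sure about, and the client ID value is a placeholder):

import requests

client_id = "<your-client-id>"   # from the application registered in step 1
video_id = "108909385"           # the example video ID from the question

url = "https://api.twitch.tv/kraken/videos/{}".format(video_id)
response = requests.get(url, headers={"Client-ID": client_id})
print(response.status_code)
print(response.json())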
And now I think you will need to start debugging a bit, because it is old code :/
I will try it myself and edit this answer when I'm done, but give it a go yourself :)
See ya!
EDIT: Hmm... it doesn't seem to be possible to download a video through the API after all :/ I thought only the API links had changed, but the chunks section of the response from the video URL has disappeared, and Twitch no longer gives access to raw videos :/
Really sorry I told you to do that; even with the API, I think it's no longer possible :/
You can download past broadcasts of Twitch videos with the Python library streamlink.
You will need an OAuth token, which you can generate with the command
streamlink --twitch-oauth-authenticate
Download VODs with:
streamlink --twitch-oauth-token <your-oauth-token> https://www.twitch.tv/videos/<VideoID> best -o <your-output-folder>
I am currently pulling public statistical series data from https://www3.bcb.gov.br/expectativas/publico/en/serieestatisticas
This is a public page that I believe uses Apache Wicket.
I'm usually fine with scraping, whether GET or POST, but here my colleagues and I are stuck. Can anyone help me understand which URL needs to be used to actually make the request? Here's what I've got so far:
The form with inputs:
The Fiddler capture of the manually executed request:
Text View:
form19_hf_0=&indicador=0&calculo=0&linhaPeriodicidade%3Aperiodicidade=0&tfDataInicial=11%2F10%2F2015&tfDataFinal=11%2F24%2F2015&divPeriodoRefereEstatisticas%3AgrupoAnoReferencia%3AanoReferenciaInicial=16&divPeriodoRefereEstatisticas%3AgrupoAnoReferencia%3AanoReferenciaFinal=16&btnCSV=Generate+CSV
Form data I'm passing in the request:
Summary:
I need some help: I can't seem to get the POST working correctly. It takes me to a different page, and I'm not sure how to work through this one.
NB: I'm trying to grab back a CSV.
The libraries I'm using are primarily Requests (I was going to use lxml, but I don't think it's going to be applicable here).
I've been trying to figure out the right form with Postman and Fiddler to understand what the request needs to be.
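For reference, this is roughly the kind of request I've been attempting (a sketch; the POST URL here is just the page URL, which may well be part of the problem, since the real Wicket target seems to be something else):

import requests

# Field names and values come from the Fiddler capture above.
payload = {
    "form19_hf_0": "",
    "indicador": "0",
    "calculo": "0",
    "linhaPeriodicidade:periodicidade": "0",
    "tfDataInicial": "11/10/2015",
    "tfDataFinal": "11/24/2015",
    "divPeriodoRefereEstatisticas:grupoAnoReferencia:anoReferenciaInicial": "16",
    "divPeriodoRefereEstatisticas:grupoAnoReferencia:anoReferenciaFinal": "16",
    "btnCSV": "Generate CSV",
}

s = requests.Session()
# Load the page first so the session has whatever cookies Wicket sets.
s.get("https://www3.bcb.gov.br/expectativas/publico/en/serieestatisticas")
r = s.post("https://www3.bcb.gov.br/expectativas/publico/en/serieestatisticas", data=payload)
print(r.status_code, r.headers.get("Content-Type"))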
So,
The solution to this was somewhat indirect. We were not able to do a straight POST because the page increments the actual POST URL in a way that was generally impossible to predict.
The solution we used was to install the Selenium WebDriver and use it to select the dropdown values by their visible text and simulate the button clicks.
This worked out very cleanly.
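Roughly, the approach looked like this (a sketch; the element names are taken from the captured form data and the option text is illustrative, so they may not match the page exactly):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get("https://www3.bcb.gov.br/expectativas/publico/en/serieestatisticas")

# Pick dropdown options by their visible text rather than guessing the Wicket POST target.
Select(driver.find_element(By.NAME, "indicador")).select_by_visible_text("Exchange rate")
driver.find_element(By.NAME, "tfDataInicial").send_keys("11/10/2015")
driver.find_element(By.NAME, "tfDataFinal").send_keys("11/24/2015")

# Clicking the button lets the browser resolve the form target, and the CSV
# comes back through the normal download mechanism.
driver.find_element(By.NAME, "btnCSV").click()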
Thanks and HTH anyone else who might have a similar problem.
Using OOTB Alfresco 5 Community Edition running on Ubuntu 14.04.
Steps:
Create a site through the Share UI.
Copy the request as cURL from the Chromium developer tools.
Reconstruct the request with the Python requests library as:
s = requests.post('http://<IP>:8080/share/service/modules/create-site', data=site_data, cookies=THE_cookie)
Where THE_cookie was obtained via a POST to http://<IP>:8080/share/page/dologin (which gave a 200), and site_data uses different names from the site created through the Share UI.
That request gave a 500 error stating that
u'freemarker.core.InvalidReferenceException: The following has evaluated to null or missing:\n==> success [in template "org/alfresco/modules/create-site.post.json.ftl" at line 2, column 17]\n\nTip: If the failing expression is known to be legally null/missing, either specify a default value with myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthessis: (myOptionVar.foo)!myDefault, (myOptionVar.foo)??\n\nThe failing instruction:\n==> ${success?string} [in template "org/alfresco/modules/create-site.post.json.ftl" at line 2, column 15]', ...
When doing the same thing in Chromium there is no such error, and the site is created successfully.
I also haven't gotten the curl request to work from the command line -- it needs the CSRF token removed, and then it gives a 200 and does nothing; no logs. My understanding is that Alfresco always returns a 200 on a successful request, regardless of whether it's a GET or a POST.
If anyone has any ideas, that would be amazing. There doesn't seem to be anything we can do to get create-site to work outside of the Share UI, but we absolutely need it to.
Since the script is expecting JSON, you need to set the HTTP header "Content-Type: application/json".
Have a look at Requests Session objects, which are designed to persist session cookies across requests (as a browser does). You can try an approach similar to this:
import requests

s = requests.session()
s.post('http://<IP>:8080/share/page/dologin', data=login_data)
# json= serialises site_data and sets the Content-Type: application/json header mentioned above
r = s.post('http://<IP>:8080/share/service/modules/create-site', json=site_data)
I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. In general, is there a way to tell whether a URL links to a PDF/DOC etc. file when it doesn't do so explicitly (e.g. www.domain.com/file.pdf)? And is there a way to get Python to grab that file?
Edit:
Thanks for the replies, several of which suggest downloading the file to see whether it's of the correct type. The only problem is... I don't know how to do that (see question #2 above). urlretrieve(<above url>) gives only an HTML file with an href containing that same URL.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
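For example, with urllib2 (the question is on Python 2.6) you can make a HEAD request by overriding the request method; this is just a sketch using the URL from the question:

import urllib2

class HeadRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"

url = ("http://www.oecd.org/officialdocuments/displaydocument/"
       "?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En")
response = urllib2.urlopen(HeadRequest(url))
print(response.info().gettype())   # 'application/pdf' if the server is honest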
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects that you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on redirecting.
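If the cookie theory is right, something along these lines should work (an untested sketch with urllib2 and cookielib; the output filename is arbitrary):

import cookielib
import urllib2

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]   # in case the server also checks this

url = ("http://www.oecd.org/officialdocuments/displaydocument/"
       "?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En")
data = opener.open(url).read()
with open('document.pdf', 'wb') as handle:
    handle.write(data)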
As has been said, there is no way to tell the content type from the URL. But if you don't mind fetching the headers for every URL, you can do this:
import urllib

obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a PDF file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't do better than that.
Also, you should use MIME types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type with the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns as its Content-Type header. If it's well behaved, it will return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't tell from the URL directly. You could fetch only the headers of the HTTP response and look at the Content-Type header. However, you have to trust the server on this: it could respond with a wrong Content-Type header that doesn't match the data in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that may have no extension or a fake extension, you can use python-magic. Install it with
pip3 install python-magic
On Mac OS X, you should also install libmagic with
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
response = urlopen(Request(url))

# mime=True returns a MIME type such as 'application/pdf' instead of a textual
# description; the first couple of KB of the body is enough for detection.
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)
I'm trying to upload a PDF file to a website using Hot Banana's content management system using a Python script. I've successfully logged into the site and can log out, but I can't seem to get file uploads to work.
The file upload is part of a large complicated web form that submits the form data and PDF file though a POST. Using Firefox along with the Firebug and Tamper Data extensions I took a peek at what the browser was sending in the POST and where it was going. I believe I mimicked the data the browser was sending in the code, but I'm still having trouble.
I'm importing cookielib to handle cookies, poster to encode the PDF, and urllib and urllib2 to build the request and send it to the URL.
Is it possible that registering the poster openers is clobbering the cookie processor openers? Am I doing this completely wrong?
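If that's what's happening, one fix would be to keep everything in a single opener, so that poster's streaming handlers and the cookie processor don't replace each other. A sketch (the upload URL, form field names, and filename are placeholders, not the CMS's real ones):

import cookielib
import urllib2

from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

# register_openers() installs poster's streaming handlers and returns the opener;
# adding the cookie processor to that same opener keeps the login cookies.
opener = register_openers()
opener.add_handler(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))

datagen, headers = multipart_encode({'file': open('report.pdf', 'rb'),
                                     'someField': 'some value'})
request = urllib2.Request('http://example.com/upload.cfm', datagen, headers)
print(urllib2.urlopen(request).read())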
Edit: What's a good way to debug this process? At the moment, I'm just dumping the urllib2 response to a text file and examining the output to see whether it matches what I get when I do the file upload manually.
Edit 2: Chris Lively suggested I post the error I'm getting. The response from urllib2 doesn't generate an exception, but just returns:
<script>
if (parent != window) {
parent.document.location.reload();
} else {
parent.document.location = 'login.cfm';
}
</script>
I'll keep at it.
A tool like Wireshark will give you a more complete trace at a much lower level than the Firefox plugins.
Often the problem is something as simple as not setting the Content-Type correctly, or failing to include Content-Length.
"What's a good way to debug [a web services] process?"
At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Correct. That's about all there is.
HTTP is a very simple protocol -- you make a request (POST, in this case) and the server responds. Not much else involved and not much more you can do while debugging.
What else would you like? Seriously. What kind of debugger are you imagining might exist for this kind of stateless protocol?
You might be better off instrumenting the server to see why this is failing, rather than trying to debug this on the client side.