The process I'm performing seems to make logical sense to me, but I keep getting an error. I have a binary file I'm trying to send to a server (Shapeways, to be exact; it's a binary 3D model file), so I go through this process to make it acceptable in a URL:
import base64
import urllib

theFile = open(fileloc, 'rb')
contents = theFile.read()
b64 = base64.urlsafe_b64encode(contents)
url = urllib.urlencode(b64)  # error
The problem is the last line always throws the error
TypeError: not a valid non-string sequence or mapping object
Which doesn't make sense to me, as the data is supposed to be encoded for URLs. Is it possible it simply contains other characters that weren't encoded, or something like that?
urllib.urlencode converts a sequence of two-element tuples or a dictionary into a URL query string (that is basically an excerpt from the docstring), but you are passing just a string as the argument.
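For illustration, here is what urlencode actually expects and produces (a throwaway snippet, not from the question; Python 2 urllib as in the question):

import urllib  # Python 2, as in the question

print(urllib.urlencode({'shape': 'abc+/='}))  # shape=abc%2B%2F%3D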
You can try something like this:
theFile = open(fileloc,'rb')
contents = theFile.read()
b64 = base64.urlsafe_b64encode(contents)
url = urllib.urlencode({'shape': b64})
but all you get inside the url variable is the encoded parameters, so you still need the actual URL. If you don't need low-level operations, it is better to use the requests library:
import requests
import base64
url = 'http://example.com'
r = requests.post(
    url=url,
    data={'shape': base64.urlsafe_b64encode(open(fileloc, 'rb').read())}
)
If you're just trying to send a file to your server, you shouldn't need to urlencode it. Send it using a POST request.
You can use urllib2 or you could use the requests lib which can simplify things a bit.
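As a rough sketch of that idea with requests (the endpoint URL, field name, and file path below are made up, not the real Shapeways API):

import requests

upload_url = 'http://example.com/upload'  # hypothetical endpoint; replace with the real one
fileloc = 'model.stl'                     # path to the binary 3D model file

with open(fileloc, 'rb') as f:
    r = requests.post(upload_url, files={'file': f})  # multipart/form-data upload, no urlencoding needed
print(r.status_code)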
This SO thread may help you as well.
Related
I just started learning python yesterday and have VERY minimal coding skill. I am trying to write a python script that will process a folder of PDFs. Each PDF contains at least 1, and maybe as many as 15 or more, web links to supplemental documents. I think I'm off to a good start, but I'm having consistent "HTTP Error 403: Forbidden" errors when trying to use the wget function. I believe I'm just not parsing the web links correctly. I think the main issue is coming in because the web links are mostly "s3.amazonaws.com" links that are SUPER long.
For reference:
Link copied directly from PDF (works to download): https://s3.amazonaws.com/os_uploads/2169504_DFA%20train%20pass.PNG?AWSAccessKeyId=AKIAIPCTK7BDMEW7SP4Q&Expires=1909634500&Signature=aQlQXVR8UuYLtkzjvcKJ5tiVrZQ=&response-content-disposition=attachment;%20filename*=utf-8''DFA%2520train%2520pass.PNG
Link as it appears after trying to parse it in my code (doesn't work, gives "unknown url type" when trying to download): https%3A//s3.amazonaws.com/os_uploads/2169504_DFA%2520train%2520pass.PNG%3FAWSAccessKeyId%3DAKIAIPCTK7BDMEW7SP4Q%26Expires%3D1909634500%26Signature%3DaQlQXVR8UuYLtkzjvcKJ5tiVrZQ%253D%26response-content-disposition%3Dattachment%253B%2520filename%252A%253Dutf-8%2527%2527DFA%252520train%252520pass.PNG
Additionally, people are welcome to weigh in on whether I'm doing this in a stupid way. Each PDF starts with a string of 6 digits, and once I download supplemental documents I want to auto-save and name them as XXXXXX_attachY.*, where X is the identifying string of digits and Y just increases for each attachment. I haven't gotten my code to work enough to test that, but I'm fairly certain I don't have it correct either.
Help!
#!/usr/bin/env python3
import os
import glob
import pdfx
import wget
import urllib.parse
## Accessing and Creating Six Digit File Code
pdf_dir = "/users/USERNAME/desktop/worky"
pdf_files = glob.glob("%s/*.pdf" % pdf_dir)
for file in pdf_files:
    ## Identify File Name and Limit to Digits
    filename = os.path.basename(file)
    newname = filename[0:6]
    ## Run PDFX to identify and download links
    pdf = pdfx.PDFx(filename)
    url_list = pdf.get_references_as_dict()
    attachment_counter = (1)
    for x in url_list["url"]:
        if x[0:4] == "http":
            parsed_url = urllib.parse.quote(x, safe='://')
            print(parsed_url)
            wget.download(parsed_url, '/users/USERNAME/desktop/worky/(newname)_attach(attachment_counter).*')
            ##os.rename(r'/users/USERNAME/desktop/worky/(filename).*',r'/users/USERNAME/desktop/worky/(newname)_attach(attachment_counter).*')
            attachment_counter += 1
    for x in url_list["pdf"]:
        print(parsed_url + "\n")
I prefer to use requests (https://requests.readthedocs.io/en/master/) when trying to grab text or files online. I tried it quickly with wget and I got the same error (might be linked to user-agent HTTP headers used by wget).
wget and HTTP header issues: download image from url using python urllib but receiving HTTP Error 403: Forbidden
HTTP headers: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent
The good thing with requests is that it lets you modify HTTP headers the way you want (https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers).
import requests
r = requests.get("https://s3.amazonaws.com/os_uploads/2169504_DFA%20train%20pass.PNG?AWSAccessKeyId=AKIAIPCTK7BDMEW7SP4Q&Expires=1909634500&Signature=aQlQXVR8UuYLtkzjvcKJ5tiVrZQ=&response-content-disposition=attachment;%20filename*=utf-8''DFA%2520train%2520pass.PNG")
with open("myfile.png", "wb") as file:
file.write(r.content)
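If the 403 really does come from wget's default headers, the same download with a browser-like User-Agent might look like this (the header value and the placeholder URL are just examples):

import requests

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'}  # example browser-like value
url = 'https://s3.amazonaws.com/os_uploads/...'              # the full pre-signed link from the PDF
r = requests.get(url, headers=headers)
with open("myfile.png", "wb") as file:
    file.write(r.content)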
I'm not sure I understand what you're trying to do, but maybe you want to use formatted strings to build your URLs (https://docs.python.org/3/library/stdtypes.html?highlight=format#str.format) ?
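For instance, the attachment-naming scheme from the question could be built with str.format (a sketch only; the values are placeholders):

newname = "123456"        # six-digit code taken from the PDF filename
attachment_counter = 1
save_path = "/users/USERNAME/desktop/worky/{}_attach{}.png".format(newname, attachment_counter)
print(save_path)          # /users/USERNAME/desktop/worky/123456_attach1.png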
Maybe checking string indexes is fine in your case (if x[0:4] == "http":), but I think you should check python re package to use regular expressions to catch the elements you want in a document (https://docs.python.org/3/library/re.html).
import re
regex = re.compile(r"^http://")
if re.match(regex, mydocument):
    <do something>
The reason for this behavior lies inside the wget library. Internally it encodes the URL with urllib.parse.quote() (https://docs.python.org/3/library/urllib.parse.html#urllib.parse.quote).
Basically it replaces characters with their appropriate %xx escape characters. Your URL is already escaped, but the library does not know that. When it parses the %20 it sees % as a character that needs to be replaced, so the result is %2520 and a different URL - hence the 403 error.
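A quick way to see the double-escaping (a demonstration with urllib.parse, not part of the original script):

from urllib.parse import quote, unquote

already_escaped = "DFA%20train%20pass.PNG"
print(quote(already_escaped, safe='://'))           # DFA%2520train%2520pass.PNG -- escaped a second time
print(quote(unquote(already_escaped), safe='://'))  # DFA%20train%20pass.PNG -- unquote first, then quote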
You could decode that URL first and then pass it, but then you would have another problem with this library, because your URL has the parameter filename*= while the library expects filename=.
I would recommend doing something like this:
import requests

# get the file
req = requests.get(parsed_url)

# parse your URL to get the GET parameters
get_parameters = [x for x in parsed_url.split('?')[1].split('&')]

filename = ''
# find the GET parameter with the name
for get_parameter in get_parameters:
    if "filename*=" in get_parameter:
        # split it to get the name
        filename = get_parameter.split('filename*=')[1]

# save the file
with open(<path> + filename, 'wb') as file:
    file.write(req.content)
I would also recommend removing the utf-8'' in that filename because I don't think it is actually part of the filename. You could also use regular expressions for getting the filename, but this was easier for me.
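A regex version of that filename extraction might look like this (the sample string is just the relevant tail of the question's URL, and the pattern is an assumption about its shape):

import re
from urllib.parse import unquote

sample = "...&response-content-disposition=attachment;%20filename*=utf-8''DFA%2520train%2520pass.PNG"
match = re.search(r"filename\*=(?:utf-8'')?([^&]+)", sample)
if match:
    filename = unquote(match.group(1))
    print(filename)  # DFA%20train%20pass.PNG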
I am trying to acquire a file from a URL on the web and then open that file for use in an application I'm making in Python on AWS Lambda. There doesn't seem to be a way for me to acquire the file in the form I need it, which I believe to be an os.PathLike object.
Here is what I am trying now, which doesn't work since requests.get returns a response, not a path. I'm posting from a phone right now so I cannot use code tags. Apologies.
filename = requests.get("url.com/file.txt")
f = open(filename, 'rb')
I have also tried a urlparse and a urllib urlretrieve on the url but that does not return a pathlike object either. Note that I don’t believe I can just use wget or something on the shell level as I am using AWS lambda.
import requests
url = 'http://url.com/file.txt'
r = requests.get(url, allow_redirects=True)
f = open(r, 'rb')
When you do such an operation, it's always good to look at the entire response of the request you are making. I usually use the __dict__ attribute; it works quite often:
print(response.__dict__)
On the ones I have done, there was a _content field in the response object with the file bytes. Then you can simply use the io module to read this file:
file = io.BytesIO(response._content)
This can then be used as a file, just like the object you get from the open() function.
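For reference, the public content attribute holds the same bytes, so a self-contained sketch using the URL from the question could be:

import io
import requests

r = requests.get('http://url.com/file.txt')
buffer = io.BytesIO(r.content)   # same bytes as _content, but via the public attribute
print(buffer.read()[:100])       # behaves like a file opened in 'rb' mode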
I can't get the URL.
base_url = "http://status.aws.amazon.com/"
socket.setdefaulttimeout(30)
htmldata = urllib2.urlopen(base_url)

for url in parser.url_list:
    get_rss_th = threading.Thread(target=parser.get_rss, name="get_rss_th", args=(url,))
    get_rss_th.start()

print htmldata
<addinfourl at 140176301032584 whose fp = <socket._fileobject object at 0x7f7d56a09750>>
When specifying htmldata.read() (Python error when using urllib.open), I then get a blank screen.
Python 2.7
Whole code: https://github.com/tech-sketch/zabbix_aws_template/blob/master/scripts/AWS_Service_Health_Dashboard.py
The problem is that from the URL link (RSS feed) I can't get any output; the data variable data = zbx_client.recv(4096) is empty - no status.
There's no real problem with your code (except for a bunch of indentation errors and syntax errors that apparently aren't in your real code), only with your attempts to debug it.
First, you did this:
print htmldata
That's perfectly fine, but since htmldata is a urllib2 response object, printing it just prints that response object. Which apparently looks like this:
<addinfourl at 140176301032584 whose fp = <socket._fileobject object at 0x7f7d56a09750>>
That doesn't look like particularly useful information, but that's the kind of output you get when you print something that's only really useful for debugging purposes. It tells you what type of object it is, some unique identifier for it, and the key members (in this case, the socket fileobject wrapped up by the response).
Then you apparently tried this:
print htmldata.read()
But already called read on the same object earlier:
parser.feed(htmldata.read())
When you read() the same file-like object twice, the first time gets everything in the file, and the second time gets everything after everything in the file—that is, nothing.
What you want to do is read() the contents once, into a string, and then you can reuse that string as many times as you want:
contents = htmldata.read()
parser.feed(contents)
print contents
It's also worth noting that, as the urllib2 documentation said right at the top:
See also The Requests package is recommended for a higher-level HTTP client interface.
Using urllib2 can be a big pain, in a lot of ways, and this is just one of the more minor ones. Occasionally you can't use requests because you have to dig deep under the covers of HTTP, or handle some protocol it doesn't understand, or you just can't install third-party libraries, so urllib2 (or urllib.request, as it's renamed in Python 3.x) is still there. But when you don't have to use it, it's better not to. Even Python itself, in the ensurepip bootstrapper, uses requests instead of urllib2.
With requests, the normal way to access the contents of a response is with the content (for binary) or text (for Unicode text) properties. You don't have to worry about when to read(); it does it automatically for you, and lets you access the text over and over. So, you can just do this:
import requests
base_url = "http://status.aws.amazon.com/"
response = requests.get(base_url, timeout=30)
parser.feed(response.content) # assuming it wants bytes, not unicode
print response.text
If I use this code:
import urllib2
import socket
base_url = "http://status.aws.amazon.com/"
socket.setdefaulttimeout(30)
htmldata = urllib2.urlopen(base_url)
print(htmldata.read())
I get the page's HTML code.
I am extremely confused after trying a few possible solutions and getting various errors that just lead me in circles. I have a function that will grab a tweet, put it in a dictionary, then write that dictionary to a file using dumps like so:
jsonFile = {}
jsonFile["tweet"] = tweet
jsonFile["language"] = language
with open('jsonOutputfile.txt', 'w') as f:
    f.write(json.dumps(jsonFile))
I then have another python file that has a function that will return the value of this jsonOutputfile.txt if I want to use it elsewhere. I do that like so:
with open('jsonOutputfile.txt') as f:
    jsonObject = json.loads(f.read())
return jsonObject
This function sits on my localhost. The above two functions that have to do with saving and retrieving the JSON file are separate from the rest of my functions below, as I want them to be.
I have another function that will retrieve the values of the returned status using python requests, like so:
def grab_tweet():
    return requests.post("http://gateway:8080/function/twittersend")
and then after grabbing the tweet I want to manipulate it, and I want to do so using the JSON that I should have received from this request.
r = grab_tweet()
data = json.dumps(r.text)
return data.get('tweet')
I want the function above to return just the value associated with the tweet key in the JSON that I should have received from this request. However, I keep getting the following error: AttributeError: 'str' object has no attribute 'get', which confuses me because, from my understanding, json.dumps() should create a valid JSON string that I can call get on. Is there an encoding error when I am transferring this to and from a file, or maybe when I am receiving my request?
Any help is appreciated.
EDIT:
Here is a sample of a response from my requests.post when I use r.text; it also looks like there is some Unicode in the response, so I put an example at the end of the tweet section. (This also doesn't look like JSON, which is what my question is centered around. There should at least be double quotes and no u's, right?)
{u'tweet': u'RT THIS IS THE TWEET BLAH BLAH\u2026', u'language': u'en'}
Use .json() from the requests module to get the response as JSON.
Ex:
data = r.json()
return data.get('tweet')
Note: json.dumps converts your response to a string object.
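A tiny demonstration of the difference (plain standard-library calls, not the original code):

import json

text = json.dumps({'tweet': 'hello'})  # '{"tweet": "hello"}' -- a str, so .get() fails on it
parsed = json.loads(text)              # back to a dict
print(parsed.get('tweet'))             # hello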
Edit as per comment - Try using the ast module.
Ex:
import ast
data = ast.literal_eval(r.text)
You will need to use the .json() method. See requests' documentation: JSON Response Content
Also, for future reference, rather than do
f.write(json.dumps(jsonFile))
You could simply use:
json.dump(jsonFile, f)
Same with using load instead of loads:
jsonObject = json.load(f)
I'm getting really tired of trying to figure out why this code works in Python 2 and not in Python 3. I'm just trying to grab a page of json and then parse it. Here's the code in Python 2:
import urllib, json
response = urllib.urlopen("http://reddit.com/.json")
content = response.read()
data = json.loads(content)
I thought the equivalent code in Python 3 would be this:
import urllib.request, json
response = urllib.request.urlopen("http://reddit.com/.json")
content = response.read()
data = json.loads(content)
But it blows up in my face, because the data returned by read() is a "bytes" type. However, I cannot for the life of me get it to convert to something that json will be able to parse. I know from the headers that reddit is trying to send utf-8 back to me, but I can't seem to get the bytes to decode into utf-8:
import urllib.request, json
response = urllib.request.urlopen("http://reddit.com/.json")
content = response.read()
data = json.loads(content.decode("utf8"))
What am I doing wrong?
Edit: the problem is that I cannot get the data into a usable state; even though json loads the data, part of it is undisplayable, and I want to be able to print the data to the screen.
Second edit: The problem has more to do with print than parsing, it seems. Alex's answer provides a way for the script to work in Python 3, by setting the IO to utf8. But a question still remains: why is it that the code worked in Python 2, but not Python 3?
The code you post is presumably due to wrong cut-and-paste operations because it's clearly wrong in both versions (f.read() fails because there's no f barename defined).
In Py3, ur = content.decode('utf8') works perfectly well for me, as does the following json.loads(ur). Maybe the wrong copies and pastes affected your 2-to-3 conversion attempts.
Depending on your Python version, you have to choose the correct library.
For Python 3.5:
import urllib.request
data = urllib.request.urlopen(url).read().decode('utf8')
For Python 2.7:
import urllib
url = serviceurl + urllib.urlencode({'sensor':'false', 'address': address})
uh = urllib.urlopen(url)
Please see this answer to another Unicode-related question.
Now: the Python 3 str type (which was the Python 2 unicode type) is an idealised object, in the sense that it deals with “characters”, not “bytes”. These characters, in order to be used for/from disk/network data, need to be encoded into/decoded from bytes by a “conversion table”, a.k.a. an encoding, a.k.a. a codepage. Because of operating-system variety, Python historically avoided guessing what that encoding should be; this has been changing over the years, but the principle of “In the face of ambiguity, refuse the temptation to guess.” still applies.
Thankfully, a web server makes your work easier. Your response above should give you all extra information needed:
>>> response.headers['content-type']
'application/json; charset=UTF-8'
So, every time you issue a request to a web server, check the Content-Type header for a charset value, and decode the request's data into Unicode (Python 3: bytes.decode(charset) → str) by using that charset.
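Putting that together for the question's URL, a Python 3 sketch that respects the advertised charset might be:

import json
from urllib.request import urlopen

response = urlopen("http://reddit.com/.json")
raw = response.read()

# get_content_charset() pulls the charset out of the Content-Type header
charset = response.headers.get_content_charset() or 'utf-8'
data = json.loads(raw.decode(charset))
print(type(data))  # a dict once decoding and parsing succeed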
Here is an approach that is compatible across both versions - it works by first converting bytes data to string, and then loading the string.
import json
try:
    from urllib.request import Request, urlopen  # python3+
except ImportError:
    from urllib2 import Request, urlopen  # python2
url = 'https://jsonfeed.org/feed.json'
request = Request(url)
response_json_string = urlopen(request).read().decode('utf8')
response_json_object = json.loads(response_json_string)