I want to generate a dynamically created PNG image with Pycairo and serve it using Django. I read this: Serve a dynamically generated image with Django.
Is there a way to move the data from a Pycairo surface directly into the HTTP response? This is what I'm doing for now:
data = surface.to_rgba()
im = Image.frombuffer("RGBA", (width, height), data, "raw", "RGBA", 0, 1)
response = HttpResponse(mimetype="image/png")
im.save(response, "PNG")
return response
But it actually doesn't work, because there is no to_rgba method (I found that call via Google Code, but it doesn't work).
EDIT: to_rgba can be replaced by the correct call, get_data(), but I still want to know whether I can bypass PIL altogether.
import cairo
from django.http import HttpResponse

def someView(request):
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100)
    context = cairo.Context(surface)
    # Draw something ...
    # Note: on Django 1.7+ use content_type="image/png" instead of mimetype.
    response = HttpResponse(mimetype="image/png")
    surface.write_to_png(response)  # HttpResponse is file-like, so cairo writes into it directly
    return response
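If you prefer to build the response from the raw bytes first (for example, to set headers after rendering), a minimal variation using an in-memory buffer also bypasses PIL:

import io
import cairo
from django.http import HttpResponse

def someView(request):
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100)
    context = cairo.Context(surface)
    # Draw something ...
    buf = io.BytesIO()
    surface.write_to_png(buf)  # write_to_png accepts any writable file-like object
    return HttpResponse(buf.getvalue(), content_type="image/png")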
You can try this:
http://www.stuartaxon.com/2010/02/03/using-cairo-to-generate-svg-in-django
It's about SVG, but it should be easy to adapt.
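For a rough sketch of that adaptation (the view name and dimensions are placeholders, not taken from the linked post): pycairo's SVGSurface can write into an in-memory buffer, which is then returned as the response body.

import io
import cairo
from django.http import HttpResponse

def svgView(request):
    buf = io.BytesIO()
    surface = cairo.SVGSurface(buf, 100, 100)  # SVGSurface accepts a file-like object instead of a filename
    context = cairo.Context(surface)
    # Draw something ...
    surface.finish()  # flush the SVG document into the buffer
    return HttpResponse(buf.getvalue(), content_type="image/svg+xml")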
I am trying to save a map containing markers (and also a heatmap) as an image.
Here is the code to display the map:
import time
from ipywidgets import Layout
from ipyleaflet import Map, Marker, CircleMarker
import geopandas

defaultLayout = Layout(width='3000px', height='3000px')  # A very large image.
lat_lgn = [39.74248, 254.993622]
m_f = Map(center=lat_lgn, zoom=12, layout=defaultLayout)
marker_m = Marker(location=lat_lgn, draggable=False)
m_f.add_layer(marker_m)
m_f
Then I add some markers to it:
arr_test1 = [39.74258, 254.993682]
arr_test2 = [39.76288, 254.988932]
arr_test3 = [39.79998, 254.991982]
all_loc = [arr_test1, arr_test2, arr_test3]

for cur_loc in all_loc:
    point = CircleMarker(
        radius=10,
        location=cur_loc,
        color='red',
        fill_color="black",
    )
    m_f.add_layer(point)
    time.sleep(0.001)
Saving the map as an HTML file works fine:
m_f.save('f_map.html', title='My Map')
The problem occurs when I try to get an image or a PDF from the HTML:
import imgkit
import pdfkit
config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')
pdfkit.from_file('f_map.html', 'outpdf.pdf', configuration=config)
pdfkit.from_file('f_map.html', 'outjpg.jpg', configuration=config)
pdfkit.from_file('f_map.html', 'outpng.png', configuration=config)
The PDF file is blank, and my MacBook is not able to open either the JPEG or the PNG file.
To check my dependencies, I have tried this:
import pdfkit
pdfkit.from_url('http://stackoverflow.com', 'out.pdf', configuration=config)
which works fine. However, once I change out.pdf to out.png, I cannot open the obtained file.
Does anyone have an idea how I can solve this issue?
I am trying to get a very large image, but it also did not work with a 300px × 300px image.
Any hints are welcome.
One option (which admittedly is a little crude) is to use the selenium library to automatically open the HTML and take a screenshot, which is then saved as an image.
The code will look like this (I have written it for MS Edge, but it should be similar for other browsers):
from selenium import webdriver
from time import sleep

mfilename = '<Name (without path) of your HTML file here>'
MyBrowser = webdriver.Edge(r"<path to webdriver here>\msedgedriver.exe")
murl = 'file:///{path}/{mapfile}'.format(path='<path to HTML here>', mapfile=mfilename)
MyBrowser.get(murl)
sleep(10)  # not strictly necessary, but gives the map tiles time to load
MyBrowser.save_screenshot('<Image_file_name>.png')
MyBrowser.quit()
Note that this will require you to install a webdriver for your browser in advance.
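Alternatively, since the question already imports imgkit, a minimal (untested) sketch that renders the HTML straight to a PNG with wkhtmltoimage could look like this; the binary path below is an assumption and needs to match your install:

import imgkit

# Assumed location of the wkhtmltoimage binary; adjust for your system.
img_config = imgkit.config(wkhtmltoimage='/usr/local/bin/wkhtmltoimage')
imgkit.from_file('f_map.html', 'outpng.png', config=img_config)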
My Python code:
What I'm doing here is receiving JSON via a POST request and extracting a certain property from it to build a string. Afterwards I generate a word cloud using the wordcloud and matplotlib libraries.
My goal is to somehow generate an image WITHOUT STORING IT ON MY SERVER (I assume it would live in a variable or something similar), and then send it to an HTML page and show it.
import json
import ast
import matplotlib.pyplot as plt
from flask import request
from wordcloud import WordCloud

@app.route('/wordcloud', methods=['POST', 'GET'])
def wordcloud():
    print('****************************************')
    jsoncito = request.form['wordcloud']
    print('22222222222222222222222222222222222222222')
    jsonParaPasar = json.dumps(jsoncito)
    text = ""
    newjson = json.loads(request.form['wordcloud'])
    neww = ast.literal_eval(newjson)
    for x in neww:
        text = text + x['respuesta'] + " "
    wordcloud = WordCloud(width=480, height=480, margin=0).generate(text)
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis("off")
    plt.margins(x=10, y=10)
I read that maybe I can turn it into a NumPy array in order to send it, but I don't know how to turn that into a proper image!
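One way to do this without writing anything to disk, sketched here under the assumption that the wordcloud object from the view above is in scope and that the route should return the PNG directly, is to render into an in-memory buffer and hand that to Flask:

import io
from flask import send_file

# At the end of the view, after generating the word cloud:
buf = io.BytesIO()
wordcloud.to_image().save(buf, format='PNG')  # WordCloud.to_image() returns a PIL image
buf.seek(0)
return send_file(buf, mimetype='image/png')

If the image should instead be embedded in an HTML template, the same buffer can be base64-encoded and placed in an <img src="data:image/png;base64,..."> tag.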
I am trying to reload an image in a loop from the blob's URL into a text area.
So I am following the given IronPython examples to download a blob, set it on a document property called Img1, and then bind that property to a label inside the text area. My plan is to run this in a wait-loop cycle.
The code looks like this:
#from System.Drawing import Image
from System import Uri
from System.Net import HttpWebRequest
from Spotfire.Dxp.Data import BinaryLargeObject
uri = Uri("http://100.206.214.99/remote/display.bmp")
request = HttpWebRequest.Create(uri)
response = request.GetResponse()
stream = response.GetResponseStream()
blob = BinaryLargeObject.Create(stream)
Document.Properties["Img1"] = blob
It's all working fine, but I'm not able to adjust the height or width of the image/blob. Can anyone help me with this?
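If the label simply renders the blob at its native size, one possible workaround (a sketch only, not tested against Spotfire; the 320x240 target size is a placeholder) is to resize the downloaded image with System.Drawing before creating the blob:

from System import Uri
from System.Net import HttpWebRequest
from System.IO import MemoryStream, SeekOrigin
from System.Drawing import Bitmap
from System.Drawing.Imaging import ImageFormat
from Spotfire.Dxp.Data import BinaryLargeObject

uri = Uri("http://100.206.214.99/remote/display.bmp")
request = HttpWebRequest.Create(uri)
stream = request.GetResponse().GetResponseStream()

original = Bitmap(stream)              # decode the downloaded image
resized = Bitmap(original, 320, 240)   # scale to the desired size
buffer = MemoryStream()
resized.Save(buffer, ImageFormat.Png)  # re-encode the resized image
buffer.Seek(0, SeekOrigin.Begin)       # rewind so the blob reads from the start

Document.Properties["Img1"] = BinaryLargeObject.Create(buffer)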
I'd like to send an image (via URL or path) on request.
I am using the source code here.
The code already has a sample for sending an image (via URL or path), but I don't get it, since I'm new to Python.
Here's the sample code snippet:
elif text == '/image':  # request
    img = Image.new('RGB', (512, 512))
    base = random.randint(0, 16777216)
    pixels = [base + i * j for i in range(512) for j in range(512)]  # generate a sample image
    img.putdata(pixels)
    output = StringIO.StringIO()
    img.save(output, 'JPEG')
    reply(img=output.getvalue())
Some API info can be found here.
Thanks for your patience.
To send a photo from URL:
bot.send_photo(chat_id=chat_id, photo='https://telegram.org/img/t_logo.png')
To send a photo from local Drive:
bot.send_photo(chat_id=chat_id, photo=open('tests/test.png', 'rb'))
Here is the reference documentation.
I was struggling to connect the python-telegram-bot examples with the above advice. In particular, while having context and update, I couldn't find chat_id and bot. So, my two cents:
pic=os.path.expanduser("~/a.png")
context.bot.send_photo(chat_id=update.effective_chat.id, photo=open(pic,'rb'))
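For context, a minimal sketch of where that snippet could live in a v13-style python-telegram-bot setup; the command name, token placeholder, and file path are assumptions:

import os
from telegram.ext import Updater, CommandHandler

def send_image(update, context):
    pic = os.path.expanduser("~/a.png")
    with open(pic, "rb") as f:
        context.bot.send_photo(chat_id=update.effective_chat.id, photo=f)

updater = Updater("YOUR_BOT_TOKEN")
updater.dispatcher.add_handler(CommandHandler("image", send_image))
updater.start_polling()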
I want to display some images with their captions in a QTextEdit. I have a dictionary with captions and corresponding URLs. The problem is that when I post a request with QNetworkAccessManager and wait for the finished(QNetworkReply*) signal, I get a reply with the image only. How can I determine which caption this image was requested for?
def __init__(self):
    manager = QNetworkAccessManager(self)
    self.connect(manager, SIGNAL("finished(QNetworkReply*)"), self.add_record)
    for record in self.records:  # each record holds a caption and an image URL
        manager.get(QNetworkRequest(QUrl(record['url'])))

def add_record(self, reply):
    img = QImage()
    img.loadFromData(reply.readAll())
    self.textEdit.textCursor().insertImage(img)
    # I don't know at this point for which caption
    # I've received this image
    # self.textEdit.append(record['text'] + '\n')
Are there any design patterns for this problem? I would appreciate any ideas.
Assuming a recent Qt version, QNetworkReply::request() will give you the QNetworkRequest that triggered this reply.
So you can access the information you're after with QNetworkRequest::url().
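A minimal sketch of that lookup, written in the PyQt4 style of the question; self.records, the 'url' and 'text' keys, and the captions dictionary are assumptions:

def _fetch_images(self):
    self.captions = {}  # maps each requested URL back to its caption
    manager = QNetworkAccessManager(self)
    self.connect(manager, SIGNAL("finished(QNetworkReply*)"), self.add_record)
    for record in self.records:
        self.captions[record['url']] = record['text']
        manager.get(QNetworkRequest(QUrl(record['url'])))

def add_record(self, reply):
    img = QImage()
    img.loadFromData(reply.readAll())
    self.textEdit.textCursor().insertImage(img)
    # Recover the caption via the request that produced this reply
    caption = self.captions.get(reply.request().url().toString(), "")
    self.textEdit.append(caption)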