Get all usernames of a domain using gdata on App Engine - Python

I have just begun to learn App Engine with Python, and in my project I'm building an application that needs to retrieve all users of my domain.
I used RetrieveAllOrgUsers and RetrieveAllUsers to get the users of my domain, but both of them crash when I deploy the application to my App Engine account: I get a DeadlineExceeded error.
Could someone please help me resolve this issue? I suspect I need to use tasks, but I don't know how.

You should spend some time reading about the limitations of the platform you are using.
Do you understand what a DeadlineExceededError means and why it is raised?
That should be your starting point.
Also have a look at the stack trace in the log, and you will see where in your code the error occurred.
Basically, you have a limited amount of time to do things in front-end requests. If you exceed that time, you will get this error (there are other reasons too).
I assume you're using the Provisioning API, and that could be something that takes some serious time. There are other limits you will need to contend with as well, such as those around URLFetch.
Tasks will more than likely be your solution, but you should try to understand why before embarking on that.
Have a read of https://developers.google.com/appengine/articles/deferred, which is an easy path into tasks using the deferred library.
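Here is a minimal sketch of that approach, assuming the deferred library and a hypothetical fetch_user_page() helper standing in for whatever gdata/Provisioning API call you use to fetch one page of users; it shows the shape of the solution, not a drop-in implementation:

    # Sketch only: move the slow user walk off the front-end request into tasks.
    # fetch_user_page() and store_results() are hypothetical placeholders.
    from google.appengine.ext import deferred


    def collect_users(start_username=None, collected=None):
        collected = collected or []
        page, next_start = fetch_user_page(start_username)  # one page of users
        collected.extend(page)
        if next_start:
            # Re-queue the next page as a new task instead of looping in one request.
            deferred.defer(collect_users, next_start, collected)
        else:
            store_results(collected)  # e.g. write to the datastore or memcache


    # In your request handler, kick the work off and return immediately:
    #     deferred.defer(collect_users)
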

Related

Python high load capable microservice architecture

I wanted to ask you about microservices in Python. As of writing this, I've gotten pretty good at writing well-structured flask-restful APIs, and I want to go on and learn about microservices in Python.
Right now I have read quite a lot of info on this and even searched online for examples (1 example here), but I am not really sure exactly where to start, as I don't want to invest too much time in an inefficient pattern.
So I wanted to know if anyone knows any courses with examples for building high-load services in Python. My only hints so far are asyncio and aiohttp for request handling, and I'm not sure whether using a message broker (such as ZeroMQ or RabbitMQ) would be a good idea, since from what I've read it adds request lag.
Any advice would be great.
PS: The current pattern I'm stuck on is the API Gateway pattern, and I would also like to know whether it is a good direction to start with.
There are plenty of microservice frameworks in Python that can handle high load and get you a long way towards following best practices.
Try for example pymacaron (http://pymacaron.com/). Pymacaron is basically a Flask app whose endpoints are auto-generated from a swagger specification. To write a pymacaron microservice, you mostly have to:
(1) write a swagger specification for your api (which is always a good starting point, whatever language you are using). Your swagger file describes the get/post/etc. calls of your api and which objects (json dicts) they take and return, but also which Python method in your code implements each endpoint.
(2) implement your endpoints' methods (a rough sketch of this follows below).
Once you have done that, you get loads of things for free: you can package your code as a Docker container, deploy it to Amazon Elastic Beanstalk, start asynchronous tasks from within your api calls, or get the api documentation with no extra work.
Here is an example of a helloworld api implemented with pymacaron: https://github.com/pymacaron/pymacaron-helloworld
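As a rough illustration of step (2) — treat this as a sketch based on the helloworld repo rather than authoritative pymacaron usage, since field names and conventions may differ between versions — the swagger operation points at a Python method, and that method is all you implement:

    # Swagger side (in the spec), roughly:
    #   /hello/{name}:
    #     get:
    #       x-bind-server: myserver.api.do_hello
    #       ...
    #
    # Python side: the bound method takes the declared parameters and returns
    # an object matching the declared response model. 'Greeting' is a
    # hypothetical model name standing in for whatever your spec defines.

    def do_hello(name):
        return Greeting(message="Hello, %s!" % name)
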

Gathering bug reports?

Premise: I am a beginner in search of an easy way to receive bug reports from users overseas.
I've made a script for some friends who live on the other side of the ocean (US - Europe), and I would like to gather automatic bug reports whenever they happen. My first idea was to send myself an email with the smtplib module. It works fine when testing at home, but as soon as the sender "sends", my email provider (Gmail) blocks the connection because, of course, it comes from an "unknown device". I've already enabled "Allow less secure apps" as someone suggested, but to no avail.
What I am looking for is a simple way of dealing with this.
Yes, I could make the script ignore the error if the email doesn't get sent, and then go into my Google account and enable those devices, so at least it would work from the second run...
But that doesn't seem like what a programmer would do in this case. I am learning, so a solution within the language is what I am after.
A different provider without this restriction would also be a good start, but I tried Yahoo, Live, and Yandex and couldn't make them work. Are there any?
So my question is: how do others do this? What is the best solution for someone like me?
I've read about Sentry and other error/bug-tracking tools, but they are obviously way more than what I need.
You should certainly not embed e.g. Gmail credentials in code that is remotely executed on devices you do not control. If I understand correctly, the Gmail "less secure device" issue happens because every "user" is running this code and using your credentials. The same holds true for any other provider.
Now, this won't exactly be simple, but one way to go about it would be to create a server-side API endpoint that accepts HTTP(S) (or any other protocol) requests and then authenticates with Gmail in a somewhat more secure way on the server side.
The concept for emails is:
Bug > Python Script > API call > Email
This could be implemented with Python on the API side (Flask, for example) or with an AWS Lambda function behind Amazon API Gateway, but again, that is something to work through and understand by itself, which will take a good chunk of time.
You will need to touch a lot of concepts, like auth tokens, to make this really secure.
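To make the idea concrete, here is a minimal sketch of such an endpoint, assuming a small Flask app on a server you control; the shared token, hostnames, and credentials are all placeholders, and a production version would need real authentication and rate limiting:

    # Minimal sketch of the Bug > Python Script > API call > Email flow.
    # SMTP credentials stay on the server, never in the distributed script.
    import smtplib
    from email.mime.text import MIMEText

    from flask import Flask, request, abort

    app = Flask(__name__)
    API_TOKEN = "change-me"          # shared secret the client must send
    SMTP_USER = "you@example.com"    # placeholder credentials
    SMTP_PASS = "app-password"

    @app.route("/bug-report", methods=["POST"])
    def bug_report():
        if request.headers.get("X-Api-Token") != API_TOKEN:
            abort(401)
        msg = MIMEText(request.get_data(as_text=True))
        msg["Subject"] = "Bug report"
        msg["From"] = SMTP_USER
        msg["To"] = SMTP_USER
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
            server.login(SMTP_USER, SMTP_PASS)
            server.send_message(msg)
        return "ok"

The distributed script would then just POST the traceback text to /bug-report with that token in a header, e.g. using the requests library.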
Could you elaborate a little on where the code needs to run, and on whether you are willing to try AWS or another cloud provider, or have access to an internet-connected server? That would make it easier to provide a full example of the solution in a hackish way, while highlighting the problems you could face on the security side.
I understand that this is not the way to go, but for my needs and my level of experience it works for me!
Yandex allows you to send email from different IPs, so Yandex is the way to go. What I was doing wrong in the first place was using the wrong port (587 instead of 465), as sketched below.
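For reference, a tiny sketch of that fix: port 465 expects an SSL connection from the start, while 587 expects plain SMTP upgraded with STARTTLS. The Yandex hostname and credentials below are assumptions; check your provider's documentation for the exact values:

    import smtplib

    # SMTP_SSL on 465, not SMTP(..., 587) + starttls()
    with smtplib.SMTP_SSL("smtp.yandex.com", 465) as server:
        server.login("you@yandex.com", "your-password")
        server.sendmail("you@yandex.com", ["you@yandex.com"],
                        "Subject: Bug report\n\nDetails go here")
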

What's the best way to get continuous data from another program in Django?

Here's the setup: on a single-board computer with a very rudimentary Linux, I'm running a Django app. When a button is pressed, or in response to the data described below, this app is supposed to call either a function from a library written in C or a compiled C program to write data to system memory at a specified address, peek/poke style (Python doesn't seem to be able to do that natively).
The Django app should also continuously display data that is being read from memory by the same library/program.
My question is how to even begin setting up the scenario described above. Is this even possible with a web app? Is Django, or more fundamentally any web framework, even the right approach here? I'm at a bit of a loss, since I've spent quite a few hours trying to figure out how to do this without finding even the most basic starting point...
Disclaimer: I'm pretty new to the entire web framework thing, and more importantly to web development in general, so sorry if this is a bad question in the sense that I could probably have found information on this topic online, but I couldn't really find a good starting point.
I wanted to add a comment but there isn't enough space... anyway:
You can write a native extension in C for Python that could do what you need; check this.
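A lighter-weight alternative to a full extension, if your peek/poke code already exists as a shared library, is ctypes. The library name and the peek/poke signatures below are hypothetical stand-ins for whatever your C code actually exports:

    import ctypes

    # Load the (hypothetical) shared library built from the C code.
    lib = ctypes.CDLL("./libpeekpoke.so")
    lib.peek.argtypes = [ctypes.c_uint32]
    lib.peek.restype = ctypes.c_uint32
    lib.poke.argtypes = [ctypes.c_uint32, ctypes.c_uint32]

    def read_register(address):
        return lib.peek(address)

    def write_register(address, value):
        lib.poke(address, value)
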
Now, as for displaying data continuously: this is kind of vague, but if the C library is changing this hypothetical address very often and very fast, you have to update the browser client as fast as possible.
I think websockets would do the trick, but they are JS-related, so I think Node.js would be a better candidate than Django for the server side of your application.
If you want to stick with Django, you can also expose a URL that returns the current value at that address and have a webpage check that URL continuously (on a short interval) using a simple ajax call; it is kind of ugly and inefficient, but it would work (see the sketch below).
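A rough sketch of that polling option, reusing the hypothetical read_register() wrapper from above and an example address; the URL name and interval are arbitrary:

    from django.http import JsonResponse

    def current_value(request):
        # Return the latest value read from memory as JSON.
        return JsonResponse({"value": read_register(0x4000)})

    # Wire this view up in urls.py, then have the page poll it, e.g. with
    # setInterval(() => fetch('/api/value/').then(...), 500) on the client.
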
Anyway, IMHO your best bet is websockets, because with them you have full-duplex communication between client and server.
Good luck with your project.
Info:
Websockets in Django with socket.io
Nodejs socket.io

Delivering Python Processed data to the web

I have developed a Python program that parses a webpage and creates a new text document with the parsed data. I want to deliver this new information to the web, and I have no idea where to start with something like this. Are there any free options where a site can automatically call this Python code upon request and update the new data on its page? Or is the only feasible solution here to have my own website/server that uses my code? I'm honestly pretty overwhelmed by the options when I try to do a web search for a solution like this. I have done a decent amount of application programming before, so I'm confident in my ability to learn new things, but web protocols are all new to me, so it's hard to find a starting point.
Ultimately I want this Python code to run automatically, or per user request, and deliver the data to them. It could even be through an email, although that is probably less practical.
I personally have good experience using Google App Engine (and it's free for a limited number of requests). The downside is that it does not allow C extensions or Python 3.
If you want to host your own server, Tornado is a good option, I think. Tornado supports both Python 2 and Python 3.
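A minimal Tornado sketch of that route, with parse_webpage() as a placeholder for the existing parsing program:

    import tornado.ioloop
    import tornado.web

    class ParsedDataHandler(tornado.web.RequestHandler):
        def get(self):
            # Run the parser on demand and return its text output.
            self.write(parse_webpage())  # hypothetical helper

    def make_app():
        return tornado.web.Application([(r"/data", ParsedDataHandler)])

    if __name__ == "__main__":
        make_app().listen(8888)
        tornado.ioloop.IOLoop.current().start()
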
There are a great many options available, from "traditional" virtual server or website hosts like a2hosting or GoDaddy to "cloud application hosts" such as Amazon EC2, Heroku, or OpenShift.
For your case, and without knowing more, I would suggest that application hosting is more appropriate, and that you take a look at Heroku and OpenShift in particular.
Define carefully what you want to achieve (how users access your application, what they see, how they interact with it, etc.) and then evaluate these options against those requirements.
Most offer a free trial, or even a free tier, depending on what you need. Good luck!
If you've never worked with web technologies before, this will be an overwhelming task, since there are a lot of different technologies involved and many possible ways to combine them.
You'll probably want to start by familiarizing yourself with the very basics of the HTTP protocol.
Then you should read a bit about server-side CGI programming (the article also has a quick overview of HTTP).
Python can run under both CGI and WSGI (if the server provider allows such access), so you may also want to read about WSGI; a minimal example follows.
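The smallest possible WSGI application, just to make the term concrete: a callable that takes the request environment and a start_response function and returns the response body. Any WSGI-capable host can serve something like this:

    def application(environ, start_response):
        # Whatever your parser produced would be returned here.
        body = b"Parsed data would go here"
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]
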
Once you grasp all these concepts, you should check this question for actual Python techniques.
Also, since you seem to be under the impression that you must pay to have a website/app deployed, you should know there are companies that host Python apps for free.

Python GData lib causes too many redirects

I'm using the Python GData library to work with Google Calendar. I'm making really simple requests (e.g. creating an event), using OAuth authorization.
Usually this works fine, but sometimes I receive lots of 302 redirects, which leads to a "Maximum redirects count reached" exception.
If I retry the same request, it usually works correctly.
I can't figure out why this is happening; it looks like a random occurrence.
As a workaround I wrote code that retries the request a few times when this error occurs, but maybe there is an explanation for this behavior, or even a way to avoid it?
Answer from the Google support forum:
This might happen due to some issues in the Calendar servers and is not an error on your part. The best way to "resolve" this issue is to retry again.
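A rough sketch of that retry approach; the exception class to catch depends on the gdata version, so it is left broad here and should be narrowed to the error you actually see, and the usage line is only illustrative:

    import time

    def with_retries(request_fn, attempts=3, delay=1):
        for attempt in range(attempts):
            try:
                return request_fn()
            except Exception:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay * (attempt + 1))  # simple backoff between tries

    # Usage (adapt to your gdata client's actual call):
    #     event = with_retries(lambda: client.InsertEvent(new_event))
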
