Google analytics .dat file missing, falling back to noauth_local_webserver - python

I have an AWS EC2 machine that has been running nightly Google Analytics scripts to load data into a database. It had been working fine for months until this weekend, and I have not made any changes to the code.
These are the two errors that are showing up in my logs:
/venv/lib/python3.5/site-packages/oauth2client/_helpers.py:256: UserWarning: Cannot access analytics.dat: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Failed to start a local webserver listening on either port 8080
or port 8090. Please check your firewall settings and locally
running programs that may be blocking or using those ports.
Falling back to --noauth_local_webserver and continuing with
authorization.
It looks like it is missing my analytics.dat file, but I have checked and the file is in the same folder as the script that calls the GA API. I have been searching for hours trying to figure this out, but there are very few resources on the above errors for GA.
Does anyone know what might be going on here? Any ideas on how to troubleshoot more?

I am not sure why this is happening, but here is a list of steps which might help you.
1. Check whether this issue is caused by the Google Analytics API version; Google periodically deprecates previous versions of its APIs.
2. I am guessing that you are running this code via cron on your EC2 server. Make sure that you include the full path to the folder where the .dat file is, because cron jobs do not run from the script's directory.
3. Check whether you have the latest credentials in the .dat file. Authentication to the API happens through the .dat file.
Hope this solves your issue.
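The cron-path point above is the most common culprit: cron runs jobs from the invoking user's home directory, so a relative `analytics.dat` gets resolved against the wrong folder. A minimal sketch of resolving the file next to the script itself, regardless of the working directory (the commented `Storage` usage assumes the oauth2client library the warning in the logs comes from):

```python
import os

# Resolve analytics.dat relative to this script's location, not the
# current working directory (which under cron is typically $HOME).
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
DAT_PATH = os.path.join(SCRIPT_DIR, "analytics.dat")

# With oauth2client you would then pass the absolute path explicitly:
# from oauth2client.file import Storage
# storage = Storage(DAT_PATH)
# credentials = storage.get()
print(DAT_PATH)
```

With this, the script finds the same file whether it is launched interactively or from crontab.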

Related

Upload file on Ambari Apache from local system using python

I want to upload a CSV file daily to Ambari Apache. I've tried adapting multiple solutions available online for uploading files to Google and other equivalent platforms. I have also tried methods like SFTP, but still have not found a solution. Please recommend any tips, ideas, or methods on how I should achieve this.
There is an Ambari method to do this. You can create a custom service in Ambari that would run the upload. This would enable Ambari to self-contain the code and execute it. Out of the box, Ambari wouldn't technically be running the script; you'd have to run it on a master/slave node, but you might be able to work around that by running an agent on the Ambari host and making it a slave. If that's acceptable, you could have this service installed on a single slave and have it push/pull the appropriate file to Ambari.
Others have implemented this on just one machine, and you can google how they make sure it runs on only one machine.

Error in deployment of a flask web app with MongoDB Atlas

I am new to developing web apps, so I might get confused a lot of times!
The problem is this:
I was developing some sort of basic social network with PyCharm. At first, when users signed up they were created in a local folder as JSON files. I then wanted to deploy it, and I did so without problems using PythonAnywhere (PA). Let's call the .py file where I have the whole thing "server.py".
Then I started looking for a cloud service and ended up modifying everything to work with MongoDB Atlas, and it was a complete success. I made a lot of local tests using PyCharm and everything is OK; the users are now created on the cloud service.
My problem is that I would like to make a deployment test with that MongoDB version, and when I tried to use PA again, this time it gives me a lot of errors.
Note: I already installed all the requirements on PA from a pip freeze requirements.txt.
Is there a problem with PA and MongoDB? Is there any other better option?
Should it run OK if the first version of "server.py" was OK?
I just replaced that file with the new one, which was running perfectly on localhost.
If you need more info just tell me; I am very new at this.
Thanks a lot

Google Cloud Vision not responding when using gunicorn+Flask

I am new to the Google Vision API, but I have been working with gunicorn and Flask for some time. I installed all the required libraries and have my API key in the environment via the gunicorn bash file. Whenever I try to hit the GCP API, it just freezes with no response.
Can anybody help?
Here's my gunicorn_start.bash
#!/bin/bash
NAME="test"
NUM_WORKERS=16
PYTHONUNBUFFERED=True
FLASK_DIR=/home/user/fold/API/

echo "Starting $NAME"
cd $FLASK_DIR

# Note: "conda activate" only works in a non-interactive script if conda
# has been initialised for it (e.g. by sourcing conda.sh beforehand).
conda activate tf

export development=False
export GOOGLE_APPLICATION_CREDENTIALS='/home/user/test-6f4e7.json'

# Start gunicorn on port 9349 with the eventlet worker class.
exec /home/user/anaconda3/envs/tf/bin/gunicorn --bind 0.0.0.0:9349 --timeout 500 --worker-class eventlet --workers $NUM_WORKERS app:app
EDIT
It freezes during API call.
Code for API call:
import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with io.open(path, 'rb') as image_file:
    content = image_file.read()
image = vision.types.Image(content=content)
response = client.document_text_detection(image=image)
There is no log; it just freezes, nothing else.
The code looks fine and it doesn't seem to be a permission error. Since there are no logs, the issue is hard to troubleshoot; however, I have two theories of what could be happening. I'll leave them below, with some information on how to troubleshoot them.
The API call is not reaching Google's servers
This could be happening due to a networking error. To rule this out, try making a request using curl from the development environment (or wherever the application is running).
You can follow the CLI quickstart for the Vision API to prepare the environment and make the request. If it works, you can rule out the network as a possible cause. If the request fails or freezes, you might need to check the network configuration of your environment.
In addition, you can go to the API Dashboard in the Cloud Console and look at the metrics of the Vision API. In these graphs, you can see if your requests are reaching the server, as well as some useful information like: errors by API method, errors by credential, latency of requests, etc.
There's an issue with the image/document you're sending
Note: Change the logging level of the application to DEBUG (if it's not already at this level).
If you're certain that the requests are reaching the server, the possible issue could be with the file that you're trying to send. If the file is too big, the connection might look as if it was frozen while it is being uploaded, and also it might take some time to be processed. Try with smaller files to see the results.
Also, I noticed that you're currently using a synchronous method to perform the recognition. If the file is too big, you could try the asynchronous annotation approach. Basically, you upload your file(s) to Cloud Storage first and then create a request indicating: the storage URI where your file is located and the destination storage URI where you want the results to be written to.
What you'll receive from the service is an operation Id. With this Id, you can check the status of the recognition request and make your code wait until the process has finished. You can use this example as a reference to implement it.
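The wait-until-finished step can be sketched independently of the Vision client. Here `check_status` is a hypothetical stand-in for polling the real operation (e.g. calling `operation.done()` on the object returned by the async annotation call):

```python
import time

def wait_for_operation(check_status, interval=2.0, timeout=60.0):
    """Poll a status callable until it reports completion or the timeout expires.

    check_status() is a stand-in for the real operations API
    (e.g. operation.done() on a long-running operation object).
    Returns True if the operation finished in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_status():
            return True
        time.sleep(interval)
    return False

# Example: a fake operation that completes on the third poll.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_operation(fake_status, interval=0.01))  # prints True
```

In practice the client library's operation object already offers a blocking `result(timeout=...)` helper, so explicit polling like this is only needed when you want custom back-off or progress reporting.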
Hopefully with this information you can determine what is the issue you're facing and solve it.
I had the exact same issue, and it turned out to be gunicorn's fault. When I switched to the Django dev server, the problem went away. I tried with older gunicorn versions (back to 2018), and the problem persisted. I should probably report this to gunicorn. :D
Going to switch to uwsgi in the meantime.

Web2py Deployment issue: can't make server live

I am trying to deploy web2py on EC2. I followed the simple official guide, but it's not working for me. These are the steps I've followed just now, and I can't open the site in my local browser. The objective is to access the site served by the EC2 deployment locally on my home computer. Can someone please point me in the right direction? Thanks
In your web2py folder there is a '/scripts' directory that helps you deploy web2py from scratch automatically. You don't need to do any research; just run the script matching the OS of the machine you are using on EC2, since there are different scripts per OS. Run the shell script as root and that's it. You can access your site at ip:8000 by default. You can edit the .sh script file if you need to use some other port for deployment in the test phase.

Can't run coursebuilder in google app engine

It is really weird: after clicking the Run button, it does nothing, shows no log, and displays a clock sign in the first column.
It worked normally before. However, after I messed up my Python environment, Google Course Builder can't run the web application. That's my guess. When I run which python, it only shows:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
This makes me feel like I have no way to solve it! Has anyone come across this problem before? Any ideas or suggestions?
Update: I followed the suggestion to run the web application on GAE from the command line. It reminds me here:
Update: The error message shows that GAE can't get the allocated port and domain. This happens because, while running the web application from the command line, I also had the GAE GUI open, running a web app on the same port number.
So the way to solve it is to close the GAE GUI and free the port, or designate different port numbers on the command line (--port=XXXX and --admin_port=YYYY), or take a look at the doc:
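Before picking alternatives with --port/--admin_port, you can check whether the default dev-server ports are already taken with a small socket probe (a minimal sketch; 8080 is assumed as the default application port here):

```python
import socket

def port_in_use(port, host="localhost"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Example: probe the default port before launching the dev server.
if port_in_use(8080):
    print("port 8080 is busy; pass --port to use another one")
```

If the probe reports the port busy, either stop the other process (here, the GAE GUI) or start the command-line server on a free port.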
Again thanks for the help of Mihail R!
The OP had multiple issues with the GAE setup, which were resolved by simply reinstalling the GAE Launcher: making sure the app was first copied into Applications from the .dmg file and run from Applications instead of from inside the .dmg, and granting the appropriate permissions so that GAE Launcher could create the symlinks it needs to work properly.
More instructions on proper GAE SDK installation can be found here: https://cloud.google.com/appengine/downloads after clicking on the needed SDK and then the OS the SDK will be installed on.
