requirements.txt:
pymystem3==0.2.0
main.py:
from pymystem3 import Mystem
import json
def hello_world(request):
    request_json = request.get_json()
    if request_json and 'text' in request_json:
        text = request_json['text']
        m = Mystem()
        lemmas = m.lemmatize(text)
        return json.dumps(m.analyze(text), ensure_ascii=False, sort_keys=True, indent=2)
    else:
        return 'No text provided'
Logs in Google Cloud:
hello_world
032ljsdfawo
Function execution started
hello_world
032ljsdfawo
Installing mystem to /tmp/.local/bin/mystem from http://download.cdn.yandex.net/mystem/mystem-3.1-linux-64bit.tar.gz
{
insertId: "000000-a9eaa07b-d936-45d8-a186-927cc94246d6"
labels: {…}
logName: "projects/project_name/logs/cloudfunctions.googleapis.com%2Fcloud-functions"
receiveTimestamp: "2018-09-26T09:41:31.070927654Z"
resource: {…}
severity: "ERROR"
textPayload: "Installing mystem to /tmp/.local/bin/mystem from http://download.cdn.yandex.net/mystem/mystem-3.1-linux-64bit.tar.gz"
timestamp: "2018-09-26T09:41:19.595Z"
}
I'm new to Python, so it's probably a very simple mistake. Are dependencies supposed to be installed and bundled with main.py somehow before deploying the function?
This is due to unfortunate behavior by the pymystem3 library: instead of including everything it needs in its Python package, it attempts to download additional files at runtime, specifically every time you import it.
You can see that the message in your logs is coming from the library itself.
It seems like the library is attempting to determine if it has already been installed or not, but this might not be working correctly. I'd recommend filing an issue on the project's issue tracker.
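That said, a common Cloud Functions pattern is to do heavy setup once per instance (at module load) and reuse it across requests. It won't remove the install message, since the download happens at import time, but it avoids re-creating the Mystem process on every request. A minimal sketch of the handler rewritten that way, using only the calls already in the question:
from pymystem3 import Mystem  # triggers the one-time mystem binary download on import
import json

# Created once per function instance (cold start) and reused across requests
mystem = Mystem()

def hello_world(request):
    request_json = request.get_json()
    if request_json and 'text' in request_json:
        text = request_json['text']
        return json.dumps(mystem.analyze(text), ensure_ascii=False, sort_keys=True, indent=2)
    return 'No text provided'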
I'm stuck on the following problem while creating my Flask-based blog.
At first I used CKEditor 4, but then upgraded to version 5.
I can't figure out how to handle image uploads on the server side now, with adapters etc. For version 4 I used the flask-ckeditor extension and the Flask documentation to handle image uploading.
I didn't find any examples for this combination. I understand that I lack knowledge, and I'm looking for advice on which direction to take and which concepts I should know to approach this subject.
Thanks in advance.
My take on this so far:
According to https://ckeditor.com/docs/ckeditor5/latest/features/image-upload/simple-upload-adapter.html (the official guide to the simplest adapter):
config.simpleUpload.uploadUrl should point to the same kind of /upload route that was used in CKEditor 4, and the object with a url property that CKEditor 5 expects is what CKEditor 4's upload_successful (returned by the /upload route) already produced.
So I figured it out.
As for CKEditor 4:
The /upload route handled the upload process by returning upload_successful() from the flask-ckeditor extension.
upload_successful() itself is a jsonify-ing helper, which turns its arguments into a JSON response.
As for CKEditor 5:
There were some things besides the upload handling itself that caused problems.
Plugin used: "Simple upload adapter"
I integrated CKEditor 5 by downloading it from the online builder and then reinstalling and rebuilding it myself. (For this, on Ubuntu 20.04, I installed nodejs and npm via sudo apt install.) The plugin is installed by executing, from the /static/ckeditor folder:
npm install
npm install --save @ckeditor/ckeditor5-upload
npm run build (this takes a little while)
Different adapters may conflict and prevent the editor from loading, so I removed the CKFinder adapter from src/ckeditor.js in the import and .builtinPlugins sections, replacing them with import SimpleUploadAdapter from '@ckeditor/ckeditor5-upload/src/adapters/simpleuploadadapter.js'; and SimpleUploadAdapter respectively.
The .html template where the CKEditor instance is created; body here is the name of the flask_wtf text field:
<script>
    ClassicEditor
        .create( document.querySelector( '#body' ), {
            extraPlugins: ['SimpleUploadAdapter'],
            simpleUpload: {
                uploadUrl: '/upload',
            },
            mediaEmbed: { previewsInData: true }
        } )
        .catch( error => {
            console.error( error.stack );
        } );
</script>
Things to notice:
In the official guide, plugins are recommended to be enabled like this:
.create( document.querySelector( '#editor' ), {
    plugins: [ Essentials, Paragraph, Bold, Italic, Alignment ],
For me this did not work: the editor would not load with that syntax. What worked is this (from the docs):
.create( document.querySelector( '#body' ), {
    extraPlugins: ['SimpleUploadAdapter'],
So, plugins -> extraPlugins and PluginName -> 'PluginName'.
The /upload route itself:
import os
from flask import current_app, request, send_from_directory, url_for, jsonify
from flask_ckeditor import upload_fail

@main.route('/files/<path:filename>')
def uploaded_files(filename):
    app = current_app._get_current_object()
    path = app.config['UPLOADED_PATH']
    return send_from_directory(path, filename)

@main.route('/upload', methods=['POST'])
def upload():
    app = current_app._get_current_object()
    f = request.files.get('upload')
    # Add more validations here
    extension = f.filename.split('.')[-1].lower()
    if extension not in ['jpg', 'gif', 'png', 'jpeg']:
        return upload_fail(message='Image only!')
    f.save(os.path.join(app.config['UPLOADED_PATH'], f.filename))
    url = url_for('main.uploaded_files', filename=f.filename)
    return jsonify(url=url)
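Worth noting about the response shape: if I read the Simple upload adapter docs right, CKEditor 5 expects a JSON object with a url key on success, which is exactly what jsonify(url=url) returns, but on failure it looks for an error object with a message field, so upload_fail() (which targets CKEditor 4) may not surface the error text in the editor. A minimal sketch of an error branch shaped for CKEditor 5:
    if extension not in ['jpg', 'gif', 'png', 'jpeg']:
        # Shape the Simple upload adapter reads on failure: {"error": {"message": ...}}
        return jsonify(error={'message': 'Image only!'})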
I will edit this answer as I advance in this subject.
I am building a script which automatically builds an AWS Lambda function. I follow this GitHub repo as inspiration.
However, the lambda_handler that I want to deploy has extra dependencies, such as numpy, pandas, or even lgbm. Simple example below:
import numpy as np

def lambda_handler(event, context):
    result = np.power(event['data'], 2)
    response = {'result': result}
    return response
Example of an event and its response:
event = {'data': [1, 2, 3]}
lambda_handler(event, None)
> {"result": [1,4,9]}
I would like to automatically add the needed layer while creating the AWS Lambda. For that I think I need to change the create_lambda_deployment_package function that is in lambda_basic.py in the repo. What I was thinking of doing is the following:
import zipfile
import glob
import io

def create_full_lambda_deployment_package(function_file_name):
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w') as zipped:
        # Adding all the files around the lambda_handler directory
        for i in glob.glob(function_file_name):
            zipped.write(i)
        # Adding the numpy directory
        for i in glob.glob('./venv/lib/python3.7/site-packages/numpy/*'):
            zipped.write(i, f'numpy/{i[41:]}')
    buffer.seek(0)
    return buffer.read()
Despite the fact that the Lambda is created and a 'numpy' folder appears in my Lambda environment, unfortunately this doesn't work (error: cannot import name 'integer' from partially initialized module 'numpy' (most likely due to a circular import) (/var/task/numpy/__init__.py)).
How could I fix this issue? Or is there maybe another way to solve my problem?
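One detail worth checking in the snippet above: glob.glob('.../numpy/*') is not recursive, so only numpy's top-level entries get written and the package lands in the zip incomplete, which would be consistent with the "partially initialized module" error. Below is a minimal sketch of a recursive variant; the site-packages path is taken from the question, everything else is an assumption, and numpy's compiled extensions still have to match the Lambda runtime (e.g. a manylinux wheel built for Python 3.7):
import io
import os
import zipfile

SITE_PACKAGES = './venv/lib/python3.7/site-packages'  # as in the question

def create_full_lambda_deployment_package(function_file_name, packages=('numpy',)):
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as zipped:
        # The handler file itself goes at the root of the archive
        zipped.write(function_file_name, os.path.basename(function_file_name))
        # Every file of every dependency, with paths kept relative to site-packages;
        # some packages also need sibling dirs such as *.dist-info or *.libs
        for package in packages:
            package_dir = os.path.join(SITE_PACKAGES, package)
            for root, _, files in os.walk(package_dir):
                for name in files:
                    full_path = os.path.join(root, name)
                    arcname = os.path.relpath(full_path, SITE_PACKAGES)
                    zipped.write(full_path, arcname)
    buffer.seek(0)
    return buffer.read()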
I'm working on a recommendation system for Spotify and I'm using spotipy in Python. I can't use the function current_user_recently_played, because Python says that the attribute current_user_recently_played isn't valid.
I don't know how to solve this problem, and I absolutely need this information to continue with my work.
This is my code:
import spotipy
import spotipy.util as util
import json

def current_user_recently_played(self, limit=50):
    return self._get('me/player/recently-played', limit=limit)

token = util.prompt_for_user_token(
    username="212887@studenti.unimore.it",
    scope="user-read-recently-played user-read-private user-top-read user-read-currently-playing",
    client_id="xxxxxxxxxxxxxxxxxxxxxx",
    client_secret="xxxxxxxxxxxxxxxxxxxxxx",
    redirect_uri="https://www.google.it/")

spotify = spotipy.Spotify(auth=token)
canzonirecenti = spotify.current_user_recently_played(limit=50)

out_file = open("canzonirecenti.json", "w")
out_file.write(json.dumps(canzonirecenti, sort_keys=True, indent=2))
out_file.close()

print(json.dumps(canzonirecenti, sort_keys=True, indent=2))
and the response is:
AttributeError: 'Spotify' object has no attribute 'current_user_recently_played'
The current_user_recently_played method exists in the source code on GitHub, but I don't seem to have it in my local installation. I think the version on the Python Package Index is out of date: the last change to the source code was 8 months ago, and the last change to the PyPI version was over a year ago.
I've gotten the code example to work by patching the Spotify client object to add the method myself, but this is generally not the best approach, since it adds custom behaviour to a particular instance rather than to the class itself.
import spotipy
import spotipy.util as util
import json
import types

def current_user_recently_played(self, limit=50):
    return self._get('me/player/recently-played', limit=limit)

token = util.prompt_for_user_token(
    username="xxxxxxxxxxxxxx",
    scope="user-read-recently-played user-read-private user-top-read user-read-currently-playing",
    client_id="xxxxxxxxxxxxxxxxxxxxxx",
    client_secret="xxxxxxxxxxxxxxxxxxxxxxxx",
    redirect_uri="https://www.google.it/")

spotify = spotipy.Spotify(auth=token)
# Attach the missing method to this particular Spotify instance
spotify.current_user_recently_played = types.MethodType(current_user_recently_played, spotify)

canzonirecenti = spotify.current_user_recently_played(limit=50)

out_file = open("canzonirecenti.json", "w")
out_file.write(json.dumps(canzonirecenti, sort_keys=True, indent=2))
out_file.close()

print(json.dumps(canzonirecenti, sort_keys=True, indent=2))
Other, more correct ways of getting it to work are:
installing it from the source on GitHub, instead of through pip
asking Plamere to update the version on PyPI
subclassing the Spotify client class and adding the missing methods to the subclass (probably the quickest and simplest)
Here's a partial snippet of the way I've subclassed it in my own project:
class SpotifyConnection(spotipy.Spotify):
    """Modified version of the spotipy.Spotify class.

    Main changes are:
    - implementing additional API endpoints (currently_playing, recently_played)
    - updating the main internal call method to update the session and retry once on error,
      due to an issue experienced when performing actions which require an extended time
      connected.
    """

    def __init__(self, client_credentials_manager, auth=None, requests_session=True, proxies=None,
                 requests_timeout=None):
        super().__init__(auth, requests_session, client_credentials_manager, proxies, requests_timeout)

    def currently_playing(self):
        """Gets whatever the authenticated user is currently listening to"""
        return self._get("me/player/currently-playing")

    def recently_played(self, limit=50):
        """Gets the last 50 songs the user has played.

        This doesn't include whatever the user is currently listening to, and no more than the
        last 50 songs are available.
        """
        return self._get("me/player/recently-played", limit=limit)
<...more stuff>
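For completeness, a rough usage sketch of the subclass with the same token flow as in the question (the constructor argument order mirrors spotipy's Spotify class as it was at the time; treat the keyword names as assumptions):
import spotipy.util as util

token = util.prompt_for_user_token(
    username="xxxxxxxxxxxxxx",
    scope="user-read-recently-played",
    client_id="xxxxxxxxxxxxxxxxxxxxxx",
    client_secret="xxxxxxxxxxxxxxxxxxxxxxxx",
    redirect_uri="https://www.google.it/")

# No client credentials manager here; authenticate with the user token instead
spotify = SpotifyConnection(client_credentials_manager=None, auth=token)
recent = spotify.recently_played(limit=50)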
I'm trying to create a Lambda function by uploading a zip file with a single .py file at the root and 2 folders which contain the requests lib downloaded via pip.
Running the code locally works fine. When I zip and upload the code, I very often get this error:
Unable to import module 'main': No module named requests
Sometimes I do manage to fix this, but it's inconsistent and I'm not sure how I'm doing it. I'm using the following command:
in the root dir: zip -r upload.zip *
This is how I'm importing requests:
import requests
FYI:
1. I have attempted a number of different import methods using the exact path, which have failed, so I wonder if that's the problem?
2. Every time this has failed and I've then managed to make it work in Lambda, it has involved a lot of fiddling with the zip command, as I thought the problem was that I was zipping the contents incorrectly and hiding them behind an extra parent folder.
Looking forward to seeing the silly mistake I've been making!
Adding code snippet:
import json      ## Built in
import requests  ## Packaged with
import sys       ## Built in

def lambda_function(event, context):
    alias = event['alias']
    message = event['message']
    input_type = event['input_type']

    if input_type == "username":
        username = alias
    elif input_type == "email":
        username = alias.split('@', 1)[0]
    elif input_type is None:
        print("input_type 'username' or 'email' required. Closing...")
        sys.exit()

    payload = {
        "text": message,
        "channel": "#" + username,
        "icon_emoji": "<an emoji>",
        "username": "<an alias>"
    }

    r = requests.post("<slackurl>", json=payload)
    print(r.status_code, r.reason)
I got some help outside the Stack Overflow loop, and this seems to work consistently:
zip -r upload.zip main.py requests requests-2.9.1.dist-info
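In other words, main.py and the two requests folders need to sit at the root of the archive, not inside a parent directory. If you would rather script the packaging than remember the zip incantation, here is a rough Python sketch of the same thing (the file and folder names come from the command above; anything else is an assumption):
import os
import zipfile

def build_upload_zip(zip_name="upload.zip",
                     handler_file="main.py",
                     package_dirs=("requests", "requests-2.9.1.dist-info")):
    """Zip the handler and its vendored packages with everything at the archive root."""
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zipped:
        zipped.write(handler_file, handler_file)
        for package_dir in package_dirs:
            for root, _, files in os.walk(package_dir):
                for name in files:
                    path = os.path.join(root, name)
                    # Keep paths relative so 'import requests' resolves when Lambda unpacks to /var/task
                    zipped.write(path, path)
    return zip_name

build_upload_zip()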
I am trying to use gdata within a Django app to create a directory in my Google Drive account. This is the code written within my Django view:
def root(request):
    from req_info import email, password
    from gdata.docs.service import DocsService

    print "Creating folder........"
    folder_name = '2015-Q1'
    service_client = DocsService(source='spreadsheet create')
    service_client.ClientLogin(email, password)
    folder = service_client.CreateFolder(folder_name)
Authentication occurs without issue, but that last line of code triggers the following error:
Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 1.7.7
Exception Type: RequestError
Exception Value: {'status': 501, 'body': 'POST method does not support concurrency', 'reason': 'Not Implemented'}
I am using the following software:
Python 2.7.8
Django 1.7.7
PyCharm 4.0.5
gdata 2.0.18
google-api-python-client 1.4.0 (not sure if relevant)
[many other packages that I'm not sure are relevant]
What's frustrating is that the exact same code (see below) functions perfectly when I run it in its own, standalone file (not within a Django view).
from req_info import email, password
from gdata.docs.service import DocsService
print "Creating folder........"
folder_name = '2015-Q1'
service_client = DocsService(source='spreadsheet create')
service_client.ClientLogin(email, password)
folder = service_client.CreateFolder(folder_name)
I run this working code in the same virtual environment and the same PyCharm project as the code that produced the error. I have tried putting the code within a function in a separate file, and then having the Django view call that function, but the error persists.
I would like to get this code working within my Django app.
I don't recall if I got this to work within a Django view, but because Google has since required the use of OAuth 2.0, I had to rework this code anyway. I think the error had something to do with my simultaneous use of two different packages/clients to access Google Drive.
Here is how I ended up creating the folder using the google-api-python-client package:
from google_api import get_drive_service_obj, get_file_key_if_exists, insert_folder

def create_ss():
    drive_client, credentials = get_drive_service_obj()

    # creating folder if it does not exist
    folder_name = '2015-Q1'
    folder = get_file_key_if_exists(drive_client, folder_name)
    if folder:  # if folder exists
        print 'Folder "' + folder_name + '" already exists.'
    else:  # if folder doesn't exist
        print 'Creating folder........"' + folder_name + '".'
        folder = insert_folder(drive_client, folder_name)
After this code, I used a forked version (currently beta) of sheetsync to copy my template spreadsheet and populate the new file with my data. I then had to import sheetsync after the code above to avoid the "concurrency" error. (I could post the code involving sheetsync here too if folks want, but for now, I don't want to get too far off topic.)
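The helpers used above (get_drive_service_obj, get_file_key_if_exists, insert_folder) come from a small custom google_api module that isn't shown here. As a rough illustration only (not the actual module), creating a folder through google-api-python-client against the Drive v2 API generally boils down to something like this:
def insert_folder(drive_client, folder_name):
    """Create a Drive folder (v2 API) and return the resulting file resource."""
    body = {
        'title': folder_name,
        'mimeType': 'application/vnd.google-apps.folder',
    }
    return drive_client.files().insert(body=body).execute()

# drive_client above would be a Drive v2 service object, built roughly like:
#   from googleapiclient.discovery import build
#   drive_client = build('drive', 'v2', http=authorized_http)  # after completing the OAuth 2.0 flow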