How can I add a Members folder for my functional tests in plone.app.testing, so that it is findable the same way as on a real site?
I have set the member area creation flag in the installation step of the product I'm testing:
membership.memberareaCreationFlag = 1
I need to get this test working:
import unittest

import transaction
from Products.CMFCore.utils import getToolByName
from plone.app.testing import login

class TestMemberFolder(unittest.TestCase):

    layer = MY_FUNCTIONAL_TESTING

    def setUp(self):
        self.portal = self.layer['portal']

    def test_members_folder(self):
        membership = getToolByName(self.portal, 'portal_membership')
        membership.addMember("basicuser", "secret", ["Member"], [])
        transaction.commit()
        login(self.portal, "basicuser")

        # This works just fine, because it was set by my product
        self.assertEqual(membership.memberareaCreationFlag, 1,
                         "memberareaCreationFlag must be 1 when it is enabled")

        members_folder = membership.getMembersFolder()
        # But this fails
        self.assertIsNotNone(members_folder)
        # Also we should have the user folder here
        self.assertTrue(members_folder.hasObject('basicuser'))
I specifically need the Members folder functionality; just a folder owned by the test user does not cut it.
I also tried creating a new user with acl_users.userFolderAddUser, but that does not help either.
The memberareaCreationFlag works just fine on a live Plone site.
I finally figured it out.
First, membership.memberareaCreationFlag = 1 is not enough to enable member folders. They must be enabled through the SecurityControlPanelAdapter in plone.app.controlpanel.security:
from plone.app.controlpanel.security import ISecuritySchema
# Fetch the adapter
security_adapter = ISecuritySchema(portal)
security_adapter.set_enable_user_folders(True)
Also, the functional testing fixture does not create the Members folder automatically, but it is possible to install it manually in your fixture class:
from plone.app.testing import PloneSandboxLayer, PLONE_FIXTURE
from plone.testing import z2

class YourPloneFixture(PloneSandboxLayer):

    defaultBases = (PLONE_FIXTURE,)

    def setUpZope(self, app, configurationContext):
        # Required by Products.CMFPlone:plone-content
        z2.installProduct(app, 'Products.PythonScripts')

    def setUpPloneSite(self, portal):
        # Installs all the Plone stuff. Workflows etc.
        self.applyProfile(portal, 'Products.CMFPlone:plone')
        # Install portal content. Including the Members folder!
        self.applyProfile(portal, 'Products.CMFPlone:plone-content')
Finally, member folders are created upon user login, and the login helper function in plone.app.testing seems to be too low level to trigger this. Logging in with zope.testbrowser does the trick:
from plone.app.testing import TEST_USER_NAME, TEST_USER_PASSWORD
from plone.testing.z2 import Browser

browser = Browser(self.layer['app'])
browser.open(self.portal.absolute_url() + '/login_form')
browser.getControl(name='__ac_name').value = TEST_USER_NAME
browser.getControl(name='__ac_password').value = TEST_USER_PASSWORD
browser.getControl(name='submit').click()
Phew.
self.assert_ isn't a testing method; use something like self.assertTrue or self.assertIsNotNone.
To add member folders, just turn on member folder creation and add a new user (a sketch of the enabling step follows after the example below).
Something like
def setUpPloneSite(self, portal):
    # Install into Plone site using portal_setup
    quickInstallProduct(portal, 'Products.DataGridField')
    quickInstallProduct(portal, 'Products.ATVocabularyManager')
    quickInstallProduct(portal, 'Products.MasterSelectWidget')
    if HAVE_LP:
        quickInstallProduct(portal, 'Products.LinguaPlone')
    applyProfile(portal, 'vs.org:default')
    portal.acl_users.userFolderAddUser('god', 'dummy', ['Manager'], [])
    setRoles(portal, 'god', ['Manager'])
    login(portal, 'god')
works perfectly for us.
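For the "turn on member folder creation" step, a minimal sketch reusing the ISecuritySchema adapter from the accepted answer (untested in this exact fixture):

# inside setUpPloneSite(), before adding users:
from plone.app.controlpanel.security import ISecuritySchema

# Enable member folder creation so the Members area is
# populated when the user first logs in.
ISecuritySchema(portal).set_enable_user_folders(True)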
Using this as a reference: https://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html
I cannot get links to show in the UI. I have tried adding the link within the operator itself and building a separate extra_link.py file to add it, and the link doesn't show up when looking at the task in graph or grid view. Here is my code for creating it in the operator:
from abc import ABC

from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
from airflow.models.skipmixin import SkipMixin
from airflow.plugins_manager import AirflowPlugin

class upstream_link(BaseOperatorLink):
    """Create a link to the upstream task"""
    name = "Test Link"

    def get_link(self, operator, *, ti_key):
        return "https://www.google.com"

# Defining the plugin class
class AirflowExtraLinkPlugin(AirflowPlugin):
    name = "integration_links"
    operator_extra_links = [
        upstream_link(),
    ]

class BaseOperator(BaseOperator, SkipMixin, ABC):
    """Base Operator for all integrations"""

    operator_extra_links = (upstream_link(),)
This is a custom BaseOperator class used by a few operators in my deployment. I don't know whether the inheritance is causing the issue. Any help would be greatly appreciated.
Also, the goal is to have this on mapped tasks; this does work with mapped tasks, right?
Edit: Here is the code I used when I tried the standalone-file approach in the plugins folder:
from airflow.models.baseoperator import BaseOperatorLink
from plugins.operators.integrations.base_operator import BaseOperator
from airflow.plugins_manager import AirflowPlugin

class upstream_link(BaseOperatorLink):
    """Create a link to the upstream task"""
    name = "Upstream Data"
    operators = [BaseOperator]

    def get_link(self, operator, *, ti_key):
        return "https://www.google.com"

# Defining the plugin class
class AirflowExtraLinkPlugin(AirflowPlugin):
    name = "extra_link_plugin"
    operator_extra_links = [
        upstream_link(),
    ]
Custom plugins should be defined in the plugins folder (by default $AIRFLOW_HOME/plugins) so that the plugin manager processes them.
Try creating a new script in the plugins folder and moving the AirflowExtraLinkPlugin class into that script; it should work.
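That is, a layout along these lines (the file name is illustrative):

$AIRFLOW_HOME/plugins/
    extra_link_plugin.py    # defines AirflowExtraLinkPlugin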
The issue turned out to be the inheritance. Attaching the extra link to a base class does not carry through to its children, as Airflow seems to look for the specific operator name. Extra links also do not seem to work with mapped tasks.
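In other words, every concrete operator needs to be listed on the link itself. A minimal sketch (MyConcreteOperator and its module path are hypothetical stand-ins for one of the real subclasses):

from airflow.models.baseoperator import BaseOperatorLink
from airflow.plugins_manager import AirflowPlugin

# Hypothetical concrete subclass of the custom base operator
from plugins.operators.integrations.my_concrete_operator import MyConcreteOperator

class upstream_link(BaseOperatorLink):
    name = "Upstream Data"
    # List each concrete operator class explicitly; registering
    # the shared base class does not propagate to its children.
    operators = [MyConcreteOperator]

    def get_link(self, operator, *, ti_key):
        return "https://www.google.com"

class AirflowExtraLinkPlugin(AirflowPlugin):
    name = "extra_link_plugin"
    operator_extra_links = [upstream_link()]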
I am trying to run functional tests (using Selenium in Python/Django) directly from a Django view by using management.call_command, in order to let the user run the tests from the web site. The Django view is something like:
from django.core.management import call_command

class MyView():
    def get(self, request):
        # equivalent to: ./manage.py test folder.tests.MyTest
        output = call_command('test', 'folder.tests.MyTest')
        test_result = 'Test result: ' + output
        return something_http_with_test_result
What is the best way to do this without affecting the current user's data? MyTest is going to create a lot of objects in the database, but the user must not see them.
Thank you
The best way I found to run it properly is to use os.system:
import os

dir = 'your_absolute_path_to_project'

class MyView:
    def some_func_called(self):
        os.system(dir + '/manage.py test > ' + dir + '/log.txt')
The project and Firefox have to have the same owner, and the results will be in log.txt. Note that the test runner creates its own temporary test database and destroys it afterwards, so the user's data is not affected.
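If you need the output back in the view rather than in a log file, a variant sketch using subprocess (untested; the path handling and test label are illustrative):

import subprocess

def run_tests(project_dir, label='folder.tests.MyTest'):
    # Run the suite in a separate process and capture its output.
    # The test runner still creates and destroys its own test database.
    result = subprocess.run(
        [project_dir + '/manage.py', 'test', label],
        capture_output=True, text=True)
    return result.stdout + result.stderr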
I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image for a given model.
If the requested model has not yet been served, it is downloaded from a remote URL to the folder where the server's models are located (the client does this). At that point I need to update the model_config and trigger the server to reload it.
This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.
I am essentially looking for a Python script with which I can trigger the reload from the client side (or otherwise a way to configure the server to listen for changes and trigger the reload itself).
It took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question, here is an example of how to do it. (You'll need the tensorflow_serving package for this: pip install tensorflow-serving-api.)
Based on this pull request (which at the time of writing hadn't been accepted and was closed as needing review): https://github.com/tensorflow/serving/pull/1065
import grpc
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    print(request.IsInitialized())
    print(request.ListFields())

    # The second argument is the gRPC timeout in seconds
    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reload successful")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)

add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")
This adds a model to the TF Serving server and to the existing config file conf_filepath, using the arguments name, base_path, and model_platform for the new model. It keeps the original models intact.
Notice a small difference from @Karl's answer: using MergeFrom instead of CopyFrom.
pip install tensorflow-serving-api
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

def add_model_config(conf_filepath, host, name, base_path, model_platform):
    with open(conf_filepath, 'r+') as f:
        config_ini = f.read()
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()
    config_list = model_server_config_pb2.ModelConfigList()
    model_server_config = text_format.Parse(text=config_ini, message=model_server_config)

    # Create a config to add to the list of served models
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.MergeFrom(config_list)
    request.config.CopyFrom(model_server_config)

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        with open(conf_filepath, 'w+') as f:
            f.write(request.config.__str__())
        print("Updated TF Serving conf file")
    else:
        print("Failed to update model_config_list!")
        print(response.status.error_code)
        print(response.status.error_message)
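Called the same way as @Karl's version, plus the path to the config file the server was started with (values are illustrative):

add_model_config(conf_filepath="/models/models.config",
                 host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")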
While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the --model_config_file_poll_wait_seconds flag.
As mentioned in the documentation:
By setting the --model_config_file_poll_wait_seconds flag, you instruct the server to periodically check for a new config file at the --model_config_file filepath.
So you just have to update the config file at model_config_path, and tf-serving will load any new models and unload any models removed from the config file.
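For example, a server launched along these lines (paths and the poll interval are illustrative) will pick up config changes by itself:

tensorflow_model_server \
    --port=8500 \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60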
Edit 1: I looked at the source code, and the flag has been present since a very early version of tf-serving, but there have been instances where some users could not use it (see this). So try to use the latest version if possible.
If you're using the method described in this answer, please note that you're actually launching multiple TensorFlow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency.
I am new to Python and Firebase, and I am trying to flatten my Firebase database.
I have a database in this format:
Each cat has thousands of entries in it. All I want is to fetch the cat names and put them in an array; for example, I want the output to be ['cat1', 'cat2', ...].
I was using this tutorial: http://ozgur.github.io/python-firebase/
from firebase import firebase
firebase = firebase.FirebaseApplication('https://your_storage.firebaseio.com', None)
result = firebase.get('/Data', None)
The problem with the above code is that it will attempt to fetch all the data under Data. How can I fetch only the cat names?
If you want to get the values inside the cats as columns, try using Pyrebase: run pip install pyrebase at the cmd or Anaconda prompt (the latter is preferred if you haven't added pip or Python to your environment paths). After installing:
import pyrebase

config = {"apiKey": yourapikey,
          "authDomain": yourauthdomain,
          "databaseURL": yourdatabaseurl,
          "storageBucket": yourstoragebucket,
          "serviceAccount": yourserviceaccount
          }
Note: you can find all the information above in your Firebase console:
https://console.firebase.google.com/project/ >>> your project >>> click on the "</>" icon labeled "Add Firebase to your web app".
Back to the code... Make a neat function definition so you can store it in a .py file:
def connect_firebase():
    # add a way to encrypt these; I'm a starter myself and don't know how
    username = "usernameyoucreatedatfirebase"
    password = "passwordforaboveuser"

    firebase = pyrebase.initialize_app(config)
    auth = firebase.auth()

    # authenticate a user (figure out how not to leave this hardcoded)
    user = auth.sign_in_with_email_and_password(username, password)

    # user['idToken']
    # On Pyrebase's git the author said the token expires every hour,
    # so it needs to be refreshed
    user = auth.refresh(user['refreshToken'])

    # set database
    db = firebase.database()
    return db
OK, now save this into a neat .py file.
Next, in your new notebook or main .py file, import this new .py file, which we'll call auth.py from now on...
import pandas as pd

from auth import *

# assign the connection to a variable
db = connect_firebase()

# and now the hard/easy part that took me a while to figure out:
# notice the value inside .child(); it should be the parent node holding all the cat keys
values = db.child('cats').get()

# to load everything into a dataframe you'll need to use .val()
data = pd.DataFrame(values.val())

And that's it; print(data.head()) to check whether the values and columns are where you expect them to be.
Firebase Realtime Database is one big JSON tree:
when you fetch data at a location in your database, you also retrieve all of its child nodes.
The best practice is to denormalize your data, creating multiple locations (nodes) for the same data:
Many times you can denormalize the data by using a query to retrieve a subset of the data
In your case, you may create a second node named "categories" where you list only the category names:
/cat1
    /...
/cat2
    /...
/cat3
    /...
/cat4
    /...
/categories
    /cat1
    /cat2
    /cat3
    /cat4
In this scenario you can use the update() method to write to more than one location at the same time.
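For example, a minimal fan-out write with the Pyrebase client from the other answers (node and field names are illustrative):

# Write the cat's data and its entry in the "categories" index
# in a single multi-location update, keyed by path from the root.
db.update({
    "cats/cat5/name": "cat5",
    "categories/cat5": True,
})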
I was exploring the Pyrebase documentation. According to it, we can extract only the keys at a particular path.
To return just the keys at a particular path use the shallow() method.
all_user_ids = db.child("users").shallow().get()
In your case, it'll be something like:
firebase = pyrebase.initialize_app(config)
db = firebase.database()
allCats = db.child("data").shallow().get()
Let me know if this doesn't help.
I want to generate temporary download links for my users.
Is it OK if I use Django to generate the links using URL patterns?
Would that be the correct way to do it? It may be that I don't understand how some of the underlying processes work, and that this could overflow my memory or something else. Some kind of example or tools would be appreciated; perhaps some nginx or Apache modules?
So, what I want to achieve is a URL pattern that depends on the user and time. Decrypt it and return the file in a view.
A simple scheme might be to use a hash digest of username and timestamp:
from datetime import datetime
from hashlib import sha1

user = 'bob'
time = datetime.now().isoformat()
plain = user + '\0' + time
token = sha1(plain.encode('utf-8'))
print(token.hexdigest())
# "1e2c5078bd0de12a79d1a49255a9bff9737aa4a4"
Next you store that token in memcache with an expiration time. This way any of your webservers can reach it, and the token will auto-expire. Finally, add a Django URL handler for '^download/.+' where the view just looks the token up in memcache to determine whether it is valid. You can even store the filename to be downloaded as the token's value in memcache.
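A minimal sketch of that flow using Django's cache framework (which can be backed by memcached); the function names and TTL are illustrative:

from django.core.cache import cache
from django.http import FileResponse, Http404

TOKEN_TTL = 60 * 60  # the link auto-expires after one hour

def issue_token(token, filepath):
    # Store the file path under the token; the cache evicts it after TOKEN_TTL.
    cache.set('download-' + token, filepath, TOKEN_TTL)

def download(request, token):
    filepath = cache.get('download-' + token)
    if filepath is None:
        raise Http404("Link expired or invalid")
    return FileResponse(open(filepath, 'rb'), as_attachment=True)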
Yes, it would be OK to let Django generate the URLs, this being separate from handling the URLs with urls.py. But typically you don't want Django to handle the serving of files (see the static file docs [1] about this), so get the notion of using URL patterns out of your head.
What you might want to do is generate a random key using a hash like md5/sha1. Store the file, the key, and the datetime it was added in the database, and create the download directory under a root directory that your webserver (Apache or nginx; I suggest nginx) can serve. Since it's temporary, you'll want a cron job that checks whether the time since the URL was generated has expired, cleans up the file, and removes the DB entry. This should be a Django command for manage.py.
Please note this is example code written just for this answer and not tested! It may not work the way you were planning to achieve this goal. If you also want the download to be password protected, look into HTTP basic auth; you can generate and remove entries in an httpd.auth file on the fly using htpasswd and the subprocess module when you create the link or at registration time.
import hashlib, random, datetime, os, shutil

# model to hold link info. has these fields: key (charfield), filepath (filepathfield)
# datetime (datetimefield), url (charfield), orgpath (filepathfield of the original path
# or a foreignkey to the files model).
from models import MyDlLink

# settings.py for the app
from myapp import settings as myapp_settings

# full path and name of file to dl.
def genUrl(filepath):
    # create a one-time salt for randomness
    salt = ''.join(['{0}'.format(random.randrange(10)) for i in range(10)])
    key = hashlib.sha1('{0}{1}'.format(salt, filepath).encode('utf-8')).hexdigest()
    newpath = os.path.join(myapp_settings.DL_ROOT, key)
    shutil.copy2(filepath, newpath)
    newlink = MyDlLink()
    newlink.key = key
    newlink.date = datetime.datetime.now()
    newlink.orgpath = filepath
    newlink.newpath = newpath
    newlink.url = "{0}/{1}/{2}".format(
        myapp_settings.DL_URL, key, os.path.basename(filepath))
    newlink.save()
    return newlink

# in commands
def check_url_expired():
    maxage = datetime.timedelta(days=7)
    now = datetime.datetime.now()
    for link in MyDlLink.objects.all():
        if (now - link.date) > maxage:
            os.remove(link.newpath)
            link.delete()
[1] http://docs.djangoproject.com/en/1.2/howto/static-files/
It sounds like you are suggesting some kind of dynamic URL conf.
Why not forget your concerns by simplifying, and set up a single URL that captures a large encoded string that depends on user/time?
(r'^download/(?P<encrypted_id>.*)/$', 'download_file'),  # use your own regexp

def download_file(request, encrypted_id):
    decrypted = decrypt(encrypted_id)
    _file = get_file(decrypted)
    return _file
A lot of sites just use a GET param, too:
www.example.com/download_file/?09248903483o8a908423028a0df8032
If you are concerned about performance, look at the answers in this post: Having Django serve downloadable files, where the use of the Apache X-Sendfile module is highlighted.
Another alternative is to simply redirect, from Django, to the static file served by whatever means.
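A minimal sketch of that redirect approach (decrypt is the hypothetical helper from the snippet above; serving from MEDIA_URL is an assumption):

from django.conf import settings
from django.shortcuts import redirect

def download_file(request, encrypted_id):
    # Resolve the token to a relative path, then let the front-end
    # webserver serve the actual bytes; Django only issues the redirect.
    relative_path = decrypt(encrypted_id)
    return redirect(settings.MEDIA_URL + relative_path)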