I have created a Django app. Now I want to test the application's performance with around 5000 records. Is there any method to do this? Is there a way to insert this data into the DB randomly and run the application? I believe there should be some method rather than typing 5000 records into the DB manually. The database I use is MySQL. I am quite new to Python and Django, so please help me solve this. Thanks in advance.
Check out django-dilla. It helps you spam your database with random data.
There's no really good documentation, but you'll need to install it, add it to your INSTALLED_APPS, and then run python manage.py run_dilla from the command line.
To view the options, run python manage.py run_dilla --help, which lets you specify counts and the apps to "spam". Check out the command source for implementation details.
pycheesecake.org is a good source of testing tools for Python.
You might also want to check out django-autofixture, a nice tool for inserting randomly generated data into the database, and django-mockups, a fork of django-autofixture.
Here is a sample usage of django-autofixture:
django-admin.py loadtestdata [options] app.Model:# [app.Model:# ...]
You just supply the app name, model and the number of objects that you want to create.
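For example, to create 500 instances of a hypothetical Post model in an app called blog (both names are placeholders, not from the question):

django-admin.py loadtestdata blog.Post:500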
I would like to make Odoo read the admin_passwd from a file existing on the file system.
My use case is this:
I am running an odoo:10.0 container instance on a Docker swarm-mode cluster and would like to share the necessary credentials using docker secrets.
For instance, admin_passwd would be found at /run/secrets/admin_passwd... etc.
Is this type of configuration supported in Odoo?
If not, please shed some light on what may help me extend Odoo and develop such a module.
Thanks in advance!
In Odoo, passwords are stored in PBKDF2 format for security reasons. They are stored in the res_users table, if you need to query them directly.
You cannot easily reverse them. ;)
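As for reading admin_passwd from a Docker secret: one pragmatic workaround (only a sketch, not a built-in Odoo feature) is a small wrapper that patches the configuration file before the server starts. It assumes the secret is mounted at /run/secrets/admin_passwd and a stock odoo.conf with an [options] section, as in the official odoo:10 image:

try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2 (Odoo 10 era)

SECRET_FILE = '/run/secrets/admin_passwd'  # Docker secret mount point
CONF_FILE = '/etc/odoo/odoo.conf'          # default path in the official image

# Copy the secret into the config file Odoo will load at startup.
config = configparser.ConfigParser()
config.read(CONF_FILE)
with open(SECRET_FILE) as f:
    config.set('options', 'admin_passwd', f.read().strip())
with open(CONF_FILE, 'w') as f:
    config.write(f)

Run it in the container's entrypoint before launching the odoo process.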
I have gone through http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tutorial/authentication.html, but it gives no clue about how to add a database to this for storing email addresses and passwords.
The introduction to the Quick Tutorial describes its purpose and intended audience. Authentication and persistent storage are not covered in the same lesson, but in two different lessons.
Either you can combine learning from previous steps (not recommended) or you can take a stab at the SQLAlchemy + URL dispatch wiki tutorial, which covers a typical web application with authentication, authorization, hashing of passwords, and persistent storage in an SQL database.
Note, however, that it uses SQLite, not MySQL, as its SQL database, so you'll either have to use the provided one or swap it out for your preferred SQL database.
Here are a few suggestions regarding switching from SQLite to MySQL. In your development.ini (and/or production.ini) file, change from SQLite to MySQL:
# sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite [comment out or remove this line]
sqlalchemy.url = mysql://MySQLUsername:MySQLPassword@localhost/MySQLdbName
Of course, you will need a MySQL database (MySQLdbName in the example above) and likely the knowledge and privileges to edit its metadata, for example, to add fields called user_email and passwordhash to the users table or create a users table if necessary.
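As a rough illustration of that schema (the model and method names here are invented, not prescribed by the tutorial), a users table with those fields might be declared in SQLAlchemy like this, using the bcrypt package from the requires list below for password hashing:

import bcrypt
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    user_email = Column(Text, unique=True)
    passwordhash = Column(Text)

    def set_password(self, password):
        # Store only the bcrypt hash, never the plain password.
        self.passwordhash = bcrypt.hashpw(
            password.encode('utf8'), bcrypt.gensalt()).decode('utf8')

    def check_password(self, password):
        return bcrypt.checkpw(
            password.encode('utf8'), self.passwordhash.encode('utf8'))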
In your setup.py file, you will need to add the mysql-python module to the requires list. For example:
requires = [
    'bcrypt',
    'pyramid',
    'pyramid_jinja2',
    'pyramid_debugtoolbar',
    'pyramid_tm',
    'SQLAlchemy',
    'transaction',
    'zope.sqlalchemy',
    'waitress',
    'mysql-python',
]
After specifying new module(s) in setup.py, be sure to run the following commands so your project recognizes the new module(s):
cd $VENV/MyPyramidProject
sudo $VENV/bin/pip install -e .
By this point, your Pyramid project should be hooked up to MySQL. Now it is down to learning the details of Pyramid (and SQLAlchemy, if this is your selected ORM). Most of the suggestions in the tutorials, particularly the SQLAlchemy + URL dispatch wiki tutorial in your case, should work just as they do with SQLite.
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] - in response to Srikar's question "When you are saying 'get that data into the Django app', what exactly do you mean?"...
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
I think Celery is what you are looking for.
You can write a custom admin command to load data according to your needs and run that command through a cron job. You can refer to Writing custom commands.
You can also try the existing loaddata command, but it loads data from fixtures in your app directory.
I have done the same thing.
Firstly, my script was already parsing the emails and storing them in a db, so I set the db up in settings.py and used python manage.py inspectdb to create a model based on that db.
Then it's just a matter of building a view to display the information from your db.
If your script doesn't already use a db, it would be simple to create a model with the information you want stored and then have your script write to the tables described by that model.
Forget about this being a Django app for a second. It is just a load of Python code.
What this means is, your Python script is absolutely free to import the database models you have in your Django app and use them as you would in a standard module in your project.
The only difference here is that you may need to take care to import everything Django needs to work with those modules, whereas when a request enters through the normal web interface, that is taken care of for you.
Just import Django and any modules from your app (models.py or anything else) that you need for it to work. It is your code, not a black box. You can import it from wherever the hell you want.
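For example, a standalone script can bootstrap Django explicitly before importing models (the project, app, and model names below are placeholders for your own):

import os
import django

# Point Django at your settings module before importing any models.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django.setup()  # needed on Django 1.7+; older versions only need the env var

from myapp.models import EmailRecord  # hypothetical model

EmailRecord.objects.create(subject='Hello', body='parsed from the inbox')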
EDIT: The link from Rohan's answer to the Django docs for custom management commands is definitely the least painful way to do what I said above.
When you are saying "get that data into the Django app", what exactly do you mean?
I am guessing that you are using some sort of database (like MySQL). Insert whatever data you have collected from your cron job into the same tables that your Django app is accessing. That way your changes are immediately reflected to the users of the app, since they read from the same tables.
Best way?
Make a view on the Django side to handle receiving the data, and have your script do an HTTP POST to a URL registered to that view.
You could also import the model and such from inside your script, but I don't think that's a very good idea.
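For what it's worth, a minimal sketch of such a Django view might look like this (the model and field names are placeholders, not from the question):

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from myapp.models import EmailRecord  # hypothetical app and model

@csrf_exempt  # the cron script has no CSRF token
def receive_email_data(request):
    if request.method != 'POST':
        return HttpResponse(status=405)
    EmailRecord.objects.create(
        subject=request.POST.get('subject', ''),
        body=request.POST.get('body', ''),
    )
    return HttpResponse('ok')

Register it in urls.py and point the script's POST at that URL.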
Have your script send an HTTP POST request like the following, using the Requests library:
>>> import requests
>>> files = {'report.xls': open('report.xls', 'rb')}
>>> r = requests.post(url, files=files)
>>> r.text
On the receiving side, you can use web.py to process the info: import web and write a class whose POST method reads the input, for example:

import web

class Receiver:
    def POST(self):
        x = web.input()  # the parsed POST parameters
        # do whatever you want with x here
If you don't want to use HTTP to send messages back and forth, you could just have the script write the email info to a .txt file and then have your Django app open and read the file.
EDIT:
You could set your cron job to read the emails at, say, 8AM and then write them to a text file info.txt. Then in your code write something like:
import time

# time.strftime("%H") is zero-padded, so 9AM is "09", not "9"
if time.strftime("%H") == "09":
    with open('info.txt') as f:
        info = f.read()
That will check the file between 9AM and 10AM. If you want it to check only once, just add the minutes to the if statement as well.
I have a large number of automatically generated HTML files that I would like to push to my Plone website with a script. Currently I generate the files, log into Plone, click edit on each individual page, and copy and paste the HTML into the editor. I'd like to automate this. It would be nice to retain the Plone versioning, have an auto-generated comment for the edit, and have it come from a specific user.
I've read about and tried WebDAV with little luck at getting it working consistently, and I know there is a way to connect to Plone via FTP, but I haven't tried it. I'm not sure if these are the methods I need.
My google searches aren't leading me to anything useful. Any ideas on where to start looking for a solution to this? Or any tips on implementing it?
You can script anything in Plone via the following methods:
Through-the-web via API calls (e.g. XML-RPC, wsapi, etc.)
The bin/instance run script provided by plone.recipe.zope2instance (See charm for an example of this).
You can also use a migration framework like:
collective.transmogrifier
which allows you to write migration code, and trigger it via GenericSetup or Browser view. Additionally, there are applications written on top of Transmogrifier aimed roughly at what you are describing, the most popular of which is:
funnelweb
I would recommend that you consider using or writing a Transmogrifier "blueprint(s)" to do your import, and execute the pipeline with a tool that makes that easy:
mr.migrator
You can find blueprints by searching PyPI for "transmogrify". One popular set of blueprints is:
quintagroup.transmogrifier
One of the main attractions to the Transmogrifier approach, aside from getting the job done, is the ability to share useful blueprints with others.
I think Transmogrifier is the best tool for this job, but this will definitely be a programming task no matter how you do it. It's used for many such migration jobs, such as migrating from Drupal.
There's an add-on, wsapi4plone.core, that pumazi at WebLion started, which provides web services for portals that you can then hook into. You can create, modify, and delete content via XML-RPC calls. The only caveat is that it doesn't yet work with Collections (criteria, specifically).
project: http://pypi.python.org/pypi/wsapi4plone.core
docs: http://packages.python.org/wsapi4plone.core/
You can also do it programmatically by hooking into the ZODB via Python (zopepy or some other method).
These should get you started:
http://plone.org/documentation/kb/manipulating-plone-objects-programmatically/reading-and-writing-field-values - you should be able to get an understanding of accessors and mutators (getters and setters); in your case you are more than likely going to be working with obj.Text (getter) and obj.setText (setter). See the sketch after these links.
https://weblion.psu.edu/trac/weblion/wiki/AutomatingObjectCreation - lots of examples (slightly outdated but still relevant)
http://plone.org/documentation/faq/upload-images-files
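Pulling those pieces together, a script for bin/instance run might look roughly like this (the site and page ids are examples; app is the root object that bin/instance run provides):

import transaction

site = app.Plone                             # your Plone site id
page = site.unrestrictedTraverse('my-page')  # the page to update
page.setText('<p>generated html</p>')        # the setText mutator mentioned above
page.reindexObject()                         # keep the catalog in sync
transaction.commit()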
Try enabling WebDAV or FTP in Plone; then you can access Plone via WebDAV or FTP clients, pushing the HTML files. Plone (Zope) will recognise the HTML files as Pages.
I need to populate my database with a bunch of dummy entries (around 200+) so that I can test the admin interface I've made, and I was wondering if there is a better way to do it. I spent the better part of yesterday trying to fill it in by hand (i.e. by wrapping stuff like my_model(title="asdfasdf", field2="laksdj"...) in a bunch of "for x in range(0,200):" loops) and gave up because it didn't work the way I expected it to. I think this is what I need to use, but don't you need to have (existing) data in the database for this to work?
Check this app
https://github.com/aerosol/django-dilla/
Let's say you wrote your blog application (oh yeah, your favorite!) in Django. Unit tests went fine, and everything runs extremely fast, even those ORM-generated ultra-long queries. You've added several categorized posts and it's still stable as a rock. You're quite sure the app is efficient and ready for live deployment. Right? Wrong.
You can use fixtures for this purpose, and the loaddata management command.
One approach is to do it like this:
1. Prepare your test database.
2. Use dumpdata to create a JSON export of the database.
3. Put this in the fixtures directory of your application.
4. Write your unit tests to load this "fixture": https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.TransactionTestCase.fixtures
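Concretely, the export step looks something like this (the app and fixture names are placeholders):

python manage.py dumpdata myapp --indent 2 > myapp/fixtures/test_data.json

and a test can then load the fixture by name:

from django.test import TestCase
from myapp.models import MyModel

class AdminViewTests(TestCase):
    fixtures = ['test_data']  # loaded fresh for every test

    def test_fixture_loaded(self):
        self.assertTrue(MyModel.objects.exists())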
Django fixtures provide a mechanism for importing data on syncdb. However, doing this initial data propagation is often easier via Python code. The technique you outline should work, either via syncdb or a management command. For instance, via syncdb, in my_app/management.py:
from django.db.models import signals
from myapp.models import MyModel

def init_data(sender, **kwargs):
    for i in range(1000):
        MyModel(number=i).save()

signals.post_syncdb.connect(init_data)
Or, in a management command in myapp/management/commands/my_command.py:
from django.core.management.base import BaseCommand, CommandError
from myapp.models import MyModel

class Command(BaseCommand):  # Django looks for a class named "Command"
    def handle(self, *args, **options):
        if len(args) > 0:
            raise CommandError('need exactly zero arguments')
        for i in range(1000):
            MyModel(number=i).save()
You can then export this data to a fixture, or continue importing using the management command. If you choose to continue to use the syncdb signal, you'll want to conditionally run the init_data function to prevent the data getting imported on subsequent syncdb calls. When a fixture isn't sufficient, I personally like to do both: create a management command to import data, but have the first syncdb invocation do the import automatically. That way, deployment is more automated but I can still easily make modifications to the initial data and re-run the import.
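For instance, the init_data handler above could guard against re-importing like this:

def init_data(sender, **kwargs):
    # Skip the import if the data is already there (e.g. on a second syncdb).
    if MyModel.objects.exists():
        return
    for i in range(1000):
        MyModel(number=i).save()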
I'm not sure why you require any serialization. As long as you have set up your Django settings.py file to point to your test database, populating a test database should be nothing more than saving models.
for x in range(0, 200):
    # random_title()/random_string() stand in for whatever generators you like
    m = my_model(title=random_title(), field2=random_string())
    m.save()
There are better ways to do this, but if you want a quick test set, this is the way to go.
The app recommended by the accepted answer is no longer maintained; however, django-seed can be used as a replacement:
https://github.com/brobin/django-seed
I would recommend django-autofixtures to you. I tried both django_seed and django-autofixtures, but django_seed has a lot of issues with unique keys.
django-autofixtures takes care of unique, primary and other DB constraints while filling up the database.
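For reference, django-seed is driven by a management command; its README suggests usage along these lines (the app name and count are placeholders):

python manage.py seed myapp --number=50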