I'm currently developing a Django application for internal use which runs on one server (Server 1) in a local network but needs write access to another server (Server 2) when data is saved to the database.
When a new record is saved, Django creates a new directory on the external server (Server 2) with an appropriate folder name. This worked well with the Django development server, which seemed to have access to the entire local network.
I've now successfully deployed my Django application with Apache and mod_wsgi, but the folder creation procedure doesn't seem to work any more. I've tried a few things but can't seem to fix it quickly. Any ideas? Can this actually be achieved with Django and Apache?
import os
from distutils import dir_util

from django.contrib import messages

# (method on a ModelAdmin subclass)
def create_folder(self, request, obj, form, change, serverfolder, templatefolder):
    try:
        source_dir = templatefolder  # path to the project folder template
        # NOTE: destination_dir is never defined in the original snippet; presumably
        # it is built from serverfolder and the project name, e.g.:
        destination_dir = os.path.join(serverfolder, str(obj))  # assumption
        if not os.path.exists(destination_dir):
            dir_util.copy_tree(source_dir, destination_dir)
            obj.projectfolder = destination_dir
            messages.success(request, "Project folder created on %s" % serverfolder)
            obj.create_folder = False
            obj.has_folder = True
        else:
            messages.warning(request, "No new project folder created on %s server" % obj.office.abbreviation)
    except Exception as e:
        messages.warning(request, str(e) + " Error during project folder creation on %s server!" % obj.office.abbreviation)
def save_model(self, request, obj, form, change):
    serverfolder = r'\\C-s-002\Projects'  # C-s-002 is the external server on the same local network as the Django host
    templatefolder = r'\\C-s-002\Projects\XXX Project Template'
    self.create_folder(request, obj, form, change, serverfolder, templatefolder)
There are various approaches you can take here, so I will not attempt to exhaust all possibilities:
Option 1: Call an external command with Python. This is not specific to Django or Apache.
Option 2: Set up a web service on Server 2 that handles the file/directory creation needed by Server 1, and call it via API requests. This could itself be implemented with Django; a minimal sketch follows.
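A minimal sketch of Option 2, assuming a small Django view on Server 2 and the requests library on Server 1 (the endpoint URL, token, and base directory are all made up for illustration):

# Server 2: a hypothetical view that creates a project folder locally.
import os

from django.http import HttpResponse, HttpResponseForbidden
from django.views.decorators.csrf import csrf_exempt

SHARED_TOKEN = 'change-me'  # made-up shared secret for this sketch

@csrf_exempt
def create_project_folder(request):
    if request.POST.get('token') != SHARED_TOKEN:
        return HttpResponseForbidden('bad token')
    path = os.path.join('/srv/projects', request.POST['name'])  # assumed base directory
    if not os.path.exists(path):
        os.makedirs(path)
    return HttpResponse(path)

# Server 1: call the endpoint from save_model instead of writing to the share:
# import requests
# requests.post('http://server2/create-folder/', data={'token': SHARED_TOKEN, 'name': str(obj)})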
I have a Django project working on localhost producing a view with:
def entry_list(request):
    editedbooks = EditedBook.objects.all()
    treaty = Treaty.objects.all()
    pilcases = PILCase.objects.all()
    journalarts = JournalArt.objects.all()
    return render(request, 'text/test.bib', {'treaty': treaty, 'editedbooks': editedbooks, 'pilcases': pilcases, 'journalarts': journalarts}, content_type='text/x-bibtex; charset=UTF-8')
The view works. I need to push a text file produced by this view to a publicly available repository such as a git repo. I'm not sure how to modify the existing view function so that it keeps working as it does now but also renders a bib (plain text) file. Could you please advise? Should I render to localhost and manually push the repository, or is it possible to render to a remote location?
While deploying to Heroku I got stuck migrating the local SQLite database to the remote PostgreSQL database, so only the localhost environment works. Can render() produce a file, do I need to change to HttpResponse(), or can I somehow subclass the existing function?
Python 3.7.2
Django 1.9
Running in a virtual environment on a Mac
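One hedged sketch of the file-producing side of what the question asks: render the same template to a string with render_to_string and write it to disk (the output path is an assumption), then commit and push that file with git outside of Django:

# Reuses the template and the models from entry_list above.
from django.template.loader import render_to_string

def export_bib():
    context = {
        'treaty': Treaty.objects.all(),
        'editedbooks': EditedBook.objects.all(),
        'pilcases': PILCase.objects.all(),
        'journalarts': JournalArt.objects.all(),
    }
    content = render_to_string('text/test.bib', context)
    with open('export/entries.bib', 'w') as f:  # assumed output path
        f.write(content)
    return content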
When I try to upload sample CSV data to my GAE app through appcfg.py, it shows the 401 error below.
2015-11-04 10:44:41,820 INFO client.py:571 Refreshing due to a 401 (attempt 2/2)
2015-11-04 10:44:41,821 INFO client.py:797 Refreshing access_token
Error 401: --- begin server output ---
You must be logged in as an administrator to access this.
--- end server output ---
Here is the command I tried:
appcfg.py upload_data --application=dev~app --url=http://localhost:8080/_ah/remote_api --filename=data/sample.csv
This is how we do it in order to use custom authentication.
Custom handler in app.yaml
- url: /remoteapi.*
  script: remote_api.app
Custom WSGI app in remote_api.py to override CheckIsAdmin:
from google.appengine.ext.remote_api import handler
from google.appengine.ext import webapp
import re

MY_SECRET_KEY = 'MAKE UP PASSWORD HERE'  # make one up; use the same one in the shell command
cookie_re = re.compile('^"?([^:]+):.*"?$')

class ApiCallHandler(handler.ApiCallHandler):
    def CheckIsAdmin(self):
        """Determine if admin access should be granted based on the
        auth cookie passed with the request."""
        login_cookie = self.request.cookies.get('dev_appserver_login', '')
        match = cookie_re.search(login_cookie)
        if (match and match.group(1) == MY_SECRET_KEY
                and 'X-appcfg-api-version' in self.request.headers):
            return True
        else:
            self.redirect('/_ah/login')
            return False

app = webapp.WSGIApplication([('.*', ApiCallHandler)])
From here we script the uploading of data that was exported from our live app. Use the same password that you made up in the Python script above.
echo "MAKE UP PASSWORD HERE" | appcfg.py upload_data --email=some@example.org --passin --url=http://localhost:8080/remoteapi --num_threads=4 --kind=WebHook --filename=webhook.data --db_filename=bulkloader-progress-webhook.sql3
WebHook and webhook.data are specific to the Kind that we exported from production.
I had a similar issue, where appcfg.py was not giving me any credentials dialog, so I could not authenticate. I downgraded from GAE Launcher 1.9.27 to 1.9.26, and the authentication started working again.
Temporary solution: go to https://console.developers.google.com/storage/browser/appengine-sdks/featured/ to get version 1.9.26
Submitted bug report: https://code.google.com/p/google-cloud-sdk/issues/detail?id=340
You cannot use the appcfg.py upload_data command with the development server [edit: as is; see Josh J's answer]. It only works with the remote_api endpoint running on App Engine and authenticated with OAuth2.
An easy way to load data into the dev server's datastore is to create an endpoint that reads a CSV file and creates the appropriate datastore entities, then hit it with the browser. (Be sure to remove the endpoint before deploying the app, or restrict access to the URL with login: admin.)
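A hedged sketch of such an endpoint in the old webapp style used elsewhere in this thread (the Record kind, CSV columns, and URL are made up for illustration):

import csv

from google.appengine.ext import ndb, webapp

class Record(ndb.Model):
    # Hypothetical kind; replace with your real model.
    name = ndb.StringProperty()
    value = ndb.StringProperty()

class LoadCsvHandler(webapp.RequestHandler):
    def get(self):
        # Read a CSV bundled with the app and create one entity per row.
        with open('data/sample.csv') as f:
            for row in csv.reader(f):
                Record(name=row[0], value=row[1]).put()
        self.response.out.write('loaded')

app = webapp.WSGIApplication([('/load_csv', LoadCsvHandler)])

Hitting /load_csv in the browser against the dev server then populates the local datastore.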
You may have an OAuth token cached for a Google account that is not an admin of that project. Try passing the --no_cookies flag so that it prompts for authentication again; see the example below.
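For example, reusing the command from the question (flag placement is illustrative):

appcfg.py upload_data --no_cookies --application=dev~app --url=http://localhost:8080/_ah/remote_api --filename=data/sample.csv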
Maybe this has something to do with it? From the docs:

Connecting your app to the local development server

To use the local development server for your app running locally, you need to do the following:

1. Set environment variables.
2. Add or modify your app's Datastore connection code.

Setting environment variables

Create an environment variable DATASTORE_HOST and set it to the host and port on which the local development server is listening. The default host and port is http://localhost:8080. (Note: if you use the port and/or host command line arguments to change these defaults, be sure to adjust DATASTORE_HOST accordingly.) The following bash shell example shows how to set this variable:

export DATASTORE_HOST=http://localhost:8080

Create an environment variable named DATASTORE_DATASET and set it to your dataset ID, as shown in the following bash shell example:

export DATASTORE_DATASET=<dataset_id>

Note: both the Python and Java client libraries look for the environment variables DATASTORE_HOST and DATASTORE_DATASET.

Link to the docs: https://cloud.google.com/datastore/docs/tools/devserver
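If you prefer setting these from Python rather than the shell, a small equivalent sketch (the dataset ID is a placeholder):

import os

# Equivalent to the export commands above; set these before the client library connects.
os.environ['DATASTORE_HOST'] = 'http://localhost:8080'
os.environ['DATASTORE_DATASET'] = 'your-dataset-id'  # placeholder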
I use Amazon S3 as a part of my webservice. The workflow is the following:
1. User uploads lots of files to the web server. The web server first stores them locally and then uploads them to S3 asynchronously.
2. User sends an HTTP request to initiate a job (some processing of these uploaded files).
3. Web service asks a worker to do the job.
4. Worker does the job and uploads the result to S3.
5. User requests the download link from the web server; somedbrecord.result_file.url is returned.
6. User downloads the result using this link.
To work with files I use the QueuedStorage backend. I initialize my FileFields like this:
user_uploaded_file = models.FileField(..., storage=queued_s3storage, ...)
result_file = models.FileField(..., storage=queued_s3storage, ...)
Where queued_s3storage is an object of a class derived from ...backends.QueuedStorage, with its remote field set to '...backends.s3boto.S3BotoStorage'.
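For context, a hedged sketch of how such an object is typically constructed with django-queued-storage (the dotted paths are assumptions, since the question elides the real ones):

from queued_storage.backends import QueuedStorage

# local/remote take dotted paths to storage classes.
queued_s3storage = QueuedStorage(
    local='django.core.files.storage.FileSystemStorage',
    remote='storages.backends.s3boto.S3BotoStorage',
)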
Now I'm planning to deploy the whole system on one machine and run everything locally, so I want to replace this '...backends.s3boto.S3BotoStorage' with something based on my local filesystem.
The first workaround was to use FakeS3, which can "emulate" S3 locally. It works, but it's not ideal: just extra, unnecessary overhead.
I have an Nginx server running and serving static files from particular directories. How do I create my "remote storage" class so that it actually stores files locally but provides download links that lead to files served by Nginx (something like http://myip:80/filedir/file1)? Is there a standard library class for that in Django?
The default storage backend for media files is local storage.
Your settings.py defines these two settings:
MEDIA_ROOT (see the Django settings docs): the absolute path to the local file storage folder
MEDIA_URL (see the Django settings docs): the webserver HTTP path (e.g. '/media/' or '//%s/media' % HOSTNAME)
These are used by the default storage backend to save media files. From Django's default/global settings.py:
# Default file storage mechanism that holds media.
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'
This configured default storage is used in FileFields for which no storage kwarg is provided. It can also be accessed like so: from django.core.files.storage import default_storage.
So if you want to vary the storage for local development and production use, you can do something like this:
# file_storages.py
from django.conf import settings
from django.core.files.storage import default_storage
from whatever.backends.s3boto import S3BotoStorage

app_storage = None
if settings.DEBUG:
    app_storage = default_storage
else:
    app_storage = S3BotoStorage()
And in your models:
# models.py
from file_storages import app_storage
# ...
result_file = models.FileField(..., storage=app_storage, ...)
Lastly, you want nginx to serve the files directly from your MEDIA_URL. Just make sure that the nginx URL matches the path in MEDIA_URL.
I'm planning to deploy the whole system on one machine to run everything locally
Stop using QueuedStorage then, because "[QueuedStorage] enables having a local and a remote storage backend" and you've just said you don't want a remote.
Just use FileSystemStorage and configure nginx to serve settings.MEDIA_ROOT at the location that matches MEDIA_URL; a sketch follows.
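A minimal sketch of that setup; every path and URL below is an assumption to be matched to your nginx config:

# settings.py
MEDIA_ROOT = '/srv/www/media/'       # nginx should serve this directory...
MEDIA_URL = 'http://myip/filedir/'   # ...under this URL prefix

# models.py
from django.core.files.storage import FileSystemStorage
from django.db import models

local_storage = FileSystemStorage()  # uses MEDIA_ROOT and MEDIA_URL by default

class JobResult(models.Model):  # hypothetical model
    result_file = models.FileField(upload_to='results/', storage=local_storage)

With this, somedbrecord.result_file.url returns a link such as http://myip/filedir/results/file1, which nginx serves directly.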
I'm trying to programmatically spin up an Azure VM using the Python REST API wrapper. All I want is a simple VM, not part of a deployment or anything like that. I've followed the example here: http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/#CreateVM
I've gotten the code to run, but I am not seeing any new VM in the portal; all it does is create a new cloud service that says "You have nothing deployed to the production environment." What am I doing wrong?
You've created a hosted_service (cloud service) but haven't deployed anything into that service. You need to do a few more things, so I'll continue from where you left off, where name is the name of the VM:
# (This snippet is assumed to run inside a helper function, hence the returns.)
# Where should the OS VHD be created:
media_link = 'http://portalvhdsexample.blob.core.windows.net/vhds/%s.vhd' % name

# Linux username/password details:
linux_config = azure.servicemanagement.LinuxConfigurationSet(name, 'username', 'password', True)

# Endpoint (port) configuration example, since documentation on this is lacking:
endpoint_config = azure.servicemanagement.ConfigurationSet()
endpoint_config.configuration_set_type = 'NetworkConfiguration'
endpoint1 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='HTTP', protocol='tcp', port='80', local_port='80', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint2 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='SSH', protocol='tcp', port='22', local_port='22', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint_config.input_endpoints.input_endpoints.append(endpoint1)
endpoint_config.input_endpoints.input_endpoints.append(endpoint2)

# Set the OS HD up for the API:
os_hd = azure.servicemanagement.OSVirtualHardDisk(image_name, media_link)

# Actually create the machine and start it up:
try:
    sms.create_virtual_machine_deployment(service_name=name, deployment_name=name,
        deployment_slot='production', label=name, role_name=name,
        system_config=linux_config, network_config=endpoint_config,
        os_virtual_hard_disk=os_hd, role_size='Small')
except Exception as e:
    logging.error('AZURE ERROR: %s' % str(e))
    return False
return True
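For completeness, a hedged sketch of the preamble this snippet assumes from the linked guide (the subscription ID, certificate path, location, and image name are placeholders):

import logging
import azure.servicemanagement

subscription_id = 'your-subscription-id'   # placeholder
certificate_path = '/path/to/mycert.pem'   # placeholder
sms = azure.servicemanagement.ServiceManagementService(subscription_id, certificate_path)

name = 'myvm01'                     # the VM/service name used above
image_name = 'some-os-image-name'   # pick one from sms.list_os_images()

sms.create_hosted_service(service_name=name, label=name, location='West US')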
Maybe I'm not understanding your problem, but a VM is essentially a deployment within a cloud service (think of it as a logical container for machines).
I have created a web service in Django and it's hosted on a shared server. The web service responds to requests from a game made in Unity, but whenever the game requests a Django web service URL, the server sends an empty response. The response is always:
WWW Error: server return empty string
The Unity web player expects an HTTP-served policy file named "crossdomain.xml" to be available on the domain you want to access with the WWW class (although this is not needed if it is the same domain that is hosting the unity3d file). So I placed a "crossdomain.xml" file at the root of my domain, but I am still getting the same empty reply. Help, please...
EDIT:
I tried it through the browser and my service works fine, replying with a proper response. And my game can communicate with the Django web service when both run on my local machine. But now that the Django project is hosted on the actual server, the game never gets a response when accessing the service :(
url.py
urlpatterns = patterns('',
    url(r'^crossdomain.xml$', views.CrossDomain),
    url(r'^ReadFile/$', views.ReadFile),
)
views.py
def CrossDomain(request):
    f = open(settings.MEDIA_ROOT + 'jsondata/crossdomain.xml', 'r')
    data = f.read()
    f.close()
    return HttpResponse(data, mimetype="application/xml")

def ReadFile(request):
    f = open(settings.MEDIA_ROOT + 'jsondata/some_file.json', 'r')
    data = f.read()
    f.close()
    return HttpResponse(data, mimetype="application/javascript")

def Test(request):
    return HttpResponse("Hello", mimetype="text/plain")
As I said, using Django for this is slightly overkill, because you could just serve these files statically. That aside, if you're serving on a different server, the cause could be:
A) Connection problems mean that your response is lost.
B) Firewall issues mean that the request never gets through.
C) The server isn't set up correctly and therefore it just returns an error.
You need to test the response on the server: can you access the page on the server through your browser? If so, make the game send a request and check the server error and access logs. In the Apache access log you should see something like
GET "/url" 200 each time a request is made.
If you don't see any requests getting through, then either the request isn't being made or it's been lost.
If you do, then the problem is in the code somewhere.