I decided to try using fileconveyor in order to write a simple app that can sync a directory (containing very small Word files) across all my computers.
To do that I also installed pyftpdlib so as to write a simple FTP server that fileconveyor will link to.
pyftpdlib comes with a number of examples, so I used one of them to run a server on 0.0.0.0:2121 and configured fileconveyor to connect to it, which it did, reporting back that it is
- Fully up and running now.
The FTP server also logged the connection:
USER 'user' logged in.
FTP session closed (disconnect).
But I am not quite sure what to do now.
1. How can I make the FTP server save uploaded files to a directory of my choosing?
2. Will fileconveyor be able to sync the files both ways?
3. If yes, how is that possible, as it would have to track changes to the files on the remote machine?
4. Is what I am trying to do a good idea, or should I be using fileconveyor differently, possibly not with pyftpdlib but with some other service?
Answer to 1: You can configure a home directory per user and per anonymous user; see the add_user/add_anonymous sketch below.
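For instance, a minimal sketch using pyftpdlib's DummyAuthorizer (the credentials and directory paths here are placeholders):

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# Each user gets a home directory of your choosing; uploads land there.
authorizer.add_user('user', '12345', '/home/user/ftp', perm='elradfmw')
authorizer.add_anonymous('/home/anonymous/ftp')  # read-only by default

handler = FTPHandler
handler.authorizer = authorizer

server = FTPServer(('0.0.0.0', 2121), handler)
server.serve_forever()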
Answers to 2 and 3: I don't think it's possible to make a reliable implementation of a two-way sync application using only a standard FTP server on one of the sides. Such apps need more information than the FTP protocol provides.
Answer to 4: Why do you need pyftpdlib? I believe it's good for building a customized embedded FTP server. You can use any popular FTP server such as ProFTPD or FileZilla. They are well documented and you can find a lot of HOW-TOs.
BTW why don't you want to use Dropbox?
The task I want to accomplish is to send a copy of the opened file to a location on the server, have the fast render-farm PC open it, render the file, then close itself, essentially offloading all hardware-intensive tasks onto one computer.
I also want to make sure that only one file is rendered/opened at a time.
What do I need to know to accomplish this? How would you go about it? This concerns Maya batch rendering (.ma) as well as Nuke files (.nk).
You can try using the socket library (part of the standard library) and the Flask library. With them you can establish a connection between two or more PCs.
For Flask, here is a site that can help you:
https://pythonbasics.org/flask-upload-file/#:~:text=It%20is%20very%20simple%20to,it%20to%20the%20required%20location.
For the socket approach, here is another site:
https://www.thepythoncode.com/article/send-receive-files-using-sockets-python
And if you search on Google or YouTube you can find many tutorials about it.
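For example, here is a minimal sketch of the Flask upload idea; the /upload route name and RENDER_QUEUE_DIR are assumptions for illustration:

import os
from flask import Flask, request

app = Flask(__name__)
RENDER_QUEUE_DIR = '/srv/render_queue'  # hypothetical folder the farm PC watches

@app.route('/upload', methods=['POST'])
def upload():
    # Save the uploaded .ma or .nk scene file into the watched folder.
    f = request.files['file']
    f.save(os.path.join(RENDER_QUEUE_DIR, os.path.basename(f.filename)))
    return 'queued\n'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)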
I was wondering if it is possible to take the newest files uploaded to an FTP server and send them to another FTP server. BUT, every file can only be sent once. If you can do this in Python that would be nice; I know intermediate Python. EXAMPLE:
2:14 PM: file.txt is uploaded to the server. The program takes the file and sends it to another server.
2:15 PM: example.txt is uploaded to the server. The program takes just that file and sends it to another server.
I have searched online for this but can't find anything. Please help!
As you said you already know Python, I will give you some conceptual hints. Basically, you are looking for a one-way synchronisation. The main problem with this task is making your program detect new files. The simplest way to do this is to keep a database (note that by database I mean a way of storing data, not necessarily a specialized database system). For example, a text file. Each file is recorded in this database. Periodically, compare the database against the current files (a basic ls or something similar will do). If new files appear (meaning there are files that are not in the database), upload them.
This is the basic idea. You can improve it with multithreading, checks for whether a file has been modified, and so on.
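Here is a rough sketch of that idea using only the standard library; the host names, credentials, and the SEEN_DB file name are placeholders:

import io
import time
import ftplib

SEEN_DB = 'seen_files.txt'  # our simple "database" of already-forwarded names

def load_seen():
    try:
        with open(SEEN_DB) as fh:
            return set(line.strip() for line in fh)
    except IOError:
        return set()

def forward_new_files():
    seen = load_seen()
    src = ftplib.FTP('source.example.com', 'user', 'password')
    dst = ftplib.FTP('dest.example.com', 'user', 'password')
    for name in src.nlst():  # list the files currently on the source server
        if name in seen:
            continue  # already forwarded once; never send it again
        data = bytearray()
        src.retrbinary('RETR ' + name, data.extend)       # download the new file
        dst.storbinary('STOR ' + name, io.BytesIO(data))  # forward it
        with open(SEEN_DB, 'a') as fh:                    # record it as sent
            fh.write(name + '\n')
    src.quit()
    dst.quit()

while True:
    forward_new_files()
    time.sleep(60)  # poll once a minute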
EDIT: This is the programming approach. As has been suggested in the comments, there are also software solutions that will do this for you.
I developed a statistics system for online web-service user-behavior research in Python, which mostly relies on reading and analyzing logs from the production server. Currently I share the log folders internally over the SMB protocol for the routine analytics program to read, but I have two questions about the data-access method:
Is there any other way of accessing the logs besides SMB, or some other strategy altogether?
I suspect heavy reads may saturate the production server's disk and interfere with normal log writing; is there any solution for this?
I hoped I could come up with some real numbers but currently don't have any. Can anyone give me some guidance on doing this more gracefully?
If you are open to using a third party log aggregation tool, you have a couple of options:
http://graylog2.org/
http://www.logstash.net/
http://www.octopussy.pm/
https://github.com/facebook/scribe
In addition, if you are logging to syslog, many of the commonly used syslog daemons (e.g. syslog-ng) can be configured to forward logs from various applications to one or more of these aggregators. It is trivial to log to syslog from a Python application; there is a syslog module in the standard library.
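For example, emitting a log line to the local syslog daemon takes just a couple of lines (the application name and message here are made up):

import syslog

# Tag entries with an application name, then emit one log line to the
# local syslog daemon, which can be configured to forward it onward.
syslog.openlog('mywebservice')
syslog.syslog(syslog.LOG_INFO, 'user 42 viewed /dashboard')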
Well, if you have an HTTP server in between (IHS, OHS, I guess Apache too...) then you can expose your physical repositories via URLs: each of your files will get a URL of its own, and with this kind of code you can download them quite easily:
import os
import urllib2

url = 'http://myserver.example.com/repository/report.doc'  # hypothetical file URL

# Download the remote file and write it to a local file of the same name
f = urllib2.urlopen(url)
with open(os.path.basename(url), 'wb') as local_file:
    local_file.write(f.read())
I'm maintaining an open-source document asset management application called NotreDAM, which is written in Django and runs on Apache with an instance of TwistedWeb.
Whenever any user downloads a file, the application hangs for all users for the entire duration of the download. I've tracked the download down to this point in the code, but I'm not versed enough in Python/Django to know why this may be happening.
response = HttpResponse(open(fullpath, 'rb').read(), mimetype=mimetype)
response["Last-Modified"] = http_date(statobj.st_mtime)
response["Content-Length"] = statobj.st_size
if encoding:
    response["Content-Encoding"] = encoding
return response
Do you know how I could fix the application hanging while a file downloads?
The web server reads the whole file into memory instead of streaming it. It is not well-written code, but not a bug per se.
This blocks the Apache client (pre-forked) for the duration of the whole file read. If I/O is slow and the file is large, it may take some time.
Usually you have several pre-forked Apache clients configured to satisfy this kind of request, but on a badly configured web server you may see this kind of problem, and it is not a Django issue. Your web server is probably running only one pre-forked process, potentially in a debug mode.
NotreDAM serves the asset files using the django.views.static.serve() command, which according to the Django docs: "Using this method is inefficient and insecure. Do not use this in a production setting. Use this only for development." So there we go. I have to use another command.
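For completeness, here is a sketch of a streaming alternative, assuming a Django version that provides FileResponse (older versions can pass an iterator to HttpResponse instead):

from django.http import FileResponse

def serve_asset(request, fullpath, mimetype):
    # FileResponse streams the file in chunks rather than reading it all
    # into memory, so a slow download no longer ties up the worker.
    return FileResponse(open(fullpath, 'rb'), content_type=mimetype)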
I created an app. A copy of it will run on two different computers, but a single SQLite database file needs to be shared between them. I mean, both computers will be able to read and write this database file. For this purpose, I will put the file in a folder on our server, which both computers are connected to. How can I get the full path to this file in Python? Or can you suggest any other way to do this task as simply as possible?
Sqlite over a network share [stackoverflow.com]
I'd recommend against database files on a network drive. Network filesystems usually aren't robust enough to handle the kind of random-access updates a DB makes.
As a previous answer suggested, you'd be better off creating a simple client/server model. A server process has sole access to the sqlite db, clients send requests to the server. Don't pass the sqlite db file back and forth.
You might want to use a full network DB such as MySQL or PostgreSQL instead. A minimal sketch of the client/server idea follows.
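Here is a very small sketch of the client/server idea: one server process owns the SQLite file and runs queries on behalf of the clients. The port, database path, and the naive one-statement-per-connection protocol are all assumptions for illustration (a real deployment would need authentication and input validation):

import sqlite3
import socketserver

DB_PATH = 'shared.db'  # the database file lives only on the server machine

class QueryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one SQL statement from the client, execute it against the
        # single shared database, and send the resulting rows back as text.
        query = self.rfile.readline().decode().strip()
        conn = sqlite3.connect(DB_PATH)
        try:
            rows = conn.execute(query).fetchall()
            conn.commit()
            self.wfile.write((repr(rows) + '\n').encode())
        finally:
            conn.close()

if __name__ == '__main__':
    server = socketserver.TCPServer(('0.0.0.0', 9999), QueryHandler)
    server.serve_forever()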
I would have a Python server program running on the server with the database file (using the socket library). Then have the two clients connect to the server program (again, using the socket library) and receive the database file. You can find some examples of the socket library at http://www.prasannatech.net/2008/07/socket-programming-tutorial.html