I need to create an application (Python 3.3, strictly) where users save and load their settings to a remotely hosted database. For security reasons I don't want the database to listen on anything other than localhost, so I assume the best solution is to have the program create an SSH tunnel before any saving/loading happens.
Would this approach make my database insecure?
How could I make this work? I tried installing Paramiko, but it is not Python 3 ready.
I also thought I could bundle PuTTY tray and some proper scripting into the application's installer to create that tunnel, but I'm looking for something cleverer and more efficient here. Of course, I would really, really prefer to avoid any extra tray icons or shells appearing every time a tunnel activates.
I'm asking here so I can hear an opinion from someone with experience, since I'm lacking it :). What would be your suggestion?
Thanks in advance.
I am dealing with similar problems at the moment (trying to use Docker's remote API, which currently doesn't offer authentication). I am currently using bgtunnel, though I'm getting occasional errors (usually resolved by a page refresh), at first glance caused by trying to establish a connection when one already exists. This could probably be solved with some thread.isAlive() checks, but all in all managing the connection is messy: is it already alive? Should I check before every request, or just attempt the call and redo it on an exception? It's a bit tricky. I'm also starting to explore Paramiko, hoping it might be easier to handle from Python -- the port to Python 3 seems to be almost finished at the moment.
That said, I'd expect any decent database to have good enough built-in authentication and encryption that I shouldn't be afraid to use it remotely without any SSH tunnelling. It appears from this SO answer that Postgres, for example, can be configured this way. That lets you drop SSH entirely, which is one less thing in the stack to think about. Remote auth is being worked on for Docker's remote API too, at which point I'll gladly do the same :)
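If you do go the tunnelling route, the plainest option (no extra libraries, no tray icons, works on Python 3.3) is to drive the system's OpenSSH client from Python. A minimal sketch, assuming an OpenSSH client on the PATH and key-based auth already set up; the host name, user, and ports are made up:

```python
import subprocess

def tunnel_cmd(ssh_host, ssh_user, local_port, remote_port):
    """Build an ssh(1) command that forwards local_port on this machine
    to remote_port on the database server's own loopback interface."""
    return [
        "ssh", "-N",                       # -N: no remote command, forward only
        "-o", "ExitOnForwardFailure=yes",  # fail fast if the local port is taken
        "-L", "{0}:127.0.0.1:{1}".format(local_port, remote_port),
        "{0}@{1}".format(ssh_user, ssh_host),
    ]

def open_tunnel(ssh_host, ssh_user, local_port, remote_port):
    """Start the tunnel in the background; call .terminate() on it when done."""
    return subprocess.Popen(tunnel_cmd(ssh_host, ssh_user, local_port, remote_port))
```

`open_tunnel(...)` returns a `Popen` handle you keep for the duration of the save/load and then `.terminate()`. This also sidesteps the Paramiko-on-Python-3 problem entirely, at the cost of requiring an ssh client on the user's machine.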
I am programming a robot, and it needs to send data between the EV3 and my laptop (Windows).
I run Pybricks on the EV3, which lets me code in Python.
I already did my research, but all that remains are some blogs from 2014 that don't help either; the only thing that helped a little was the official Pybricks documentation.
I thought that running the example code on the EV3 and the laptop would work, but the code only worked on the EV3. The EV3 waits until it gets a message, but the laptop instantly says it's connected, even though it isn't.
I thought it might be possible to make the laptop act like an EV3 so they can connect (because the original messaging functions for the EV3 are only made for interaction between different bricks), but my knowledge ends about here. I tried a few things, like a virtual machine in VirtualBox; maybe I did something wrong, but I didn't get good results.
If I understand correctly, you want to:
send a value to a remote machine
have the remote machine do some stuff
get a response from the remote machine
If that is correct, this is normally achieved with an API. The remote machine serves an API, which the client calls using the requests library. For example, TheCatAPI lets you call for pictures of cats. Your laptop and EV3 are still logically distinct machines, even though they are physically close together.
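To make the request/response idea concrete, here is a minimal sketch of such an API using only the Python standard library (the `/drive` route, the `speed` parameter, and the port are made up for illustration; a real project would more likely use a framework like Flask):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RobotHandler(BaseHTTPRequestHandler):
    """Tiny API: GET /drive?speed=200 echoes back the value it received."""
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/drive":
            self.send_error(404)
            return
        speed = int(parse_qs(url.query).get("speed", ["0"])[0])
        body = json.dumps({"status": "ok", "speed": speed}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on all interfaces so the EV3 can reach the laptop over Wi-Fi.
    HTTPServer(("0.0.0.0", 8000), RobotHandler).serve_forever()
```

The EV3 side would then just issue something like `requests.get("http://<laptop-ip>:8000/drive?speed=200")` once both are on the same network.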
You can choose to host the API on your laptop or make it available to others through services like AWS. Rather than copy-pasting it all here: Real Python has some fantastic docs on how to build an API.
You will want to connect your EV3 to Wi-Fi and then have it call your new API. If you set this up in AWS Lambda, others will be able to do the same, and it will probably be free for you.
If you want to do it purely locally, you will need to connect your EV3 and laptop via Bluetooth and establish a serial link.
Personal opinion
I would suggest that showcasing your ability to create an API and get it working on AWS would be more beneficial than going down the Bluetooth route.
Easy deployment of a Flask API: how do we do that? What is the best way?
I would like to deploy my Flask API to a single server, at least in the beginning. I just started a new project and I don't want to spend too much time on Docker and scalability. I'm also a bit scared to use Docker in production at the start anyway.
With PHP there are a ton of options; I just saw they even have "Deployer" now, which makes things even easier.
What I am looking for:
with one command, deploy my project to the server (using git). Depending on whether I run a "deploy dev" or "deploy prod" command, the server needs to know which branch to pull from. So I do need to merge branches before deploying.
create a new "release" folder on the server and symlink the www folder to the new release.
keep the last 5 release folders, removing the oldest on every deploy.
make it possible to roll back, i.e. point the symlink at a previous release folder.
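For what it's worth, the clone/release/symlink/prune scheme described above is small enough to sketch in plain Python, without Fabric or Capistrano (the base path and repo URL are placeholders):

```python
import os
import shutil
import subprocess
import time

def deploy(repo_url, branch, base="/var/www/myapp", keep=5):
    """Clone the requested branch into a timestamped release folder,
    atomically repoint the 'current' symlink, and prune old releases."""
    releases = os.path.join(base, "releases")
    os.makedirs(releases, exist_ok=True)
    target = os.path.join(releases, time.strftime("%Y%m%d%H%M%S"))
    subprocess.check_call(
        ["git", "clone", "--branch", branch, "--depth", "1", repo_url, target])
    # Swap the symlink atomically: create a temp link, then rename over it.
    current = os.path.join(base, "current")
    tmp = current + ".tmp"
    os.symlink(target, tmp)
    os.replace(tmp, current)
    # Keep only the newest `keep` releases (timestamps sort lexically).
    for name in sorted(os.listdir(releases))[:-keep]:
        shutil.rmtree(os.path.join(releases, name))
```

Rollback is then just repointing the `current` symlink at an older folder under `releases/`; this is roughly what tools like Deployer and Capistrano automate for you.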
I saw I can use Fabric, but it seems kind of complicated and perhaps overkill (like Capistrano). I searched quite a lot on the web but couldn't find a very clear answer or solution, or at least one most people agree on.
Any thoughts, or people who would like to share their experience?
I will post an answer, because I see I was given an answer 9 months ago already without the thread actually being answered.
As Sayse already said: there are plenty of ways, but Git and CI are both good ways to implement continuous deployment on a VPS.
I've been trying CI, with much success!
I'm theory-crafting a Python service that will manipulate domain-joined machines into running tests as part of a suite. Details of the requirements:
We must be logged in as a domain user, and we must not have automatic login enabled
We need to reboot machines a few times, so it's a requirement for this to be sustainable
I'm wondering if it's possible to somehow convince Windows to log us in from a Python service? Presumably a Python service runs in Session 0, as does Microsoft's Hardware Certification Kit. If that kit is capable of doing it, Python should be too (so far as I can see).
Any suggestions most welcome, I've got a suspicion there's a cheeky Windows API call that does this, but can't seem to find it anywhere.
So I've found a way to do it from a Windows service (written in C++), and presumably the ctypes library will let me use it from Python.
It's as simple as calling LogonUser from the Win32 API, so far as I can see. I have yet to actually set it up and test it, but it does seem to be exactly what I need. The difficulty is that Session 0 can only be accessed once logged in, or via some remote debugging, so getting something like this working is no easy feat.
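For the record, a ctypes sketch of that LogonUser call might look like the following. This is untested on my side (as noted above), and the success path stops at obtaining the token; actually starting an interactive session would additionally involve calls like CreateProcessAsUser:

```python
import ctypes

# Logon type and provider constants from Winbase.h
LOGON32_LOGON_INTERACTIVE = 2
LOGON32_PROVIDER_DEFAULT = 0

def logon_user(username, domain, password):
    """Ask Windows for a primary token for the given domain account.
    Returns the token handle, or raises on failure. Windows-only:
    ctypes.windll is resolved when the function is called."""
    token = ctypes.c_void_p()
    ok = ctypes.windll.advapi32.LogonUserW(
        username, domain, password,
        LOGON32_LOGON_INTERACTIVE,
        LOGON32_PROVIDER_DEFAULT,
        ctypes.byref(token),
    )
    if not ok:
        raise ctypes.WinError()
    return token
```

The returned handle would then be closed with CloseHandle once you are done with it.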
This may seem very basic, but I have never done this before and am frankly kind of lost. Here is what I want to do:
I want a process to run (a Python CGI script would be great) on a web server without ever stopping unless I tell it to die.
I want any session to be able to communicate with that same process (i.e. not start a new CGI process for each session).
What is the best way to do this? I have been looking at FastCGI, and my hosting service seems to support it, but I can't for the life of me find a usable example, because all search results seem to talk about running Python or PHP itself as FastCGI rather than about the problem described above.
Thanks
I am attempting to build an educational coding site, similar to Codecademy, but I am frankly at a loss as to what steps to take. Could someone point me in the right direction for including even a simple Python interpreter in a web app?
One option might be to use PyPy to create a sandboxed Python. It would limit the external operations someone could do.
Once you have that set up, your website would take the source code, send it over Ajax to your web server, and the server would run the code in a subprocess of a sandboxed Python instance. You would also be able to kill the process if it took longer than, say, 5 seconds. Then you return the output back to the client in the response.
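The subprocess-plus-kill part of that can be sketched with the standard library alone. Note this only enforces the time limit; the actual sandboxing still has to come from the PyPy sandbox (links below) or similar, since here the child is a plain unrestricted interpreter:

```python
import subprocess
import sys

def run_untrusted(source, timeout=5):
    """Run user-supplied source in a child interpreter, capture its output,
    and kill it if it exceeds `timeout` seconds."""
    proc = subprocess.Popen(
        [sys.executable, "-c", source],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    try:
        out, err = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.communicate()  # reap the dead child
        return "", "error: time limit exceeded"
    return out, err
```

The `(out, err)` pair is what you would serialize into the Ajax response.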
See these links for help on a PyPy sandbox:
http://doc.pypy.org/en/latest/sandbox.html
http://readevalprint.com/blog/python-sandbox-with-pypy.html
To create a fully interactive REPL would be even more involved. You would need to keep an interpreter alive for each client on your server, then accept Ajax "lines" of input, run them through the interpreter by communicating with the running process, and return the output.
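The per-client interpreter state can be sketched with the stdlib `code` module. `ReplSession` here is a made-up name, and in production each session would live inside the sandboxed process rather than inside your web server, but it shows the keep-an-interpreter-alive idea:

```python
import code
import contextlib
import io

class ReplSession:
    """One interpreter kept alive per client; push() feeds it one line."""
    def __init__(self):
        self.console = code.InteractiveConsole()

    def push(self, line):
        """Return (needs_more_input, captured_stdout) for one input line."""
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            needs_more = self.console.push(line)
        return needs_more, buf.getvalue()
```

Because the console object persists between calls, variables defined by one Ajax request are still visible to the next one from the same client.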
Overall, not trivial. You would need some strong dev skills to do this comfortably. You may find this task a bit daunting if you are just learning.
There's more to do here than you think.
The major problem is that you cannot let people run arbitrary Python code on your webserver. For example, what happens if they do
import os
os.system("rm -rf *.*")
So clearly you have to run this Python code securely. But then you have the problem of securing Python, which is basically impossible given how dynamic it is. So you'll probably have to run the Python shell in a virtual machine, which comes with its own headaches.
Have you seen e.g. http://code.google.com/p/google-app-engine-samples/downloads/detail?name=shell_20091112.tar.gz&can=2&q=?
One recent option for this is to use Repl.it.
This option is awesome because the interpreters are implemented in JavaScript, so compilation and execution happen on the user's side, meaning the server is free of those vulnerabilities.
They have interpreters for Python 3, Python, JavaScript, Java, Ruby, PHP...
I strongly recommend checking out their site at http://repl.it
Look into LXC containers. They have a pretty cool API that you can use to create lightweight Linux containers. You could run the subprocess commands inside such a container so that the end user can't mess with your main server.