Controlling VMs using Python scripts - python

I want to manage virtual machines (any flavor) using Python scripts: for example, create a VM, start it, stop it, and access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a piece of software called libvirt. Do any of you think this would help me?
Any insights on how to do this? Thanks for your help.

For AWS, use boto.
For GCE, use the Google API Python Client Library.
For OpenStack, use python-openstackclient and import its methods directly.
For VMware, google it.
For Opsware, abandon all hope: their API is undocumented and has about 12 years of accumulated abandoned methods to dig through, with an equally insane data model backing it.
For direct libvirt control, there are Python bindings for libvirt. They work very well and closely mimic the C library; a short sketch follows after this list.
I could go on.
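As a minimal illustration of those libvirt bindings (pip install libvirt-python): the connection URI and the guest name "kali" below are placeholders, and on a host running VirtualBox, libvirt's vbox:///session URI targets the VirtualBox driver where that driver is available.

import libvirt

conn = libvirt.open("qemu:///system")  # connect to the hypervisor
try:
    dom = conn.lookupByName("kali")    # placeholder guest name
    if not dom.isActive():
        dom.create()                   # start the VM
    print(dom.info())                  # state, max memory, vCPUs, CPU time
    dom.shutdown()                     # ask the guest to shut down cleanly
finally:
    conn.close()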

Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
# grab the image
docker pull kalilinux/kali-linux-docker
# run a specific command
docker run kalilinux/kali-linux-docker <some_command>
# open an interactive terminal in the image
docker run -t -i kalilinux/kali-linux-docker /bin/bash
If you want to mount a local volume, use the `-v <host_path>:<container_path>` switch in your run command:
# mount the local ./training/webapp directory at /webapp inside the Kali image
docker run -v <absolute path to training/webapp>:/webapp kalilinux/kali-linux-docker <some_command>
Note that these are run from the regular Windows prompt; to drive them from Python you would need to wrap them in subprocess calls, as sketched below.
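A minimal sketch of such a wrapper, assuming Docker is on the PATH (the image and command are just the ones from above):

import subprocess

# run `uname -a` inside the Kali image and capture its output
result = subprocess.run(
    ["docker", "run", "--rm", "kalilinux/kali-linux-docker", "uname", "-a"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)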

Related

Mounting a virtual environment via SSHFS on local machine using its Python3 file not working

So I have mounted a part of a development server which holds a virtual environment that is used for development testing. The reason for this is to get access to the installed packages, such as Django-rest-framework and Django itself, without having to set them up locally (and to be sure to use the same versions as the development server). I know that it's perhaps better to use Docker for this, but that's not the case right now.
The way I've done it is by installing SSHFS via an external brew tap (as it's no longer supported in brew core), via this link: https://github.com/gromgit/homebrew-fuse
After that I've run this command in the terminal to mount, over SSH, the specific part of the development server that holds the virtual environment:
sshfs -o ssh_command='ssh -i /Users/myusername/.ssh/id_rsa' myusername@servername:/home/myusername/projectname/env/bin ~/mnt/projectname
It works fine and I have it mounted on my local disk in mnt/projectname.
Now I go into VSCode and go into the folder and select the file called "python3" as my interpreter (which I should, right?). However, this file is just an alias, being 16 bytes in size. I suspect something is wrong here, but I'm not sure on how to fix it. Can someone maybe take a look and give some input? I'll attach a screenshot of the mounted directory.
[Screenshot of the virtualenv directory mounted on the local machine]
The solution to the problem was using the VSCode extension Remote - SSH to run VSCode directly in the remote location, and from there being able to access the virtual environment.

Running Ruby, Node, Python and Docker on the new Apple Silicon architecture? [closed]

How do you get Ruby, Python, and Node.js development environments running on the Apple Silicon architecture? What about virtualization software, e.g. Docker?
Programming languages such as Ruby, Node, and Python should run on the Apple M1 chip, but Docker is not supported as of now (they are working on it).
Docker for Mac Issue
https://github.com/docker/for-mac/issues/4733
The Docker team might be working on fixing the issue, as per this:
https://github.com/docker/roadmap/issues/142
My recommendation:
Get it now only if you want to build iOS applications. Since most people don't have the laptop yet, we might end up with a lot of other issues.
UPDATE:
This answer is more appropriate now:
https://stackoverflow.com/a/65253659/8216911
I've tried many things and had some real trouble getting things working, but in the end, here is the simplest way I've found to get Docker running on a new Mac Silicon M1 chip.
Docker does not work natively, VirtualBox doesn't work, Parallels doesn't work... in the end, it comes down to using UTM to create a virtual machine and installing Ubuntu Server on it. Once you have that, you can install whatever you want on it (Docker, Node.js, Apache, PHP, MariaDB, ...).
Then you set everything up so you can use all your favorite Mac OS tools (Terminal, Transmit, VS Code, Safari, ...) to work, just as if you had all that on your local file system.
1 - Download Linux installation disk
Grab an ARM linux distribution. I took Ubuntu server 20.04 LTS:
https://cdimage.ubuntu.com/releases/20.04/release/ubuntu-20.04.1-live-server-arm64.iso
It's 922 MB.
I got it from here: https://ubuntu.com/download/server/arm
You can choose any Linux distribution, but make sure you get the ARM version (some distributions don't have one).
2 - Download UTM
UTM is virtualisation software mainly aimed at iOS devices, but it works on Mac OS too.
https://github.com/utmapp/UTM/releases/download/v2.0.14/UTM.dmg
That one is 255 MB.
Future versions will be available from here: https://github.com/utmapp/UTM/releases/
Simply download the package, open it, and launch the application that is inside.
3 - Create your VM
Create your new VM, attach the linux installation disk to it and launch the VM following these steps here:
https://github.com/utmapp/UTM/wiki/Install-Ubuntu-ARM64-on-Apple-M1
Basically:
click Create a New VM
in the Information tab: choose a name and an icon for your VM
in the System tab:
in Hardware choose ARM64 (aarch64) architecture
give it some memory (how about 4 GB ...)
in the Drives tab:
create your main drive with New Drive, interface VirtIO and choose the size you want (I chose 20 480 MB), then click Create.
create the CD drive with New Drive, check Removable, interface USB, click Create
Save the VM
Select your shiny new VM in the sidebar and in the bottom right corner, click Browse and select your Linux installation ISO virtual disk.
You can now launch the VM; it will boot from the Linux installation CD. Install Linux.
During this classic installation process, you will be asked to create a user account on the Linux system (let's call it bob). When the installation is finished, shut down the VM and remove the installation disk before rebooting.
4 - Working inside your VM
When you restart the VM, you get a terminal asking you to log into Linux, using the username and password you created during installation.
You can now install Docker, openssh-server, Node.js, etc., using classic apt-get commands.
5 - Working in your VM from MacOS
If, like me, you failed to choose the right keyboard layout, you might have trouble typing some special characters. The best way to work with your VM now is from outside of it.
Stop the VM (sudo shutdown -h now if you are already inside the shell) and go back to UTM:
Select your VM in the left side panel and click the top right button to edit the VM again:
Go to the Network tab and in front of Port Forward, click New.
You need to manually add a new port forwarding directive for each port in your VM you want to access from your Mac OS Host.
For example, for SSH: in the new port forward form, simply write 22 in Guest Port and whatever you want in Host Port (let's say 3022).
Now you can restart your VM, and in a normal Mac OS Terminal you can log into your VM with
ssh -p 3022 bob@localhost
If you don't want to type your password each time, copy the content of ~/.ssh/id_rsa.pub (from Mac OS) into a newly created /home/bob/.ssh/authorized_keys text file inside the VM.
6 - copying files via sftp
Sadly, I did not manage to access the content of the VM directly with the Finder. I had to use the famous FTP client Transmit.
Create a new connection with:
Protocol: SFTP
Host: localhost
User: bob
Password: [your password]
Port: 3022
(yes, the port is the same as SSH)
You can now freely explore and copy files to and from your VM.
Oh but wait ... there is more!
7 - working with VS Code on your VM
Now you can also work on your VM, from your Mac OS VS Code, by installing the Remote Development extension:
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack
Once the extension is installed, click on the green >< sign at the bottom left of your VS Code window and choose Remote-SSH: Connect to Host...
Choose Add New SSH Host and type ssh -p 3022 bob@localhost
Now you can work in VS Code on a project inside your VM just as if it was in your local file system.
I do some Nuxt.js development that calls an API powered by Apache / PHP / MySQL (I had to switch to MariaDB because I could not find a working ARM version of MySQL), all running in different Docker containers inside the VM using docker-compose.
Having port-forwarded guest port 3000 to host port 3000, I can browse the front end with Safari just as if it was all running natively on Mac OS.
I hope this all saves you some time.
For Docker there is a technical preview out: https://docs.docker.com/docker-for-mac/apple-m1/
You can run Ruby, Python, etc. directly on a Mac M1 by setting up a terminal that runs under Rosetta mode. Then run Homebrew, and you can use the existing x86_64 architecture brew taps. I'm using /bin/bash as my Mac shell rather than zsh, but you could adapt the below for zsh if you prefer.
Log in to your normal shell and install ARM Homebrew to /opt/homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Now set up a new "Rosetta shell" Terminal profile whose Shell tab has the Run command "env /usr/bin/arch -x86_64 /bin/bash --login".
Log in again under the Rosetta shell and install x86_64 Homebrew to /usr/local/homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Edit your ~/.bash_profile and add some code to detect whether the shell is running under Rosetta and, if so, use /usr/local/homebrew instead of the usual /opt/homebrew:
if [ "$(sysctl -n sysctl.proc_translated)" = "1" ]; then
# run under rosetta 2 with
# env /usr/bin/arch -x86_64 /bin/bash --login
#local brew_path="/usr/local/homebrew/bin"
eval $(/usr/local/bin/brew shellenv)
export PS1="i \D{%I:%M %p}:\w $ "
else
#local brew_path="/opt/homebrew/bin"
eval $(/opt/homebrew/bin/brew shellenv)
fi
Now log in to your Rosetta shell and run commands like
$ brew install ruby
and then you can run Ruby.
I have also managed to get a Vagrant virtual machine running Fedora 33 for ARM on a Mac M1 under the Parallels virtualisation beta. That might help with Apache and PHP. Instructions here:
https://github.com/peterdragon/packer-M1-parallels-fedora33
Seems everything will work as is...
From the event presentation they said "Existing Mac apps that have not been updated to Universal will run seamlessly with Apple’s Rosetta 2 technology."

Copy files from a remote windows virtual machine

I want to write a Python script that can copy log files from a remote Windows 10 virtual machine to the script's machine (Windows), as well as delete files. A developer in my workplace uses WMI with C# to do this kind of thing, but I haven't been able to find anything on this topic for Python.
You can use SSH for that.
Paramiko is an awesome library that can run SSH from Python: http://www.paramiko.org/
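A minimal sketch, assuming the Windows VM runs an SSH server (e.g. Windows 10's built-in OpenSSH); the host, credentials, and paths below are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.5", username="admin", password="secret")

sftp = client.open_sftp()               # SFTP channel over the SSH session
sftp.get("C:/logs/app.log", "app.log")  # copy remote file to local machine
sftp.remove("C:/logs/app.log")          # delete the remote file
sftp.close()
client.close()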

How do I run a subprocess using local libraries on a remote server using mounted file systems?

I am using a subprocess to run a program on another machine through a mounted network file system.
For my first step in this project I have been using sshfs mount and wget:
sys_process = subprocess.Popen(wget_exec.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Using the command wget works perfectly.
Using the command /mnt/ssh_mount/wget does not execute.
System libraries:
my remote system is Arch Linux, whose wget calls for libpcre.so.1
my local system is Ubuntu, which uses libpcre3, so libpcre.so.1 is missing
I know this because when I call the wget command through the SSH mount (/mnt/ssh_mount/bin/wget) it throws an error. I do not wish to install the needed libraries on every system that uses this, as that defeats the purpose of trying to run something remotely.
For good measure, permission checks have been made.
How do I get the command to use the local libraries?
I hope to use NFS as well, which would exclude the following as solutions:
Python subprocess - run multiple shell commands over SSH
Paramiko
I have tried (with no success):
os.chdir('/mnt/ssh_mount'), which still fails with "error while loading shared libraries: libpcre.so.0: cannot open shared object file: No such file or directory"
Hard-coding the mount path also assumes a stable mount point, which would require changes in two places whenever the environment changes (this seems wrong coming from a database-normalization background; I would assume the same holds for code/sysadmin work).
You are not actually running the wget command on the remote machine; you are trying to run the remote machine's binary on your local system, and the command is failing due to incompatible library versions. sshfs, NFS, and other network mounting protocols simply mount the remote filesystem as an extension of your local one; they don't allow for remote execution. To do that, you'll need to open a remote shell over SSH and execute the Arch wget command through that, as sketched below.
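A minimal sketch of that approach, keeping the subprocess pattern from the question (the host and URL are placeholders):

import subprocess

# run the remote machine's own wget over SSH instead of executing
# its binary through the sshfs mount
cmd = ["ssh", "user@archbox", "wget", "-q", "-O", "-", "https://example.com"]
sys_process = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(sys_process.stdout[:200])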

How do I run code in an Amazon EC2 instance?

I understand nearly nothing about how EC2 works. I created an Amazon Web Services (AWS) account, then launched an EC2 instance.
Now I would like to execute a Python script in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible uses of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide.
Here's a very simple procedure to move your Python script from your local machine to an EC2 instance and run it:
1. scp -i <filepath to Pem> <filepath to Py file> ec2-user@<Public DNS>.compute-1.amazonaws.com:<filepath in EC2 instance where you want your file to be>
2. cd to the directory in EC2 containing the file and run python <Filename.py>. There, it executes.
Here's a concrete example for those who like things shown step-by-step:
In your local directory, create a Python script with the following code: print("Hello AWS")
Assuming you already have AWS set up and you want to run this script in EC2, you need to SCP (Secure Copy Protocol) your file to a directory in EC2. Here's an example:
- My filepath to pem is ~/Desktop/random.pem.
- My filepath to py file is ~/Desktop/hello_aws.py
- My public DNS is ec22-34-12-888
- The ec2 directory where I want my script to be is in /home/ec2-user
- So the full command I run in my local terminal is:
scp -i ~/Desktop/random.pem ~/Desktop/hello_aws.py ec2-user@ec2-34-201-49-170.compute-1.amazonaws.com:/home/ec2-user
Now SSH into your EC2 instance, cd to /home/ec2-user (or wherever you put your file), and run python hello_aws.py.
You have a variety of options. You can browse through a large library of AMIs here.
You can import a vm, instructions are here.
This is a general article about AWS and python.
And in this article, the author takes you through a more advanced setup with a combination of datastores in Python, using the highly recommended Django framework.
Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java-based SSH client
Plugins -> SCFTP File Transfer
Upload your files
Run your files in the background (with '&' at the end, or use nohup)
Be sure to select an AMI with Python included; you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them.
Running scripts on Linux EC2 instances
I had to run a script on Amazon ec2 and learned how to do it. Even though the question was asked years back, I thought I would share how easy it is today.
Setting up EC2 and SSH-ing to the EC2 host
Sign up and launch an EC2 instance with default settings (do not forget to save the certificate file that is generated while launching the instance).
Once the EC2 instance is up and running, give the certificate file the required permissions: chmod 400 /path/my-key-pair.pem (or .cer file)
Run the command: ssh -i /path/my-key-pair.pem(.cer) USER@<Public DNS> (the USER value changes based on the operating system you launched; refer to the paragraph below for details. The Public DNS can be obtained on the EC2 instance page.)
Use the ssh command to connect to the instance. You specify the private key (.pem) file and user_name#public_dns_name. For Amazon Linux, the user name is ec2-user. For RHEL, the user name is ec2-user or root. For Ubuntu, the user name is ubuntu or root. For Centos, the user name is centos. For Fedora, the user name is ec2-user. For SUSE, the user name is ec2-user or root. Otherwise, if ec2-user and root don't work, check with your AMI provider.
Clone the script to EC2
In order to run scripts on EC2, I prefer storing the code on GitHub as a repo, or as a gist (if you need to keep the code private), and cloning it into EC2.
The method mentioned above is very easy and not error-prone.
Running the Python script
I worked with an RHEL Linux instance and Python was already installed, so I could run my Python script directly after SSH-ing to the host. It depends on the operating system you choose. Refer to the AWS manuals if it's not already installed.
Reference: AWS Doc
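If you would rather drive the whole thing from Python itself, boto3 (the current AWS SDK for Python, successor to the boto library mentioned earlier on this page) can launch an instance and hand it a startup script via user data. A hedged sketch; the AMI ID and key pair name are placeholders:

import boto3

ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    # shell script executed once at first boot
    UserData="#!/bin/bash\npython3 -c 'print(\"Hello AWS\")' > /tmp/hello.log\n",
)
print("Launched:", instances[0].id)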
