Executing shell commands using Python in Linux [duplicate]

This question already has answers here:
Running Bash commands in Python
(11 answers)
Closed 2 years ago.
So I just recently started taking an interest in CTFs, playing them on the OverTheWire website. I am still on the first wargame, Bandit, at level 5, if you'd like to look at it. It teaches the use of the Linux command line, etc.
In this challenge I am told that there are hidden files in the home directory; I have to find them, and in one of those files I will find the password for the next level. Note that over 20 hidden files come up.
I thought to myself that I could go through them manually, but it would take forever, so I tried writing a script that I thought would find the hidden files and open them one by one. But it does not work as I wanted.
I wanted to append those hidden files to a list and then run a for loop that opens each one of them, so I could see the results and spot the password.
Code Below
import os
a = []
for i in os.system('find .inhere/'):
    a.append(i)
for j in a:
    print("\n\n cat j ")
It is my first time writing code of this sort, trying to interact with the command line from Python. Can you please advise on how I can go about it, or whether my code can be fixed?

os.system() only returns the exit status of the command (not the STDOUT). You should use the subprocess module, especially subprocess.Popen. I have added several comments to the code for better understanding.
Code:
import subprocess
import sys

def call_command(command):
    """
    Call a command and return its STDOUT.
    :param command: The command to run
    :return: STDOUT as string
    """
    result1 = subprocess.Popen(
        command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True
    )
    # Get the STDOUT and STDERR of the command.
    std_out, std_err = result1.communicate()
    return std_out

# Find files in the test1 folder.
find_result = call_command("find test1/ -type f")
for one_find in find_result.split("\n"):
    if one_find:
        # The result of the cat command will be in the "cat_result" variable.
        cat_result = call_command("cat {}".format(one_find))
        print(cat_result)
        # You can write the result directly to STDOUT with the following line.
        # sys.stdout.write(call_command("cat {}".format(one_find)))
Content of test1 folder:
>>> ll test1/
total 8
drwxrwxr-x 2 user grp 4096 Jul 8 14:18 ./
drwxrwxr-x 18 user grp 4096 Jul 8 14:33 ../
-rw-rw-r-- 1 user grp 29 Jul 8 14:18 test_file.txt
Content of test_file.txt:
>>> cat test1/test_file.txt
Contents of test_file.txt file
Output of the code:
>>> python3 test.py
Contents of test_file.txt file.
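For what it's worth, on Python 3.7+ the higher-level subprocess.run does the same job with less ceremony; a minimal sketch of the same logic, assuming the same test1/ layout:
import subprocess

# Find files in the test1 folder; capture_output=True collects STDOUT and STDERR.
find_result = subprocess.run(
    ["find", "test1/", "-type", "f"],
    capture_output=True, text=True,
).stdout

for one_find in find_result.splitlines():
    if one_find:
        # .stdout holds the captured text of each cat call.
        cat_result = subprocess.run(
            ["cat", one_find], capture_output=True, text=True,
        ).stdout
        print(cat_result)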

Related

Manually reading multiple lines of input in terminal

I am writing Python code for a class which needs multiple lines of input.
For example, I need the input to be in this format:
3 14
12 10
12 5
10 5
When entering this manually on the terminal, I do not know how to signal an end of input.
I have been working around it by entering the inputs in txt files and reading these.
On Linux, use Ctrl+D to type "end of file". On Windows, use Ctrl+Z.
Besides Ctrl+D, in bash you can also use a here-document:
pythonscript <<EOF
3 14
12 10
12 5
10 5
EOF
On Linux and Unix you can find out what the EOF character is by using
stty -a
it will show something like
...
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
...
indicating the eof is ^D, which you can also change using stty.
Then, you can type ^D to signal EOF to a process that's reading its input from the terminal.
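For reference, a minimal sketch of the reading side in Python, so the loop simply ends when EOF arrives (the pair parsing is illustrative):
import sys

pairs = []
# Iterating over sys.stdin stops when EOF is signaled
# (Ctrl+D on Linux, Ctrl+Z then Enter on Windows).
for line in sys.stdin:
    line = line.strip()
    if line:
        a, b = line.split()
        pairs.append((int(a), int(b)))

print(pairs)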

crontab running python code is not saving outputs to file [duplicate]

This question already has an answer here:
crontab failed to run python script at reboot
(1 answer)
Closed 1 year ago.
I've started a Compute Engine instance in Google Cloud, created a folder called "python" in the main directory, and a file "one.py" in that folder. Then I pasted this into one.py:
from datetime import datetime
import os
a=datetime.now()
file_to_open = os.path.join(os.getcwd(), "raw_data.txt")
with open(file_to_open, "a+") as file_object:
    file_object.seek(0)
    data = file_object.read(100)
    if len(data)>0:
        file_object.write("\n")
    file_object.write(str(a))
    file_object.close()
So far, so good. Dates are being saved to the file.
Then I added this to crontab:
crontab -e
...
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
* * * * * python3 ~python/one.py
The cron job is running; I'm getting these outputs after executing
grep CRON /var/log/syslog
Mar 21 11:33:01 instance-2 CRON[605]: (myname) CMD (python3 ~/home/python/one.py)
Mar 21 11:33:01 instance-2 CRON[604]: (CRON) info (No MTA installed, discarding output)
Mar 21 11:34:01 instance-2 CRON[647]: (myname) CMD (python3 ~/home/python/one.py)
Mar 21 11:34:01 instance-2 CRON[646]: (CRON) info (No MTA installed, discarding output)
I thought it might be something to do with the working directory being in a different place, so I did find . raw_data.txt;
there's only one, under ~python/.
Can someone help me fix this? Why isn't the cron job saving the dates to the file?
My assumption is that it is looking for raw_data.txt in the wrong folder and failing because it cannot find the file. To see the full output of the script, add a log file for the cron job to dump its output to. You could use something like: * * * * * /usr/bin/python3 /home/m/one.py >> /home/m/out.log. This would dump the full output of the execution to out.log and give you the information you need to solve the issue.
Without knowing more, my assumption is that the issue is caused by os.getcwd(). This does not get the directory of one.py but the current working directory that cron executes from, so the file gets created (and looked for) relative to cron's working directory, not next to one.py. What you want instead is os.path.dirname(os.path.realpath(__file__)), which gives you the full directory path of one.py regardless of where the script is called from. Change that line to be this:
file_to_open = os.path.join(os.path.dirname(os.path.realpath(__file__)), "raw_data.txt")
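Putting it together, a sketch of the corrected one.py (only that one line changed):
from datetime import datetime
import os

a = datetime.now()
# Resolve raw_data.txt next to this script, not in cron's working directory.
file_to_open = os.path.join(os.path.dirname(os.path.realpath(__file__)), "raw_data.txt")
with open(file_to_open, "a+") as file_object:
    file_object.seek(0)
    data = file_object.read(100)
    if len(data) > 0:
        file_object.write("\n")
    file_object.write(str(a))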

How to get contents of a given "path" using 'ls -l' command and Python

I am using Python and I want to get a listing of all of the files/directories (not nested) at a given path. Meaning I need the exact equivalent output of the "ls -l" command using Python.
For example, at path /opt/test/ the ls -l output is shown below.
-rw-r--r-- 1 user qa-others 16715 Jan 16 13:38 file_2001161337
-rw-r--r-- 1 user qa-others 16715 Jan 16 13:46 file_2001161346
-rw-r--r-- 1 user qa-others 16715 Jan 16 13:54 file_2001161353
My python code is shown below.
print(subprocess.check_output(['ls', '-l']))
How can I pass the path value, i.e. "/opt/temp", and get the same output of "ls -l" as shown above?
You can use pathlib.Path() for this (Python >=3.4):
from pathlib import Path
source = Path('/opt/temp')
# Get all children
content = source.glob('*')
By default this will return an iterator of pathlib.Path objects (cast iterator to list if you need a visual check).
Then you can programmatically access file attributes using pathlib.Path.stat().
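For example, a minimal sketch that prints a few ls -l-style fields from Path.stat() (the formatting is illustrative, not an exact ls replica):
import stat
from datetime import datetime
from pathlib import Path

source = Path('/opt/temp')
for p in sorted(source.glob('*')):
    st = p.stat()
    # Permission string, size, and modification time, roughly like ls -l.
    mtime = datetime.fromtimestamp(st.st_mtime).strftime('%b %d %H:%M')
    print(stat.filemode(st.st_mode), st.st_size, mtime, p.name)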
You are far, far better off using os.listdir, which does essentially exactly what you want.
You can also use os.scandir if you need other information about these entries, though you'll need to filter if you only want directories:
[e for e in os.scandir() if e.is_dir()]
Each of these functions takes a path argument if you want to explicitly specify it, otherwise they run on the current directory.
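If what you actually want is the literal ls -l text, the path can also just be passed as an extra argument in the question's own subprocess approach; a minimal sketch:
import subprocess

# ls accepts the directory as a positional argument.
output = subprocess.check_output(['ls', '-l', '/opt/temp'], universal_newlines=True)
print(output)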

stdout from interactive subprocess is cut short

When using subprocess.Popen on a Windows interactive command-line program, and setting stdout = PIPE or stdout = somefileobject, the output will always be cut short, or truncated, as well as missing prompts and other text.
So my question is: How do I capture all of the output of the subprocess?
More Details below:
I am specifically trying to grab the output from steamcmd. Here are some code examples and outputs I've run through the Python environment in the terminal.
from subprocess import Popen
#This command opens steamcmd, logs in, and calls licenses_print. It dumps
#all licenses available to the user, and then quits the program.
cmd = (['path to steamcmd', 'arg1', 'arg2',...])
Popen(cmd)
In this I didn't set stdout to anything, so it dumps all output into the terminal and I can see everything. It's about 70 lines of text. The last few lines will be this, which is what I expect, and I get this from running steamcmd directly.
License packageID 166844:
- State : Active( flags 0 ) - Purchased : Sat Jun 2 12:43:06 2018 in "US", Wallet
- Apps : 620980, (1 in total)
- Depots : 620981, (1 in total)
But the moment I try to pass this into a file, like below
f = open('path to file', 'w+')
Popen(cmd, stdout = f).wait()
f.close()
The output dumped to the file gets cut short, and the last few lines look like this
License packageID 100123:
- State : Active( flags 512 ) - Purchased : Sat Jun 10 19:34:14 2017 in "US", Complimentary
- Apps : 459860, (1 in total)
- Depots : 4598
You can see it didn't make it to package 166844, and it stops in the middle of the line "- Depots : 459861, (1 in total)"
I have read that PIPE has a size limit and can cause the process to hang, but it has never hung for me, and the output isn't nearly big enough for that; I tried writing straight to the file anyway and it hasn't worked. I've tried check_output and getoutput, but I assume they use the same machinery under the hood.
So again, my question again is: How do I capture all the output from the subprocess?
Edit 1: I've tried reading the output line by line, but it still cuts off at the same place. I've tried PowerShell and Windows Command Prompt. I tried the same thing on Linux Mint, with the Linux build of steamcmd; on Linux I was able to capture all the output from the subprocess and store it in the file. So this may not be a Python issue, but a Windows (or Windows CLI) issue that is causing it to not capture all the output. Whether I tell it to wait, or send the output to PIPE instead of a file, it always cuts it short. Somewhere the last of the output gets dropped before making it to my file.

why does this python script not work?

my python script is the following code:
 1 import subprocess
 2
 3 # initial 'output' to make
 4 r0 = 21
 5 # number of runs to make
 6 rn = 10
 7
 8 rf = r0+rn
 9
10 for i in range(r0, rf):
11     #output directory
12     opt_dir = 'output'+str(i)
13     #put it in this output directory
14     popt_dir = './output'+str(i)
15
16     subprocess.call(['mkdir', opt_dir])
17     subprocess.call(['./exp_fit', 'efit.inp'])
18     subprocess.call(['mv', 'at*', popt_dir])
The intention is this:
I have a program called "exp_fit" which takes an input file "efit.inp". One call to ./exp_fit efit.inp will create output files called 'at0_l0_l0', 'at0_l1_l-1', ... etc. (475 files in total, all starting with 'at').
I have been generating data files by running 'exp_fit', then creating output directories and moving the files into them with the following bash commands
(for example, with the 20th run of my code):
mkdir output20
mv at* ./output20
So I would think that my script should do the same thing. However, it only does the following:
(1) it correctly generates all output files (475 files starting with 'at')
(2) it correctly creates the desired directories (output21 - output30)
(3) it DOES NOT, however, correctly move the output files starting with 'at' into the desired directories. Why is this? Shouldn't the call on line 18 correctly execute the command to move all my files starting with 'at' into the desired directory?
Should I be writing this script in bash instead of Python? What is wrong with it?
Don't issue subprocess calls for things you can do natively from Python. To move files/dirs around, just use os.rename.
To create a directory, use os.mkdir.
To execute an external program, using subprocess is the right tool.
The problem is that this subprocess command
subprocess.call(['mv', 'at*', './output20'])
is not the same as typing this at a prompt
$ mv at* ./output20
In the latter case, the shell's glob expansion converts the single at* argument into a list of matching filenames for the mv command, so the kernel sees the second form as
['mv', 'at0_l0_l0', 'at0_l1_l-1', './output20']
kev's answer tells Python to pass the command through the shell, so the glob expansion will occur.
But the better solution is to use the glob module with os.rename (or shutil.move) and not call a subprocess at all. Creating subprocesses is expensive, and using shell=True could lead to security holes, so it's best to avoid that habit.
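For instance, a minimal sketch of the loop rewritten with os.mkdir, glob, and shutil.move instead of shelling out for mkdir and mv:
import glob
import os
import shutil
import subprocess

r0 = 21   # initial 'output' to make
rn = 10   # number of runs to make

for i in range(r0, r0 + rn):
    opt_dir = 'output' + str(i)
    os.mkdir(opt_dir)                          # replaces subprocess.call(['mkdir', ...])
    subprocess.call(['./exp_fit', 'efit.inp'])
    # Expand the at* pattern in Python, then move each file natively.
    for path in glob.glob('at*'):
        shutil.move(path, opt_dir)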
(Actually, I suggest making the output directory, switching into it, and then running the exp_fit program from within that directory. Then you won't have to move the output. Try that first.)
If shell=True, the command runs through the shell, which performs the glob expansion (the executable argument can specify which shell to use; on Unix, the default shell is /bin/sh). Note that with shell=True the command should be passed as a single string rather than a list:
subprocess.call('mv at* ' + popt_dir, shell=True)
