Brief summary:
I have two files: foo1.pyw and foo2.py
I need to send large amounts of sensitive information to foo2.py from foo1.pyw, and then back again.
Currently, I am doing this by writing to a .txt file and then opening it from foo2.py using os.system('foo2.py [text file here] [other arguments passing information]'). The problem is that the .txt file leaves a trace on disk even after it is removed. I need to send information to foo2.py and back without writing to a temp file.
The information will be formatted text, containing only ASCII characters, including letters, digits, symbols, returns, tabs, and spaces.
I can give more detail if needed.
You could use encryption such as AES with Python (http://eli.thegreenplace.net/2010/06/25/aes-encryption-of-files-in-python-with-pycrypto) or use a transport layer: https://docs.python.org/2/library/ssl.html
If what you're worried about is the traces left on the HD, and real-time interception is not the issue, why not just shred the temp file afterwards?
Alternatively, for a lot more work, you can set up a ramdisk and hold the file in memory.
The right way to do this is probably with a subprocess and a pipe, via subprocess.Popen. You can then pipe information directly between the two scripts.
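For illustration, here is a minimal sketch of that pipe approach, assuming foo2.py reads the text from stdin and writes its reply to stdout; nothing touches the disk, and the upper-casing in foo2.py is just a placeholder for the real processing:

# foo1.pyw
import subprocess
import sys

proc = subprocess.Popen([sys.executable, "foo2.py"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
# Send the sensitive text down the pipe and read the reply back.
reply, _ = proc.communicate("sensitive text\twith tabs and\nnewlines")

# foo2.py
import sys

data = sys.stdin.read()          # receive the text from foo1.pyw
sys.stdout.write(data.upper())   # send a (placeholder) result back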
I think the simplest solution would be to just call the function within foo2.py from foo1.py:
# foo1.py
import foo2
result = foo2.do_something_with_secret("hi")
# foo2.py
def do_something_with_secret(s):
    print(s)
    return 'yeah'
Obviously, this wouldn't work if you wanted to replace foo2.py with an arbitrary executable.
This may be a little tricky if the two are in different directories, run under different versions of Python, etc.
I have a Python script that emits text files which are inputs to a third-party black-box application; it constructs the files according to input data and command-line options. Depending on the command-line options, a couple of these big text files differ only in one or two lines; I'll create an example below. Sadly, there is no good way to implement this on the black-box side of things: it only accepts straight directives, and if statements inside its inner loops make it very slow.
What I would like is to have a python module that contained the text with some cpp-like directives to choose the right lines in the text file it outputs. E.g., a file like blackboxText.py:
__txt="""
...
bunch of code
...
#if OPTION1
<command sequence one>
#else
<command sequence two>
#endif
...
bunch of code
...
"""
def get(options):
    return cpp(__txt, options)
What I do not want is to actually have to run cpp; I need to stick to Python as the only executable at this stage.
I don't need the whole suite of cpp directives like #include, but it doesn't hurt if the implementation is complete. And it doesn't have to be cpp; it could be anything approximately similar in functionality.
Is there a Python module that can parse cpp-like directives like this in a text block?
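For illustration, here is a minimal hand-rolled sketch of such a filter; it is not an existing module, and it only handles un-nested #if NAME / #else / #endif, where NAME is looked up in an options dict:

def cpp(text, options):
    # Keep or drop lines based on #if/#else/#endif directives; everything else
    # is passed through unchanged. Nesting and #include are not supported.
    out = []
    keep = True        # are we currently emitting lines?
    in_block = False   # are we inside an #if ... #endif block?
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#if "):
            in_block = True
            keep = bool(options.get(stripped[4:].strip()))
        elif stripped == "#else" and in_block:
            keep = not keep
        elif stripped == "#endif":
            in_block = False
            keep = True
        elif keep:
            out.append(line)
    return "\n".join(out)

With that in place, get({"OPTION1": True}) would return the text containing <command sequence one>, and get({"OPTION1": False}) the text containing <command sequence two>.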
(Background: On an NTFS partition, files and/or folders can be set to "compressed", like it's a file attribute. They'll show up in blue in Windows Explorer, and will take up less disk space than they normally would. They can be accessed by any program normally, compression/decompression is handled transparently by the OS - this is not a .zip file. In Windows, setting a file to compressed can be done from a command line with the "Compact" command.)
Let's say I've created a file called "testfile.txt", put some data in it, and closed it. Now, I want to set it to be NTFS compressed. Yes, I could shell out and run Compact, but is there a way to do it directly in Python code instead?
In the end, I cheated a bit and simply shelled out to the command-line Compact utility. Here is the function I ended up writing. Errors are ignored, and it returns the output text from the Compact command, if any.
def ntfscompress(filename):
    import subprocess
    _compactcommand = 'Compact.exe /C /I /A "{}"'.format(filename)
    try:
        _result = subprocess.run(_compactcommand, timeout=86400,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT, text=True)
        return _result.stdout
    except:
        return ''
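For completeness, here is a sketch of doing it without shelling out, using ctypes and the documented FSCTL_SET_COMPRESSION control code; treat it as an illustration rather than tested production code:

import ctypes
import ctypes.wintypes as wintypes

FSCTL_SET_COMPRESSION = 0x9C040        # from winioctl.h
COMPRESSION_FORMAT_DEFAULT = 1

def ntfscompress_direct(filename):
    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.restype = wintypes.HANDLE
    handle = kernel32.CreateFileW(
        filename,
        0xC0000000,    # GENERIC_READ | GENERIC_WRITE
        0x00000003,    # FILE_SHARE_READ | FILE_SHARE_WRITE
        None,
        3,             # OPEN_EXISTING
        0,
        None)
    if handle == wintypes.HANDLE(-1).value:   # INVALID_HANDLE_VALUE
        raise ctypes.WinError()
    try:
        fmt = ctypes.c_ushort(COMPRESSION_FORMAT_DEFAULT)
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(
            handle,
            FSCTL_SET_COMPRESSION,
            ctypes.byref(fmt), ctypes.sizeof(fmt),
            None, 0,
            ctypes.byref(returned),
            None)
        if not ok:
            raise ctypes.WinError()
    finally:
        kernel32.CloseHandle(handle)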
I want to pass an input FASTA file stored in a variable, say inp_a, from Python to bowtie and write the output into another variable, out_a. I want to use
os.system ('bowtie [options] inp_a out_a')
Can you help me out?
Your question asks for two things, as far as I can tell: writing data to disk, and calling an external program from within Python. Without more detailed requirements, here's what I would write:
import subprocess
data_for_bowtie = b"some genome data, lol"
with open("input.fasta", "wb") as input_file:
    input_file.write(data_for_bowtie)
subprocess.call(["bowtie", "input.fasta", "output.something"])
There are some fine details here which I have assumed. I'm assuming that you mean bowtie, the read aligner. I'm assuming that your file is a binary, non-human-readable one (which is why there's that b in the second argument to open) and I'm making baseless assumptions about how to call bowtie on the command line because I'm not motivated enough to spend the time learning it.
Hopefully, that provides a starting point. Good luck!
I have a set of files named 16ID_#.txt where # represents a number. I want to check if a specific file number exists, using os.path.exists(), before attempting to import the file into Python. When I put together my variable for the folder where the files are with the name of the file (e.g.: folderpath+"\16ID_#.txt"), Python interprets the "\16" as an escape sequence (it becomes character 14, which displays as a music note) rather than a backslash followed by 16.
Is there any way I can prevent this, so that folderpath+"\16ID_#.txt" is interpreted as I want it to be?
I cannot change the names of the files, they are output by another program over which I have no control.
You can use / to build paths, regardless of operating system, but the correct way is to use os.path.join:
os.path.exists(os.path.join(folderpath, "16ID_#.txt"))
I gather these are Windows paths. The problem is that you need to escape the backslash, because \16 is interpreted as an escape sequence. So you need to put \\16 instead of \16, or use a raw string such as r"\16ID_#.txt".
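To make the difference concrete, here is a small sketch (the folder path is made up):

import os

folderpath = r"C:\data\output"

bad = folderpath + "\16ID_1.txt"              # "\16" becomes character 14, not \ + 16
ok1 = folderpath + "\\16ID_1.txt"             # escaped backslash
ok2 = folderpath + r"\16ID_1.txt"             # raw string
ok3 = os.path.join(folderpath, "16ID_1.txt")  # portable, and nothing to escape

print(os.path.exists(ok3))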
Python 2.4.x (cannot install any non-stock modules).
Question for you all (assuming use of subprocess.Popen).
Say you had 20 - 30 machines - each with 6 - 10 files on them that you needed to read into a variable.
Would you prefer to scp into each machine once for each file (120 - 300 scp commands total), reading each file into a variable after it has been copied down and then discarding the local copy?
Or SSH into each machine once for each file, reading the file into memory (120 - 300 ssh commands total)?
Or is there some other way, which I cannot think of, to grab all the desired files in one shot per machine (the files are named YYYYMMDD.HH.blah, and a range such as 20111023.00 - 20111023.23 would be given) and read them into memory?
Depending on the size of the file, you can possibly do something like:
...
files = "file1 file2 ..."
myvar = ""
for tm in machine_list:
    myvar = myvar + subprocess.check_output(["ssh", "user@" + tm, "/bin/cat " + files])
...
file1, file2, etc. are space-delimited. Assuming all the machines are Unix boxes, you can /bin/cat them all in one shot from each machine (this assumes you simply want to load the ENTIRE content into one variable). Variations of the above are possible; ssh will be simpler to diagnose.
At least that's my thought.
UPDATE
since subprocess.check_output is not available in Python 2.4, use something like
myvar = myvar + Popen(["ssh", "user@" + tm ... ], stdout=PIPE).communicate()[0]
Hope this helps.
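For completeness, here is a slightly fuller sketch of the Popen variant that sticks to what the Python 2.4 standard library provides; the host names, user name, and file names below are placeholders:

from subprocess import Popen, PIPE

machine_list = ["host1", "host2"]              # hypothetical hosts
files = "20111023.00.blah 20111023.01.blah"    # space-delimited remote files

myvar = ""
for tm in machine_list:
    # One ssh call per machine; the remote cat concatenates all files to stdout.
    proc = Popen(["ssh", "user@" + tm, "/bin/cat " + files], stdout=PIPE)
    output, _ = proc.communicate()
    myvar = myvar + output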
scp lets you:
Copy entire directories using the -r flag: scp -r g0:labgroup/ .
Specify a glob pattern: scp 'g0:labgroup/assignment*.hs' .
Specify multiple source files: scp 'g0:labgroup/assignment1*' 'g0:labgroup/assignment2*' .
Not sure what sort of globbing is supported, odds are it just uses the shell for this. I'm also not sure if it's smart enough to merge copies from the same server into one connection.
You could run a remote command via ssh that uses tar to tar the files you want together (allowing the result to go to standard out), capture the output into a Python variable, then use Python's tarfile module to split the files up again. I'm not actually sure how tarfile works; you may have to put the read output into a file-like StringIO object before accessing it with tarfile.
This will save you a bit of time, since you'll only have to connect to each machine once, reducing time spent in ssh negotiation. You also avoid having to use local disk storage, which could save a bit of time and/or energy — useful if you are running in laptop mode, or on a device with a limited file system.
If the network connection is relatively slow, you can speed things up further by using gzip or bzip compression; decompression is supported by tarfile.
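Here is a sketch of that tar-over-ssh approach; the user, host, and file glob are placeholders, and this is the Python 2 spelling to match the Python 2.4 constraint in the question:

import tarfile
from StringIO import StringIO
from subprocess import Popen, PIPE

# One ssh call: the remote tar writes an archive of the wanted files to stdout.
proc = Popen(["ssh", "user@host", "tar cf - 20111023.*.blah"], stdout=PIPE)
tar_data, _ = proc.communicate()

# Unpack the archive entirely in memory via a file-like StringIO object.
archive = tarfile.open(fileobj=StringIO(tar_data))
contents = {}
for member in archive.getmembers():
    contents[member.name] = archive.extractfile(member).read()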
As an extra to Inerdia's answer, yes you can get scp to transfer multiple files in one connection, by using brace patterns:
scp "host:{path/to/file1,path/to/file2}" local_destination"
And you can use the normal goodies of brace patterns if your files have common prefixes or suffixes:
scp "host:path/to/{file1,file2}.thing" local_destination"
Note that the patterns are inside quotes, so they're not expanded by the local shell before calling scp. I have a host with a noticeable connection delay, on which I created two empty files. Executing the copy as above (with the brace pattern quoted) produced a single delay, after which both files transferred quickly. When I left out the quotes, so the local shell expanded the braces into two separate host:file arguments to scp, there was a noticeable delay before the first file and another between the two files.
This suggests to me that Inerdia's suggestion of specifying multiple host:file arguments will not reuse the connection to transfer all the files, but using quoted brace patterns will.
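Tying this back to Python: if you build the scp command with subprocess and a list of arguments, no local shell is involved, so the brace pattern is passed through as-is, the same as the quoted form above. A small sketch, with made-up paths:

import subprocess

# The pattern is a single argument, so the local shell never expands it.
subprocess.call(["scp", "host:{path/to/file1,path/to/file2}", "local_destination"])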