SOLVED:
I figured out that the fix for this was adding env.abort_on_prompts = True to my fabfile.
This is a very specific question ... but I have a Python function that checks the OS version of a specific host. It goes line by line through a list of hosts, attempts to connect to each one, and outputs the OS information it finds. But that is just background ...
So my real question is whether I can skip the hosts I cannot access. It does fine with many hosts, but it will hit one where the screen prompts for "Login password for 'yourAdminUser':". I want to know if there is a way for the script to recognize when this prompt appears on the console, terminate that connection attempt, and move on to the next host.
I would paste my code, but it is only a few lines and contains nothing that would explain why I am being prompted for a password I do not have.
Thanks
EDIT: I've pasted my function below.
# Fabric 1.x task: copy the check script to the host and run it
from fabric.api import put, sudo
from fabric.colors import green

def get_os():
    put(local_path="scripts/check_OS.ksh", remote_path="/tmp/check_OS.ksh")
    sudo('chmod u+x /tmp/check_OS.ksh')   # make the script executable
    output = sudo("/tmp/check_OS.ksh")    # run it and capture stdout
    print green("OS: {}".format(output))
I've got a Python script for (Foundry) Nuke that listens for commands and executes a Write node for every command received. I've noticed that if I don't call nuke.scriptOpen(my_renderScript) before nuke.execute(writeNode,1,1) and nuke.scriptClose(my_renderScript) after it, then the write command seems to execute but nothing is written to file, despite me changing knob values before calling execute again.
The reason I want to avoid scriptOpen and scriptClose every time I execute -the same node- is performance. I'm new to Nuke, so correct me if I'm wrong, but it's inefficient to unload and reload a script every time you want to run a node inside it, right?
[EDIT] Here's a simple test script. It waits for command-line input, runs the function, then repeats. If I move the scriptOpen and scriptClose outside the looping / recursive function, it only writes to file once, the first time. On subsequent commands it will "run": Nuke prints "Total render time: " in the console (about 10x faster, since it's not actually writing anything) and pretends it succeeded.
# Nuke12.2.exe -nukex -i -t my_nukePython.py render.nk
# Then it asks for user input. The input should be:
# "0,1,0,1", "1024x1024", "C:/0000.exr", "C:/Output/", "myOutput####.png", 1, 1
# then just keep spamming it and see.
import nuke
import os
import sys
import colorsys

renderScript = sys.argv[1]
nuke.scriptOpen(renderScript)
readNode = nuke.toNode("Read1")
gradeNode = nuke.toNode("CustomGroup1")
writeNode = nuke.toNode("Write1")

def runRenderCommand():
    # Python 2 input() evaluates the line, so the comma-separated input
    # above arrives here as a 7-element tuple
    cmdArgs = input("enter render command: ")
    print cmdArgs
    if len(cmdArgs) != 7:
        print "Computer says no. Try again."
        runRenderCommand()
        return  # don't fall through and process the bad arguments
    nuke.scriptOpen(renderScript)
    colorArr = cmdArgs[0].split(",")
    imageProcessingRGB = [float(colorArr[0]), float(colorArr[1]), float(colorArr[2]), float(colorArr[3])]
    previewImageSize = cmdArgs[1]
    inputFileLocation = cmdArgs[2]
    outputFileLocation = cmdArgs[3]
    outputFileName = cmdArgs[4]
    startFrameToExecute = cmdArgs[5]
    endFrameToExecute = cmdArgs[6]
    readNode.knob("file").setValue(inputFileLocation)
    writeNode.knob("file").setValue(outputFileLocation + outputFileName)
    gradeNode.knob("white").setValue(imageProcessingRGB)
    print gradeNode.knob("white").getValue()
    nuke.execute(writeNode.name(), 20, 20, 1)
    runRenderCommand()
    nuke.scriptClose(renderScript)

runRenderCommand()
The problem was between the chair and the screen. It turns out my example works. My actual code, which I didn't include in the example, was a bit more complex and involved websockets.
But anyway, it turns out I didn't know how Python scoping syntax works ^__^
I was making exactly this error in understanding how the global keyword should be used:
referenced before assignment error in python
So now it does indeed work without opening and closing the Nuke file every time. Funny how the local scope declaration in my code made it look like there were no errors at all... This is why nothing's sacred in scripting languages :)
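For anyone else who trips on this, a minimal sketch of the mistake (my own illustration, not the actual render code):

count = 0

def bump():
    # Without this declaration, the assignment below makes `count` local to
    # bump(), and reading it first raises UnboundLocalError:
    # "local variable 'count' referenced before assignment"
    global count
    count += 1

If the function only assigns (never reads) the name, there's no error at all - it just creates a local that shadows the global, which is why my code looked fine.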
Is there a way to delete this question on the grounds that the problem turned out to be completely unrelated to the question?
Well, that took an unexpected turn. So yes, I had the global problem. BUT ALSO I WAS RIGHT in my original question! It turns out that depending on the nodes you're running, Nuke can decide that nothing has changed (probably the internal hash doesn't change) and therefore that it doesn't need to execute the write command. In my case I was giving it new parameters, but the parameters were the same as before (telling it to render the same frame again).
If I add this global counter to the write node's frame range (even though the source image only has 1 frame), then it works:

nuke.execute(m_writeNode.name(), startFrameToExecute + m_count, endFrameToExecute + m_count, continueOnError=False)
m_count += 1

So I've got to figure out how to make it render the write node without changing frames, as later on I might want to use actual frame numbers, not just bogus count increments.
We're having an issue on a particular server where a call to os.setreuid hangs if the uid passed is not the one for the root user.
The code is in this function (please forgive any typos - the code is on a different, classified system and I'm having to retype it):
import os
import pwd

def run_as(username):
    user = pwd.getpwnam(username)  # raises KeyError if the user is unknown
    uid, gid = user.pw_uid, user.pw_gid
    os.initgroups(username, gid)   # set supplementary groups
    os.setregid(gid, gid)          # drop group privileges first
    os.setreuid(uid, uid)          # hangs here for non-root uids
It successfully finds the user, gets the uid and gid, calls initgroups, and does the setregid. The call to setreuid then hangs: it doesn't return, but it doesn't throw an error either.
This is on a Red Hat Linux server - it might be RHEL7, but I think it's RHEL6. It works on other servers. On the problem server we've checked /etc/passwd and /etc/shadow, and we've deleted the user and remade it. If we pass "root" as the username, it works. Pass any other user and it hangs. Has anyone ever dealt with anything like this, and do you have any words of wisdom?
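Not an answer, but a way to narrow it down: a minimal sketch for capturing where the hung process is blocked, using the faulthandler module (standard library on Python 3; on Python 2 it is a pip backport, so treat its availability on that box as an assumption):

import faulthandler
import signal

# After this, `kill -USR1 <pid>` makes the process dump the Python stack
# of every thread to stderr, showing the exact line it is blocked on.
faulthandler.register(signal.SIGUSR1)

run_as('someuser')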
Ok, so it's possible that the answer to this question is simply "stop using parallel-ssh and write your own code using netmiko/paramiko. Also, upgrade to Python 3 already."
But here's my issue: I'm using parallel-ssh to hit as many as 80 devices at a time. These devices are notoriously unreliable, and they occasionally freeze up after giving one or two lines of output. Then the parallel-ssh code hangs for hours, leaving the script running until I kill it. I've jumped onto the VM running the scripts after a weekend and seen a job that had been stuck for 52 hours.
The relevant pieces of my first code, the one that hangs:

from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               timeout=180, retry_delay=60, pool_size=100,
                               allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
The next thing I tried was the channel_timeout option, because if it takes more than a few minutes to get the command output, I know that the device froze and I need to move on and cycle it later in the script:
from pssh.pssh_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               channel_timeout=180, retry_delay=60,
                               pool_size=100, allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
This version never actually connects to anything. Any advice? I haven't been able to find anything other than channel_timeout for killing an SSH session after a certain amount of time.
The code creates a client object inside a function and then returns only the output of run_command, which includes remote channels to the SSH server.
Since the client object is never returned by the function, it goes out of scope and gets garbage collected by Python, which closes the connection.
Trying to use remote channels on a closed connection will never work. If you capture a stack trace of the stuck script, it is most probably hanging on a use of a remote channel or connection.
Change your code to keep the client alive. The client should ideally also be reused:
from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               timeout=180, retry_delay=60, pool_size=100,
                               allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    # return the client too, so it is not garbage collected
    return client, result
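A hedged sketch of the calling side (the uname command and variable names are my own illustration; the dict-style output iteration is the parallel-ssh 1.x API):

client, result = remote_ssh(ip_list, ssh_user, ssh_pass, 'uname -a')
for host, host_output in result.items():
    for line in host_output.stdout:  # reading stdout needs the live client
        print line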
Make sure you understand where the code is going wrong before jumping to conclusions that will not solve the issue, i.e., capture a stack trace of where it is hanging. The same code doing the same thing will break the same way.
I have a function like this in Django:
import os
import subprocess

from django.contrib.auth.models import User
from django.shortcuts import render_to_response
from django.template import RequestContext

from myapp.models import File, Host  # assumed app-local models

def uploaded_files(request):
    global source
    global password
    global destination
    username = request.user.username
    log_id = request.user.id
    b = File.objects.filter(users_id=log_id, flag='F')  # Get the user id from session; .delete() to use delete
    source = 'sachet.adhikari@69.43.202.97:/home/sachet/my_files'
    password = 'password'
    destination = '/home/zurelsoft/my_files/'
    a = Host.objects.all()  # Lists hosts
    # list the remote files via rsync and strip the listing down to basenames
    command = subprocess.Popen(['sshpass', '-p', password, 'rsync', '--recursive', source],
                               stdout=subprocess.PIPE)
    command = command.communicate()[0]
    lines = (x.strip() for x in command.split('\n'))
    remote = [x.split(None, 4)[-1] for x in lines if x]
    base_name = [os.path.basename(ok) for ok in remote]
    files_in_server = base_name[1:]
    total_files = len(files_in_server)
    # dry run just to read the size and date fields from the listing
    info = subprocess.Popen(['sshpass', '-p', password, 'rsync', source, '--dry-run'],
                            stdout=subprocess.PIPE)
    information = info.communicate()[0]
    command = information.split()
    filesize = command[1]
    #st = int(os.path.getsize(filesize))
    #filesize = size(filesize, system=alternative)
    date = command[2]
    users_b = User.objects.all()
    return render_to_response('uploaded_files.html',
                              {'files': b, 'username': username, 'host': a,
                               'files_server': files_in_server, 'file_size': filesize,
                               'date': date, 'total_files': total_files,
                               'list_users': users_b},
                              context_instance=RequestContext(request))
The main purpose of the function is to transfer files from the server to the local machine and write the data into the database. What I want: there is a single file of 10GB, which will take a long time to copy. Since the copying happens via rsync on the command line, I want to let the user use the other menus while the file is being transferred. How can I achieve that? For example, when the user presses OK, the file starts transferring on the command line, and I want to show the user a "The file is being transferred" message and stop the spinning cursor, or something like that. Is multiprocessing or threading appropriate in this case? Thanks
Assuming that function runs inside a view, your browser will time out before the 10GB file has finished transferring. Maybe you should rethink your architecture for this?
There are probably several ways to do this, but here are some that come to my mind right now:
One solution is to have an intermediary store the status of the file transfer. Before you begin the process that transfers the file, set a flag somewhere, like a database, saying the process has begun. Then, if you make your subprocess call blocking, wait for it to complete, check the output of the command if possible, and update the flag you set earlier.
Then have whatever front end you have poll the status of the file transfer.
Another solution, if you make the subprocess call non-blocking as in your example: in that case you should use a thread which sits there reading stdout and updating an intermediary store which your front end can query for a more 'real time' view of the transfer process, as sketched below.
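A minimal sketch of that thread-plus-flag approach; the in-memory transfer_status dict, job_id, and worker names are my own illustration (a database row or cache key would serve the same purpose in a real deployment):

import subprocess
import threading

transfer_status = {}  # hypothetical store; swap for a DB table or cache key

def start_transfer(job_id, password, source, destination):
    def worker():
        transfer_status[job_id] = 'running'
        proc = subprocess.Popen(
            ['sshpass', '-p', password, 'rsync', '--recursive', source, destination],
            stdout=subprocess.PIPE)
        proc.communicate()  # blocks this worker thread, not the request
        transfer_status[job_id] = 'done' if proc.returncode == 0 else 'failed'
    t = threading.Thread(target=worker)
    t.daemon = True  # don't keep the process alive just for the copy
    t.start()

The view can start the transfer, return immediately, and let the front end poll the stored status.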
What you need is Celery.
It lets you run the job as a background task and return the HTTP response immediately.
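A minimal sketch, assuming a configured Celery app; the broker URL and the task body are illustrative, not the actual project settings:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # assumed broker

@app.task
def transfer_files(password, source, destination):
    import subprocess
    return subprocess.call(
        ['sshpass', '-p', password, 'rsync', '--recursive', source, destination])

The view then calls transfer_files.delay(password, source, destination) and returns at once while a worker does the copy.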
RaviU's solutions would certainly work.
Another option is to call a blocking subprocess in its own thread. This thread could be responsible for setting a flag or information (in memcache, a DB, or just a file on the hard drive) as well as clearing it when the transfer is complete. Personally, there is no love lost between rsync's stdout and me, so I usually just ask the OS for the file size.
Also, if you don't need the file absolutely ASAP, adding "-c" to do a checksum can be good for those giant files. Source: personal experience trying to transfer giant video files over a spotty campus network.
I will say the one problem with all of the solutions so far is that they don't work for "N" files. Eventually, even if you make sure each file can only be transferred once at a time, if you have a lot of different files it will eventually bog down the system. You might be better off just using some sort of task queue unless you know it will only ever be one file at a time. I haven't used one recently, but a quick Google search yielded Celery, which doesn't look too bad.
Every web server has a facility for uploading files, and what it does for large files is divide the file into chunks and merge them after every chunk is received. What you can do here is have a hidden tag in your HTML page with a value attribute: whenever your upload web service returns an OK message, change the hidden value to something relevant, and also write a function that keeps reading the value of that hidden element to check whether the file upload has finished or not.
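On the server side, that poll target can be a trivial view; a hedged sketch reusing the hypothetical transfer_status store from the thread example above (all names are illustrative):

from django.http import HttpResponse

def transfer_progress(request, job_id):
    # the front end polls this URL and updates the hidden element
    return HttpResponse(transfer_status.get(job_id, 'unknown'))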
I have two servers, A and B. I'm supposed to send, let's say, an image file from server A to server B. But before server A sends the file over, I would like to check whether a similar file already exists on server B. I tried using os.path.exists() and it does not work:

print os.path.exists('ubuntu@serverB.com:b.jpeg')

The result returns False even though I have put that exact file on server B. I'm not sure whether it is a syntax error on my part or whether there is a better solution to this problem. Thank you.
The os.path functions only work on files on the same computer. They operate on paths, and ubuntu@serverB.com:b.jpeg is not a path.
In order to accomplish this, you will need to remotely execute a script. Something like this will usually work:
import pipes
import subprocess

def exists_remote(host, path):
    """Test if a file exists at path on a host accessible with SSH."""
    status = subprocess.call(
        ['ssh', host, 'test -f {}'.format(pipes.quote(path))])
    if status == 0:
        return True
    if status == 1:
        return False
    raise Exception('SSH failed')
So you can check whether a file exists on another server with:

if exists_remote('ubuntu@serverB.com', 'b.jpeg'):
    # it exists...
Note that this will probably be incredibly slow - likely more than 100 ms per call, since each call opens a fresh SSH connection.
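If you need to check many files, a hedged sketch for amortizing that per-connection cost by testing all the paths in one SSH round trip (exists_remote_many and its argument names are my own illustration):

import pipes
import subprocess

def exists_remote_many(host, paths):
    """Return {path: True/False} using a single SSH invocation."""
    cmd = '; '.join(
        'test -f {} && echo 1 || echo 0'.format(pipes.quote(p)) for p in paths)
    # check_output raises CalledProcessError if ssh itself fails
    out = subprocess.check_output(['ssh', host, cmd])
    return dict(zip(paths, (flag == '1' for flag in out.split())))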