How to kill a process in Robot Framework (Python)

I am trying to kill a process in Robot Framework. Although the log says the process was killed, I can still see the command prompt window invoked via the Process library.
Is there any way to kill the invoked command prompt in Suite Teardown?
*** Settings ***
Library           Process
Suite Setup       Generic Suite Setup
Suite Teardown    Terminate All Processes    kill=True

*** Test Cases ***
login

*** Keywords ***
Generic Suite Setup
    # This invokes cmd; when I run this I get the error mentioned below
    Run Process    appium    -p    4723
    Run Process    appium    -p    4750
    # I tried to include cmd: no error, but I can't see the cmd window getting invoked
    Run Process    cmd    appium    -p    4750
My Python version: 2.7.14
pybot version: 3.0.2
After removing Start and "cmd" I get the error:
Parent suite setup failed:
WindowsError: [Error 2] The system cannot find the file specified
The Appium path is set in the environment variables.

When you use Start Process, each argument that you would use on a command line needs to be a separate argument in Robot. For example, if you would type appium -p 4723 on the command line, then in Robot you would do:
Start Process  appium  -p  4723
(note: there are two spaces between "Process", "appium", "-p", and "4723")
When you do this, Robot will look through the folders in your PATH environment variable to find a program named "appium" (or "appium.exe" on Windows). If you get the error "cannot find the file specified", that usually means the program you're trying to run isn't in a folder on your PATH. It could also mean that the program isn't installed, or that you misspelled the app name, but I'm assuming neither of those is true in this case.
The simplest solution is to find where the appium executable is, and then use the full and complete path as the first argument to Run Process (e.g. Run Process  C:/the/path/to/appium.exe  -p  4723).
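If you are not sure where the appium executable lives, a couple of lines of Python can show what a PATH lookup resolves to (a sketch; distutils.spawn.find_executable exists on Python 2.7, and shutil.which is the Python 3.3+ equivalent):
# Sketch: print the full path that a PATH search resolves for "appium",
# i.e. roughly the same lookup the Process library performs.
from distutils.spawn import find_executable

print(find_executable("appium"))  # prints None if appium is not on PATH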

Related

Yocto Project Bitbake Unexpected Termination

I just got my i.MX 8M Evaluation Kit and I followed the tutorial to build the system for my board.
I built the system on a host machine running Ubuntu 16.04, and I followed all the instructions in Section 3 to set up the host machine.
I'm trying to build the Wayland image with OPTEE enabled, so the steps are:
1. $ DISTRO=fsl-imx-wayland MACHINE=imx8mqevk source fsl-setup-release.sh -b build-wayland
2. Comment out two SDL settings in local.conf (see the sketch after this list): PACKAGECONFIG_append_pn-qemu-native = " sdl" and PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
3. Enable OPTEE in local.conf
4. $ bitbake fsl-image-qt5-validation-imx
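For reference, a sketch of the local.conf edits for steps 2 and 3; the two SDL lines come from the question itself, while the exact OPTEE setting depends on the BSP release and is only indicated here:
# local.conf (sketch)
# Step 2: comment out the two SDL settings
#PACKAGECONFIG_append_pn-qemu-native = " sdl"
#PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
# Step 3: enable OPTEE here, as described in the BSP release notes
# (the exact variable depends on the release)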
The issue happens after the bitbake command: the build suddenly stops and the host machine is suspended, requiring me to log in again. The bitbake command can be continued with the -k parameter, but the unexplained termination and re-login cycle is really annoying.
By reviewing the bitbake log file bitbake-cookerdaemon.log, I found that every time, just before the unexpected termination, the bitbake command generates the same logs:
Accepting [<socket.socket fd=7, family=AddressFamily.AF_UNIX,
type=SocketKind.SOCK_STREAM, proto=0, laddr=bitbake.sock>] Connecting
Running command ['updateConfig', ...]
Running command ['getVariable', 'BBINCLUDELOGS']
Running command ['getVariable', 'BBINCLUDELOGS_LINES']
Running command ['getSetVariable', 'BB_CONSOLELOG']
Running command ['getUIHandlerNum']
Running command ['setEventMask', ...]
Running command ['getVariable', 'BB_DEFAULT_TASK']
Running command ['setConfig', 'cmd', 'build']
Running command ['buildTargets', ['fsl-image-qt5-validation-imx'], 'build']
Running command ['stateForceShutdown']
Connecting Client
Disconnecting Client
No timeout, exiting.
Exiting
According to my current understanding, the above commands are only supposed to be executed after all tasks have completed. Right now, however, my host machine may invoke these commands while other tasks are still running, and this incorrect sequence leads to the unexpected termination issue.
I'm wondering if anyone has run into a similar problem or knows a solution to this issue?
Any suggestion is welcome. Thank you in advance.
Simon
-----Supplemental Information
Here is the content of the configuration file fsl-imx-wayland.conf:
# i.MX DISTRO for Wayland without X11
include conf/distro/include/fsl-imx-base.inc
include conf/distro/include/fsl-imx-preferred-env.inc

DISTRO = "fsl-imx-wayland"

# Remove conflicting backends
DISTRO_FEATURES_remove = "directfb x11 "
DISTRO_FEATURES_append = " wayland pam systemd"

Opening a terminal application from Python and running custom scripts inside it

I'm working with a piece of software called dc_shell that provides a terminal command (also called dc_shell) on a CentOS Linux server. When I run the dc_shell command, I'm connected to its own shell and I'm able to run scripts/commands inside it. (This is all done manually.)
So the real problem is that I want to do this task entirely from a Python program. Meaning that I have Python code which does some task, and afterwards it has to open dc_shell and run some commands inside it.
I have used subprocess.Popen before, and it has no problem running commands like ls or other general terminal commands. But when I run the dc_shell command, it seems to crash and nothing happens, and when I try to terminate the session I get the following errors in my terminal.
Here's my code:
import subprocess

def run_scripts():
    commandtext = 'cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";'
    print(commandtext)
    process = subprocess.Popen(commandtext, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)
and the output is:
cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";
and nothing happens... and after terminating I get:
[User#server python]$ /bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
Do you need to use dc_shell to run your commands?
If so, dc_shell should be your executable and the rest of the commands its arguments.
You should never use shell=True, due to security considerations (the warning in the 2.x subprocess docs seems much clearer to me).
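For illustration, a minimal sketch of that idea, assuming dc_shell-xg-t reads its commands from stdin (the set_app_var lines are dc_shell commands, not shell commands, so /bin/sh cannot run them, which is exactly the error shown above):
import subprocess

# A sketch, assuming dc_shell-xg-t accepts commands on stdin.
proc = subprocess.Popen(
    ["dc_shell-xg-t"],          # the executable itself, no shell=True
    cwd="..",                   # replaces the leading 'cd ..'
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
commands = b'''
set_app_var link_library "slow.db"
set_app_var target_library "slow.db"
set_app_var symbol_library "tsmc18.sdb"
quit
'''
out, _ = proc.communicate(commands)
print(out)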

How can I know if my Python script is running? (using Cygwin or Windows shell)

I have a Python script named sudoserver.py that I start in a Cygwin shell by doing:
python sudoserver.py
I am planning to create a shell script (I don't know yet whether I will use a Windows shell script or a Cygwin one) that needs to know if this sudoserver.py script is running.
But if I do this in Cygwin (while sudoserver.py is running):
$ ps -e | grep "python" -i
11020 10112 11020 7160 cons0 1000 00:09:53 /usr/bin/python2.7
and in Windows shell:
C:\>tasklist | find "python" /i
python2.7.exe 4344 Console 1 13.172 KB
So it seems I have no info about the .py file being executed. All I know is that Python is running something.
The -l (long) option for ps on Cygwin does not show my .py file, and neither does the /v (verbose) switch for tasklist.
What would be the appropriate way, in a Windows or Cygwin shell (either would be enough; both if possible would be fine), to programmatically find out whether a specific Python script is executing right now?
NOTE: The Python process could be started by another user, even by a user not logged into a GUI session, and even by the privileged Windows "SYSTEM" user.
It is a limitation of the platform.
You probably need to use some low-level API to retrieve the process info. You can take a look at this one: Getting the command line arguments of another process in Windows.
You can probably use the win32api module to access these APIs.
(Sorry, I am away from a Windows PC, so I can't try it out.)
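As an alternative sketch (not part of the answer above): the third-party psutil package can expose each process's command line, which does include the .py file name. This assumes psutil is installed (pip install psutil) and that the querying user is allowed to inspect the target process:
import psutil

# Sketch: look for "sudoserver.py" in any process's command line.
# cmdline typically looks like ['python2.7', 'sudoserver.py'].
for proc in psutil.process_iter(attrs=["name", "cmdline"]):
    cmdline = proc.info["cmdline"] or []
    if any("sudoserver.py" in arg for arg in cmdline):
        print("sudoserver.py is running (pid %d)" % proc.pid)
Note that reading the command line of a process owned by another user (including SYSTEM) may still require administrator rights, which matches the limitation described above.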
Since sudoserver.py is your script, you could modify it to create a file in an accessible location when it starts and to delete the file when it finishes. Your shell script can then check for the existence of that file to find out if sudoserver.py is running.
(EDIT)
Thanks to the commenters who suggested that while the presence or absence of the file is an unreliable indicator, a file's lock status is not.
I wrote the following Python script testlock.py:
f = open("lockfile.lck", "w")
for i in range(10000000):
    print(i)
f.close()
... and ran it in a Cygwin console window on my Windows PC. At the same time, I had another Cygwin console window open in the same directory.
First, after I started testlock.py:
Simon@Simon-PC ~/test/python
$ ls
lockfile.lck  testlock.py

Simon@Simon-PC ~/test/python
$ rm lockfile.lck
rm: cannot remove `lockfile.lck': Device or resource busy
... then after I had shut down testlock.py by using Ctrl-C:
Simon@Simon-PC ~/test/python
$ rm lockfile.lck

Simon@Simon-PC ~/test/python
$ ls
testlock.py

Simon@Simon-PC ~/test/python
$
Thus, it appears that Windows locks the file while the testlock.py script is running, and that it is unlocked when the script is stopped with Ctrl-C. The equivalent test can be carried out in Python with the following script:
import os

try:
    os.remove("lockfile.lck")
except OSError:
    print("lockfile.lck in use")
... which correctly reports:
$ python testaccess.py
lockfile.lck in use
... when testlock.py is running but successfully removes the locked file when testlock.py has been stopped with a Ctrl-C.
Note that this approach works in Windows but it won't work in Unix because, according to the Python documentation:
On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.
A platform-independent solution using an additional Python module FileLock is described in Locking a file in Python.
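As a sketch of that kind of approach, here is how it might look with the pip-installable filelock package (an assumption: the linked answer may use a different module of the same name):
from filelock import FileLock, Timeout

# Sketch: try to take the lock without waiting; if another process
# (e.g. sudoserver.py using the same lock file) holds it, Timeout is raised.
lock = FileLock("sudoserver.lock")
try:
    with lock.acquire(timeout=0):
        print("Not running: lock acquired")
except Timeout:
    print("Already running: lock is held")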
(FURTHER EDIT)
It appears that the OP didn't necessarily want a solution in Python. An alternative would be to do this in bash. Here is testlock.sh:
#!/bin/bash
flock lockfile.lck sequence.sh
The script sequence.sh just runs a time-consuming operation:
#!/bin/bash
for i in `seq 1 1000000`;
do
echo $i
done
Now, while testlock.sh is running, we can test the lock status using another variant of flock:
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Lock acquired
$
The first two attempts to lock the file failed because testlock.sh was still running and so the file was locked. The last attempt succeeded because testlock.sh had finished running.
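For completeness, the same non-blocking check can be made from Python on Cygwin/Unix using the standard fcntl module (a sketch mirroring flock -n; this is Unix-style locking and won't work on native Windows Python):
import fcntl

# Sketch: non-blocking attempt to take an exclusive lock, like `flock -n`.
f = open("lockfile.lck", "w")
try:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("Lock acquired")
except (IOError, OSError):
    print("Could not acquire lock")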

Python subprocess: how to run an app on OS X?

I am porting a Windows application to OS X 10.6.8. It is a new platform for me and I am facing some difficulties.
The application is a small webserver (bottle + waitress) which starts a browser (based on the Chromium Embedded Framework) via a subprocess call.
The browser is an .app file and runs fine when started from the GUI.
I am launching it this way:
subprocess.Popen([os.getcwd()+"/cef/cefclient.app", '--url=http://127.0.0.1:8100'])
Unfortunately, this fails with OSError: permission denied.
I tried to run the script with sudo, with a similar result.
I can launch the app from the shell with the following command:
open -a "cef/cefclient.app" --args --url-http://127.0.0.1:8100
But
subprocess.Popen(['open', '-a', os.getcwd()+'/cef/cefclient.app', '--args', '--url-http://127.0.0.1:8100'])
fails with the following error
FSPathMakeRef(/Users/.../cefclient.app) failed with error -43.
Any idea how to fix this issue?
The file cefclient.app is actually a directory (an application bundle, specifically), not the application executable. The real executable is located inside the bundle, with a path like Contents/MacOS/executable_name. So to launch it, you'd do this:
subprocess.Popen([os.getcwd() + "/cef/cefclient.app/Contents/MacOS/executable_name",
                  "--url=http://127.0.0.1:8100"])
Alternatively,
os.system('open -a "cef/cefclient.app" --args --url-http://127.0.0.1:8100')
It just depends on whether you want to control stdin/stdout or whether starting the app is enough.

Python: os.environ.get('SSH_ORIGINAL_COMMAND') returns None

Trying to follow a technique found at bzr and gitosis, I did the following:
- added to ~/.ssh/authorized_keys the command="my_parser" parameter, which points to a Python script file named my_parser located in /usr/local/bin (the file was chmoded to 777)
- in that script file, /usr/local/bin/my_parser, I put the following lines:
#!/usr/bin/env python
import os
print os.environ.get('SSH_ORIGINAL_COMMAND', None)
When I try to ssh (e.g. ssh localhost), I get None on the terminal and then the connection is closed.
I wonder if anyone has done something like this before and can help me with it.
Is there anything I should do in my Python file in order to get that environment variable?
$SSH_ORIGINAL_COMMAND is set when you connect to a host with ssh to execute a single command:
$ ssh username@host 'some command'
Your my_parser would then print "some command".
Unless you invoke a shell from my_parser, it will then exit and the connection will close. You can use this to control the environment of the remotely executed commands, but you lose the ability to have an interactive session.
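To make that concrete, a sketch of what my_parser could look like if you want to act on the forced command (the allowed-command check is a hypothetical example, not from the answer):
#!/usr/bin/env python
import os
import subprocess
import sys

# SSH_ORIGINAL_COMMAND holds the command the client asked for,
# e.g. `ssh user@host 'some command'`; it is unset for plain logins.
cmd = os.environ.get('SSH_ORIGINAL_COMMAND')
if cmd is None:
    sys.exit("interactive logins are not allowed")

# Hypothetical whitelist: only run commands we recognize.
if cmd.split()[0] in ("ls", "whoami"):
    sys.exit(subprocess.call(cmd, shell=True))
sys.exit("command not allowed: %r" % cmd)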
