I want to collect data from an open window in Linux and eventually parse it.
An example: suppose a terminal window is open. I need to retrieve all the text that appears in that window, and after retrieval I would parse it to extract the specific commands that were entered.
So is it possible to do that? If so, how? I would prefer to use Python to code this entire thing.
My guess is that I would first have to get some sort of ID for the open window and then use some library to read the contents of the window with that ID.
Please help. I am quite a newbie.
You can (ab)use the assistive technologies support (for screen readers and such) that exist in the toolkit libraries. Whether it will work is toolkit specific—Gtk and Qt have this support, but others (like Tk, Fltk, etc.) may or may not.
The Linux Desktop Testing Project is a Python toolkit for abusing these interfaces to test GUI applications, so you can either use it or look at how it works and do something similar.
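As an illustration, here is a minimal sketch using pyatspi, the AT-SPI Python bindings that LDTP-style tools build on. It only works when the target application's toolkit exposes accessibility information (Gtk, Qt) and the desktop's accessibility support is enabled; whether the terminal you care about shows up at all is not guaranteed.

import pyatspi

# Walk every application registered with the accessibility bus and list
# its top-level windows. Text widgets inside those windows can then be
# queried (e.g. via queryText()) to pull out their contents.
desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    print("application:", app.name)
    for window in app:
        print("  window:", window.name, "role:", window.getRoleName())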
I think the correct answer may be "with some difficulty". Essentially, the contents of a window are a bitmap. That bitmap is drawn on by a whole slew of primitives (including "display this octet-string, using that encoding and a specific font"), but the window contents are still "just pixels".
Getting the "just pixels" is pretty straightforward, as these things go. You open a session to the X server and say "give me the contents of window W", and it hands it over.
Doing something useful with it is, unfortunately, a completely different matter, as you'd potentially have to (essentially) OCR the bitmap for what you want.
If you decide to take that route, have a look at the source of xwd, as that does, essentially, that.
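For instance, a rough sketch that simply shells out to xwd (the window name "xterm" is just an assumption, and ImageMagick's convert is assumed to be installed for the format conversion):

import subprocess

# Dump the window named "xterm" to an X Window Dump file...
subprocess.check_call(["xwd", "-name", "xterm", "-out", "window.xwd"])
# ...and convert it to PNG so ordinary imaging libraries (PIL, etc.) can read it.
subprocess.check_call(["convert", "window.xwd", "window.png"])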
Do you have some sort of control over the execution of the terminal? In that case, you can use the script command in the terminal session to log all interaction to a file and then read and parse the file.
$ script myfile
Script started, file is myfile
$ ls
...
$ exit
Script done, file is myfile
$ parse_file.py myfile
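parse_file.py above is whatever you write yourself. A rough sketch, assuming a simple prompt ending in "$ " and ignoring the control characters that script also records, might look like this:

import re
import sys

# Print every command typed at a "$ " prompt in the typescript file.
with open(sys.argv[1]) as f:
    for line in f:
        match = re.match(r".*\$ (.+)", line.rstrip())
        if match:
            print(match.group(1))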
If the terminal is running inside screen, you have other options as well. Screen has logging built in, and screen -X sends commands to a running screen session (see man screen).
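For example, a hedged sketch that switches on screen's built-in logging from Python (the session name "work" and the log path are assumptions):

import subprocess

# Tell the running screen session where to log, then turn logging on.
subprocess.check_call(["screen", "-S", "work", "-X", "logfile", "/tmp/screen.log"])
subprocess.check_call(["screen", "-S", "work", "-X", "log", "on"])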
I had some Python code and I used PyInstaller to make it a stand-alone .exe executable.
In my code, I use the print function to output results.
However, if the result is really long (several screen pages), it gets cut short because of the limitations of the console. I can scroll up, but the total length of the viewable text region seems to be limited. (I am running my .exe on Windows.)
Is there a way to extend the visible range so I can see all my output?
Thanks!
================
update:
I agree with #supremefist that it is the shell that is limiting the visible range.
Is there a way to pass parameters to the shell so that when I double-click the executable in Windows, the view range is extended?
Also, if possible, I would like my executable to work robustly across different OSes. I am trying to write a small program, and my target users may be inexperienced computer users.
================
update2:
Now I understand that PyInstaller is only for Windows, so my previous update about different OSes was completely wrong.
The good news is that I switched to Qt and this problem goes away, as I am now displaying my results in a window rather than a shell console.
I don't think the limit is related to PyInstaller, but to the constraints set on your Windows shell. You could try changing your shell settings by doing the following:
Open a Windows shell by either running 'cmd' or by finding it under Accessories in your Start Menu.
Right click on the title of the shell window and select Properties.
Increase the screen buffer height setting under the 'Layout' tab to a large number. Something like 9999.
Re-run your program; you should see a much longer history of text.
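If you would rather not ask users to change that setting by hand, here is a hedged sketch that enlarges the buffer from inside the program itself via the Win32 console API through ctypes (Windows only; the 80x9999 size is just a guess and must not be smaller than the current window size):

import ctypes

STD_OUTPUT_HANDLE = -11

class COORD(ctypes.Structure):
    _fields_ = [("X", ctypes.c_short), ("Y", ctypes.c_short)]

# Grab the console's output handle and give it a 9999-line screen buffer.
kernel32 = ctypes.windll.kernel32
handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
kernel32.SetConsoleScreenBufferSize(handle, COORD(80, 9999))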
I'm trying to write a program that will detect when my mouse pointer changes icon and automatically send out a mouse click. Is there a better way to do this than taking screenshots and parsing the image for the mouse icon?
EDIT:
I'm running my program on Windows 7.
I'm trying to learn some image processing and automate a simple Flash game I made.
Rules: when the cursor changes shape, click to get a point.
Also, what imaging modules for Python will allow you to take a screenshot of a specific size, not just the whole screen? This question has moved to a new thread: "Taking Screen shots of specific size"
The way to do this in Windows is to install either a global message hook with SetWindowsHookEx or SetWinEventHook. (Alternatively, you could build a DLL that embeds Python and hooks into the browser or its Flash wrapper app and do it less intrusively from within the app, but that's much more work.)
The message you want is WM_SETCURSOR. Note that this is the message sent by Windows to the app to ask whether it wants to change the cursor, not a message sent when the cursor changes. So, IIRC, you will want to install both a WH_CALLWNDPROC and a WH_CALLWNDPROCRET hook and check GetCursorInfo before and after to see whether the app has changed it.
So, how do you do this from Python? Honestly, if you don't already know both win32api and friends from the pywin32 package, and how to write Windows message procs in some language, you probably don't want to. If you do want to, I'd start off with the (abandoned) pyHook project from UNC Assist. Even if you can't get it working, it's full of useful source code.
You should also search SO for [python] SetWinEventHook and [python] SetWindowsHookEx, and google around a bit; there are some examples out there (I even wrote one here somewhere…)
You can look at higher-level wrapper frameworks like pywinauto and winGuiAuto, but as far as I know, none of them has much help for capturing events.
I believe there are other tools, maybe AutoIt, that have all the functionality you need, but not as a Python module. (AutoIt, for example, has its own VB-like scripting language instead.)
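If a full hook turns out to be more than you want to take on, a much cruder alternative (plain polling, not the hook approach described above) can get surprisingly far with pywin32. This is only a sketch; the 0.05 s poll interval is an arbitrary choice:

import time
import win32api
import win32con
import win32gui

# GetCursorInfo() returns (flags, cursor handle, (x, y)); when the handle
# changes, the cursor shape has changed, so send a left click.
_, last_cursor, _ = win32gui.GetCursorInfo()
while True:
    _, cursor, pos = win32gui.GetCursorInfo()
    if cursor != last_cursor:
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)
        last_cursor = cursor
    time.sleep(0.05)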
I am a new programmer with little experience, but I am in the process of learning Python 2.7. I use Python(x,y), with the Spyder IDE, on Windows 7.
The main packages I'm using are numpy, PIL, and potentially win32gui.
I am currently trying to write a program to mine information from third-party software. This is against their wishes and they have made it difficult. I'm using ImageGrab and then numpy to get some results. However, this forces me (or so I believe) to keep the window I want to read in focus, which is not optimal.
I'm wondering if there is any way to hijack the whole window and redirect the output directly into a "virtual" copy, just so I can have it running in the background?
Looking at the demos for win32api, there is a script called desktopmanager that's supposed to create new desktops. I never got it to work, probably because I'm running Windows 7. I don't really know how multiple desktops work, but if they run in parallel, there may be a way to create a new desktop around a current window. I don't know how; it's just a thought so far.
The reason it's not working for me is not that it fails to create a new desktop; it's that once the desktop has been created, I can't return from it. Neither the taskbar icon nor the taskbar itself ever appears.
One approach that might work would be something like the following (a sketch follows the list):
get the window handle (FindWindow() or something similar, there are a few ways to do this)
get the window dimensions (GetClientRect() or GetWindowRect())
get the device context for the window (GetWindowDC())
get the image data from the window (BitBlt() or similar)
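A hedged sketch of those four steps with pywin32 (the window title "Untitled - Notepad" is only a placeholder, and GetWindowRect is used here rather than GetClientRect):

import win32con
import win32gui
import win32ui

# 1. Window handle and dimensions.
hwnd = win32gui.FindWindow(None, "Untitled - Notepad")
left, top, right, bottom = win32gui.GetWindowRect(hwnd)
width, height = right - left, bottom - top

# 2. Device contexts: the window's DC and a memory DC to copy into.
hwnd_dc = win32gui.GetWindowDC(hwnd)
mfc_dc = win32ui.CreateDCFromHandle(hwnd_dc)
save_dc = mfc_dc.CreateCompatibleDC()

# 3. A bitmap the size of the window, selected into the memory DC.
bmp = win32ui.CreateBitmap()
bmp.CreateCompatibleBitmap(mfc_dc, width, height)
save_dc.SelectObject(bmp)

# 4. Copy the window's pixels and save them to disk.
save_dc.BitBlt((0, 0), (width, height), mfc_dc, (0, 0), win32con.SRCCOPY)
bmp.SaveBitmapFile(save_dc, "window.bmp")

# Clean up the GDI objects.
win32gui.DeleteObject(bmp.GetHandle())
save_dc.DeleteDC()
mfc_dc.DeleteDC()
win32gui.ReleaseDC(hwnd, hwnd_dc)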
It is possible that you will need elevated privileges to access another process's window DC; if so, you may need to inject code or a DLL into the target process's address space to do this.
HTH.
I saw a solution here, but I don't want to wait until the key is pressed. I want to get the last key pressed.
The related question may help you, as #S.Lott mentioned: Detect in python which keys are pressed
I am writing in, though, to give you some advice: don't worry about that.
What kind of program are you trying to produce?
Programs running in a terminal usually don't have an interface in which getting "live" keystrokes is interesting, not nowadays. For programs running in the terminal, you should worry about a useful command-line user interface, using optparse or other modules.
For interactive programs, you should use a GUI library and create a decent UI for your users instead of reinventing the wheel. Which would be better for what you are trying to do? The user clicks on an icon, a window opens on the screen with a couple of buttons and half a dozen or so menu options packed under a "File" menu, like all the other windows on the screen; or a black terminal opens up with an 80s-looking text interface, some blue-highlighted menu options, and so on? You can use Tkinter for simple windowed applications, as it comes pre-installed with Python on Windows, so your users don't have to worry about installing additional libraries.
Rephrasing it just to be clear: any program that requires a user interface should either use a GUI library or have a web interface. It is a waste of your time, and that of your users, to try to create a UI operating over the terminal; we are not in 1989 any more.
If you absolutely need a text interface, then you should look at the curses library (Python's wrapper around ncurses); see the sketch below. Better that than trying to reinvent the wheel.
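For the non-blocking keystroke case specifically, a minimal curses sketch (Unix terminals; making 'q' the quit key is just a choice for this demo):

import curses

def main(stdscr):
    stdscr.nodelay(True)           # getch() returns -1 instead of blocking
    while True:
        key = stdscr.getch()
        if key != -1:              # a key was waiting
            if key == ord('q'):    # 'q' quits the demo
                break
            stdscr.addstr(0, 0, "last key code: %d    " % key)
            stdscr.refresh()
        curses.napms(20)           # don't spin at 100% CPU

curses.wrapper(main)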
http://code.activestate.com/recipes/134892/
I think it's what you need.
PS: oops, I didn't see that it's the same solution you rejected... why did you reject it, by the way?
edit:
do you know:
from msvcrt import getch
It works only on Windows, however...
(and it is generalised in the above link)
from here: http://www.daniweb.com/forums/thread115282.html
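Building on that, here is a small Windows-only sketch: kbhit() tells you whether a key is waiting, so you can drain the buffer and keep only the most recent key without ever blocking ('q' ends the demo, and the sleep stands in for your program's real work):

import time
from msvcrt import getch, kbhit

last_key = None
while True:
    while kbhit():            # drain everything currently buffered...
        last_key = getch()    # ...remembering only the most recent key
    if last_key in ('q', b'q'):
        break
    time.sleep(0.05)          # do your real work here instead of sleeping

print("last key pressed:", last_key)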
I am writing an editor which has a lot of parameters that could easily be interacted with through text. I find it inconvenient to implement a separate text editor or lots of UI code for every little parameter. The usual buttons, boxes, and gadgets would be burdensome and clumsy. I'd much rather let the user interact with those parameters through vim.
The preferable way for me would be to have my editor open vim with my text buffer. Then, when the text buffer is saved in vim, my editor would get notified of that and update its view.
Write your intermediate results (what you want the user to edit) to a temp file. Then use the $EDITOR environment variable in a system call to make the user edit the temp file, and read the results when the process finishes.
This lets users configure which editor they want to use in a pseudo-standard fashion.
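A minimal sketch of that round trip (the fallback to vi and the .txt suffix are arbitrary choices):

import os
import subprocess
import tempfile

def edit_in_editor(initial_text):
    editor = os.environ.get("EDITOR", "vi")    # respect $EDITOR, fall back to vi
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(initial_text)
        subprocess.call([editor, path])        # blocks until the editor exits
        with open(path) as f:
            return f.read()
    finally:
        os.remove(path)

print(edit_in_editor("param1 = 10\nparam2 = hello\n"))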
Check out It's All Text!. It's a Firefox add-on that does something similar for textareas on web pages, except the editor in question is configurable.
You can also think about integrating VIM into your app. Pida does this.