I'm currently developing language translation software for Linux using Python and GTK. It has two entries. What it basically does is: when the user types a word in text entry 1, the translated text appears in text entry 2, and when the user presses the space bar, I want to paste the translated text into another application's text area, not into a text entry in my own application. I think it needs to switch to the other application, paste the text, and switch back to my application.
As an example, if gedit is open in the background, when a user types a word in my application and presses the space bar, the translated word should be pasted into gedit.
It may be possible to accomplish this by making my application window a popup window (type=WINDOW_POPUP) instead of a top-level window (type=WINDOW_TOPLEVEL), but I'm not clear on that.
I think the problem is clear. If anyone can help me solve it, that would be a great help. Thanks all.
This looks like a DBus kind of solution, and not a fun one. As for clipboard manipulation in GTK, http://developer.gnome.org/gtk3/stable/gtk3-Clipboards.html will get you where you need to go; most of the C functions have a direct equivalent in Python ( http://developer.gnome.org/pygtk/stable/class-gtkclipboard.html ).
Communication between applications in GTK+ is not a whole lot of fun. When I worked on a project that had to do it, I ended up using DBus (from C++), but there may well be a good Python binding for DBus; I haven't checked.
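As a rough sketch of that clipboard route (PyGTK as in the second link, plus the external xdotool utility; both are assumptions about the poster's setup rather than anything confirmed above), the translated word goes on the clipboard, the target window is activated, and Ctrl+V is sent there:
import subprocess
import gtk

def paste_translation(text):
    # Put the translated word on the system clipboard (PyGTK; GTK3/PyGObject
    # has an equivalent Gtk.Clipboard API).
    clipboard = gtk.clipboard_get(gtk.gdk.SELECTION_CLIPBOARD)
    clipboard.set_text(text)
    clipboard.store()
    # Activate the target application and simulate Ctrl+V there. xdotool is
    # an external helper and an assumption; "gedit" is only the example target.
    subprocess.call(["xdotool", "search", "--name", "gedit",
                     "windowactivate", "--sync", "key", "--clearmodifiers", "ctrl+v"])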
Related
How can I paste a string into a text field, such as the Windows search bar or a text editor, using Python?
I searched a lot for this, but all I can find is hundreds of questions asking how to copy to the clipboard or how to get a string from it. What I want to do is paste from the clipboard into the active window, as if I were pressing Ctrl+V. If possible, I want to avoid the seemingly complicated route of emulating the actual low-level keyboard press.
In Windows, the clipboard is considered a form of IPC (inter-process communication).
You can read some of the details here.
In Python you can use this library, which I think supports the major operating systems.
For Linux specifically there are options like xclip, but I think it is desktop-environment dependent.
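A minimal sketch of the Linux side of that, assuming xclip (mentioned above) plus the external xdotool utility; the latter is my own addition and is exactly the kind of key emulation the question hoped to avoid, but it is the simple way to trigger the paste:
import subprocess

def paste_into_active_window(text):
    # Put the text on the clipboard via xclip (X11 only; on Windows a library
    # such as pyperclip would play the same role).
    p = subprocess.Popen(["xclip", "-selection", "clipboard"],
                         stdin=subprocess.PIPE)
    p.communicate(text.encode("utf-8"))
    # Simulate the Ctrl+V key chord in whatever window currently has focus.
    subprocess.call(["xdotool", "key", "--clearmodifiers", "ctrl+v"])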
I am currently working on the final year project for my degree. I have chosen to research and develop a tool to aid the delivery of the new Computing curriculum that is coming to schools next year.
I am using a Raspberry Pi in my development, and I aim to teach extremely basic Python programming to children between the ages of 8 and 10. They are going to be able to control some hardware attached to the Pi using a simple API that I am going to create.
My question is: I would like to be able to create a GUI for the children to work in, which would allow them to write and compile scripts. This is mainly to get away from the unfamiliar interface of Linux and terminals etc, and put them in a friendly, basic interface which will pretty much just allow them to write their code and click a big red button to compile and run it to interact with the hardware. Is it possible to allow for text to be written in a GUI and then compiled when the button is pressed?
I am pretty new to Python myself so I am not as clued up as I'd like to be about the specifics of it. I know that it is possible to have the output of IDLE inside of a tkinter interface, and that it is possible to have text boxes for user input and stuff, but would it actually be possible to compile a script on button press and then run it? I have been thinking that maybe threading is the answer. Perhaps I could create a new thread to do it when the button is pressed?
My apologies if this is incredibly basic, but I am having no luck finding any answers about how I would do this. I think it's mainly because I am unsure on what exactly to ask for.
I appreciate any feedback/help, thank you very much.
Dell
Have your GUI write the Python code to a file, then dynamically import it using the imp module. I actually do something similar :-)
import imp
# Importing the saved file as a module also executes the code in it.
hest = imp.load_source("Name", Path)
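To connect that to the button-press idea in the question, here is a minimal sketch; the file name, module name, and widget layout are placeholders of mine, and note that load_source runs the script in the GUI process, so a long-running script will freeze the window unless you move it to a thread or subprocess:
import imp
import Tkinter as tk   # "tkinter" on Python 3

def run_script():
    # Save whatever was typed into the text box, then import it as a module,
    # which executes the code in it.
    with open("student_script.py", "w") as f:
        f.write(editor.get("1.0", tk.END))
    imp.load_source("student_script", "student_script.py")

root = tk.Tk()
editor = tk.Text(root, width=60, height=20)
editor.pack()
tk.Button(root, text="Run", command=run_script, bg="red").pack()
root.mainloop()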
I want to collect data from an open window in Linux and eventually parse it.
An example: suppose a terminal window is open. I need to retrieve all the data that appears in that window. After retrieval, I would parse it to get the specific commands that were entered.
So is it possible to do this? If so, how? I would prefer to use Python to code the entire thing.
I am making a guess that first I would have to get some sort of ID for the open window and then use some kind of library to get the content from the window whose ID I have got.
Please help. I am quite a newbie.
You can (ab)use the assistive technologies support (for screen readers and such) that exist in the toolkit libraries. Whether it will work is toolkit specific—Gtk and Qt have this support, but others (like Tk, Fltk, etc.) may or may not.
The Linux Desktop Testing Project is a Python toolkit for abusing these interfaces to test GUI applications, so you can either use it or look at how it works and do something similar.
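For reference, a rough sketch of walking the accessibility tree directly with pyatspi (the AT-SPI Python bindings, which is roughly what LDTP drives underneath). Whether a particular terminal emulator exposes its text this way is very much application dependent, and the application name below is only an example:
import pyatspi

def dump_text(node, depth=0):
    # Print the accessible hierarchy; nodes that implement the Text
    # interface expose their contents through queryText().
    try:
        t = node.queryText()
        text = t.getText(0, t.characterCount)
    except NotImplementedError:
        text = ""
    print("  " * depth + "%s (%s) %r" % (node.name, node.getRoleName(), text))
    for i in range(node.childCount):
        dump_text(node.getChildAtIndex(i), depth + 1)

desktop = pyatspi.Registry.getDesktop(0)
for i in range(desktop.childCount):
    app = desktop.getChildAtIndex(i)
    if app and app.name == "gnome-terminal":   # target name is an assumption
        dump_text(app)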
I think the correct answer may be "with some difficulty". Essentially, the contents of a window are a bitmap. This bitmap is drawn on by a whole slew of primitives (including "display this octet-string, using that encoding and a specific font"), but the window contents are still "just pixels".
Getting the "just pixels" is pretty straightforward, as these things go. You open a session to the X server and say "give me the contents of window W" and it hands them over.
Doing something useful with it is, unfortunately, a completely different matter, as you'd potentially have to (essentially) OCR the bitmap for what you want.
If you decide to take that route, have a look at the source of xwd, as it does essentially that.
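For what it's worth, grabbing those pixels from Python can be as simple as shelling out; the window name, the output file names, and the use of ImageMagick's convert are all placeholders here:
import subprocess

# Dump the pixels of a window selected by name, then convert the dump to PNG.
# Everything you get back is still "just pixels", as noted above.
subprocess.call(["xwd", "-name", "Terminal", "-out", "window.xwd"])
subprocess.call(["convert", "window.xwd", "window.png"])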
Do you have some sort of control over the execution of the terminal? In that case, you can use the script command in the terminal session to log all interaction to a file and then read and parse the file.
$ script myfile
Script started, file is myfile
$ ls
...
$ exit
Script done, file is myfile
$ parse_file.py myfile
If the terminal is running inside screen, you have other options as well. Screen has logging built in, and screen -X sends commands to a running screen session (man screen).
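A possible skeleton for the parse_file.py step shown above, assuming a plain "$ " shell prompt; the prompt pattern, the choice to drop the final exit, and the very rough ANSI-escape stripping are all mine:
import re
import sys

# script(1) logs raw terminal output, so strip ANSI escape sequences first,
# then keep only the lines typed at the prompt.
ansi = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

with open(sys.argv[1]) as f:
    for line in f:
        line = ansi.sub("", line).rstrip()
        if line.startswith("$ ") and line != "$ exit":
            print(line[2:])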
I was thinking that, as a learning project for myself, I would try to make a GUI for ffdshow on Linux using Tkinter. I wanted to make sure this project would be feasible first, before I get halfway through and run into something that can't be done in Python.
The basic idea is to have a single GUI window with a bunch of drop-down boxes for the various presets (like format or bitrate), as well as a text box where a custom number can be entered if applicable. Then, when all the options are selected, the user hits the Start button on the GUI and it shows a little progress bar with a percentage. All the selected options would just be passed as CLI arguments to ffdshow to begin the conversion process (essentially turning all the user's input into a single perfect CLI command).
Is all this doable with Python and Tkinter? And is it something that a relative newbie with only very basic Tkinter experience could pull off with books and other Python resources?
Thanks
That is precisely the type of thing that Python and Tkinter excel at. And yes, a relative newbie can easily do a task like that.
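As a rough idea of the shape such a tool could take, here is a minimal sketch. The "ffdshow" command and its flags are placeholders for whatever CLI tool actually does the conversion, the progress bar is indeterminate (real percentages would have to be parsed from the tool's output), and the external process is polled from the Tk main loop so no widget is touched from another thread:
import subprocess
import Tkinter as tk   # "tkinter" on Python 3
import ttk             # "from tkinter import ttk" on Python 3

def start_conversion():
    # Turn the selected options into a single command line and launch it.
    cmd = ["ffdshow", "--format", fmt_var.get(), "--bitrate", bitrate_entry.get()]
    proc = subprocess.Popen(cmd)
    progress.start(10)
    poll(proc)

def poll(proc):
    # Re-check the external process every 200 ms from the Tk main loop.
    if proc.poll() is None:
        root.after(200, poll, proc)
    else:
        progress.stop()

root = tk.Tk()
fmt_var = tk.StringVar(value="mp4")
tk.OptionMenu(root, fmt_var, "mp4", "avi", "mkv").pack()
bitrate_entry = tk.Entry(root)
bitrate_entry.pack()
progress = ttk.Progressbar(root, mode="indeterminate", length=200)
progress.pack()
tk.Button(root, text="Start", command=start_conversion).pack()
root.mainloop()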
I'm not familiar with PowerBuilder, but I have been tasked with creating an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. The problem is that some UI elements, like newly added list records, cannot be accessed from it (even Inspect32 can't get at them).
Any ideas how to reach these elements and make them testable?
I'm experimenting with code for a tool for automating PowerBuilder-based GUIs as well. From what I can see, your best bet would be to use the PowerBuilder Native Interface (PBNI), and call PowerScript code from within your NVO.
If you like, feel free to send me an email (see my profile for my email address), I'd be interested in exchanging ideas about how to do this.
I haven't used PowerBuilder for a while, but I guess the problem you are trying to solve is similar to the one I am trying to address for people building projects with SCADA systems like Wonderware Intouch.
The problem with such an application is that there is no API to get or set the value of a control. So a pywinauto approach can't work.
I've made a small tool to simulate user events and to read the results back from a screen capture. I am using PIL and the pytesser OCR library for the analysis of the screen captures. It is not the easiest way, but it works OK.
The tool is open source, free of charge, and can be downloaded from my website (sorry, it's in French). You just need an account, but that's free as well. Just ask.
If you can read French, here is an article about testing Intouch-based applications.
Sorry for the self-promotion, but I was facing a similar problem with no solution available, so I wrote my own. Anyway, it's free and open source...
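For a flavour of what the capture-and-OCR loop looks like, a minimal sketch using pytesseract in place of pytesser (the bounding box is made up; you would compute it from the window and control coordinates):
from PIL import ImageGrab   # on Linux/X11, pyscreenshot is a common substitute
import pytesseract

# Grab the region of the screen where the control draws itself and OCR it.
img = ImageGrab.grab(bbox=(100, 200, 400, 230))
print(pytesseract.image_to_string(img))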
I've seen in the AutomatedQA support material that they have a recipe recommending the use of MSAA and setting some properties on the controls. I do not know if it works.
If you are testing DataWindows (the class is pbdwxxx, e.g. pbdw110) you will have to use a combination of clicking at specific coordinates and sending Tab keys to get to the control you want. Of course you can also send up and down arrow keys to move among rows. The easiest thing to do is to start with a normal control like an SLE and tab into the DataWindow. The problem is that the DataWindow is essentially just an image. There is no control for a given field until you move the focus there by clicking or tabbing.
I've also found that the DataWindow's IAccessible interface is a bit strange. If you ask the DataWindow for the object with focus, you don't get the right answer. If you enumerate through all of the children, you can find the one that has focus.
If you can modify the source, I also advise that you set AccessibleName for your DataWindow controls, otherwise you probably won't be able to identify the controls except by position (by DataWindow controls I mean the ones inside the DataWindow, not the DataWindow itself).
If it's an MDI application, you may also find it useful to locate the MicroHelp window (class fnhelpxxx, e.g. fnhelp110, found from the main application window) to help determine your current context.
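As an illustration of that click-and-Tab approach with pywinauto (the window title, the control coordinates, and the value typed are all placeholders; the pbdw110 class name follows the pbdwxxx pattern described above):
from pywinauto import Application

# Attach to the running PowerBuilder application by window title.
app = Application().connect(title_re=".*MyPBApp.*")
win = app.window(title_re=".*MyPBApp.*")

# Click a known field position inside the DataWindow, then Tab between
# fields and type; there is no per-field control to target directly.
dw = win.child_window(class_name="pbdw110")
dw.click_input(coords=(120, 85))
dw.type_keys("{TAB}{TAB}42.50{TAB}")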
Edited to add:
Sikuli looks very promising for testing PowerBuilder. It works by recognizing objects on the screen from a saved fragment of screenshot. That is, you take a screenshot of the part of the screen you want it to find.
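For a flavour of what a Sikuli test looks like, a tiny sketch (it runs inside the Sikuli IDE, which uses Jython, and the .png names are screenshot fragments you would capture yourself):
# Sikuli script: each image name refers to a saved fragment of a screenshot.
wait("login_window.png", 10)        # wait up to 10 s for the window to appear
click("username_field.png")
type("testuser")
click("ok_button.png")
assert exists("main_window.png")    # crude check that the login succeeded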