Is there a method to use SendKeys with XInput? - python

I'm trying to find a way to send inputs to XInput buttons for a controller that is already plugged in. So the one Python approach I found (vJoy/pygame) wouldn't work, since it "adds" a new controller.
Does anyone know if this exists?
For comparison, I am able to have Python trigger keys on my keyboard using DirectX scan codes and SendKeys, but I could not find scan codes for XInput or controllers.
Thanks again!

I think what you want is a virtual XInput controller. If so, you should check out the relatively new but excellent vgamepad library. It's a wrapper around ViGEm, so you can send your keys to a virtual Xbox controller.
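A minimal sketch of what that looks like (assuming the ViGEmBus driver that vgamepad depends on is installed):

import vgamepad as vg

gamepad = vg.VX360Gamepad()  # creates a virtual Xbox 360 controller

# press and release the A button; update() actually sends the report
gamepad.press_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
gamepad.update()
gamepad.release_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
gamepad.update()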
I hope that answers your question.

Related

Is there any way to make python act like a game controller?

I want to control games with Python. I need Python to act like a game controller and do what I code.
I couldn't find anything about this. I'm waiting for your help. Thanks.
Edit: I pretty much did it using the PyvJoy library and the vJoy software. It's now possible for Python to act like a game controller and press buttons. This can be useful when developing machine learning apps.
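A minimal PyvJoy sketch of what I mean (assuming vJoy device 1 is configured in the vJoy software):

import pyvjoy

j = pyvjoy.VJoyDevice(1)  # vJoy device number 1

j.set_button(1, 1)  # press button 1
j.set_button(1, 0)  # release it
j.set_axis(pyvjoy.HID_USAGE_X, 0x4000)  # center the X axis (range 0x1-0x8000)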
Here's some example code I used a while ago to read inputs from an Xbox controller hooked up via the USB port and use them inside Python:
my GitHub
It relies on the pygame module:
https://www.pygame.org/news
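A stripped-down sketch of that idea, reading buttons and axes through pygame's event loop:

import pygame

pygame.init()
joystick = pygame.joystick.Joystick(0)  # first connected controller
joystick.init()

while True:
    for event in pygame.event.get():
        if event.type == pygame.JOYBUTTONDOWN:
            print("button", event.button, "pressed")
        elif event.type == pygame.JOYAXISMOTION:
            print("axis", event.axis, "=", round(event.value, 2))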

Detecting keypresses in Windows to automate window functions in python

I'm creating an application to automatically perform keystrokes into the same, multiple windows that are open at the same time. These windows are all child windows from a parent.
For example, there are four windows and I want to press a key that will send CTRL-K to them, then immediately send WIN+M to all of them. After some time, I want to press another key that will then send SHIFT+WIN+M, and then CTRL+O.
I'm stuck on capturing windows using EnumWindows and all of that. I'm familiar with callbacks, but a little confused about how to use them here.
I'm also considering using pyWinAuto's SendKeys.
I do realize there are programs like X-Mouse Button Control that can do all of this for me, but I'm learning and this would be a fun exercise.
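For reference, the EnumWindows callback pattern in pywin32 is short; here is a rough sketch (the window-title filter is just a placeholder):

import win32gui

def handler(hwnd, results):
    # collect visible windows whose title matches the placeholder filter
    if win32gui.IsWindowVisible(hwnd) and "Untitled" in win32gui.GetWindowText(hwnd):
        results.append(hwnd)

windows = []
win32gui.EnumWindows(handler, windows)
print(windows)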
I would recommend using PyAutoIt for this; it's fairly straightforward to use and usually works well. Steps to install the package and a simple example can be found here:
https://pypi.python.org/pypi/PyAutoIt/0.3
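A minimal sketch of activating a window by title and sending hotkeys with it (the window title is a placeholder, and AutoIt's key syntax uses ^ for CTRL and # for WIN):

import autoit

# activate the target window, then send CTRL-K followed by WIN-M
autoit.win_activate("Untitled - Notepad")
autoit.send("^k")
autoit.send("#m")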

Python compile a script within a GUI

I am currently working on the final year project for my degree. I have chosen to research and develop a tool to aid the delivery of the new Computing curriculum that is coming to schools next year.
I am using a Raspberry Pi in my development, and I aim to teach extremely basic Python programming to children between the ages of 8 and 10. They are going to be able to control some hardware attached to the Pi using a simple API that I am going to create.
My question is: I would like to be able to create a GUI for the children to work in, which would allow them to write and compile scripts. This is mainly to get away from the unfamiliar interface of Linux and terminals etc., and put them in a friendly, basic interface which will pretty much just allow them to write their code and click a big red button to compile and run it to interact with the hardware. Is it possible to allow text to be written in a GUI and then compiled when the button is pressed?
I am pretty new to Python myself so I am not as clued up as I'd like to be about the specifics of it. I know that it is possible to have the output of IDLE inside of a tkinter interface, and that it is possible to have text boxes for user input and stuff, but would it actually be possible to compile a script on button press and then run it? I have been thinking that maybe threading is the answer. Perhaps I could create a new thread to do it when the button is pressed?
My apologies if this is incredibly basic, but I am having no luck finding any answers about how I would do this. I think it's mainly because I am unsure on what exactly to ask for.
I appreciate any feedback/help, thank you very much.
Dell
Have your GUI write the Python code to a file, then dynamically import it using the imp module. I actually do something similar :-)
import imp
# "Name" becomes the module's name; path is wherever your GUI saved the file
hest = imp.load_source("Name", path)
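A minimal sketch of the whole idea, wiring a tkinter text box and a run button to that imp call (widget names and the file name are illustrative):

import imp
import tkinter as tk

def run_code():
    # save the editor contents to a file, then import it, which runs it
    with open("user_script.py", "w") as f:
        f.write(editor.get("1.0", tk.END))
    imp.load_source("user_script", "user_script.py")

root = tk.Tk()
editor = tk.Text(root, width=60, height=20)
editor.pack()
tk.Button(root, text="Run", command=run_code, bg="red").pack()
root.mainloop()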

Python Image processing screenshots

I'm trying to write a program that will detect when my mouse pointer changes icon and automatically send a mouse click. Is there a better way to do this than taking screenshots and parsing the image for the mouse icon?
EDIT:
I'm running my program on Windows 7.
I'm trying to learn some image processing and automate a simple flash game I made.
Rules: when the cursor changes shape, click to get a point.
Also, what imaging modules for Python will let you take a screenshot of a specific size, not just the whole screen? This question has moved to a new thread: "Taking Screen shots of specific size"
The way to do this in Windows is to install either a global message hook with SetWindowsHookEx or SetWinEventHook. (Alternatively, you could build a DLL that embeds Python and hooks into the browser or its Flash wrapper app and do it less intrusively from within the app, but that's much more work.)
The message you want is WM_SETCURSOR. Note that this is the message Windows sends to the app to ask whether it wants to change the cursor, not a message sent when the cursor changes. So, IIRC, you will want to install both a WH_CALLWNDPROC and a WH_CALLWNDPROCRET hook and check GetCursorInfo before and after to see whether the app has changed it.
So, how do you do this from Python? Honestly, if you don't already know both win32api and friends from the pywin32 package, and how to write Windows message procs in some language, you probably don't want to. If you do want to, I'd start off with the (abandoned) pyHook project from UNC Assist. Even if you can't get it working, it's full of useful source code.
You should also search SO for [python] SetWinEventHook and [python] SetWindowsHookEx, and google around a bit; there are some examples out there (I even wrote one here somewhere…)
You can look at higher-level wrapper frameworks like pywinauto and winGuiAuto, but as far as I know, none of them has much help for capturing events.
I believe there are other tools, maybe AutoIt, that have all the functionality you need, but not as a Python module. (AutoIt, for example, has its own VB-like scripting language instead.)
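If the hook machinery is too much, a much cruder alternative is to poll GetCursorInfo from pywin32 and click whenever the cursor handle changes; a sketch:

import time
import win32api
import win32con
import win32gui

flags, last_cursor, pos = win32gui.GetCursorInfo()
while True:
    flags, cursor, pos = win32gui.GetCursorInfo()
    if cursor != last_cursor:
        # the pointer icon changed; fire a left click at the current position
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, 0, 0)
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, 0, 0)
        last_cursor = cursor
    time.sleep(0.02)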

How to make PowerBuilder UI testing application?

I'm not familiar with PowerBuilder, but I have a task to create an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. The problem is that some UI elements, like newly added list records, cannot be accessed from them (even Inspect32 can't get them).
Any ideas how to reach these elements and make them testable?
I'm experimenting with code for a tool for automating PowerBuilder-based GUIs as well. From what I can see, your best bet would be to use the PowerBuilder Native Interface (PBNI), and call PowerScript code from within your NVO.
If you like, feel free to send me an email (see my profile for my email address), I'd be interested in exchanging ideas about how to do this.
I haven't used PowerBuilder for a while, but I guess the problem you are trying to solve is similar to the one I am trying to address for people making projects with SCADA systems like Wonderware InTouch.
The problem with such an application is that there is no API to get or set the value of a control. So a pywinauto approach can't work.
I've made a small tool to simulate user events and to get the results from a screen capture. I am using PIL and the pytesser OCR module for the analysis of the screen captures. It is not the easiest way, but it works OK.
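The capture-and-OCR idea boils down to something like this (using pytesseract in place of the old pytesser module; the coordinates are placeholders):

from PIL import ImageGrab
import pytesseract

# grab a fixed region of the screen: (left, top, right, bottom)
img = ImageGrab.grab(bbox=(100, 200, 400, 240))
text = pytesseract.image_to_string(img)
print(text)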
The tool is open-source and free of charge and can be downloaded from my website (sorry, it's in French). You just need an account, but that's free as well. Just ask.
If you can read French, here is an article about testing InTouch-based applications.
Sorry for the self-promotion, but I was facing a similar problem with no solution, so I wrote my own. Anyway, it's free and open-source...
I've seen on AutomatedQA's support site a recipe recommending using MSAA and setting some properties on the controls. I do not know if it works.
If you are testing DataWindows (the class is pbdwxxx, e.g. pbdw110) you will have to use a combination of clicking at specific coordinates and sending Tab keys to get to the control you want. Of course you can also send up and down arrow keys to move among rows. The easiest thing to do is to start with a normal control like an SLE and tab into the DataWindow. The problem is that the DataWindow is essentially just an image: there is no control for a given field until you move the focus there by clicking or tabbing.
I've also found that the DataWindow's IAccessible interface is a bit strange. If you ask the DataWindow for the object with focus, you don't get the right answer, but if you enumerate through all of the children you can find the one that has focus. If you can modify the source, I also advise that you set AccessibleName for your DataWindow controls; otherwise you probably won't be able to identify the controls except by position. (By DataWindow controls I mean the ones inside the DataWindow, not the DataWindow itself.)
If it's an MDI application, you may also find it useful to locate the MicroHelp window (class fnhelpxxx, e.g. fnhelp110, found from the main application window) to help determine your current context.
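A rough sketch of that click-then-Tab approach with pywinauto (window title, class name, and coordinates are all placeholders):

from pywinauto import Application

app = Application(backend="win32").connect(title_re=".*MyPBApp.*")
win = app.top_window()

# click a known coordinate inside the DataWindow to give it focus,
# then Tab between fields and type into the one you land on
dw = win.child_window(class_name_re="pbdw.*")
dw.click_input(coords=(50, 40))
dw.type_keys("{TAB}")
dw.type_keys("some value", with_spaces=True)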
Edited to add:
Sikuli looks very promising for testing PowerBuilder. It works by recognizing objects on the screen from a saved fragment of a screenshot; that is, you take a screenshot of the part of the screen you want it to find.
