Dbus.Array (Reading pidgin messages from python) - python

I am trying to read a message from a Pidgin window using Python. I have read the Pidgin how-to and I am using the following code:
purple.PurpleGetConversations()
and I get the following output:
dbus.Array([dbus.Int32(14414)], signature=dbus.Signature('i'))
I don't know how to access the elements of this dbus.Array.
Best Regards
PS: I am interested in reading the messages; if there is a better way, please let me know.
Progress update: If anyone else is interested in this, I came up with an alternative solution. Pidgin leaves chat logs in ~/.purple, and from Python you can open these files and use regexes to read all the messages.
(If there is a more straightforward way, please tell me.)

I found it. Here is the resulting code:
convID = purple.PurpleGetConversations()
msgpos = purple.PurpleConversationGetMessageHistory(convID[0])[0]
print purple.PurpleConversationMessageGetMessage(msgpos)
This will print the last message from an open chat
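For completeness, a minimal self-contained sketch of the same approach, assuming Pidgin is running and exposes the standard service, object path and interface names from the Pidgin D-Bus how-to:
import dbus

# Connect to the session bus and get the Purple (Pidgin) object.
bus = dbus.SessionBus()
obj = bus.get_object("im.pidgin.purple.PurpleService",
                     "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")

# dbus.Array behaves like a normal Python list: it can be indexed and iterated.
for conv_id in purple.PurpleGetConversations():
    history = purple.PurpleConversationGetMessageHistory(conv_id)
    if history:
        # The first entry of the history is the most recent message.
        print(purple.PurpleConversationMessageGetMessage(history[0]))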

You need to use the PurpleConversationGetChatData method; it takes the conversation id as a parameter (14414 in your case).
I have a JavaScript client generated from the introspection XML; it might be a helpful addition to the D-Bus documentation - https://github.com/sidorares/node-pidgin/blob/master/index.js

Related

Reading \Seen, \Unseen flags using imaplib in Python

Is it possible to read the e-mail flags Seen and Unseen, and restore them as they were before reading the e-mail, using imaplib in Python?
I couldn't find any information yet regarding reading these flags, but there are plenty of examples of setting the Seen, Unseen, etc. flags. I would appreciate it if somebody could guide me in the right direction.
A big thanks goes to #stovfl and #Max in the comments. I successfully made my program work using imap_conn.fetch(uid, '(BODY.PEEK[HEADER])'). On the other hand, if somebody needs read-only access, they can use imap_conn.select('Inbox', readonly=True).
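For anyone landing here later, a minimal sketch of that combination (the host, credentials and folder name are placeholders):
import imaplib

# Placeholder host and credentials.
conn = imaplib.IMAP4_SSL('imap.example.com')
conn.login('user@example.com', 'password')

# readonly=True means the server will not change any flags for this session.
conn.select('Inbox', readonly=True)

typ, data = conn.search(None, 'ALL')
for num in data[0].split():
    # BODY.PEEK fetches the header without setting the \Seen flag.
    typ, msg_data = conn.fetch(num, '(BODY.PEEK[HEADER])')
    print(msg_data[0][1])

conn.logout()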

Using python to measure Wi-Fi

I am working on a school project in which I must measure and log Wi-Fi (I know how to log the data, I just don't know the most efficient way to do it). I have tried using
subprocess.check_output('iwconfig', stderr=subprocess.STDOUT)
but that outputs bytes, which I really don't want to deal with (and I don't know how to, either, so if that is the only option, then can someone explain how to handle bytes). Is there any other way, maybe to get it in plain text? And please do not just give me the code I need, tell me how to do it.
Thank you in advance!
You are almost there. I assume that you are using Python 3.x. iwconfig is sending you text encoded in whatever character set your terminal uses. That encoding is available as sys.stdin.encoding. So just put it together to get a string. By the way, you want a command list instead of a string.
import subprocess
import sys
raw = subprocess.check_output(['iwconfig'], stderr=subprocess.STDOUT)
data = raw.decode(sys.stdin.encoding)
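If you then want to pull a number out of the decoded text, something like the sketch below could work; the "Signal level" regex is an assumption about your iwconfig output and may need adjusting for your driver:
import re
import subprocess
import sys

raw = subprocess.check_output(['iwconfig'], stderr=subprocess.STDOUT)
# sys.stdin.encoding can be None when not attached to a terminal.
text = raw.decode(sys.stdin.encoding or 'utf-8')

# Assumed format: lines such as "Signal level=-52 dBm".
match = re.search(r'Signal level[=:]\s*(-?\d+)', text)
if match:
    print('Signal level:', match.group(1), 'dBm')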

Python and downloading Google Sheets feeds

I'm trying to download a spreadsheet from Google Drive inside a program I'm writing (so the data can be easily updated across all users), but I've run into a few problems:
First, and perhaps foolishly, I only want to use the basic Python distribution, so I'm not requiring people to download extra modules to run it. The urllib.request module seems to work well enough for basic downloading, specifically the urlopen() function, when I've tested it on normal webpages (more on why I say "normal" below).
Second, most questions and answers on here deal with retrieving a .csv from the spreadsheet. While this might work even better than trying to parse the feeds (and I have actually gotten it to work), using only the basic address means only the first sheet is downloaded, and I need to add a non-obvious gid to get the others. I want to have the program independent of the spreadsheet, so I only have to add new data online and the clients are automatically updated; trying to find a gid programmatically gives me trouble because:
Third, I can't actually get the feeds (interface described here) to be downloaded correctly. That does seem to be the best way to get what I want—download the overview of the entire spreadsheet, and from there obtain the addresses to each sheet—but if I try to send that through urlopen(feed).read() it just returns b''. While I'm not exactly sure what the problem is, I'd guess that the webpage is empty very briefly when it's first loaded, and that's what urlopen() thinks it should be returning. I've included what little code I'm using below, and was hoping someone had a way of working around this. Thanks!
import urllib.request as url
key = '1Eamsi8_3T_a0OfL926OdtJwLoWFrGjl1S2GiUAn75lU'
gid = '1193707515'
# Single sheet in CSV format
# feed = 'https://docs.google.com/spreadsheets/d/' + key + '/export?format=csv&gid=' + gid
# Document feed
feed = 'https://spreadsheets.google.com/feeds/worksheets/' + key + '/private/full'
csv = url.urlopen(feed).read()
(I don't actually mind publishing the key/gid, because I am planning on releasing this if I ever finish it.)
Requires OAuth2 or a password.
If you log out of Google and try again with your browser, it fails (it failed when I logged out). It looks like it requires a Google account.
I did have it working with an application password a while ago, but I now use OAuth2. Both are quite a bit of messing about compared to CSV.
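If CSV is acceptable, here is a minimal sketch of the export route already commented out in the question; it assumes the spreadsheet is shared publicly ("anyone with the link can view"), otherwise you are back to OAuth2:
import urllib.request as url

key = '1Eamsi8_3T_a0OfL926OdtJwLoWFrGjl1S2GiUAn75lU'
gid = '1193707515'

# CSV export of a single sheet; each sheet needs its own gid.
feed = ('https://docs.google.com/spreadsheets/d/' + key +
        '/export?format=csv&gid=' + gid)

csv_data = url.urlopen(feed).read().decode('utf-8')
print(csv_data)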
This sounds like a perfect use case for a wrapper library I once wrote. Let me know if you find it useful.

Is there a way to use urllib to open one site until a specified object in it?

I'm using urllib to open one site and get some information on it.
Is there a way to "open" this site only up to the part I need and discard the rest (by "discard" I mean don't open/load the rest)?
I'm not sure what you are trying to do. If you are simply trying to parse the site to find the useful "information", then I recommend using the library BeautifulSoup. That library makes it easy to keep certain parts of the site while discarding the rest.
If, however, you are trying to save download bandwidth by downloading only a piece of the site, then you will need to do a lot more work. If that is the case, please say so in your question and I'll update the answer.
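As a rough illustration of the parsing route (the URL and the tag/class to keep are placeholders), a BeautifulSoup sketch might look like this:
import urllib.request
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = urllib.request.urlopen('http://example.com/page.html').read()
soup = BeautifulSoup(html, 'html.parser')

# Keep only the part you care about, e.g. the first <div class="content">.
content = soup.find('div', class_='content')
if content is not None:
    print(content.get_text())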
You should be able to call read(number_of_bytes) instead of read(); this reads a given number of bytes instead of all of them. Then append to the bytes already downloaded and check whether they contain what you're looking for. You should then be able to stop the download with .close().
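A sketch of that idea (the URL and the marker to search for are placeholders):
import urllib.request

url = 'http://example.com/page.html'  # placeholder URL
marker = b'</table>'                  # placeholder: the text after which you stop

response = urllib.request.urlopen(url)
downloaded = b''

# Read in small chunks and stop as soon as the marker has arrived.
while True:
    chunk = response.read(1024)
    if not chunk:
        break
    downloaded += chunk
    if marker in downloaded:
        break

response.close()
print(len(downloaded), 'bytes downloaded')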

want to add url links to .csv datafeed using python

I've looked through the current related questions but have not managed to find anything similar to my needs.
I'm in the process of creating an affiliate store using zencart. One of the issues is that zencart is not designed for redirects and affiliate stores, but it can be done. I will be changing the store so it acts like a showcase store showing prices.
There is a mod called Easy Populate which allows me to upload datafeeds. This is all well and good, however my affiliate link will not be in each product. I can do it manually after uploading the datafeed by going to each product and adding it as an image with a redirect link, but when there are over 500 items it's going to be a long, repetitive and time-consuming job.
I have been told that I can add the links to the datafeed before uploading it to zencart, and that this should be done using Python. I've been reading about Python for several days now and feel I'm looking for the wrong things. I was wondering if someone could please advise the simplest way for me to get this done.
I hope the question makes sense
thanks
abs
You could craft a Python script using the csv module like this:
>>> import csv
>>> cartWriter = csv.writer(open('yourcart.csv', 'wb'))
>>> cartWriter.writerow(['Product', 'yourinfo', 'yourlink'])
You need to know how the link should be formatted, hoping that it can be composed from the other parameters present in the CSV file.
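For example, a rough sketch that copies an existing datafeed and appends an affiliate link column; the file names, the redirect prefix and the assumption that the product URL sits in the last column are all placeholders you would need to adapt:
import csv

affiliate_prefix = 'http://www.example-affiliate.com/redirect?url='  # placeholder

reader = csv.reader(open('datafeed.csv', 'rb'))
writer = csv.writer(open('datafeed_with_links.csv', 'wb'))

# Copy the header and add a column for the affiliate link.
header = next(reader)
writer.writerow(header + ['affiliate_link'])

for row in reader:
    # Placeholder assumption: the product URL is in the last column.
    writer.writerow(row + [affiliate_prefix + row[-1]])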
First, use the csv module as systempuntoout told you; secondly, you will want to change your headers to:
mimetype='text/csv'
Content-Disposition = 'attachment; filename=name_of_your_file.csv'
The way to do it depends very much on your website implementation. In pure Python you would probably do that with an HttpResponse object. In Django as well, but there are some shortcuts.
You can find a video demonstrating how to create CSV files with Python on showmedo. It's not free however.
Now, to provide a link to download the CSV, this depends on your website. What is the technology behind it: pure Python, Django, Pylons, TurboGears?
If you can't answer that question, you should ask your boss for training on your infrastructure before trying to make changes to it.
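If the site happens to be Django, a minimal sketch of serving the CSV with those headers might look like this (the view name and row contents are placeholders):
import csv
from django.http import HttpResponse

def download_feed(request):
    # Placeholder Django view returning the CSV as an attachment.
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=name_of_your_file.csv'
    writer = csv.writer(response)
    writer.writerow(['Product', 'yourinfo', 'yourlink'])
    return response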
