While playing around with Tweepy I noticed that the 'status' variable returned from a call to get_user is <tweepy.models.Status object at 0x02AAE050>.
Sure, I can call api.get_user('USER').status, but how can I grab that information from the get_user call itself? That is, I want to loop through user.__getstate__() and, if I find an object that requires further iteration, loop through that too.
I've searched high and low for answers, but my newness to Python is causing problems which I'm pretty sure would be easy to solve if I knew the right questions to ask.
Thanks for any pointers here...
# -*- coding: utf-8 -*-
import sys
import tweepy
import json
from pprint import pprint
api = tweepy.API()
def main():
    print "Starting."

    user = api.get_user('USER', include_entities=1)

    print "================ type ================="
    print type(user)
    print "================ dir ================="
    print dir(user)
    print "================ user ================="
    #
    # We can see 'status': <tweepy.models.Status object at 0x02AAE050>, .......but how do I "explode" that automagically?
    #
    pprint(user.__getstate__())
    print "================ user.status ================="
    pprint(user.status.__getstate__())
    print "================= end ================="

if __name__ == "__main__":
    main()
I was able to get the intended behaviour with jsonpickle, using the following code.
import jsonpickle
.
.
.
user = api.get_user('USERNAME',include_entities=1)
pickled = jsonpickle.encode(user)
print(json.dumps(json.loads(pickled), indent=4, sort_keys=True)) #you could just print pickled, but this makes it pretty
I'm still really interested to understand what I'm missing about how to detect and expand that status object.
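For reference, the kind of recursion I originally had in mind would look roughly like this (just a sketch; it expands anything exposing __getstate__ and does not guard against circular references, e.g. a status pointing back at its user):

def explode(obj):
    # Recursively turn tweepy model objects into plain dicts/lists.
    if hasattr(obj, '__getstate__'):
        obj = obj.__getstate__()
    if isinstance(obj, dict):
        return dict((key, explode(value)) for key, value in obj.items())
    if isinstance(obj, list):
        return [explode(item) for item in obj]
    return obj

pprint(explode(user))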
Try this:
api = tweepy.API(auth,parser=tweepy.parsers.JSONParser())
user = api.get_user('USER',include_entities=1)
Now you can access user['status'] and the other fields easily.
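For example (a rough sketch; with the JSON parser the result is a plain dict, so nested objects such as the last status come back as nested dicts, and the exact keys depend on what Twitter returns):

user = api.get_user('USER', include_entities=1)

print(user['screen_name'])
print(user['status']['text'])

# Or pretty-print the whole thing, nested status included.
import json
print(json.dumps(user, indent=4, sort_keys=True))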
So I have this code, which works great for reading messages out of predefined topics and printing them to the screen. The rosbags come with a rosbag_name.db3 (SQLite) database and a metadata.yaml file.
from rosbags.rosbag2 import Reader as ROS2Reader
import sqlite3
from rosbags.serde import deserialize_cdr
import matplotlib.pyplot as plt
import os
import collections
import argparse
parser = argparse.ArgumentParser(description='Extract images from rosbag.')
# input will be the folder containing the .db3 and metadata.yml file
parser.add_argument('--input','-i',type=str, help='rosbag input location')
# run with python filename.py -i rosbag_dir/
args = parser.parse_args()
rosbag_dir = args.input
topic = "/topic/name"
frame_counter = 0
with ROS2Reader(rosbag_dir) as ros2_reader:

    ros2_conns = [x for x in ros2_reader.connections]
    # This prints a list of all topic names for sanity
    print([x.topic for x in ros2_conns])

    ros2_messages = ros2_reader.messages(connections=ros2_conns)

    for m, msg in enumerate(ros2_messages):
        (connection, timestamp, rawdata) = msg

        if connection.topic == topic:
            print(connection.topic)          # shows topic
            print(connection.msgtype)        # shows message type
            print(type(connection.msgtype))  # shows it's of type string

            # TODO
            # this is where things crash when it's a custom message type
            data = deserialize_cdr(rawdata, connection.msgtype)
            print(data)
The issue is that I can't seem to figure out how to read in custom message types. deserialize_cdr takes a string for the message type field, but it's not clear to me how to replace this with a path or how to otherwise pass in a custom message.
Thanks
One approach would be to declare the message as a string and register it with the type system:
from rosbags.typesys import get_types_from_msg, register_types
MY_CUSTOM_MSG = """
std_msgs/Header header
string foo
"""
register_types(get_types_from_msg(
MY_CUSTOM_MSG, 'my_custom_msgs/msg/MyCustomMsg'))
from rosbags.typesys.types import my_custom_msgs__msg__MyCustomMsg as MyCustomMsg
Next, using:
msg_type = MyCustomMsg.__msgtype__
you can get the message type that you can pass to deserialize_cdr.
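For example, inside the reader loop from the question, that would look roughly like this (a sketch, assuming MyCustomMsg has been registered as above and matches the type stored in the bag):

if connection.topic == topic:
    # Pass the registered type name instead of relying on the
    # (unregistered) string stored in the bag.
    data = deserialize_cdr(rawdata, MyCustomMsg.__msgtype__)
    print(data)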
Also, see here for a quick example.
Another approach is to directly load it from the message definition.
Essentially, you would need to read the message definition:
from pathlib import Path
custom_msg_path = Path('/path/to/my_custom_msgs/msg/MyCustomMsg.msg')
msg_def = custom_msg_path.read_text(encoding='utf-8')
and then follow the same steps as above starting with get_types_from_msg().
A more detailed example of this approach is given here.
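Putting that second approach together, a rough sketch might look like this (the path and type name are placeholders for your own package, not something rosbags provides):

from pathlib import Path

from rosbags.typesys import get_types_from_msg, register_types

# Read the definition straight from the package's .msg file.
custom_msg_path = Path('/path/to/my_custom_msgs/msg/MyCustomMsg.msg')
msg_def = custom_msg_path.read_text(encoding='utf-8')

# Register it under its full ROS2 type name.
register_types(get_types_from_msg(msg_def, 'my_custom_msgs/msg/MyCustomMsg'))

# From here on, deserialize_cdr(rawdata, 'my_custom_msgs/msg/MyCustomMsg')
# should be able to decode messages of this type read from the bag.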
I have imported the following modules:
import saxo_openapi
from saxo_openapi import API
import saxo_openapi.endpoints.referencedata as rd
with the class being:
class saxo_openapi.endpoints.referencedata.exchanges.ExchangeList(params=None)
so in my code I write the following:
r = rd.exchanges.ExhangeList()
rv = client.request(r)
print(json.dumps(rv.response, indent=4))
I have also tried various other ways of writing it, like:
client = saxo_openapi.API(access_token=...)
r = rd.ExchangeList()
client.request(r)
print(json.dumps(r.response, indent=4))
but I get the same error all the time. I can't figure out what I am missing:
module 'saxo_openapi.endpoints.referencedata.exchanges' has no attribute 'ExhangeList'
I really do hope you guys can help.
I have a requests.cookies.RequestsCookieJar object which contains multiple cookies from different domains/paths. How can I extract a cookie string for a particular domain/path, following the rules mentioned here?
For example:
>>> r = requests.get("https://stackoverflow.com")
>>> print(r.cookies)
<RequestsCookieJar[<Cookie prov=4df137f9-848e-01c3-f01b-35ec61022540 for .stackoverflow.com/>]>
# the function I expect
>>> getCookies(r.cookies, "stackoverflow.com")
"prov=4df137f9-848e-01c3-f01b-35ec61022540"
>>> getCookies(r.cookies, "meta.stackoverflow.com")
"prov=4df137f9-848e-01c3-f01b-35ec61022540"
# meta.stackoverflow.com is also satisfied as it is a subdomain of .stackoverflow.com
>>> getCookies(r.cookies, "google.com")
""
# r.cookies does not contain any cookie for google.com, so it returns an empty string
I think you need to work with a Python dictionary of the cookies. (See my comment above.)
def getCookies(cookie_jar, domain):
    cookie_dict = cookie_jar.get_dict(domain=domain)
    found = ['%s=%s' % (name, value) for (name, value) in cookie_dict.items()]
    return ';'.join(found)
Your example:
>>> r = requests.get("https://stackoverflow.com")
>>> getCookies(r.cookies, ".stackoverflow.com")
"prov=4df137f9-848e-01c3-f01b-35ec61022540"
NEW ANSWER
Ok, so I still don't get exactly what it is you are trying to achieve.
If you want to extract the originating url from a requests.RequestCookieJar object (so that you could then check if there is a match with a given subdomain) that is (as far as I know) impossible.
However, you could of course do something like:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-

import requests
import re

class getCookies():
    def __init__(self, url):
        self.cookiejar = requests.get(url).cookies
        self.url = url

    def check_domain(self, domain):
        try:
            base_domain = re.compile(r"(?<=\.).+\..+$").search(domain).group()
        except AttributeError:
            base_domain = domain
        if base_domain in self.url:
            print("\"prov=" + str(dict(self.cookiejar)["prov"]) + "\"")
        else:
            print("No cookies for " + domain + " in this jar!")
Then if you do:
new_instance = getCookies("https://stackoverflow.com")
You could then do:
new_instance.check_domain("meta.stackoverflow.com")
Which would give the output:
"prov=5d4fda78-d042-2ee9-9a85-f507df184094"
While:
new_instance.check_domain("google.com")
Would output:
"No cookies for google.com in this jar!"
Then, if you fine-tune the regex (if needed) and create a list of URLs, you could first loop through the list to create one instance per URL and save them in, e.g., a list or dict. In a second loop you could check another list of URLs to see whether their cookies might be present in any of the instances, as in the sketch below.
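A minimal sketch of that two-loop idea (assuming the getCookies class above, which is hard-coded to look for the "prov" cookie):

source_urls = ["https://stackoverflow.com"]            # pages whose cookies we collect
check_urls = ["meta.stackoverflow.com", "google.com"]  # domains we want to check

# First loop: one instance (and thus one cookie jar) per source URL.
jars = [getCookies(url) for url in source_urls]

# Second loop: ask every jar whether it holds cookies for each domain.
for domain in check_urls:
    for jar in jars:
        jar.check_domain(domain)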
OLD ANSWER
The docs you link to explain:
items()
Dict-like items() that returns a list of name-value tuples from the jar. Allows client-code to call dict(RequestsCookieJar) and get a vanilla python dict of key value pairs.
I think what you are looking for is:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-

import requests

def getCookies(url):
    r = requests.get(url)
    print("\"prov=" + str(dict(r.cookies)["prov"]) + "\"")
Now I can run it like this:
>>> getCookies("https://stackoverflow.com")
"prov=f7712c78-b489-ee5f-5e8f-93c85ca06475"
Actually, I just had the same problem as you. But when I looked at the class definition
class RequestsCookieJar(cookielib.CookieJar, MutableMapping):
I found a function called get_dict(self, domain=None, path=None).
You can simply write code like this:
raw = "rawCookide"
print(len(cookie))
mycookie = SimpleCookie()
mycookie.load(raw)
UCookie={}
for key, morsel in mycookie.items():
UCookie[key] = morsel.value
The following code is not promised to be "forward compatible" because I am accessing attributes of classes that were intentionally hidden (kind of) by their authors; however, if you must get into the attributes of a cookie, take a look here:
import http.cookies
import requests
import json
import sys
import os
aresponse = requests.get('https://www.att.com')
requestscookiejar = aresponse.cookies
for cdomain, cooks in requestscookiejar._cookies.items():
    for cpath, cookgrp in cooks.items():
        for cname, cattribs in cookgrp.items():
            print(cattribs.version)
            print(cattribs.name)
            print(cattribs.value)
            print(cattribs.port)
            print(cattribs.port_specified)
            print(cattribs.domain)
            print(cattribs.domain_specified)
            print(cattribs.domain_initial_dot)
            print(cattribs.path)
            print(cattribs.path_specified)
            print(cattribs.secure)
            print(cattribs.expires)
            print(cattribs.discard)
            print(cattribs.comment)
            print(cattribs.comment_url)
            print(cattribs.rfc2109)
            print(cattribs._rest)
When you only need the simple attributes of the cookies, it is likely less complicated to go about it the following way. This avoids the use of RequestsCookieJar. Here we construct a single SimpleCookie instance by reading from the headers attribute of a response object instead of the cookies attribute. The name SimpleCookie would seem to imply a single cookie, but that isn't what a SimpleCookie is. Try it out:
import http.cookies
import requests
import json
import sys
import os
def parse_cookies(http_response):
    cookie_grp = http.cookies.SimpleCookie()
    for h, v in http_response.headers.items():
        if 'set-cookie' in h.lower():
            for cook in v.split(','):
                cookie_grp.load(cook)
    return cookie_grp
aresponse = requests.get('https://www.att.com')
cookies = parse_cookies(aresponse)
print(str(cookies))
You can get the list of domains in the RequestsCookieJar and then dump the cookies for each domain with the following code:
import requests
response = requests.get("https://stackoverflow.com")
cjar = response.cookies
for domain in cjar.list_domains():
    print(f'Cookies for {domain}: {cjar.get_dict(domain=domain)}')
Outputs:
Cookies for .stackoverflow.com: {'prov': 'efe8c1b7-ddbd-4ad5-9060-89ea6c29479e'}
In this example, only one domain is listed. The output would have multiple lines if there were cookies for multiple domains in the jar.
For many use cases, the cookie jar can be serialized by simply ignoring domains and calling:
dCookies = cjar.get_dict()
We can easily extract a cookie string for a particular domain/path using functions already available in the requests lib.
import requests
from requests.models import Request
from requests.cookies import get_cookie_header
session = requests.session()
r1 = session.get("https://www.google.com")
r2 = session.get("https://stackoverflow.com")
cookie_header1 = get_cookie_header(session.cookies, Request(method="GET", url="https://www.google.com"))
# '1P_JAR=2022-02-19-18; NID=511=Hz9Mlgl7DtS4uhTqjGOEolNwzciYlUtspJYxQ0GWOfEm9u9x-_nJ1jpawixONmVuyua59DFBvpQZkPzNAeZdnJjwiB2ky4AEFYVV'
cookie_header2 = get_cookie_header(session.cookies, Request(method="GET", url="https://stackoverflow.com"))
# 'prov=883c41a4-603b-898c-1d14-26e30e3c8774'
Request is used to prepare a PreparedRequest, which is sent to the server.
What you need is the get_dict() method:
import json
import requests

a_session = requests.Session()
a_session.get('https://google.com/')
session_cookies = a_session.cookies
cookies_dictionary = session_cookies.get_dict()

# Now just print it or convert to json
as_string = json.dumps(cookies_dictionary)
print(cookies_dictionary)
I want to put a list into my threading script, but I am facing a problem.
Contents of list file (example):
http://google.com
http://yahoo.com
http://bing.com
http://python.org
My script:
import codecs
import threading
import sys
import requests
from time import time as timer
from timeout import timeout
import time
try:
    with codecs.open(sys.argv[1], mode='r', encoding='ascii', errors='ignore') as iiz:
        iiz = iiz.read().splitlines()
except IOError:
    pass

oz = list(iiz)

def nnn(url):
    hzz = {'param1': sys.argv[2], 'param2': sys.argv[3]}
    po = requests.post(url, data=hzz)
    if po:
        print("ok \n")

if __name__ == '__main__':
    threads = []
    for i in range(1):
        t = threading.Thread(target=nnn, args=(oz,))
        threads.append(t)
        t.start()
Can you please clarify and elaborate on exactly what you're trying to achieve?
I'm guessing that you're trying to request URLs to load into a web browser or the terminal...
Also, you shouldn't need to put the URLs into a list, because when you opened the file containing the URLs, splitlines() already turned the contents into a list. In other words, iiz is already a list.
Personally, I haven't worked much with the modules you're using (apart from time), but I'll try my best to help you, and hopefully other users will too; a sketch of the direction I would take is below.
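For instance, if the goal is to POST to each URL in the file on its own thread, a minimal sketch (keeping the question's nnn() and oz names and assuming the same command-line arguments) might look like this:

if __name__ == '__main__':
    threads = []
    # One thread per URL instead of passing the whole list to a single call.
    for url in oz:
        t = threading.Thread(target=nnn, args=(url,))
        threads.append(t)
        t.start()
    # Wait for all requests to finish.
    for t in threads:
        t.join()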
I have some code which I would like to decode, but I'm not having much luck guessing what the code page is, if any is being used. Any help would be much appreciated.
I am using the Python command line on a Windows 7 PC. If any Python guru can guide me on how to decrypt and see the code, that would be appreciated.
exec("import re;import base64");exec((lambda p,y:(lambda o,b,f:re.sub(o,b,f))(r"([0-9a-f]+)",lambda m:p(m,y),base64.b64decode("NTQgYgo1NCA3CjU0IDMKNTQgMWUKNTQgOQo1NCAxOAozZiAgICAgICA9IGIuMTAoKQoxNiAgID0gIjQzOi8vMTIuM2QvNGMvMWQuMjUuZi00ZC4zZSIKYSA9ICIxZC4yNS5mIgoyYSA4KDYpOgoJMzMgMy5jKCczYy4yZSglNTIpJyAlIDYpID09IDEKCjJhIDE1KDM1KToKCTUgPSAzLjQoMWUuNS4xYygnMTM6Ly8yZC8xZicsJzMwJykpCgkyMyA1CgkyMSA9IDcuMTQoKQoJMjEuMzgoIjEwIDI4IiwiMjAgMTAuLiIsJycsICczNiA0MCcpCgkxMT0xZS41LjFjKDUsICdlLjNlJykKCTM5OgoJCTFlLjFhKDExKQoJMWI6CgkJMmMKCQk5LmUoMzUsIDExLCAyMSkKCQkyID0gMy40KDFlLjUuMWMoJzEzOi8vMmQnLCcxZicpKQoJCTIzIDIKCQkyMS4zNCgwLCIiLCAiM2IgNDciKQoJCTE4LjQ4KDExLDIsMjEpCgkJCgkJMy41MygnMjIoKScpOyAKCQkzLjUzKCcyNigpJyk7CgkJMy41MygiNDUuZCgpIik7IAoJCTE5PTcuMzcoKTsgMTkuNTAoIjMyISIsIjJmIDNhIDQ5IDQxIDI5IiwiICAgWzI0IDQ2XTMxIDUxIDRhIDRlIDE3LjNkWy8yNF0iKQoJCSIiIgoJCTM5OgoJCQkxZS4xYSgxMSkKCQkxYjoKCQkJMmMKCQkJIzI3KCkKCQk0Mjo0NCgpCgkJIiIiCgoyYSAyYigpOgoJNGYgNGIgOChhKToKCQkxNSgxNikKCQoKCjJiKCk=")))(lambda a,b:b[int("0x"+a.group(1),16)],"0|1|addonfolder|xbmc|translatePath|path|script_name|xbmcgui|script_chk|downloader|scriptname|xbmcaddon|getCondVisibility|UpdateLocalAddons|download|supermax|Addon|lib|supermaxwizard|special|DialogProgress|INSTALL|website|SuperMaxWizard|extract|dialog|remove|except|join|plugin|os|addons|Installing|dp|UnloadSkin|print|COLOR|video|ReloadSkin|FORCECLOSE|Installer|Installed|def|Main|pass|home|HasAddon|SuperMax|packages|Brought|Success|return|update|url|Please|Dialog|create|try|Wizard|Nearly|System|com|zip|addon|Wait|been|else|http|quit|XBMC|gold|Done|all|has|You|not|sm|MP|By|if|ok|To|s|executebuiltin|import".split("|")))
The code is uglified. You can unobfuscate it yourself by executing the contents of exec(...) in your Python shell.
import re
import base64
print ((lambda p,y.....split("|")))
EDIT: As snakecharmerb says, it is generally not safe to execute unknown code. I analysed the code to find that running the insides of exec will only decrypt, and leaving off the exec itself will just result in a string. This procedure ("execute stuff inside exec") is by no means a generally safe method to decrypt uglified code, and you need to actually analyse what it does. But, at this point, I was asking you to trust my judgement, which, if it is wrong, theoretically could expose you to an attack. In addition, it seems you have problems getting it to run on your Python; so here's what I'm getting from the above:
import xbmcaddon
import xbmcgui
import xbmc
import os
import downloader
import extract
addon = xbmcaddon.Addon()
website = "http://supermaxwizard.com/sm/plugin.video.supermax-MP.zip"
scriptname = "plugin.video.supermax"
def script_chk(script_name):
    return xbmc.getCondVisibility('System.HasAddon(%s)' % script_name) == 1

def INSTALL(url):
    path = xbmc.translatePath(os.path.join('special://home/addons', 'packages'))
    print path
    dp = xbmcgui.DialogProgress()
    dp.create("Addon Installer", "Installing Addon..", '', 'Please Wait')
    lib = os.path.join(path, 'download.zip')
    try:
        os.remove(lib)
    except:
        pass
    downloader.download(url, lib, dp)
    addonfolder = xbmc.translatePath(os.path.join('special://home', 'addons'))
    print addonfolder
    dp.update(0, "", "Nearly Done")
    extract.all(lib, addonfolder, dp)

    xbmc.executebuiltin('UnloadSkin()')
    xbmc.executebuiltin('ReloadSkin()')
    xbmc.executebuiltin("XBMC.UpdateLocalAddons()")
    dialog = xbmcgui.Dialog()
    dialog.ok("Success!", "SuperMax Wizard has been Installed", " [COLOR gold]Brought To You By SuperMaxWizard.com[/COLOR]")
    """
    try:
        os.remove(lib)
    except:
        pass
    #FORCECLOSE()
    else:quit()
    """

def Main():
    if not script_chk(scriptname):
        INSTALL(website)

Main()