SyntaxError: Missing parentheses in call to 'print' [duplicate] - python

This question already has answers here:
What does "SyntaxError: Missing parentheses in call to 'print'" mean in Python?
(11 answers)
Closed 5 years ago.
I've been trying to scrape some Twitter data, but whenever I run this code I get the error SyntaxError: Missing parentheses in call to 'print'.
Can someone please help me out with this one?
Thanks for your time :)
"""
Use Twitter API to grab user information from list of organizations;
export text file
Uses Twython module to access Twitter API
"""
import sys
import string
import simplejson
from twython import Twython
#WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME
import datetime
now = datetime.datetime.now()
day=int(now.day)
month=int(now.month)
year=int(now.year)
#FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API
t = Twython(app_key='APP_KEY', #REPLACE 'APP_KEY' WITH YOUR APP KEY, ETC., IN THE NEXT 4 LINES
app_secret='APP_SECRET',
oauth_token='OAUTH_TOKEN',
oauth_token_secret='OAUTH_TOKEN_SECRET')
#REPLACE WITH YOUR LIST OF TWITTER USER IDS
ids = "4816,9715012,13023422, 13393052, 14226882, 14235041, 14292458, 14335586, 14730894,\
15029174, 15474846, 15634728, 15689319, 15782399, 15946841, 16116519, 16148677, 16223542,\
16315120, 16566133, 16686673, 16801671, 41900627, 42645839, 42731742, 44157002, 44988185,\
48073289, 48827616, 49702654, 50310311, 50361094,"
#ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL
#THE VARIABLE USERS IS A JSON FILE WITH DATA ON THE 32 TWITTER USERS LISTED ABOVE
users = t.lookup_user(user_id = ids)
#NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR
outfn = "twitter_user_data_%i.%i.%i.txt" % (now.month, now.day, now.year)
#NAMES FOR HEADER ROW IN OUTPUT FILE
fields = "id screen_name name created_at url followers_count friends_count statuses_count \
favourites_count listed_count \
contributors_enabled description protected location lang expanded_url".split()
#INITIALIZE OUTPUT FILE AND WRITE HEADER ROW
outfp = open(outfn, "w")
outfp.write(string.join(fields, "\t") + "\n") # header
#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE
#THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE
for entry in users:
    #CREATE EMPTY DICTIONARY
    r = {}
    for f in fields:
        r[f] = ""
    #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY
    r['id'] = entry['id']
    #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES
    r['screen_name'] = entry['screen_name']
    r['name'] = entry['name']
    r['created_at'] = entry['created_at']
    r['url'] = entry['url']
    r['followers_count'] = entry['followers_count']
    r['friends_count'] = entry['friends_count']
    r['statuses_count'] = entry['statuses_count']
    r['favourites_count'] = entry['favourites_count']
    r['listed_count'] = entry['listed_count']
    r['contributors_enabled'] = entry['contributors_enabled']
    r['description'] = entry['description']
    r['protected'] = entry['protected']
    r['location'] = entry['location']
    r['lang'] = entry['lang']
    #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE
    if 'url' in entry['entities']:
        r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url']
    else:
        r['expanded_url'] = ''
    print r
    #CREATE EMPTY LIST
    lst = []
    #ADD DATA FOR EACH VARIABLE
    for f in fields:
        lst.append(unicode(r[f]).replace("\/", "/"))
    #WRITE ROW WITH DATA IN LIST
    outfp.write(string.join(lst, "\t").encode("utf-8") + "\n")
outfp.close()

It seems like you are using Python 3.x, but the code you are running here is Python 2.x code. There are two ways to solve this:
Download Python 2.x from Python's website and use it to run your script
Add parentheses to the print call at the end by replacing print r with print(r) (and keep using Python 3)
Today, though, a growing majority of Python programmers are using Python 3, and the official Python wiki states the following:
Python 2.x is legacy, Python 3.x is the present and future of the
language
If I were you, I'd go with the second option and keep using Python 3.
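If you go with Python 3, note that two other Python 2-only pieces in this script will also break: string.join() was removed from the string module, and the unicode builtin is gone. A minimal standalone sketch of the replacements, using illustrative sample data:

```python
# Python 2: string.join(fields, "\t")  ->  Python 3: the str.join method
fields = "id screen_name name".split()
header = "\t".join(fields)
print(header)

# Python 2: print r  ->  Python 3: print(r)
r = {"id": 4816}
print(r)

# Python 2: unicode(value)  ->  Python 3: str(value)
print(str(4816))
```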

Looks like you are trying to run Python 2 code in Python 3, where print is a function and requires parentheses:
print(foo)

You just need to add parentheses to your print statement to convert it to a function call, like the error says:
print expression -> print(expression)
In Python 2, print is a statement, but in Python 3, print is a function. So you could alternatively just run your code with Python 2. print(expression) is backwards compatible with Python 2.
Also, why are you capitalizing all your comments? It's annoying. Your code also violates PEP 8 in several ways. Get an editor like PyCharm (it's free) that can automatically detect errors like this.
You didn't leave a space between # and your comment
You didn't leave spaces between = and other tokens

In Python 2, print was a statement, not a function. That means you could use it without parentheses. In Python 3, that changed: it is a function there, and you need to use print(foo) instead of print foo.

Related

How to use list objects as arguments inside a python function?

I am new to programming and I am stuck with this following problem.
I can't find a way to pass my list objects as arguments within the following function.
My goal with the function is to run through all the list objects one by one and save the data as a variable named erc20.
Link to .json file // Link to etherscan-python github
from etherscan import Etherscan
import json
with open('adress-tracker.json') as json_file:
    json_data = json.load(json_file)

print(json_data)
# Here we create a result list, in which we will store our addresses
result_list = [json_dict['Address'] for json_dict in json_data]
eth = Etherscan("APIKEY") #removed my api key
erc20 = eth.get_erc20_token_transfer_events_by_address(address = result_list, startblock="0", endblock="999999999", sort='asc')
print(erc20)
This will return the following Error:
AssertionError: Error! Invalid address format -- NOTOK
When I directly add the address or link it to a variable it works just fine. However I need to find a way how to apply the functions to all addresses, as I plan to add in hundreds.
I tried changing the list to a directory and also tried to implement Keyword Arguments with (*result_list) or created a new variable called params with all needed arguments. Then used (*params). But unfortunately I can't wrap my head around how to solve this problem.
Thank you so much in advance!
This function expects a single address, so you have to use a for-loop to check every address separately:
erc20 = []

for address in result_list:
    result = eth.get_erc20_token_transfer_events_by_address(address=address,
                                                            startblock="0",
                                                            endblock="999999999",
                                                            sort='asc')
    erc20.append(result)

print(erc20)
EDIT:
Minimal working code which works for me:
import os
import json
from etherscan import Etherscan

TOKEN = os.getenv('ETHERSCAN_TOKEN')
eth = Etherscan(TOKEN)

with open('addresses.json') as json_file:
    json_data = json.load(json_file)
#print(json_data)

erc20 = []
for item in json_data:
    print(item['Name'])
    result = eth.get_erc20_token_transfer_events_by_address(address=item['Address'],
                                                            startblock="0",
                                                            endblock="999999999",
                                                            sort='asc')
    erc20.append(result)
    print('len(result):', len(result))

#print(erc20)
#for item in erc20:
#    print(item)
Result:
Name 1
len(result): 44
Name 2
len(result): 1043
Name 3
len(result): 1

How to make a hard-coded HTTP processing script dynamic?

I have a Jython 2.7 script that receives a URL and uses the parameters/values in the URL to create or update records.
Example URL: http://server:host/maximo/oslc/script/CREATEWO?&wonum=WO0001&description=Legacy&classstructureid=1666&wopriority=1&worktype=CM
Details:
Receive the URL and put the parameters/values in variables:
from psdi.server import MXServer
from psdi.mbo import MboSet
resp = {}
wonum = request.getQueryParam("wonum")
description = request.getQueryParam("description")
classstructureid = request.getQueryParam("classstructureid")
wopriority = request.getQueryParam("wopriority")
worktype = request.getQueryParam("worktype")
Some lines that aren't relevant to the question:
woset = MXServer.getMXServer().getMboSet("workorder",request.getUserInfo())
whereClause = "wonum= '" + wonum + "'"
woset.setWhere(whereClause)
woset.reset()
woMbo = woset.moveFirst()
Then use the values to either create a new record or update an existing record:
#If workorder already exists, update it:
if woMbo is not None:
    woMbo.setValue("description", description)
    woMbo.setValue("classstructureid", classstructureid)
    woMbo.setValue("wopriority", wopriority)
    woMbo.setValue("worktype", worktype)
    woset.save()
    woset.clear()
    woset.close()
    resp[0] = 'Updated workorder ' + wonum
#Else, create a new workorder
else:
    woMbo = woset.add()
    woMbo.setValue("wonum", wonum)
    woMbo.setValue("description", description)
    woMbo.setValue("classstructureid", classstructureid)
    woMbo.setValue("wopriority", wopriority)
    woMbo.setValue("worktype", worktype)
    woset.save()
    woset.clear()
    woset.close()
    resp[0] = 'Created workorder ' + wonum
responseBody = resp[0]
Question:
Unfortunately, the field names/values are hardcoded in 3 different places in the script.
I would like to enhance the script so that it is dynamic -- not hardcoded.
In other words, it would be great if the script could accept a list of parameters/values and simply loop through them to update or create a record in the respective fields.
Is it possible to do this?
You're using the Maximo NextGen REST API to execute an automation script that accepts an HTTP request with parameters and creates or updates a Work Order in the system. You want to make your script more generic, presumably to accept more parameters for the created/updated work order and/or to handle other MBOs.
This can be achieved without developing automation scripts and just using the Next Gen. API you're already using to execute the script. The API already accepts create & update requests on the mxwo object structure with the ability to use all the fields, child objects, etc.
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_creating_and_updating_resources
Assuming you are always working with the same query parameters, rather than defining individual variables, loop through a list of strings and store them as key-value pairs.
To populate
items = ["wonum", "description"]
resp = {k: request.getQueryParam(k) for k in items}
Then to set
for i in items:
    woMbo.setValue(i, resp[i])
Otherwise, you are looking for URL parsing and the getQuery method, followed by a split("="), giving you ["wonum", "WO0001", "description", "Legacy"], for example, and you can loop over every other element to get your dynamic entries:
l = ["wonum", "WO0001", "description", "Legacy"]
for i in range(0, len(l)-1, 2):
    print('key:%s\tvalue:%s' % (l[i], l[i+1]))

key:wonum value:WO0001
key:description value:Legacy
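The same flat list can also be paired up without an index loop. A minimal sketch, assuming the same even-length key/value layout (dict(zip(...)) works in Jython 2.7 as well):

```python
l = ["wonum", "WO0001", "description", "Legacy"]
# keys are the even-indexed elements, values the odd-indexed ones
params = dict(zip(l[::2], l[1::2]))
print(params)  # {'wonum': 'WO0001', 'description': 'Legacy'}
```

With that dictionary in hand, the setValue loop can be driven directly from it: for k in params: woMbo.setValue(k, params[k]).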
Note: This is subject to SQL injection attacks, and should be fixed
whereClause = "wonum= '" + wonum + "'"
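One simple mitigation, sketched here with a hypothetical whitelist pattern (adjust the regex to whatever your real wonum format allows), is to validate the parameter before interpolating it into the WHERE clause:

```python
import re

wonum = "WO0001"  # value taken from the query parameter
# reject anything outside a conservative character whitelist
if not re.match(r"^[A-Za-z0-9_-]+$", wonum):
    raise ValueError("invalid wonum: %r" % wonum)
whereClause = "wonum = '" + wonum + "'"
print(whereClause)
```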

I want to be able to pull login info from a CSV to use as parameters for the redfish object [duplicate]

This question already has answers here:
How do I read and write CSV files with Python?
(7 answers)
Closed 4 years ago.
if __name__ == "__main__":
    # When running on the server locally use the following commented values
    # iLO_https_url = "blobstore://."
    # iLO_account = "None"
    # iLO_password = "None"

    # When running remotely connect using the iLO secured (https://) address,
    # iLO account name, and password to send https requests
    # iLO_https_url acceptable examples:
    # "https://10.0.0.100"
    # "https://f250asha.americas.hpqcorp.net"
    iLO_https_url = "https://10.0.0.100"
    iLO_account = "admin"
    iLO_password = "password"

    # Create a REDFISH object
    try:
        REDFISH_OBJ = RedfishObject(iLO_https_url, iLO_account, iLO_password)
    except ServerDownOrUnreachableError as excp:
        sys.stderr.write("ERROR: server not reachable or doesn't support " \
                         "RedFish.\n")
        sys.exit()
    except Exception as excp:
        raise excp

    ex4_reset_server(REDFISH_OBJ)
    REDFISH_OBJ.redfish_client.logout()
Above is the "login" part of the script I'm writing. In this part, REDFISH_OBJ = RedfishObject(iLO_https_url, iLO_account, iLO_password), I would like to replace iLO_https_url with a variable whose value would be pulled from a simple CSV file. The CSV would have 3 columns: ip, username, pwd.
I'm trying to execute the other part of the script, not shown here, on every IP in the CSV file. I need to do this in python.
The easiest way is to use the open() function with the split() function.
Try something like this:
with open("data.csv", encoding="utf-8") as csv_file:
    iLO_account, iLO_password, iLO_https_url = csv_file.readline().split(",")
If your entries are separated by line breaks instead, simply replace the "," with "\n".

TypeError: sequence item 1:expected str instance, bytes found

I want to download tweets from Twitter as data sets for sentiment analysis using the Python Twitter API.
I am using the SemEval 2016 Task 6 dataset, so I downloaded "Domain corpus for task B" and found a Readme file that describes the steps.
I am just a beginner and do not know much Python. I installed Python 3.4.3 and already found easy_install.exe and pip.exe in the Scripts folder.
I typed "easy_install twitter" in cmd, as written in the readme file, then tried to apply the steps from the Readme file. Here are the steps:
The first time you run this, it should open up a web browser, have you log into
Twitter, and show a PIN number for you to enter into a prompt generated by the
script.
Login to Twitter with your user name in your default browser.
Run the script like this to download your credentials: python download_tweets_api.py --dist=Donald_Trump.txt --output=downloaded_Donald_Trump.txt
Download tweets like so: python download_tweets_api.py --dist=Donald_Trump.txt --output=downloaded_Donald_Trump.txt
I finished step 1 , then I typed in cmd "download_tweets_api.py --dist=Donald_Trump.txt --output=downloaded_Donald_Trump.txt" but I got an error in last line of the file
TypeError: sequence item 1:expected str instance, bytes found
Here is the content of the file "download_tweets_api.py"
import sys
import os
import time
import datetime
import argparse
from twitter import *

parser = argparse.ArgumentParser(description="downloads tweets")
parser.add_argument('--partial', dest='partial', default=None, type=argparse.FileType('r'))
parser.add_argument('--dist', dest='dist', default=None, type=argparse.FileType('r'), required=True)
parser.add_argument('--output', dest='output', default=None, type=argparse.FileType('w'), required=True)
args = parser.parse_args()

CONSUMER_KEY='xxxxxxxxxxx'
CONSUMER_SECRET='xxxxxxxxxxxxxxxxx'

MY_TWITTER_CREDS = os.path.expanduser('~/.my_app_credentials')
if not os.path.exists(MY_TWITTER_CREDS):
    oauth_dance("Semeval sentiment analysis", CONSUMER_KEY, CONSUMER_SECRET, MY_TWITTER_CREDS)
oauth_token, oauth_secret = read_token_file(MY_TWITTER_CREDS)
t = Twitter(auth=OAuth(oauth_token, oauth_secret, CONSUMER_KEY, CONSUMER_SECRET))

cache = {}
if args.partial != None:
    for line in args.partial:
        fields = line.strip().split("\t")
        text = fields[-1]
        sid = fields[0]
        cache[sid] = text

for line in args.dist:
    fields = line.strip().split('\t')
    sid = fields[0]
    while not sid in cache:
        try:
            text = t.statuses.show(_id=sid)['text'].replace('\n', ' ').replace('\r', ' ')
            cache[sid] = text.encode('utf-8')
        except TwitterError as e:
            if e.e.code == 429:
                rate = t.application.rate_limit_status()
                reset = rate['resources']['statuses']['/statuses/show/:id']['reset']
                now = datetime.datetime.today()
                future = datetime.datetime.fromtimestamp(reset)
                seconds = (future-now).seconds+1
                if seconds < 10000:
                    sys.stderr.write("Rate limit exceeded, sleeping for %s seconds until %s\n" % (seconds, future))
                    time.sleep(seconds)
            else:
                cache[sid] = 'Not Available'
    text = cache[sid]
    args.output.write("\t".join(fields + [text]) + '\n')
Note you can find the download_tweets_api.py and readme files in the "domain corpus for task B"
The issue is with the last line of the download_tweets_api.py script. The script was written in Python 2 and would probably run seamlessly there. The main reason it is not running in Python 3 is that the last line is trying to join strings with bytes. In Python 3, bytes are clearly different from strings: you cannot join bytes to a string without coercing one into the other. In this script, the text variable or the fields variable (or both) contains bytes elements, so you have to convert text and fields before applying the string join function to them.
Basically, you will have to replace the last line of the script with the following:
#Create a function that converts the byte elements of the list fields into strings.
def convert(s):
    try:
        return str(s, encoding='utf8')
    except:
        return s

text = convert(text)
fields = list(map(convert, fields))
args.output.write("\t".join(fields + [text]) + '\n')
If you do not want to modify the script, then you may have to run it in Python 2, or use the 2to3 module to convert the code to Python 3.
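The failure is easy to reproduce in isolation, which also shows why decoding the bytes fixes it:

```python
fields = ["123", "some tweet text"]
text = "more text".encode("utf-8")  # bytes, like the cached API text

try:
    "\t".join(fields + [text])  # mixing str and bytes
except TypeError as e:
    print("join failed:", e)

# decoding the bytes back to str makes the join work
row = "\t".join(fields + [text.decode("utf-8")])
print(row)
```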
Hope this helps.

couchdb-python change notifications

I'm trying to use couchdb.py to create and update databases. I'd like to implement notification changes, preferably in continuous mode. Running the test code posted below, I don't see how the changes scheme works within python.
class SomeDocument(Document):
    #############################################################################
    # def __init__ (self):
    intField = IntegerField()  # for now - this should be an integer
    textField = TextField()

couch = couchdb.Server('http://127.0.0.1:5984')
databasename = 'testnotifications'
if databasename in couch:
    print 'Deleting then creating database ' + databasename + ' from server'
    del couch[databasename]
    db = couch.create(databasename)
else:
    print 'Creating database ' + databasename + ' on server'
    db = couch.create(databasename)

for iii in range(5):
    doc = SomeDocument(intField=iii, textField='somestring'+str(iii))
    doc.store(db)
    print doc.id + '\t' + doc.rev

something = db.changes(feed='continuous', since=4, heartbeat=1000)

for iii in range(5, 10):
    doc = SomeDocument(intField=iii, textField='somestring'+str(iii))
    doc.store(db)
    time.sleep(1)

print something
print db.changes(since=iii-1)
The value
db.changes(since=iii-1)
returns information that is of interest, but in a format from which I haven't worked out how to extract the sequence or revision numbers, or the document information:
{u'last_seq': 6, u'results': [{u'changes': [{u'rev': u'1-9c1e4df5ceacada059512a8180ead70e'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f40049a8', u'seq': 5}, {u'changes': [{u'rev': u'1-bbe2953a5ef9835a0f8d548fa4c33b42'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f400560d', u'seq': 6}]}
Meanwhile, the code I'm really interested in using:
db.changes(feed='continuous',since=4,heartbeat=1000)
Returns a generator object and doesn't appear to provide notifications as they come in, as the CouchDB guide suggests ....
Has anyone used changes in couchdb-python successfully?
I use long polling rather than continuous, and that works OK for me. In long polling mode db.changes blocks until at least one change has happened, and then returns all the changes in a generator object.
Here is the code I use to handle changes. settings.db is my CouchDB Database object.
since = 1
while True:
    changes = settings.db.changes(since=since)
    since = changes["last_seq"]
    for changeset in changes["results"]:
        try:
            doc = settings.db[changeset["id"]]
        except couchdb.http.ResourceNotFound:
            continue
        else:
            pass  # process doc
As you can see it's an infinite loop where we call changes on each iteration. The call to changes returns a dictionary with two elements, the sequence number of the most recent update and the objects that were modified. I then loop through each result loading the appropriate object and processing it.
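Applied to the dictionary shown in the question, the sequence number, document id, and revision come out like this (a sketch using that exact structure):

```python
changes = {
    "last_seq": 6,
    "results": [
        {"changes": [{"rev": "1-9c1e4df5ceacada059512a8180ead70e"}],
         "id": "7d0cb1ccbfd9675b4b6c1076f40049a8", "seq": 5},
    ],
}

for row in changes["results"]:
    seq = row["seq"]                # update sequence number
    doc_id = row["id"]              # document id, usable as db[doc_id]
    rev = row["changes"][0]["rev"]  # revision of the changed document
    print(seq, doc_id, rev)
```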
For a continuous feed, instead of the while True: line use for changes in settings.db.changes(feed="continuous", since=since).
I set up a mail spooler using something similar to this. You'll also need to load couchdb.Session(). I also use a filter so the spooler's changes feed only receives unsent emails.
from couchdb import Server

s = Server('http://localhost:5984/')
db = s['testnotifications']
# the since parameter defaults to 'last_seq' when using continuous feed
ch = db.changes(feed='continuous', heartbeat='1000', include_docs=True)
for line in ch:
    doc = line['doc']
    # process doc here
    doc['priority'] = 'high'
    doc['recipient'] = 'Joe User'
    # doc['state'] + 'sent'
    db.save(doc)
This will allow you to access your doc directly from the changes feed, manipulate your data as you see fit, and finally update your document. I use a try/except block around the actual db.save(doc) so I can catch when a document has been updated while I was editing, and reload the doc before saving.
