Bar chart doesn't load the values - Python

My code:
# Create website bar chart
website_fig = go.Bar(
    x=[
        today + " - " + one_week,
        one_week + " - " + two_week,
        two_week + " - " + three_week,
        three_week + " - " + four_week,
    ],
    y=[
        product_data[today + " - " + one_week][network],
        product_data[one_week + " - " + two_week][network],
        product_data[two_week + " - " + three_week][network],
        product_data[three_week + " - " + four_week][network]
    ],
    name="Website statistic"
)
The values in y are 3, 0, 0, 0; I have printed them and checked their type with type(value), and each is class int. Yet the chart renders with a -1, -0.5, 0, 0.5, 1 y-axis, as if nothing were plotted. The x-axis is fine. When I replace the y values with a hardcoded 3, 0, 0, 0 it works. I also tried forcing them to ints with int(value), and multiplying them by 1 (int(value) * 1 and value * 1).
Thank you for any advice! The whole code is in the "Alternative of orca" question below.
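A debugging sketch, not a confirmed fix (the asker reports int(value) alone did not help): build the y list separately and inspect the repr of each element, to rule out numpy scalars or stray string values sneaking in from the nested dict, then hand Plotly a plain list of Python ints:
periods = [
    today + " - " + one_week,
    one_week + " - " + two_week,
    two_week + " - " + three_week,
    three_week + " - " + four_week,
]
y_values = [product_data[p][network] for p in periods]
print([repr(v) for v in y_values])  # distinguishes e.g. 3, numpy.int64(3), and '3'

# Coerce to plain ints before handing the list to Plotly
website_fig = go.Bar(x=periods, y=[int(v) for v in y_values], name="Website statistic")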


Facing "AttributeError: 'dict' object has no attribute 'dtype'" in Google Colab

I have tried data augmentation for a numeric dataset. The augmentation succeeds, but while exporting the augmented dataset from Google Colab I am facing the AttributeError: 'dict' object has no attribute 'dtype'.
The relevant part of the code and the error message are given below:
A = []
A.append(df)
for _ in range(5):
    for _, row in df.iterrows():
        temp = {
            'PERCENT_PUB_DATA': row['PERCENT_PUB_DATA'] + np.random.uniform(percst),
            'ACCESS_TO_PUB_DATA': row['ACCESS_TO_PUB_DATA'] + np.random.uniform(accest),
            'COUPLING_BETWEEN_OBJECTS': row['COUPLING_BETWEEN_OBJECTS'] + np.random.uniform(coupst),
            'DEPTH': row['DEPTH'] + np.random.uniform(deptst),
            'LACK_OF_COHESION_OF_METHODS': row['LACK_OF_COHESION_OF_METHODS'] + np.random.uniform(lackst),
            'NUM_OF_CHILDREN': row['NUM_OF_CHILDREN'] + np.random.uniform(numost),
            'DEP_ON_CHILD': row['DEP_ON_CHILD'] + np.random.uniform(depost),
            'FAN_IN': row['FAN_IN'] + np.random.uniform(fanist),
            'RESPONSE_FOR_CLASS': row['RESPONSE_FOR_CLASS'] + np.random.uniform(respst),
            'WEIGHTED_METHODS_PER_CLASS': row['WEIGHTED_METHODS_PER_CLASS'] + np.random.uniform(weigst),
            'minLOC_BLANK': row['minLOC_BLANK'] + np.random.uniform(blankst),
            'minBRANCH_COUNT': row['minBRANCH_COUNT'] + np.random.uniform(branchst),
            'minLOC_CODE_AND_COMMENT': row['minLOC_CODE_AND_COMMENT'] + np.random.uniform(codest),
            'minLOC_COMMENTS': row['minLOC_COMMENTS'] + np.random.uniform(comentsst),
            'minCYCLOMATIC_COMPLEXITY': row['minCYCLOMATIC_COMPLEXITY'] + np.random.uniform(cyclost),
            'minDESIGN_COMPLEXITY': row['minDESIGN_COMPLEXITY'] + np.random.uniform(desist),
            'minESSENTIAL_COMPLEXITY': row['minESSENTIAL_COMPLEXITY'] + np.random.uniform(essest),
            'minLOC_EXECUTABLE': row['minLOC_EXECUTABLE'] + np.random.uniform(execst),
            'minHALSTEAD_CONTENT': row['minHALSTEAD_CONTENT'] + np.random.uniform(contst),
            'minHALSTEAD_DIFFICULTY': row['minHALSTEAD_DIFFICULTY'] + np.random.uniform(diffest),
            'minHALSTEAD_EFFORT': row['minHALSTEAD_EFFORT'] + np.random.uniform(effortsst),
            'minHALSTEAD_ERROR_EST': row['minHALSTEAD_ERROR_EST'] + np.random.uniform(errost),
            'minHALSTEAD_LENGTH': row['minHALSTEAD_LENGTH'] + np.random.uniform(lengtst),
            'minHALSTEAD_LEVEL': row['minHALSTEAD_LEVEL'] + np.random.uniform(levst),
            'minHALSTEAD_PROG_TIME': row['minHALSTEAD_PROG_TIME'] + np.random.uniform(progst),
            'minHALSTEAD_VOLUME': row['minHALSTEAD_VOLUME'] + np.random.uniform(volust),
            'minNUM_OPERANDS': row['minNUM_OPERANDS'] + np.random.uniform(operanst),
            'minNUM_OPERATORS': row['minNUM_OPERATORS'] + np.random.uniform(operatst),
            'minNUM_UNIQUE_OPERANDS': row['minNUM_UNIQUE_OPERANDS'] + np.random.uniform(uoperandst),
            'minNUM_UNIQUE_OPERATORS': row['minNUM_UNIQUE_OPERATORS'] + np.random.uniform(uoperatorst),
            'minLOC_TOTAL': row['minLOC_TOTAL'] + np.random.uniform(totst),
            'maxLOC_BLANK': row['maxLOC_BLANK'] + np.random.uniform(mblankst),
            'maxBRANCH_COUNT': row['maxBRANCH_COUNT'] + np.random.uniform(branchcountst),
            'maxLOC_CODE_AND_COMMENT': row['maxLOC_CODE_AND_COMMENT'] + np.random.uniform(mcodest),
            'maxLOC_COMMENTS': row['maxLOC_COMMENTS'] + np.random.uniform(mcommentst),
            'maxCYCLOMATIC_COMPLEXITY': row['maxCYCLOMATIC_COMPLEXITY'] + np.random.uniform(mcyclost),
            'maxDESIGN_COMPLEXITY': row['maxDESIGN_COMPLEXITY'] + np.random.uniform(mdesist),
            'maxESSENTIAL_COMPLEXITY': row['maxESSENTIAL_COMPLEXITY'] + np.random.uniform(messenst),
            'maxLOC_EXECUTABLE': row['maxLOC_EXECUTABLE'] + np.random.uniform(mlocst),
            'maxHALSTEAD_CONTENT': row['maxHALSTEAD_CONTENT'] + np.random.uniform(mhalconst),
            'maxHALSTEAD_DIFFICULTY': row['maxHALSTEAD_DIFFICULTY'] + np.random.uniform(mhaldiffst),
            'maxHALSTEAD_EFFORT': row['maxHALSTEAD_EFFORT'] + np.random.uniform(mhaleffst),
            'maxHALSTEAD_ERROR_EST': row['maxHALSTEAD_ERROR_EST'] + np.random.uniform(mhalerrst),
            'maxHALSTEAD_LENGTH': row['maxHALSTEAD_LENGTH'] + np.random.uniform(mhallenst),
            'maxHALSTEAD_LEVEL': row['maxHALSTEAD_LEVEL'] + np.random.uniform(mhallevst),
            'maxHALSTEAD_PROG_TIME': row['maxHALSTEAD_PROG_TIME'] + np.random.uniform(mhalpst),
            'maxHALSTEAD_VOLUME': row['maxHALSTEAD_VOLUME'] + np.random.uniform(mhalvst),
            'maxNUM_OPERANDS': row['maxNUM_OPERANDS'] + np.random.uniform(mnumopst),
            'maxNUM_OPERATORS': row['maxNUM_OPERATORS'] + np.random.uniform(mnopst),
            'maxNUM_UNIQUE_OPERANDS': row['maxNUM_UNIQUE_OPERANDS'] + np.random.uniform(muopst),
            'maxNUM_UNIQUE_OPERATORS': row['maxNUM_UNIQUE_OPERATORS'] + np.random.uniform(muoprst),
            'maxLOC_TOTAL': row['maxLOC_TOTAL'] + np.random.uniform(mloctst),
            'avgLOC_BLANK': row['avgLOC_BLANK'] + np.random.uniform(alocbst),
            'avgBRANCH_COUNT': row['avgBRANCH_COUNT'] + np.random.uniform(abcst),
            'avgLOC_CODE_AND_COMMENT': row['avgLOC_CODE_AND_COMMENT'] + np.random.uniform(aloccodest),
            'avgLOC_COMMENTS': row['avgLOC_COMMENTS'] + np.random.uniform(aloccommst),
            'avgCYCLOMATIC_COMPLEXITY': row['avgCYCLOMATIC_COMPLEXITY'] + np.random.uniform(acyclost),
            'avgDESIGN_COMPLEXITY': row['avgDESIGN_COMPLEXITY'] + np.random.uniform(adesigst),
            'avgESSENTIAL_COMPLEXITY': row['avgESSENTIAL_COMPLEXITY'] + np.random.uniform(aessest),
            'avgLOC_EXECUTABLE': row['avgLOC_EXECUTABLE'] + np.random.uniform(alocexest),
            'avgHALSTEAD_CONTENT': row['avgHALSTEAD_CONTENT'] + np.random.uniform(ahalconst),
            'avgHALSTEAD_DIFFICULTY': row['avgHALSTEAD_DIFFICULTY'] + np.random.uniform(ahaldifficst),
            'avgHALSTEAD_EFFORT': row['avgHALSTEAD_EFFORT'] + np.random.uniform(ahaleffortst),
            'avgHALSTEAD_ERROR_EST': row['avgHALSTEAD_ERROR_EST'] + np.random.uniform(ahalestst),
            'avgHALSTEAD_LENGTH': row['avgHALSTEAD_LENGTH'] + np.random.uniform(ahallenst),
            'avgHALSTEAD_LEVEL': row['avgHALSTEAD_LEVEL'] + np.random.uniform(ahallevst),
            'avgHALSTEAD_PROG_TIME': row['avgHALSTEAD_PROG_TIME'] + np.random.uniform(ahalprogst),
            'avgHALSTEAD_VOLUME': row['avgHALSTEAD_VOLUME'] + np.random.uniform(ahalvolst),
            'avgNUM_OPERANDS': row['avgNUM_OPERANDS'] + np.random.uniform(ahalnumost),
            'avgNUM_OPERATORS': row['avgNUM_OPERATORS'] + np.random.uniform(ahalnumopst),
            'avgNUM_UNIQUE_OPERANDS': row['avgNUM_UNIQUE_OPERANDS'] + np.random.uniform(anumoperanst),
            'avgNUM_UNIQUE_OPERATORS': row['avgNUM_UNIQUE_OPERATORS'] + np.random.uniform(anumuniquest),
            'avgLOC_TOTAL': row['avgLOC_TOTAL'] + np.random.uniform(aloctst),
            'sumLOC_BLANK': row['sumLOC_BLANK'] + np.random.uniform(alocbst),
            'sumBRANCH_COUNT': row['sumBRANCH_COUNT'] + np.random.uniform(sumbst),
            'sumLOC_CODE_AND_COMMENT': row['sumLOC_CODE_AND_COMMENT'] + np.random.uniform(sunlst),
            'sumLOC_COMMENTS': row['sumLOC_COMMENTS'] + np.random.uniform(sumlcommst),
            'sumCYCLOMATIC_COMPLEXITY': row['sumCYCLOMATIC_COMPLEXITY'] + np.random.uniform(sumcyclost),
            'sumDESIGN_COMPLEXITY': row['sumDESIGN_COMPLEXITY'] + np.random.uniform(sumdesist),
            'sumESSENTIAL_COMPLEXITY': row['sumESSENTIAL_COMPLEXITY'] + np.random.uniform(sumessst),
            'sumLOC_EXECUTABLE': row['sumLOC_EXECUTABLE'] + np.random.uniform(sumexst),
            'sumHALSTEAD_CONTENT': row['sumHALSTEAD_CONTENT'] + np.random.uniform(sumconst),
            'sumHALSTEAD_DIFFICULTY': row['sumHALSTEAD_DIFFICULTY'] + np.random.uniform(sumdiffest),
            'sumHALSTEAD_EFFORT': row['sumHALSTEAD_EFFORT'] + np.random.uniform(sumeffst),
            'sumHALSTEAD_ERROR_EST': row['sumHALSTEAD_ERROR_EST'] + np.random.uniform(sumerrost),
            'sumHALSTEAD_LENGTH': row['sumHALSTEAD_LENGTH'] + np.random.uniform(sumlengst),
            'sumHALSTEAD_LEVEL': row['sumHALSTEAD_LEVEL'] + np.random.uniform(sumlevst),
            'sumHALSTEAD_PROG_TIME': row['sumHALSTEAD_PROG_TIME'] + np.random.uniform(sumprogst),
            'sumHALSTEAD_VOLUME': row['sumHALSTEAD_VOLUME'] + np.random.uniform(sumvolust),
            'sumNUM_OPERANDS': row['sumNUM_OPERANDS'] + np.random.uniform(sumoperst),
            'sumNUM_OPERATORS': row['sumNUM_OPERATORS'] + np.random.uniform(sumoperandst),
            'sumNUM_UNIQUE_OPERANDS': row['sumNUM_UNIQUE_OPERANDS'] + np.random.uniform(sumuopst),
            'sumNUM_UNIQUE_OPERATORS': row['sumNUM_UNIQUE_OPERATORS'] + np.random.uniform(sumuoprst),
            'sumLOC_TOTAL': row['sumLOC_TOTAL'] + np.random.uniform(sumtolst),
            'DEFECTT': row['DEFECTT'] + np.random.uniform(deftst),
            'DEFECT5': row['DEFECT5'] + np.random.uniform(defest),
            'NUMDEFECTS': row['NUMDEFECTS'] + np.random.uniform(ndefst)
        }
        A.append(temp)
print(len(A), "dataset created")
df = pd.DataFrame(A)
df.to_csv("A1.csv")
The output is as follows:
726 dataset created
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in ()
      1 print(len(A), "dataset created")
----> 2 df = pd.DataFrame(A)
      3 df.to_csv("A1.csv")

5 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/dtypes/cast.py in maybe_convert_platform(values)
    122         arr = values
    123
--> 124     if arr.dtype == object:
    125         arr = cast(np.ndarray, arr)
    126         arr = lib.maybe_convert_objects(arr)

AttributeError: 'dict' object has no attribute 'dtype'
Any help is appreciated. Thank you!
Here:
A = []
If you use curly braces {} rather than square brackets [], the problem
AttributeError: 'dict' object has no attribute 'dtype'
will be solved. Use this:
A = {}
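Separately, note that A mixes a whole DataFrame (A.append(df)) with plain dicts, which is exactly the kind of heterogeneous list that can trip up pd.DataFrame(A). A sketch of a more uniform pattern, assuming every column is numeric and collapsing the per-column scale variables (percst, accest, ...) into one placeholder low bound:
import numpy as np
import pandas as pd

frames = [df]
for _ in range(5):
    # np.random.uniform(low) in the original draws from [low, 1.0);
    # low=0.0 here is a placeholder for the per-column scale variables.
    noise = np.random.uniform(low=0.0, high=1.0, size=df.shape)
    frames.append(df + noise)

augmented = pd.concat(frames, ignore_index=True)  # every element is a DataFrame
augmented.to_csv("A1.csv", index=False)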

How to iterate through two pandas columns and create a new column

I am trying to create a new column by concatenating two columns with certain conditions.
master['work_action'] = np.nan
for a, b in zip(master['repair_location'], master['work_service']):
    if a == 'Field':
        master['work_action'].append(a + " " + b)
    elif a == 'Depot':
        master['work_action'].append(a + " " + b)
    else:
        master['work_action'].append(a)
TypeError: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid
The problem is with master['work_action'].append(a + " " + b)
If I change my code to this:
test = []
for a, b in zip(master['repair_location'], master['work_service']):
    if a == 'Field':
        test.append(a + " " + b)
    elif a == 'Depot':
        test.append(a + " " + b)
    else:
        test.append(a)
I get exactly what I want in a list. But I want it in a pandas column. How do I create a new pandas column with the conditions above?
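For what it's worth, since the test list ends up with exactly one entry per row of master, assigning it directly is one option (a minimal sketch, assuming the lengths line up):
master['work_action'] = test  # positional assignment: one list entry per row
The answers below give more idiomatic, vectorized alternatives.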
If performance is important, I would use numpy's select:
master = pd.DataFrame(
    {
        'repair_location': ['Field', 'Depot', 'Other'],
        'work_service': [1, 2, 3]
    }
)
master['work_action'] = np.select(
    condlist=[
        master['repair_location'] == 'Field',
        master['repair_location'] == 'Depot'
    ],
    choicelist=[
        master['repair_location'] + ' ' + master['work_service'].astype(str),
        master['repair_location'] + ' ' + master['work_service'].astype(str)
    ],
    default=master['repair_location']
)
Which results in:
  repair_location  work_service work_action
0           Field             1     Field 1
1           Depot             2     Depot 2
2           Other             3       Other
The append method is for inserting values at the end; here you are trying to concatenate two string values. Use the apply method instead:
def fun(a, b):
    if a == 'Field':
        return a + " " + b
    elif a == 'Depot':
        return a + " " + b
    else:
        return a

master['work_action'] = master.apply(lambda x: fun(x['repair_location'], x['work_service']), axis=1)

Alternative to orca for creating a PDF file with images and figures

I am using this script to create a PDF file and append images with statistics:
import MySQLdb
from plotly import graph_objs as go
import numpy as np
import os
from plotly.subplots import make_subplots
from PyPDF2 import PdfFileMerger
from datetime import datetime, timedelta
import smtplib
from email.message import EmailMessage
import imghdr

# Database connect
db = MySQLdb.connect(host="localhost",
                     user="root",
                     passwd="****",
                     db="ofasorgu_10_168_1_71")

today = datetime.today().strftime('%Y-%m-%d')
one_week = (datetime.today() - timedelta(days=7)).strftime('%Y-%m-%d')
two_week = (datetime.today() - timedelta(days=14)).strftime('%Y-%m-%d')
three_week = (datetime.today() - timedelta(days=21)).strftime('%Y-%m-%d')
four_week = (datetime.today() - timedelta(days=28)).strftime('%Y-%m-%d')
# Functions
def load_post_views(table, today, one_week, two_week, three_week, four_week):
    product_views_dict = dict()
    cursor = db.cursor()
    cursor.execute(
        "SELECT client_id, product_id, referrer, `date`" +
        " FROM " + table +
        " WHERE `date`>='" + four_week + "'")
    for x in range(0, cursor.rowcount):
        row = cursor.fetchone()
        network = ""
        period = ""
        client_id = row[0]
        product_id = row[1]
        referrer = row[2]
        date = str(row[3])
        email_cursor = db.cursor()
        email_cursor.execute("SELECT address FROM c8ty_connections_email WHERE entry_id=" + str(client_id))
        email = email_cursor.fetchone()
        product_cursor = db.cursor()
        product_cursor.execute("SELECT post_title FROM c8ty_posts WHERE id=" + str(product_id))
        product_name = product_cursor.fetchone()
        # Add client ID key
        if client_id not in product_views_dict:
            product_views_dict[client_id] = dict()
        # Add product ID key to client ID parent key
        if product_id not in product_views_dict[client_id]:
            product_views_dict[client_id][product_id] = {
                today + " - " + one_week: {
                    "facebook": 0,
                    "twitter": 0,
                    "instagram": 0,
                    "linkedin": 0,
                    "pinterest": 0,
                    "website": 0,
                },
                one_week + " - " + two_week: {
                    "facebook": 0,
                    "twitter": 0,
                    "instagram": 0,
                    "linkedin": 0,
                    "pinterest": 0,
                    "website": 0,
                },
                two_week + " - " + three_week: {
                    "facebook": 0,
                    "twitter": 0,
                    "instagram": 0,
                    "linkedin": 0,
                    "pinterest": 0,
                    "website": 0,
                },
                three_week + " - " + four_week: {
                    "facebook": 0,
                    "twitter": 0,
                    "instagram": 0,
                    "linkedin": 0,
                    "pinterest": 0,
                    "website": 0,
                }
            }
        # Find referrer
        if "facebook" in referrer:
            network = "facebook"
        elif "twitter" in referrer:
            network = "twitter"
        elif "instagram" in referrer:
            network = "instagram"
        elif "linkedin" in referrer:
            network = "linkedin"
        elif "pinterest" in referrer:
            network = "pinterest"
        else:
            network = "website"
        # Check view period
        if date <= today and date > one_week:
            period = today + " - " + one_week
        if date <= one_week and date > two_week:
            period = one_week + " - " + two_week
        if date <= two_week and date > three_week:
            period = two_week + " - " + three_week
        if date <= three_week and date > four_week:
            period = three_week + " - " + four_week
        product_views_dict[client_id][product_id][period][network] += 1
        product_views_dict[client_id]["email"] = email[0]
        product_views_dict[client_id][product_id]["product"] = product_name[0]
    return product_views_dict
def draw_statistic(data_dict):
    for clinetID, product_info in data_dict.items():
        client_email = product_info["email"]
        for productID, product_data in product_info.items():
            if type(product_data) is dict:
                product_name = product_data['product']
                table_data = [
                    [
                        today + " - " + one_week,
                        one_week + " - " + two_week,
                        two_week + " - " + three_week,
                        three_week + " - " + four_week,
                        today + " - " + four_week
                    ]
                ]
                total_one = []
                total_two = []
                total_three = []
                total_four = []
                overall_traces = []
                networks_and_positions = [
                    {"network": "website", "row": 2, "col": 1},
                    {"network": "linkedin", "row": 2, "col": 2},
                    {"network": "facebook", "row": 3, "col": 1},
                    {"network": "twitter", "row": 3, "col": 2},
                    {"network": "instagram", "row": 4, "col": 1},
                    {"network": "pinterest", "row": 4, "col": 2}
                ]
                fig = make_subplots(rows=5, cols=2)
                merge_fig = make_subplots(rows=1, cols=1)
                count = 2
                for dictionary in networks_and_positions:
                    network = dictionary['network']
                    row = dictionary['row']
                    col = dictionary['col']
                    total_one.append(product_data[today + " - " + one_week][network])
                    total_two.append(product_data[one_week + " - " + two_week][network])
                    total_three.append(product_data[two_week + " - " + three_week][network])
                    total_four.append(product_data[three_week + " - " + four_week][network])
                    table_data.append([
                        product_data[today + " - " + one_week][network],
                        product_data[one_week + " - " + two_week][network],
                        product_data[two_week + " - " + three_week][network],
                        product_data[three_week + " - " + four_week][network],
                        sum([
                            product_data[today + " - " + one_week][network],
                            product_data[one_week + " - " + two_week][network],
                            product_data[two_week + " - " + three_week][network],
                            product_data[three_week + " - " + four_week][network]
                        ])
                    ])
                    xaxis = [
                        today + " - " + one_week,
                        one_week + " - " + two_week,
                        two_week + " - " + three_week,
                        three_week + " - " + four_week,
                    ]
                    yaxis = [
                        product_data[today + " - " + one_week][network],
                        product_data[one_week + " - " + two_week][network],
                        product_data[two_week + " - " + three_week][network],
                        product_data[three_week + " - " + four_week][network]
                    ]
                    chart_name = network.capitalize() + " statistic"
                    # Create bar chart
                    if count == 2:
                        social_fig = go.Bar(
                            x=xaxis,
                            y=yaxis,
                            name=chart_name
                        )
                    else:
                        social_fig = go.Bar(
                            x=xaxis,
                            y=yaxis,
                            name=chart_name,
                            yaxis="y" + str(count)
                        )
                    # Add chart to fig
                    fig.add_trace(
                        social_fig,
                        row=row,
                        col=col
                    )
                    count += 1
                    trace_name = network.capitalize() + " views"
                    overall_traces.append(
                        go.Scatter(
                            x=xaxis,
                            y=yaxis,
                            name=trace_name
                        )
                    )
                total_column_values_array = [sum(total_one), sum(total_two), sum(total_three), sum(total_four)]
                total_column_values_array.append(
                    sum(total_one) + sum(total_two) + sum(total_three) + sum(total_four)
                )
                table_data.append(total_column_values_array)
                # Create product table
                fig.add_trace(
                    go.Table(
                        header=dict(values=["Period", "Website", "Facebook", "Twitter", "Instagram", "LinkedIn", "Pinterest", "Total"]),
                        cells=dict(values=table_data)
                    )
                )
                merge_fig.add_traces(overall_traces)
                # Create folder if it doesn't exist
                if not os.path.exists("files"):
                    os.mkdir("files")
                statistic_file = "files/statistic_" + product_name + ".pdf"
                overall_file = "files/overall_" + product_name + ".pdf"
                out_file = "files/" + product_name + "_statistic.pdf"
                # Create charts file
                fig.update_layout(height=1500, width=1000, title_text="<b>Greetings</b><br />This is statistic from <a href='https://www.cfasuk.co.uk/'>CFAS UK</a> for your product <b>" + product_name + "</b>")
                fig.update_layout(
                    yaxis3=dict(title="Website views", titlefont=dict(color="#636efa")),
                    yaxis4=dict(title="LinkedIn views", titlefont=dict(color="#ef553b")),
                    yaxis5=dict(title="Facebook views", titlefont=dict(color="#00cc96")),
                    yaxis6=dict(title="Twitter views", titlefont=dict(color="#b780f9")),
                    yaxis7=dict(title="Instagram views", titlefont=dict(color="#ffa15a")),
                    yaxis8=dict(title="Pinterest views", titlefont=dict(color="#19d3f3")),
                )
                fig.write_image(statistic_file)
                # Create overall file
                merge_fig.update_layout(height=700, width=1000, title_text="Overall <b>" + product_name + "</b> statistic")
                merge_fig.write_image(overall_file)
                merge = PdfFileMerger(strict=False)
                # Append charts file to merger
                merge.append(statistic_file)
                # Append overall file to merger
                merge.append(overall_file)
                # Create end statistic file with both charts and overall
                merge.write(out_file)
                merge.close()
                # Delete statistic file
                os.remove(statistic_file)
                # Delete overall file
                os.remove(overall_file)
                # Send email with file
                send_mail(
                    "tomaivanovtomov#gmail.com",
                    "tomaivanovtomov#gmail.com",
                    "CFAS UK, " + product_name + " statistic",
                    "This is automated email. Please, do not reply!<br/>" +
                    "If you find some problem with the statistic or the product is not yours, please contact CFAS UK team.<br/>" +
                    "Best regards!",
                    out_file
                )

def send_mail(send_from, send_to, subject, text, file=None):
    with smtplib.SMTP_SSL('smtp.gmail.com', 465) as smtp:
        smtp.login("tomaivanovtomov#gmail.com", "zmjquvphuvigqdai")
        msg = EmailMessage()
        msg['Subject'] = subject
        msg['From'] = send_from
        msg['To'] = send_to
        msg.set_content(text)
        with open(file, "rb") as f:
            file_data = f.read()
            file_name = f.name
        msg.add_attachment(file_data, maintype='application/pdf', subtype='pdf', filename=file_name)
        smtp.send_message(msg)

# Init
product_views_dict = load_post_views("an_product_view", today, one_week, two_week, three_week, four_week)
brochure_views_dict = load_post_views("an_brochure_view", today, one_week, two_week, three_week, four_week)
draw_statistic(product_views_dict)
draw_statistic(brochure_views_dict)
db.close()
exit()
It works fine when I test it on my local server, but I need to upload it to a shared account on a CentOS server at my hosting provider, where I can't install Anaconda, which I need in order to install orca for static images. Is there an alternative way to create the images and then add them to the PDF file? Thank you in advance!
Yes, there is an easy alternative that avoids any backend tool dependency in your code: you can use plotly's matplotlib integration for this.
Import these modules:
from plotly.offline import init_notebook_mode, plot_mpl
import matplotlib.pyplot as plt
and use them as below:
init_notebook_mode()
fig = plt.figure()
# configure your plot here, then use the calls below to save it as an image
plot_mpl(fig)
plot_mpl(fig, image='png')
Or, if you want to stick with plotly only, you can use plotly's offline module to generate static images on the server and build the PDF from them. Import it with from plotly.offline import plot, then use the plot function as below:
fig = go.Figure(data=data, layout=layout)
plot(fig, filename='your-file-name', image='png')
If you are using Plotly you can install kaleido, and Plotly will use it instead of orca to generate the static images. This is good if you can only install your packages through pip.
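For instance, a minimal sketch (assuming pip install kaleido has been run; the toy figure stands in for the script's own fig):
import plotly.graph_objects as go

fig = go.Figure(go.Bar(x=["a", "b"], y=[1, 3]))  # stand-in figure
# With kaleido installed, write_image works without orca or Anaconda
fig.write_image("chart.png")
fig.write_image("chart.pdf")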

Writing to a Google candlestick chart from Python

I'm trying to write from Python to a Google candlestick chart, and I'm getting a TypeError that doesn't make sense. The Google chart takes a date column and 4 number columns, but I get an error when trying to output them. It's worth noting that I tried to convert them to strings, and I got nothing.
output.write("""<html><head>
<script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
<script type="text/javascript">
google.charts.load('current', {'packages':['corechart']});
google.charts.setOnLoadCallback(drawChart);
function drawChart() {
var data = new google.visualization.DataTable();
data.addColumn('number', 'date');
data.addColumn('number','low')};
data.addColumn('number','open');
data.addColumn('number', 'close');
data.addColumn('number', 'high');
data.addRows([""")
if backtest:
    poloData = self.conn.api_query("returnChartData",
                                   {"currencyPair": self.pair, "start": self.startTime,
                                    "end": self.endTime, "period": self.period})
    for datum in poloData:
        newTime += period
        mycandle = [newTime, datum['open'], datum['close'], datum['high'], datum['low']]
        output.write("['" + datum['date'] + "'," + datum['low'] + "," + datum['open'] + "," + datum['close'] + "," + datum['high'])
        output.write("],\n")
        if datum['open'] and datum['close'] and datum['high'] and datum['low']:
            self.data.append(
                BotCandlestick(self.period, datum['open'], datum['close'], datum['high'], datum['low'],
                               datum['weightedAverage']))
output.write("""]);
var options = {legend:'none};
var chart = new google.visualization.CandlestickChart(document.getElementById('chart_div'));
chart.draw(data, options);}
</script></head><body><div id="chart_div" style="width: 100%; height: 100%"></div></body></html>""")
My TypeError:
output.write("['" + datum['date'] + "'," + datum['low'] + "," + datum['open'] + "," + datum['close'] + "," + datum['high'])
TypeError: must be str, not int
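The traceback points at the + concatenation: str + int raises exactly this TypeError, so every numeric operand needs an explicit conversion. A sketch of that change (assuming the Poloniex fields, including datum['date'], are numbers, as the error suggests; the asker reports one conversion attempt failed, which can happen if any operand is missed):
# str.format converts every field, including a date that may be an int timestamp
output.write("['{}',{},{},{},{}".format(
    datum['date'], datum['low'], datum['open'], datum['close'], datum['high']))
output.write("],\n")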

Querying a shading node in Maya Python

I am currently having a problem where I want to query the 'inputX' of a multiplyDivide node in Maya and put the queried number into the 'inputX' of another multiplyDivide node.
The script builds a stretchy IK setup for an arm: using a distanceBetween the shoulder and the wrist (at a certain point, which is what I want to query), the bones then stretch. So obviously, I don't want to connect the two attributes together.
def stretchyIK(firstJointStore, lastJointStore, side, limb):
    GlobalMoveRig = cmds.rename('GlobalMove_Grp_01')
    locFirstJoint = cmds.spaceLocator(n='Loc_' + firstJointStore + '_0#')
    locLastJoint = cmds.spaceLocator(n='Loc_' + lastJointStore + '_0#')
    pointLoc1 = cmds.pointConstraint(side + '_Fk_' + firstJointStore + suffix, locFirstJoint)
    pointLoc2 = cmds.pointConstraint(side + '_Fk_' + lastJointStore + suffix, locLastJoint)
    cmds.delete(pointLoc1, pointLoc2)
    cmds.pointConstraint(side + '_FK_' + firstJointStore + suffix, locFirstJoint)
    cmds.pointConstraint(ikCtr, locLastJoint)
    cmds.parent(locFirstJoint, locLastJoint, 'Do_Not_Touch')
    # Creating Nodes for Stretchy IK
    IkStretch_DisNode = cmds.shadingNode('distanceBetween', asUtility=True, n='DistBet_IkStretch_' + side + limb + '_#')
    cmds.connectAttr(locFirstJoint[0] + '.translate', IkStretch_DisNode + '.point1')
    cmds.connectAttr(locLastJoint[0] + '.translate', IkStretch_DisNode + '.point2')
    IkStretch_DivNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Div_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_DivNode + '.operation', 2)
    input = cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1.input1X')  ######## HELP NEEDED HERE
    cmds.setAttr(ikCtr + '.translateX', 2)
    IkStretch_MultNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Mult_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_MultNode + '.input1X', IkStretch_DivNode + '.input1.input1X')  # WAIT FOR PAUL
    cmds.connectAttr(GlobalMoveRig + '.scaleY', IkStretch_MultNode + '.input2X')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_DivNode + '.input2X')
    IkStretch_Cond_Equ = cmds.shadingNode('condition', asUtility=True, n='Cond_Equ_IkStretch_' + side + limb + '_#')
    IkStretch_Cond_GrtEqu = cmds.shadingNode('condition', asUtility=True, n='Cond_GrtEqu_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_Cond_GrtEqu + '.operation', 3)
    cmds.connectAttr(ikCtr + '.Enable', IkStretch_Cond_Equ + '.firstTerm')
    cmds.setAttr(IkStretch_Cond_Equ + '.secondTerm', 1)
    cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_Cond_GrtEqu + '.firstTerm')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_Cond_GrtEqu + '.secondTerm')
    cmds.connectAttr(IkStretch_DivNode + '.outputX', IkStretch_Cond_GrtEqu + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', IkStretch_Cond_Equ + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + secondJointStore + suffix + '.scaleX')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + firstJointStore + suffix + '.scaleX')
Yes, your error makes perfect sense... The attribute you're looking for is actually just '.input1X' rather than '.input1.input1X'.
I know that isn't very clear, but you'll know for the future. An easy way of figuring out stuff like this, by the way, is to make the connections manually in Maya and watch the MEL output in the script editor. You get the real deal every time, and translating that stuff to Python afterwards is quick.
So:
cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1X')
By the way, I'm not sure why you were assigning the result to input. I admit I'm not sure what connectAttr would return there, but I don't see how it could be of any use!
Additionally, to answer your direct question: you can use cmds.getAttr to query the value.
cmds.setAttr(
    IkStretch_MultNode + '.input1X',
    cmds.getAttr(IkStretch_DivNode + '.input1X')
)
In my case, the variable being assigned as the new attribute value was not being evaluated properly: setAttr interpreted the variable's value as text, even though it was input as a float. So I simply cast the variable to float within the command. In my case, I did the following:
cmds.setAttr(Node + '.input1X', float(variable))
