I'm trying to write output from Python to a Google candlestick chart, and I'm getting a TypeError that doesn't make sense to me. The Google chart docs say it takes a date column and four number columns, but I get an error when trying to write the output. It's worth noting that I tried converting the values to strings, and got nothing.
output.write("""<html><head><script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script><script type="text/javascript">google.charts.load('current', {'packages':['corechart']});google.charts.setOnLoadCallback(drawChart);function drawChart() {var data = new google.visualization.DataTable();data.addColumn('number', 'date');data.addColumn('number','low')};data.addColumn('number','open');data.addColumn('number', 'close');data.addColumn('number', 'high');data.addRows([""")
if backtest:
    poloData = self.conn.api_query("returnChartData", {"currencyPair": self.pair, "start": self.startTime, "end": self.endTime, "period": self.period})
    for datum in poloData:
        newTime += period
        mycandle = [newTime, datum['open'], datum['close'], datum['high'], datum['low']]
        output.write("['" + datum['date'] + "'," + datum['low'] + "," + datum['open'] + "," + datum['close'] + "," + datum['high'])
        output.write("],\n")
        if (datum['open'] and datum['close'] and datum['high'] and datum['low']):
            self.data.append(
                BotCandlestick(self.period, datum['open'], datum['close'], datum['high'], datum['low'],
                               datum['weightedAverage']))
output.write("""]);var options = {legend:'none};var chart = new google.visualization.CandlestickChart(document.getElementById('chart_div'));chart.draw(data, options);}</script></head><body><div id="chart_div" style="width: 100%; height: 100%"></div></body></html>""")
My TypeError:
output.write("['" + datum['date'] + "'," + datum['low'] + "," + datum['open'] + "," + datum['close'] + "," + datum['high'])
TypeError: must be str, not int
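No answer was posted for this one, but a hedged sketch of the usual fix: Python's + won't concatenate str with int or float, so format the numeric values into the row string instead of concatenating them. The sample values and the in-memory file handle below are hypothetical stand-ins for the Poloniex API data and the real output file:

```python
import io

# Hypothetical sample row; in the original, `datum` comes from the Poloniex API.
datum = {'date': 1405699200, 'low': 0.01, 'open': 0.02, 'close': 0.03, 'high': 0.04}

output = io.StringIO()  # stand-in for the real file handle
# str.format converts each value to text, avoiding "must be str, not int"
output.write("[{0},{1},{2},{3},{4}],\n".format(
    datum['date'], datum['low'], datum['open'], datum['close'], datum['high']))
print(output.getvalue())
```

Formatting this way also drops the quotes around datum['date'], which would otherwise make it a string in JavaScript while the column is declared 'number'.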
I have tried data augmentation on a numeric dataset. The augmentation succeeds, but while exporting the augmented dataset from Google Colab I am facing AttributeError: 'dict' object has no attribute 'dtype'.
The relevant part of the code and the error message are given below:
A = []
A.append(df)
for _ in range(5):
    for _, row in df.iterrows():
        temp = {
            'PERCENT_PUB_DATA': row['PERCENT_PUB_DATA'] + np.random.uniform(percst),
            'ACCESS_TO_PUB_DATA': row['ACCESS_TO_PUB_DATA'] + np.random.uniform(accest),
            'COUPLING_BETWEEN_OBJECTS': row['COUPLING_BETWEEN_OBJECTS'] + np.random.uniform(coupst),
            'DEPTH': row['DEPTH'] + np.random.uniform(deptst),
            'LACK_OF_COHESION_OF_METHODS': row['LACK_OF_COHESION_OF_METHODS'] + np.random.uniform(lackst),
            'NUM_OF_CHILDREN': row['NUM_OF_CHILDREN'] + np.random.uniform(numost),
            'DEP_ON_CHILD': row['DEP_ON_CHILD'] + np.random.uniform(depost),
            'FAN_IN': row['FAN_IN'] + np.random.uniform(fanist),
            'RESPONSE_FOR_CLASS': row['RESPONSE_FOR_CLASS'] + np.random.uniform(respst),
            'WEIGHTED_METHODS_PER_CLASS': row['WEIGHTED_METHODS_PER_CLASS'] + np.random.uniform(weigst),
            'minLOC_BLANK': row['minLOC_BLANK'] + np.random.uniform(blankst),
            'minBRANCH_COUNT': row['minBRANCH_COUNT'] + np.random.uniform(branchst),
            'minLOC_CODE_AND_COMMENT': row['minLOC_CODE_AND_COMMENT'] + np.random.uniform(codest),
            'minLOC_COMMENTS': row['minLOC_COMMENTS'] + np.random.uniform(comentsst),
            'minCYCLOMATIC_COMPLEXITY': row['minCYCLOMATIC_COMPLEXITY'] + np.random.uniform(cyclost),
            'minDESIGN_COMPLEXITY': row['minDESIGN_COMPLEXITY'] + np.random.uniform(desist),
            'minESSENTIAL_COMPLEXITY': row['minESSENTIAL_COMPLEXITY'] + np.random.uniform(essest),
            'minLOC_EXECUTABLE': row['minLOC_EXECUTABLE'] + np.random.uniform(execst),
            'minHALSTEAD_CONTENT': row['minHALSTEAD_CONTENT'] + np.random.uniform(contst),
            'minHALSTEAD_DIFFICULTY': row['minHALSTEAD_DIFFICULTY'] + np.random.uniform(diffest),
            'minHALSTEAD_EFFORT': row['minHALSTEAD_EFFORT'] + np.random.uniform(effortsst),
            'minHALSTEAD_ERROR_EST': row['minHALSTEAD_ERROR_EST'] + np.random.uniform(errost),
            'minHALSTEAD_LENGTH': row['minHALSTEAD_LENGTH'] + np.random.uniform(lengtst),
            'minHALSTEAD_LEVEL': row['minHALSTEAD_LEVEL'] + np.random.uniform(levst),
            'minHALSTEAD_PROG_TIME': row['minHALSTEAD_PROG_TIME'] + np.random.uniform(progst),
            'minHALSTEAD_VOLUME': row['minHALSTEAD_VOLUME'] + np.random.uniform(volust),
            'minNUM_OPERANDS': row['minNUM_OPERANDS'] + np.random.uniform(operanst),
            'minNUM_OPERATORS': row['minNUM_OPERATORS'] + np.random.uniform(operatst),
            'minNUM_UNIQUE_OPERANDS': row['minNUM_UNIQUE_OPERANDS'] + np.random.uniform(uoperandst),
            'minNUM_UNIQUE_OPERATORS': row['minNUM_UNIQUE_OPERATORS'] + np.random.uniform(uoperatorst),
            'minLOC_TOTAL': row['minLOC_TOTAL'] + np.random.uniform(totst),
            'maxLOC_BLANK': row['maxLOC_BLANK'] + np.random.uniform(mblankst),
            'maxBRANCH_COUNT': row['maxBRANCH_COUNT'] + np.random.uniform(branchcountst),
            'maxLOC_CODE_AND_COMMENT': row['maxLOC_CODE_AND_COMMENT'] + np.random.uniform(mcodest),
            'maxLOC_COMMENTS': row['maxLOC_COMMENTS'] + np.random.uniform(mcommentst),
            'maxCYCLOMATIC_COMPLEXITY': row['maxCYCLOMATIC_COMPLEXITY'] + np.random.uniform(mcyclost),
            'maxDESIGN_COMPLEXITY': row['maxDESIGN_COMPLEXITY'] + np.random.uniform(mdesist),
            'maxESSENTIAL_COMPLEXITY': row['maxESSENTIAL_COMPLEXITY'] + np.random.uniform(messenst),
            'maxLOC_EXECUTABLE': row['maxLOC_EXECUTABLE'] + np.random.uniform(mlocst),
            'maxHALSTEAD_CONTENT': row['maxHALSTEAD_CONTENT'] + np.random.uniform(mhalconst),
            'maxHALSTEAD_DIFFICULTY': row['maxHALSTEAD_DIFFICULTY'] + np.random.uniform(mhaldiffst),
            'maxHALSTEAD_EFFORT': row['maxHALSTEAD_EFFORT'] + np.random.uniform(mhaleffst),
            'maxHALSTEAD_ERROR_EST': row['maxHALSTEAD_ERROR_EST'] + np.random.uniform(mhalerrst),
            'maxHALSTEAD_LENGTH': row['maxHALSTEAD_LENGTH'] + np.random.uniform(mhallenst),
            'maxHALSTEAD_LEVEL': row['maxHALSTEAD_LEVEL'] + np.random.uniform(mhallevst),
            'maxHALSTEAD_PROG_TIME': row['maxHALSTEAD_PROG_TIME'] + np.random.uniform(mhalpst),
            'maxHALSTEAD_VOLUME': row['maxHALSTEAD_VOLUME'] + np.random.uniform(mhalvst),
            'maxNUM_OPERANDS': row['maxNUM_OPERANDS'] + np.random.uniform(mnumopst),
            'maxNUM_OPERATORS': row['maxNUM_OPERATORS'] + np.random.uniform(mnopst),
            'maxNUM_UNIQUE_OPERANDS': row['maxNUM_UNIQUE_OPERANDS'] + np.random.uniform(muopst),
            'maxNUM_UNIQUE_OPERATORS': row['maxNUM_UNIQUE_OPERATORS'] + np.random.uniform(muoprst),
            'maxLOC_TOTAL': row['maxLOC_TOTAL'] + np.random.uniform(mloctst),
            'avgLOC_BLANK': row['avgLOC_BLANK'] + np.random.uniform(alocbst),
            'avgBRANCH_COUNT': row['avgBRANCH_COUNT'] + np.random.uniform(abcst),
            'avgLOC_CODE_AND_COMMENT': row['avgLOC_CODE_AND_COMMENT'] + np.random.uniform(aloccodest),
            'avgLOC_COMMENTS': row['avgLOC_COMMENTS'] + np.random.uniform(aloccommst),
            'avgCYCLOMATIC_COMPLEXITY': row['avgCYCLOMATIC_COMPLEXITY'] + np.random.uniform(acyclost),
            'avgDESIGN_COMPLEXITY': row['avgDESIGN_COMPLEXITY'] + np.random.uniform(adesigst),
            'avgESSENTIAL_COMPLEXITY': row['avgESSENTIAL_COMPLEXITY'] + np.random.uniform(aessest),
            'avgLOC_EXECUTABLE': row['avgLOC_EXECUTABLE'] + np.random.uniform(alocexest),
            'avgHALSTEAD_CONTENT': row['avgHALSTEAD_CONTENT'] + np.random.uniform(ahalconst),
            'avgHALSTEAD_DIFFICULTY': row['avgHALSTEAD_DIFFICULTY'] + np.random.uniform(ahaldifficst),
            'avgHALSTEAD_EFFORT': row['avgHALSTEAD_EFFORT'] + np.random.uniform(ahaleffortst),
            'avgHALSTEAD_ERROR_EST': row['avgHALSTEAD_ERROR_EST'] + np.random.uniform(ahalestst),
            'avgHALSTEAD_LENGTH': row['avgHALSTEAD_LENGTH'] + np.random.uniform(ahallenst),
            'avgHALSTEAD_LEVEL': row['avgHALSTEAD_LEVEL'] + np.random.uniform(ahallevst),
            'avgHALSTEAD_PROG_TIME': row['avgHALSTEAD_PROG_TIME'] + np.random.uniform(ahalprogst),
            'avgHALSTEAD_VOLUME': row['avgHALSTEAD_VOLUME'] + np.random.uniform(ahalvolst),
            'avgNUM_OPERANDS': row['avgNUM_OPERANDS'] + np.random.uniform(ahalnumost),
            'avgNUM_OPERATORS': row['avgNUM_OPERATORS'] + np.random.uniform(ahalnumopst),
            'avgNUM_UNIQUE_OPERANDS': row['avgNUM_UNIQUE_OPERANDS'] + np.random.uniform(anumoperanst),
            'avgNUM_UNIQUE_OPERATORS': row['avgNUM_UNIQUE_OPERATORS'] + np.random.uniform(anumuniquest),
            'avgLOC_TOTAL': row['avgLOC_TOTAL'] + np.random.uniform(aloctst),
            'sumLOC_BLANK': row['sumLOC_BLANK'] + np.random.uniform(alocbst),
            'sumBRANCH_COUNT': row['sumBRANCH_COUNT'] + np.random.uniform(sumbst),
            'sumLOC_CODE_AND_COMMENT': row['sumLOC_CODE_AND_COMMENT'] + np.random.uniform(sunlst),
            'sumLOC_COMMENTS': row['sumLOC_COMMENTS'] + np.random.uniform(sumlcommst),
            'sumCYCLOMATIC_COMPLEXITY': row['sumCYCLOMATIC_COMPLEXITY'] + np.random.uniform(sumcyclost),
            'sumDESIGN_COMPLEXITY': row['sumDESIGN_COMPLEXITY'] + np.random.uniform(sumdesist),
            'sumESSENTIAL_COMPLEXITY': row['sumESSENTIAL_COMPLEXITY'] + np.random.uniform(sumessst),
            'sumLOC_EXECUTABLE': row['sumLOC_EXECUTABLE'] + np.random.uniform(sumexst),
            'sumHALSTEAD_CONTENT': row['sumHALSTEAD_CONTENT'] + np.random.uniform(sumconst),
            'sumHALSTEAD_DIFFICULTY': row['sumHALSTEAD_DIFFICULTY'] + np.random.uniform(sumdiffest),
            'sumHALSTEAD_EFFORT': row['sumHALSTEAD_EFFORT'] + np.random.uniform(sumeffst),
            'sumHALSTEAD_ERROR_EST': row['sumHALSTEAD_ERROR_EST'] + np.random.uniform(sumerrost),
            'sumHALSTEAD_LENGTH': row['sumHALSTEAD_LENGTH'] + np.random.uniform(sumlengst),
            'sumHALSTEAD_LEVEL': row['sumHALSTEAD_LEVEL'] + np.random.uniform(sumlevst),
            'sumHALSTEAD_PROG_TIME': row['sumHALSTEAD_PROG_TIME'] + np.random.uniform(sumprogst),
            'sumHALSTEAD_VOLUME': row['sumHALSTEAD_VOLUME'] + np.random.uniform(sumvolust),
            'sumNUM_OPERANDS': row['sumNUM_OPERANDS'] + np.random.uniform(sumoperst),
            'sumNUM_OPERATORS': row['sumNUM_OPERATORS'] + np.random.uniform(sumoperandst),
            'sumNUM_UNIQUE_OPERANDS': row['sumNUM_UNIQUE_OPERANDS'] + np.random.uniform(sumuopst),
            'sumNUM_UNIQUE_OPERATORS': row['sumNUM_UNIQUE_OPERATORS'] + np.random.uniform(sumuoprst),
            'sumLOC_TOTAL': row['sumLOC_TOTAL'] + np.random.uniform(sumtolst),
            'DEFECTT': row['DEFECTT'] + np.random.uniform(deftst),
            'DEFECT5': row['DEFECT5'] + np.random.uniform(defest),
            'NUMDEFECTS': row['NUMDEFECTS'] + np.random.uniform(ndefst)
        }
        A.append(temp)

print(len(A), "dataset created")
df = pd.DataFrame(A)
df.to_csv("A1.csv")
The output is as follows
726 dataset created
AttributeError                        Traceback (most recent call last)
in ()
1 print(len(A), "dataset created")
----> 2 df=pd. DataFrame(A)
3 df.to_csv("A1.csv")
/usr/local/lib/python3.7/dist-packages/pandas/core/dtypes/cast.py in
maybe_convert_platform(values)
122 arr = values
123
--> 124 if arr.dtype == object:
125 arr = cast(np.ndarray, arr)
126 arr = lib.maybe_convert_objects(arr)
AttributeError: 'dict' object has no attribute 'dtype'
Any help is appreciated
Thank You!
A=[]
If you use curly braces {} rather than square brackets [] here, then the problem
AttributeError: 'dict' object has no attribute 'dtype'
will be solved.
Use this:
A={}
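A hedged alternative reading (an editorial addition, not part of the posted answer): the traceback can also be triggered because A mixes the original DataFrame (added by A.append(df)) with plain dicts, and pd.DataFrame chokes on that mixed list. Keeping the list dict-only and concatenating at the end sidesteps the error. A minimal sketch, with a made-up two-column frame standing in for the real dataset and arbitrary noise ranges:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real dataset
df = pd.DataFrame({'DEPTH': [1.0, 2.0], 'FAN_IN': [3.0, 4.0]})

rows = []  # dicts only; do NOT append df itself into this list
for _ in range(5):
    for _, row in df.iterrows():
        rows.append({
            'DEPTH': row['DEPTH'] + np.random.uniform(0, 0.1),
            'FAN_IN': row['FAN_IN'] + np.random.uniform(0, 0.1),
        })

# Concatenate the original frame with the augmented rows at the end
augmented = pd.concat([df, pd.DataFrame(rows)], ignore_index=True)
augmented.to_csv("A1.csv", index=False)
print(augmented.shape)  # (12, 2): 2 original rows + 5 * 2 augmented rows
```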
I have a problem with a report I am reading with openpyxl. Sometimes cell formats in the file I receive are not correct, and instead of "10.03.2020" (cell format Date) I get "43900,88455" (cell format General).
I tried Google, the openpyxl documentation and Stack Overflow, but that did not bring me any closer to a solution. Would you be able to help and advise how to switch the cell format to Short Date, please?
The code below did not work; I tried many other ideas but am still at a dead end. I need correct dates for other operations within this script.
def sanitizeDates(self):
    for i in range(3, self.fileLength - 1):
        self.mainWs.cell(i, 4).number_format = 'dd/mm/yyyy'
        self.mainWs.cell(i, 16).number_format = 'dd/mm/yyyy'
Copied from a comment: So I have tried
print("Cell (4) type is: " + str(type(self.mainWs.cell(i, 4).value)) + " and current value is " + str(self.mainWs.cell(i, 4).value))
print("Cell (16) type is: " + str(type(self.mainWs.cell(i, 16).value)) + " and current value is " + str(self.mainWs.cell(i, 16).value))
that results in:
Cell (4) type is: <class 'datetime.datetime'> and current value is 2020-03-10 22:41:41
Cell (16) type is: <class 'float'> and current value is 43900.9475
Excel displays it as "General" = "43900,88455"
So, I have figured this out after all. @stovfl, thanks for your hints; they gave me an idea of what to look for and, as a result, how to solve my issue.
The original thread on Stack Overflow is available here: https://stackoverflow.com/a/29387450/5210159
Below is the working code:
from datetime import datetime, timedelta

def sanitizeDates(self):
    for i in range(3, self.fileLength - 1):
        # Check which cells should be sanitized and proceed
        if isinstance(self.mainWs.cell(i, 4).value, float) and isinstance(self.mainWs.cell(i, 16).value, float):
            print(str(self.mainWs.cell(i, 4).value) + " / " + str(self.mainWs.cell(i, 16).value) + " -> " + str(self.convertDate(self.mainWs.cell(i, 4).value)) + " / " + str(self.convertDate(self.mainWs.cell(i, 16).value)))
            self.mainWs.cell(i, 4).value = self.convertDate(self.mainWs.cell(i, 4).value)
            self.mainWs.cell(i, 16).value = self.convertDate(self.mainWs.cell(i, 16).value)
        elif isinstance(self.mainWs.cell(i, 4).value, float):
            print(str(self.mainWs.cell(i, 4).value) + " -> " + str(self.convertDate(self.mainWs.cell(i, 4).value)))
            self.mainWs.cell(i, 4).value = self.convertDate(self.mainWs.cell(i, 4).value)
        elif isinstance(self.mainWs.cell(i, 16).value, float):
            print(str(self.mainWs.cell(i, 16).value) + " -> " + str(self.convertDate(self.mainWs.cell(i, 16).value)))
            self.mainWs.cell(i, 16).value = self.convertDate(self.mainWs.cell(i, 16).value)

def convertDate(self, dateToConvert):
    # Thank you, StackOverflow <3
    # https://stackoverflow.com/questions/29387137/how-to-convert-a-given-ordinal-number-from-excel-to-a-date
    epochStart = datetime(1899, 12, 31)
    if dateToConvert >= 60:
        dateToConvert -= 1  # Excel leap year bug, 1900 is not a leap year
    return epochStart + timedelta(days=dateToConvert)
After execution I get following results:
43899.89134259259 -> 2020-03-09 21:23:32
43900.9475 -> 2020-03-10 22:44:24
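As an aside (an editorial addition, not part of the original answer): openpyxl ships its own converter for Excel serial dates, openpyxl.utils.datetime.from_excel, which also accounts for the 1900 leap-year quirk, so the hand-rolled epoch arithmetic could be delegated to the library:

```python
from openpyxl.utils.datetime import from_excel

# Converts an Excel serial number (1900 date system) to a datetime
converted = from_excel(43900.9475)
print(converted)
```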
My code:
# Create website bar chart
website_fig = go.Bar(
    x=[
        today + " - " + one_week,
        one_week + " - " + two_week,
        two_week + " - " + three_week,
        three_week + " - " + four_week,
    ],
    y=[
        product_data[today + " - " + one_week][network],
        product_data[one_week + " - " + two_week][network],
        product_data[two_week + " - " + three_week][network],
        product_data[three_week + " - " + four_week][network]
    ],
    name="Website statistic"
)
The values in y are 3, 0, 0, 0; I have printed them and checked them with type(value): it's class int. Yet the chart renders with a y-axis of -1, -0.5, 0, 0.5, 1. The x-axis is fine. When I replace the y values with hardcoded 3, 0, 0, 0, it works. I also tried casting them with int(value) and multiplying by one (int(value) * 1 and value * 1), with no change.
Thank you for any advice! The whole code is here
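No answer is included for this one. As a hedged debugging idea (an editorial addition, not a confirmed fix): serializing the y values with json.dumps shows exactly what Plotly will receive, since a value that merely prints like an int (e.g. a string) serializes with quotes. The product_data below is a made-up stand-in for the real lookup:

```python
import json

# Hypothetical stand-in: one value is secretly a string, though it prints as 0
product_data = {"w1": {"net": 3}, "w2": {"net": "0"}}
ys = [product_data[k]["net"] for k in ("w1", "w2")]

print(ys)              # [3, '0'] -- the repr already hints at the problem
print(json.dumps(ys))  # [3, "0"] -- a quoted value would confuse the y-axis
```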
I am trying to create a JSON string that I can send in a PUT request but build it dynamically. For example, once I am done, I'd like the string to look like this:
{
"request-id": 1045058,
"db-connections":
[
{
"db-name":"Sales",
"table-name":"customer"
}
]
}
I'd like to use constants for the keys, for example, for request-id, I'd like to use CONST_REQUEST_ID. So the keys will be,
CONST_REQUEST_ID = "request-id"
CONST_DB_CONNECTIONS = "db-connections"
CONST_DB_NAME = "db-name"
CONST_TABLE_NAME = "table-name"
The values for the keys will be taken from various variables. For example we can take value "Sales" from a var called dbname with the value "Sales".
I have tried json.load and I am getting an exception.
Help would be appreciated please as I am a bit new to Python.
Create a normal Python dictionary, then convert it to JSON.
raw_data = {
CONST_DB_NAME: getDbNameValue()
}
# ...
json_data = json.dumps(raw_data)
# Use json_data in your PUT request.
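Expanding that answer into a runnable sketch that produces the exact target document (the literal values below stand in for the asker's "various variables"):

```python
import json

CONST_REQUEST_ID = "request-id"
CONST_DB_CONNECTIONS = "db-connections"
CONST_DB_NAME = "db-name"
CONST_TABLE_NAME = "table-name"

# Stand-ins for values taken from various variables
request_id = 1045058
dbname = "Sales"
table_name = "customer"

payload = {
    CONST_REQUEST_ID: request_id,
    CONST_DB_CONNECTIONS: [
        {CONST_DB_NAME: dbname, CONST_TABLE_NAME: table_name},
    ],
}

json_data = json.dumps(payload)
print(json_data)
# {"request-id": 1045058, "db-connections": [{"db-name": "Sales", "table-name": "customer"}]}
```

Unlike manual concatenation, json.dumps also escapes quotes and backslashes in the values for free.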
You can concatenate the string using + with your variables.
CONST_REQUEST_ID = "request-id"
CONST_DB_CONNECTIONS = "db-connections"
CONST_DB_NAME = "db-name"
CONST_TABLE_NAME = "table-name"
request_id = "1045058"
db_name = "Sales"
table_name = 'customer'
json_string = '{' + \
'"' + CONST_REQUEST_ID + '": ' + request_id \
+ ',' + \
'"db-connections":' \
+ '[' \
+ '{' \
+ '"' + CONST_DB_NAME +'":"' + db_name + '",' \
+ '"' + CONST_TABLE_NAME + '":"' + table_name + '"' \
+ '}' \
+ ']' \
+ '}'
print(json_string)
And this is the result
python st_ans.py
{"request-id": 1045058,"db-connections":[{"db-name":"Sales","table-name":"customer"}]}
I am currently having a problem where I want to query the inputX of a multiplyDivide node in Maya and put the queried number into the inputX of another multiplyDivide node.
The script makes a stretchy IK setup for an arm, using a distanceBetween node between the shoulder and the wrist (at a certain point, which is what I want to query) so that the bones stretch. So obviously, I don't want to connect the two attributes together.
def stretchyIK(firstJointStore, lastJointStore, side, limb):
    GlobalMoveRig = cmds.rename('GlobalMove_Grp_01')
    locFirstJoint = cmds.spaceLocator(n='Loc_' + firstJointStore + '_0#')
    locLastJoint = cmds.spaceLocator(n='Loc_' + lastJointStore + '_0#')
    pointLoc1 = cmds.pointConstraint(side + '_Fk_' + firstJointStore + suffix, locFirstJoint)
    pointLoc2 = cmds.pointConstraint(side + '_Fk_' + lastJointStore + suffix, locLastJoint)
    cmds.delete(pointLoc1, pointLoc2)
    cmds.pointConstraint(side + '_FK_' + firstJointStore + suffix, locFirstJoint)
    cmds.pointConstraint(ikCtr, locLastJoint)
    cmds.parent(locFirstJoint, locLastJoint, 'Do_Not_Touch')
    # Creating Nodes for Stretchy IK
    IkStretch_DisNode = cmds.shadingNode('distanceBetween', asUtility=True, n='DistBet_IkStretch_' + side + limb + '_#')
    cmds.connectAttr(locFirstJoint[0] + '.translate', IkStretch_DisNode + '.point1')
    cmds.connectAttr(locLastJoint[0] + '.translate', IkStretch_DisNode + '.point2')
    IkStretch_DivNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Div_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_DivNode + '.operation', 2)
    input = cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1.input1X')  ######## HELP NEEDED HERE
    cmds.setAttr(ikCtr + '.translateX', 2)
    IkStretch_MultNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Mult_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_MultNode + '.input1X', IkStretch_DivNode + '.input1.input1X')  # WAIT FOR PAUL
    cmds.connectAttr(GlobalMoveRig + '.scaleY', IkStretch_MultNode + '.input2X')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_DivNode + '.input2X')
    IkStretch_Cond_Equ = cmds.shadingNode('condition', asUtility=True, n='Cond_Equ_IkStretch_' + side + limb + '_#')
    IkStretch_Cond_GrtEqu = cmds.shadingNode('condition', asUtility=True, n='Cond_GrtEqu_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_Cond_GrtEqu + '.operation', 3)
    cmds.connectAttr(ikCtr + '.Enable', IkStretch_Cond_Equ + '.firstTerm')
    cmds.setAttr(IkStretch_Cond_Equ + '.secondTerm', 1)
    cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_Cond_GrtEqu + '.firstTerm')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_Cond_GrtEqu + '.secondTerm')
    cmds.connectAttr(IkStretch_DivNode + '.outputX', IkStretch_Cond_GrtEqu + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', IkStretch_Cond_Equ + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + secondJointStore + suffix + '.scaleX')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + firstJointStore + suffix + '.scaleX')
Yes, your error makes perfect sense... The attribute you're looking for is actually just '.input1X' rather than '.input1.input1X'.
I know, that isn't very clear, but you'll know in the future. An easy way of figuring out stuff like this, by the way, is manually connecting stuff in Maya and seeing the MEL output in the script editor. You'll get the real deal every time, and translating that stuff to Python afterwards is quick.
So:
cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1X')
By the way, I'm not sure why you were assigning the result to input. I admit I'm not sure what that would return, but I don't see how it could be any useful!
Additionally, to answer your direct question: you can use getAttr to query the value.
cmds.setAttr(
    IkStretch_MultNode + '.input1X',
    cmds.getAttr(IkStretch_DivNode + '.input1X')
)
In my case, the variable being assigned as the new attribute value was not being evaluated properly: setAttr interpreted the variable's value as text, even though the value was input as a float.
So I simply cast the variable to float within the command. In my case, I did the following:
cmds.setAttr(Node + '.input1X', float(variable))