Python: Issues with the Infusionsoft delete method
I am new to Python and working with the Infusionsoft API, and I have hit a snag. I am writing a script that retrieves all of the contacts in our system and adds them to a pandas DataFrame if they contain a given string. From what I can tell, my code for retrieving the contacts works, and it will even narrow the result down to just the ID number of the contact I want to retrieve. The issue comes when I try to pass that data into my delete method.
When I first started looking into this, I found a GitHub repository (see here: https://github.com/GearPlug/infusionsoft-python) and planned to use the method delete_contact = Client.delete_contact('ID'), which takes the param 'ID' as a string. In my code, the IDs are read into an array as strings, and my program iterates over them and prints each one, like so:
1
2
3
What has me thrown off is that when I try to pass them into the method delete_contact = client.delete_contact('ID'), it comes back with:
File "C:\Users\Bryan\OneDrive\Desktop\Python_Scripts\site-packages\NEW_Infusion_Script.py", line 28, in <module>
    delete_contact(infusion_id)
File "C:\Users\Bryan\OneDrive\Desktop\Python_Scripts\site-packages\NEW_Infusion_Script.py", line 26, in delete_contact
    Client.delete_contact('id')
TypeError: Client.delete_contact() missing 1 required positional argument: 'id'
Here is my code with the obvious API keys removed:
import pandas as pd
import infusionsoft
from infusionsoft.client import Client
import xmlrpc.client
#Python has built-in support for xml-rpc. All you need to do is add the
#line above.
#Set up the API server variable
server = xmlrpc.client.ServerProxy("https://productname.infusionsoft.com:443/api/xmlrpc")
key = "#The encrypted API key"
test_rigor = []
var = server.DataService.findByField(key,"Contact",100,0,"Email","%testrigor-mail.com",["LastName","Id",'Email'] )
for result in var:
    server.DataService.update(key,"Contact",result["Id"],{"LastName":" "})
    test_rigor.append
##create a Data Frame from the info pull above
df = pd.DataFrame.from_dict(var)
print("Done")
print(var)
df
##Pull the data and put into a seperate array and feed that into the delete method
infusion_ids = []
for num in df['Id']:
    infusion_ids.append(num)
def delete_contact(x):
    Client.delete_contact('id')
for infusion_id in infusion_ids:
    infusion_id[0]
    delete_contact(infusion_id[0])
    infusion_id.pop(0)
##print(df)
Any suggestions or pointers to obvious missteps would be greatly appreciated. Thanks!
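Note: the traceback above is the error Python raises when an instance method is called on the class itself; the string 'id' gets bound to self, and the real id parameter is then reported as missing. A minimal sketch of the likely fix, assuming the client is constructed with valid credentials per the GearPlug README (the token below is a placeholder):

from infusionsoft.client import Client

client = Client('<access_token>')  # placeholder; construct the client per the library's README

for infusion_id in infusion_ids:
    client.delete_contact(str(infusion_id))  # pass the actual ID, not the literal string 'id'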
Related
How to update python dictionary based on time?
So the problem I am having in Python is that every time I make a connection to my Jira account, I can't seem to keep my previous Python dictionary values. The logic seems to fail in the for loop and when I assign empty dictionaries. Other than that, I did try to work around this issue by using a couple of conditions, but those methods did not work out. Here is what I mean.

Current code:

import os
import pandas as pd
from jira.client import JIRA
from datetime import datetime

jira_token = os.getenv(<jira_personal_token>)
jira_link = os.getenv(<jira_url>)
jira_server = {"server": jira_link}
auth_jira = JIRA(options=jira_server, token_auth=jira_token)

proj = auth_jira.search_issues("project=<my_project> and 'epic link'='<epic_link_of_interest>'")

plmpgm_dict = {}
for i in proj:
    formatted_date = datetime.strptime(i.fields.updated, '%Y-%m-%dT%H:%M:%S.%f%z').strftime("%Y-%m-%dT%H:%M:%S")
    inner_dict = {}
    inner_dict["summary"] = i.fields.summary
    inner_dict["description"] = i.fields.description
    inner_dict["last_retrieved"] = formatted_date
    plmpgm_dict[i.key] = inner_dict
    if i.key == "<jira issue>":
        print(plmpgm_dict)

Output I get:

{'<jira issue>': {'summary': 'summary_values', 'description': 'description values', 'last_retrieved': '2022-03-11T19:44:15'}}

Output I want/expected:

{'<jira issue>': {'summary': 'summary_values', 'description': 'description values', 'last_retrieved': '2022-03-11T18:44:15'}, {'summary': 'old summary_values', 'description': 'old description values', 'last_retrieved': '2022-03-11T18:50:15'}}

Now, I am wondering if there is a way to store my previous Python dictionary key-value pairs instead of having them replaced with the new key-value pairs every time I make a connection to Jira using Python?
Actually, your problem is about neither Jira nor Python. When you run a program, all variables start from scratch; that is just programming logic. You need to keep the previous values, so you have to save the data to some destination (file system, database, etc.).

The idea

So, we need to save the data. It is a Python dict, so we can serialize it as a JSON string. For now, let's say we save it in a JSON file on the file system. Then, when you run the program, you should read this JSON file back into the plmpgm_dict variable. You can use the following code for those operations:

import json

def dict_to_json_file(some_dict, file_name):
    fp = open(file_name, 'w+')
    fp.write(json.dumps(some_dict))
    fp.close()

def json_file_to_dict(file_name):
    fp = open(file_name, 'r+')
    some_dict = json.loads(fp.read())
    fp.close()
    return some_dict

The algorithm

Before the for loop, you need to read the file (if it exists) into plmpgm_dict with something like plmpgm_dict = json_file_to_dict('dump.json'). After the for loop, at the end of your code, you need to dump the dict to the JSON file with something like dict_to_json_file(plmpgm_dict, 'dump.json'). A sketch of the whole flow follows below.
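Putting it together, a minimal sketch of how the two helpers above might wrap the existing Jira loop (dump.json is just an assumed file name):

import os

DUMP_FILE = 'dump.json'  # assumed location for the dump file

# Load the previous run's data if a dump exists, otherwise start fresh
if os.path.exists(DUMP_FILE):
    plmpgm_dict = json_file_to_dict(DUMP_FILE)
else:
    plmpgm_dict = {}

# ... the existing Jira for loop fills plmpgm_dict here ...

# Persist the merged state for the next run
dict_to_json_file(plmpgm_dict, DUMP_FILE)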
Populating an Excel File Using an API to track Card Prices in Python
I'm a novice when it comes to Python, and in order to learn it I was working on a side project. My goal is to track the card prices of my YGO cards using the Yu-Gi-Oh! prices API: https://yugiohprices.docs.apiary.io/# I am attempting to manually enter the print tag for each card and then have the API pull the data and populate the spreadsheet, such as the name of the card and its trait, in addition to the price data, so that it is updated any time I run the code. My idea was to use a for loop to get the API to search up each print tag, store the information in an empty dictionary, and then post the results onto the Excel file. I added an example of the spreadsheet. Please let me know if I can clarify further. Any suggestions to the code that would help me achieve the goal for this project would be appreciated. Thanks in advance.

import requests
import response as rsp
import urllib3
import urlopen
import json
import pandas as pd

df = pd.read_excel("api_ygo.xlsx")
print(df[:5]) # See the first 5 columns

response = requests.get('http://yugiohprices.com/api/price_for_print_tag/print_tag')
print(response.json())

data = []
for i in df:
    print_tag = i[2]
    request = requests.get('http://yugiohprices.com/api/price_for_print_tag/print_tag' + print_tag)
    data.append(print_tag)
print(data)

def jprint(obj):
    text = json.dumps(obj, sort_keys=True, indent=4)
    print(text)

jprint(response.json())

Example spreadsheet: [image]
Iterating over a pandas dataframe can be done using df.apply(). This has the added advantage that you can store the results directly in your dataframe. First define a function that returns the desired result. Then apply the relevant column to that function while assigning the output to a new column:

import requests
import pandas as pd
import time

df = pd.DataFrame(['EP1-EN002', 'LED6-EN007', 'DRL2-EN041'], columns=['print_tag']) # just dummy data; in your case this is pd.read_excel

def get_tag(print_tag):
    request = requests.get('http://yugiohprices.com/api/price_for_print_tag/' + print_tag) # this url works, the one in your code wasn't correct
    time.sleep(1) # sleep for a second to prevent sending too many API calls per minute
    return request.json()

df['result'] = df['print_tag'].apply(get_tag)

You can now export this column to a list of dictionaries with df['result'].tolist(). Or even better, you can flatten the results into a new dataframe with pd.json_normalize:

df2 = pd.json_normalize(df['result'])
df2.to_excel('output.xlsx') # save dataframe as new excel file
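If you also want to keep the print tags next to the flattened price columns in the output file, one possible extension (a sketch, assuming df and df2 from above):

df_out = pd.concat([df.drop(columns=['result']), df2], axis=1)  # print_tag column plus the flattened API results
df_out.to_excel('output.xlsx', index=False)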
Reading an Excel file with pandas and printing it for insertion into an HTTP GET statement for a REST API
I want to read each line of an Excel file (.xlsx) in the column called 'ABC'. There are 4667 lines, and in each line there is a string. I want to print each string, but it does not work.

import requests
import pandas as pd

get_all_ABC = pd.read_excel('C:\Users\XXX\XXX2\XXX3\table.xlsx', header = 0)
row_iterator = get_all_ABC.iterrows()
_, last = row_iterator.__next__()
for i, row in row_iterator:
    r = requests.get(row["ABC"])
    r = requests.get(last["ABC"])
    last = row
data = (r.text)
print((r.text))
Why are you using the requests library? That is for making HTTP requests. Also, it's almost always bad practice to iterate over rows in pandas, and 99% of the time unnecessary. Also, r.text will be undefined as it's outside of the for loop scope. Could you explain exactly what you're trying to accomplish? I don't think I'm understanding correctly.
Julian L is right in his points; I mixed a lot up. I have to use the requests method for my overall problem, because I use the GET method on a REST API server with the strings that are written in the roughly 4000 lines of the column 'ABC' in the Excel file. Before this, I tried the following Python script (in that script I also do not use an iteration):

import requests
import pandas as pd

get_all_ABC = pd.read_excel('C:\Users\XXX\XXX2\XXX3\table.xlsx', skiprows=1).set_index('ABC')
r = requests.get('http://localhost:5000/api/sensors/data?ABC={get_all_ABC}')
print(r.json())

But this does not work either.
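For reference, a minimal sketch of the per-row request pattern, assuming the endpoint accepts the value as an ABC query parameter (the URL and column name are taken from the snippets above):

import requests
import pandas as pd

df = pd.read_excel(r'C:\Users\XXX\XXX2\XXX3\table.xlsx')

for abc in df['ABC']:
    # one GET per value in the 'ABC' column, passed as a query parameter
    r = requests.get('http://localhost:5000/api/sensors/data', params={'ABC': abc})
    print(r.json())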
This thread leads nowhere. I will delete this one and open a new one in which I describe the problem in more detail.
Get the original name of uploaded files in streamlit
I'm using Streamlit to make a basic visualization app to compare two datasets. For that, I'm using the following example made by Marc Skov from the Streamlit gallery:

from typing import Dict
import streamlit as st

@st.cache(allow_output_mutation=True)
def get_static_store() -> Dict:
    """This dictionary is initialized once and can be used to store the files uploaded"""
    return {}

def main():
    """Run this function to run the app"""
    static_store = get_static_store()
    st.info(__doc__)

    result = st.file_uploader("Upload", type="py")
    if result:
        # Process your file here
        value = result.getvalue()

        # And add it to the static_store if not already in
        if not value in static_store.values():
            static_store[result] = value
    else:
        static_store.clear()  # Hack to clear list if the user clears the cache and reloads the page
        st.info("Upload one or more `.py` files.")

    if st.button("Clear file list"):
        static_store.clear()

    if st.checkbox("Show file list?", True):
        st.write(list(static_store.keys()))
    if st.checkbox("Show content of files?"):
        for value in static_store.values():
            st.code(value)

main()

This does work, but it is odd to compare datasets without being able to display their names. The code explicitly says that it is not possible to get the file names using this method. But this example is from 8 months ago; I wonder if there is another way to accomplish this now.
In a commit made on 9 July, a slight modification of file_uploader() was made. It now returns a dict that contains:

- a name key containing the uploaded file name
- a data key containing a BytesIO or StringIO object

So you should be able to get the file name using result.name and the data using result.data.
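A minimal sketch of how the example above might use this, assuming the name and data attributes described in that commit:

import streamlit as st

static_store = {}
result = st.file_uploader("Upload", type="py")
if result:
    static_store[result.name] = result.data  # key the store by the original file name

st.write(list(static_store.keys()))  # now shows the real file names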
How to use Python to extract data from the Met Office JSON download
I am using Python 3.4. I have started a project to download the UK Met Office forecast data (in JSON format) and use the information as a weather compensator for my home heating system. I have succeeded in downloading the JSON data file from the Met Office, and now I want to extract the info I need. I can do this by converting the file to a string and using .find and int() to extract the data, but this seems crude (though effective). As JSON is said to be a well-used data interchange format, there must be a better way to do this. I have found things like json.load and json.loads, and also json.JSONDecoder.decode, but I haven't had any success in using them, and I really have little idea of what I am doing! My code is:

import urllib.request
import json

#Comment: THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
#Comment: **** = my personal Met Office API key, which I had better keep to myself
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/354037?res=3hourly&key=****')
FCData = response.read()
FCDataStr = str(FCData)
#Comment: END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Comment: Example of data extraction
ChPos = FCDataStr.find('"DV"') #Find "DV"
ChPos = FCDataStr.find('"dataDate"', ChPos, ChPos+50) #Find "dataDate"
FileDataDate = FCDataStr[ChPos+12:ChPos+22] #Extract the date of the file
#Comment: And so on

When using json.loads(FCDataStr) I get the following error message: "ValueError: Expecting value: line 1 column 1 (char 0)". By deleting the b' at the start and the ' at the end, this error goes away (see below). Printing the JSON file in string format, using print(FCDataStr), gives:

b'{"SiteRep":{"Wx":{"Param":[{"name":"F","units":"C","$":"Feels Like Temperature"},{"name":"G","units":"mph","$":"Wind Gust"},{"name":"H","units":"%","$":"Screen Relative Humidity"},{"name":"T","units":"C","$":"Temperature"},{"name":"V","units":"","$":"Visibility"},{"name":"D","units":"compass","$":"Wind Direction"},{"name":"S","units":"mph","$":"Wind Speed"},{"name":"U","units":"","$":"Max UV Index"},{"name":"W","units":"","$":"Weather Type"},{"name":"Pp","units":"%","$":"Precipitation 
Probability"}]},"DV":{"dataDate":"2014-07-29T20:00:00Z","type":"Forecast","Location":{"i":"354037","lat":"51.7049","lon":"-2.9022","name":"USK","country":"WALES","continent":"EUROPE","elevation":"43.0","Period":[{"type":"Day","value":"2014-07-29Z","Rep":[{"D":"NNW","F":"22","G":"11","H":"51","Pp":"4","S":"9","T":"24","V":"VG","W":"7","U":"7","$":"900"},{"D":"NW","F":"19","G":"16","H":"61","Pp":"8","S":"11","T":"22","V":"EX","W":"8","U":"1","$":"1080"},{"D":"NW","F":"16","G":"20","H":"70","Pp":"1","S":"11","T":"18","V":"VG","W":"2","U":"0","$":"1260"}]},{"type":"Day","value":"2014-07-30Z","Rep":[{"D":"NW","F":"13","G":"16","H":"84","Pp":"0","S":"7","T":"14","V":"VG","W":"0","U":"0","$":"0"},{"D":"WNW","F":"12","G":"13","H":"90","Pp":"0","S":"7","T":"13","V":"VG","W":"0","U":"0","$":"180"},{"D":"WNW","F":"13","G":"11","H":"87","Pp":"0","S":"7","T":"14","V":"GO","W":"1","U":"1","$":"360"},{"D":"SW","F":"18","G":"9","H":"67","Pp":"0","S":"4","T":"19","V":"VG","W":"1","U":"2","$":"540"},{"D":"WNW","F":"21","G":"13","H":"56","Pp":"0","S":"9","T":"22","V":"VG","W":"3","U":"6","$":"720"},{"D":"W","F":"21","G":"20","H":"55","Pp":"0","S":"11","T":"23","V":"VG","W":"3","U":"6","$":"900"},{"D":"W","F":"18","G":"22","H":"57","Pp":"0","S":"11","T":"21","V":"VG","W":"1","U":"2","$":"1080"},{"D":"WSW","F":"16","G":"13","H":"80","Pp":"0","S":"7","T":"16","V":"VG","W":"0","U":"0","$":"1260"}]},{"type":"Day","value":"2014-07-31Z","Rep":[{"D":"SW","F":"14","G":"11","H":"91","Pp":"0","S":"4","T":"15","V":"GO","W":"0","U":"0","$":"0"},{"D":"SW","F":"14","G":"11","H":"92","Pp":"0","S":"4","T":"14","V":"GO","W":"0","U":"0","$":"180"},{"D":"SW","F":"15","G":"11","H":"89","Pp":"3","S":"7","T":"16","V":"GO","W":"3","U":"1","$":"360"},{"D":"WSW","F":"17","G":"20","H":"79","Pp":"28","S":"11","T":"18","V":"GO","W":"3","U":"2","$":"540"},{"D":"WSW","F":"18","G":"22","H":"72","Pp":"34","S":"11","T":"20","V":"GO","W":"10","U":"5","$":"720"},{"D":"WSW","F":"18","G":"22","H":"66","Pp":"13","S":"11","T":"20","V":"VG","W":"7","U":"5","$":"900"},{"D":"WSW","F":"17","G":"22","H":"69","Pp":"36","S":"11","T":"19","V":"VG","W":"10","U":"2","$":"1080"},{"D":"WSW","F":"16","G":"16","H":"84","Pp":"6","S":"9","T":"17","V":"GO","W":"2","U":"0","$":"1260"}]},{"type":"Day","value":"2014-08-01Z","Rep":[{"D":"SW","F":"16","G":"13","H":"91","Pp":"4","S":"7","T":"16","V":"GO","W":"7","U":"0","$":"0"},{"D":"SW","F":"15","G":"11","H":"93","Pp":"5","S":"7","T":"16","V":"GO","W":"7","U":"0","$":"180"},{"D":"SSW","F":"15","G":"11","H":"93","Pp":"7","S":"7","T":"16","V":"GO","W":"7","U":"1","$":"360"},{"D":"SSW","F":"17","G":"18","H":"79","Pp":"14","S":"9","T":"18","V":"GO","W":"7","U":"2","$":"540"},{"D":"SSW","F":"17","G":"22","H":"74","Pp":"43","S":"11","T":"19","V":"GO","W":"10","U":"5","$":"720"},{"D":"SW","F":"16","G":"22","H":"81","Pp":"48","S":"11","T":"18","V":"GO","W":"10","U":"5","$":"900"},{"D":"SW","F":"16","G":"18","H":"80","Pp":"55","S":"9","T":"17","V":"GO","W":"12","U":"1","$":"1080"},{"D":"SSW","F":"15","G":"16","H":"89","Pp":"38","S":"7","T":"16","V":"GO","W":"9","U":"0","$":"1260"}]},{"type":"Day","value":"2014-08-02Z","Rep":[{"D":"S","F":"14","G":"11","H":"94","Pp":"15","S":"7","T":"15","V":"GO","W":"7","U":"0","$":"0"},{"D":"SSE","F":"14","G":"11","H":"94","Pp":"16","S":"7","T":"15","V":"GO","W":"7","U":"0","$":"180"},{"D":"S","F":"14","G":"13","H":"93","Pp":"36","S":"7","T":"15","V":"GO","W":"10","U":"1","$":"360"},{"D":"S","F":"15","G":"20","H":"84","Pp":"62","S":"11","T":"17","V":"GO","W":"14","U":"2","$":"540"},{"D":"SSW",
"F":"16","G":"22","H":"78","Pp":"63","S":"11","T":"18","V":"GO","W":"14","U":"5","$":"720"},{"D":"WSW","F":"16","G":"27","H":"66","Pp":"59","S":"13","T":"19","V":"VG","W":"14","U":"5","$":"900"},{"D":"WSW","F":"15","G":"25","H":"68","Pp":"39","S":"13","T":"18","V":"VG","W":"10","U":"2","$":"1080"},{"D":"SW","F":"14","G":"16","H":"80","Pp":"28","S":"9","T":"15","V":"VG","W":"0","U":"0","$":"1260"}]}]}}}}' The result of using: DecodedJSON = json.loads(FCDataStr) print(DecodedJSON) gives a very similar result to the original FCDataStr file. How do I proceed to extract the data (such as temperature, wind speed etc for each 3 hourly forecast) from the file?
For other clueless people who may want to use the UK Met Office 3-hourly forecast data feed, below is the solution that I am using:

import urllib.request
import json

### THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/**YourLocationID**?res=3hourly&key=**your_api_key**')
FCData = response.read()
FCDataStr = FCData.decode('utf-8')
### END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Converts JSON data to a dictionary object
FCData_Dic = json.loads(FCDataStr)

#The following are examples of extracting data from the dictionary object.
#The JSON data is heavily nested.
#Each [] goes one level down, usually defined with {} in the JSON data.
dataDate = (FCData_Dic['SiteRep']['DV']['dataDate'])
print('dataDate =',dataDate)

#There are also [] in the JSON data, which are referenced with integers,
#starting from [0].
#Here, the [0] refers to the first day's block of data defined with [].
DateDay0 = (FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['value'])
print('DateDay0 =',DateDay0)

#The second [0] picks out each of the first day's forecast data, in this case the time, referenced by '$'.
TimeOfFC = (FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['$'])
print('TimeOfFC =',TimeOfFC)

#Ditto for the temperature.
Temperature = int((FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['T']))
print('Temperature =',Temperature)

#Ditto for the weather type (a code number).
WeatherType = int((FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['W']))
print('WeatherType =',WeatherType)

I hope this helps somebody!
This is the problem:

FCDataStr = str(FCData)

When you call str on a bytes object, what you get is the string representation of that bytes object: in quotes, with a b prefix, and with special characters backslash-escaped. If you want to decode the binary data to text, you have to do that with the decode method:

FCDataStr = FCData.decode('utf-8')

(I'm guessing UTF-8 because JSON is always supposed to be in UTF-8 unless otherwise specified.)

In more detail: urllib.request.urlopen returns an http.client.HTTPResponse, which is a binary file-like object (it implements io.RawIOBase). You can't pass that to json.load, because json.load wants a text-file-like object: something with a read method that returns str, not bytes. You could wrap your HTTPResponse in an io.BufferedReader, then wrap that in an io.TextIOWrapper (with encoding='utf-8'), then pass that to json.load, but that's probably more work than you want to do.

So, the simplest thing to do is exactly what you were trying to do, just using decode instead of str:

data_bytes = response.read()
data_str = data_bytes.decode('utf-8')
data_dict = json.loads(data_str)

Then, don't try to access the data in data_str; that's just a string representing the JSON encoding of your data. data_dict is the actual data. For example, to find the dataDate of the DV of the SiteRep, you just do this:

data_dict['SiteRep']['DV']['dataDate']

That will get you the string '2014-07-31T14:00:00Z'. You'll still probably want to convert that to a datetime.datetime object (because JSON only understands a few basic types: strings, numbers, lists, and dicts), but it's still a lot better than trying to pick it out of data_str by find-ing or guessing at the offsets.

My guess is that you've found some sample code written for Python 2.x, where you can convert between byte strings and Unicode strings just by calling the appropriate constructors without specifying an encoding. That would default to sys.getdefaultencoding(), and often (at least on Mac and most modern Linux distros) that's UTF-8, so it just happened to work despite being wrong. In that case you may want to find some better sample code to learn from…
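As a small illustration of that last step, here is one way to do the datetime conversion mentioned above (a sketch, assuming the 'Z'-suffixed ISO format shown in the data and the data_dict variable from the snippet above):

from datetime import datetime

date_str = data_dict['SiteRep']['DV']['dataDate']  # e.g. '2014-07-31T14:00:00Z'
data_date = datetime.strptime(date_str, '%Y-%m-%dT%H:%M:%SZ')
print(data_date)  # 2014-07-31 14:00:00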
I have been at parsing the Met Office DataPoint output for a while. Thanks to the response above, I have something that works for me. I am writing the data I am interested in to a CSV file:

import sys
import os
import urllib.request
import json

### THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/3351?res=hourly&?key=<my key>')
FCData = response.read()
FCDataStr = FCData.decode('utf-8')
### END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Converts JSON data to a dictionary object
FCData_Dic = json.loads(FCDataStr)

# Open output file for appending
fName = <my filename>
if (not os.path.exists(fName)):
    print(fName,' does not exist')
    exit()
fOut = open(fName, 'a')

# Loop through each day; there will nearly always be 2 days,
# unless the script is run at midnight.
i = 0
j = 0
for k in range(24): # there will be 24 values altogether
    # find the first hour value for the first day
    DateZ = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['value'])
    hhmm = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['$'])
    Temperature = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['T'])
    Humidity = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['H'])
    DewPoint = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['Dp'])
    recordStr = '{},{},{},{},{}\n'.format(DateZ,hhmm,Temperature,Humidity,DewPoint)
    fOut.write(recordStr)
    j = j + 1
    if (hhmm == '1380'):
        i = i + 1
        j = 0
fOut.close()
print('Records added to ',fName)