I have searched and searched, but I have only found solutions involving PHP and not Python/Django. My goal is to make a website (with the backend coded in Python) that lets a user input a string. The backend script is then run and outputs a dictionary with some info. I want to use the info from that dictionary to draw text onto an image I have on the server and return the new image to the user. How can I do this offline for now? What libraries can I use? Any suggestions on the route I should take would be lovely.
I am still a novice, so please forgive me if my code needs work. So far I have no errors with what I have, but as I said, I have no clue where to go next to achieve my goal. Any tips would be greatly appreciated.
This is roughly what I want the end result to be: http://combatarmshq.com/dynamic-signatures.html
This is what I have so far (I used BeautifulSoup as the parser. If this is excessive or done poorly, please let me know if there is a better alternative. Thanks):
The URL where I'm getting the numbers I want (these are dynamic) is this: http://combatarms.nexon.net/ClansRankings/PlayerProfile.aspx?user=
The player's name goes after user=, so an example is http://combatarms.nexon.net/ClansRankings/PlayerProfile.aspx?user=-aonbyte
This is the code with the basic functions to scrape the website:
from urllib import urlopen
from BeautifulSoup import BeautifulSoup


def get_avatar(player_name):
    '''Return the player's avatar as a binary string.'''
    player_name = str(player_name)
    url = 'http://combat.nexon.net/Avatar/MyAvatar.srf?'
    url += 'GameName=CombatArms&CharacterID=' + player_name
    sock = urlopen(url)
    data = sock.read()
    sock.close()
    return data


def save_avatar(data, file_name):
    '''Save the avatar data from get_avatar() in PNG format.'''
    local_file = open(file_name + '.png', 'wb')
    local_file.write(data)
    local_file.close()


def get_basic_info(player_name):
    '''Return basic player statistics as a dictionary.'''
    url = 'http://combatarms.nexon.net/ClansRankings'
    url += '/PlayerProfile.aspx?user=' + player_name
    sock = urlopen(url)
    html_raw = sock.read()
    sock.close()
    html_original_parse = BeautifulSoup(html_raw)
    player_info = html_original_parse.find('div', 'info').find('ul')
    # Pull the six stats out of the profile's <li> elements
    basic_info_list = range(6)
    for i in basic_info_list:
        basic_info_list[i] = str(player_info('li', limit=7)[i + 1].contents[1])
    basic_info = dict(date=basic_info_list[0], rank=basic_info_list[1], kdr=basic_info_list[2],
                      exp=basic_info_list[3], gp_earned=basic_info_list[4], gp_current=basic_info_list[5])
    return basic_info
And here is the code that tests out those functions:
from grabber import get_avatar, save_avatar, get_basic_info
player = raw_input('Player name: ')
print 'Downloading avatar...'
avatar_data = get_avatar(player)
file_name = raw_input('Save as? ')
print 'Saving avatar as ' + file_name + '.png...'
save_avatar(avatar_data, file_name)
print 'Retrieving ' + player + '\'s basic character info...'
player_info = get_basic_info(player)
print ''
print ''
print 'Info for character named ' + player + ':'
print 'Character creation date: ' + player_info['date']
print 'Rank: ' + player_info['rank']
print 'Experience: ' + player_info['exp']
print 'KDR: ' + player_info['kdr']
print 'Current GP: ' + player_info['gp_current']
print ''
raw_input('Press enter to close...')
If I understand you correctly, you want to get an image from one place, get some textual information from another place, draw text on top of the image, and then return the marked-up image. Do I have that right?
If so, get PIL, the Python Imaging Library. Both PIL and BeautifulSoup are capable of reading directly from an opened URL, so you can forget that socket nonsense. Get the player name from the HTTP request, open the image, use BeautifulSoup to get the data, use PIL's text functions to write on the image, save the image back into the HTTP response, and you're done.
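For example, here is a minimal sketch of that last drawing step, reusing the grabber functions from the question. The player name, the text positions, and the output filename are placeholders, and it assumes Python 2 with PIL (or Pillow) installed:
from StringIO import StringIO
from PIL import Image, ImageDraw

from grabber import get_avatar, get_basic_info

player = '-aonbyte'  # placeholder player name
info = get_basic_info(player)

# Load the avatar bytes into a PIL image without writing them to disk first
avatar = Image.open(StringIO(get_avatar(player))).convert('RGB')

# Draw a couple of the scraped stats onto the image
draw = ImageDraw.Draw(avatar)
draw.text((10, 10), 'Rank: ' + info['rank'], fill='white')
draw.text((10, 25), 'KDR: ' + info['kdr'], fill='white')

# In Django you would write this into the HttpResponse instead of a local file
avatar.save('signature.png')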
I use this .py script to send messages automatically on WhatsApp.
The 'INSTRUCTOR' field contains Japanese, and the final Japanese message comes out garbled. Can you please help me solve this garbled-text problem?
import pyautogui as pg
import webbrowser as web
import time
import pandas as pd

# Ensure that the .csv is in the correct format, otherwise the encoding format may be incorrect!
data = pd.read_csv('/Users/wen9953/Desktop/REV/Auto_Whatsapp/Customer04.csv', encoding='utf-8-sig')
data_dict = data.to_dict('list')

leads = data_dict['phone']  # Enter the corresponding column name in single quotes, and make sure the column name is correct
instructors = data_dict['INSTRUCTOR']
name = data_dict['NAME']
combo = zip(leads, instructors, name)

first = True
for leads, instructors, name in combo:
    time.sleep(4)
    web.open("https://web.whatsapp.com/send?phone=" + leads + "&text=" + "Hello " + name + "! I'm Chevon, a membership consultant from Revolution and I noticed that you recently attended " + instructors + "'s class! How was it? I would love to hear any feedback that you may have for us! :)")
    if first:
        time.sleep(6)
        first = False
    width, height = pg.size()
    pg.click(width / 2, height / 2)
    time.sleep(8)
    pg.press('enter')
    time.sleep(8)
    pg.hotkey('ctrl', 'w')
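One likely cause of the garbling (this is a guess, not something confirmed here) is that the raw Japanese text is placed straight into the URL query string. A minimal sketch of percent-encoding the message first, meant to replace the web.open(...) line inside the loop:
from urllib.parse import quote

# Build the message, then percent-encode it so non-ASCII characters
# (e.g. Japanese) survive the trip through the URL query string.
message = ("Hello " + str(name) + "! I'm Chevon, a membership consultant from Revolution "
           "and I noticed that you recently attended " + str(instructors) + "'s class! "
           "How was it? I would love to hear any feedback that you may have for us! :)")
web.open("https://web.whatsapp.com/send?phone=" + str(leads) + "&text=" + quote(message))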
I think I've got to a point where I need help from professionals. I would like to build a scraper for a browser game that sends an alert to a bot (Telegram or Discord). Connecting the bot is not the problem for now; it is more about getting the right result.
My script runs in a while loop (it also runs without one) and is supposed to look for links in an <a> tag. These links contain an ID, which is incremented by 1 whenever a new player signs up to the game, and that is exactly what I need.
Since I need to compare the information, I figured I need to save it in a .csv file. And there lies the problem: the output in the .csv looks like this:
index.php?section=impressum
I have two problems:
1. I want to limit the output to the first 5 results in the file.
2. I only want to write to the file when something changes, and then only the corresponding change.
This is my code so far:
import requests
import time
import csv
from datetime import datetime
from bs4 import BeautifulSoup


def writeCSV(data):
    csv_file = open('ags_scrape.csv', 'w')
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow([data])
    csv_file.close()


sleepTimer = 3

# Address of the website
url = "https://www.ag-spiel.de/"

allAGs = []
firstRun = True

while True:
    response = requests.get(url + "index.php?section=live")
    # Parse the HTML document from the page source with BeautifulSoup
    html = BeautifulSoup(response.text, 'html.parser')

    # Parse the URL out of the <a href> tags
    newDetected = False
    newAGs = []
    possible_links = html.find_all('a')
    for link in possible_links:
        if link.has_attr('href'):
            inhalt = str(link.attrs['href'])
            if "aktie=" in inhalt:
                if firstRun is True:
                    allAGs.append(inhalt)
                else:
                    if str(inhalt) not in allAGs:
                        newDetected = True
                        print("ATTENTION!!! New AG! Url is: " + inhalt)
                        allAGs.append(inhalt)
                        # Write to file
                        writeCSV(inhalt)
                    else:
                        # print("Debug output " + inhalt + " already in AG list")
                        continue

    if firstRun is True:
        print("First run successful, current AGs: " + str(len(allAGs)))
        for AGurl in allAGs:
            print(AGurl)
    else:
        if newDetected is False:
            print(str(datetime.now().strftime("%H:%M:%S")) + ": Nothing changed")
            writeCSV(inhalt)
        else:
            print("Something changed, current AGs: " + str(len(allAGs)))
            for AGurl in allAGs:
                print(AGurl)

    firstRun = False
    time.sleep(sleepTimer)
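As a rough sketch of one way to cover both points (this is my own assumption about the desired behaviour, not code from the question): collect the links found in each pass, keep only the first five, and rewrite the CSV only when that list changes.
import csv
import time

import requests
from bs4 import BeautifulSoup

URL = "https://www.ag-spiel.de/index.php?section=live"


def write_csv(rows):
    # Rewrite the file so it only ever contains the latest five links
    with open('ags_scrape.csv', 'w', newline='') as csv_file:
        writer = csv.writer(csv_file)
        for row in rows:
            writer.writerow([row])


last_five = []
while True:
    html = BeautifulSoup(requests.get(URL).text, 'html.parser')
    # Collect all share links found in this pass, keeping page order
    found = [a['href'] for a in html.find_all('a', href=True) if "aktie=" in a['href']]
    current_five = found[:5]           # 1. limit the output to the first 5 results
    if current_five != last_five:      # 2. only touch the file when something changed
        last_five = current_five
        write_csv(current_five)
        print("Change detected, CSV updated:", current_five)
    time.sleep(3)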
I have written some Python code to help me pull data from an API. The first version of my program works quite well.
Now I am trying to develop a more DRY version of the code by introducing functions and loops. I am still new to Python.
Your professional advice will be really appreciated.
import requests
import json
# Bika Lims Authentication details
username = 'musendamea'
password = '!Am#2010#bgl;'
# API url Calls for patients, analysis and cases
patient_url = "adress1"
analysis_url = "adress2"
cases_url = "adress3"
# Perform API calls and parse the JSON data
patient_data = requests.get(patient_url, auth=(username, password ))
analysis_data = requests.get(analysis_url, auth=(username, password ))
cases_data = requests.get(cases_url, auth=(username, password ))
patients = json.loads(patient_data.text)
analysis = json.loads(analysis_data.text)
cases = json.loads(cases_data.text)
# checks for errors if any
print ("Patients")
print (patients['error'])
print (patients['success'])
print (patients['last_object_nr'])
print (patients['total_objects'])
print ("\n Analysis")
print (analysis['error'])
print (analysis['success'])
print (analysis['last_object_nr'])
print (analysis['total_objects'])
print ("\n Cases")
print (cases['error'])
print (cases['success'])
print (cases['last_object_nr'])
print (cases['total_objects'])
# create and save json files for patients, analysis and cases
with open('patients.json', 'w') as outfile:
    json.dump(patients['objects'], outfile)

with open('analysis.json', 'w') as outfile1:
    json.dump(analysis['objects'], outfile1)

with open('cases.json', 'w') as outfile2:
    json.dump(cases['objects'], outfile2)
The above code works pretty well, but my challenge is making it DRY. Somehow the loop breaks when I change it to the following:
your_domain = "10.0.0.191"

data_types = ['patients', 'analysis', 'cases']
checkers = ['error', 'success', 'total_objects']
urls = []
data_from_api = []

# API URL calls
base_url = "http://" + your_domain + "/##API/read?"
page_size = "1000000000000000000"

patient_url = base_url + "catalog_name=bika_patient_catalog&page_size=" + page_size
analysis_url = base_url + "portal_type=AnalysisRequest&review_state=published&page_size=" + page_size
cases_url = base_url + "portal_type=Batch&page_size=" + page_size

urls.append(patient_url)
urls.append(analysis_url)
urls.append(cases_url)


# Perform API calls and parse the JSON data
def BikaApiCalls(urls, username, password):
    for i in len(urls):
        data_ = requests.get(urls[i - 1], auth=(username, password))
        print(data_types[i] + " ~ status_code: ")
        print(data_.status_code + "\n")
        data_from_api.append(json.loads(data_.text))
        for val in len(checkers):
            print(data_from_api[i][val])


BikaApiCalls(urls, username, password)


# Write JSON files
def WriteJson(data_types, data_from_api):
    for i in len(data_from_api):
        with open(data_types[i] + '.json', 'w') as outfile:
            json.dump(data_from_api[i]['objects'], outfile)


WriteJson(data_types, data_from_api)
Where am I getting it wrong? I tried some debugging but I can't seem to get through it. I'd really appreciate your help.
Thanks in advance :)
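Not a confirmed diagnosis, but the most likely culprits in the code above are iterating over len(urls) instead of range(len(urls)) and the urls[i - 1] off-by-one indexing. A sketch of the two helpers with those issues fixed (it assumes the urls, data_types, checkers, username and password variables defined earlier, plus the requests and json imports from the first version):
def BikaApiCalls(urls, username, password):
    data_from_api = []
    for i in range(len(urls)):                  # range(), not the bare len()
        data_ = requests.get(urls[i], auth=(username, password))
        print(data_types[i] + " ~ status_code: " + str(data_.status_code))
        parsed = json.loads(data_.text)
        data_from_api.append(parsed)
        for key in checkers:                    # iterate over the key names themselves
            print(parsed[key])
    return data_from_api


def WriteJson(data_types, data_from_api):
    for i in range(len(data_from_api)):
        with open(data_types[i] + '.json', 'w') as outfile:
            json.dump(data_from_api[i]['objects'], outfile)


data_from_api = BikaApiCalls(urls, username, password)
WriteJson(data_types, data_from_api)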
I've been trying to download screenshots from the App Store and here's my code (I'm a beginner).
The problem I encounter is list index out of range at line 60 (screenshotList = data["results"][resultCounter]["screenshotUrls"]).
The thing is that sometimes the search API returns 0 results for the search term used, and therefore it gets messed up because "resultCount" is 0.
I'm not sure what else it could be, nor how I can fix it. Any help?
# Required libraries
import urllib
import string
import random
import json
import time

""" screenshotCounter is used so that all screenshots have a different name
resultCounter is used to go from result to result in downloaded JSON file
"""
screenshotCounter = 0
resultCounter = 0

""" Create three random letters as search term on App Store
Download JSON results file
Shows used search term
"""
searchTerm = (''.join(random.choice(string.ascii_lowercase) for i in range(3)))
urllib.urlretrieve("https://itunes.apple.com/search?country=us&entity=software&limit=3&term=" + str(searchTerm), "download.txt")
print "Used search term: " + str(searchTerm)

# Function to download screenshots + give it a name + confirmation msg
def download_screenshot(screenshotLink, screenshotName):
    urllib.urlretrieve(screenshotLink, screenshotName)
    print "Downloaded with success:" + str(screenshotName)

# Opens newly downloaded JSON file
with open('download.txt') as data_file:
    data = json.load(data_file)

""" Get the first list of screenshots from stored JSON file,
resultCounter = 0 on first iteration
"""
screenshotList = data["results"][resultCounter]["screenshotUrls"]

# Gives the number of found results and serves as iteration limit
iterationLimit = data["resultCount"]

# Prints the number of found results
print str(iterationLimit) + " results found."

""" Change the number of iterations to the number of results, which will be
different for every request, minus 1 since indexing starts at 0
"""
iterations = [0] * iterationLimit

""" For each iteration (number of results), find each screenshot in the
screenshotList, name it, download it. Then change result to find the next
screenshotList and change screenshotList variable.
"""
for number in iterations:
    for screenshotLink in screenshotList:
        screenshotName = "screenshot" + str(screenshotCounter) + ".jpeg"
        download_screenshot(screenshotLink, screenshotName)
        screenshotCounter = screenshotCounter + 1
    resultCounter = resultCounter + 1
    screenshotList = data["results"][resultCounter]["screenshotUrls"]

    # Sleeping to avoid crash
    time.sleep(1)
I rewrote your code to check for the presence of results before trying anything. If there aren't any, it goes back through the loop with a new search term. If there are, it will stop at the end of that iteration.
# Required libraries
import urllib
import string
import random
import json
import time

# Function to download screenshots + give it a name + confirmation msg
def download_screenshot(screenshotLink, screenshotName):
    urllib.urlretrieve(screenshotLink, screenshotName)
    print "Downloaded with success:" + str(screenshotName)

success = False
while success == False:
    """ Create three random letters as search term on App Store
    Download JSON results file
    Shows used search term
    """
    searchTerm = (''.join(random.choice(string.ascii_lowercase) for i in range(3)))
    urllib.urlretrieve("https://itunes.apple.com/search?country=us&entity=software&limit=3&term=" + str(searchTerm), "download.txt")
    print "Used search term: " + str(searchTerm)

    # Opens newly downloaded JSON file
    with open('download.txt') as data_file:
        data = json.load(data_file)

    """ Get the first list of screenshots from stored JSON file,
    resultCounter = 0 on first iteration
    """
    resultCount = len(data["results"])
    if resultCount == 0:
        continue  # if no results, skip to the next loop
    success = True
    print str(resultCount) + " results found."

    for j, resultList in enumerate(data["results"]):
        screenshotList = resultList["screenshotUrls"]
        """ For each iteration (number of results), find each screenshot in the
        screenshotList, name it, download it. Then change result to find the next
        screenshotList and change screenshotList variable.
        """
        for i, screenshotLink in enumerate(screenshotList):
            screenshotName = "screenshot" + str(i) + '_' + str(j) + ".jpeg"
            download_screenshot(screenshotLink, screenshotName)

            # Sleeping to avoid crash
            time.sleep(1)
have you tried
try:
    for screenshotLink in screenshotList:
        screenshotName = "screenshot" + str(screenshotCounter) + ".jpeg"
        download_screenshot(screenshotLink, screenshotName)
        screenshotCounter = screenshotCounter + 1
except IndexError:
    pass
I'm trying to write a program that will go to a website and download all of the songs they have posted. Right now I'm having trouble creating new file names for each of the songs I download. I initially get all of the file names and the locations of the songs (html). However, when I try to create new files for the songs to be put in, I get an error saying:
IOError: [Errno 22] invalid mode ('w') or filename
I have tried using different modes like "w+", "a", and "a+" to see if these would solve the issue, but so far I keep getting the error message. I have also tried "%"-formatting the string with the name, but that has not worked either. My code follows; any help would be appreciated.
import urllib
import urllib2

def earmilk():
    SongList = []
    SongStrings = []
    SongNames = []
    earmilk = urllib.urlopen("http://www.earmilk.com/category/pop")
    reader = earmilk.read()

    # gets the position of the playlist
    PlaylistPos = reader.find("var newPlaylistTracks = ")

    # finds the number of songs in the playlist
    NumberSongs = reader[reader.find("var newPlaylistIds = "): PlaylistPos].count(",") + 1

    initPos = PlaylistPos

    # goes through the playlist and records the html address and name of the song
    for song in range(0, NumberSongs):
        songPos = reader[initPos:].find("http:") + initPos
        namePos = reader[songPos:].find("name") + songPos
        namePos += reader[namePos:].find(">")
        nameEndPos = reader[namePos:].find("<") + namePos
        SongStrings.append(reader[songPos: reader[songPos:].find('"') + songPos])
        SongNames.append(reader[namePos + 1: nameEndPos])
        # initPos += len(SongStrings[song])
        initPos = nameEndPos

    for correction in range(0, NumberSongs):
        SongStrings[correction] = SongStrings[correction].replace('\\/', "/")

    # downloading songs
    # for download in range(0, NumberSongs):
    # print reader.find("So F*")
    # x = SongNames[0]
    songDL = open(SongNames[0].formant(name), "w+")
    songDL.write(urllib.urlretrieve(SongStrings[0], SongNames[0] + ".mp3"))
    songDL.close()

    print SongStrings
    for name in range(0, NumberSongs):
        print SongNames[name] + "\n"

    earmilk.close()
You need to use filename = '%s' % (SongNames[0],) to construct the name, but you also need to make sure that your file name is a valid one. I don't know of any songs called *.*, but I wouldn't like to chance it, so something like:
filename = ''.join([a.isalnum() and a or '_' for a in SongNames[0]])
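For example, a rough sketch of how that could slot into the earmilk() function above (just illustrating the suggestion; note that urllib.urlretrieve already writes the file itself, so the separate open()/write() isn't needed):
# Build a filesystem-safe name from the scraped song title
safe_name = ''.join([a.isalnum() and a or '_' for a in SongNames[0]])
# urlretrieve downloads the URL straight into that file
urllib.urlretrieve(SongStrings[0], safe_name + ".mp3")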