I'm building a media player for the office, and so far so good but I want to add a voting system (kinda like Pandora thumbs up/thumbs down)
To build the playlist, I am currently using the following code. It pulls 100 random tracks that haven't been played recently (we make sure all tracks have around the same play count), ensures we don't hear the same artist within 10 songs, and builds a playlist of 50 songs.
max_value = Items.select(fn.Max(Items.count_play)).scalar()
query = (Items
         .select()
         .where(Items.count_play < max_value, Items.count_skip_vote < 5)
         .order_by(fn.Rand())
         .limit(100))
if query.count() < 1:
    max_value = max_value - 1
    query = (Items
             .select()
             .where(Items.count_play < max_value, Items.count_skip_vote < 5)
             .order_by(fn.Rand())
             .limit(100))
artistList = []
playList = []
for item in query:
    if len(playList) == 50:
        break
    if item.artist not in artistList:
        playList.append(item.path)
        if len(artistList) < 10:
            artistList.append(item.artist)
        else:
            artistList.pop(0)
            artistList.append(item.artist)
for path in playList:
    client.add(path.replace("/music/Library/", ""))
I'm trying to work out the best way to use the up/down votes.
I want tracks with downvotes to play less often, and tracks with upvotes more often.
I'm not after direct code because I'm pretty OK with python, it's more of the logic that I can't quite nut out (that being said, if you feel the need to improve my code, I won't stop you :) )
Initially give each track a weight w, e.g. 10; an upvote increases this, a downvote reduces it (but never to 0). Then, when deciding which track to play next:
Calculate the total of all the weights, generate a random number between 0 and this total, and step through the tracks from 0-49 adding up their w until the running sum exceeds the random number. Play that track.
The exact weighting algorithm (e.g. how much an upvote/downvote changes w) will of course affect how often tracks (re)appear. Wasn't it Apple who had to change the 'random' shuffle of their early iPod because it could play the same track twice (or close enough together for a user to notice)? They had to make it less random, which I presume means also adjusting the weighting by how recently a track was played; in that case the time since last play would also be taken into account when choosing the next track. Make sure you cover the edge cases where everyone downvotes 49 (or all 50, if they want silence) of the tracks. Or maybe that's what you want...
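A minimal sketch of the cumulative-weight pick described above; the `tracks` list of `(path, weight)` pairs and the `pick_track` name are hypothetical stand-ins for your 50-song playlist:

```python
import random

def pick_track(tracks):
    # tracks: list of (path, weight) pairs; keep weights >= 1 so nothing drops to 0
    total = sum(w for _, w in tracks)
    r = random.uniform(0, total)
    running = 0
    for path, w in tracks:
        running += w
        if running >= r:
            return path
    return tracks[-1][0]  # guard against floating-point edge cases
```

An upvote would then just bump a track's stored weight and a downvote would decrement it with a floor of 1, so downvoted tracks still surface occasionally.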
I am trying to solve the "throw the ball in a park" problem, which I found at https://brainly.in/question/53867717
My solution to the problem:
import random

seconds = 6
player = random.sample(range(1, 11), 10)
next_receiver = player[0]
for i in range(1, seconds):
    next_receiver = player[next_receiver - 1]
# next_receiver at the end of the loop is the player who will have the ball
This solution gives the correct answer; however, it has one problem: when the size of player is of the order 10^9 and seconds = 9999999, it takes a long time to get an answer. On my PC it takes about 1 minute and 20 seconds. I cannot think of any better way to solve this problem. Any hints?
We know that most likely there is a cycle that we get stuck in at the end. If this cycle is shorter than the seconds count, we can use it to speed up the process by jumping ahead a multiple of the cycle length.
def throw_balls(start, receiver, seconds):
    current_player = start
    # Stores which player already received the ball and the second at which they got it.
    already_seen = dict()
    for i in range(seconds):
        current_player = receiver[current_player - 1]
        if current_player in already_seen:
            # This player previously received the ball, so we are in a cycle
            loop_length = i - already_seen[current_player]
            remaining = seconds - i - 1  # i + 1 throws have already happened
            break
        else:
            already_seen[current_player] = i
    else:
        return current_player
    return throw_balls(current_player, receiver, remaining % loop_length)
Note that this does not guarantee faster execution; for example, when each player passes the ball on to the next person, this will behave exactly the same as your solution, although slower, since we are doing extra work.
Edit:
After a lengthy discussion, @Kelly Bundy and I came to the conclusion that this probably will not help the situation you put yourself in with the way you are initializing receiver, although it probably helps with the way the original problem is formulated.
If receiver is a permutation of the potential targets, so that no two players will target the same person, then the expected cycle size for N = 10^9 is too large to notice/reach in ~10^7 steps (see the 100 Prisoners Problem). A simple calculation gives about a 1% chance that a cycle is reached within 10^7 steps.
However, the original problem sounds like it's not a permutation; instead each player's target is chosen independently of everyone else, which means two players can target the same person. In that case there is a close to 100% chance that a cycle will be reached after only ~10^6 steps, and this changed algorithm will show a roughly 10x theoretical speedup.
Good evening!
I have just picked up Python last month and I am having a lot of fun with it!
I am writing my first program ever, in order to determine the probability of 2 songs from the same album playing back to back in a shuffled Spotify playlist: Sbotify! I have the user input sorted out, but I cannot find a way to apply the probability formula to a value.
The formula multiplies the probabilities of each separate event by one another, as described in this article (methods 2-3). I need to multiply an inputted int() value by itself minus 1, on repeat, until the value reaches 0, and then do the same thing for another value, and so on and so forth.
I have tried:
for loops
the map() function
range(), to get around the int-is-not-iterable error
But nothing seems to work. I've looked for an answer for hours and I cannot find anything that fits my purpose. Any link, resource, or knowledge is much welcomed! 😊
Here is my code,
import math

print("\nHey user! Sbotify has for purpose to tell you the probability of\n"
      "playing a song of the same album in a shuffled Spotify playlist.\n\n")
total_songs = int(input("\nEnter the total of your songs: "))
lst = []
num = int(input('Enter a number of albums: '))
for n in range(num):
    numbers = int(input('Enter the number of songs for each album: '))
    lst.append(numbers)
print("Sum of songs belonging to an album is:", sum(lst))
print(lst)
print("\nLet's find out the songs that don't belong to any album. This will be handy later:")
solo_songs = total_songs - sum(lst)
print(solo_songs)
# I'm afraid we need to use... **MATH**
print("Nice! Now we need to find out the probabilities of 2 songs of the same album playing back to back in shuffle mode")
# Each album needs to be considered 1 entity, and for that the solo_songs value needs to be
# multiplied by itself - 1 until 0, to respect the probability formula
Let me know if anything is incorrect, over-complicated, or needs to be redone completely!
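For what it's worth, the "multiply the value by itself minus 1, on repeat, until it reaches 0" step described above is just a descending product (a factorial); a minimal sketch, with `descending_product` as a hypothetical helper name:

```python
import math

def descending_product(n):
    # n * (n - 1) * ... * 1, looping the way described above
    result = 1
    while n > 0:
        result *= n
        n -= 1
    return result

# Sanity check against the stdlib: descending_product(5) == math.factorial(5) == 120
```

You could then call this once per album and once for the whole playlist, and combine the results according to the article's formula.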
I was trying to make a Spreeder-like terminal tool. Spreeder is basically an app that increases your reading speed by showing you the words of a text according to your choice of WPM and chunk size. To understand it better, you can take a look at its website (just run it once, you'll see what it does).
I managed to divide the string into chunks and display them according to the user's choice of chunk size. But I couldn't figure out how to calculate the time that should pass between chunks. This is my whole code:
import time
import os
# Original String
string = "Speed reading is the art of silencing subvocalization. Most readers have an average reading speed of 200 wpm, which is about as fast as they can read a passage out loud. This is no coincidence. It is their inner voice that paces through the text that keeps them from achieving higher reading speeds. They can only read as fast as they can speak because that's the way they were taught to read, through reading systems like Hooked on Phonics.However, it is entirely possible to read at a much greater speed, with much better reading comprehension, by silencing this inner voice. The solution is simple - absorb reading material faster than that inner voice can keep up. In the real world, this is achieved through methods like reading passages using a finger to point your way. You read through a page of text by following your finger line by line at a speed faster than you can normally read. This works because the eye is very good at tracking movement. Even if at this point full reading comprehension is lost, it's exactly this method of training that will allow you to read faster.With the aid of software like Spreeder, it's much easier to achieve this same result with much less effort. Load a passage of text (like this one), and the software will pace through the text at a predefined speed that you can adjust as your reading comprehension increases. To train to read faster, you must first find your base rate. Your base rate is the speed that you can read a passage of text with full comprehension. We've defaulted to 300 wpm, showing one word at a time, which is about the average that works best for our users. Now, read that passage using spreeder at that base rate. After you've finished, double that speed by going to the Settings and changing the Words Per Minute value. Reread the passage. You shouldn't expect to understand everything - in fact, more likely than not you'll only catch a couple words here and there. 
If you have high comprehension, that probably means that you need to set your base rate higher and rerun this test again. You should be straining to keep up with the speed of the words flashing by. This speed should be faster than your inner voice can 'read2.Now, reread the passage again at your base rate. It should feel a lot slower – if not, try running the speed test again). Now try moving up a little past your base rate – for example, at 400 wpm – , and see how much you can comprehend at that speed. That's basically it - constantly read passages at a rate faster than you can keep up, and keep pushing the edge of what you're capable of. You'll find that when you drop down to lower speeds, you'll be able to pick up much more than you would have thought possible.One other setting that's worth mentioning in this introduction is the chunk size – the number of words that are flashed at each interval on the screen. When you read aloud, you can only say one word at a time. However, this limit does not apply to speed reading. Once your inner voice subsides and with constant practice, you can read multiple words at a time. This is the best way to achieve reading speeds of 1000+ wpm. Start small with 2 word chunk sizes and find out that as you increase, 3,4, or even higher chunk sizes are possible.Good luck!"
# All words in a list
all_words = string.split(" ")
# User selects how many words are going to be viewed per time
chunkSize = int(input("Please enter the chunk size > "))
# Word per minute calculation
wpm = int(input("Please enter WPM > "))
# Finds number of iterations
def run_times(chunkSize):
    if chunkSize > len(all_words):
        print("Chunk size is too big for number of words in text.")
        return "Error"
    elif len(all_words) / chunkSize != int(len(all_words) / chunkSize):
        return int(len(all_words) / chunkSize) + 1
    return int(len(all_words) / chunkSize)
# Dyeing word set in the original string
def dye(startIndex, endIndex):
    temp_list = string.split(" ")
    temp_string = ""
    for index in range(startIndex, endIndex + 1):
        temp_list[index] = "\033[1;31m" + temp_list[index] + "\033[0;0m"
    for word in temp_list:
        temp_string += word + " "
    return temp_string
# Clearing screen function
def clear_screen(time_to_wait):
    time.sleep(time_to_wait)
    # os.system does not raise on an unknown command, so pick the command by platform
    os.system("cls" if os.name == "nt" else "clear")
def main():
    run_time = run_times(chunkSize)
    startIndex = 0
    endIndex = chunkSize - 1
    # Time that will pass between chunks
    # The closest equation to the right answer I found, but still it's not right :(
    t = chunkSize / (wpm / 60)
    for i in range(run_time):
        temp_string = dye(startIndex, endIndex)
        print(temp_string)
        # Increasing index values
        if startIndex + chunkSize >= len(all_words):
            startIndex = len(all_words)
        else:
            startIndex += chunkSize
        if endIndex + chunkSize >= len(all_words):
            endIndex = len(all_words) - 1
        else:
            endIndex += chunkSize
        clear_screen(t)

main()
The clear_screen call in the main function determines how much time passes, so the value t needs to be set to the right amount of time. The assignment that determines the value of t is in the main function as well.
By the way, I included my calculation for the t value; however, it's not correct, it's just the closest I've found to the right answer.
If you copy-paste my code and run it, you can see that everything works fine except that the time passed between chunks is wrong.
Any idea how to calculate the value of t?
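For what it's worth, t = chunkSize / (wpm / 60) is already the nominal seconds per chunk; any drift most likely comes from the time spent printing and clearing the screen, which adds on top of the sleep. A sketch (assuming every word counts equally toward WPM) that subtracts the render time; the helper names are hypothetical:

```python
import time

def chunk_delay_seconds(chunk_size, wpm):
    # wpm words per minute -> wpm / 60 words per second
    return chunk_size / (wpm / 60)

def sleep_for_chunk(chunk_size, wpm, render_started_at):
    # Subtract the time already spent printing/clearing so the cadence stays accurate
    elapsed = time.monotonic() - render_started_at
    time.sleep(max(0.0, chunk_delay_seconds(chunk_size, wpm) - elapsed))
```

In the main loop you would record `start = time.monotonic()` before printing a chunk and call `sleep_for_chunk(chunkSize, wpm, start)` in place of the plain sleep.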
I have little to no formal discrete math training, and have run into a wee bit of an issue. I am trying to write an agent which reads in a human player's (arbitrary) score and scores a point every so often. The agent needs to "lag behind" and "catch up" every so often, so that the human player believes there is some competition going on. Then, the agent must either win or lose (depending on the condition) against the human.
I have tried a few different techniques, including a wonky probabilistic loop (which failed horribly). I was thinking that this problem calls for something like an emission Hidden Markov Model (HMM), but I'm not sure how to implement it (or even whether this is the best approach).
I have a gist up, but again, it sucks.
I hope the __main__ function provides some insight as to the goal of this agent. It is going to be called in pygame.
I think you may be over-thinking this. You can use simple probability to estimate how often, and by how much, the computer's score should "catch up". Additionally, you can calculate the difference between the computer's score and the human's score, and then feed this to a sigmoid-like function to give you the degree to which the computer's score increases.
Illustrative Python:
#!/usr/bin/python
import random, math

human_score = 0
computer_score = 0
trials = 100
computer_ahead_factor = 5  # maximum amount of points the computer can be ahead by
computer_catchup_prob = 0.33  # probability of computer catching up
computer_ahead_prob = 0.5  # probability of computer being ahead of human
computer_advantage_count = 0
for i in range(trials):
    # Simulate player score increase.
    human_score += random.randint(0, 5)  # add an arbitrary random amount
    # Simulate computer lagging behind human, by calculating the probability of
    # computer jumping ahead based on proximity to the human's score.
    score_diff = human_score - computer_score
    p = (math.atan(score_diff) / (math.pi / 2.) + 1) / 2.
    if random.random() < computer_ahead_prob:
        computer_score = human_score + random.randint(0, computer_ahead_factor)
    elif random.random() < computer_catchup_prob:
        computer_score += int(abs(score_diff) * p)
    # Display scores.
    print('Human score:', human_score)
    print('Computer score:', computer_score)
    computer_advantage_count += computer_score > human_score
print('Effective computer advantage ratio: %.6f' % (computer_advantage_count / trials,))
I am making the assumption that the human cannot see the computer agent playing the game. If this is the case, here is one idea you might try.
Create a list of all the possible point combinations that can be scored for any given move. For each move, find a score range which you would like the agent to end up within after the current turn. Reduce the set of possible move values to only the values which would land the agent in that particular range, and randomly select one. As conditions change for how far behind or ahead you would like the agent to get, simply slide your range appropriately.
If you are looking for something with some kind of built-in, researched psychological effect on the human, I can't help you with that. You will need to define more rules for us if you want something more specific to your situation than this.
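The range-based selection described above can be sketched as follows; `pick_agent_move` and the target-band parameters are hypothetical names:

```python
import random

def pick_agent_move(possible_points, current_score, target_low, target_high):
    # Keep only the move values that land the agent inside the desired score band
    candidates = [p for p in possible_points
                  if target_low <= current_score + p <= target_high]
    if not candidates:
        # Nothing fits: fall back to the move that gets closest to the band's center
        center = (target_low + target_high) / 2
        candidates = [min(possible_points,
                          key=lambda p: abs(current_score + p - center))]
    return random.choice(candidates)
```

Sliding target_low and target_high each turn, relative to the human's score, is what produces the lag-behind and catch-up feel.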
Let's say I have around 1,000,000 users. I want to find out what position any given user is in, and which users are around him. A user can get a new achievement at any time, and if he could see his standing update, that would be wonderful.
Honestly, every way I can think of doing this would be horrendously expensive in time and/or memory. Ideas? My closest idea so far is to order the users offline and build percentile buckets, but that can't show a user his exact position.
Some code, if that helps you Django people:
class Alias(models.Model):
    awards = models.ManyToManyField('Award', through='Achiever')

    @property
    def points(self):
        p = cache.get('alias_points_' + str(self.id))
        if p is not None:
            return p
        points = 0
        for a in self.achiever_set.all():
            points += a.award.points * a.count
        cache.set('alias_points_' + str(self.id), points, 60 * 60)  # 1 hour
        return points

class Award(MyBaseModel):
    owner_points = models.IntegerField(help_text="A non-normalized point value. Very subjective but try to be consistent. Should be proportional. 2x points = 2x effort (or skill)")
    true_points = models.FloatField(help_text="The true value of this award. Recalculated with a cron job. Based on number of people who won it", editable=False, null=True)

    @property
    def points(self):
        if self.true_points:
            # blend true_points into real points over 30 days
            age = datetime.now() - self.created
            blend_days = 30
            if age > timedelta(days=blend_days):
                age = timedelta(days=blend_days)
            num_days = 1.0 * age.days / blend_days
            r = self.true_points * num_days + self.owner_points * (1 - num_days)
            return int(r * 10) / 10.0
        else:
            return self.owner_points

class Achiever(MyBaseModel):
    award = models.ForeignKey(Award)
    alias = models.ForeignKey(Alias)
    count = models.IntegerField(default=1)
I think Counter-Strike solves this by requiring users to meet a minimum threshold to become ranked: you only need to accurately sort the top 10% or whatever.
If you want to sort everyone, consider that you don't need to sort them perfectly: sort them to 2 significant figures. With 1M users you could update the leaderboard for the top 100 users in real time, the next 1000 users to the nearest 10, then the masses to the nearest 1% or 10%. You won't jump from place 500,000 to place 99 in one round.
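As a rough sketch of that "2 significant figures" display, using the cutoffs from the paragraph above (`coarse_rank` is a hypothetical name):

```python
def coarse_rank(rank, total_users):
    # Top 100: exact rank; next ~1000: nearest 10; everyone else: nearest percent
    if rank <= 100:
        return str(rank)
    if rank <= 1100:
        return "~" + str(round(rank, -1))
    return "top " + str(max(1, round(100 * rank / total_users))) + "%"
```

Only the exact tier needs real-time updates; the coarser tiers can be refreshed on a schedule without anyone noticing.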
It's meaningless to get the 10-user context above and below place 500,000: the ordering of the masses will be incredibly jittery from round to round due to the exponential distribution.
Edit: Take a look at the SO leaderboard. Now go to page 500 out of 2500 (roughly the 20th percentile). Is there any point in telling the people with rep 157 that the 10 people on either side of them also have rep 157? You'll jump 20 places either way if your rep goes up or down a point. More extreme: right now the bottom 1056 pages (out of 2538), or the bottom 42% of users, are tied with rep 1. You get one more point, and you jump up 1055 pages, which is roughly a 37,000 increase in rank. It might be cool to tell them "you can beat 37k people if you get one more point!", but does it matter how many significant figures the 37k number has?
There's no value in knowing your peers on a ladder until you're already at the top, because anywhere but the top, there's an overwhelming number of them.
One million is not so much; I would try the easy way first. If the points property is the thing you are sorting on, it needs to be a database column. Then you can just count the users with more points than the person in question to get their rank. To get the people near them, query users with higher points, sort ascending, and limit to the number of people you want.
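In memory, that counting trick is just a binary search over a sorted points list; a toy sketch where the sorted list stands in for the database query (`rank_of` and `neighbors` are hypothetical names):

```python
import bisect

def rank_of(points_sorted_asc, p):
    # rank = (number of users with strictly more points) + 1
    higher = len(points_sorted_asc) - bisect.bisect_right(points_sorted_asc, p)
    return higher + 1

def neighbors(points_sorted_asc, p, n):
    # n point values just below and just above p (ties with p excluded for brevity)
    lo = bisect.bisect_left(points_sorted_asc, p)
    hi = bisect.bisect_right(points_sorted_asc, p)
    return points_sorted_asc[max(0, lo - n):lo], points_sorted_asc[hi:hi + n]
```

The database version is the same two queries: a COUNT for the rank and a small ordered LIMIT query on each side for the neighbors.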
The tricky thing will be calculating the points on save. You need to use the current time as a bonus multiplier: one point now needs to turn into a number that is worth less than 1 point 5 days from now. If your users frequently gain points, you will need to create a queue to handle the load.