Python Slicing List

I have a 2D list:
a= [['Turn', 'the', 'light', 'out', 'now'], ['Now', ',', 'I', 'll', 'take', 'you', 'by', 'the', 'hand'], ['Hand', 'you', 'anoth', 'drink'], ['Drink', 'it', 'if', 'you', 'can'], ['Can', 'you', 'spend', 'a', 'littl', 'time'], ['Time', 'is', 'slip', 'away'], ['Away', 'from', 'us', ',', 'stay'], ['Stay', 'with', 'me', 'I', 'can', 'make'], ['Make', 'you', 'glad', 'you', 'came']]
Is there an easy way to get a range of values from this list?
If I write:
print a[0][0]
I get 'Turn', but is there a way to slice multiple things out of the list and make a new list? If I have values i = 0 and n = 2, I want to make a new list that starts at a[i][n] and includes the list items on either side of that index.
Thanks for the help

Maybe you mean
print a[0][0:2]
or
print [d[0] for d in a]
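If the goal is a window centered on column n of row i (the item plus its neighbours on either side), a minimal sketch, assuming the window should be clamped at the row boundaries:
i, n = 0, 2
# one item on each side of a[i][n]; max() keeps the lower bound at 0
window = a[i][max(0, n - 1):n + 2]
print window   # ['the', 'light', 'out']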

Does HttpResponseObject change after read().decode('utf-8') in Python urlopen method?

I am currently learning Python 3 and was playing around with urlopen when I noticed that after read().decode('utf-8') the length of my HTTP response object became zero, and I can't figure out why it behaves like that.
from urllib.request import urlopen

story = urlopen('http://sixty-north.com/c/t.txt')
print(story.read().decode('utf-8'))
story_words = []
for line in story:
    line_words = line.decode('utf-8').split()
    for word in line_words:
        story_words.append(word)
story.close()
print(story_words)
When the first print call runs, the HTTP response in story is consumed: its length drops from 593 to 0, and an empty list is printed for story_words. If I remove that print call, the story_words list is populated.
Output with read().decode()
It was the best of times
it was the worst of times
it was the age of wisdom
it was the age of foolishness
it was the epoch of belief
it was the epoch of incredulity
it was the season of Light
it was the season of Darkness
it was the spring of hope
it was the winter of despair
we had everything before us
we had nothing before us
we were all going direct to Heaven
we were all going direct the other way
in short the period was so far like the present period that some of
its noisiest authorities insisted on its being received for good or for
evil in the superlative degree of comparison only
[]
Output without it:
['It', 'was', 'the', 'best', 'of', 'times', 'it', 'was', 'the', 'worst', 'of', 'times', 'it', 'was', 'the', 'age', 'of', 'wisdom', 'it', 'was', 'the', 'age', 'of', 'foolishness', 'it', 'was', 'the', 'epoch', 'of', 'belief', 'it', 'was', 'the', 'epoch', 'of', 'incredulity', 'it', 'was', 'the', 'season', 'of', 'Light', 'it', 'was', 'the', 'season', 'of', 'Darkness', 'it', 'was', 'the', 'spring', 'of', 'hope', 'it', 'was', 'the', 'winter', 'of', 'despair', 'we', 'had', 'everything', 'before', 'us', 'we', 'had', 'nothing', 'before', 'us', 'we', 'were', 'all', 'going', 'direct', 'to', 'Heaven', 'we', 'were', 'all', 'going', 'direct', 'the', 'other', 'way', 'in', 'short', 'the', 'period', 'was', 'so', 'far', 'like', 'the', 'present', 'period', 'that', 'some', 'of', 'its', 'noisiest', 'authorities', 'insisted', 'on', 'its', 'being', 'received', 'for', 'good', 'or', 'for', 'evil', 'in', 'the', 'superlative', 'degree', 'of', 'comparison', 'only']
Calling urlopen returns a file-like buffer object. read() returns the response up to the given number of bytes, or the whole response until EOF when called without an argument. After reading, the buffer is exhausted, so a second read (including iterating over the object, as your for loop does) yields nothing. This means you need to save the returned value in a variable before printing.
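A minimal sketch of that fix: read once, keep the decoded text, and derive everything else from the saved string (splitting the whole text on whitespace is equivalent to the original per-line split):
from urllib.request import urlopen

story = urlopen('http://sixty-north.com/c/t.txt')
text = story.read().decode('utf-8')   # read once and keep the result
story.close()
print(text)
story_words = text.split()            # split on any whitespace
print(story_words)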

Word Counter loop keeps loading forever in Python

I have a DataFrame comments, shown below. I want to build a Counter of the words in the Text field, but only for the UserIds stored in gold_users. The loop that creates the Counter just keeps running forever. Please help me fix this.
comments
This is just part of the DataFrame; the original has many rows.
Id | Text                                              | UserId
6  | Before the 2006 course, there was Allen Knutso... | 3
8  | Also, Theo Johnson-Freyd has some notes from M... | 1
Code
import re
import string
from collections import Counter
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# Text cleaning
punct = set(string.punctuation)
stopword = set(stopwords.words('english'))
lm = WordNetLemmatizer()

def clean_text(text):
    text = ''.join(char.lower() for char in text if char not in punct)
    tokens = re.split(r'\W+', text)
    text = [lm.lemmatize(word) for word in tokens if word not in stopword]
    return tuple(text)  # plain `return text` (a list) was giving "unhashable type: 'list'"

comments['Text'] = comments['Text'].apply(clean_text)

for index, rows in comments.iterrows():
    gold_comments = rows[comments.Text.loc[comments.UserId.isin(gold_users)]]
    Counter(gold_comments)
Expected Output
[['scholar',20],['school',18],['bus',15],['class',14],['teacher',14],['bell',13],['time',12],['books',11],['bag',9],['student',7],......]
Passing a dataframe that already contains only your gold_users ids and texts, the following pure-Python function returns exactly what you need:
def word_count(df):
    counts = dict()
    for text in df['Text']:   # avoid shadowing the built-in str
        words = text.split()
        for word in words:
            if word in counts:
                counts[word] += 1
            else:
                counts[word] = 1
    return list(counts.items())
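A quick usage sketch, assuming Text still holds raw strings (i.e., this runs before the tuple-producing clean_text step above); gold_df is a hypothetical name:
gold_df = comments[comments['UserId'].isin(gold_users)]
print(word_count(gold_df))   # e.g. [('Before', 1), ('the', 1), ...]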
Hope it helps!
You overcomplicated the problem, I am afraid. In Pandas, it is almost never desirable to iterate through the rows. Select the rows that meet your condition, add their Texts, and apply a Counter to the combined list:
gold_users = [3,1]
golden_comments = comments[comments['UserId'].isin(gold_users)]
counter = Counter(golden_comments['Text'].sum())
If necessary, convert the counter to a list of lists:
[[k, v] for k, v in counter.items()]
# [['2006', 1], ['course', 1], ['allen', 1], ...]
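If the list should also be ordered by count, as in the expected output above, counter.most_common() returns the (word, count) pairs in descending order of frequency:
[[k, v] for k, v in counter.most_common()]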
# Initialise packages in session:
import pandas as pd
import re

# comments => Data Frame
comments = pd.DataFrame({"Id": [6, 8],
                         "Text": ["Before the 2006 course, there was Allen Knutso...",
                                  "Also, Theo Johnson-Freyd has some notes from M..."],
                         "UserId": [3, 1]})
# Stopwords to remove from text: stopwords_lst => list of strings
stopwords_lst = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",
                 "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself',
                 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',
                 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these',
                 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having',
                 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as',
                 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into',
                 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in',
                 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when',
                 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some',
                 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can',
                 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've',
                 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',
                 "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't",
                 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn',
                 "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]
# Clean lists of strings using regex: list of strings => function() => list of strings
def clean_string_list(str_lst):
    """Convert all alphanumeric characters in a list of strings to lowercase,
    convert non-alphanumeric characters to whitespace, and trim whitespace on
    both sides of each string.

    Args:
        str_lst (list): Function takes a list of strings.

    Returns:
        (list) A list of strings
    """
    # str.strip() with no argument trims whitespace; the original passed a
    # regex-like string, which strips those literal characters instead
    return [*map(lambda x: re.sub(r'\W+', ' ', x.lower().strip()), str_lst)]
# Store a list of gold users' UserIds: gold_user_ids => list of integers
gold_user_ids = [3, 1]

# Take the subset of the Data Frame containing only gold users: gold_users => Data Frame
# (.copy() avoids pandas' SettingWithCopyWarning when adding a column below)
gold_users = comments[comments["UserId"].isin(gold_user_ids)].copy()

# Apply the function to the list of stopwords and collapse the list into a single pattern: stopwords_re => string
stopwords_re = ' | '.join(clean_string_list(stopwords_lst))

# Clean strings and remove stopwords: cleaned_text => vector of strings
gold_users['cleaned_text'] = [*map(lambda y: re.sub(stopwords_re, ' ', y), clean_string_list(gold_users['Text']))]

# Split each word on whitespace: words => list of strings
words = (' '.join(gold_users['cleaned_text'])).split()

# Count the number of occurrences of each word: word_count => dict
word_count = dict(zip(words, [*map(lambda z: words.count(z), words)]))

# Print words to console: dictionary => stdout
print(word_count)
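As a side note, the words.count(z) pass above is quadratic in the number of words; collections.Counter produces the same mapping in a single sweep. A minimal sketch:
from collections import Counter
word_count = dict(Counter(words))
print(word_count)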

Python3 - how can I sort the list by frequency of its elements? [duplicate]

This question already has answers here:
Sort list by frequency
(8 answers)
Closed 4 years ago.
I'm working on code that can analyze input text.
One of the functions I would like help with makes a list of the words used, in descending order of frequency.
By referring to similar topics on Stack Overflow, I was able to retain only alphanumeric characters (removing all quotation marks, punctuation, etc.) and put each word into a list.
Here is the list I have now (a variable called word_list):
['Hi', 'beautiful', 'creature', 'Said', 'by', 'Rothchild', 'the',
'biggest', 'enemy', 'of', 'Zun', 'Zun', 'started', 'get', 'afraid',
'of', 'him', 'As', 'her', 'best', 'friend', 'Lia', 'can', 'feel',
'her', 'fear', 'Why', 'the', 'the', 'hell', 'you', 'are', 'here']
(FYI, the text file is just random fan fiction I found on the web.)
However, I'm having trouble modifying this list into one ordered by descending frequency. For example, there are three occurrences of 'the' in that list, so 'the' becomes the first element; the next element would be 'of', which occurs twice.
I tried several approaches from similar cases (Counter, sorted), but they keep raising errors.
Can someone teach me how to sort the list?
In addition, after sorting the list, how can I retain only one copy of each repeated word? (My current idea is to use a for loop with indexing: compare each element with the previous one and remove it if it's the same.)
Thank you.
You can use a collections.Counter for your sorting in different ways:
from collections import Counter
lst = ['Hi', 'beautiful', 'creature', 'Said', 'by', 'Rothchild', 'the', 'biggest', 'enemy', 'of', 'Zun', 'Zun', 'started', 'get', 'afraid', 'of', 'him', 'As', 'her', 'best', 'friend', 'Lia', 'can', 'feel', 'her', 'fear', 'Why', 'the', 'the', 'hell', 'you', 'are', 'here']
c = Counter(lst) # mapping: {item: frequency}
# now you can use the counter directly via most_common (1.)
lst = [x for x, _ in c.most_common()]
# or as a sort key (2.)
lst = sorted(set(lst), key=c.get, reverse=True)
# ['the', 'Zun', 'of', 'her', 'Hi', 'hell', 'him', 'friend', 'Lia',
# 'get', 'afraid', 'Rothchild', 'started', 'by', 'can', 'Why', 'fear',
# 'you', 'are', 'biggest', 'enemy', 'Said', 'beautiful', 'here',
# 'best', 'creature', 'As', 'feel']
These approaches use either the Counter keys (1.) or set for the removal of duplicates.
However, if you want the sort to be stable with regard to the original list (keep order of occurrence for equal frequency items), you might have to do this, following the collections.OrderedDict based recipe for duplicate removal:
from collections import OrderedDict
lst = sorted(OrderedDict.fromkeys(lst), key=c.get, reverse=True)
# ['the', 'of', 'Zun', 'her', 'Hi', 'beautiful', 'creature', 'Said',
# 'by', 'Rothchild', 'biggest', 'enemy', 'started', 'get', 'afraid',
# 'him', 'As', 'best', 'friend', 'Lia', 'can', 'feel', 'fear', 'Why',
# 'hell', 'you', 'are', 'here']
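On Python 3.7 and later, a plain dict also preserves insertion order, so the same stable variant works without the OrderedDict import:
lst = sorted(dict.fromkeys(lst), key=c.get, reverse=True)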

Fuzzy Group By, Grouping Similar Words

This question has been asked here before:
What is a good strategy to group similar words?
but no clear answer is given on how to "group" items. The solution based on difflib is basically search: for a given item, difflib can return the most similar word out of a list. But how can this be used for grouping?
I would like to reduce
['ape', 'appel', 'apple', 'peach', 'puppy']
to
['ape', 'appel', 'peach', 'puppy']
or
['ape', 'apple', 'peach', 'puppy']
One idea I tried was: for each item, iterate through the list; if get_close_matches returns more than one match, use it, otherwise keep the word as is. This partly worked, but it can suggest apple for appel, then appel for apple, so these words would simply switch places and nothing would change.
I would appreciate any pointers, names of libraries, etc.
Note: in terms of performance, we have a list of 300,000 items, and get_close_matches seems a bit slow. Does anyone know of a C/C++ based solution?
Thanks,
Note: Further investigation revealed that k-medoids (as well as hierarchical clustering) is the right algorithm, since k-medoids does not require "centers"; it uses the data points themselves as centers (these points are called medoids, hence the name). In the word-grouping case, the medoid would be the representative element of that group / cluster.
You need to normalize the groups: in each group, pick one word or code that represents the group, then group the words by their representative.
Some possible ways:
Pick the first encountered word.
Pick the lexicographically first word.
Derive a pattern for all the words.
Pick a unique index.
Use the Soundex code as the pattern.
Grouping the words could be difficult, though. If A is similar to B, and B is similar to C, A and C are not necessarily similar to each other. If B is the representative, both A and C could be included in the group. But if A or C is the representative, the other could not be included.
Going by the first alternative (first encountered word):
class Seeder:
    def __init__(self):
        self.seeds = set()
        self.cache = dict()

    def get_seed(self, word):
        LIMIT = 2
        seed = self.cache.get(word, None)
        if seed is not None:
            return seed
        for seed in self.seeds:
            if self.distance(seed, word) <= LIMIT:
                self.cache[word] = seed
                return seed
        self.seeds.add(word)
        self.cache[word] = word
        return word

    def distance(self, s1, s2):
        # standard dynamic-programming Levenshtein distance
        l1 = len(s1)
        l2 = len(s2)
        matrix = [range(zz, zz + l1 + 1) for zz in xrange(l2 + 1)]
        for zz in xrange(0, l2):
            for sz in xrange(0, l1):
                if s1[sz] == s2[zz]:
                    matrix[zz+1][sz+1] = min(matrix[zz+1][sz] + 1, matrix[zz][sz+1] + 1, matrix[zz][sz])
                else:
                    matrix[zz+1][sz+1] = min(matrix[zz+1][sz] + 1, matrix[zz][sz+1] + 1, matrix[zz][sz] + 1)
        return matrix[l2][l1]

import itertools

def group_similar(words):
    seeder = Seeder()
    words = sorted(words, key=seeder.get_seed)
    groups = itertools.groupby(words, key=seeder.get_seed)
    return [list(v) for k, v in groups]
Example:
import pprint
pprint.pprint(group_similar([
    'the', 'be', 'to', 'of', 'and', 'a', 'in', 'that', 'have',
    'I', 'it', 'for', 'not', 'on', 'with', 'he', 'as', 'you',
    'do', 'at', 'this', 'but', 'his', 'by', 'from', 'they', 'we',
    'say', 'her', 'she', 'or', 'an', 'will', 'my', 'one', 'all',
    'would', 'there', 'their', 'what', 'so', 'up', 'out', 'if',
    'about', 'who', 'get', 'which', 'go', 'me', 'when', 'make',
    'can', 'like', 'time', 'no', 'just', 'him', 'know', 'take',
    'people', 'into', 'year', 'your', 'good', 'some', 'could',
    'them', 'see', 'other', 'than', 'then', 'now', 'look',
    'only', 'come', 'its', 'over', 'think', 'also', 'back',
    'after', 'use', 'two', 'how', 'our', 'work', 'first', 'well',
    'way', 'even', 'new', 'want', 'because', 'any', 'these',
    'give', 'day', 'most', 'us'
]), width=120)
Output:
[['after'],
['also'],
['and', 'a', 'in', 'on', 'as', 'at', 'an', 'one', 'all', 'can', 'no', 'want', 'any'],
['back'],
['because'],
['but', 'about', 'get', 'just'],
['first'],
['from'],
['good', 'look'],
['have', 'make', 'give'],
['his', 'her', 'if', 'him', 'its', 'how', 'us'],
['into'],
['know', 'new'],
['like', 'time', 'take'],
['most'],
['of', 'I', 'it', 'for', 'not', 'he', 'you', 'do', 'by', 'we', 'or', 'my', 'so', 'up', 'out', 'go', 'me', 'now'],
['only'],
['over', 'our', 'even'],
['people'],
['say', 'she', 'way', 'day'],
['some', 'see', 'come'],
['the', 'be', 'to', 'that', 'this', 'they', 'there', 'their', 'them', 'other', 'then', 'use', 'two', 'these'],
['think'],
['well'],
['what', 'who', 'when', 'than'],
['with', 'will', 'which'],
['work'],
['would', 'could'],
['year', 'your']]
You have to decide, among the close matches, which word you want to use: maybe take the first element of the list that get_close_matches returns, or just pick a random element from the close matches.
There must be some sort of rule for it.
In [19]: import difflib
In [20]: a = ['ape', 'appel', 'apple', 'peach', 'puppy']
In [21]: a = ['appel', 'apple', 'peach', 'puppy']
In [22]: b = difflib.get_close_matches('ape',a)
In [23]: b
Out[23]: ['apple', 'appel']
In [24]: import random
In [25]: c = random.choice(b)
In [26]: c
Out[26]: 'apple'
Now remove c from the initial list, and that's it.
For c++, you can use Levenshtein_distance
Here is another version using the Affinity Propagation algorithm.
import numpy as np
import scipy.linalg as lin
import Levenshtein as leven
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.cluster import AffinityPropagation
import itertools

words = np.array(
    ['the', 'be', 'to', 'of', 'and', 'a', 'in', 'that', 'have',
     'I', 'it', 'for', 'not', 'on', 'with', 'he', 'as', 'you',
     'do', 'at', 'this', 'but', 'his', 'by', 'from', 'they', 'we',
     'say', 'her', 'she', 'or', 'an', 'will', 'my', 'one', 'all',
     'would', 'there', 'their', 'what', 'so', 'up', 'out', 'if',
     'about', 'who', 'get', 'which', 'go', 'me', 'when', 'make',
     'can', 'like', 'time', 'no', 'just', 'him', 'know', 'take',
     'people', 'into', 'year', 'your', 'good', 'some', 'could',
     'them', 'see', 'other', 'than', 'then', 'now', 'look',
     'only', 'come', 'its', 'over', 'think', 'also', 'back',
     'after', 'use', 'two', 'how', 'our', 'work', 'first', 'well',
     'way', 'even', 'new', 'want', 'because', 'any', 'these',
     'give', 'day', 'most', 'us'])

print "calculating distances..."
(dim,) = words.shape
# negated Levenshtein distance serves as the similarity
f = lambda (x, y): -leven.distance(x, y)
res = np.fromiter(itertools.imap(f, itertools.product(words, words)), dtype=np.uint8)
A = np.reshape(res, (dim, dim))
af = AffinityPropagation().fit(A)
cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_
unique_labels = set(labels)
for i in unique_labels:
    print words[labels == i]
Distances had to be converted to similarities; I did that by taking the negative of the distance. The output is
['to' 'you' 'do' 'by' 'so' 'who' 'go' 'into' 'also' 'two']
['it' 'with' 'at' 'if' 'get' 'its' 'first']
['of' 'for' 'from' 'or' 'your' 'look' 'after' 'work']
['the' 'be' 'have' 'I' 'he' 'we' 'her' 'she' 'me' 'give']
['this' 'his' 'which' 'him']
['and' 'a' 'in' 'an' 'my' 'all' 'can' 'any']
['on' 'one' 'good' 'some' 'see' 'only' 'come' 'over']
['would' 'could']
['but' 'out' 'about' 'our' 'most']
['make' 'like' 'time' 'take' 'back']
['that' 'they' 'there' 'their' 'when' 'them' 'other' 'than' 'then' 'think'
'even' 'these']
['not' 'no' 'know' 'now' 'how' 'new']
['will' 'people' 'year' 'well']
['say' 'what' 'way' 'want' 'day']
['because']
['as' 'up' 'just' 'use' 'us']
Another method could be matrix factorization, using SVD. First we create the word distance matrix; for 100 words this would be a 100 x 100 matrix representing the distance from each word to every other word. Then SVD is run on this matrix, and the u in the resulting u, s, v can be seen as the membership strength to each cluster.
Code
import numpy as np
import scipy.linalg as lin
import Levenshtein as leven
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import itertools

words = np.array(
    ['the', 'be', 'to', 'of', 'and', 'a', 'in', 'that', 'have',
     'I', 'it', 'for', 'not', 'on', 'with', 'he', 'as', 'you',
     'do', 'at', 'this', 'but', 'his', 'by', 'from', 'they', 'we',
     'say', 'her', 'she', 'or', 'an', 'will', 'my', 'one', 'all',
     'would', 'there', 'their', 'what', 'so', 'up', 'out', 'if',
     'about', 'who', 'get', 'which', 'go', 'me', 'when', 'make',
     'can', 'like', 'time', 'no', 'just', 'him', 'know', 'take',
     'people', 'into', 'year', 'your', 'good', 'some', 'could',
     'them', 'see', 'other', 'than', 'then', 'now', 'look',
     'only', 'come', 'its', 'over', 'think', 'also', 'back',
     'after', 'use', 'two', 'how', 'our', 'work', 'first', 'well',
     'way', 'even', 'new', 'want', 'because', 'any', 'these',
     'give', 'day', 'most', 'us'])

print "calculating distances..."
(dim,) = words.shape
f = lambda (x, y): leven.distance(x, y)
res = np.fromiter(itertools.imap(f, itertools.product(words, words)),
                  dtype=np.uint8)
A = np.reshape(res, (dim, dim))

print "svd..."
u, s, v = lin.svd(A, full_matrices=False)
print u.shape
print s.shape
print s
print v.shape

data = u[:, 0:10]
k = KMeans(init='k-means++', k=25, n_init=10)
k.fit(data)
centroids = k.cluster_centers_
labels = k.labels_
print labels

for i in range(np.max(labels) + 1):  # +1 so the last cluster label is included
    print words[labels == i]

def dist(x, y):
    return np.sqrt(np.sum((x - y)**2, axis=1))

print "centroid points.."
for i, c in enumerate(centroids):
    idx = np.argmin(dist(c, data[labels == i]))
    print words[labels == i][idx]
    print words[labels == i]

plt.plot(centroids[:, 0], centroids[:, 1], 'x')
plt.hold(True)
plt.plot(u[:, 0], u[:, 1], '.')
plt.show()

from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.plot(u[:, 0], u[:, 1], u[:, 2], '.', zs=0,
        zdir='z', label='zs=0, zdir=z')
plt.show()
The result
any
['and' 'an' 'can' 'any']
do
['to' 'you' 'do' 'so' 'go' 'no' 'two' 'how']
when
['who' 'when' 'well']
my
['be' 'I' 'by' 'we' 'my' 'up' 'me' 'use']
your
['for' 'or' 'out' 'about' 'your' 'our']
its
['it' 'his' 'if' 'him' 'its']
could
['would' 'people' 'could']
this
['this' 'think' 'these']
she
['the' 'he' 'she' 'see']
back
['all' 'back' 'want']
one
['of' 'on' 'one' 'only' 'even' 'new']
just
['but' 'just' 'first' 'most']
come
['some' 'come']
that
['that' 'than']
way
['say' 'what' 'way' 'day']
like
['like' 'time' 'give']
in
['in' 'into']
get
['her' 'get' 'year']
because
['because']
will
['with' 'will' 'which']
over
['other' 'over' 'after']
as
['a' 'as' 'at' 'also' 'us']
them
['they' 'there' 'their' 'them' 'then']
good
['not' 'from' 'know' 'good' 'now' 'look' 'work']
have
['have' 'make' 'take']
The selection of k, the number of clusters, is important; k=25 gives much better results than k=20, for instance.
The code also selects a representative word for each cluster by picking the word whose u[..] coordinate is closest to the cluster centroid.
Here is an approach based on medoids. First install MlPy; on Ubuntu:
sudo apt-get install python-mlpy
Then:
import numpy as np
import mlpy

class distance:
    def compute(self, s1, s2):
        # standard dynamic-programming Levenshtein distance
        l1 = len(s1)
        l2 = len(s2)
        matrix = [range(zz, zz + l1 + 1) for zz in xrange(l2 + 1)]
        for zz in xrange(0, l2):
            for sz in xrange(0, l1):
                if s1[sz] == s2[zz]:
                    matrix[zz+1][sz+1] = min(matrix[zz+1][sz] + 1, matrix[zz][sz+1] + 1, matrix[zz][sz])
                else:
                    matrix[zz+1][sz+1] = min(matrix[zz+1][sz] + 1, matrix[zz][sz+1] + 1, matrix[zz][sz] + 1)
        return matrix[l2][l1]

x = np.array(['ape', 'appel', 'apple', 'peach', 'puppy'])
km = mlpy.Kmedoids(k=3, dist=distance())
medoids, clusters, a, b = km.compute(x)
print medoids
print clusters
print a
print x[medoids]

for i, c in enumerate(x[medoids]):
    print "medoid", c
    print x[clusters[a == i]]
The output is
[4 3 1]
[0 2]
[2 2]
['puppy' 'peach' 'appel']
medoid puppy
[]
medoid peach
[]
medoid appel
['ape' 'apple']
With the bigger word list and k=10:
medoid he
['or' 'his' 'my' 'have' 'if' 'year' 'of' 'who' 'us' 'use' 'people' 'see'
'make' 'be' 'up' 'we' 'the' 'one' 'her' 'by' 'it' 'him' 'she' 'me' 'over'
'after' 'get' 'what' 'I']
medoid out
['just' 'only' 'your' 'you' 'could' 'our' 'most' 'first' 'would' 'but'
'about']
medoid to
['from' 'go' 'its' 'do' 'into' 'so' 'for' 'also' 'no' 'two']
medoid now
['new' 'how' 'know' 'not']
medoid time
['like' 'take' 'come' 'some' 'give']
medoid because
[]
medoid an
['want' 'on' 'in' 'back' 'say' 'and' 'a' 'all' 'can' 'as' 'way' 'at' 'day'
'any']
medoid look
['work' 'good']
medoid will
['with' 'well' 'which']
medoid then
['think' 'that' 'these' 'even' 'their' 'when' 'other' 'this' 'they' 'there'
'than' 'them']

From a list of strings, how do you get a tuple/list of words which contain 3 or more characters and are evenly spaced, in Python? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Checking if a string's characters are ascending alphabetically and its ascent is evenly spaced python
I have a list of strings/words:
mylist = ['twas', 'brillig', 'and', 'the', 'slithy', 'toves', 'did', 'gyre', 'and', 'gimble', 'in', 'the', 'wabe', 'all', 'mimsy', 'were', 'the', 'borogoves', 'and', 'the', 'mome', 'raths', 'outgrabe', '"beware', 'the', 'jabberwock', 'my', 'son', 'the', 'jaws', 'that', 'bite', 'the', 'claws', 'that', 'catch', 'beware', 'the', 'jubjub', 'bird', 'and', 'shun', 'the', 'frumious', 'bandersnatch', 'he', 'took', 'his', 'vorpal', 'sword', 'in', 'hand', 'long', 'time', 'the', 'manxome', 'foe', 'he', 'sought', 'so', 'rested', 'he', 'by', 'the', 'tumtum', 'tree', 'and', 'stood', 'awhile', 'in', 'thought', 'and', 'as', 'in', 'uffish', 'thought', 'he', 'stood', 'the', 'jabberwock', 'with', 'eyes', 'of', 'flame', 'came', 'whiffling', 'through', 'the', 'tulgey', 'wood', 'and', 'burbled', 'as', 'it', 'came', 'one', 'two', 'one', 'two', 'and', 'through', 'and', 'through', 'the', 'vorpal', 'blade', 'went', 'snicker-snack', 'he', 'left', 'it', 'dead', 'and', 'with', 'its', 'head', 'he', 'went', 'galumphing', 'back', '"and', 'has', 'thou', 'slain', 'the', 'jabberwock', 'come', 'to', 'my', 'arms', 'my', 'beamish', 'boy', 'o', 'frabjous', 'day', 'callooh', 'callay', 'he', 'chortled', 'in', 'his', 'joy', '`twas', 'brillig', 'and', 'the', 'slithy', 'toves', 'did', 'gyre', 'and', 'gimble', 'in', 'the', 'wabe', 'all', 'mimsy', 'were', 'the', 'borogoves', 'and', 'the', 'mome', 'raths', 'outgrabe']
Firstly, I need to get only the words which have 3 or more characters in them. I assume a for loop for that, or something.
Then I need a list containing only the words whose letters increase from left to right alphabetically and are a fixed number apart (e.g. ('ace', 2) or ('ceg', 2); it does not have to be 2). The list also has to be sorted in alphabetical order, and each element should be a tuple consisting of the word and the character difference.
I think I have to use a for loop, but I'm not sure how to use it in this case, and I'm not sure how to do the second part.
For the list above, the answer I should get is:
([])
I do not have the newest version of Python.
Any help is greatly appreciated.
You should probably start by learning how to use a for loop. A for loop will get things out of a collection and assign them to a variable:
for letter in "strings are collections":
    print letter
Or..
for thing in ['this is a list', 'of', 4, 'things']:
    if thing == 4:
        print '4 is in the list'
If you're able to do more than this, then try something, figure out where you get stuck, and ask as more specifically what you need help with.
Take this problem in steps.
To filter words with length >= 3:
[w for w in mylist if len(w) >= 3]
To see whether a word's letters increase at a regular interval, calculate the difference between each pair of consecutive letters, build a set of those differences, and check whether its length == 1:
diff = lambda word: len({ord(n) - ord(c) for n, c in zip(word[1:], word)}) == 1
Now use this new function to filter the remaining words:
[w for w in mylist if len(w) >= 3 and diff(w)]
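To also enforce that the letters strictly increase (a single positive step) and to build the sorted (word, step) tuples the question asks for, here is a minimal sketch along the same lines; the step() helper is my own name for the idea behind diff:
def step(word):
    # the set of differences between consecutive letters
    diffs = {ord(n) - ord(c) for n, c in zip(word[1:], word)}
    return diffs.pop() if len(diffs) == 1 else None

pairs = [(w, step(w)) for w in mylist if len(w) >= 3]
result = sorted([(w, d) for w, d in pairs if d is not None and d > 0])
print result   # the question states this should come out empty for the list above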
