improvement for speed and clarity #1
Conversation
* phrases as tuples instead of lists
* can use set with tuple of strings to get unique phrases
Thanks for the changes. I had made a mental note to change this section myself when I reread the code, but somehow lost track of it.
@@ -166,20 +166,12 @@ def _generate_phrases(self, sentences):
@return: List of List of strings where each sublist is a collection
Would you please change the @return to
@return: List of string tuples where each tuple is a collection of words forming a contender phrase.
so that code change and corresponding documentation are part of a single commit.
Here we should maybe even say @return: Set of string tuples...
because the return value is now a set instead of a list.
Yes that would be better.
unique_phrase_list.append(phrase)
return unique_phrase_list
phrase_list.update(self._get_phrase_list_from_words(word_list))
return phrase_list
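The revised loop can be sketched in full as follows. This is a minimal standalone sketch, not the actual rake-nltk code: `STOPWORDS` and the simplified phrase splitter are stand-ins for the real `_get_phrase_list_from_words`, which also splits on punctuation.

```python
# Hypothetical simplified stand-in for rake-nltk's phrase extraction.
STOPWORDS = {"is", "a", "the", "have", "of"}

def get_phrase_list_from_words(word_list):
    """Group consecutive non-stopwords into tuple phrases."""
    phrases, current = [], []
    for word in word_list:
        if word in STOPWORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(word)
    if current:
        phrases.append(tuple(current))
    return phrases

def generate_phrases(sentences):
    """Tuples are hashable, so a set deduplicates phrases directly."""
    phrase_list = set()
    for sentence in sentences:
        word_list = [word.lower() for word in sentence.split()]
        phrase_list.update(get_phrase_list_from_words(word_list))
    return phrase_list

print(generate_phrases(["Red apples have a good flavour",
                        "Red apples have a good colour"]))
```

Because `phrase_list` is a set, the repeated `('red', 'apples')` phrase appears only once in the result, with no explicit uniqueness check in the loop.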
def _get_phrase_list_from_words(self, word_list):
'''Method to create contender phrases from the list of words that form a
Please change
List of phrases: [['red', 'apples'], ['good'], ['flavour']]
in the function definition to
List of phrases: [('red', 'apples'), ('good',), ('flavour',)]
to reflect the change.
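As an aside on the suggested docstring: a one-element tuple in Python needs a trailing comma, since parentheses alone just group an expression. A quick check:

```python
# Parentheses alone do not make a tuple; the comma does.
assert ('good') == 'good'          # plain string
assert type(('good',)) is tuple    # one-element tuple
assert ('good',)[0] == 'good'

# The docstring example, written as true tuples:
phrases = [('red', 'apples'), ('good',), ('flavour',)]
assert all(isinstance(p, tuple) for p in phrases)
```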
Done.
@@ -53,13 +53,15 @@ def test_generate_phrases(self):
['define'], ['sequence'], ['one'], ['words'],
['provide'], ['compact', 'representation'], ['document'],
['content']]
phrase_list = set(tuple(phrase) for phrase in phrase_list)
As the expected set is a hard-coded list of phrases, maybe it's a good idea to create the set in line 52 as a set of tuples rather than taking another pass at it here. The same goes for the other two changes. If you choose not to do it, I will do it myself.
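What the suggestion amounts to is writing the expected value directly as a set of tuples, so the test no longer needs the extra conversion pass. A sketch, using the phrase values visible in the diff above (the variable name is illustrative, not the exact test code):

```python
# Expected phrases hard-coded as a set of tuples, so no second pass
# with set(tuple(...)) is needed in the test body.
expected_phrases = {
    ('define',), ('sequence',), ('one',), ('words',),
    ('provide',), ('compact', 'representation'), ('document',),
    ('content',),
}
assert ('compact', 'representation') in expected_phrases
```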
I'll do it, I was just lazy about it when updating tests.
Thanks for the additional 0.1% coverage 💯
Hi, and thanks for writing the RAKE algorithm using NLTK.
I've noticed that the code keeps phrases as lists of words, which makes it harder to compute the list of unique phrases. I changed this to use tuples instead of lists: the collection of phrases can then be a set, which gives you unique phrases with less code, and faster as well.
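The core point is that lists are unhashable and cannot go into a set, while tuples can, so deduplication becomes a one-liner. A minimal illustration:

```python
phrase = ['red', 'apples']
try:
    {phrase}            # lists cannot be set members
except TypeError as e:
    print('list:', e)   # unhashable type: 'list'

# Tuples are hashable, so duplicates collapse automatically.
unique = {tuple(p) for p in [['red', 'apples'], ['good'], ['red', 'apples']]}
print(unique)
```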