I've tried to optimize it as much as I know how (using generators instead of list comprehensions made a big difference), and I've run out of ideas. Right now it takes about a second on a list of about 200K words. I have no real need to improve it; it's just for fun.

```python
words = word_reader('/usr/share/dict/words')
scored = ((score_word(word), word) for word in words
          if len(word) > 1
          and set(word).issubset(set(rack))
          and spellable(word, rack))
```

Without going too far from your basic code, here are some fairly simple optimizations.

First, change your word reader to:

```python
def word_reader(filename, L):
    return (word.strip() for word in open(filename)
            if 2 < len(word) < L + 2)
```

and call it as:

```python
words = word_reader('/usr/share/dict/words', len(rack))
```

Remember that `word` is still unstripped of newline characters in these comparisons, which is why the bounds are 2 and L + 2 rather than 1 and L. This eliminates words that are too long or too short before we get any further into the process, and it gives the biggest improvement of all of my suggested changes. One caveat: the last word in the file probably won't have a newline at the end, but on my computer the last word is "études", which won't be found by our method anyway. Of course, you could also create your own dictionary beforehand from the original, removing entries that can never be valid: those that aren't the right length or that contain letters outside a-z.

Next, Ferran suggested a variable for the rack set, which is a good idea: build `set(rack)` once instead of rebuilding it for every word. The purpose of using the sets at all was to cheaply weed out words that don't have any shot, but you are also getting a pretty major slowdown from making a set out of every word.
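Putting the suggestions together, here is a minimal self-contained sketch. The helpers `score_word` and `spellable`, the flat letter scores, and the `best_words` wrapper are stand-ins for the asker's own code (which isn't shown in full), so treat the details as illustrative rather than the original implementation:

```python
from collections import Counter

# Toy scoring table: every letter worth 1 point (the real table is the
# asker's; this is just a placeholder so the sketch runs).
SCORES = {c: 1 for c in "abcdefghijklmnopqrstuvwxyz"}

def score_word(word):
    return sum(SCORES[c] for c in word)

def spellable(word, rack):
    # True if the rack has at least as many copies of each letter as
    # the word needs.
    need = Counter(word)
    have = Counter(rack)
    return all(have[c] >= n for c, n in need.items())

def word_reader(lines, L):
    # lines still carry their trailing newline, so the useful raw
    # length range is 2 < len(line) < L + 2 (the suggested filter).
    return (line.strip() for line in lines
            if 2 < len(line) < L + 2)

def best_words(raw_lines, rack):
    rack_set = set(rack)  # built once, per Ferran's suggestion
    words = word_reader(raw_lines, len(rack))
    scored = ((score_word(w), w) for w in words
              if rack_set.issuperset(w) and spellable(w, rack))
    return sorted(scored, reverse=True)

# In the real program raw_lines would be open('/usr/share/dict/words');
# a tiny in-memory list keeps the example self-contained.
raw = ["cat\n", "act\n", "catalog\n", "ta\n", "dog\n"]
print(best_words(raw, "tac"))
```

The `rack_set.issuperset(w)` check is the cheap set-based prune from the original code, now using a set built once per rack; the length filter in `word_reader` discards oversized words before any set is built for them.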