christabor/namebot

View on GitHub

Showing 83 of 83 total issues

Similar blocks of code found in 2 locations. Consider refactoring.
Open

class GetVowelRepeatFrequencyTestCase(unittest.TestCase):

    def test_basic(self):
        res = metrics.get_consonant_duplicate_repeat_frequency(
            ['food', 'beef', 'cheese'])
Severity: Major
Found in namebot/tests/test_metrics.py and 1 other location - About 1 hr to fix
namebot/tests/test_metrics.py on lines 128..133

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 43.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
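The tuning described above lives in `.codeclimate.yml`. A hedged sketch of the relevant keys (names follow codeclimate-duplication's README; verify against the documentation for your analysis version):

```yaml
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          # Raise to report less duplication, lower to report more.
          mass_threshold: 50
```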

Refactorings

Further Reading
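A common fix for near-identical test methods like the ones flagged here is a single table-driven test. A minimal sketch, using a hypothetical `add_suffix` stand-in rather than the real `namebot.metrics`/`namebot.techniques` calls:

```python
import unittest


def add_suffix(words, suffix):
    """Hypothetical stand-in for the namebot function under test."""
    return [word + suffix for word in words]


class AffixTestCase(unittest.TestCase):
    """One parameterized test replaces several duplicated test methods."""

    def test_affixes(self):
        cases = [
            ('age', ['shopage', 'propage']),
            ('able', ['shopable', 'propable']),
        ]
        for suffix, expected in cases:
            # subTest reports each failing case individually.
            with self.subTest(suffix=suffix):
                self.assertEqual(add_suffix(['shop', 'prop'], suffix), expected)
```

With this shape, adding a new case is one line in the table instead of another copied method.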

Function score_pronounceability has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

def score_pronounceability(word):
    """Get the ratio of vowels to consonants, a very basic measurement.

    Half vowels and half consonants indicates a highly pronounceable word.
    For example, 0.5 / 0.5 = 1.0, so one is perfect, and lower is worse.
Severity: Minor
Found in namebot/scoring.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
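As an illustration of the second and third rules, nesting is what drives the score up, and guard clauses usually flatten it. A minimal sketch (illustrative only, not namebot's actual code):

```python
def score_nested(word):
    """Nested conditionals: each extra level adds to cognitive complexity."""
    score = 0
    if word:
        if word.isalpha():
            if len(word) > 2:
                score = len(word)
    return score


def score_flat(word):
    """Same behavior with guard clauses: linear flow, lower complexity."""
    if not word or not word.isalpha():
        return 0
    if len(word) <= 2:
        return 0
    return len(word)
```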

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_suffix(self):
        res = techniques.suffixify(self.words)
        self.assertEqual(res[:3], ['shopage', 'shopable', 'shopible'])
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 1 hr to fix
namebot/tests/test_techniques.py on lines 145..147
namebot/tests/test_techniques.py on lines 153..155

This issue has a mass of 39.

Function _get_lemma_names has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def _get_lemma_names(sub_synset, use_definitions=False):
    """Get lemma names."""
    results = []
    if sub_synset():
        for v in sub_synset():
Severity: Minor
Found in namebot/nlp.py - About 1 hr to fix

Function get_synsets has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def get_synsets(words, use_definitions=False, clean=False):
    """Brute force loop on a synset ring to get all related words.

    You are expected to filter or remove any that are not relevant separately,
    if the resultant set is too long.
Severity: Minor
Found in namebot/nlp.py - About 1 hr to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_prefix(self):
        res = techniques.prefixify(self.words)
        self.assertEqual(res[:3], ['ennishop', 'epishop', 'equishop'])
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 1 hr to fix
namebot/tests/test_techniques.py on lines 149..151
namebot/tests/test_techniques.py on lines 153..155

This issue has a mass of 39.

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_duplifix(self):
        res = techniques.duplifixify(self.words)
        self.assertEqual(res[:3], ['shop ahop', 'shop bhop', 'shop chop'])
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 1 hr to fix
namebot/tests/test_techniques.py on lines 145..147
namebot/tests/test_techniques.py on lines 149..151

This issue has a mass of 39.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_disfix_nosingle_pairs(self):
        words = ['shop', 'prop']
        res = techniques.disfixify(words)
        self.assertEqual(res, ['shop', 'prop'])
Severity: Major
Found in namebot/tests/test_techniques.py and 1 other location - About 1 hr to fix
namebot/tests/test_techniques.py on lines 161..164

This issue has a mass of 38.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_disfix(self):
        words = ['propagating', 'gigantic']
        res = techniques.disfixify(words)
        self.assertEqual(res, ['pagating', 'antic'])
Severity: Major
Found in namebot/tests/test_techniques.py and 1 other location - About 1 hr to fix
namebot/tests/test_techniques.py on lines 166..169

This issue has a mass of 38.

Function flatten has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def flatten(lst):
    """Flatten a list with arbitrary levels of nesting.

    CREDIT: http://stackoverflow.com/questions/10823877/
        what-is-the-fastest-way-to-flatten-arbitrarily-nested-lists-in-python
Severity: Minor
Found in namebot/normalization.py - About 55 mins to fix

Function duplifixify has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def duplifixify(words):
    """Apply a duplifix technique to a set of words (e.g: teeny weeny, etc...).

    :param words (list) - The list of words to operate on.
    :rtype new_arr (list): the updated *fixed words
Severity: Minor
Found in namebot/techniques.py - About 55 mins to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_infix_novowels(self):
        words = ['shp', 'prp']
        res = techniques.infixify(words)
        self.assertEqual(res, words)
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 45 mins to fix
namebot/tests/test_techniques.py on lines 171..174
namebot/tests/test_techniques.py on lines 198..201

This issue has a mass of 35.

Avoid deeply nested control flow statements.
Open

                    if word[-1] is 'e':
                        if word[-2] is not 'i':
                            new_arr.append('{}{}'.format(word[:-2], suffix))
                        else:
                            new_arr.append('{}{}'.format(word[:-1], suffix))
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix
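The flagged branch can be extracted into a small, flat helper. Note also that the original compares characters with `is`, which tests object identity and only works by accident of CPython string interning; `==` (or the slicing below) is the correct comparison. A hedged sketch of just this branch, assuming the word already ends in 'e' as the enclosing condition guarantees:

```python
def suffixed_e_word(word, suffix):
    """Flattened equivalent of the nested branch, for words ending in 'e'."""
    # Keep everything but the final 'e' for an 'ie' ending;
    # otherwise drop the 'e' and the character before it.
    stem = word[:-1] if word.endswith('ie') else word[:-2]
    return '{}{}'.format(stem, suffix)
```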

Function score_length has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def score_length(word):
    """Return a score, 1-5, of the length of the word.

    Really long, or really short words get a lower score.
    There is no hard science, but popular opinion suggests
Severity: Minor
Found in namebot/scoring.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                    if len(p) + len(p2) > 2:
                        if re.search(
                            _regexes['all_vowels'], p) or re.search(
                                _regexes['all_vowels'], p2):
                                    if p[-1] is p2[0]:
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                    if any(bad_matches):
                        continue
                    replacer = '{}{}{}'.format(first, infix_pair, second)
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_vowel_u(self):
        self.assertEqual(techniques.reduplication_ablaut(
            ['cat', 'dog'], random=False, vowel='u'),
            ['cat cut', 'dog dug'])
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 45 mins to fix
namebot/tests/test_techniques.py on lines 120..123
namebot/tests/test_techniques.py on lines 125..128

This issue has a mass of 35.

Function all_prefix_first_vowel has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def all_prefix_first_vowel(word, letters=list(ascii_uppercase)):
    """Find the first vowel in a word and prefixes with consonants.

    :param word (str) - the word to update
    :param letters (list) - the letters to use for prefixing.
Severity: Minor
Found in namebot/techniques.py - About 45 mins to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def test_infix_nosingle_pairs(self):
        words = ['shop', 'prop']
        res = techniques.infixify(words)
        self.assertEqual(res, words)
Severity: Major
Found in namebot/tests/test_techniques.py and 2 other locations - About 45 mins to fix
namebot/tests/test_techniques.py on lines 171..174
namebot/tests/test_techniques.py on lines 188..191

This issue has a mass of 35.

Avoid deeply nested control flow statements.
Open

                        if l3 and len(l3) > 0:
                            for v in l3:
                                new_arr.append(l1 + v + l2)
                            else:
                                new_arr.append('{}{}{}'.format(l1, 't', l2))
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix