christabor/namebot

namebot/techniques.py

Summary

Maintainability: F (estimated 4 days to fix)
Test Coverage: (no data)

File techniques.py has 716 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""Primary techniques for the core functionality of namebot."""

from __future__ import absolute_import
from __future__ import division

Severity: Major
Found in namebot/techniques.py - About 1 day to fix

Function make_vowel has a Cognitive Complexity of 38 (exceeds 5 allowed). Consider refactoring.
Open

def make_vowel(words, vowel_type, vowel_index):
    """Primary for all Portmanteau generators.

    This creates the portmanteau based on :vowel_index, and :vowel_type.


Severity: Minor
Found in namebot/techniques.py - About 5 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

Further reading
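The rules above can be illustrated with a small, hypothetical sketch (not code from this repository): the two functions below behave identically, but the flat version leans on a comprehension, the kind of language shorthand that adds no complexity, while the nested version pays an increasing penalty for each nested flow break. The per-line scores are illustrative.

```python
def count_vowels_nested(words):
    """Nested version: each flow-breaking structure adds complexity,
    and nesting increases each penalty."""
    total = 0
    for word in words:              # +1 (flow break)
        if word:                    # +2 (+1, nested one level)
            for ch in word:         # +3 (+1, nested two levels)
                if ch in 'aeiou':   # +4 (+1, nested three levels)
                    total += 1
    return total


def count_vowels_flat(words):
    """Flat version: a generator expression is shorthand the language
    provides, so it adds no extra cognitive complexity."""
    return sum(ch in 'aeiou' for word in words for ch in word)
```

This is the general shape of the fix for a score like make_vowel's 38: replace nested flow-breaking structures with flat expressions or guard clauses.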

Function suffixify has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

def suffixify(words):
    """Apply a suffix technique to a set of words.

    :param words (list) - The list of words to operate on.
    :rtype new_arr (list): the updated *fixed words
Severity: Minor
Found in namebot/techniques.py - About 3 hrs to fix


Function make_portmanteau_split has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
Open

def make_portmanteau_split(words):
    """Make a portmanteau, split by vowel/consonant combos.

    Based on the word formation of nikon: [ni]pp[on] go[k]aku,
    which is comprised of Nippon + Gokaku.
Severity: Minor
Found in namebot/techniques.py - About 3 hrs to fix


Function backronym has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
Open

def backronym(acronym, theme, max_attempts=10):
    """Attempt to generate a backronym based on a given acronym and theme.

    :param acronym (str): The starting acronym.
    :param theme (str): The seed word to base other words off of.
Severity: Minor
Found in namebot/techniques.py - About 2 hrs to fix


Function infixify has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def infixify(words):
    """Apply an infix technique to a set of words.

    Adds all consonant+vowel pairs to all inner matching vowel+consonant pairs
    of a word, giving all combinations for each word.
Severity: Minor
Found in namebot/techniques.py - About 2 hrs to fix


Function prefixify has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def prefixify(words):
    """Apply a prefix technique to a set of words.

    :param words (list) - The list of words to operate on.
    :rtype new_arr (list): the updated *fixed words
Severity: Minor
Found in namebot/techniques.py - About 1 hr to fix


Function duplifixify has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def duplifixify(words):
    """Apply a duplifix technique to a set of words (e.g: teeny weeny, etc...).

    :param words (list) - The list of words to operate on.
    :rtype new_arr (list): the updated *fixed words
Severity: Minor
Found in namebot/techniques.py - About 55 mins to fix


Avoid deeply nested control flow statements.
Open

                    if word[-1] is 'e':
                        if word[-2] is not 'i':
                            new_arr.append('{}{}'.format(word[:-2], suffix))
                        else:
                            new_arr.append('{}{}'.format(word[:-1], suffix))
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix
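A hedged sketch of how this excerpt could be flattened, keeping the excerpt's names (`new_arr`, `suffix`) but assuming the enclosing loop context: extracting a helper and folding the inner branch into a conditional expression removes a nesting level. It also replaces `is` with `==`, since `is` compares object identity and only passes for short strings via CPython interning.

```python
def append_suffixed(new_arr, word, suffix):
    """Flattened version of the excerpt: one conditional expression
    replaces the nested if/else, and `==` replaces `is`."""
    if word.endswith('e'):
        # Strip the trailing 'e' and its predecessor, unless the word
        # ends in 'ie', in which case only the 'e' is dropped.
        stem = word[:-1] if word[-2] == 'i' else word[:-2]
        new_arr.append('{}{}'.format(stem, suffix))
```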

Avoid deeply nested control flow statements.
Open

                    if len(p) + len(p2) > 2:
                        if re.search(
                            _regexes['all_vowels'], p) or re.search(
                                _regexes['all_vowels'], p2):
                                    if p[-1] is p2[0]:
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix
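Since none of the three conditions has its own else branch, they can be merged into one boolean expression, which is how this kind of nesting is usually removed. A sketch under stated assumptions: `p` and `p2` are word fragments, and `ALL_VOWELS` is a hypothetical stand-in for the module's `_regexes['all_vowels']` pattern, whose exact form the report does not show.

```python
import re

# Assumed stand-in for the module's _regexes['all_vowels'] pattern.
ALL_VOWELS = re.compile(r'[aeiou]', re.IGNORECASE)


def fragments_joinable(p, p2):
    """Merged-condition version of the excerpt: the three nested `if`s
    collapse into one expression, and `==` replaces `is`."""
    return (len(p) + len(p2) > 2
            and bool(ALL_VOWELS.search(p) or ALL_VOWELS.search(p2))
            and p[-1] == p2[0])
```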

Avoid deeply nested control flow statements.
Open

                    if any(bad_matches):
                        continue
                    replacer = '{}{}{}'.format(first, infix_pair, second)
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix

Function all_prefix_first_vowel has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def all_prefix_first_vowel(word, letters=list(ascii_uppercase)):
    """Find the first vowel in a word and prefixes with consonants.

    :param word (str) - the word to update
    :param letters (list) - the letters to use for prefixing.
Severity: Minor
Found in namebot/techniques.py - About 45 mins to fix


Avoid deeply nested control flow statements.
Open

                        if l3 and len(l3) > 0:
                            for v in l3:
                                new_arr.append(l1 + v + l2)
                            else:
                                new_arr.append('{}{}{}'.format(l1, 't', l2))
Severity: Major
Found in namebot/techniques.py - About 45 mins to fix
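Beyond the nesting, the `else` in this excerpt looks like an intended fallback for an empty `l3`, but in Python a `for ... else` clause runs whenever the loop finishes without `break`, so the fallback would fire even when `l3` produced matches (and the excerpt's odd indentation suggests it may not parse as written at all). A hedged sketch of the presumably intended behavior, reusing the excerpt's names (`l1`, `l2`, `l3`, `new_arr`):

```python
def join_triples(l1, l2, l3, new_arr):
    """Explicit-branch version: append one combination per item in l3,
    or a single 't'-joined fallback when l3 is empty."""
    if l3:  # truthiness already covers the len(l3) > 0 check
        for v in l3:
            new_arr.append(l1 + v + l2)
    else:
        new_arr.append('{}{}{}'.format(l1, 't', l2))
```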

Function _create_pos_subtypes has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def _create_pos_subtypes(words):
    """Check part-of-speech tags for a noun-phrase, adding combinations if so.

    If it exists, add combinations with noun-phrase + verb-phrase,
    noun-phrase + verb, and noun-phrase + adverb,
Severity: Minor
Found in namebot/techniques.py - About 35 mins to fix


Function make_name_alliteration has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

def make_name_alliteration(words, divider=' '):
    """Make an alliteration with a set of words, if applicable.

    Examples:
    java jacket
Severity: Minor
Found in namebot/techniques.py - About 25 mins to fix


Similar blocks of code found in 3 locations. Consider refactoring.
Open

        if 'VB' in types:
            new_words += _add_pos_subtypes(words['NNP'], words['VB'])
Severity: Major
Found in namebot/techniques.py and 2 other locations - About 40 mins to fix
namebot/techniques.py on lines 736..737
namebot/techniques.py on lines 740..741

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 34.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
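Applied to the three flagged blocks (the VB, VBP, and RB variants), one DRY rewrite is to iterate over the tags instead of repeating the `if` per tag. A hedged sketch, assuming `types`, `words`, and `_add_pos_subtypes` behave as the excerpts suggest; `add_pos_subtypes` below is passed in as a stand-in for the module's private helper:

```python
def combine_with_nnp(words, types, add_pos_subtypes):
    """Single loop replacing the three near-identical `if tag in types`
    blocks flagged above. `add_pos_subtypes` stands in for the module's
    _add_pos_subtypes helper."""
    new_words = []
    for tag in ('VB', 'VBP', 'RB'):
        if tag in types:
            new_words += add_pos_subtypes(words['NNP'], words[tag])
    return new_words
```

This drops the duplication mass below the threshold while keeping one authoritative list of the part-of-speech tags being combined.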

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        if 'RB' in types:
            new_words += _add_pos_subtypes(words['NNP'], words['RB'])
Severity: Major
Found in namebot/techniques.py and 2 other locations - About 40 mins to fix
namebot/techniques.py on lines 736..737
namebot/techniques.py on lines 738..739


Similar blocks of code found in 3 locations. Consider refactoring.
Open

        if 'VBP' in types:
            new_words += _add_pos_subtypes(words['NNP'], words['VBP'])
Severity: Major
Found in namebot/techniques.py and 2 other locations - About 40 mins to fix
namebot/techniques.py on lines 738..739
namebot/techniques.py on lines 740..741

