KarrLab/wc_rules

Showing 74 of 74 total issues

Function iter_items has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

    def iter_items(d,mode='tuple'):
        if mode=='tuple':
            for k,v in d.items():
                if isinstance(v,dict):
                    for k1,v1 in NestedDict.iter_items(v):
Severity: Minor
Found in wc_rules/utils/data.py - About 1 hr to fix
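
One way to flatten this nesting is to recurse with yield from and carry the key path explicitly. The following is a hypothetical sketch covering only the mode='tuple' branch visible above; the excerpt does not show the other modes, so they are omitted here:

    def iter_tuple_items(d, prefix=()):
        # hypothetical rewrite: yield (key-path, leaf-value) pairs from a nested dict
        for k, v in d.items():
            if isinstance(v, dict):
                # yield from removes one explicit loop and one nesting level
                yield from iter_tuple_items(v, prefix + (k,))
            else:
                yield prefix + (k,), v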

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity estimates how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
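
To make these rules concrete, here is an annotated illustration (not code from wc_rules; the tallies follow the published Cognitive Complexity rules above, and Code Climate's exact weights may differ):

    def count_leaves(tree):
        # count the non-dict values in a nested dict
        n = 0
        for v in tree.values():        # +1: a loop breaks the linear flow
            if isinstance(v, dict):    # +2: a flow break (+1) nested one level deep (+1)
                n += count_leaves(v)   # +1: recursion is treated as a flow break
            else:                      # +1: an else branch, with no extra nesting penalty
                n += 1
        return n                       # total cognitive complexity: 5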

Further reading

Similar blocks of code found in 4 locations. Consider refactoring.
Open

        edges1 = [x for x in tour._edges if x[0] in b and x[3] in b]
Severity: Major
Found in wc_rules/graph/euler_tour.py and 3 other locations - About 1 hr to fix
wc_rules/graph/euler_tour.py on lines 256..256
wc_rules/graph/euler_tour.py on lines 259..259
wc_rules/graph/euler_tour.py on lines 260..260

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to replicate further and to diverge, leaving bugs behind as the similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 46.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
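
For example, a minimal fragment along these lines raises the Python mass threshold above this issue's mass of 46 (a sketch based on the codeclimate-duplication documentation; verify the exact keys against the current schema before relying on it):

    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 50   # anything above 46 would suppress reports like this one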

Refactorings

Further Reading

Similar blocks of code found in 4 locations. Consider refactoring.
Open

        spares1 = [x for x in tour._spares if x[0] in b and x[3] in b]
Severity: Major
Found in wc_rules/graph/euler_tour.py and 3 other locations - About 1 hr to fix
wc_rules/graph/euler_tour.py on lines 255..255
wc_rules/graph/euler_tour.py on lines 256..256
wc_rules/graph/euler_tour.py on lines 260..260

Similar blocks of code found in 4 locations. Consider refactoring.
Open

        edges2 = [x for x in tour._edges if x[0] in s and x[3] in s]
Severity: Major
Found in wc_rules/graph/euler_tour.py and 3 other locations - About 1 hr to fix
wc_rules/graph/euler_tour.py on lines 255..255
wc_rules/graph/euler_tour.py on lines 259..259
wc_rules/graph/euler_tour.py on lines 260..260

Similar blocks of code found in 4 locations. Consider refactoring.
Open

        spares2 = [x for x in tour._spares if x[0] in s and x[3] in s]
Severity: Major
Found in wc_rules/graph/euler_tour.py and 3 other locations - About 1 hr to fix
wc_rules/graph/euler_tour.py on lines 255..255
wc_rules/graph/euler_tour.py on lines 256..256
wc_rules/graph/euler_tour.py on lines 259..259
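
All four flagged lines filter a list of 4-tuples down to those whose endpoint nodes lie in a given set, so one hedged refactoring is to extract that predicate into a shared helper (the helper name is hypothetical, not from wc_rules):

    def _tuples_within(tuples, nodes):
        # keep 4-tuples whose two endpoint nodes both lie in nodes
        return [x for x in tuples if x[0] in nodes and x[3] in nodes]

    edges1 = _tuples_within(tour._edges, b)
    spares1 = _tuples_within(tour._spares, b)
    edges2 = _tuples_within(tour._edges, s)
    spares2 = _tuples_within(tour._spares, s)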

Function augcut has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def augcut(self,edge):
        node1,attr1,attr2,node2 = edge
        tour = self.find_edge(node1,node2)[0]
        if edge in tour._spares:
            tour.remove_spares([edge])
Severity: Minor
Found in wc_rules/graph/euler_tour.py - About 1 hr to fix

Function get_helper_calls has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def get_helper_calls(self):
        helpercalls = defaultdict(SortedSet)
        for c in self.execs:
            for fnametuple,kwargs in c.deps.function_calls.items():
                if len(fnametuple)==2:
Severity: Minor
Found in wc_rules/expressions/executable.py - About 1 hr to fix

Function initialize_pattern has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

    def initialize_pattern(self,pattern):

        if self.get_node(core=pattern) is not None:
            return self

Severity: Minor
Found in wc_rules/matcher/initialize_methods.py - About 1 hr to fix

Function dfs_visit has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

def dfs_visit(tree,ignore_tokens=True):
    yield tree
    if hasattr(tree,'children'):
        for child in tree.children:
            for elem in dfs_visit(child):
Severity: Minor
Found in wc_rules/expressions/exprgraph_utils.py - About 1 hr to fix
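
A hedged sketch of a flatter shape for the lines shown (the truncated body presumably uses ignore_tokens to filter token nodes, which this sketch does not reproduce):

    def dfs_visit(tree, ignore_tokens=True):
        # pre-order traversal; getattr with a default replaces the hasattr check,
        # and yield from eliminates the inner for-loop's extra nesting level
        yield tree
        for child in getattr(tree, 'children', ()):
            yield from dfs_visit(child, ignore_tokens)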

Function initialize_from_strings has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def initialize_from_strings(cls,strings,classes,cmax=0):
        d = dict()
        allowed_forms = '\n'.join([x for c in classes for x in c.allowed_forms])
        for s in strings:
            for c in classes:
Severity: Minor
Found in wc_rules/expressions/executable.py - About 1 hr to fix

Function function_node_constraints has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def function_node_constraints(self,node,token):
        Logger.pattern(node.num,token)
        executable_manager = node.data.executables
        caches = node.data.caches
        if token.action == 'AddEntry':
Severity: Minor
Found in wc_rules/matcher/node_functions.py - About 1 hr to fix

Function process_function_call has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def process_function_call(self,x):
        if 'function_name' in x:
            kws, kwpairs = set(),set()
            if 'kws' in x:
                for kw,arg in zip(x['kws'],x['args']):
Severity: Minor
Found in wc_rules/expressions/dependency.py - About 1 hr to fix

Function do has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def do(self):
        global CURRENT_ACTION_RECORD
        CURRENT_ACTION_RECORD = deque()

        tokens = []
Severity: Minor
Found in wc_rules/simulator/simulator.py - About 1 hr to fix

Function duplicate_relations has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

    def duplicate_relations(self,target,nodemap,attrlist=None):
        ''' Duplicates self's relations, converts them using nodemap {id:new_node}, and applies them to target.
        E.g. if old A1->X1, and nodemap { A1.id:A2, X1.id:X2 }
        A1.duplicate_relations(A2,nodemap) builds the new edge A2->X2
         '''
Severity: Minor
Found in wc_rules/schema/base.py - About 55 mins to fix

Function visualize_exprgraph has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def visualize_exprgraph(g,asdict=False):
    nodes,edges = [],[]
    for idx,node in g.iter_nodes():
        label = node.serialize_for_vis() if isinstance(node,ExprBase) else node.id
        nodes.append(visualize_node(node=node,idx=node.id,label=label))
Severity: Minor
Found in wc_rules/graph/vis.py - About 55 mins to fix

Function check_cycle has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def check_cycle(gdict):
    # gdict is a directed graph represented as a dict
    # node0: [node1, node2]
    # node3: [node4]
    nodes,paths = deque(gdict), deque()
Severity: Minor
Found in wc_rules/utils/validate.py - About 55 mins to fix
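
For comparison, a flat depth-first coloring over the same dict-of-lists representation looks like this (an illustrative sketch, not the project's implementation):

    def has_cycle(gdict):
        # white = unvisited, gray = on the current path, black = fully explored
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {}
        def visit(n):
            color[n] = GRAY
            for m in gdict.get(n, []):
                c = color.get(m, WHITE)
                if c == GRAY:                  # back edge: a cycle exists
                    return True
                if c == WHITE and visit(m):
                    return True
            color[n] = BLACK
            return False
        return any(color.get(n, WHITE) == WHITE and visit(n) for n in gdict)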

Function make_edge_token has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def make_edge_token(_class1,ref1,attr1,_class2,ref2,attr2,action):
Severity: Major
Found in wc_rules/matcher/token.py - About 50 mins to fix
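
A common fix for long parameter lists is to bundle related arguments into a small value object. A hypothetical sketch, with names and types assumed rather than taken from wc_rules:

    from dataclasses import dataclass

    @dataclass
    class EdgeEnd:
        # one endpoint's class, reference, and attribute, grouped together
        _class: type
        ref: object
        attr: str

    def make_edge_token(end1, end2, action):
        # two EdgeEnd objects plus an action: three arguments instead of seven
        ...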

Function __init__ has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, name='', reactants=dict(), helpers=dict(), actions=[], factories=dict(),rate_prefix='', parameters = []):
Severity: Major
Found in wc_rules/modeling/rule.py - About 50 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def peek(self):
        event,time = Scheduler.peek(self)
        self.schedule(time,event)
        return event,time
Severity: Minor
Found in wc_rules/simulator/scheduler.py and 1 other location - About 50 mins to fix
wc_rules/simulator/scheduler.py on lines 101..104

This issue has a mass of 36.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def pop(self):
        event,time = Scheduler.pop(self)
        self.schedule(time,event)
        return event,time
Severity: Minor
Found in wc_rules/simulator/scheduler.py and 1 other location - About 50 mins to fix
wc_rules/simulator/scheduler.py on lines 106..109
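
Since peek and pop differ only in which base-class method they delegate to, one hedged refactoring passes that method in (the helper name is hypothetical, and these methods would live in the same subclass as the originals):

    def _fetch_and_reschedule(self, getter):
        # shared body: fetch the next (event, time), put it back on the schedule, return it
        event, time = getter(self)
        self.schedule(time, event)
        return event, time

    def peek(self):
        return self._fetch_and_reschedule(Scheduler.peek)

    def pop(self):
        return self._fetch_and_reschedule(Scheduler.pop)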

This issue has a mass of 36.
