neka-nat/tensorboard-chainer

Showing 28 of 28 total issues

File summary_pb2.py has 489 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: tb_chainer/src/summary.proto

import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
Severity: Minor
Found in tb_chainer/src/summary_pb2.py - About 7 hrs to fix
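
The snippet's header is telling: summary_pb2.py (like the other *_pb2.py files flagged below) is emitted by the protocol buffer compiler and should not be hand-edited, let alone refactored. A more practical response is to exclude generated sources from analysis. A minimal sketch, assuming Code Climate's standard exclude_patterns key and this repository's layout:

# .codeclimate.yml: skip protoc-generated modules instead of refactoring them
exclude_patterns:
  - "tb_chainer/src/*_pb2.py"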

Function make_grid has a Cognitive Complexity of 29 (exceeds 5 allowed). Consider refactoring.
Open

def make_grid(tensor, nrow=8, padding=2,
              normalize=False, range=None, scale_each=False, pad_value=0):
    """Make a grid of images.

    Args:
Severity: Minor
Found in tb_chainer/utils.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity estimates how difficult it will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
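
To make those rules concrete, here is a hypothetical pair of functions (not from this repository). The nested version pays an increasing penalty for each flow-breaking structure inside another; the early-return version expresses the same logic linearly:

def first_even_square_nested(values):
    result = None
    for v in values:                # +1: loop
        if result is None:          # +2: condition nested in the loop
            if v % 2 == 0:          # +3: condition nested two levels deep
                result = v * v
    return result

def first_even_square_flat(values):
    for v in values:                # +1: loop
        if v % 2 == 0:              # +2: condition nested in the loop
            return v * v            # early return keeps the flow linear
    return None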

File attr_value_pb2.py has 343 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: tb_chainer/src/attr_value.proto

import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
Severity: Minor
Found in tb_chainer/src/attr_value_pb2.py - About 4 hrs to fix

File event_pb2.py has 343 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: tb_chainer/src/event.proto

import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
Severity: Minor
Found in tb_chainer/src/event_pb2.py - About 4 hrs to fix

Function build_computational_graph has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
Open

def build_computational_graph(
        outputs, remove_split=True, variable_style='default',
        function_style='default', rankdir='TB', remove_variable=False,
        show_name=True):
    """Builds a graph of functions and variables backward-reachable from outputs.
Severity: Minor
Found in tb_chainer/graph.py - About 3 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            grid[:, (y * height + 1 + padding // 2):((y + 1) * height + 1 + padding // 2 - padding), (x * width + 1 + padding // 2):((x + 1) * width + 1 + padding // 2 - padding)] = tensor[k]
Severity: Major
Found in tb_chainer/utils.py and 1 other location - About 3 hrs to fix
tb_chainer/utils.py on lines 78..78
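
The repeated index arithmetic is the real culprit here. One way to factor it out is a small slice helper (a sketch; the name _cell is hypothetical):

def _cell(start, size, padding):
    # Slice covering one grid cell, excluding its padding border.
    lo = start * size + 1 + padding // 2
    return slice(lo, lo + size - padding)

# Both duplicated assignments then read:
#     grid[:, _cell(y, height, padding), _cell(x, width, padding)] = tensor[k]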

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 65.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is reporting duplication too readily, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
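
As a sketch of that tuning (the key names follow codeclimate-duplication's configuration format; 40 is an arbitrary example value, not a recommendation):

# .codeclimate.yml: report fewer, larger Python duplicates
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 40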

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            for input_ in cand.inputs:
                if input_ is not cand and (input_, cand) not in seen_edges:
                    add_cand(input_)
                    seen_edges.add((input_, cand))
                    nodes.add(input_)
Severity: Major
Found in tb_chainer/graph.py and 1 other location - About 2 hrs to fix
tb_chainer/graph.py on lines 111..115

Duplicated Code: this issue has a mass of 61 (see the explanation and tuning notes above).

Function add_all_variable_images has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def add_all_variable_images(self, last_var, exclude_params=True, global_step=None, pattern='.*'):
        cp = re.compile(pattern)
        g = build_computational_graph(last_var)
        names = NodeName(g.nodes)
        for n in g.nodes:
Severity: Minor
Found in tb_chainer/writer.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            if creator is not None and (creator, cand) not in seen_edges:
                add_cand(creator)
                seen_edges.add((creator, cand))
                nodes.add(creator)
                nodes.add(cand)
Severity: Major
Found in tb_chainer/graph.py and 1 other location - About 2 hrs to fix
tb_chainer/graph.py on lines 117..122

Duplicated Code: this issue has a mass of 61 (see the explanation and tuning notes above).
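
These two graph.py blocks both record a traversal edge and register its endpoints. A local helper can collapse them (a sketch; add_edge is hypothetical, while seen_edges, nodes, and add_cand come from the surrounding function; note the original sites differ slightly in whether cand is re-added to nodes, which is harmless for a set but worth verifying against the original traversal):

def add_edge(src, dst):
    # Record a traversal edge once, registering both endpoints.
    if (src, dst) in seen_edges:
        return
    add_cand(src)
    seen_edges.add((src, dst))
    nodes.add(src)
    nodes.add(dst)

# Each call site keeps its own guard:
#     if creator is not None:
#         add_edge(creator, cand)
#     if input_ is not cand:
#         add_edge(input_, cand)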

File writer.py has 252 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Minor
Found in tb_chainer/writer.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __iter__(self):
        # Traverse the linked list in order.
        root = self.__root
        curr = root.next
        while curr is not root:
Severity: Major
Found in tb_chainer/ordered_set.py and 1 other location - About 1 hr to fix
tb_chainer/ordered_set.py on lines 55..61

Duplicated Code: this issue has a mass of 44 (see the explanation and tuning notes above).

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __reversed__(self):
        # Traverse the linked list in reverse order.
        root = self.__root
        curr = root.prev
        while curr is not root:
Severity: Major
Found in tb_chainer/ordered_set.py and 1 other location - About 1 hr to fix
tb_chainer/ordered_set.py on lines 47..53

Duplicated Code: this issue has a mass of 44 (see the explanation and tuning notes above).
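
__iter__ and __reversed__ walk the same doubly linked list in opposite directions, so a single parameterized generator can serve both (a sketch; _traverse is hypothetical, and the .key/.prev/.next node attributes are assumptions based on the visible snippet):

    def _traverse(self, forward=True):
        # Walk the circular doubly linked list, skipping the sentinel root.
        root = self.__root
        curr = root.next if forward else root.prev
        while curr is not root:
            yield curr.key
            curr = curr.next if forward else curr.prev

    def __iter__(self):
        return self._traverse(forward=True)

    def __reversed__(self):
        return self._traverse(forward=False)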

Function make_list_of_nodes has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def make_list_of_nodes(fn):
    list_of_nodes = []
    g = build_computational_graph(fn)
    node_name = NodeName(g.nodes)
    for n in g.nodes:
Severity: Minor
Found in tb_chainer/graph.py - About 1 hr to fix

Function save_image has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def save_image(tensor, filename, nrow=8, padding=2,
Severity: Major
Found in tb_chainer/utils.py - About 1 hr to fix

Function base_name has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

    def base_name(obj):
        name_scope = (obj.name_scope + '/') if hasattr(obj, 'name_scope') else ''
        if hasattr(obj, '_variable') and obj._variable is not None:
            if isinstance(obj._variable(), chainer.Parameter):
                return name_scope + (('Parameter_' + obj.name) if obj.name is not None else 'Parameter')
Severity: Minor
Found in tb_chainer/graph.py - About 55 mins to fix

Function make_grid has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def make_grid(tensor, nrow=8, padding=2,
Severity: Major
Found in tb_chainer/utils.py - About 50 mins to fix

Function build_computational_graph has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def build_computational_graph(
Severity: Major
Found in tb_chainer/graph.py - About 50 mins to fix
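
For the three argument-count findings above, one conventional remedy is to bundle the cohesive display options into a small value object so the public signatures stay short (a sketch; GridOptions is hypothetical, its field names and defaults mirror make_grid's current keywords, and range is renamed to value_range to stop shadowing the builtin):

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GridOptions:
    # Display options shared by make_grid and save_image.
    nrow: int = 8
    padding: int = 2
    normalize: bool = False
    value_range: Optional[Tuple[float, float]] = None
    scale_each: bool = False
    pad_value: float = 0

def save_image(tensor, filename, options=None):
    opts = options or GridOptions()
    grid = make_grid(tensor, opts)  # assumes make_grid is refactored to accept GridOptions
    ...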

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                    img = make_grid(np.expand_dims(data, 1) if data.shape[0] != 3 else data)
Severity: Minor
Found in tb_chainer/writer.py and 1 other location - About 50 mins to fix
tb_chainer/writer.py on lines 286..286

Duplicated Code: this issue has a mass of 36 (see the explanation and tuning notes above).

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                        img = make_grid(np.expand_dims(d, 1) if d.shape[0] != 3 else d)
Severity: Minor
Found in tb_chainer/writer.py and 1 other location - About 50 mins to fix
tb_chainer/writer.py on lines 289..289

Duplicated Code: this issue has a mass of 36 (see the explanation and tuning notes above).
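
Both writer.py sites promote a single-channel stack to the layout make_grid expects before gridding it; a tiny helper states that intent once (a sketch; the name _to_image_grid is hypothetical, with np and make_grid as already used in writer.py):

def _to_image_grid(arr):
    # Insert a channel axis unless the array already has 3 channels up front.
    return make_grid(np.expand_dims(arr, 1) if arr.shape[0] != 3 else arr)

# The two duplicated call sites then become:
#     img = _to_image_grid(data)
#     img = _to_image_grid(d)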

Function image has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def image(tag, tensor):
    """Outputs a `Summary` protocol buffer with images.
    The summary has up to `max_images` summary values containing images. The
    images are built from `tensor` which must be 3-D with shape `[height, width,
    channels]` and where `channels` can be:
Severity: Minor
Found in tb_chainer/summary.py - About 45 mins to fix
