oujago/NumpyDL

Showing 162 of 162 total issues

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    @property
    def params(self):
        return self.U_r, self.U_z, self.U_h, \
               self.W_r, self.W_z, self.W_h, \
               self.b_r, self.b_z, self.b_h
Severity: Major
Found in npdl/layers/recurrent.py and 1 other location - About 1 hr to fix
npdl/layers/recurrent.py on lines 324..328

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge over time, leaving bugs behind as the similar implementations drift apart in subtle ways.
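
For the GRU issue above, one way to remove the duplication is to hoist the parameter tuple into a shared base class that both recurrent layers inherit from. A minimal sketch; the base-class name is invented here, and the attribute names are taken from the snippet:

    class GatedRecurrentBase(object):
        """Hypothetical shared base for GRU-style layers with r/z/h gates."""

        @property
        def params(self):
            # Single authoritative definition of the trainable parameters,
            # inherited by every subclass instead of being re-declared.
            return (self.U_r, self.U_z, self.U_h,
                    self.W_r, self.W_z, self.W_h,
                    self.b_r, self.b_z, self.b_h)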

Tuning

This issue has a mass of 39.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
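
For example, a .codeclimate.yml that raises the Python mass threshold might look like the following; check codeclimate-duplication's documentation for the exact keys your engine version supports:

    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 50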


Function train has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def train(self, training_inputs, training_outputs):
        self.feed_forward(training_inputs)

        # 1. Output neuron deltas
        pd_errors_wrt_output_neuron_total_net_input = [0] * len(self.output_layer.neurons)
Severity: Minor
Found in docs/tutorials/mlp_bp.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
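
As a rough illustration of how the nesting rule compounds, the invented function below scores 6; the increments follow the published Cognitive Complexity rules:

    def count_matches(rows, target):
        total = 0
        for row in rows:             # +1 (break in linear flow)
            for item in row:         # +2 (break in flow, nested one level)
                if item == target:   # +3 (break in flow, nested two levels)
                    total += 1
        return total                 # cognitive complexity: 6

Extracting the inner loops into a helper, or flattening the iteration, brings each function back under the threshold.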


Similar blocks of code found in 3 locations. Consider refactoring.
Open

    def __init__(self, epsilon=1e-6, *args, **kwargs):
        super(Adagrad, self).__init__(*args, **kwargs)

        self.epsilon = epsilon
Severity: Major
Found in npdl/optimizers.py and 2 other locations - About 1 hr to fix
npdl/optimizers.py on lines 123..128
npdl/optimizers.py on lines 171..176

This issue has a mass of 39.
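
Since the three flagged constructors differ only in their class name, a shared intermediate base class removes all three copies. A sketch; the EpsilonOptimizer name is invented, and an Optimizer base class is assumed:

    class EpsilonOptimizer(Optimizer):
        """Hypothetical base for optimizers whose only extra knob is epsilon."""

        def __init__(self, epsilon=1e-6, *args, **kwargs):
            super(EpsilonOptimizer, self).__init__(*args, **kwargs)
            self.epsilon = epsilon

Each of the three affected optimizers then subclasses it and drops its own __init__.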

Function fit has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def fit(self, X, Y, max_iter=100, batch_size=64, shuffle=True,
Severity: Major
Found in npdl/model.py - About 1 hr to fix
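
A common fix for a long parameter list is to bundle the optional knobs into a parameter object that fit receives as a single argument. A sketch; the TrainConfig name and field set are illustrative, not part of npdl's API:

    class TrainConfig(object):
        """Hypothetical bundle of optional training settings for Model.fit."""

        def __init__(self, max_iter=100, batch_size=64, shuffle=True):
            self.max_iter = max_iter
            self.batch_size = batch_size
            self.shuffle = shuffle

fit(self, X, Y, config=None) can then default to TrainConfig() internally and thread one object through its helpers instead of re-listing every setting. The same pattern applies to the __init__ signatures flagged below.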

Function __init__ has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, n_out, n_in=None, nb_batch=None, nb_seq=None,
Severity: Major
Found in npdl/layers/recurrent.py - About 1 hr to fix

Function forward has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

    def forward(self, input, mask, c0=None, h0=None):
        assert np.ndim(input) == 3, 'Only support batch training.'

        # record
        self.last_input = input
Severity: Minor
Found in npdl/layers/recurrent.py - About 55 mins to fix


Function forward has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

    def forward(self, input, c0=None, h0=None):
        """Forward propagation.

        Parameters
        ----------
Severity: Minor
Found in npdl/layers/recurrent.py - About 55 mins to fix


Function __init__ has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, embed_words=None, static=None,
Severity: Major
Found in npdl/layers/embedding.py - About 50 mins to fix

Function __init__ has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, num_inputs, num_hidden, num_outputs,
Severity: Major
Found in docs/tutorials/mlp_bp.py - About 50 mins to fix

Avoid deeply nested control flow statements.
Open

                    for w in np.arange(new_w):
                        outputs[a, b, h, w] = np.mean(input[a, b, h:h + pool_h, w:w + pool_w])

Severity: Major
Found in npdl/layers/pooling.py - About 45 mins to fix
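
The nested loops can be removed entirely by vectorizing the pooling with a reshape. A sketch assuming non-overlapping windows whose size divides the input height and width evenly; this is not npdl's actual implementation:

    import numpy as np

    def mean_pool(input, pool_h, pool_w):
        """Vectorized average pooling over a (batch, channel, H, W) array."""
        a, b, h, w = input.shape
        # Split H and W into (window index, offset within window) axes,
        # then average over the within-window axes.
        windows = input.reshape(a, b, h // pool_h, pool_h, w // pool_w, pool_w)
        return windows.mean(axis=(3, 5))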

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    with io.open(os.path.join(here, 'CHANGES.rst'), 'r', encoding='utf-8') as f:
        CHANGES = f.read()
Severity: Minor
Found in setup.py and 1 other location - About 45 mins to fix
setup.py on lines 23..24

This issue has a mass of 35.
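
Both setup.py reads (this one and the README read reported below) collapse into one small helper. A sketch assuming the here variable already defined in setup.py:

    import io
    import os

    def read(filename):
        """Read a UTF-8 text file located next to setup.py."""
        with io.open(os.path.join(here, filename), 'r', encoding='utf-8') as f:
            return f.read()

    README = read('README.rst')
    CHANGES = read('CHANGES.rst')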

Function __init__ has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, nb_filter, filter_size, input_shape=None, stride=1,
Severity: Minor
Found in npdl/layers/convolution.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                    for h in np.arange(new_img_h):
                        for w in np.arange(new_img_w):
                            h_shift, w_shift = h * self.stride, w * self.stride
                            layer_grads[b, t, h_shift:h_shift + filter_h, w_shift:w_shift + filter_w] += \
                                self.W[r, t] * delta[b, r, h, w]
Severity: Major
Found in npdl/layers/convolution.py - About 45 mins to fix
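
The usual fix for this report is to extract the innermost loops into a named helper so each function stays shallow. A sketch reusing the variable names from the snippet above; the helper name is invented:

    def _scatter_input_grad(self, layer_grads, delta, b, r, t,
                            new_img_h, new_img_w, filter_h, filter_w):
        """Accumulate the input gradient for one (batch, filter, channel) triple."""
        for h in np.arange(new_img_h):
            for w in np.arange(new_img_w):
                h_shift, w_shift = h * self.stride, w * self.stride
                layer_grads[b, t, h_shift:h_shift + filter_h, w_shift:w_shift + filter_w] += \
                    self.W[r, t] * delta[b, r, h, w]

The outer loops then make one call per (b, r, t); a fully vectorized im2col/col2im rewrite would remove the Python loops altogether.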

Avoid deeply nested control flow statements.
Open

                    for w in np.arange(new_w):
                        patch = self.last_input[a, b, h:h + pool_h, w:w + pool_w]
                        max_idx = np.unravel_index(patch.argmax(), patch.shape)
                        h_shift, w_shift = h * pool_h + max_idx[0], w * pool_w + max_idx[1]
                        layer_grads[a, b, h_shift, w_shift] = pre_grad[a, b, h, w]
Severity: Major
Found in npdl/layers/pooling.py - About 45 mins to fix
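
Alternatively, the backward pass can be vectorized for non-overlapping windows by masking each window at its maximum. A sketch, not npdl's implementation; note that exact ties would receive gradient at every tied position, unlike the argmax loop:

    import numpy as np

    def max_pool_backward(last_input, pre_grad, pool_h, pool_w):
        """Route each upstream gradient to the max position of its window."""
        a, b, h, w = last_input.shape
        windows = last_input.reshape(a, b, h // pool_h, pool_h, w // pool_w, pool_w)
        # Boolean mask marking the maximum element of every pooling window.
        mask = windows == windows.max(axis=(3, 5), keepdims=True)
        grads = mask * pre_grad[:, :, :, None, :, None]
        return grads.reshape(a, b, h, w)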

Avoid deeply nested control flow statements.
Open

                    for w in np.arange(new_w):
                        h_shift, w_shift = h * pool_h, w * pool_w
                        layer_grads[a, b, h_shift: h_shift + pool_h, w_shift: w_shift + pool_w] = \
                            pre_grad[a, b, h, w] / length

Severity: Major
Found in npdl/layers/pooling.py - About 45 mins to fix
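
For average pooling, the whole backward pass reduces to a broadcasted repeat, again assuming non-overlapping windows (a sketch; length in the snippet corresponds to pool_h * pool_w here):

    import numpy as np

    def mean_pool_backward(pre_grad, pool_h, pool_w):
        """Spread each upstream gradient evenly across its pooling window."""
        grads = pre_grad.repeat(pool_h, axis=2).repeat(pool_w, axis=3)
        return grads / (pool_h * pool_w)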

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            h_pre = self.h0 if t == 0 else output[:, t - 1, :]
Severity: Minor
Found in npdl/layers/recurrent.py and 1 other location - About 45 mins to fix
npdl/layers/recurrent.py on lines 427..427

This issue has a mass of 35.
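
Even a one-line duplicate like this can be centralized in a small helper. A sketch; the method name is invented:

    def _previous_hidden(self, output, t):
        """Initial state at t == 0, otherwise the previous timestep's output."""
        return self.h0 if t == 0 else output[:, t - 1, :]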

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    with io.open(os.path.join(here, 'README.rst'), 'r', encoding='utf-8') as f:
        README = f.read()
Severity: Minor
Found in setup.py and 1 other location - About 45 mins to fix
setup.py on lines 25..26

This issue has a mass of 35.

Function forward has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def forward(self, idxs, masks):
        ##############################
        # Encode
        ##############################

Severity: Minor
Found in applications/chatbot/model.py - About 45 mins to fix


Identical blocks of code found in 2 locations. Consider refactoring.
Open

                self.d_c0 = self.IFOGf[t, :, n_out:2 * n_out] * dC[t]
Severity: Minor
Found in npdl/layers/recurrent.py and 1 other location - About 45 mins to fix
npdl/layers/recurrent.py on lines 716..716

This issue has a mass of 35.

Identical blocks of code found in 2 locations. Consider refactoring.
Open

                dC[t - 1] += self.IFOGf[t, :, n_out:2 * n_out] * dC[t]
Severity: Minor
Found in npdl/layers/recurrent.py and 1 other location - About 45 mins to fix
npdl/layers/recurrent.py on lines 719..719

This issue has a mass of 35.
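
Both flagged lines slice the same forget-gate block out of IFOGf. Naming the slice once removes the duplication and documents the gate layout; a sketch using the snippet's variable names, with the surrounding if/else reconstructed as an assumption:

    # Forget-gate activations at timestep t; IFOGf packs the
    # input/forget/output/gate blocks along its last axis.
    forget_t = self.IFOGf[t, :, n_out:2 * n_out]
    if t > 0:
        dC[t - 1] += forget_t * dC[t]   # propagate cell-state gradient backwards
    else:
        self.d_c0 = forget_t * dC[t]    # gradient w.r.t. the initial cell state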
