tensorflow/models
research/lfads/lfads.py

Summary

Maintainability: F (about 2 wks of estimated effort)
Test Coverage: (no data)

File lfads.py has 1759 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in research/lfads/lfads.py - About 4 days to fix

Function __init__ has a Cognitive Complexity of 163 (exceeds 5 allowed). Consider refactoring.

  def __init__(self, hps, kind="train", datasets=None):
    """Create an LFADS model.

       train - a model for training, sampling of posteriors is used
       posterior_sample_and_average - sample from the posterior, this is used
Severity: Minor
Found in research/lfads/lfads.py - About 3 days to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
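To make these rules concrete, here is a small illustrative sketch; the function and hyperparameter names are invented for the example, not taken from lfads.py. The nested version pays extra for every level of nesting, while guard clauses keep the flow linear:

```python
from types import SimpleNamespace

def pick_learning_rate(hps, epoch, valid_costs):
    """Nested version: each `if` costs +1, plus +1 per level of nesting."""
    if hps.do_decay:                                                # +1
        if epoch > hps.warmup_epochs:                               # +2 (nested)
            if valid_costs and valid_costs[-1] > min(valid_costs):  # +3 (nested deeper)
                return hps.lr * hps.decay_factor
    return hps.lr                                                   # total: 6

def pick_learning_rate_flat(hps, epoch, valid_costs):
    """Guard-clause version: same behavior, each early return costs only +1."""
    if not hps.do_decay:                                            # +1
        return hps.lr
    if epoch <= hps.warmup_epochs:                                  # +1
        return hps.lr
    if not valid_costs or valid_costs[-1] <= min(valid_costs):      # +1
        return hps.lr
    return hps.lr * hps.decay_factor                                # total: 3
```

Both functions compute the same result; only the second reads linearly, which is what the metric rewards.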

Function __init__ has 117 lines of code (exceeds 25 allowed). Consider refactoring.

  def __init__(self, hps, kind="train", datasets=None):
    """Create an LFADS model.

       train - a model for training, sampling of posteriors is used
       posterior_sample_and_average - sample from the posterior, this is used
Severity: Major
Found in research/lfads/lfads.py - About 4 hrs to fix
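One common remedy for an oversized constructor is to split it into private build methods that `__init__` merely sequences. A minimal sketch of the shape such a refactoring could take; the helper names and bodies here are hypothetical placeholders, not LFADS code:

```python
class LFADSModelSketch(object):
    """A long __init__ split into small, named build steps."""

    def __init__(self, hps, kind="train", datasets=None):
        self.hps = hps
        self.kind = kind
        self.datasets = datasets
        # Each helper owns one concern; __init__ only states the order.
        self._check_kind()
        self._build_placeholders()
        self._build_graph()

    def _check_kind(self):
        allowed = ("train", "posterior_sample_and_average", "prior_sample")
        if self.kind not in allowed:
            raise ValueError("unknown kind: %s" % self.kind)

    def _build_placeholders(self):
        self.inputs = []   # placeholder wiring would go here

    def _build_graph(self):
        self.outputs = []  # encoder/decoder construction would go here
```

Each helper can then be read, tested, and scored for complexity on its own.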

Function eval_model_parameters has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.

  def eval_model_parameters(use_nested=True, include_strs=None):
    """Evaluate and return all of the TF variables in the model.

    Args:
    use_nested (optional): For returning values, use a nested dictionary, based
Severity: Minor
Found in research/lfads/lfads.py - About 2 hrs to fix


Function eval_model_runs_batch has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.

  def eval_model_runs_batch(self, data_name, data_bxtxd, ext_input_bxtxi=None,
                            do_eval_cost=False, do_average_batch=False):
    """Returns all the goodies for the entire model, per batch.

    If data_bxtxd and ext_input_bxtxi can have fewer than batch_size along dim 1
Severity: Minor
Found in research/lfads/lfads.py - About 2 hrs to fix


Function train_model has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.

  def train_model(self, datasets):
    """Train the model, print per-epoch information, and save checkpoints.

    Loop over training epochs. The function that actually does the
    training is train_epoch.  This function iterates over the training
Severity: Minor
Found in research/lfads/lfads.py - About 2 hrs to fix


Function eval_model_runs_batch has 56 lines of code (exceeds 25 allowed). Consider refactoring.

  def eval_model_runs_batch(self, data_name, data_bxtxd, ext_input_bxtxi=None,
                            do_eval_cost=False, do_average_batch=False):
    """Returns all the goodies for the entire model, per batch.

    If data_bxtxd and ext_input_bxtxi can have fewer than batch_size along dim 1
Severity: Major
Found in research/lfads/lfads.py - About 2 hrs to fix

Function eval_model_runs_push_mean has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.

  def eval_model_runs_push_mean(self, data_name, data_extxd,
                                ext_input_extxi=None):
    """Returns values of interest for the model by pushing the means through

    The mean values for both initial conditions and the control inputs are
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix


Function write_model_samples has 43 lines of code (exceeds 25 allowed). Consider refactoring.

  def write_model_samples(self, dataset_name, output_fname=None):
    """Use the prior distribution to generate batch_size number of samples
    from the model.

    LFADS generates a number of outputs for each sample, and these are all
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix

Function eval_model_runs_avg_epoch has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.

  def eval_model_runs_avg_epoch(self, data_name, data_extxd,
                                ext_input_extxi=None):
    """Returns all the expected value for goodies for the entire model.

    The expected value is taken over hidden (z) variables, namely the initial
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix


Function write_model_runs has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.

  def write_model_runs(self, datasets, output_fname=None, push_mean=False):
    """Run the model on the data in data_dict, and save the computed values.

    LFADS generates a number of outputs for each example, and these are all
    saved.  They are:
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix


Function eval_model_runs_push_mean has 28 lines of code (exceeds 25 allowed). Consider refactoring.

  def eval_model_runs_push_mean(self, data_name, data_extxd,
                                ext_input_extxi=None):
    """Returns values of interest for the model by pushing the means through

    The mean values for both initial conditions and the control inputs are
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix

Function eval_model_runs_avg_epoch has 28 lines of code (exceeds 25 allowed). Consider refactoring.

  def eval_model_runs_avg_epoch(self, data_name, data_extxd,
                                ext_input_extxi=None):
    """Returns all the expected value for goodies for the entire model.

    The expected value is taken over hidden (z) variables, namely the initial
Severity: Minor
Found in research/lfads/lfads.py - About 1 hr to fix

Function __init__ has 7 arguments (exceeds 4 allowed). Consider refactoring.

  def __init__(self, num_units, forget_bias=1.0,
Severity: Major
Found in research/lfads/lfads.py - About 50 mins to fix
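A standard fix for long parameter lists is to bundle the optional knobs into a single configuration object. A sketch: `forget_bias` and `weight_scale` appear in the excerpts on this page, while the remaining field names are invented for illustration.

```python
import collections

# Bundle the optional constructor options into one named configuration object.
CellConfig = collections.namedtuple(
    "CellConfig",
    ["forget_bias", "weight_scale", "clip_value", "input_collections"],
    defaults=(1.0, 1.0, None, None))

class GruCellSketch(object):
    """Was: __init__(self, num_units, forget_bias=1.0, ...) with 7 parameters.
    Now: two parameters, and call sites name every option they override."""

    def __init__(self, num_units, config=CellConfig()):
        self.num_units = num_units
        self.config = config
```

Call sites become self-documenting, e.g. `GruCellSketch(128, CellConfig(forget_bias=0.5))`, and adding an option no longer changes the constructor signature.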

Function write_model_samples has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.

  def write_model_samples(self, dataset_name, output_fname=None):
    """Use the prior distribution to generate batch_size number of samples
    from the model.

    LFADS generates a number of outputs for each sample, and these are all
Severity: Minor
Found in research/lfads/lfads.py - About 45 mins to fix


Function run_epoch has 6 arguments (exceeds 4 allowed). Consider refactoring.

  def run_epoch(self, datasets, ops_to_eval, kind="train", batch_size=None,
Severity: Minor
Found in research/lfads/lfads.py - About 45 mins to fix

Consider simplifying this complex logical expression.

        if kl_weight >= 1.0 and \
          (l2_weight >= 1.0 or \
           (self.hps.l2_gen_scale == 0.0 and self.hps.l2_con_scale == 0.0)) \
           and (len(valid_costs) > n_lve and run_avg_lve < lowest_ev_cost):
Severity: Major
Found in research/lfads/lfads.py - About 40 mins to fix
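One way to simplify the expression above is to give each clause a name, so the checkpointing intent reads directly. A sketch that wraps the quoted condition in a hypothetical helper function (the function name is invented; the logic is the excerpt's, term for term):

```python
from types import SimpleNamespace

def should_save_lve_checkpoint(kl_weight, l2_weight, hps,
                               valid_costs, n_lve, run_avg_lve, lowest_ev_cost):
    """Same logic as the quoted expression, with each clause named."""
    kl_fully_weighted = kl_weight >= 1.0
    l2_fully_weighted = (l2_weight >= 1.0 or
                         (hps.l2_gen_scale == 0.0 and hps.l2_con_scale == 0.0))
    is_new_best = len(valid_costs) > n_lve and run_avg_lve < lowest_ev_cost
    return kl_fully_weighted and l2_fully_weighted and is_new_best
```

The backslash continuations disappear, and each predicate can be inspected or unit-tested on its own.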

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.

  def __init__(self, num_units, forget_bias=1.0, weight_scale=1.0,
Severity: Minor
Found in research/lfads/lfads.py - About 35 mins to fix

Function eval_model_runs_batch has 5 arguments (exceeds 4 allowed). Consider refactoring.

  def eval_model_runs_batch(self, data_name, data_bxtxd, ext_input_bxtxi=None,
Severity: Minor
Found in research/lfads/lfads.py - About 35 mins to fix

Function spikify_rates has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

  def spikify_rates(rates_bxtxd):
    """Randomly spikify underlying rates according to a Poisson distribution

    Args:
      rates_bxtxd: A numpy tensor with shape:
Severity: Minor
Found in research/lfads/lfads.py - About 25 mins to fix
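The body of spikify_rates is not shown here, but per-bin Poisson sampling of this kind can usually be written without any loops, which removes the flow-breaking structure the metric penalizes. A hedged sketch; the `dt` and `rng` parameters are additions for this example, not part of the original signature:

```python
import numpy as np

def spikify_rates(rates_bxtxd, dt=1.0, rng=None):
    """Draw integer spike counts from per-bin Poisson rates, fully
    vectorized: one sampling call, no nested loops over batch/time/dim."""
    rng = np.random.default_rng(0) if rng is None else rng
    return rng.poisson(rates_bxtxd * dt)
```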


Function plot_single_example has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

  def plot_single_example(self, datasets):
    """Plot an image relating to a randomly chosen, specific example.  We use
    posterior sample and average by taking one example, and filling a whole
    batch with that example, sample from the posterior, and then average the
    quantities.
Severity: Minor
Found in research/lfads/lfads.py - About 25 mins to fix


Function run_epoch has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

  def run_epoch(self, datasets, ops_to_eval, kind="train", batch_size=None,
                do_collect=True, keep_prob=None):
    """Run the model through the entire dataset once.

    Args:
Severity: Minor
Found in research/lfads/lfads.py - About 25 mins to fix


Identical blocks of code found in 2 locations. Consider refactoring.

      if self.hps.ic_dim > 0:
        prior_g0_mean[es_idx,:] = model_values['prior_g0_mean']
        prior_g0_logvar[es_idx,:] = model_values['prior_g0_logvar']
        post_g0_mean[es_idx,:] = model_values['post_g0_mean']
        post_g0_logvar[es_idx,:] = model_values['post_g0_logvar']
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 5 hrs to fix
research/lfads/lfads.py on lines 1918..1922
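Since the same four assignments appear verbatim in two places, one fix is to extract them into a single helper that both call sites share. A sketch, with the preallocated arrays passed in as a dict; the helper name and dict-based calling convention are choices for this example, not existing LFADS code:

```python
import numpy as np

G0_KEYS = ("prior_g0_mean", "prior_g0_logvar", "post_g0_mean", "post_g0_logvar")

def collect_g0_stats(arrays, es_idx, model_values, ic_dim):
    """The one authoritative copy of the duplicated block: write one example's
    initial-condition statistics into the preallocated per-key arrays."""
    if ic_dim > 0:
        for key in G0_KEYS:
            arrays[key][es_idx, :] = model_values[key]
```

Both evaluation paths would then call `collect_g0_stats(...)` instead of repeating the four assignments, so a future change to the statistic names happens in one place.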

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 94.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Identical blocks of code found in 2 locations. Consider refactoring.

      if self.hps.ic_dim > 0:
        prior_g0_mean[es_idx,:] = model_values['prior_g0_mean']
        prior_g0_logvar[es_idx,:] = model_values['prior_g0_logvar']
        post_g0_mean[es_idx,:] = model_values['post_g0_mean']
        post_g0_logvar[es_idx,:] = model_values['post_g0_logvar']
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 5 hrs to fix
research/lfads/lfads.py on lines 1800..1804

This issue has a mass of 94.

Identical blocks of code found in 2 locations. Consider refactoring.

    if hps.ic_dim > 0:
      prior_g0_mean = np.zeros([E_to_process, hps.ic_dim])
      prior_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
      post_g0_mean = np.zeros([E_to_process, hps.ic_dim])
      post_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 4 hrs to fix
research/lfads/lfads.py on lines 1874..1878
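The duplicated allocation block can likewise collapse into one shared helper. A sketch; the helper name and the dict return type are choices for this example:

```python
import numpy as np

def alloc_g0_buffers(e_to_process, ic_dim):
    """One authoritative copy of the four np.zeros allocations the report
    flags in two places; returns the buffers keyed by statistic name."""
    keys = ("prior_g0_mean", "prior_g0_logvar", "post_g0_mean", "post_g0_logvar")
    return {k: np.zeros((e_to_process, ic_dim)) for k in keys}
```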

This issue has a mass of 76.

Identical blocks of code found in 2 locations. Consider refactoring.

    if hps.ic_dim > 0:
      prior_g0_mean = np.zeros([E_to_process, hps.ic_dim])
      prior_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
      post_g0_mean = np.zeros([E_to_process, hps.ic_dim])
      post_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 4 hrs to fix
research/lfads/lfads.py on lines 1765..1769

This issue has a mass of 76.

Identical blocks of code found in 2 locations. Consider refactoring.

    if hps.output_dist == 'poisson':
      out_dist_params = np.zeros([E_to_process, T, D])
    elif hps.output_dist == 'gaussian':
      out_dist_params = np.zeros([E_to_process, T, D+D])
    else:
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 3 hrs to fix
research/lfads/lfads.py on lines 1777..1782
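The duplicated if/elif/else above can collapse into a single lookup table shared by both call sites. A sketch: the helper name is invented, and the 2*D width for the Gaussian case just follows the D+D in the excerpt (presumably one channel set for means and one for variances):

```python
import numpy as np

def alloc_out_dist_params(output_dist, e_to_process, t, d):
    """Replace the duplicated branch with one shape table."""
    widths = {"poisson": d, "gaussian": 2 * d}
    if output_dist not in widths:
        raise ValueError("unknown output distribution: %s" % output_dist)
    return np.zeros((e_to_process, t, widths[output_dist]))
```

Supporting a new output distribution then means adding one table entry rather than editing two branch chains.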

This issue has a mass of 67.

Identical blocks of code found in 2 locations. Consider refactoring.

    if hps.output_dist == 'poisson':
      out_dist_params = np.zeros([E_to_process, T, D])
    elif hps.output_dist == 'gaussian':
      out_dist_params = np.zeros([E_to_process, T, D+D])
    else:
Severity: Major
Found in research/lfads/lfads.py and 1 other location - About 3 hrs to fix
research/lfads/lfads.py on lines 1886..1891

                        Duplicated Code

                        Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                        Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                        When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                        Tuning

                        This issue has a mass of 67.

                        We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                        The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                        If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                        See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
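                        For reference, raising the duplication threshold looks roughly like the following `.codeclimate.yml` fragment. This is a sketch: the exact key layout and defaults should be checked against the codeclimate-duplication documentation, and the value 60 is illustrative, not a recommendation.

```yaml
# Sketch: raise the duplication mass threshold for Python so that
# low-mass issues (like several reported below) are no longer flagged.
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60   # illustrative value; default differs per language
```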

                        Refactorings

                        Further Reading

                        Similar blocks of code found in 3 locations. Consider refactoring.
                        Open

                            if self.hps.ic_dim > 0:
                              model_vals['prior_g0_mean'] = prior_g0_mean
                              model_vals['prior_g0_logvar'] = prior_g0_logvar
                              model_vals['post_g0_mean'] = post_g0_mean
                              model_vals['post_g0_logvar'] = post_g0_logvar
                        Severity: Major
                        Found in research/lfads/lfads.py and 2 other locations - About 2 hrs to fix
                        research/lfads/lfads.py on lines 1820..1824
                        research/lfads/lfads.py on lines 1941..1945

                        This issue has a mass of 58.

                        Similar blocks of code found in 3 locations. Consider refactoring.
                        Open

                            if self.hps.ic_dim > 0:
                              model_runs['prior_g0_mean'] = prior_g0_mean
                              model_runs['prior_g0_logvar'] = prior_g0_logvar
                              model_runs['post_g0_mean'] = post_g0_mean
                              model_runs['post_g0_logvar'] = post_g0_logvar
                        Severity: Major
                        Found in research/lfads/lfads.py and 2 other locations - About 2 hrs to fix
                        research/lfads/lfads.py on lines 1725..1729
                        research/lfads/lfads.py on lines 1820..1824

                        This issue has a mass of 58.

                        Similar blocks of code found in 3 locations. Consider refactoring.
                        Open

                            if self.hps.ic_dim > 0:
                              model_runs['prior_g0_mean'] = prior_g0_mean
                              model_runs['prior_g0_logvar'] = prior_g0_logvar
                              model_runs['post_g0_mean'] = post_g0_mean
                              model_runs['post_g0_logvar'] = post_g0_logvar
                        Severity: Major
                        Found in research/lfads/lfads.py and 2 other locations - About 2 hrs to fix
                        research/lfads/lfads.py on lines 1725..1729
                        research/lfads/lfads.py on lines 1941..1945

                        This issue has a mass of 58.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                          def __init__(self, num_units, forget_bias=1.0,
                                       input_weight_scale=1.0, rec_weight_scale=1.0, clip_value=np.inf,
                                       input_collections=None, recurrent_collections=None):
                            """Create a GRU object.
                        
                        
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 2 hrs to fix
                        official/vision/losses/segmentation_losses.py on lines 28..59

                        This issue has a mass of 53.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                                self.atau_summ[c] = \
                                    tf.summary.scalar("AR Autocorrelation taus " + str(c),
                                                      tf.exp(self.prior_zs_ar_con.logataus_1xu[0,c]))
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 2 hrs to fix
                        research/lfads/lfads.py on lines 982..984

                        This issue has a mass of 51.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                                self.pvar_summ[c] = \
                                    tf.summary.scalar("AR Variances " + str(c),
                                                      tf.exp(self.prior_zs_ar_con.logpvars_1xu[0,c]))
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 2 hrs to fix
                        research/lfads/lfads.py on lines 979..981

                        This issue has a mass of 51.

                        Identical blocks of code found in 2 locations. Consider refactoring.
                        Open

                            for op_values in collected_op_values:
                              total_cost += op_values[0]
                              total_recon_cost += op_values[1]
                              total_kl_cost += op_values[2]
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 1581..1584

                        This issue has a mass of 45.

                        Identical blocks of code found in 2 locations. Consider refactoring.
                        Open

                            for op_values in collected_op_values:
                              total_cost += op_values[0]
                              total_recon_cost += op_values[1]
                              total_kl_cost += op_values[2]
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 1253..1256
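                        Both accumulation loops reduce the same three columns, so they can share one function. A hedged sketch, assuming each entry of `collected_op_values` is a `(cost, recon_cost, kl_cost)` triple as the snippets suggest; the function name is hypothetical.

```python
def sum_costs(collected_op_values):
    """Sum (total, recon, kl) costs across collected op values.

    Hypothetical helper replacing the two identical loops.
    """
    total_cost = total_recon_cost = total_kl_cost = 0.0
    for cost, recon_cost, kl_cost in collected_op_values:
        total_cost += cost
        total_recon_cost += recon_cost
        total_kl_cost += kl_cost
    return total_cost, total_recon_cost, total_kl_cost
```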

                        This issue has a mass of 45.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                            if hps.output_dist == 'poisson':
                              # Enforce correct dtype
                              assert np.issubdtype(
                                  datasets[hps.dataset_names[0]]['train_data'].dtype, int), \
                                  "Data dtype must be int for poisson output distribution"
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 326..330

                        This issue has a mass of 42.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                            elif hps.output_dist == 'gaussian':
                              assert np.issubdtype(
                                  datasets[hps.dataset_names[0]]['train_data'].dtype, float), \
                                  "Data dtype must be float for gaussian output distribution"
                              data_dtype = tf.float32
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 320..325
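                        The two branches differ only in the required NumPy kind and the resulting TensorFlow dtype, so a lookup table removes the duplication. The sketch below is numpy-only so it stands alone; in lfads.py the mapped values would be `tf.int32` and `tf.float32`, and all names here are illustrative.

```python
import numpy as np

# Hypothetical mapping from output distribution to (required numpy
# kind, TF dtype name); not taken verbatim from lfads.py.
_DIST_REQUIREMENTS = {
    'poisson': (np.integer, 'int32'),
    'gaussian': (np.floating, 'float32'),
}

def check_data_dtype(output_dist, train_data):
    """Validate train_data's dtype for the chosen output distribution
    and return the matching TF dtype name."""
    if output_dist not in _DIST_REQUIREMENTS:
        raise ValueError("Unknown output_dist: %s" % output_dist)
    required_kind, tf_dtype = _DIST_REQUIREMENTS[output_dist]
    assert np.issubdtype(train_data.dtype, required_kind), (
        "Data dtype must be %s for %s output distribution"
        % (required_kind.__name__, output_dist))
    return tf_dtype
```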

                        This issue has a mass of 42.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                              data_bxtxd = np.pad(data_bxtxd, ((0, hps.batch_size-E), (0, 0), (0, 0)),
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 1626..1628

                        This issue has a mass of 40.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                              if ext_input_bxtxi is not None:
                                ext_input_bxtxi = np.pad(ext_input_bxtxi,
                                                         ((0, hps.batch_size-E), (0, 0), (0, 0)),
                        Severity: Major
                        Found in research/lfads/lfads.py and 1 other location - About 1 hr to fix
                        research/lfads/lfads.py on lines 1624..1624
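                        Both call sites zero-pad the leading (batch) axis of a 3-D array up to `hps.batch_size`, so a shared helper makes the intent explicit. The pad spec is taken from the snippets; the function name is hypothetical.

```python
import numpy as np

def pad_batch_axis(arr_bxtxd, batch_size):
    """Zero-pad axis 0 of an (E, T, D) array up to batch_size rows.

    Hypothetical helper covering both np.pad call sites; arrays that
    already have at least batch_size rows are returned unchanged.
    """
    num_examples = arr_bxtxd.shape[0]
    if num_examples >= batch_size:
        return arr_bxtxd
    return np.pad(arr_bxtxd,
                  ((0, batch_size - num_examples), (0, 0), (0, 0)),
                  mode='constant')
```

Both sites then reduce to one-liners, e.g. `data_bxtxd = pad_batch_axis(data_bxtxd, hps.batch_size)` and, guarded by the `None` check, the same call for `ext_input_bxtxi`.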

                        This issue has a mass of 40.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                                  out_fac_lin_mean = \
                                      init_linear(factors_dim, data_dim, do_bias=True,
                                                  mat_init_value=out_mat_fxc,
                                                  bias_init_value=out_bias_1xc,
                                                  normalized=False,
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 55 mins to fix
                        research/lfads/lfads.py on lines 467..473

                        This issue has a mass of 37.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                                  out_fac_lin_logvar = \
                                      init_linear(factors_dim, data_dim, do_bias=True,
                                                  mat_init_value=mat_init_value,
                                                  bias_init_value=bias_init_value,
                                                  normalized=False,
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 55 mins to fix
                        research/lfads/lfads.py on lines 456..462

                        This issue has a mass of 37.

                        Identical blocks of code found in 2 locations. Consider refactoring.
                        Open

                              if self.hps.co_dim > 0:
                                controller_outputs[es_idx,:,:] = model_values['controller_outputs']
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 45 mins to fix
                        research/lfads/lfads.py on lines 1807..1808

                        This issue has a mass of 35.

                        Identical blocks of code found in 2 locations. Consider refactoring.
                        Open

                              if self.hps.co_dim > 0:
                                controller_outputs[es_idx,:,:] = model_values['controller_outputs']
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 45 mins to fix
                        research/lfads/lfads.py on lines 1925..1926

                        Duplicated Code

                        This issue has a mass of 35. See the Duplicated Code explanation and mass-threshold tuning notes above.
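The two identical blocks flagged above copy one posterior sample's controller outputs into a result array, guarded by the same `co_dim > 0` check. A minimal sketch of the suggested refactoring is to extract that copy into a single helper; the function name and signature here are hypothetical (the original code does this inline on `self`), while the array indexing and the `model_values['controller_outputs']` key come from the snippets.

```python
import numpy as np

def store_controller_outputs(controller_outputs, model_values, es_idx, co_dim):
    """Copy one sample's controller outputs into the accumulator array.

    Calling this from both duplicated sites replaces the repeated
    `if self.hps.co_dim > 0: controller_outputs[es_idx,:,:] = ...` blocks
    with a single, testable implementation.
    """
    if co_dim > 0:
        controller_outputs[es_idx, :, :] = model_values['controller_outputs']
```

Each call site then shrinks to one line, and any future change to the copy logic happens in exactly one place.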

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                            if self.hps.co_dim > 0:
                              controller_outputs = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 35 mins to fix
                        research/lfads/lfads.py on lines 2056..2057

                        Duplicated Code

                        This issue has a mass of 33. See the Duplicated Code explanation and mass-threshold tuning notes above.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                            if hps.co_dim > 0:
                              prior_zs_ar_con = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 35 mins to fix
                        research/lfads/lfads.py on lines 1665..1666

                        Duplicated Code

                        This issue has a mass of 33. See the Duplicated Code explanation and mass-threshold tuning notes above.
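Both similar blocks above repeat the same gather-and-advance idiom: `[np_vals_flat[f] for f in fidxs[ff]]; ff += 1`. One way to deduplicate it, sketched below under the assumption that `fidxs[ff]` is a list of indices into the flat value list, is a helper that returns the gathered values together with the advanced counter (the helper name is hypothetical).

```python
def next_field(np_vals_flat, fidxs, ff):
    """Gather the flat values for field group `ff` and advance the counter.

    Replaces the repeated `[np_vals_flat[f] for f in fidxs[ff]]; ff += 1`
    idiom at both call sites with a single implementation.
    """
    values = [np_vals_flat[f] for f in fidxs[ff]]
    return values, ff + 1
```

Call sites become e.g. `controller_outputs, ff = next_field(np_vals_flat, fidxs, ff)`, keeping the counter bookkeeping in one place.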

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                              con_cell = gen_cell_class(hps.con_dim,
                                                        input_weight_scale=hps.cell_weight_scale,
                                                        rec_weight_scale=hps.cell_weight_scale,
                                                        clip_value=hps.cell_clip_value,
                                                        recurrent_collections=['l2_con_reg'])
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 30 mins to fix
                        research/lfads/lfads.py on lines 671..675

                        Duplicated Code

                        This issue has a mass of 32. See the Duplicated Code explanation and mass-threshold tuning notes above.

                        Similar blocks of code found in 2 locations. Consider refactoring.
                        Open

                            gen_cell = gen_cell_class(hps.gen_dim,
                                                      input_weight_scale=hps.gen_cell_input_weight_scale,
                                                      rec_weight_scale=hps.gen_cell_rec_weight_scale,
                                                      clip_value=hps.cell_clip_value,
                                                      recurrent_collections=['l2_gen_reg'])
                        Severity: Minor
                        Found in research/lfads/lfads.py and 1 other location - About 30 mins to fix
                        research/lfads/lfads.py on lines 658..662

                        Duplicated Code

                        This issue has a mass of 32. See the Duplicated Code explanation and mass-threshold tuning notes above.
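The generator and controller cell constructions flagged above differ only in their dimension, weight scales, and L2 regularization collection. A hedged sketch of the refactoring is a small factory that both call sites share; `make_regularized_cell` is a hypothetical name, and the keyword arguments are the ones visible in the two snippets.

```python
def make_regularized_cell(cell_class, dim, input_scale, rec_scale,
                          clip_value, l2_collection):
    """Build an RNN cell whose recurrent weights go into an L2 collection.

    Covers both call sites above:
      gen_cell = make_regularized_cell(gen_cell_class, hps.gen_dim,
                                       hps.gen_cell_input_weight_scale,
                                       hps.gen_cell_rec_weight_scale,
                                       hps.cell_clip_value, 'l2_gen_reg')
      con_cell = make_regularized_cell(gen_cell_class, hps.con_dim,
                                       hps.cell_weight_scale,
                                       hps.cell_weight_scale,
                                       hps.cell_clip_value, 'l2_con_reg')
    """
    return cell_class(dim,
                      input_weight_scale=input_scale,
                      rec_weight_scale=rec_scale,
                      clip_value=clip_value,
                      recurrent_collections=[l2_collection])
```

Since the mass of each duplicate is near the threshold, the factory also keeps future tweaks (e.g. a new cell kwarg) from re-introducing the divergence risk the DRY note describes.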
