kengz/SLM-Lab

Showing 104 of 104 total issues

Similar blocks of code found in 4 locations. Consider refactoring.

        act_q_preds = q_preds.gather(-1, batch['actions'].long().unsqueeze(-1)).squeeze(-1)
Severity: Major
Found in slm_lab/agent/algorithm/sarsa.py and 3 other locations - About 1 hr to fix
slm_lab/agent/algorithm/dqn.py on lines 97..97
slm_lab/agent/algorithm/dqn.py on lines 200..200
slm_lab/agent/algorithm/sarsa.py on lines 124..124

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge, leaving bugs behind as two similar implementations drift apart in subtle ways.
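
In this case, all four flagged lines select per-action Q-values with the same gather expression, so one DRY fix is a shared helper. A minimal sketch, assuming a hypothetical gather_act_q_preds placed somewhere importable by both dqn.py and sarsa.py (not an existing SLM-Lab function):

    def gather_act_q_preds(q_preds, actions):
        '''Pick out the predicted Q-value of each taken action.
        q_preds: (batch, num_actions) torch tensor; actions: (batch,) action indices.'''
        return q_preds.gather(-1, actions.long().unsqueeze(-1)).squeeze(-1)

Each of the four call sites then reduces to act_q_preds = gather_act_q_preds(q_preds, batch['actions']).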

Tuning

This issue has a mass of 46.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
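
For example, raising the Python mass threshold in .codeclimate.yml would look roughly like this (the key layout follows codeclimate-duplication's documented config format; the threshold value is illustrative):

    version: "2"
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 50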

Similar blocks of code found in 2 locations. Consider refactoring.

            if 'actor_' in k:
                actor_net_spec[k.replace('actor_', '')] = actor_net_spec.pop(k)
                critic_net_spec.pop(k)
Severity: Minor
Found in slm_lab/agent/algorithm/actor_critic.py and 1 other location - About 45 mins to fix
slm_lab/agent/algorithm/actor_critic.py on lines 141..143
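
The 'actor_' branch and its 'critic_' twin (flagged below) differ only in which spec receives the stripped keys, so both can share one parameterized helper. A minimal sketch (the helper name and placement are invented; it assumes, as in the original, that both specs start from the same keys):

    def shift_prefixed_keys(net_spec, other_net_spec, prefix):
        '''Move prefix-ed keys into net_spec (prefix stripped) and drop them from the other spec.'''
        for k in list(net_spec):
            if prefix in k:
                net_spec[k.replace(prefix, '')] = net_spec.pop(k)
                other_net_spec.pop(k)

Called as shift_prefixed_keys(actor_net_spec, critic_net_spec, 'actor_') and shift_prefixed_keys(critic_net_spec, actor_net_spec, 'critic_'), this collapses both duplicated blocks.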

This issue has a mass of 43.

Similar blocks of code found in 3 locations. Consider refactoring.

        max_q_targets = batch['rewards'] + self.gamma * (1 - batch['dones']) * max_next_q_preds
Severity: Major
Found in slm_lab/agent/algorithm/dqn.py and 2 other locations - About 45 mins to fix
slm_lab/agent/algorithm/dqn.py on lines 203..203
slm_lab/agent/algorithm/sarsa.py on lines 125..125
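
All three flagged lines compute the same one-step TD target, reward + gamma * (1 - done) * next-Q, differing only in which next-Q estimate they plug in. A minimal sketch of a shared helper (hypothetical, not an existing SLM-Lab function):

    def calc_q_targets(batch, gamma, next_q_preds):
        '''One-step TD target: r + gamma * (1 - done) * Q_next; done zeroes the bootstrap term.'''
        return batch['rewards'] + gamma * (1 - batch['dones']) * next_q_preds

DQN would pass max_next_q_preds and SARSA would pass act_next_q_preds, collapsing the three duplicates into one definition.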

This issue has a mass of 43.

Similar blocks of code found in 3 locations. Consider refactoring.

        act_q_targets = batch['rewards'] + self.gamma * (1 - batch['dones']) * act_next_q_preds
Severity: Major
Found in slm_lab/agent/algorithm/sarsa.py and 2 other locations - About 45 mins to fix
slm_lab/agent/algorithm/dqn.py on lines 100..100
slm_lab/agent/algorithm/dqn.py on lines 203..203

This issue has a mass of 43.

Similar blocks of code found in 3 locations. Consider refactoring.

        max_q_targets = batch['rewards'] + self.gamma * (1 - batch['dones']) * max_next_q_preds
Severity: Major
Found in slm_lab/agent/algorithm/dqn.py and 2 other locations - About 45 mins to fix
slm_lab/agent/algorithm/dqn.py on lines 100..100
slm_lab/agent/algorithm/sarsa.py on lines 125..125

This issue has a mass of 43.

Similar blocks of code found in 2 locations. Consider refactoring.

            if 'critic_' in k:
                critic_net_spec[k.replace('critic_', '')] = critic_net_spec.pop(k)
                actor_net_spec.pop(k)
Severity: Minor
Found in slm_lab/agent/algorithm/actor_critic.py and 1 other location - About 45 mins to fix
slm_lab/agent/algorithm/actor_critic.py on lines 138..140

This issue has a mass of 43.

Function subproc_worker has a Cognitive Complexity of 13 (exceeds 10 allowed). Consider refactoring.

def subproc_worker(
        pipe, parent_pipe, env_fn_wrapper,
        obs_bufs, obs_shapes, obs_dtypes, keys):
    '''
    Control a single environment instance using IPC and shared memory. Used by ShmemVecEnv.
Severity: Minor
Found in slm_lab/env/vec_env.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
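
As a generic illustration (not SLM-Lab code), the same logic scores lower when guard clauses keep the flow linear instead of nesting:

    def count_positive_nested(rows):
        total = 0
        for row in rows:  # the loop breaks linear flow
            if row is not None:  # condition inside the loop: penalized more for nesting
                if row > 0:  # nesting deepens, so the penalty grows again
                    total += 1
        return total

    def count_positive_flat(rows):
        total = 0
        for row in rows:
            if row is None or row <= 0:
                continue  # single guard clause, no deeper nesting
            total += 1
        return total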

Avoid deeply nested control flow statements.

                        for k, v in minibatch.items():
                            if k not in ('advs', 'v_targets'):
                                minibatch[k] = math_util.venv_pack(v, self.body.env.num_envs)
                    advs, v_targets = minibatch['advs'], minibatch['v_targets']
Severity: Major
Found in slm_lab/agent/algorithm/ppo.py - About 45 mins to fix
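
A common fix is to pull the inner loop out into a named helper, removing one nesting level from the training function. A rough sketch built from the snippet above (the helper name is invented; math_util.venv_pack is the utility already used there):

    def venv_pack_minibatch(minibatch, num_envs):
        '''Re-pack venv-shaped entries of a minibatch, leaving advs/v_targets untouched.'''
        for k, v in minibatch.items():
            if k not in ('advs', 'v_targets'):
                minibatch[k] = math_util.venv_pack(v, num_envs)
        return minibatch

The PPO training loop can then call venv_pack_minibatch(minibatch, self.body.env.num_envs) and read advs, v_targets from the result.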

Function tick has a Cognitive Complexity of 12 (exceeds 10 allowed). Consider refactoring.

def tick(spec, unit):
    '''
    Method to tick lab unit (experiment, trial, session) in meta spec to advance their indices
    Reset lower lab indices to -1 so that they tick to 0
    spec_util.tick(spec, 'session')
Severity: Minor
Found in slm_lab/spec/spec_util.py - About 35 mins to fix

Function plot_trial has a Cognitive Complexity of 12 (exceeds 10 allowed). Consider refactoring.

def plot_trial(trial_spec, trial_metrics, ma=False):
    '''
    Plot the trial graphs:
    - mean_returns, strengths, sample_efficiencies, training_efficiencies, stabilities (with error bar)
    - consistencies (no error bar)
Severity: Minor
Found in slm_lab/lib/viz.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.

        self.state_std = self.state_std * self.alpha + \
            observation.std() * (1 - self.alpha)
Severity: Minor
Found in slm_lab/env/wrapper.py and 1 other location - About 35 mins to fix
slm_lab/env/wrapper.py on lines 298..299
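
Both flagged blocks (this one and its state_mean twin below) apply the same exponential-moving-average update to different statistics, so a tiny helper removes the duplication. A minimal sketch (ema_update is a hypothetical name):

    def ema_update(old, new, alpha):
        '''Exponential moving average: keep alpha of the old value, blend in (1 - alpha) of the new.'''
        return old * alpha + new * (1 - alpha)

The two call sites become self.state_mean = ema_update(self.state_mean, observation.mean(), self.alpha) and self.state_std = ema_update(self.state_std, observation.std(), self.alpha).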

This issue has a mass of 41.

Similar blocks of code found in 2 locations. Consider refactoring.

        self.state_mean = self.state_mean * self.alpha + \
            observation.mean() * (1 - self.alpha)
Severity: Minor
Found in slm_lab/env/wrapper.py and 1 other location - About 35 mins to fix
slm_lab/env/wrapper.py on lines 300..301

This issue has a mass of 41.

Avoid too many return statements within this function.

        return 'multi_binary'
Severity: Major
Found in slm_lab/agent/algorithm/policy_util.py - About 30 mins to fix
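
When a function is a long ladder of returns over a fixed set of cases, a lookup table often reads better than the branch chain. A generic sketch (the mapping below is illustrative, not policy_util's actual logic):

    ACTION_TYPE_MAP = {
        'Discrete': 'discrete',
        'MultiDiscrete': 'multi_discrete',
        'MultiBinary': 'multi_binary',
        'Box': 'continuous',
    }

    def get_action_type(space):
        '''Resolve an action space to a type name with a single return via dict lookup.'''
        try:
            return ACTION_TYPE_MAP[type(space).__name__]
        except KeyError:
            raise TypeError(f'Unsupported action space: {space}')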

Identical blocks of code found in 3 locations. Consider refactoring.

        if self.entropy_coef_spec is not None:
            self.entropy_coef_scheduler = policy_util.VarScheduler(self.entropy_coef_spec)
            self.body.entropy_coef = self.entropy_coef_scheduler.start_val
Severity: Minor
Found in slm_lab/agent/algorithm/ppo.py and 2 other locations - About 30 mins to fix
slm_lab/agent/algorithm/actor_critic.py on lines 102..104
slm_lab/agent/algorithm/reinforce.py on lines 69..71
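
Because the three blocks are byte-for-byte identical, the setup can live once in a shared location such as a method on a common base class (in SLM-Lab, ActorCritic and PPO ultimately extend Reinforce, though the method below is invented):

    def init_entropy_coef_scheduler(self):
        '''Shared setup for the entropy coefficient and its scheduler.'''
        if self.entropy_coef_spec is not None:
            self.entropy_coef_scheduler = policy_util.VarScheduler(self.entropy_coef_spec)
            self.body.entropy_coef = self.entropy_coef_scheduler.start_val

Each algorithm's init would then call self.init_entropy_coef_scheduler() instead of repeating the block.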

This issue has a mass of 40.

Identical blocks of code found in 3 locations. Consider refactoring.

        if self.entropy_coef_spec is not None:
            self.entropy_coef_scheduler = policy_util.VarScheduler(self.entropy_coef_spec)
            self.body.entropy_coef = self.entropy_coef_scheduler.start_val
Severity: Minor
Found in slm_lab/agent/algorithm/actor_critic.py and 2 other locations - About 30 mins to fix
slm_lab/agent/algorithm/ppo.py on lines 106..108
slm_lab/agent/algorithm/reinforce.py on lines 69..71

This issue has a mass of 40.

Identical blocks of code found in 3 locations. Consider refactoring.

        if self.entropy_coef_spec is not None:
            self.entropy_coef_scheduler = policy_util.VarScheduler(self.entropy_coef_spec)
            self.body.entropy_coef = self.entropy_coef_scheduler.start_val
Severity: Minor
Found in slm_lab/agent/algorithm/reinforce.py and 2 other locations - About 30 mins to fix
slm_lab/agent/algorithm/actor_critic.py on lines 102..104
slm_lab/agent/algorithm/ppo.py on lines 106..108

This issue has a mass of 40.

Function init_global_nets has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.

def init_global_nets(algorithm):
    '''
    Initialize global_nets for Hogwild using an identical instance of an algorithm from an isolated Session
    in spec.meta.distributed, specify either:
    - 'shared': global network parameter is shared all the time. In this mode, algorithm local network will be replaced directly by global_net via overriding by identify attribute name
Severity: Minor
Found in slm_lab/agent/net/net_util.py - About 25 mins to fix

Function plot_multi_trial has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.

def plot_multi_trial(trial_metrics_path_list, legend_list, title, graph_prepath, ma=False, name_time_pairs=None, frame_scales=None, palette=None, showlegend=True):
    '''
    Plot multiple trial graphs together
    This method can be used in analysis and also custom plotting by specifying the arguments manually
    @example
Severity: Minor
Found in slm_lab/lib/viz.py - About 25 mins to fix

Function step has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.tracked_reward += reward
        # fix shape by inferring from reward
        if np.isscalar(self.total_reward) and not np.isscalar(reward):
Severity: Minor
Found in slm_lab/env/wrapper.py - About 25 mins to fix

Function init_params has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.

def init_params(module, init_fn):
    '''Initialize module's weights using init_fn, and biases to 0.0'''
    bias_init = 0.0
    classname = util.get_class_name(module)
    if 'Net' in classname:  # skip if it's a net, not pytorch layer
Severity: Minor
Found in slm_lab/agent/net/net_util.py - About 25 mins to fix
