petspats/pyha


Showing 164 of 164 total issues

File redbaron_transforms.py has 915 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import logging
import textwrap
from contextlib import suppress

from parse import parse
Severity: Major
Found in pyha/conversion/redbaron_transforms.py - About 2 days to fix

    File type_transforms.py has 573 lines of code (exceeds 250 allowed). Consider refactoring.
    Open

    import copy
    import inspect
    import logging
    import time
    from collections import deque
    Severity: Major
    Found in pyha/conversion/type_transforms.py - About 1 day to fix

      Function transform_call has a Cognitive Complexity of 62 (exceeds 5 allowed). Consider refactoring.
      Open

      def transform_call(red_node):
          """
          Converts Python style function calls to VHDL style:
          self.d(a) -> d(self, a)
      
      
      Severity: Minor
      Found in pyha/conversion/redbaron_transforms.py - About 1 day to fix

      Cognitive Complexity

      Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

      A method's cognitive complexity is based on a few simple rules:

      • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
      • Code is considered more complex for each "break in the linear flow of the code"
      • Code is considered more complex when "flow breaking structures are nested"

      Further reading
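The three rules above can be illustrated with a toy example (plain Python, not taken from pyha): the same logic written with nested flow-breaks versus flattened with a guard clause and a comprehension.

```python
def count_positives_nested(rows):
    # Each break in linear flow adds complexity, and nesting adds a
    # penalty on top: the inner `if` here costs more than the outer one.
    total = 0
    if rows:                      # +1
        for row in rows:          # +1
            for x in row:         # +2 (nested)
                if x > 0:         # +3 (doubly nested)
                    total += 1
    return total

def count_positives_flat(rows):
    # A guard clause plus language shorthand (a comprehension) keeps
    # every flow-break at the top level, so the score stays low.
    if not rows:                  # +1
        return 0
    return sum(1 for row in rows for x in row if x > 0)
```

Both functions compute the same result; only the second stays readable as the logic grows.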

      Function transform_dynamic_lists has a Cognitive Complexity of 59 (exceeds 5 allowed). Consider refactoring.
      Open

      def transform_dynamic_lists(red_node):
          data = VHDLModule('-', convert_obj)
      
          dynamic_lists = [x for x in data.elems if isinstance(x, VHDLList) and not x.elements_compatible_typed]
          for x in dynamic_lists:
      Severity: Minor
      Found in pyha/conversion/redbaron_transforms.py - About 1 day to fix


      Function simulate has a Cognitive Complexity of 58 (exceeds 5 allowed). Consider refactoring.
      Open

      def simulate(model, *args, simulations=None, conversion_path=None, input_types=None,
                   pipeline_flush='self.DELAY', trace=False):
          """
          Run simulations on model.
      
      
      Severity: Minor
      Found in pyha/simulation/simulation_interface.py - About 1 day to fix


      Function __init__ has a Cognitive Complexity of 51 (exceeds 5 allowed). Consider refactoring.
      Open

          def __init__(self, obj, datamodel=None):
              """ Convert object and all childs to VHDL """
              with RecursiveConverter.in_progress:
                  self.obj = obj
                  self.class_name = obj.__class__.__name__
      Severity: Minor
      Found in pyha/conversion/conversion.py - About 7 hrs to fix


      Identical blocks of code found in 2 locations. Consider refactoring.
      Open

                      for packet_i, pack in enumerate(packets):
                          for i in range(offset):
                              pack[i], pack[i + offset] = pack[i] + pack[i + offset], \
                                                          (pack[i] - pack[i + offset]) * twiddles[packet_i]
      Severity: Major
      Found in pyha/cores/fft/fft_core/r2sdf.py and 1 other location - About 7 hrs to fix
      pyha/cores/fft/fft_core/dev/reference.py on lines 101..104

      Duplicated Code

      Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

      Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

      Tuning

      This issue has a mass of 112.

      We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

      The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

      If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

      See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

      Refactorings

      Further Reading
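One way to resolve this pair of issues would be to hoist the shared butterfly loop into a single helper that both `r2sdf.py` and `dev/reference.py` import. This is a sketch only; the helper name and module placement are assumptions, not pyha's actual layout.

```python
def apply_butterflies(packets, twiddles, offset):
    """In-place radix-2 butterfly over each packet (hypothetical shared helper)."""
    for packet_i, pack in enumerate(packets):
        for i in range(offset):
            # Sum goes to the low half, twiddled difference to the high half.
            pack[i], pack[i + offset] = (
                pack[i] + pack[i + offset],
                (pack[i] - pack[i + offset]) * twiddles[packet_i],
            )
    return packets
```

With this extracted, both call sites shrink to a one-line call and can no longer diverge.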

      Identical blocks of code found in 2 locations. Consider refactoring.
      Open

          for packet_i, pack in enumerate(packets):
              for i in range(offset):
                  pack[i], pack[i + offset] = pack[i] + pack[i + offset], \
                                                     (pack[i] - pack[i + offset]) * twiddles[packet_i]
      Severity: Major
      Found in pyha/cores/fft/fft_core/dev/reference.py and 1 other location - About 7 hrs to fix
      pyha/cores/fft/fft_core/r2sdf.py on lines 143..146


      Sfix has 39 functions (exceeds 20 allowed). Consider refactoring.
      Open

      class Sfix:
          """
          Signed fixed-point type. Default fixed-point format in Pyha is ``Sfix(left=0, right=-17)`` (17 fractional bits + sign)
          , representing values in range [-1, 1] ``(2**0)`` with resolution of 0.0000076 ``(2**-17)``.
      
      
      Severity: Minor
      Found in pyha/common/fixed_point.py - About 5 hrs to fix
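As a quick sanity check on the numbers quoted in the docstring above (plain Python, independent of pyha): the default format has 1 sign bit and 17 fractional bits.

```python
# Default Sfix(left=0, right=-17) format: range [-1, 1) in steps of 2**-17.
resolution = 2.0 ** -17                         # smallest step
smallest = -2.0 ** 0                            # most negative value
largest = 2.0 ** 0 - resolution                 # largest representable value
assert round(resolution, 7) == 0.0000076        # ~7.6e-6, as documented
assert smallest == -1.0 and largest < 1.0
```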

        File core.py has 378 lines of code (exceeds 250 allowed). Consider refactoring.
        Open

        import logging
        import sys
        import time
        import weakref
        from collections import UserList
        Severity: Minor
        Found in pyha/common/core.py - About 5 hrs to fix

          File fixed_point.py has 378 lines of code (exceeds 250 allowed). Consider refactoring.
          Open

          import logging
          import math
          
          import numpy as np
          
          
          Severity: Minor
          Found in pyha/common/fixed_point.py - About 5 hrs to fix

            Similar blocks of code found in 3 locations. Consider refactoring.
            Open

            def test_remez16():
                np.random.seed(0)
                taps = signal.remez(16, [0, 0.1, 0.2, 0.5], [1, 0])
                dut = FIR(taps)
                inp = np.random.uniform(-1, 1, 1024)
            Severity: Major
            Found in pyha/cores/filter/fir/fir.py and 2 other locations - About 4 hrs to fix
            pyha/cores/filter/fir/fir.py on lines 67..74
            pyha/cores/filter/fir/fir.py on lines 77..84
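The three `test_remez*` tests flagged here differ only in seed, tap count, and input length, so a parametrized test is one plausible DRY fix. This is a sketch: the `FIR`/simulation calls from `fir.py` are elided, and only the shared setup is shown.

```python
import numpy as np
import pytest
from scipy import signal

@pytest.mark.parametrize('seed, numtaps, input_len', [
    (0, 16, 1024),   # was test_remez16
    (1, 32, 64),     # was test_remez32
    (2, 128, 128),   # was test_remez128
])
def test_remez(seed, numtaps, input_len):
    np.random.seed(seed)
    taps = signal.remez(numtaps, [0, 0.1, 0.2, 0.5], [1, 0])
    inp = np.random.uniform(-1, 1, input_len)
    # ...build FIR(taps), run the simulations and assert, as in the
    # original three tests
    assert len(taps) == numtaps and len(inp) == input_len
```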

            This issue has a mass of 84.

            Similar blocks of code found in 3 locations. Consider refactoring.
            Open

            def test_remez32():
                np.random.seed(1)
                taps = signal.remez(32, [0, 0.1, 0.2, 0.5], [1, 0])
                dut = FIR(taps)
                inp = np.random.uniform(-1, 1, 64)
            Severity: Major
            Found in pyha/cores/filter/fir/fir.py and 2 other locations - About 4 hrs to fix
            pyha/cores/filter/fir/fir.py on lines 57..64
            pyha/cores/filter/fir/fir.py on lines 77..84


            Similar blocks of code found in 3 locations. Consider refactoring.
            Open

            def test_remez128():
                np.random.seed(2)
                taps = signal.remez(128, [0, 0.1, 0.2, 0.5], [1, 0])
                dut = FIR(taps)
                inp = np.random.uniform(-1, 1, 128)
            Severity: Major
            Found in pyha/cores/filter/fir/fir.py and 2 other locations - About 4 hrs to fix
            pyha/cores/filter/fir/fir.py on lines 57..64
            pyha/cores/filter/fir/fir.py on lines 67..74


            File simulation_interface.py has 366 lines of code (exceeds 250 allowed). Consider refactoring.
            Open

            import logging
            import os
            import shutil
            import subprocess
            import sys
            Severity: Minor
            Found in pyha/simulation/simulation_interface.py - About 4 hrs to fix

              Function transform_auto_resize has a Cognitive Complexity of 30 (exceeds 5 allowed). Consider refactoring.
              Open

              def transform_auto_resize(red_node):
                  """ Auto resize on Sfix assignments
                   Examples (depend on initial Sfix type):
                       self.sfix_reg = a        ->   self.sfix_reg = resize(a, 5, -29, fixed_wrap, fixed_round)
                       self.sfix_list[0] = a    ->   self.sfix_list[0] = resize(a, 0, 0, fixed_saturate, fixed_round)
              Severity: Minor
              Found in pyha/conversion/redbaron_transforms.py - About 4 hrs to fix
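The transformation shown in the docstring can be modelled with a toy resize, independent of pyha (the rounding/saturation names below only loosely mirror the VHDL call, and this is not pyha's implementation):

```python
def resize(val, left, right):
    """Quantize to steps of 2**right and saturate into [-2**left, 2**left)."""
    step = 2.0 ** right
    quantized = round(val / step) * step              # fixed_round
    hi = 2.0 ** left - step
    lo = -2.0 ** left
    return min(max(quantized, lo), hi)                # fixed_saturate
```

With this model, `self.sfix_reg = a` becoming `resize(a, 0, -17, ...)` means an in-range value passes through unchanged while an out-of-range value clips to the register's format.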


              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

              def test_non_symmetric():
                  np.random.seed(0)
                  taps = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07]
                  dut = FIR(taps)
                  inp = np.random.uniform(-1, 1, 128)
              Severity: Major
              Found in pyha/cores/filter/fir/fir.py and 1 other location - About 4 hrs to fix
              pyha/cores/filter/fir/fir.py on lines 37..44

              This issue has a mass of 76.

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

              def test_symmetric():
                  np.random.seed(0)
                  taps = [0.01, 0.02, 0.03, 0.04, 0.03, 0.02, 0.01]
                  dut = FIR(taps)
                  inp = np.random.uniform(-1, 1, 64)
              Severity: Major
              Found in pyha/cores/filter/fir/fir.py and 1 other location - About 4 hrs to fix
              pyha/cores/filter/fir/fir.py on lines 47..54


              Function __setitem__ has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.
              Open

                  def __setitem__(self, i, y):
                      """ Implements auto-resize feature, ie resizes all assigns to Sfix registers.
                      Also implements the register behaviour i.e saves assigned value to shadow variable, that is later used by the '_pyha_update_registers' function.
                      """
                      if hasattr(self.data[0], '_pyha_update_registers'):
              Severity: Minor
              Found in pyha/common/core.py - About 3 hrs to fix


              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

              def test_quadrant_iii():
                  inputs = [-0.234 - 0.92j]
                  expect = [np.abs(inputs), np.angle(inputs) / np.pi]
              
                  dut = ToPolar()
              Severity: Major
              Found in pyha/cores/cordic/to_polar/to_polar.py and 1 other location - About 3 hrs to fix
              pyha/cores/cordic/to_polar/to_polar.py on lines 98..104

              This issue has a mass of 70.
