chebpy/chebpy


Showing 32 of 32 total issues

Chebfun has 48 functions (exceeds 20 allowed). Consider refactoring.
Open

class Chebfun:
    def __init__(self, funs):
        self.funs = check_funs(funs)
        self.breakdata = compute_breakdata(self.funs)
        self.transposed = False
Severity: Minor
Found in chebpy/core/chebfun.py - About 6 hrs to fix
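One common response to a "too many functions" smell is to regroup related methods into mixins that the main class composes, so each unit stays reviewable. The sketch below is illustrative only; `_AlgebraMixin`, `_CalculusMixin`, `double`, and `total` are hypothetical names, not chebpy's actual layout:

```python
class _AlgebraMixin:
    """Hypothetical home for arithmetic-style methods."""

    def double(self):
        return self.__class__([2 * f for f in self.funs])


class _CalculusMixin:
    """Hypothetical home for calculus-style methods."""

    def total(self):
        return sum(self.funs)


class Chebfun(_AlgebraMixin, _CalculusMixin):
    """Core container stays small; behavior lives in the mixins."""

    def __init__(self, funs):
        self.funs = list(funs)
```

Each mixin can then be tested on its own, and the per-class function count stays under the linter's threshold.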

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if np.isreal(coeffs).all():
        vals = fft(tmp)
        vals = np.real(vals)
    elif np.isreal(1j * coeffs).all():
        vals = fft(np.imag(tmp))
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 5 hrs to fix
chebpy/core/algorithms.py on lines 281..288

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs as the two similar implementations drift apart in subtle ways).

Tuning

This issue has a mass of 97.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too readily, try raising the threshold; if you suspect it isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if np.isreal(vals).all():
        coeffs = ifft(tmp)
        coeffs = np.real(coeffs)
    elif np.isreal(1j * vals).all():
        coeffs = ifft(np.imag(tmp))
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 5 hrs to fix
chebpy/core/algorithms.py on lines 304..311

This issue has a mass of 97.

Chebtech has 43 functions (exceeds 20 allowed). Consider refactoring.
Open

class Chebtech(Smoothfun, ABC):
    """Abstract base class serving as the template for Chebtech1 and
    Chebtech2 subclasses.

    Chebtech objects always work with first-kind coefficients, so much
Severity: Minor
Found in chebpy/core/chebtech.py - About 5 hrs to fix

File chebtech.py has 349 lines of code (exceeds 250 allowed). Consider refactoring.
Open

from abc import ABC, abstractmethod

import numpy as np

from .smoothfun import Smoothfun
Severity: Minor
Found in chebpy/core/chebtech.py - About 4 hrs to fix

Onefun has 34 functions (exceeds 20 allowed). Consider refactoring.
Open

class Onefun(ABC):
    # --------------------------
    #  alternative constructors
    # --------------------------
    @abstractclassmethod
Severity: Minor
Found in chebpy/core/onefun.py - About 4 hrs to fix

Fun has 34 functions (exceeds 20 allowed). Consider refactoring.
Open

class Fun(ABC):
    # --------------------------
    #  alternative constructors
    # --------------------------
    @abstractclassmethod
Severity: Minor
Found in chebpy/core/fun.py - About 4 hrs to fix

File chebfun.py has 343 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import operator

import numpy as np

from .bndfun import Bndfun
Severity: Minor
Found in chebpy/core/chebfun.py - About 4 hrs to fix

Classicfun has 25 functions (exceeds 20 allowed). Consider refactoring.
Open

class Classicfun(Fun, ABC):
    # --------------------------
    #  alternative constructors
    # --------------------------
    @classmethod
Severity: Minor
Found in chebpy/core/classicfun.py - About 2 hrs to fix

File algorithms.py has 258 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import warnings

import numpy as np

from .ffts import fft, ifft
Severity: Minor
Found in chebpy/core/algorithms.py - About 2 hrs to fix

Function preandpostprocess has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.
Open

def preandpostprocess(f):
    """Pre- and post-processing tasks common to bary and clenshaw"""

    @wraps(f)
    def thewrapper(*args, **kwargs):
Severity: Minor
Found in chebpy/core/decorators.py - About 1 hr to fix
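The flagged function is a decorator that brackets the wrapped call with shared checks. As a generic sketch of that pattern — the empty-input guard and rounding step here are illustrative, not chebpy's actual pre/post tasks:

```python
from functools import wraps


def pre_and_post(f):
    """Illustrative pre/post-processing decorator (hypothetical checks)."""

    @wraps(f)
    def thewrapper(xs):
        if len(xs) == 0:                    # pre: short-circuit empty input
            return []
        out = f(xs)
        return [round(v, 12) for v in out]  # post: tidy rounding noise
    return thewrapper


@pre_and_post
def halve(xs):
    return [x / 2 for x in xs]
```

`@wraps(f)` keeps the wrapped function's name and docstring intact, which matters for the decorated public API.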

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity estimates how difficult it will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow-breaking structures are nested"

Further reading
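The nesting rule is usually the easiest lever: replacing nested conditionals with guard clauses keeps the flow linear, so the nesting increments disappear. A small illustrative before/after:

```python
def classify_nested(x):
    # each nested flow-break adds a nesting increment to the score
    if x is not None:
        if x >= 0:
            if x % 2 == 0:
                return "even"
            else:
                return "odd"
        else:
            return "negative"
    else:
        return "missing"


def classify_flat(x):
    # guard clauses: same behavior, linear flow, lower cognitive complexity
    if x is None:
        return "missing"
    if x < 0:
        return "negative"
    return "even" if x % 2 == 0 else "odd"
```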

Function standard_chop has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def standard_chop(coeffs, tol=None):
    """Chop a Chebyshev series to a given tolerance. This is a Python
    transcription of the algorithm described in:

    J. Aurentz and L.N. Trefethen, Chopping a Chebyshev series (2015)
Severity: Minor
Found in chebpy/core/algorithms.py - About 1 hr to fix
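For intuition only, a drastically simplified version of the chopping idea: keep coefficients up to the last one whose magnitude exceeds the tolerance. The real standard_chop of Aurentz and Trefethen is far more careful (it inspects a monotone envelope of the coefficients and looks for a plateau), so treat this as a toy, not a transcription:

```python
import numpy as np


def naive_chop(coeffs, tol=1e-14):
    """Toy chop: truncate after the last coefficient with |c| > tol."""
    mags = np.abs(np.asarray(coeffs))
    keep = np.nonzero(mags > tol)[0]
    if keep.size == 0:
        return coeffs[:1]          # keep at least one coefficient
    return coeffs[: keep[-1] + 1]
```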


Function __add__ has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def __add__(self, f):
        cls = self.__class__
        if np.isscalar(f):
            if np.iscomplexobj(f):
                dtype = complex
Severity: Minor
Found in chebpy/core/chebtech.py - About 1 hr to fix


Similar blocks of code found in 2 locations. Consider refactoring.
Open

    Fc = np.append(2.0 * fc[:1], (fc[1:], fc[:0:-1]))
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 1 hr to fix
chebpy/core/algorithms.py on lines 241..241
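Since the `Fc` and `Gc` lines build the same mirrored extension of a coefficient vector, a single named helper would give that construction one authoritative home. `mirror_extend` is a hypothetical name:

```python
import numpy as np


def mirror_extend(c):
    """Double the leading entry, then append the tail followed by its
    reverse -- the mirrored extension both call sites construct."""
    return np.append(2.0 * c[:1], (c[1:], c[:0:-1]))
```

Both call sites then reduce to `Fc = mirror_extend(fc)` and `Gc = mirror_extend(gc)`.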

This issue has a mass of 41.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    Gc = np.append(2.0 * gc[:1], (gc[1:], gc[:0:-1]))
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 1 hr to fix
chebpy/core/algorithms.py on lines 240..240

This issue has a mass of 41.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    coeffs[1 : n - 1] = 2 * coeffs[1 : n - 1]
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 1 hr to fix
chebpy/core/algorithms.py on lines 302..302
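This pair of one-liners scales the interior coefficients by reciprocal factors (2 and 0.5); a helper parameterized on the factor would remove the duplication. `scale_interior` is a hypothetical name, and it assumes `n` in the original is the length of the coefficient array:

```python
import numpy as np


def scale_interior(coeffs, factor):
    """Return a copy with every coefficient except the first and last
    multiplied by `factor` (hypothetical helper)."""
    out = np.array(coeffs, dtype=float)
    out[1:-1] *= factor
    return out
```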

This issue has a mass of 39.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    coeffs[1 : n - 1] = 0.5 * coeffs[1 : n - 1]
Severity: Major
Found in chebpy/core/algorithms.py and 1 other location - About 1 hr to fix
chebpy/core/algorithms.py on lines 290..290

This issue has a mass of 39.

Function float_argument has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def float_argument(f):
    """Chebfun classmethod wrapper for __call__: ensure that we provide
    float output for float input and array output otherwise"""

    @wraps(f)
Severity: Minor
Found in chebpy/core/decorators.py - About 55 mins to fix
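The docstring describes a wrapper that mirrors the input's scalarness onto the output. A minimal sketch of that contract — the details below are assumed, not chebpy's actual implementation:

```python
from functools import wraps

import numpy as np


def float_argument(f):
    """Scalar in, float out; array in, array out (illustrative sketch)."""

    @wraps(f)
    def wrapper(x):
        arr = np.asarray(x, dtype=float)
        out = f(np.atleast_1d(arr))
        # a 0-d input means the caller passed a scalar: unwrap the result
        return float(out[0]) if arr.ndim == 0 else out
    return wrapper


@float_argument
def square(x):
    return x ** 2
```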


Function addBinaryOp has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def addBinaryOp(methodname):
    @self_empty()
    def method(self, f, *args, **kwds):
        cls = self.__class__
        if isinstance(f, cls):
Severity: Minor
Found in chebpy/core/classicfun.py - About 55 mins to fix
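The flagged function is a factory that manufactures binary-operator methods so `__add__`, `__mul__`, and friends share one definition. A self-contained sketch of that pattern — the `Box` class and its value semantics are illustrative, not chebpy's:

```python
import operator


class Box:
    """Toy value container standing in for a function class."""

    def __init__(self, vals):
        self.vals = list(vals)


def add_binary_op(cls, name, op):
    """Attach a binary operator method to cls (illustrative factory)."""
    def method(self, other):
        # accept either another instance or a scalar, as the original does
        others = other.vals if isinstance(other, cls) else [other] * len(self.vals)
        return cls([op(a, b) for a, b in zip(self.vals, others)])
    method.__name__ = name
    setattr(cls, name, method)


for name, op in [("__add__", operator.add), ("__mul__", operator.mul)]:
    add_binary_op(Box, name, op)
```

The scalar-vs-instance branch lives in one place, so each additional operator adds no new conditional logic.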


Function chebfun has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def chebfun(f=None, domain=None, n=None):
    """Chebfun constructor"""
    # chebfun()
    if f is None:
        return Chebfun.initempty()
Severity: Minor
Found in chebpy/api.py - About 55 mins to fix
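Constructor-style entry points like this often shed complexity when every input shape is handled by an early return, so no branch nests inside another. The branch set below is illustrative — `build` is a hypothetical function, and only the `f is None` case is taken from the snippet above:

```python
def build(f=None, domain=None, n=None):
    """Dispatch on argument shape with early returns (illustrative)."""
    if f is None:                      # build() -> empty object
        return ("empty",)
    if callable(f):                    # build(fn[, domain][, n])
        return ("adaptive", domain) if n is None else ("fixed", domain, n)
    return ("constant", f, domain)     # build(value[, domain])
```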

