bjmorgan/vasppy

Showing 55 of 55 total issues

File procar.py has 558 lines of code (exceeds 250 allowed). Consider refactoring.
Open

from functools import reduce
from copy import deepcopy
from typing import Optional
import warnings
import math
Severity: Major
Found in vasppy/procar.py - About 1 day to fix

Function main has a Cognitive Complexity of 49 (exceeds 5 allowed). Consider refactoring.
Open

def main():
    supported_flags = Summary.supported_flags
    to_print = [
        "title",
        "status",
Severity: Minor
Found in vasppy/scripts/vasp_summary.py - About 7 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
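The rules above can be illustrated with a small sketch (hypothetical code, not from vasppy; the scoring comments roughly follow the rules rather than reproducing the exact increments):

```python
# Each flow-breaking structure adds to the score, and nesting adds more.
# The two functions below behave identically, but the flattened version
# with guard clauses scores lower.

def report_converged_nested(runs):
    """Nested version: the inner `if`s pay a penalty for their depth."""
    lines = []
    for run in runs:                # +1 (break in linear flow)
        if run["converged"]:        # +2 (nested one level)
            if run["energy"] < 0:   # +3 (nested two levels)
                lines.append(f"{run['name']}: ok")
    return lines

def report_converged_flat(runs):
    """Flattened with a guard clause: same behaviour, lower score."""
    lines = []
    for run in runs:                                    # +1
        if not run["converged"] or run["energy"] >= 0:  # +2
            continue
        lines.append(f"{run['name']}: ok")
    return lines

runs = [
    {"name": "a", "converged": True, "energy": -1.0},
    {"name": "b", "converged": False, "energy": -2.0},
]
assert report_converged_nested(runs) == report_converged_flat(runs) == ["a: ok"]
```

Note that the shorthand `or` condition does not make the flat version more complex under the first rule, while removing a nesting level cheapens every statement inside it.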

Similar blocks of code found in 2 locations. Consider refactoring.
Open

Severity: Major
Found in vasppy/scripts/poscar_to_pimaim.py and 1 other location - About 6 hrs to fix
vasppy/scripts/poscar_to_xtl.py on lines 0..27

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
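The usual fix is to extract the shared logic into a single helper. A hedged sketch, with purely illustrative names (not vasppy's actual functions): two converter scripts that each parse coordinate text and write a different output format share one parsing step instead of carrying two copies of it.

```python
# Single, authoritative representation of the shared parsing step.
def load_rows(text):
    """Split coordinate text into rows of whitespace-separated fields."""
    return [line.split() for line in text.strip().splitlines()]

# Each converter now only owns its formatting, not the parsing.
def to_xtl(text):
    return "\n".join(" ".join(row) for row in load_rows(text))

def to_pimaim(text):
    return "\n".join(",".join(row) for row in load_rows(text))

sample = "0.0 0.0 0.0\n0.5 0.5 0.5"
assert to_xtl(sample) == "0.0 0.0 0.0\n0.5 0.5 0.5"
assert to_pimaim(sample) == "0.0,0.0,0.0\n0.5,0.5,0.5"
```

A bug fixed in `load_rows` is now fixed for both output formats at once, which is exactly the divergence risk the DRY principle guards against.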

Tuning

This issue has a mass of 100.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

Severity: Major
Found in vasppy/scripts/poscar_to_xtl.py and 1 other location - About 6 hrs to fix
vasppy/scripts/poscar_to_pimaim.py on lines 0..27

This issue has a mass of 100.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        interpolate(
            interpolate(
                cube[0, 0, 0], cube[1, 0, 0], r[0]
            ),  # trilinear interpolation => http://en.wikipedia.org/wiki/Trilinear_interpolation
            interpolate(cube[0, 1, 0], cube[1, 1, 0], r[0]),
Severity: Major
Found in vasppy/grid.py and 1 other location - About 4 hrs to fix
vasppy/grid.py on lines 20..23
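The two flagged blocks are the z=0 and z=1 faces of the same trilinear interpolation, so one way to remove the duplication is to factor out a bilinear helper and call it twice. A hedged sketch: `interpolate` below mirrors the linear-interpolation call seen in the snippet, while `bilinear` and `trilinear` are hypothetical names, not vasppy's actual API.

```python
import numpy as np

def interpolate(a, b, r):
    """Linear interpolation between a and b at fractional position r."""
    return a + (b - a) * r

def bilinear(face, rx, ry):
    """Interpolate within one z-face of the cube (a 2x2 array of values)."""
    return interpolate(
        interpolate(face[0, 0], face[1, 0], rx),
        interpolate(face[0, 1], face[1, 1], rx),
        ry,
    )

def trilinear(cube, r):
    """cube: 2x2x2 corner values; r: fractional position in [0, 1]^3."""
    return interpolate(
        bilinear(cube[:, :, 0], r[0], r[1]),  # z = 0 face (first flagged block)
        bilinear(cube[:, :, 1], r[0], r[1]),  # z = 1 face (second flagged block)
        r[2],
    )

cube = np.zeros((2, 2, 2))
cube[1, :, :] = 1.0  # value varies only along x
assert np.isclose(trilinear(cube, (0.25, 0.5, 0.5)), 0.25)
```

With the face interpolation expressed once, the two "similar blocks" collapse into two one-line calls.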

This issue has a mass of 77.

Function plot_pdos has a Cognitive Complexity of 29 (exceeds 5 allowed). Consider refactoring.
Open

    def plot_pdos(
        self,
        ax: Optional[Axes] = None,
        to_plot: Optional[Dict[str, List[str]]] = None,
        colors: Optional[Iterable] = None,
Severity: Minor
Found in vasppy/doscar.py - About 4 hrs to fix


Similar blocks of code found in 2 locations. Consider refactoring.
Open

        interpolate(
            interpolate(cube[0, 0, 1], cube[1, 0, 1], r[0]),
            interpolate(cube[0, 1, 1], cube[1, 1, 1], r[0]),
            r[1],
Severity: Major
Found in vasppy/grid.py and 1 other location - About 4 hrs to fix
vasppy/grid.py on lines 13..18

This issue has a mass of 77.

File doscar.py has 329 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import numpy as np
import pandas as pd  # type: ignore
import matplotlib.pyplot as plt  # type: ignore
from matplotlib.axes import Axes  # type: ignore
from matplotlib.figure import Figure  # type: ignore
Severity: Minor
Found in vasppy/doscar.py - About 3 hrs to fix

Summary has 31 functions (exceeds 20 allowed). Consider refactoring.
Open

class Summary:
    """
    TODO Document Summary class
    """
Severity: Minor
Found in vasppy/summary.py - About 3 hrs to fix

File summary.py has 324 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Summary class and helper methods
# Used for summarising VASP calculations as YAML

from pymatgen.io.vasp.outputs import Vasprun  # type: ignore
from pymatgen.analysis.transition_state import NEBAnalysis  # type: ignore
Severity: Minor
Found in vasppy/summary.py - About 3 hrs to fix

Function output_coordinates_only has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

    def output_coordinates_only(self, coordinate_type="Direct", opts=None):
        prefix = []
        suffix = []
        for i in range(self.coordinates.shape[0]):
            prefix_string = ""
Severity: Minor
Found in vasppy/poscar.py - About 3 hrs to fix


Poscar has 28 functions (exceeds 20 allowed). Consider refactoring.
Open

class Poscar:
    lines_offset = 9

    def __init__(self):
        self.title = "Title"
Severity: Minor
Found in vasppy/poscar.py - About 3 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def coords_from_outcar(filename="OUTCAR"):
    """Finds and returns Cartesian coordinates from the OUTCAR file.

    Args:
        filename (:obj:'str', optional): the name of the ``OUTCAR`` file to be read. Default is `OUTCAR`.
Severity: Major
Found in vasppy/outcar.py and 1 other location - About 3 hrs to fix
vasppy/outcar.py on lines 96..114

This issue has a mass of 64.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def forces_from_outcar(filename="OUTCAR"):
    """Finds and returns forces from the OUTCAR file.

    Args:
        filename (:obj:'str', optional): the name of the ``OUTCAR`` file to be read. Default is `OUTCAR`.
Severity: Major
Found in vasppy/outcar.py and 1 other location - About 3 hrs to fix
vasppy/outcar.py on lines 117..135

This issue has a mass of 64.

Function pdos_select has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

    def pdos_select(
        self,
        atoms: Optional[Union[int, List[int]]] = None,
        spin: Optional[str] = None,
        l: Optional[str] = None,
Severity: Minor
Found in vasppy/doscar.py - About 3 hrs to fix


File poscar.py has 284 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import numpy as np
import sys
import re
import copy
from vasppy import cell
Severity: Minor
Found in vasppy/poscar.py - About 2 hrs to fix

Procar has 24 functions (exceeds 20 allowed). Consider refactoring.
Open

class Procar:
    """
    Object for working with PROCAR data.

    Attributes:
Severity: Minor
Found in vasppy/procar.py - About 2 hrs to fix

Function replicate has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def replicate(self, h, k, l, group=False):
        lattice_scaling = np.array([h, k, l], dtype=float)
        lattice_shift = np.reciprocal(lattice_scaling)
        new_poscar = Poscar()
        new_poscar.title = self.title
Severity: Minor
Found in vasppy/poscar.py - About 2 hrs to fix


Function get_forces_data has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

def get_forces_data(outcar_filename="OUTCAR", convergence=None, warn=False):
    """Parse an OUTCAR file and return forces data, including various summary statistics.

    args:
        outcar_filename (optional, `str`): OUTCAR filename. Default is "OUTCAR".
Severity: Minor
Found in vasppy/scripts/checkforce.py - About 2 hrs to fix


Function potcar_spec has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

def potcar_spec(filename, return_hashes=False):
    """
    Returns a dictionary specifying the pseudopotentials contained in a POTCAR file.

    Args:
Severity: Minor
Found in vasppy/summary.py - About 2 hrs to fix

