mordred-descriptor/mordred


Showing 136 of 136 total issues

Similar blocks of code found in 2 locations. Consider refactoring.

class WNSA(FNSA):
    r"""surface weighted charged partial negative surface area descriptor.

    :type version: int
    :param version: one of :py:attr:`versions`
Severity: Major
Found in mordred/CPSA.py and 1 other location - About 3 hrs to fix
mordred/CPSA.py on lines 262..278

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 63.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

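Applied to the WNSA/WPSA pair flagged above, one way to honor DRY is to generate both near-identical classes from a single parameterized factory. This is only a sketch under stated assumptions: `make_weighted_descriptor` and the stub `FNSA`/`FPSA` bases below are hypothetical stand-ins, not mordred's actual API.

```python
# Hypothetical sketch: WNSA and WPSA differ mainly in their base class
# and docstring, so one factory can build both. FNSA and FPSA are stubs
# standing in for the real mordred base classes.

class FNSA(object):
    """Stub for mordred's fractional negative surface area base."""

class FPSA(object):
    """Stub for mordred's fractional positive surface area base."""

def make_weighted_descriptor(name, base, sign):
    """Create a surface-weighted descriptor class derived from `base`."""
    doc = ("surface weighted charged partial %s surface area descriptor."
           % sign)
    return type(name, (base,), {"__slots__": (), "__doc__": doc})

WNSA = make_weighted_descriptor("WNSA", FNSA, "negative")
WPSA = make_weighted_descriptor("WPSA", FPSA, "positive")
```

With this shape, the knowledge "a weighted descriptor wraps a fractional one" lives in exactly one place, so the two copies can no longer diverge.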

Similar blocks of code found in 2 locations. Consider refactoring.

class WPSA(FPSA):
    r"""surface weighted charged partial positive surface area descriptor.

    :type version: int
    :param version: one of :py:attr:`versions`
Severity: Major
Found in mordred/CPSA.py and 1 other location - About 3 hrs to fix
mordred/CPSA.py on lines 243..259

Duplicated Code

This issue has a mass of 63.

File descriptor.py has 290 lines of code (exceeds 250 allowed). Consider refactoring.

import inspect
import operator
from abc import ABCMeta, abstractmethod
from contextlib import contextmanager
from distutils.version import StrictVersion
Severity: Minor
Found in mordred/_base/descriptor.py - About 2 hrs to fix

File CPSA.py has 290 lines of code (exceeds 250 allowed). Consider refactoring.

r"""charged partial surface area descriptor.

References
    * :doi:`10.1021/ac00220a013`

Severity: Minor
Found in mordred/CPSA.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.

class HBondDonor(HBondBase):
    r"""hydrogen bond donor descriptor(rdkit wrapper)."""

    since = "1.0.0"
    __slots__ = ()
Severity: Major
Found in mordred/HydrogenBond.py and 1 other location - About 2 hrs to fix
mordred/HydrogenBond.py on lines 19..35

Duplicated Code

This issue has a mass of 61.
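The `HBondDonor`/`HBondAcceptor` pair is the same pattern: two class bodies identical except for which counting function they wrap. A hedged sketch of a factory that generates both follows; the `_num_donors`/`_num_acceptors` stubs stand in for the real rdkit calls, and `make_hbond_descriptor` is hypothetical, not mordred's API.

```python
# Hypothetical sketch: generate both hydrogen-bond descriptor classes
# from one factory. The counters below are plain stubs standing in for
# the rdkit wrapper functions the real classes call.

def _num_donors(mol):
    """Stub counter; the real code delegates to an rdkit function."""
    return mol["donors"]

def _num_acceptors(mol):
    """Stub counter; the real code delegates to an rdkit function."""
    return mol["acceptors"]

def make_hbond_descriptor(name, counter, doc):
    def calculate(self, mol):
        return counter(mol)
    return type(name, (object,), {
        "__doc__": doc,
        "__slots__": (),
        "calculate": calculate,
    })

HBondDonor = make_hbond_descriptor(
    "HBondDonor", _num_donors, "hydrogen bond donor descriptor")
HBondAcceptor = make_hbond_descriptor(
    "HBondAcceptor", _num_acceptors, "hydrogen bond acceptor descriptor")
```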

Function main_process has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.

def main_process(
    input, parser, output, nproc, quiet, stream, descriptor, with3D, verbosity
):
    mols = (m for i in input for m in parser(i))

Severity: Minor
Found in mordred/__main__.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
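To make the nesting rule concrete, here is an illustrative before/after pair (not mordred code): flattening nested conditionals with guard clauses is the usual way to bring a score like 21 back toward the threshold, because it removes the "flow breaking structures are nested" penalty while keeping behavior identical.

```python
# Illustrative only: both functions compute the same sum, but the
# nested version accumulates extra complexity for each level of
# nesting, while the guard-clause version keeps one flat level.

def total_nested(items):
    total = 0
    for it in items:          # break in linear flow
        if it is not None:    # nested one level deeper
            if it > 0:        # nested two levels deep: penalized most
                total += it
    return total

def total_flat(items):
    total = 0
    for it in items:
        if it is None or it <= 0:  # guard clause, no extra nesting
            continue
        total += it
    return total
```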

Function get_descriptors_in_module has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.

def get_descriptors_in_module(mdl, submodule=True):
    r"""Get descriptors in module.

    Parameters:
        mdl(module): module to search
Severity: Minor
Found in mordred/_base/calculator.py - About 2 hrs to fix


Similar blocks of code found in 2 locations. Consider refactoring.

class HBondAcceptor(HBondBase):
    r"""hydrogen bond acceptor descriptor(rdkit wrapper)."""

    since = "1.0.0"
    __slots__ = ()
Severity: Major
Found in mordred/HydrogenBond.py and 1 other location - About 2 hrs to fix
mordred/HydrogenBond.py on lines 38..54

Duplicated Code

This issue has a mass of 61.

Similar blocks of code found in 2 locations. Consider refactoring.

    def calculate(self, avec, gmat):
        if self._order == 0:
            return (avec ** 2).sum().astype("float")

        return 0.5 * avec.dot(gmat).dot(avec)
Severity: Major
Found in mordred/Autocorrelation.py and 1 other location - About 2 hrs to fix
mordred/Autocorrelation.py on lines 242..246

Duplicated Code

This issue has a mass of 60.

Similar blocks of code found in 2 locations. Consider refactoring.

    def calculate(self, cavec, gmat):
        if self._order == 0:
            return (cavec ** 2).sum().astype("float")

        return 0.5 * cavec.dot(gmat).dot(cavec)
Severity: Major
Found in mordred/Autocorrelation.py and 1 other location - About 2 hrs to fix
mordred/Autocorrelation.py on lines 175..179

Duplicated Code

This issue has a mass of 60.
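The two `calculate` methods above are identical up to the name of the weight vector (`avec` vs. the centered `cavec`). A hedged refactoring sketch follows: `quadratic_form` and the minimal `ATS`/`ATSC` stand-ins are assumptions using plain lists, not mordred's numpy-based implementation, but they show where the shared logic could live.

```python
# Hypothetical sketch: pull the duplicated quadratic-form code into one
# helper; the two descriptor classes then differ only in how they
# prepare their weight vector. Plain lists stand in for numpy arrays.

def quadratic_form(vec, gmat, order):
    """Order 0: sum of squares; otherwise 0.5 * vec . gmat . vec."""
    if order == 0:
        return float(sum(v * v for v in vec))
    gv = [sum(g * v for g, v in zip(row, vec)) for row in gmat]
    return 0.5 * sum(a * b for a, b in zip(vec, gv))

class ATS(object):
    """Stand-in for the uncentered autocorrelation descriptor."""
    def __init__(self, order):
        self._order = order

    def calculate(self, avec, gmat):
        return quadratic_form(avec, gmat, self._order)

class ATSC(ATS):
    """Stand-in for the centered variant: center, then delegate."""
    def calculate(self, avec, gmat):
        mean = sum(avec) / len(avec)
        cavec = [v - mean for v in avec]
        return quadratic_form(cavec, gmat, self._order)
```

The quadratic form now has a single authoritative home, so a fix to one descriptor cannot silently miss the other.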

Calculator has 25 functions (exceeds 20 allowed). Consider refactoring.

class Calculator(object):
    r"""descriptor calculator.

    Parameters:
        descs: see Calculator.register() method
Severity: Minor
Found in mordred/_base/calculator.py - About 2 hrs to fix

Descriptor has 23 functions (exceeds 20 allowed). Consider refactoring.

class Descriptor(six.with_metaclass(DescriptorMeta, object)):
    r"""Abstract base class of descriptors.

    Attributes:
        mol(rdkit.Mol): target molecule
Severity: Minor
Found in mordred/_base/descriptor.py - About 2 hrs to fix

Function _get_sulfur_contrib has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.

    def _get_sulfur_contrib(cls, atom):
        nH = cls._hydrogen_count(atom)
        cnt = cls._bond_type_count(atom)

        if atom.GetFormalCharge() != 0:
Severity: Minor
Found in mordred/TopoPSA.py - About 2 hrs to fix


File _atomic_property.py has 255 lines of code (exceeds 250 allowed). Consider refactoring.

from __future__ import division

import os

import six
Severity: Minor
Found in mordred/_atomic_property.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.

    @classmethod
    def preset(cls, version):
        if version >= _version_remove_SM1_D:
            return (cls(m) for m in methods if m != SM1)
        else:
Severity: Major
Found in mordred/DistanceMatrix.py and 1 other location - About 2 hrs to fix
mordred/AdjacencyMatrix.py on lines 27..32

Duplicated Code

This issue has a mass of 51.
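`DistanceMatrix.preset` and `AdjacencyMatrix.preset` differ only in which version constant gates the SM1 removal. A hedged sketch of a shared helper follows; `preset_methods`, the integer versions, and the method-name strings are stand-ins for mordred's internals, not its real API.

```python
# Hypothetical sketch: one helper, parameterized on the version cutoff,
# replaces the two copied preset bodies. The names below are stand-ins
# for mordred's method objects and StrictVersion constants.

SM1 = "SM1"
METHODS = ["SpAbs", "SpMax", SM1]

def preset_methods(version, cutoff, methods=METHODS):
    """Return the preset method list, dropping SM1 from `cutoff` on."""
    if version >= cutoff:
        return [m for m in methods if m != SM1]
    return list(methods)
```

Each class's `preset` would then reduce to a one-liner along the lines of `return (cls(m) for m in preset_methods(version, cutoff))`, with only the cutoff constant differing per class.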

Function _calculate_one has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.

    def _calculate_one(self, cxt, desc, reset):
        if desc in self._cache:
            return self._cache[desc]

        if reset:
Severity: Minor
Found in mordred/_base/calculator.py - About 2 hrs to fix


Function from_query has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.

    def from_query(cls, mol, require_3D, explicit_hydrogens, kekulizes, id, config):
        if not isinstance(mol, Chem.Mol):
            raise TypeError("{!r} is not rdkit.Chem.Mol instance".format(mol))

        n_frags = len(Chem.GetMolFrags(mol))
Severity: Minor
Found in mordred/_base/context.py - About 2 hrs to fix


Similar blocks of code found in 2 locations. Consider refactoring.

    @classmethod
    def preset(cls, version):
        if version >= _version_remove_SM1_A:
            return (cls(m) for m in methods if m != SM1)
        else:
Severity: Major
Found in mordred/AdjacencyMatrix.py and 1 other location - About 2 hrs to fix
mordred/DistanceMatrix.py on lines 27..32

Duplicated Code

This issue has a mass of 51.

Similar blocks of code found in 2 locations. Consider refactoring.

class Valence(AdjacencyMatrix):
    __slots__ = ()

    def dependencies(self):
        return {"D": AdjacencyMatrix(self.explicit_hydrogens, self.useBO)}
Severity: Major
Found in mordred/_graph_matrix.py and 1 other location - About 1 hr to fix
mordred/_graph_matrix.py on lines 112..119

Duplicated Code

This issue has a mass of 49.

Function preset has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.

    def preset(cls, version):
        for fused in [False, True]:
            for arom in [None, True, False]:
                for hetero in [None, True]:
                    yield cls(None, False, fused, arom, hetero)
Severity: Minor
Found in mordred/RingCount.py - About 1 hr to fix

