KarrLab/bpforms

Showing 397 of 397 total issues

Function get_resid_monomer_structure has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def get_resid_monomer_structure(self, name, pdb_filename, ph=None, major_tautomer=False, dearomatize=False):
Severity: Minor
Found in bpforms/alphabet/protein.py - About 35 mins to fix

Function run has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, ph=None, major_tautomer=False, dearomatize=False, path=filename):
Severity: Minor
Found in bpforms/alphabet/dna.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, id=None, uniprot_id=None, org_taxid=None, exp_org_taxid=None, het=None):
Severity: Minor
Found in examples/pdb_analysis.py - About 35 mins to fix

Function build_repairtoire has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def build_repairtoire(self, alphabet, ph=None, major_tautomer=False, dearomatize=False):
Severity: Minor
Found in bpforms/alphabet/dna.py - About 35 mins to fix

Function _form_crosslink has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def _form_crosslink(self, mol, atoms, atom_map, order, stereo):
Severity: Minor
Found in bpforms/core.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, molecule, element, position=None, charge=0, monomer=None):
Severity: Minor
Found in bpforms/core.py - About 35 mins to fix
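
Three of the signatures above (get_resid_monomer_structure, run, and build_repairtoire) share the same trailing trio of keyword arguments: ph, major_tautomer, and dearomatize. A conventional fix for this class of warning is to bundle the recurring options into a parameter object, as in the sketch below. The sketch is illustrative only; ProtonationOptions and AlphabetBuilder are hypothetical names, not part of bpforms.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProtonationOptions:
        """Hypothetical bundle for the recurring protonation options."""
        ph: Optional[float] = None
        major_tautomer: bool = False
        dearomatize: bool = False

    class AlphabetBuilder:  # stand-in for the real bpforms builder classes
        def get_resid_monomer_structure(self, name, pdb_filename, options=None):
            # The ph/major_tautomer/dearomatize trio collapses into a single
            # argument, dropping the signature below the threshold.
            options = options or ProtonationOptions()
            ...

    # Call sites name the options explicitly:
    AlphabetBuilder().get_resid_monomer_structure(
        'Sec', 'sec.pdb', ProtonationOptions(ph=7.4, major_tautomer=True))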

Function run_analyze_uniprot_ids has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def run_analyze_uniprot_ids(filename):
    """ get 1 uniprot id per entry and get the unique ids

    """
    df = pd.read_csv(filename, converters={'uniprot_id': lambda x: str(x)})
Severity: Minor
Found in examples/pdb_analysis.py - About 35 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
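
As a concrete illustration of the second and third rules (a hypothetical example, not code from bpforms), the nested version below is charged once for each break in linear flow and again for nesting those breaks, while the guard-clause version expresses the same logic with a flatter, cheaper control flow:

    def unique_uniprot_ids_nested(entries):
        # Each `if` breaks the linear flow, and nesting them inside the
        # loop adds a further increment for every level of depth.
        ids = []
        for entry in entries:
            if entry is not None:
                if 'uniprot_id' in entry:
                    if entry['uniprot_id'] not in ids:
                        ids.append(entry['uniprot_id'])
        return ids

    def unique_uniprot_ids_flat(entries):
        # Guard clauses keep the flow linear: same behavior, lower
        # cognitive complexity.
        ids = []
        for entry in entries:
            if entry is None or 'uniprot_id' not in entry:
                continue
            if entry['uniprot_id'] not in ids:
                ids.append(entry['uniprot_id'])
        return ids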

Function get_resid_monomer_structure has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def get_resid_monomer_structure(self, name, pdb_filename, ph=None, major_tautomer=False, dearomatize=False):
        """ Get the structure of an amino acid from a PDB file

        Args:
            name (:obj:`str`): name of monomeric form
Severity: Minor
Found in bpforms/alphabet/protein.py - About 35 mins to fix

Function get_query_heterogen has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def get_query_heterogen(entries_org, entries_engineered_in_query):
    """ Get the native and full heterogen set of the query organism and descendants

    """
Severity: Minor
Found in examples/pdb_analysis.py - About 35 mins to fix

Function calc_perc_transformable has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def calc_perc_transformable(entries_org, query_het_set, out_filename_prefix):
    """ Calculate and write percent transformable

    """
    fp = open(out_filename_prefix+'.csv','w')
Severity: Minor
Found in examples/pdb_analysis.py - About 35 mins to fix

Function analyze_het_frequency has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def analyze_het_frequency(query_hets_by_entry, out_filename_prefix):
    """ calculate the frequency of each heterogen
        frequency of heterogen = # of entries in which the heterogen appears / # of total entries
    """
    num_entries = len(query_hets_by_entry)
Severity: Minor
Found in examples/pdb_analysis.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                                  (right['start'], protein['seq'][right['start'] - 1])))
Severity: Minor
Found in examples/pro.py and 1 other location - About 35 mins to fix
examples/pro.py on lines 768..768
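
This fragment and its twin on the neighboring source line (reported separately below) build the same (position, residue) pair for each end of a crosslink. Extracting the expression into a small helper gives it a single authoritative home; the sketch below is hypothetical, with names inferred from the two fragments:

    def crosslink_endpoint(seq, position):
        # Build the (position, residue) pair once; `position` is 1-based,
        # so the residue lives at seq[position - 1].
        return (position, seq[position - 1])

    # Both duplicated call sites would then reduce to:
    # protein['crosslinks'].append((crosslink_endpoint(protein['seq'], left['end']),
    #                               crosslink_endpoint(protein['seq'], right['start'])))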

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency both to continue to replicate and to diverge (leaving bugs as the two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 33.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Function _add_bonds_to_set has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def _add_bonds_to_set(self, mol, r_bond_atoms, l_bond_atoms, atom_map, bond_set, el_table,
                          i_monomer_1=None, i_monomer_2=None):
        for md_atom_1, md_atom_2 in zip(r_bond_atoms, l_bond_atoms):
            if i_monomer_1 is None:
                i_monomer_1 = md_atom_1.monomer
Severity: Minor
Found in bpforms/core.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if modification['monomer'] not in code_freq:
        code_freq[modification['monomer']] = 0
Severity: Minor
Found in examples/pro.py and 1 other location - About 35 mins to fix
examples/pro.py on lines 826..827
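
This block and the nearly identical one reported next both initialize a missing counter key to zero before incrementing it. A standard refactoring is collections.defaultdict, which removes the duplicated membership test entirely; the sample records below are invented to mirror the keys in the snippets:

    from collections import defaultdict

    # Hypothetical records with the keys used in the flagged snippets
    modifications = [
        {'monomer': 'AA0037', 'residue': 'K'},
        {'monomer': 'AA0037', 'residue': 'K'},
        {'monomer': 'AA0039', 'residue': 'R'},
    ]

    code_freq = defaultdict(int)
    canonical_code_freq = defaultdict(int)
    for modification in modifications:
        # Missing keys default to 0, so the duplicated
        # `if key not in dict: dict[key] = 0` blocks disappear.
        code_freq[modification['monomer']] += 1
        canonical_code_freq[modification['residue']] += 1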

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if modification['residue'] not in canonical_code_freq:
        canonical_code_freq[modification['residue']] = 0
Severity: Minor
Found in examples/pro.py and 1 other location - About 35 mins to fix
examples/pro.py on lines 828..829

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    protein['crosslinks'].append(((left['end'], protein['seq'][left['end']-1]),
Severity: Minor
Found in examples/pro.py and 1 other location - About 35 mins to fix
examples/pro.py on lines 769..769

Avoid too many return statements within this function.
Open

    return False  # pragma no cover: case not used by MODOMICS or the RNA Modification Database
Severity: Major
Found in bpforms/alphabet/rna.py - About 30 mins to fix

Avoid too many return statements within this function.
Open

    return  # pragma: no cover # element is always present
Severity: Major
Found in bpforms/alphabet/core.py - About 30 mins to fix

Avoid too many return statements within this function.
Open

    return False
Severity: Major
Found in bpforms/alphabet/protein.py - About 30 mins to fix

Avoid too many return statements within this function.
Open

    return False
Severity: Major
Found in bpforms/alphabet/protein.py - About 30 mins to fix
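
Each of these findings flags an early return in a long validation routine. One common remedy (a hypothetical sketch, not the bpforms code; the monomer fields are invented) is to fold the individual checks into a single boolean expression so the function exits in exactly one place:

    def is_valid_monomer(monomer):
        # Early-exit style: each failed check is its own `return False`,
        # which is the pattern the check above flags.
        if monomer is None:
            return False
        if not monomer.get('id'):
            return False
        if monomer.get('charge', 0) > 2:
            return False
        return True

    def is_valid_monomer_single_return(monomer):
        # Equivalent logic with one return: `and` short-circuits in the
        # same order as the guards above.
        return (
            monomer is not None
            and bool(monomer.get('id'))
            and monomer.get('charge', 0) <= 2
        )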