KarrLab/datanator

Showing 791 of 791 total issues

Similar blocks of code found in 4 locations. Consider refactoring.
Open

            if i % 20 == 0 and self.verbose:
                print("Processing doc {}".format(i))
Severity: Major
Found in datanator/schema_2/transform.py and 3 other locations - About 30 mins to fix
datanator/data_source/kegg_org_code.py on lines 153..154
datanator/data_source/protein_modification/10_1093_nar_gkw1075.py on lines 51..52
datanator/data_source/uniprot_nosql.py on lines 247..248

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
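
For the four progress-printing blocks flagged above, one way to apply DRY is to extract the check into a shared helper that each caller parameterizes with its own interval and message template. The sketch below is illustrative: the helper's name and location are assumptions, not part of datanator.

    # Hypothetical shared helper, e.g. in a datanator utility module.
    def report_progress(i, interval, verbose, template="Processing doc {}"):
        """Print a progress message every `interval` iterations when verbose."""
        if verbose and i % interval == 0:
            print(template.format(i))

    # Each flagged block then collapses to a single call, for example:
    #     report_progress(i, 20, self.verbose)
    #     report_progress(i, 50, self.verbose, "Processing row {}")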

Tuning

This issue has a mass of 32.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
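
As a sketch, raising the mass threshold for Python might look like the following in .codeclimate.yml; the plugins/duplication layout follows codeclimate-duplication's documented configuration, but verify the exact keys against the current documentation.

    version: "2"
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 40  # raise to report only larger duplicates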

Similar blocks of code found in 4 locations. Consider refactoring.
Open

            if i % 50 == 0 and self.verbose:
                print('Processing row {}'.format(i))
Severity: Major
Found in datanator/data_source/protein_modification/10_1093_nar_gkw1075.py and 3 other locations - About 30 mins to fix
datanator/data_source/kegg_org_code.py on lines 153..154
datanator/data_source/uniprot_nosql.py on lines 247..248
datanator/schema_2/transform.py on lines 42..43

Similar blocks of code found in 4 locations. Consider refactoring.
Open

            if count % 1 == 0 and self.verbose:
                print('Inserting bulk {} ...'.format(count))            
Severity: Major
Found in datanator/data_source/kegg_org_code.py and 3 other locations - About 30 mins to fix
datanator/data_source/protein_modification/10_1093_nar_gkw1075.py on lines 51..52
datanator/data_source/uniprot_nosql.py on lines 247..248
datanator/schema_2/transform.py on lines 42..43

Similar blocks of code found in 4 locations. Consider refactoring.
Open

                        if i % 100 == 0 and self.verbose:
                            print("     Adding kinlaw_id {} into uniprot collection.".format(kinlaw_id))
Severity: Major
Found in datanator/data_source/uniprot_nosql.py and 3 other locations - About 30 mins to fix
datanator/data_source/kegg_org_code.py on lines 153..154
datanator/data_source/protein_modification/10_1093_nar_gkw1075.py on lines 51..52
datanator/schema_2/transform.py on lines 42..43

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        data['ko_number'] = data['ko_number'].astype(str).str.replace(';', '')
Severity: Minor
Found in datanator/data_source/uniprot_nosql.py and 2 other locations - About 30 mins to fix
datanator/data_source/uniprot_nosql.py on lines 87..87
datanator/data_source/uniprot_nosql.py on lines 88..88
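
Because the flagged lines differ only in the column being cleaned, a short loop removes the repetition. This sketch names only the two columns visible in this report; the third flagged column would need to be read from the source file.

    # Strip stray semicolons from each affected column in one place.
    for column in ('ko_number', 'kegg_org_gene'):
        data[column] = data[column].astype(str).str.replace(';', '')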

Identical blocks of code found in 2 locations. Consider refactoring.
Open

        a =  [item.replace('\t', '') for item in line.split('|')[:-1]]
Severity: Minor
Found in datanator/data_source/taxon_tree.py and 1 other location - About 30 mins to fix
datanator/data_source/taxon_tree.py on lines 76..76
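
Since both occurrences parse a pipe-delimited taxonomy dump line, a module-level helper would give the operation a single, named home. The function name below is hypothetical.

    def split_dmp_line(line):
        """Split a pipe-delimited dump line into tab-stripped fields,
        dropping the empty trailing field after the final '|'."""
        return [field.replace('\t', '') for field in line.split('|')[:-1]]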

Identical blocks of code found in 2 locations. Consider refactoring.
Open

            if self.commit_intermediate_results and (i_compound % 100 == 99):
                self.session.commit()
Severity: Minor
Found in datanator/data_source/sabio_rk.py and 1 other location - About 30 mins to fix
datanator/data_source/sabio_rk.py on lines 1708..1709
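
The two flagged blocks differ only in their loop counter, so the batching rule could live in a single small method. The method name and default batch size below are assumptions.

    def _commit_periodically(self, i, batch_size=100):
        """Commit intermediate results every `batch_size` iterations, if enabled."""
        if self.commit_intermediate_results and (i % batch_size == batch_size - 1):
            self.session.commit()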

Identical blocks of code found in 2 locations. Consider refactoring.
Open

        a =  [item.replace('\t', '') for item in line.split('|')[:-1]]
Severity: Minor
Found in datanator/data_source/taxon_tree.py and 1 other location - About 30 mins to fix
datanator/data_source/taxon_tree.py on lines 89..89

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        data['kegg_org_gene'] = data['kegg_org_gene'].astype(str).str.replace(';', '')
Severity: Minor
Found in datanator/data_source/uniprot_nosql.py and 2 other locations - About 30 mins to fix
datanator/data_source/uniprot_nosql.py on lines 87..87
datanator/data_source/uniprot_nosql.py on lines 95..95

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                self.col.update_one({'uniprot_id': doc['uniprot_id']},
                                    {'$set': {'ko_number': ko_number,
                                             'ko_name': ko_name}})
Severity: Minor
Found in datanator/data_source/protein_aggregate.py and 1 other location - About 30 mins to fix
datanator/data_source/rna_halflife/doi_10_1186_s12864_016_3219_8.py on lines 195..197

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            self.collection.update_one({'_id': doc['_id']},
                                       {'$set': {'gene_name': gene_name,
                                                 'protein_name': protein_name}})
Severity: Minor
Found in datanator/data_source/rna_halflife/doi_10_1186_s12864_016_3219_8.py and 1 other location - About 30 mins to fix
datanator/data_source/protein_aggregate.py on lines 111..113

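Both flagged update calls share the same filter-plus-$set shape, so a small utility can express that shape once. The helper name is hypothetical; each call site passes its own collection.

    def set_fields(collection, filter_doc, **fields):
        """Apply a MongoDB $set update to the first document matching filter_doc."""
        collection.update_one(filter_doc, {'$set': fields})

    # The two call sites then become, for example:
    #     set_fields(self.col, {'uniprot_id': doc['uniprot_id']},
    #                ko_number=ko_number, ko_name=ko_name)
    #     set_fields(self.collection, {'_id': doc['_id']},
    #                gene_name=gene_name, protein_name=protein_name)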

Function replace_key_in_similar_compounds has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def replace_key_in_similar_compounds(self):
        query = {}
        projection = {'similar_compounds': 1}
        _, _, col = self.con_db('metabolites_meta')
        docs = col.find(filter=query, projection=projection)
Severity: Minor
Found in datanator/data_source/metabolites_meta_collection.py - About 25 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules (see the sketch after this list):

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
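
For instance, the nested version below pays a nesting penalty that the guard-clause version avoids, even though both do the same work (both functions are illustrative, not taken from datanator):

    def nested(docs, verbose):
        for i, doc in enumerate(docs):     # +1: break in the linear flow
            if verbose:                    # +1 for the branch, +1 more for nesting
                print("Processing doc {}".format(i))

    def flat(docs, verbose):
        if not verbose:                    # +1: guard clause, nothing nested below it
            return
        for i, doc in enumerate(docs):     # +1: break in the linear flow
            print("Processing doc {}".format(i))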

Function fill_meta has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def fill_meta(self):
        """Fill exisiting metabolites with meta information.
        """
        query = {'inchikey': {'$exists': False}}
        count = self.collection.count_documents(query)

Function calc_inchi_formula_connectivity has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def calc_inchi_formula_connectivity(self, structure):
        """ Calculate a searchable structures

        * InChI format
        * Core InChI format
Severity: Minor
Found in datanator/data_source/sabio_rk_nosql.py - About 25 mins to fix

Function load_content has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def load_content(self):
        """ Collects and Parses all data from Pax DB website and adds to MongoDB

        """
        client, db_obj, collection = self.con_db(self.collection)
Severity: Minor
Found in datanator/data_source/pax_nosql.py - About 25 mins to fix

Function process_fastq has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

def process_fastq(experiment_name, sample_name, strain_name, num_fastq_files, read_type, output_directory, temp_directory):

    exp_dirname = "{}/{}".format(output_directory, experiment_name)
    if not os.path.isdir(exp_dirname):
        os.makedirs(exp_dirname)
Severity: Minor
Found in datanator/data_source/process_rna_seq/core.py - About 25 mins to fix

Function fill_species_name has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def fill_species_name(self):
        ncbi_taxon_ids = self.collection.distinct('ncbi_taxonomy_id')
        start = 0
        for i, _id in enumerate(ncbi_taxon_ids[start:]):
            if i == self.max_entries:
Severity: Minor
Found in datanator/data_source/uniprot_nosql.py - About 25 mins to fix

Function fill_trna_primary has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def fill_trna_primary(self, file_location, sheet_name='tRNA', start_row=[0],
                        use_columns='A:H', column_names=['Amino_acid', 'aa_Code', 'aa_Name',
                        'kegg_orthology_id', 'kegg_gene_Name', 'Definition', 'kegg_Pathway_id',
                        'kegg_Pathway_name']):
        """
Severity: Minor
Found in datanator/data_source/rna_modification/rna_modification.py - About 25 mins to fix

Function run has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def run(self):
        # create dict of MODOMICS single character monomer codes
        modomics_short_code_to_monomer = {}
        for monomer in bpforms.rna_alphabet.monomers.values():
            for identifier in monomer.identifiers:
Severity: Minor
Found in datanator/data_source/modomics.py - About 25 mins to fix

Function parse_enzyme_name has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

    def parse_enzyme_name(self, sbml):
        """ Parse the name of an enzyme in SBML for the enzyme name, wild type status, and variant
        description that it contains.

        Args:
Severity: Minor
Found in datanator/data_source/sabio_rk_nosql.py - About 25 mins to fix
