IlyaGusev/rulm


Showing 260 of 260 total issues

Identical blocks of code found in 2 locations. Consider refactoring.
Open

def sha256str(s):
    h = hashlib.sha256()
    h.update(s.encode("utf-8"))
    return h.hexdigest()
Severity: Minor
Found in data_processing/merge.py and 1 other location - About 55 mins to fix
data_processing/exact_undup.py on lines 9..12

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
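For the pair of issues above, the fix is mechanical: `sha256str` is byte-for-byte identical in data_processing/merge.py and data_processing/exact_undup.py, so it can be hoisted into a shared module that both scripts import. A sketch, assuming a new data_processing/common.py (the module name is hypothetical):

```python
import hashlib


def sha256str(s):
    # Single authoritative copy of the helper previously duplicated
    # in data_processing/merge.py and data_processing/exact_undup.py.
    h = hashlib.sha256()
    h.update(s.encode("utf-8"))
    return h.hexdigest()
```

Both scripts would then `from data_processing.common import sha256str` and drop their local copies.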

Tuning

This issue has a mass of 37.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
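As a concrete illustration, raising the Python mass threshold in `.codeclimate.yml` might look like the sketch below; the value 40 is arbitrary, chosen only to show the shape of the configuration:

```yaml
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 40
```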

Refactorings

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def is_question(elem_attribs):
    post_type_id = elem_attribs["PostTypeId"]
    return post_type_id is not None and post_type_id == "1"
Severity: Minor
Found in data_processing/create_stackoverflow.py and 1 other location - About 55 mins to fix
data_processing/create_stackoverflow.py on lines 41..43
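`is_question` and its sibling `is_answer` (lines 41..43 of the same file) differ only in the literal they compare against, so both can delegate to one parameterized predicate. A sketch; `has_post_type` is a hypothetical name:

```python
def has_post_type(elem_attribs, type_id):
    # Generalizes is_question (type_id "1") and is_answer (type_id "2").
    post_type_id = elem_attribs["PostTypeId"]
    return post_type_id is not None and post_type_id == type_id


def is_question(elem_attribs):
    return has_post_type(elem_attribs, "1")


def is_answer(elem_attribs):
    return has_post_type(elem_attribs, "2")
```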

This issue has a mass of 37.

Function main has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def main(
    input_path,
    output_path,
    num_perm
):
Severity: Minor
Found in data_processing/undup.py - About 55 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
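A small sketch (not taken from the repository) of how these rules accumulate; the per-line annotations follow the published Cognitive Complexity rules, where each flow break costs one point and each enclosing level of nesting adds one more:

```python
def count_long_strings(groups):
    total = 0
    for group in groups:        # +1: break in the linear flow
        for s in group:         # +1 for the break, +1 for nesting
            if len(s) > 80:     # +1 for the break, +2 for nesting
                total += 1
    return total
```

Flattening the structure (for example, iterating over `itertools.chain(*groups)`) removes the nesting penalties without changing behavior.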

Further reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def is_answer(elem_attribs):
    post_type_id = elem_attribs["PostTypeId"]
    return post_type_id is not None and post_type_id == "2"
Severity: Minor
Found in data_processing/create_stackoverflow.py and 1 other location - About 55 mins to fix
data_processing/create_stackoverflow.py on lines 36..38

This issue has a mass of 37.

Function main has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def main(output_path):

    with open(output_path, "w") as w:
        with psycopg2.connect("dbname=pikabu user=postgres password=postgres") as connection:
            with connection.cursor(name="stories") as cursor:
Severity: Minor
Found in data_processing/convert_pikabu.py - About 55 mins to fix


Identical blocks of code found in 2 locations. Consider refactoring.
Open

def sha256str(s):
    h = hashlib.sha256()
    h.update(s.encode("utf-8"))
    return h.hexdigest()
Severity: Minor
Found in data_processing/exact_undup.py and 1 other location - About 55 mins to fix
data_processing/merge.py on lines 11..14

This issue has a mass of 37.

Function predict_saiga_k_shots has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_saiga_k_shots(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_tape.py - About 50 mins to fix
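A common remedy for the long parameter lists flagged here and below is to group settings that always travel together into a single object, shrinking the visible argument count without losing any information. A sketch with hypothetical field names, not the actual signature of predict_saiga_k_shots:

```python
from dataclasses import dataclass


@dataclass
class EvalConfig:
    # Hypothetical grouping; the real functions take seven loose arguments.
    model_name: str
    template_path: str
    max_new_tokens: int = 256
    temperature: float = 0.7


def predict(records, config):
    # One settings object replaces several positional parameters.
    return [(record, config.model_name) for record in records]
```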

Function predict_rwsd has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_rwsd(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function predict_terra has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_terra(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function generate_answers has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def generate_answers(
Severity: Major
Found in self_instruct/src/infer_llama3.py - About 50 mins to fix

Function __init__ has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Major
Found in self_instruct/src/data_processing/embedder.py - About 50 mins to fix

Function predict_saiga_zero_shot has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_saiga_zero_shot(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function main has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def main(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function predict_russe has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_russe(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function predict_muserc has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_muserc(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function predict_parus has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_parus(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function main has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def main(
Severity: Major
Found in self_instruct/src/benchmarks/eval_lora_rsg.py - About 50 mins to fix

Function predict_danetqa has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_danetqa(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Function predict_rcb has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

def predict_rcb(
Severity: Major
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 50 mins to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

record = {stories_mapping[k]: v for k, v in record.items() if k in stories_mapping}
Severity: Major
Found in data_processing/convert_pikabu.py and 2 other locations - About 50 mins to fix
data_processing/convert_pikabu.py on lines 142..142
data_processing/convert_yandex_q.py on lines 65..65
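All three flagged lines are the same rename-and-filter comprehension applied to different mappings, so a tiny shared helper removes the duplication. A sketch; `remap` is a hypothetical name:

```python
def remap(record, mapping):
    # Keep only the keys named in `mapping`, renaming each as it goes.
    return {mapping[k]: v for k, v in record.items() if k in mapping}
```

Each call site then becomes, for example, `record = remap(record, stories_mapping)`.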

This issue has a mass of 36.
