IlyaGusev/rulm

Showing 260 of 260 total issues

Similar blocks of code found in 3 locations. Consider refactoring.
Open

for row in load_dataset("IlyaGusev/ru_stackoverflow", split="train"):
    if random.random() < 0.045:
        seeds.append({
            "seed": row["title"],
            "source": "ru_stackoverflow"
Severity: Major
Found in self_instruct/src/data_processing/fetch_chat_seeds.py and 2 other locations - About 1 hr to fix
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 23..27
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 30..34

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 49.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
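For reference, a `.codeclimate.yml` fragment raising the Python mass threshold might look like the sketch below. The key layout follows codeclimate-duplication's documented config format as best recalled here, and the value 60 is purely illustrative; check the engine's documentation before relying on it:

```yaml
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60
```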

Refactorings

Further Reading
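The three similar sampling loops flagged in this file could be collapsed into one helper along these lines. This is a sketch, not code from the repo; the helper name `sample_seeds` is invented for illustration:

```python
import random

def sample_seeds(rows, source, rate, seeds):
    """Append a random fraction of row titles to seeds, tagged with their source."""
    for row in rows:
        if random.random() < rate:
            seeds.append({"seed": row["title"], "source": source})

# Each duplicated block then reduces to a single call, e.g.:
# sample_seeds(load_dataset("IlyaGusev/ru_stackoverflow", split="train"),
#              "ru_stackoverflow", 0.045, seeds)
```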

Similar blocks of code found in 3 locations. Consider refactoring.
Open

for row in load_dataset("its5Q/habr_qna", split="train"):
    if random.random() < 0.025:
        seeds.append({
            "seed": row["title"],
            "source": "habr_qna"
Severity: Major
Found in self_instruct/src/data_processing/fetch_chat_seeds.py and 2 other locations - About 1 hr to fix
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 16..20
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 30..34

Function improve_instructions has 15 arguments (exceeds 4 allowed). Consider refactoring.
Open

def improve_instructions(
Severity: Major
Found in self_instruct/src/data_processing/improve_instructions.py - About 1 hr to fix
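A common fix for a function with this many parameters is to group related ones into a config object. The sketch below is hypothetical; the field names are illustrative and not the actual signature of `improve_instructions`:

```python
from dataclasses import dataclass

@dataclass
class GenerationConfig:
    # Illustrative fields: group the many scalar parameters by concern.
    model_name: str = "gpt-4"
    temperature: float = 1.0
    max_tokens: int = 2048

def improve_instructions_cfg(records, config: GenerationConfig):
    # The function now takes one structured argument instead of 15 scalars,
    # so call sites and default values stay readable.
    return [(record, config.model_name) for record in records]
```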

Function main has 44 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def main(train_path, val_path):
    random.seed(42)

    instruct_records = []
    for row in tqdm(load_dataset("lksy/ru_instruct_gpt4", split="train")):
Severity: Minor
Found in self_instruct/src/data_processing/create_short_chat_set.py - About 1 hr to fix

Function main has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def main(existing_path, output_path, langdetect_threshold: float = 0.8, sim_threshold: float = 0.93):
    lang_detector = FasttextLanguageDetector()
    embedder = Embedder("intfloat/multilingual-e5-base")

    existing_quries = list()
Severity: Minor
Found in self_instruct/src/data_processing/fetch_new_queries.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

Further reading
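As a hypothetical illustration of the nesting rule, the two functions below behave identically, but the guard-clause version scores lower because it avoids nested breaks in the linear flow (both functions are invented examples, not code from the repo):

```python
def pick_titles_nested(rows, min_len):
    # Each "if" nested inside the loop adds to the nesting penalty.
    titles = []
    for row in rows:
        if row:
            title = row.get("title")
            if title:
                if len(title) >= min_len:
                    titles.append(title)
    return titles

def pick_titles_flat(rows, min_len):
    # Same behaviour with early "continue" guards: less nesting,
    # hence lower cognitive complexity.
    titles = []
    for row in rows:
        if not row:
            continue
        title = row.get("title")
        if title is None or len(title) < min_len:
            continue
        titles.append(title)
    return titles
```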

Function predict_muserc has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def predict_muserc(
    split,
    predict_func,
    output_path,
    batch_size: int = 2,
Severity: Minor
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 1 hr to fix

Function predict_rucos has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def predict_rucos(
    split,
    predict_func,
    output_path,
    batch_size: int = 4,
Severity: Minor
Found in self_instruct/src/benchmarks/eval_zs_rsg.py - About 1 hr to fix

Function convert_rsg has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def convert_rsg(split, output_path, tasks: List[str] = ALL_TASKS, use_short: bool = True):
    functions = []
    if "danetqa" in tasks:
        functions.append(get_danetqa(split))
    if "muserc" in tasks:
Severity: Minor
Found in self_instruct/src/data_processing/convert_rsg.py - About 1 hr to fix

Function parse_chat has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def parse_chat(result):
    try:
        chat = json.loads(result)
    except Exception:
        print("Incorrect JSON:", result)
Severity: Minor
Found in self_instruct/src/data_processing/generate_char_chats.py - About 1 hr to fix

Function undup_alpaca has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def undup_alpaca(alpaca_records, num_perm: int = 32, threshold: float = 0.3, debug: bool = False):
    for record in tqdm(alpaca_records, desc="Fingerprinting"):
        record["minhash"] = calc_fingerprint(record["messages"][0]["content"], num_perm=num_perm)

    lsh = MinHashLSH(
Severity: Minor
Found in self_instruct/src/data_processing/create_chat_set.py - About 1 hr to fix

Function main has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def main(
    token,
    agg_output,
    raw_output,
    pools_file,
Severity: Minor
Found in self_instruct/crowd/aggregate.py - About 1 hr to fix

Function __init__ has 14 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Major
Found in data_processing/util.py - About 1 hr to fix

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    with open(train_path, "w") as w:
        for record in train_records:
            w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")
Severity: Major
Found in self_instruct/src/data_processing/fetch_reward.py and 4 other locations - About 1 hr to fix
self_instruct/src/data_processing/create_instruct_set.py on lines 21..23
self_instruct/src/data_processing/create_instruct_set.py on lines 24..26
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 53..55
self_instruct/src/data_processing/fetch_reward.py on lines 30..32
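The five near-identical writing blocks flagged across these files could be replaced by a single JSONL helper along these lines. The name `write_jsonl` is a suggestion, not an existing function in the repo, and the explicit `encoding="utf-8"` is an addition the original excerpts do not have:

```python
import json

def write_jsonl(path, records):
    # One JSON object per line; ensure_ascii=False keeps Cyrillic text readable.
    with open(path, "w", encoding="utf-8") as w:
        for record in records:
            w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")

# Each duplicated block becomes a one-liner, e.g.:
# write_jsonl(train_path, train_records)
# write_jsonl(val_path, val_records)
```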

Similar blocks of code found in 5 locations. Consider refactoring.
Open

with open(val_path, "w") as w:
    for record in val_records:
        w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")
Severity: Major
Found in self_instruct/src/data_processing/create_instruct_set.py and 4 other locations - About 1 hr to fix
self_instruct/src/data_processing/create_instruct_set.py on lines 21..23
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 53..55
self_instruct/src/data_processing/fetch_reward.py on lines 27..29
self_instruct/src/data_processing/fetch_reward.py on lines 30..32

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    with open(val_path, "w") as w:
        for record in val_records:
            w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")
Severity: Major
Found in self_instruct/src/data_processing/fetch_reward.py and 4 other locations - About 1 hr to fix
self_instruct/src/data_processing/create_instruct_set.py on lines 21..23
self_instruct/src/data_processing/create_instruct_set.py on lines 24..26
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 53..55
self_instruct/src/data_processing/fetch_reward.py on lines 27..29

Similar blocks of code found in 5 locations. Consider refactoring.
Open

with open(output_path, "w") as w:
    for record in seeds:
        w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")
Severity: Major
Found in self_instruct/src/data_processing/fetch_chat_seeds.py and 4 other locations - About 1 hr to fix
self_instruct/src/data_processing/create_instruct_set.py on lines 21..23
self_instruct/src/data_processing/create_instruct_set.py on lines 24..26
self_instruct/src/data_processing/fetch_reward.py on lines 27..29
self_instruct/src/data_processing/fetch_reward.py on lines 30..32

Similar blocks of code found in 5 locations. Consider refactoring.
Open

with open(train_path, "w") as w:
    for record in train_records:
        w.write(json.dumps(record, ensure_ascii=False).strip() + "\n")
Severity: Major
Found in self_instruct/src/data_processing/create_instruct_set.py and 4 other locations - About 1 hr to fix
self_instruct/src/data_processing/create_instruct_set.py on lines 24..26
self_instruct/src/data_processing/fetch_chat_seeds.py on lines 53..55
self_instruct/src/data_processing/fetch_reward.py on lines 27..29
self_instruct/src/data_processing/fetch_reward.py on lines 30..32

Function generate_answers has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def generate_answers(
    model_name: str,
    template_path: str,
    input_path: str,
    output_path: str,
Severity: Minor
Found in self_instruct/src/infer_saiga.py - About 1 hr to fix

Function convert_to_native has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def convert_to_native(
    model_name: str,
    output_path: str,
    device: str = "cpu",
    enable_offloading: bool = False
Severity: Minor
Found in self_instruct/src/tools/convert_to_native.py - About 1 hr to fix

Function train has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def train(
    config_file: str,
    train_file: str,
    val_file: str,
    output_dir: str,
Severity: Minor
Found in self_instruct/src/train.py - About 1 hr to fix
