┌───────────────────┐
│ 113 Code Findings │
└───────────────────┘
 
    .actions\assistant.py
    ❯❱ python.lang.security.audit.dynamic-urllib-use-detected.dynamic-urllib-use-detected
          Detected a dynamic value being used with urllib. urllib supports 'file://' schemes, so a dynamic
          value controlled by a malicious actor may allow them to read arbitrary files. Audit urllib calls
          to ensure user data cannot control the URLs, or consider using the 'requests' library instead.
          Details: https://sg.run/dKZZ
 
          427┆ urllib.request.urlretrieve(zip_url, zip_file)
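
          A minimal sketch of the audit the rule asks for: validate the scheme before handing a
          dynamic URL to urllib (the helper name here is hypothetical):

            from urllib.parse import urlparse
            import urllib.request

            def checked_urlretrieve(url: str, dest: str) -> None:
                # Hypothetical guard: refuse file:// and other non-HTTP(S) schemes.
                scheme = urlparse(url).scheme
                if scheme not in ("http", "https"):
                    raise ValueError(f"refusing to fetch URL with scheme {scheme!r}")
                urllib.request.urlretrieve(url, dest)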
 
    .github\actions\pip-wheels\action.yml
   ❯❯❱ yaml.github-actions.security.run-shell-injection.run-shell-injection
          Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an
          attacker to inject their own code into the runner. This would allow them to steal secrets and code.
          `github` context data can have arbitrary user input and should be treated as untrusted. Instead, use
          an intermediate environment variable with `env:` to store the data and use the environment variable
          in the `run:` script. Be sure to double-quote the environment variable, like this: "$ENVVAR".
          Details: https://sg.run/pkzk
 
           46┆ run: |
           47┆   # cat requirements.dump
           48┆   pip wheel -r requirements.dump --prefer-binary \
           49┆     --wheel-dir=".wheels" \
           50┆     --extra-index-url=${{ inputs.torch-url }} -f ${{ inputs.wheel-dir }}
           51┆   ls -lh .wheels/
            ⋮┆----------------------------------------
           56┆ run: |
           57┆   import os, glob
           58┆   wheels = [os.path.basename(p) for p in glob.glob(".wheels/*")]
           59┆   pkgs = [os.path.basename(p) for p in glob.glob("${{ inputs.wheel-dir }}/*")]
           60┆   diff = [w for w in wheels if w not in pkgs]
           61┆   print(diff)
           62┆   with open(os.environ['GITHUB_OUTPUT'], 'a') as fh:
           63┆       print(f'count-new={len(diff)}', file=fh)
            ⋮┆----------------------------------------
           73┆ - run: cp .wheels/* ${{ inputs.wheel-dir }}
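
          The remediation described above, applied to the first flagged step, would look roughly like
          this (a sketch only; the step name is hypothetical, and the flags are copied from the
          finding above):

            - name: Build wheels  # hypothetical step name
              env:
                # untrusted `${{...}}` data is passed through environment variables
                TORCH_URL: ${{ inputs.torch-url }}
                WHEEL_DIR: ${{ inputs.wheel-dir }}
              run: |
                pip wheel -r requirements.dump --prefer-binary \
                  --wheel-dir=".wheels" \
                  --extra-index-url="$TORCH_URL" -f "$WHEEL_DIR"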
 
    .github\actions\pkg-check\action.yml
   ❯❯❱ yaml.github-actions.security.run-shell-injection.run-shell-injection
          Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an
          attacker to inject their own code into the runner. This would allow them to steal secrets and code.
          `github` context data can have arbitrary user input and should be treated as untrusted. Instead, use
          an intermediate environment variable with `env:` to store the data and use the environment variable
          in the `run:` script. Be sure to double-quote the environment variable, like this: "$ENVVAR".
          Details: https://sg.run/pkzk
 
           26┆ run: echo "PACKAGE_NAME=${{ inputs.pkg-name }}" >> $GITHUB_ENV
            ⋮┆----------------------------------------
           55┆ run: |
           56┆   import os, glob, pathlib, shutil
           57┆   # list folders without ending .egg-info
           58┆   ls = glob.glob(os.path.join("*", "src", "*"))
           59┆   dirs = [d for d in ls if os.path.isdir(d) and not d.endswith(".egg-info")]
           60┆   print(dirs)
           61┆   assert len(dirs) == ${{ inputs.nb-dirs }}
           62┆   # cleaning
           63┆   shutil.rmtree(pathlib.Path(dirs[0]).parent.parent)
 
    .github\actions\pkg-install\action.yml
   ❯❯❱ yaml.github-actions.security.run-shell-injection.run-shell-injection
          Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an
          attacker to inject their own code into the runner. This would allow them to steal secrets and code.
          `github` context data can have arbitrary user input and should be treated as untrusted. Instead, use
          an intermediate environment variable with `env:` to store the data and use the environment variable
          in the `run:` script. Be sure to double-quote the environment variable, like this: "$ENVVAR".
          Details: https://sg.run/pkzk
 
           25┆ run: |
           26┆   import os, glob
           27┆
           28┆   lut = {'fabric': 'lightning_fabric', 'pytorch': 'pytorch_lightning'}
           29┆   act_pkg = lut.get('${{inputs.pkg-name}}', 'lightning')
           30┆   pkg_sdist = glob.glob('*.tar.gz')[0]
           31┆   pkg_wheel = glob.glob('*.whl')[0]
           32┆   extra = '${{inputs.pkg-extra}}'
           33┆   extra = f'[{extra}]' if extra else ''
           34┆
             [hid 3 additional lines, adjust with --max-lines-per-finding]
           42┆ run: |
           43┆   pip install "${PKG_WHEEL}${PKG_EXTRA}" ${{ inputs.pip-flags }}
           44┆   pip list | grep lightning
           45┆   python -c "import ${{ env.PKG_IMPORT }}; print(${{ env.PKG_IMPORT }}.__version__)"
            ⋮┆----------------------------------------
           50┆ run: |
           51┆   pip install "${PKG_SOURCE}${PKG_EXTRA}" ${{ inputs.pip-flags }}
           52┆   pip list | grep lightning
           53┆   python -c "import ${{ env.PKG_IMPORT }}; print(${{ env.PKG_IMPORT }}.__version__)"
 
    .github\actions\pkg-publish\action.yml
   ❯❯❱ yaml.github-actions.security.run-shell-injection.run-shell-injection
          Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an
          attacker to inject their own code into the runner. This would allow them to steal secrets and code.
          `github` context data can have arbitrary user input and should be treated as untrusted. Instead, use
          an intermediate environment variable with `env:` to store the data and use the environment variable
          in the `run:` script. Be sure to double-quote the environment variable, like this: "$ENVVAR".
          Details: https://sg.run/pkzk
 
           20┆ - run: ls -lh ${{ inputs.pkg-folder }}
 
    .github\workflows\_legacy-checkpoints.yml
   ❯❯❱ yaml.github-actions.security.run-shell-injection.run-shell-injection
          Using variable interpolation `${{...}}` with `github` context data in a `run:` step could allow an
          attacker to inject their own code into the runner. This would allow them to steal secrets and code.
          `github` context data can have arbitrary user input and should be treated as untrusted. Instead, use
          an intermediate environment variable with `env:` to store the data and use the environment variable
          in the `run:` script. Be sure to double-quote the environment variable, like this: "$ENVVAR".
          Details: https://sg.run/pkzk
 
           75┆ run: pip install "pytorch-lightning==${{ inputs.pl_version }}" --extra-index-
               url="${TORCH_URL}"
            ⋮┆----------------------------------------
           95┆ run: bash generate_checkpoints.sh ${{ inputs.pl_version }}
            ⋮┆----------------------------------------
          102┆ run: |
          103┆   python -c "print('KEEP_DAYS=' + str(30 if '${{ github.event_name
               }}'.startswith('pull_request') else 0))" >> $GITHUB_ENV
          104┆   python -c "print('AWS_RUN=' + str('' if '${{inputs.push_to_s3}}' == 'true' else '--
               dryrun'))" >> $GITHUB_ENV
          105┆
 
    docs\source-pytorch\conf.py
    ❯❱ python.lang.security.audit.dynamic-urllib-use-detected.dynamic-urllib-use-detected
          Detected a dynamic value being used with urllib. urllib supports 'file://' schemes, so a dynamic
          value controlled by a malicious actor may allow them to read arbitrary files. Audit urllib calls
          to ensure user data cannot control the URLs, or consider using the 'requests' library instead.
          Details: https://sg.run/dKZZ
 
          101┆ urllib.request.urlretrieve(f"{URL_RAW_DOCS_HABANA}/{img}", img_)
 
    examples\fabric\dcgan\train_fabric.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           69┆ dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True,
               num_workers=workers)
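
          Per the rule's guidance, the usual change is to let the DataLoader pin host memory itself,
          assuming batches are later copied to a CUDA device (a sketch with a stand-in dataset):

            import torch
            from torch.utils.data import DataLoader, TensorDataset

            dataset = TensorDataset(torch.randn(64, 3, 64, 64))  # stand-in for the real dataset
            dataloader = DataLoader(
                dataset,
                batch_size=8,
                shuffle=True,
                num_workers=2,
                pin_memory=True,  # automatic pinning; enables async host-to-GPU copies
            )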
 
    examples\fabric\dcgan\train_torch.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           68┆ dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True,
               num_workers=workers)
 
    examples\fabric\fp8_distributed_transformer\train.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           46┆ dataloader = DataLoader(dataset, num_workers=8, batch_size=micro_batch_size)
 
    examples\fabric\image_classifier\train_fabric.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           87┆ train_loader = torch.utils.data.DataLoader(
           88┆     train_dataset,
           89┆     batch_size=hparams.batch_size,
           90┆ )
            ⋮┆----------------------------------------
           91┆ test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=hparams.batch_size)
 
    examples\fabric\image_classifier\train_torch.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           70┆ train_loader = torch.utils.data.DataLoader(
           71┆     train_dataset,
           72┆     batch_size=hparams.batch_size,
           73┆ )
            ⋮┆----------------------------------------
           74┆ test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=hparams.batch_size)
 
    examples\fabric\kfold_cv\train_fabric.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          141┆ train_loader = DataLoader(dataset, batch_size=batch_size,
               sampler=SubsetRandomSampler(train_ids))
            ⋮┆----------------------------------------
          142┆ val_loader = DataLoader(dataset, batch_size=batch_size,
               sampler=SubsetRandomSampler(val_ids))
 
    examples\fabric\language_model\train.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           69┆ train_dataloader = DataLoader(train_dataset, batch_size=20, shuffle=True)
            ⋮┆----------------------------------------
           70┆ val_dataloader = DataLoader(val_dataset, batch_size=20, shuffle=False)
            ⋮┆----------------------------------------
           71┆ test_dataloader = DataLoader(test_dataset, batch_size=20, shuffle=False)
 
    examples\fabric\tensor_parallel\train.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           44┆ dataloader = DataLoader(dataset, batch_size=8)
 
    examples\pytorch\basics\autoencoder.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           96┆ images, _ = next(iter(DataLoader(trainer.datamodule.mnist_val,
               batch_size=self.num_samples)))
            ⋮┆----------------------------------------
          167┆ return DataLoader(self.mnist_train, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          170┆ return DataLoader(self.mnist_val, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          173┆ return DataLoader(self.mnist_test, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          176┆ return DataLoader(self.mnist_test, batch_size=self.batch_size)
 
    examples\pytorch\basics\backbone_image_classifier.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          116┆ return DataLoader(self.mnist_train, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          119┆ return DataLoader(self.mnist_val, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          122┆ return DataLoader(self.mnist_test, batch_size=self.batch_size)
            ⋮┆----------------------------------------
          125┆ return DataLoader(self.mnist_test, batch_size=self.batch_size)
 
    examples\pytorch\basics\profiler_example.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           88┆ return torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=True, num_workers=0)
            ⋮┆----------------------------------------
           92┆ return torch.utils.data.DataLoader(valset, batch_size=2, shuffle=True, num_workers=0)
 
    examples\pytorch\basics\transformer.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           48┆ train_dataloader = DataLoader(train_dataset, batch_size=20, shuffle=True)
            ⋮┆----------------------------------------
           49┆ val_dataloader = DataLoader(val_dataset, batch_size=20, shuffle=False)
            ⋮┆----------------------------------------
           50┆ test_dataloader = DataLoader(test_dataset, batch_size=20, shuffle=False)
 
    examples\pytorch\bug_report\bug_report_model.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           47┆ train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
            ⋮┆----------------------------------------
           48┆ val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
            ⋮┆----------------------------------------
           49┆ test_data = DataLoader(RandomDataset(32, 64), batch_size=2)
 
    examples\pytorch\domain_templates\computer_vision_fine_tuning.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          143┆ return DataLoader(dataset=dataset, batch_size=self._batch_size,
               num_workers=self._num_workers, shuffle=train)
 
    examples\pytorch\domain_templates\reinforce_learn_Qnet.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          368┆ return DataLoader(dataset=dataset, batch_size=self.batch_size, sampler=None)
 
    examples\pytorch\domain_templates\reinforce_learn_ppo.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          425┆ return DataLoader(dataset=dataset, batch_size=self.batch_size)
 
    examples\pytorch\domain_templates\semantic_segmentation.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          369┆ return DataLoader(self.trainset, batch_size=self.batch_size, shuffle=True)
            ⋮┆----------------------------------------
          372┆ return DataLoader(self.validset, batch_size=self.batch_size, shuffle=False)
 
    examples\pytorch\fp8_distributed_transformer\train.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           67┆ train_dataloader = DataLoader(dataset, num_workers=8, batch_size=1)
 
    examples\pytorch\servable_module\production.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           50┆ return torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=True, num_workers=0)
            ⋮┆----------------------------------------
           54┆ return torch.utils.data.DataLoader(valset, batch_size=2, shuffle=True, num_workers=0)
 
    examples\pytorch\tensor_parallel\train.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           46┆ return DataLoader(dataset, batch_size=8, num_workers=4)
 
    src\lightning\fabric\accelerators\registry.py
    ❯❱ python.lang.security.audit.non-literal-import.non-literal-import
          Untrusted user input in the `importlib.import_module()` function allows an attacker to load arbitrary
          code. Avoid dynamic values in `importlib.import_module()` or use a whitelist to prevent running
          untrusted code.
          Details: https://sg.run/y6Jk
 
          126┆ module = importlib.import_module(base_module)
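
          A minimal sketch of the whitelist the rule suggests (the allowed set and helper name are
          hypothetical):

            import importlib

            _ALLOWED_MODULES = {"lightning.fabric.accelerators"}  # hypothetical whitelist

            def checked_import(base_module: str):
                if base_module not in _ALLOWED_MODULES:
                    raise ImportError(f"module {base_module!r} is not whitelisted")
                return importlib.import_module(base_module)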
 
    src\lightning\fabric\cli.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          168┆ torch.save(checkpoint, config.output_file)
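
          A minimal sketch of the `state_dict` pattern the rule recommends, with a stand-in model
          (the file name is hypothetical):

            import torch
            import torch.nn as nn

            model = nn.Linear(4, 2)  # stand-in model
            torch.save(model.state_dict(), "checkpoint.pt")         # tensors only, no arbitrary objects
            state = torch.load("checkpoint.pt", weights_only=True)  # refuses pickled code
            model.load_state_dict(state)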
 
    src\lightning\fabric\plugins\environments\lightning.py
 python.lang.security.audit.network.bind.avoid-bind-to-all-interfaces
          Running `socket.bind` on 0.0.0.0 or an empty string could unexpectedly expose the server
          publicly, as it binds to all available interfaces. Consider instead getting the correct
          address from an environment variable or a configuration file.
          Details: https://sg.run/rdln
 
          115┆ s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          116┆ s.bind(("", 0))
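
          A sketch of the suggested remediation, taking the bind address from the environment instead
          of binding to all interfaces (the variable name and default are hypothetical):

            import os
            import socket

            host = os.environ.get("BIND_ADDR", "127.0.0.1")  # hypothetical env var
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind((host, 0))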
 
    src\lightning\fabric\plugins\io\xla.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           74┆ torch.save(cpu_data, path)
 
    src\lightning\fabric\strategies\fsdp.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          490┆ torch.save(metadata, path / _METADATA_FILENAME)
            ⋮┆----------------------------------------
          509┆ torch.save(full_state, path)
            ⋮┆----------------------------------------
          589┆ metadata = torch.load(path / _METADATA_FILENAME)
 
    src\lightning\fabric\strategies\model_parallel.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          396┆ torch.save(converted_state, path)
            ⋮┆----------------------------------------
          403┆ torch.save(metadata, path / _METADATA_FILENAME)
            ⋮┆----------------------------------------
          449┆ metadata = torch.load(path / _METADATA_FILENAME)
            ⋮┆----------------------------------------
          461┆ checkpoint = torch.load(path, mmap=True, map_location="cpu", weights_only=False)
            ⋮┆----------------------------------------
          535┆ state_dict = torch.load(path, mmap=True, map_location="cpu") if _TORCH_GREATER_EQUAL_2_3
               else _lazy_load(path)
 
    src\lightning\fabric\strategies\xla.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          257┆ torch.save(obj, buffer)
            ⋮┆----------------------------------------
          269┆ obj = torch.load(buffer)
 
    src\lightning\fabric\strategies\xla_fsdp.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          391┆ torch.save(obj, buffer)
            ⋮┆----------------------------------------
          403┆ obj = torch.load(buffer)
            ⋮┆----------------------------------------
          567┆ sharded_ckpt = torch.load(file)
            ⋮┆----------------------------------------
          611┆ full_ckpt = torch.load(path)
 
    src\lightning\fabric\utilities\cloud_io.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           48┆ return torch.load(
           49┆     path_or_url,
           50┆     map_location=map_location,  # type: ignore[arg-type] # upstream annotation is not
               correct
           51┆     weights_only=weights_only,
           52┆ )
            ⋮┆----------------------------------------
           61┆ return torch.load(
           62┆     f,
           63┆     map_location=map_location,  # type: ignore[arg-type]
           64┆     weights_only=weights_only,
           65┆ )
            ⋮┆----------------------------------------
           86┆ torch.save(checkpoint, bytesbuffer)
 
    src\lightning\fabric\utilities\consolidate_checkpoint.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           79┆ torch.save(checkpoint, config.output_file)
 
    src\lightning\fabric\utilities\load.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          263┆ extra = torch.load(extra_file, map_location="cpu") if extra_file.is_file() else {}
 
    src\lightning\pytorch\demos\boring_classes.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          145┆ return DataLoader(RandomDataset(32, 64))
            ⋮┆----------------------------------------
          148┆ return DataLoader(RandomDataset(32, 64))
            ⋮┆----------------------------------------
          151┆ return DataLoader(RandomDataset(32, 64))
            ⋮┆----------------------------------------
          154┆ return DataLoader(RandomDataset(32, 64))
            ⋮┆----------------------------------------
          180┆ return DataLoader(self.random_train)
            ⋮┆----------------------------------------
          183┆ return DataLoader(self.random_val)
            ⋮┆----------------------------------------
          186┆ return DataLoader(self.random_test)
            ⋮┆----------------------------------------
          189┆ return DataLoader(self.random_predict)
            ⋮┆----------------------------------------
          214┆ return DataLoader(self.random_train)
            ⋮┆----------------------------------------
          217┆ return DataLoader(self.random_val)
            ⋮┆----------------------------------------
          220┆ return DataLoader(self.random_test)
            ⋮┆----------------------------------------
          223┆ return DataLoader(self.random_predict)
            ⋮┆----------------------------------------
          256┆ combined_train = apply_to_collection(self.train_datasets, Dataset, lambda x:
               DataLoader(x))
            ⋮┆----------------------------------------
          260┆ combined_val = apply_to_collection(self.val_datasets, Dataset, lambda x: DataLoader(x))
            ⋮┆----------------------------------------
          264┆ combined_test = apply_to_collection(self.test_datasets, Dataset, lambda x: DataLoader(x))
            ⋮┆----------------------------------------
          268┆ combined_predict = apply_to_collection(self.predict_datasets, Dataset, lambda x:
               DataLoader(x))
 
    src\lightning\pytorch\demos\lstm.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
           96┆ return DataLoader(dataset, batch_sampler=SequenceSampler(dataset, batch_size=20))
 
    src\lightning\pytorch\demos\mnist_datamodule.py
    ❯❱ python.lang.security.audit.dynamic-urllib-use-detected.dynamic-urllib-use-detected
          Detected a dynamic value being used with urllib. urllib supports 'file://' schemes, so a dynamic
          value controlled by a malicious actor may allow them to read arbitrary files. Audit urllib calls
          to ensure user data cannot control the URLs, or consider using the 'requests' library instead.
          Details: https://sg.run/dKZZ
 
          100┆ urllib.request.urlretrieve(url, fpath)  # noqa: S310
 
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          110┆ res = torch.load(path_data)
 
    src\lightning\pytorch\demos\transformer.py
    ❯❱ trailofbits.python.automatic-memory-pinning.automatic-memory-pinning
          If possible, it is better to rely on automatic pinning in PyTorch to avoid undefined
          behavior and to improve efficiency.
          Details: https://sg.run/jz5N
 
          205┆ return DataLoader(dataset)
 
    src\lightning\pytorch\serve\servable_module_validator.py
 python.lang.security.audit.insecure-transport.requests.request-with-http.request-with-http
          Detected a request using 'http://'. This request will be unencrypted, and attackers could
          eavesdrop on network traffic and obtain sensitive information. Use 'https://' instead.
          Details: https://sg.run/W8J4
 
          107┆ resp = requests.get(f"http://{self.host}:{self.port}/ping")
 
 
            ⋮┆----------------------------------------
          119┆ self.resp = requests.post(f"http://{self.host}:{self.port}/serve", json=payload)
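
          A sketch of the rule's suggestion, assuming the serving endpoint has TLS configured (which
          may not hold for a purely local health check):

            import requests

            def ping(host: str, port: int) -> bool:
                # Assumption: the endpoint serves HTTPS; certificate verification is on by default.
                resp = requests.get(f"https://{host}:{port}/ping", timeout=10)
                return resp.status_code == 200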
 
 
    src\lightning\pytorch\strategies\fsdp.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          574┆ torch.save(checkpoint, path / _METADATA_FILENAME)
            ⋮┆----------------------------------------
          624┆ metadata = torch.load(path / _METADATA_FILENAME)
 
    src\lightning\pytorch\strategies\launchers\multiprocessing.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          241┆ torch.save(callback_metrics, buffer)
            ⋮┆----------------------------------------
          257┆ callback_metrics = torch.load(io.BytesIO(callback_metrics_bytes), weights_only=True)
 
    src\lightning\pytorch\strategies\model_parallel.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          325┆ torch.save(checkpoint, path / _METADATA_FILENAME)
 
    src\lightning\pytorch\strategies\xla.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
          230┆ torch.save(obj, buffer)
            ⋮┆----------------------------------------
          242┆ obj = torch.load(buffer)
 
    src\lightning\pytorch\trainer\connectors\signal_connector.py
   ❯❯❱ python.lang.security.audit.subprocess-shell-true.subprocess-shell-true
          Found 'subprocess' function 'call' with 'shell=True'. This is dangerous because this call will spawn
          the command using a shell process. Doing so propagates current shell settings and variables, which
          makes it much easier for a malicious actor to execute commands. Use 'shell=False' instead.
          Details: https://sg.run/J92w
 
           ▶▶┆ Autofix ▶ False
          100┆ result = call(" ".join(cmd), shell=True)
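
          The fix the rule suggests is to pass the argument list directly, without a shell (a sketch;
          the command shown is hypothetical, not the connector's actual command):

            from subprocess import call

            cmd = ["scontrol", "requeue", "12345"]  # hypothetical argument list
            result = call(cmd)  # shell=False by default; no shell interpolation of cmd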
 
    src\lightning\pytorch\utilities\consolidate_checkpoint.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           30┆ torch.save(checkpoint, config.output_file)
 
    src\lightning\pytorch\utilities\deepspeed.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           96┆ optim_state = torch.load(optim_files[0], map_location=CPU_DEVICE)
            ⋮┆----------------------------------------
           99┆ client_state = torch.load(model_file, map_location=CPU_DEVICE)
            ⋮┆----------------------------------------
          107┆ torch.save(client_state, output_file)
 
    src\lightning\pytorch\utilities\parsing.py
    ❯❱ python.lang.security.deserialization.pickle.avoid-pickle
          Avoid using `pickle`, which is known to lead to code execution vulnerabilities. When unpickling, the
          serialized data could be manipulated to run arbitrary code. Instead, consider serializing the
          relevant data as JSON or a similar text-based serialization format.
          Details: https://sg.run/OPwB
 
           34┆ pickle.dumps(obj)
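
          A sketch of the text-based serialization the rule suggests, for plain data (the payload is
          hypothetical):

            import json

            payload = json.dumps({"lr": 1e-3, "batch_size": 32})  # hypothetical hyperparameters
            restored = json.loads(payload)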
 
    src\lightning\pytorch\utilities\upgrade_checkpoint.py
   ❯❯❱ trailofbits.python.pickles-in-pytorch.pickles-in-pytorch
          Functions reliant on pickle can result in arbitrary code execution. Consider loading from
          `state_dict`, using fickling, or switching to a safer serialization method like ONNX.
          Details: https://sg.run/NwQy
 
           62┆ checkpoint = torch.load(file, map_location=(torch.device("cpu") if args.map_to_cpu else
               None))
            ⋮┆----------------------------------------
           64┆ torch.save(checkpoint, file)