I have an input matrix of roughly 240k cells x 6.5k genes.
pySCENIC works fine, so my question is not about a bug but about whether the process can be sped up in a smart way.
On the HPC that I use there is a limit on how long a job can run (maximum 72 hours).
I ran the GRN command with 64 CPUs and 920 GB of RAM and it could not finish within 72 hours. I know I could request more CPUs, but I'm having a hard time getting access to those nodes.
Would you have any recommendations for speeding the process up?
Best to subset your data multiple times (max ~80k cells per subset):

1. Run GRN on each subset against all TFs. This takes ~4 hours with 44 CPUs.
2. Extract the TFs found across all of those runs.
3. Run all cells with the reduced TF list.
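The "extract the TFs and rerun" part of the recipe above could be scripted roughly as follows. This is a minimal sketch, not part of pySCENIC itself; it assumes the default adjacency output format (tab-separated with a header row and a `TF` column), and the file names are hypothetical.

```python
# Sketch: merge the TFs found in per-subset adjacency tables into one reduced
# TF list for the full-data GRN run. Assumes pySCENIC's default adjacency
# output: tab-separated, header row, columns TF / target / importance.
import csv

def collect_tfs(adjacency_paths):
    """Union of the TF column across per-subset adjacency tables."""
    tfs = set()
    for path in adjacency_paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh, delimiter="\t"):
                tfs.add(row["TF"])
    return sorted(tfs)

def write_tf_list(tfs, out_path):
    # pyscenic grn takes a plain text file with one TF name per line
    with open(out_path, "w") as fh:
        fh.write("\n".join(tfs) + "\n")
```

The resulting file can then replace `allTFs_hg38.txt` in the final run over all 240k cells, which shrinks the search space to only the TFs that showed up in at least one subset.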
My command is the following:

```shell
log_dir=/rds/general/project/ukdrmultiomicsproject/live/MAP_analysis/TREM2_enriched_scflow/pySCENIC/Astro/log/
table_dir=/rds/general/project/ukdrmultiomicsproject/live/MAP_analysis/TREM2_enriched_scflow/pySCENIC/Astro/tables/
subcluster=Astro
resources_dir=/rds/general/project/ukdrmultiomicsproject/live/MAP_analysis/TREM2_enriched_scflow/pySCENIC/Astro/resources/
out_dir=/rds/general/project/ukdrmultiomicsproject/live/MAP_analysis/TREM2_enriched_scflow/pySCENIC/Astro/out/

START=$(date)
echo "job started at $START"

ulimit -S -n 4096
mkdir -p "$log_dir"

singularity run docker://aertslab/pyscenic:0.12.0 pyscenic grn \
    "$table_dir/Astro.0.1.tsv" \
    "$resources_dir/allTFs_hg38.txt" \
    --num_workers 64 \
    --transpose \
    -o "$out_dir/$subcluster.adjacencies.tsv" &> "$log_dir/$subcluster.grn.out"
```
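To apply the subsetting recipe to a command like this, one option is a small wrapper that emits one GRN invocation per subset. This is only a sketch: the subset file names (`Astro.sub*.tsv`) are hypothetical placeholders for pre-made cell subsets, and the commands are echoed (dry run) so they can be reviewed or piped into a scheduler; drop the `echo` to execute directly.

```shell
# Dry-run sketch: one `pyscenic grn` command per pre-made cell subset.
# Subset matrices (tables/Astro.sub*.tsv) are assumed to exist already.
num_workers=44

grn_cmd() {
  subset="$1"
  echo singularity run docker://aertslab/pyscenic:0.12.0 pyscenic grn \
    "tables/${subset}.tsv" \
    "resources/allTFs_hg38.txt" \
    --num_workers "$num_workers" \
    --transpose \
    -o "out/${subset}.adjacencies.tsv"
}

for subset in Astro.sub1 Astro.sub2 Astro.sub3; do
  grn_cmd "$subset"
done
```

Each subset run fits comfortably inside a 72-hour wall-clock limit, and the resulting `*.adjacencies.tsv` files feed the TF-extraction step of the recipe.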