From Computational Cognitive Neuroscience Wiki
Revision as of 01:53, 26 November 2009 by Oreilly (moved CCNLab/dream to CCNLab/glow)


back to CCNLab


The primary computational resource of the lab is the "dream" Linux Beowulf cluster. It has 28 dual-CPU nodes; each node has two 2.4 GHz Pentium 4 Xeon processors, 1 GB of RAM, and a Myrinet fiber-optic network connection.

[Image: Dream sm.jpg: picture of Randy next to Dream]

We run the Maui scheduler on top of PBS for job scheduling (with Mpiexec as the parallel job launcher for PBS), and use the distributed-memory functions in PDP++ to achieve very fast simulation of large neural network models. We purchased our system from Aspen Systems, Inc. We have developed some useful shell scripts for submitting jobs, and below you can find our configuration notes, as well as a patch for PBS that prevents jobs from being killed if they exceed their requested time (who knows how long a job is really going to take?):

  • dmem ftp directory: directory of dmem files.
  • cluster_config_notes.txt: notes on how we configured our cluster.
  • mom_mach_linux.c.patch: patch file for the PBS mom_mach_linux.c file, to prevent killing of jobs that exceed their scheduled time.
  • README.dmem: readme for the PDP++ distributed-memory functions.
  • dm_qsub: a script file for submitting a multi-processor (distributed memory = dm) job.
  • sp_qsub: a script file for submitting a single-processor (sp) job.
  • pb_qsub: a script file for submitting a parallel-batch run job.
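To illustrate what a dm_qsub-style submission looks like, here is a minimal sketch of a wrapper that generates a PBS job script requesting dual-CPU nodes and launching a run with mpiexec. This is not the lab's actual dm_qsub script: the queue defaults, the `pdp++` command-line flags, and the project filename are illustrative assumptions (see the actual scripts and README.dmem above for the real details).

```shell
#!/bin/sh
# Hypothetical sketch of a dm_qsub-style wrapper: it emits a PBS job
# script that requests N dual-CPU nodes (ppn=2, matching the cluster's
# two-Xeon nodes) and launches a distributed-memory run via mpiexec.
# The pdp++ invocation and default filenames are assumptions.
gen_pbs_script() {
    nodes=${1:-4}               # number of dual-CPU nodes to request
    walltime=${2:-24:00:00}     # requested wall-clock limit
    proj=${3:-mynet.proj.gz}    # project file to run (assumed name)
    cat <<EOF
#!/bin/sh
#PBS -l nodes=${nodes}:ppn=2,walltime=${walltime}
#PBS -j oe
cd \$PBS_O_WORKDIR
# mpiexec (the PBS-aware launcher) reads the allocated node list from
# PBS itself, so no explicit -np argument is needed here.
mpiexec pdp++ -nogui ${proj}
EOF
}

# Example: generate a job script for 8 nodes with a 12-hour limit.
# On the cluster this would be piped straight into the queue:
#   gen_pbs_script 8 12:00:00 mynet.proj.gz | qsub
gen_pbs_script 8 12:00:00 mynet.proj.gz
```

Requesting `ppn=2` uses both processors on each node, so an 8-node request yields 16 MPI processes for the distributed-memory simulation.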