What I really want is a way to emulate SLURM: something I can install that is interactive and reasonably user-friendly.
Original post
I want to test some minimal examples with SLURM, so I am trying to install it on my local machine running Ubuntu 16.04. I followed the most recent slurm install guide I could find, and then I tried to start slurmd with sudo /etc/init.d/slurmd start.
[....] Starting slurmd (via systemctl): slurmd.service
Job for slurmd.service failed because the control process exited with error code.
See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!
I do not know how to interpret the systemctl log:
● slurmd.service - Slurm node daemon
   Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago
  Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)

Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon...
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'.
lsb_release -a gives the following. (Yes, I know, strictly speaking KDE Neon is not exactly Ubuntu.)
No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.11
Release:        16.04
Codename:       xenial
Unlike what the guide says, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl to myself (sketched after the config below). Here is my /etc/slurm-llnl/slurm.conf.
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=linux0
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
CacheGroups=0
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerPlugin=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP
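For reference, the ownership changes mentioned above amounted to something like the following sketch (the recursive -R flag is my own assumption, not taken from the original post):

sudo chown -R wlandau /var/lib/slurm-llnl   # SlurmdSpoolDir and StateSaveLocation live here
sudo chown -R wlandau /var/run/slurm-llnl   # pid files live here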
Follow-up
After rewriting my slurm.conf with the help of @damienfrancois, slurmd now starts. Unfortunately, sinfo hangs when I call it, and I get the same error messages as before.
$ sudo /etc/init.d/slurmctld stop
[ ok ] Stopping slurmctld (via systemctl): slurmctld.service.
$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
$ slurmd -Dvvv
slurmd: fatal: Frontend not configured correctly in slurm.conf. See man slurm.conf look for frontendname.
I then tried to restart the daemons, and slurmd would not start again.
$ sudo /etc/init.d/slurmd start
[....] Starting slurmd (via systemctl): slurmd.service
Job for slurmd.service failed because the control process exited with error code.
See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!
The fix was to make ControlMachine and NodeName match the machine's actual hostname:

ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
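As a sanity check of my own (not part of the original answer), the short hostname can be compared against those settings:

hostname -s   # should print Haggunenon, matching ControlMachine and NodeName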
If you want to start several slurmd daemons to simulate a larger cluster, you need to start slurmd with the -N option (but that requires Slurm to have been built with the --enable-multiple-slurmd configure option). A rough sketch follows.
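This sketch assumes Slurm was built with --enable-multiple-slurmd and that slurm.conf defines extra NodeName entries; the names node1 and node2 are hypothetical, and each emulated node needs its own Port, SlurmdSpoolDir, and log file in slurm.conf:

sudo slurmd -N node1   # start one slurmd pretending to be node1
sudo slurmd -N node2   # start a second slurmd pretending to be node2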
UPDATE. Here is a walkthrough. I set up a virtual machine with Vagrant and VirtualBox (vagrant init ubuntu/xenial64; vagrant up) and then ran the following after vagrant ssh:
ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2
  libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3
  [...]
  python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm
  slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
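Not part of the original walkthrough, but before moving on it can help to confirm that munge authentication works by encoding and decoding a credential locally:

munge -n | unmunge   # should decode the credential and report STATUS: Success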
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.
Finally, it gives me the expected output:
ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   idle ubuntu-xenial
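As a final smoke test (my own addition, not part of the original transcript), a trivial job can be pushed through the scheduler; the file name test.sh is arbitrary:

srun -N1 hostname                                 # runs hostname on the single node via Slurm
printf '#!/bin/bash\nsrun hostname\n' > test.sh   # minimal batch script
sbatch test.sh                                    # should report the submitted job ID
squeue                                            # the job appears briefly, then completes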
If following the exact steps here does not help, try running
sudo slurmctld -Dvvv
and
sudo slurmd -Dvvv
The messages should be explicit enough.