<html>
  <head>
    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body>
<div style="16px" text-align="left">got it, thanks.<br></div><div style="16px" text-align="left"><br></div><div style="16px" text-align="left">Jan 18, 2019, 4:40 PM by ric@email.arizona.edu:<br></div><blockquote class="tutanota_quote" style="border-left: 1px solid #93A3B8; padding-left: 10px; margin-left: 5px;"><div class=""><p class="">We use the University's common authentication for all our HPC nodes, so extending that to the ondemand VM was relatively easy for us via Shibboleth.  That method means we have common User IDs and Group IDs for all systems, which make NFS
 easy.  OOD has an implicit dependency on the compute nodes having access to the same directory path as the OOD node as far as I can tell.<br></p><p class=""> <br></p><p class="">I wouldn't worry about .bashrc or .bash_profile.  The user should not be able to get a command line shell on the OOD node.  If the user opens open a shell window vi a OOD, it's on a cluster node. <br></p><p class="">Ric<br></p><div><p class="">--<br></p><p class=""><b><span class="colour" style="color:black"><span class="size" style="font-size:12pt">Ric Anderson</span></span></b><span class="colour" style="color:black"><span class="size" style="font-size:12pt">| <b>Systems Administrator</b> <img style="width: 0.1562in; height: 0.1562in; max-width: 100px;" id="_x0000_i1028" src="data:image/svg+xml;utf8,<svg version='1.1' viewBox='0 0 512 512' xmlns='http://www.w3.org/2000/svg'><rect width='512' height='512' fill='%23f8f8f8'/><path d='m220 212c0 12.029-9.7597 21.789-21.789 21.789-12.029 0-21.789-9.7597-21.789-21.789s9.7597-21.789 21.789-21.789c12.029 0 21.789 9.7597 21.789 21.789zm116.21 43.578v50.841h-159.79v-21.789l36.315-36.315 18.158 18.158 58.104-58.104zm10.895-79.893h-181.58c-1.9292 0-3.6315 1.7023-3.6315 3.6315v138c0 1.9292 1.7023 3.6315 3.6315 3.6315h181.58c1.9292 0 3.6315-1.7023 3.6315-3.6315v-138c0-1.9292-1.7023-3.6315-3.6315-3.6315zm18.158 3.6315v138c0 9.9867-8.1709 18.158-18.158 18.158h-181.58c-9.9867 0-18.158-8.1709-18.158-18.158v-138c0-9.9867 8.1709-18.158 18.158-18.158h181.58c9.9867 0 18.158 8.1709 18.158 18.158z' fill='%23b4b4b4' stroke-width='.11348'/></svg>" alt="Description: Description: Description: Description: Description: Description: Description: http://redbar.web.arizona.edu/logos/images/thumb_pawprints.gif" width="15" height="15"></span></span><br></p><p class=""><span class="colour" style="color:black">Research And Discovery Tech | HPC Large Systems Support</span><br></p><p class=""><span class="colour" style="color:black">XSEDE Campus Champion</span><br></p><p class=""><u><span class="colour" style="color:black"><a href="mailto:Ric@email.arizona.edu" rel="noopener noreferrer" target="_blank"><span class="colour" style="color:rgb(5, 99, 193)">ric@email.arizona.edu</span></a>         </span></u><u><span class="colour" style="color:rgb(175, 171, 171)">(V):  +1-520-626-1642</span></u><span class="colour" style="color:black"><span class="size" style="font-size:12pt"></span></span><br></p><p class=""><span class="colour" style="color:black"><img style="width: 1.6666in; height: 0.4062in; max-width: 100px;" id="_x0000_i1027" src="data:image/svg+xml;utf8,<svg version='1.1' viewBox='0 0 512 512' xmlns='http://www.w3.org/2000/svg'><rect width='512' height='512' fill='%23f8f8f8'/><path d='m220 212c0 12.029-9.7597 21.789-21.789 21.789-12.029 0-21.789-9.7597-21.789-21.789s9.7597-21.789 21.789-21.789c12.029 0 21.789 9.7597 21.789 21.789zm116.21 43.578v50.841h-159.79v-21.789l36.315-36.315 18.158 18.158 58.104-58.104zm10.895-79.893h-181.58c-1.9292 0-3.6315 1.7023-3.6315 3.6315v138c0 1.9292 1.7023 3.6315 3.6315 3.6315h181.58c1.9292 0 3.6315-1.7023 3.6315-3.6315v-138c0-1.9292-1.7023-3.6315-3.6315-3.6315zm18.158 3.6315v138c0 9.9867-8.1709 18.158-18.158 18.158h-181.58c-9.9867 0-18.158-8.1709-18.158-18.158v-138c0-9.9867 8.1709-18.158 18.158-18.158h181.58c9.9867 0 18.158 8.1709 18.158 18.158z' fill='%23b4b4b4' stroke-width='.11348'/></svg>" alt="cid:image005.png@01D01593.CF7DFA60" width="160" height="39" border="0"></span><span class="colour" style="color:black"><span class="size" 
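<p>In practice that same-path requirement is usually met by mounting one shared export at an identical mount point on the OOD host and on every compute node. A minimal /etc/fstab sketch, where the server name and export path are only placeholders:</p>
<pre>
# /etc/fstab -- identical line on the OOD node and on every Slurm worker
# "nfs-server" and "/export/home" are placeholder names for this sketch
nfs-server:/export/home   /home   nfs   defaults,_netdev   0 0
</pre>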
style="font-size:12pt"></span></span><br></p><p class=""> <br></p></div><p class=""> <br></p><p class=""> <br></p><div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in"><p class=""><b><span class="colour" style="color:black"><span class="size" style="font-size:12pt">From: </span></span></b><span class="colour" style="color:black"><span class="size" style="font-size:12pt">"<a rel="noopener noreferrer" target="_blank" href="mailto:edijh403@tutanota.com">edijh403@tutanota.com</a>" <<a rel="noopener noreferrer" target="_blank" href="mailto:edijh403@tutanota.com">edijh403@tutanota.com</a>><br> <b>Date: </b>Friday, January 18, 2019 at 6:26 AM<br> <b>To: </b>"Anderson, Richard O - (ric)" <<a rel="noopener noreferrer" target="_blank" href="mailto:ric@email.arizona.edu">ric@email.arizona.edu</a>><br> <b>Cc: </b>Ohio Super Computing On Demand Users List <<a rel="noopener noreferrer" target="_blank" href="mailto:ood-users@lists.osc.edu">ood-users@lists.osc.edu</a>><br> <b>Subject: </b>Re: [OOD-users] common NFS for ood data shared with slurm workers</span></span></p></div><div><p class=""> <br></p></div><div><p class="">Ok, thanks, Ric.<br></p></div><div><p class=""> <br></p></div><div><p class="">For the slurm master and the worker nodes it makes sense to have an NFS mounted at their /home<br></p></div><div><p class="">directories because these nodes are very similar (in my case they have at least the same users and the<br></p></div><div><p class="">same OS (Ubuntu)).<br></p></div><div><p class=""> <br></p></div><div><p class="">However, I'm hesitating to also share the ood node's /home because that node uses another OS<br></p></div><div><p class="">(CentOS, because OOD is not yet available as a Debian package) and there are different users on it<br></p></div><div><p class="">(no 'ubuntu' user but a 'centos' user instead). After all, I don't want the NFS to hide /home/centos.<br></p></div><div><p class=""> <br></p></div><div><p class="">So I could mount at /home/ood instead. But then who gives me the guarantee that e.g. .bash_profile,<br></p></div><div><p class="">.bashrc, etc. 
<p>Jan 17, 2019, 6:02 PM by <a href="mailto:ric@email.arizona.edu">ric@email.arizona.edu</a>:</p>
<blockquote style="border:none;border-left:solid #93A3B8 1.0pt;padding:0in 0in 0in 8.0pt;margin-left:3.75pt;margin-top:5.0pt;margin-bottom:5.0pt">
<p>We use a common NFS mount for /home and several other file systems that the compute nodes can access, as users may have files in any or all of those that they need to edit.</p>
<p>Ric</p>
<p>--</p>
<p><b>Ric Anderson</b> | <b>Systems Administrator</b></p>
<p>Research And Discovery Tech | HPC Large Systems Support</p>
<p>XSEDE Campus Champion</p>
<p><a href="mailto:Ric@email.arizona.edu">ric@email.arizona.edu</a> (V): +1-520-626-1642</p>
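<p>On the file-server side, a setup like that typically comes down to a few NFS exports visible to the whole cluster; a minimal sketch with placeholder paths and a placeholder subnet:</p>
<pre>
# /etc/exports on the NFS server (paths and subnet are placeholders)
/export/home       10.0.0.0/24(rw,sync,no_subtree_check)
/export/projects   10.0.0.0/24(rw,sync,no_subtree_check)
# reload the export table after editing:
#   exportfs -ra
</pre>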
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in"><p><b>From: </b>OOD-users &lt;<a href="mailto:ood-users-bounces+ric=email.arizona.edu@lists.osc.edu">ood-users-bounces+ric=email.arizona.edu@lists.osc.edu</a>&gt; on behalf of Ohio Super Computing On Demand Users List &lt;<a href="mailto:ood-users@lists.osc.edu">ood-users@lists.osc.edu</a>&gt;<br>
<b>Reply-To: </b>"<a href="mailto:edijh403@tutanota.com">edijh403@tutanota.com</a>" &lt;<a href="mailto:edijh403@tutanota.com">edijh403@tutanota.com</a>&gt;, Ohio Super Computing On Demand Users List &lt;<a href="mailto:ood-users@lists.osc.edu">ood-users@lists.osc.edu</a>&gt;<br>
<b>Date: </b>Thursday, January 17, 2019 at 9:59 AM<br>
<b>To: </b>Ohio Super Computing On Demand Users List &lt;<a href="mailto:ood-users@lists.osc.edu">ood-users@lists.osc.edu</a>&gt;<br>
<b>Subject: </b>[OOD-users] common NFS for ood data shared with slurm workers</p></div>
<p>Hi all,</p>
<p>When trying to launch a Slurm job from within the OOD dashboard, I get, in slurmd.log:</p>
<pre>
[14.batch] error: Could not open stdout file /home/ood/ondemand/data/sys/myjobs/projects/default/4/slurm-14.out: No such file or directory
[14.batch] error: IO setup failed: No such file or directory
</pre>
<p>Similarly, when trying to launch a Jupyter notebook, I get:</p>
<pre>
[39.batch] error: Could not open stdout file /home/ood/ondemand/data/sys/dashboard/batch_connect/dev/jupyter/output/380b6eec-6d71-4a83-8a5e-20398831668a/output.log: No such file or directory
[39.batch] error: IO setup failed: No such file or directory
</pre>
<p>That's because this path only exists on the OOD node, but not on the Slurm worker nodes. To make the path exist on the OOD node and all Slurm worker nodes, I'd suggest a common NFS share that they all mount. Is that the recommended way to go, or what would you suggest?</p>
<p>Thanks in advance.</p>
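<p>A quick way to check whether a worker node actually sees the same path as the OOD node is to compare the directory from the errors above on both sides (run from a host with the Slurm client tools; -N1 simply requests a single node):</p>
<pre>
# on the OOD node itself:
ls -ld /home/ood/ondemand/data/sys
# and on an allocated worker node, via Slurm:
srun -N1 ls -ld /home/ood/ondemand/data/sys
</pre>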
</blockquote>
</blockquote>
  </body>
</html>