[Bigjob-users] Cannot access Redis from compute nodes on alamo.

pradeep kumar Mantha pradeepm66 at gmail.com
Tue Jan 31 14:38:02 CST 2012


Now I can see the same problem on other compute nodes; the same BigJob egg shows up on c030 as well:

[pmantha at c030 site-packages]$ hostname
c030
[pmantha at c030 site-packages]$ ls -ltr BigJob-0.3.2-py2.7.egg/
total 32
drwxr-xr-x 2 merzky users 4096 Nov 24 06:51 examples
drwxr-xr-x 2 merzky users 4096 Nov 24 06:51 EGG-INFO
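
One way to check, per node, which BigJob egg Python actually resolves
and whether the node can reach the Redis server at all is a small
probe along these lines (the Redis host below is a placeholder, not
the actual coordination URL):

    import bigjob
    import redis

    # Which installation wins?  A path under the shared CSA
    # site-packages means the stale Nov 24 egg is being picked up.
    print(bigjob.__file__)

    # Can this compute node reach Redis at all?  Substitute the real
    # coordination host; 6379 is the Redis default port.
    r = redis.Redis(host="REDIS_HOST", port=6379)
    print(r.ping())  # redis.exceptions.ConnectionError if unreachable

Run from an interactive job on each suspect node, this separates the
stale-egg problem from an actual network restriction on the Redis port.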




On Tue, Jan 31, 2012 at 1:21 PM, Andre Merzky <andremerzky at gmail.com> wrote:

> Hi,
>
> I did as you suggested, and could delete BigJob on that node.  But I
> am very confident that I did not install on that node separately.  My
> guess would be that the CSA spaces are synced across the different
> nodes from time to time by the sysadmins?  Yaakoub, do you know if
> that is correct?
>
> Best, Andre.
>
>
> On Tue, Jan 31, 2012 at 8:17 PM, pradeep kumar Mantha
> <pradeepm66 at gmail.com> wrote:
> > Hi!
> > I am able to access c079 in the following way.
> >
> >
> > [pmantha at login1 ~]$ qsub -I -lnodes=1
> > qsub: waiting for job 136596.master1.cm.cluster to start
> > qsub: job 136596.master1.cm.cluster ready
> >
> > [pmantha at c079 ~]$ ssh c079
> > Last login: Tue Jan 31 12:28:55 2012 from c080.cm.cluster
> >
> >
> > [pmantha at c079 ~]$ cd /N/soft/SAGA/saga/1.6/gcc-4.1.2/lib/python2.7/site-packages/
> > [pmantha at c079 site-packages]$ ls -ltr
> > total 96
> > drwxr-xr-x 9 merzky users  4096 Nov 24 06:50 saga
> > -rw-r--r-- 1 merzky users  1779 Nov 24 06:51 site.pyc
> > -rw-r--r-- 1 merzky users  2362 Nov 24 06:51 site.py
> > drwxr-xr-x 9 merzky users  4096 Nov 24 06:51 BigJob-0.3.2-py2.7.egg
> > -rw-r--r-- 1 merzky users 37188 Nov 24 06:51 redis-2.2.4-py2.7.egg
> > drwxr-xr-x 4 merzky users  4096 Nov 24 06:51 virtualenv-1.6.4-py2.7.egg
> > -rw-r--r-- 1 merzky users 12888 Nov 24 06:51 threadpool-1.2.7-py2.7.egg
> > -rw-r--r-- 1 merzky users 13846 Nov 24 06:51 uuid-1.30-py2.7.egg
> > -rw-r--r-- 1 merzky users   314 Nov 24 06:51 easy-install.pth
> > [pmantha at c079 site-packages]$
> >
> > thanks
> > pradeep
> >
> >
> > On Tue, Jan 31, 2012 at 1:04 PM, Andre Merzky <andremerzky at gmail.com> wrote:
> >>
> >> On Tue, Jan 31, 2012 at 7:54 PM, Andre Luckow <aluckow at cct.lsu.edu> wrote:
> >> > Hi Ole
> >> >
> >> >> If this is *not* the default case and only a fall-back solution,
> >> >> I strongly suggest that we make it the default case; otherwise we
> >> >> will keep running into trouble like this!  Changing BigJob in that
> >> >> regard should be considerably easier than trying to keep versions
> >> >> consistent across any number of machines?
> >> >
> >> > It is the default behavior if no BigJob installation is found. In
> >> > this case BJ was found on the PYTHONPATH and was therefore used. I
> >> > think there should always be a way for the user to override the
> >> > default behavior, and that is what is done here.
> >> >
> >> > @AndreM: Could you please delete the old BJ from the node c079
> >> > on Alamo?
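
A minimal sketch of the default-plus-override behavior described
above (illustrative only, not BigJob's actual bootstrap code):
whatever bigjob is already importable from the PYTHONPATH wins, and a
fresh per-user install happens only when the import fails.

    import subprocess

    def ensure_bigjob(version="0.3.2"):
        """Use an importable bigjob if present, else install a copy."""
        try:
            import bigjob  # whatever the PYTHONPATH exposes wins
        except ImportError:
            # Fallback: per-user install, leaving shared CSA spaces
            # untouched (assumes easy_install is on the PATH).
            subprocess.check_call(
                ["easy_install", "--user", "BigJob==%s" % version])
            import bigjob
        return bigjob.__file__

The point of contention in this thread is exactly the first branch: a
stale egg anywhere on the PYTHONPATH silently takes precedence over
the version the user intended to run.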
> >>
> >> How:
> >>
> >>  -bash-3.2$ ssh c079
> >>  Warning: Permanently added 'c079,10.141.0.79' (RSA) to the list of
> >> known hosts.
> >>  Connection closed by 10.141.0.79
> >>
> >> Is it possible to connect there interactively?  And why should that
> >> node have a different CSA installation from the head node?  I only
> >> install CSA on head nodes...
> >>
> >> Best, Andre.
> >>
> >>
> >> > Thanks,
> >> > Andre
> >>
> >>
> >>
> >> --
> >> Nothing is ever easy...
> >
> >
>
>
>
> --
> Nothing is ever easy...
>