Welcome to iraf.net Thursday, March 28 2024 @ 02:26 PM GMT
dougmink | 01/10/2006 04:44PM (Read 5483 times)
We've been building instrument data reduction pipelines in CL
for quite a while and have been moving the top level to CL
scripts callable from the Unix command line so that we can
use multiple processors. Everything works well under Solaris
and has moved pretty well between versions of IRAF, but
when we try to run working scripts under Linux (Red Hat FC2),
they crash soon after entering any SPP program which is
called by the CL script. Functions such as lpar, dpar, dir, and
epar work, but as soon as you try to access a data file, the SPP
program and the script crash.

Does anyone have any ideas of what could be going wrong?
fitz | 01/10/2006 04:44PM
The default hlib$cl.csh has the line "limit stacksize unlimited" to work around some memory layout changes in recent kernels. #!cl scripts of course don't use this script, so my guess is you need to have that same line defined in the environment that starts the scripts (or "ulimit -s unlimited" for bash shells). Note
that a "!" escape from the script itself won't work; it has to be defined in the parent shell of the CL process.

-Mike
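A minimal sketch of what Mike describes, for a wrapper that launches the pipeline from the command line (the pipeline script name is hypothetical; only the limit-raising lines come from the thread):

```shell
# Raise the stack limit in the PARENT shell before the CL process starts;
# a "!" shell escape from inside the running CL script is too late.

# csh/tcsh parent shell (what hlib$cl.csh itself does):
#   limit stacksize unlimited

# bash parent shell; the raise can fail if the hard limit is capped,
# so report that instead of silently continuing:
ulimit -s unlimited 2>/dev/null || echo "could not raise stack limit"

# Confirm the soft limit that the CL process will inherit:
ulimit -s

# Then launch the pipeline, e.g. (hypothetical name):
#   ./runpipe.cl
```

The key design point is that resource limits are inherited by child processes, so the limit must be set before the CL interpreter is forked, not from within it.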
dougmink | 01/10/2006 04:44PM
Mike,

Thanks for the rapid response! "limit stacksize unlimited" does the trick, at least for me in tcsh.
Do you know how to do the same thing in ksh?

-Doug
fitz | 01/10/2006 04:44PM
I believe it is the same "ulimit -s" as in bash; I don't have
ksh installed myself.

-Mike
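This should hold, since ulimit is a POSIX shell builtin shared by ksh, bash, and plain sh; a quick sketch (carrying the same caveat that it is untested on a real ksh):

```shell
# ksh (or any POSIX sh): ulimit is a builtin with the same -s flag as bash.
# The raise may fail if the hard limit is capped, hence the fallback message.
ulimit -s unlimited 2>/dev/null || echo "could not raise stack limit"

# Verify: prints "unlimited" if the hard limit allowed the change.
ulimit -s
```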