memory size maximum
duvall
04/15/2011 04:30PM
I am able to generate large FITS files with IRAF v2.15. I was also hoping to be able, when needed, to bring a full image into memory. I have a 2.25 GB image that I try to read with a call to imggsr(), but it fails with the message 'out of memory'. If I execute begmem() with best_size=0, the size I get back is 1073741312, and it won't let me increase the size above this. Is there some way to get around this?
Thanks.
Tom

 
fitz
04/15/2011 04:30PM
The answer is somewhat platform dependent. On some Linux kernels the maximum memory per process is built into the kernel, even if you physically have more memory installed and lots of swap space; other Linux kernels allow you to use up to the sum of physical memory plus swap. OSX is similarly dependent on the version, but what seems to be common is that the ability to set the resource limits from within a program (in this case, the IRAF kernel) has been broken in Darwin for a while now.

Note that begmem() with best_size=0 returns a value in terms of SPP chars, so your result is essentially the 32-bit 2 GB limit. Also keep in mind that SPP malloc() works in terms of SPP chars as well: are you perhaps allocating 2.4 billion SPP chars (which is 4.8 GB in reality)? Even though your machine may be capable of 64-bit, it has to be enabled in the kernel (i.e. does "uname -m" return 'x86_64'? Anything else is a 32-bit OS). The kernel does allow you to set a MAXWORKSET unix environment variable to specify a size in MB for the process working set; this typically helps only on Linux systems, and only up to the Linux kernel limit. You should also check the output of 'limit' (for csh; 'ulimit -a' for bash) to see if your shell is setting a limit on the data segment size.

Lastly, it may not always be necessary to read the entire image into memory. The OS (Linux or Mac) is pretty aggressive about keeping often-used items in virtual memory or a nearby cache, so even line-by-line access in the code may not actually hit the disk as often as you think (there is also the file i/o buffering done in the IRAF VOS to limit disk access).

If you could post some sample code I might have additional comments, or can tell you whether/how to make it work (I'll also need the OS version and platform you're using).
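[Editor's note: to put units on the numbers above, begmem()'s 1073741312 is counted in SPP chars (2 bytes each), which lands right at the 32-bit 2 GB ceiling. The sketch below is plain POSIX C, not IRAF code; it prints that conversion and queries the same data-segment limit that csh's 'limit' or bash's 'ulimit -a' reports.]
[code]
#include <stdio.h>
#include <sys/resource.h>

int main (void)
{
    /* begmem() best_size is in SPP chars; an SPP char is 2 bytes. */
    long long spp_chars = 1073741312LL;
    printf ("best_size = %lld chars = %lld bytes (~2 GB)\n",
            spp_chars, spp_chars * 2);

    /* The data-segment limit reported by `limit` / `ulimit -d`. */
    struct rlimit rl;
    if (getrlimit (RLIMIT_DATA, &rl) == 0)
        printf ("RLIMIT_DATA: cur=%llu max=%llu\n",
                (unsigned long long) rl.rlim_cur,
                (unsigned long long) rl.rlim_max);

    return 0;
}
[/code]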

 
duvall
04/15/2011 04:30PM
The problem I was having back in April with memory size is still with me. I did work around it by not reading in so much, but now I have a case where I would really like to read in more than I can. 'limit' seems to say everything is unlimited. The OS version, from 'dmesg | head -1', is:

Linux version 2.6.18-194.17.4.el5 (brewbuilder@norob.fnal.gov) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Oct 25 19:10:57 EDT 2010

I am using IRAF v2.15.1a. I made a small test program available at http://sun.stanford.edu/~duvall/Iraf/. It tries to read a requested number of planes from image.fits (same directory), which has an overall size of 2.3 GB. 1 or 10 planes are read nicely, but with 350, imggsr() gets a segfault. My computer has plenty of memory for this, 16 GB. The C compiler does make code that will execute with large images and with jobs that take 5 GB of memory; it seems my IRAF jobs should be able to do this too. Can you help me with this?
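[Editor's note: a quick size check on this cube, multiplied out in plain C. The 4096x4096 plane dimensions are taken from the reply further below, and 32-bit reals (4 bytes/pixel) are assumed.]
[code]
#include <stdio.h>

int main (void)
{
    /* 4096 x 4096 pixels/plane, 350 planes, 4 bytes per 32-bit real */
    long long nbytes = 4096LL * 4096 * 350 * 4;
    printf ("%lld bytes = %.1f GB\n", nbytes, nbytes / 1e9);  /* ~23.5 GB */
    return 0;
}
[/code]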

 
fitz
04/15/2011 04:30PM
Hi Tom, I think I understand the problem but will have to confirm it tomorrow. Basically, the allocation being done in the IRAF kernel casts the size to be allocated to 'int' rather than 'size_t', limiting an allocation to 2 GB before a truncation/overflow occurs and the value is corrupted. This can cause either an 'out of memory' error, or a segfault if the value returned is a pointer to less space than you requested. It's a trivial code change to make but will require a relink of the system. I'll confirm the fix and post detailed instructions later.

Cheers,
-Mike
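[Editor's note: a standalone illustration of the failure mode described above, in plain C rather than the actual kernel source. Casting a request larger than 2^31 - 1 bytes to int corrupts it, producing exactly the two symptoms seen in this thread.]
[code]
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    size_t nbytes = 3ULL * 1024 * 1024 * 1024;   /* a 3 GB request */

    /* The buggy path: (int) truncates/overflows above 2^31 - 1.
     * Here a 3 GB request becomes a negative int. */
    int truncated = (int) nbytes;
    printf ("requested %zu bytes; (int) cast sees %d\n", nbytes, truncated);

    /* Passed back to malloc(), a negative int sign-extends to a huge
     * size_t ("out of memory"); other sizes truncate to a small positive
     * value, so the caller gets less space than requested (segfault). */
    void *buf = malloc ((size_t) nbytes);        /* the corrected cast */
    if (buf == NULL)
        printf ("out of memory\n");
    else
        free (buf);

    return 0;
}
[/code]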

 
fitz
04/15/2011 04:30PM
Hi Tom, I've verified the fix I mentioned earlier and have made the change for the next release. The change itself is trivial, but since it's in the IRAF kernel a complete set of binaries is needed. You can make the change yourself by editing the os$zmaloc.c and os$zraloc.c files to change the calls to the malloc()/realloc() routines that cast the argument as "(int)" to "(size_t)". E.g., for zmaloc.c the code should read

[code]bufptr = malloc ((size_t)*nbytes);[/code]

Make a similar change to the realloc() call in os$zraloc.c.

To compile the change, first set up your environment, e.g.

[code]
% setenv iraf /iraf/iraf/ <-- trailing '/' required
% setenv IRAFARCH linux64
% source $iraf/unix/hlib/irafuser.csh
[/code]

Then compile the system with the following steps:

[code]
% cd $iraf
% make linux64 # set system architecture
% cd unix # go to kernel directory
% ./reboot # rebuild the kernel
% cd $iraf # go back to iraf root
% make update # relink the system
[/code]

You should then be able to relink your program and allocate memory larger than 2 GB.

Two additional comments: 1) Your 4096x4096x350 image is actually a 23 GB file, not 2.3 GB. 2) As such, you might reconsider trying to read the whole thing into memory. I was able to run IMSTAT on the image in a little over 4 minutes on a server with 24 GB of RAM, but I killed your test program (which only reads the image section to a pointer) after ~45 minutes, when all that had happened was that it caused the machine to swap endlessly; it never ran to completion.

Anyway, hope this helps; post back if you still have problems.

Cheers,
-Mike
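[Editor's note: for readers without the source handy, the patched routine has roughly the shape below. This is a paraphrase, not the verbatim os$zmaloc.c; XINT, XOK/XERR, and the pointer-to-location conversion come from the IRAF kernel headers, and minimal stand-ins are defined here so the sketch compiles on its own.]
[code]
#include <stdlib.h>

typedef long XINT;                      /* stand-in for the kernel typedef  */
#define XOK   0                         /* stand-in status codes            */
#define XERR  (-1)
#define ADDR_TO_LOC(a) ((XINT)(a))      /* stand-in pointer->loc conversion */

/* ZMALOC -- allocate memory on behalf of an SPP program. */
int ZMALOC (XINT *buf, XINT *nbytes, XINT *status)
{
    /* The fix: cast the request to size_t rather than int, so that
     * allocations larger than 2 GB are not truncated on 64-bit systems. */
    char *bufptr = malloc ((size_t) *nbytes);

    if (bufptr != NULL) {
        *buf = ADDR_TO_LOC (bufptr);
        *status = XOK;
    } else
        *status = XERR;

    return (*status);
}
[/code]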

 
duvall
04/15/2011 04:30PM
Sorry about that test image. I meant to create one of length 2.3 GB, but through a decimal point error I made a 23 GB image. No wonder it took forever to do anything with it!

I'm having a problem getting the fix to work on the system that I want to update. Namely, when I execute

source $iraf/unix/hlib/irafuser.csh

I get the message:

Warning in hlib$irafuser.csh: unknown platform linux4

I do notice that the environment variable $MACH is set to linux4 before the source is executed. This is presumably used by some local stuff here. Any idea what to do about this?
Tom

 
fitz
04/15/2011 04:30PM
Must be a typo for 'linux64' in your .cshrc or .login file (probably in the setting of IRAFARCH). Try

[code]% env | grep linux4[/code]

to find the variable. I don't see where this is defined incorrectly in any of the iraf scripts.

 
duvall
04/15/2011 04:30PM
It was something that I was doing, I guess. When I became the iraf user, the problems went away. Anyway, I have succeeded in getting it to work; I can now use BIG memory! Thanks for your help.
Tom

 