IRAF porting to other CPUs/Architectures
olebole
 05/11/2014 03:52PM (Read 5137 times)  
Hi Fitz and all,

I just created a repository on GitHub containing assembler code for a couple of architectures that are new to IRAF, all with (Debian) Linux as the operating system. I also checked the amd64 and i386 code on the non-Linux Debian platforms (kFreeBSD and Hurd) and can confirm that it works there as well.

Can you specify what else is needed to do the porting? Which system dependencies need to be adjusted?

There are also some other CPUs on the horizon, mainly 64-bit versions of the known ones (arm64, mips64, ppc64), some of them already in Linux distributions like Ubuntu. Since they are also going to be used in larger systems (IBM Blue Gene) and on handhelds (both of which will be used by astronomers), it would be nice to see IRAF there as well. My question here is also what has to be done to make new ports as painless as possible, and what the plans are.

Best regards

Ole
P.S.: I tried to upload the code to the board, but I got an error.

 
fitz
 05/11/2014 05:07PM  
Having a working zsvjmp.s is only one part of the porting puzzle, but it's a good start to have the collection of routines you've put together.
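For reference, the contract those routines implement is the two-return setjmp/longjmp pattern. Here is a self-contained sketch of that contract in plain C, with setjmp/longjmp standing in for ZSVJMP/ZDOJMP -- only an illustration, since the real routines must be assembler to return twice through the Fortran calling convention:
C Formatted Code

/* Illustration of the ZSVJMP/ZDOJMP two-return contract, modeled
 * with plain setjmp/longjmp.  This is a sketch, not IRAF code: the
 * real zsvjmp.s has to be hand-written assembler because it saves
 * the caller's context and returns a second time with a status
 * argument through the Fortran calling convention. */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf jb;

int main(void)
{
    int status = setjmp(jb);        /* ZSVJMP: first return, status = 0 */
    if (status == 0) {
        printf("first return, status = 0\n");
        longjmp(jb, 42);            /* ZDOJMP: unwind, "return" again  */
    }
    printf("second return, status = %d\n", status);
    return 0;
}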

There is no fixed list of things to modify since each port will be different. In general you must first port all of the code under the iraf$unix tree; the remainder of the SPP code should be platform independent. Even if the OS is Debian on all of these systems you'll need to account for the endianness and 32-vs-64-bit nature of the CPUs and other architecture quirks. Much of this is defined in iraf.h and mach.h in the hlib$ directory, but there is also machine-dependent code in places like sys$osb. The next thing to do would be to look at all of the code in the iraf kernel, i.e. unix$os, and modify as needed to get the signal/exception handling right. You'll also need to port the XC compiler in unix$boot/spp to set the appropriate host compiler flags; differences in Debian versions (and thus GCC/GLIBC versions) will likely mean you need to create new architectures in the system (i.e. a "bin.armhf") with platform-specific flags (see hlib$irafarch.[c]sh). In the past, this handling has been a perennial problem between systems. Lastly, the inclusion in v2.16 of the iraf$vendor code means these native C files must also be ported (e.g. to modify Makefiles, fix compiler errors, etc). I would caution you against trying to "fix" the system to use gfortran/g95 as the compiler until you understand how/why the current F2C scheme works to make integer*8 the default on 64-bit systems.

The revisions files for past ports are kept in the iraf$doc/ports directory; this would be a good place to see what had to be done for previous systems and what you should check in doing a new port. We don't have plans to port to any of the machines you mention until there is sufficient demand in the community, due to the effort required to port and support these systems, not to mention a lack of hardware to do the actual work. We can however answer specific questions you might have even if we can't help debug it.

 
olebole
 05/12/2014 08:32AM  
Hi Fitz,

thank you for the directions. I think, however, that it is important that IRAF becomes simpler to adapt to a new system, especially if its development will end in 2016 -- it will be used after that, and new compilers and platforms will appear and need to be supported. For this, it would be really good if you could provide a systematic overview of the system dependencies that does not end with the phrase "and so on". One possible modern way to ensure this would be the use of CMake.

The next thing to do would be to look at all of the code in the iraf kernel, i.e. unix$os, and modify as needed to get the signal/exception handling right.
One question that arises when I look into this code: os/zxwhen.c contains some twiddling with the FPU which is not really documented there. Especially when it comes to non-x86 architectures, it is quite unclear what happens there; there are some magic numbers in the code without any explanation. I would guess that IEEE math is switched on there? If so, could you consider switching to the C99 functions from fenv.h, which provide a portable implementation of floating-point rounding and exception handling? Or is there any compiler that you support (now or in the future) that cannot handle this?
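Just to illustrate what I mean, a minimal sketch in plain C99 (nothing IRAF-specific assumed) that raises and tests the IEEE exception flags portably; enabling actual SIGFPE traps is outside C99, but glibc offers feenableexcept() for that:
C Formatted Code

/* Minimal C99 sketch: raise and test IEEE exception flags portably
 * with <fenv.h>.  Trap delivery (SIGFPE) is not covered by C99;
 * glibc's feenableexcept() is one non-portable way to get it. */
#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile double zero = 0.0, big = 1e308;

    feclearexcept(FE_ALL_EXCEPT);
    volatile double r1 = 1.0 / zero;    /* sets FE_DIVBYZERO */
    volatile double r2 = big * big;     /* sets FE_OVERFLOW  */

    if (fetestexcept(FE_DIVBYZERO)) printf("divide by zero (r1 = %g)\n", r1);
    if (fetestexcept(FE_OVERFLOW))  printf("overflow (r2 = %g)\n", r2);
    return 0;
}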

You'll also need to port the XC compiler in unix$boot/spp to set the appropriate host compiler flags; differences in Debian versions (and thus GCC/GLIBC versions) will likely mean you need to create new architectures in the system (i.e. a "bin.armhf") with platform-specific flags (see hlib$irafarch.[c]sh). In the past, this handling has been a perennial problem between systems.

In "modern" Linux systems, multiplatform support works a bit differently from what IRAF assumes: system independent stuff goes into /usr/share/iraf/ and system dependent files into /usr/lib/ (resp. /usr/lib/arch/ on multi-arch systems for Debian and Ubuntu). So, there is not need at all to have a "bin.armhf", since the (iraf-private) executables would reside in the system dependent path /usr/lib/arch/iraf/ anyway. It may always be called "bin".

I would caution you against trying to "fix" the system to use gfortran/g95 as the compiler until you understand how/why the current F2C scheme works to make integer*8 the default on 64-bit systems.

Is this documented somewhere?

It would also be helpful for porting if there were unit test cases for each of these components, so that one can ensure that each works right: for "xpp.e" there is a small test.x script, but this is basically a "hello, world". Is this enough to test the whole xpp stuff (compiler flags etc.)?
Also, for f2c it would be good to have test cases that show whether the integer size, pointer size, endianness etc. are handled correctly (something along the lines of the sketch below).
I would really encourage you to put some time and effort in here to bring this into good shape.
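For instance, a test along these lines (purely illustrative, not existing IRAF test code) would catch the most common 64-bit porting mistakes at build time:
C Formatted Code

/* Illustrative self-test for a port: report the sizes and byte order
 * the build actually has, and fail if an integer cannot hold a
 * pointer (the assumption IRAF's 64-bit scheme relies on). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    union { uint32_t i; unsigned char c[4]; } u = { 0x01020304 };

    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(void*) = %zu\n", sizeof(void *));
    printf("byte order    = %s-endian\n",
           u.c[0] == 0x01 ? "big" : "little");

    return (sizeof(long) >= sizeof(void *)) ? 0 : 1;
}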

We don't have plans to port to any of the machines you mention until there is sufficient demand in the community

I would appreciate it if you would actually not do new ports yourself, but instead improve the portability, so that ports can be done by the community when needed.

not to mention a lack of hardware to do the actual work

Well, for an ARM port, we could collect some money to buy you a Raspberry Pi :-)
For s390x, there is an excellent emulator available which would help you there. However, as I said, I see more need in making IRAF more portable and better documented, making it more standards compliant (filesystem: support /usr/share/, use fenv.h instead of system-specific FPU flags, etc.) and writing test cases than in actually putting much of your effort into the porting itself. This does not sound as sexy, but it would help much more in the long term.

(So to say: what I would need most at the moment would be some good f2c/Fortran test code that checks for the problems that may appear there.)

Best regards

Ole

 
fitz
 05/12/2014 04:58PM  
thank you for the directions. I think, however, that it is important that IRAF becomes simpler to adapt to a new system, especially if its development will end in 2016 -- it will be used after that, and new compilers and platforms will appear and need to be supported. For this, it would be really good if you could provide a systematic overview of the system dependencies that does not end with the phrase "and so on". One possible modern way to ensure this would be the use of CMake.


System development will be scaled back in 2016 but is certainly not ending (your actual deadline is ~2030 when I retire 8-)). New platforms will continue to be supported provided there is both an internal need within NOAO and sufficient demand in the community, but the resource reality is that we can pursue projects like new spectroscopy tools and python interfaces, OR develop makefiles for fringe hardware, not both.

The next thing to do would be to look at all of the code in the iraf kernel, i.e. unix$os, and modify as needed to get the signal/exception handling right.
One question that arises when I look into this code: os/zxwhen.c contains some twiddling with the FPU which is not really documented there. Especially when it comes to non-x86 architectures, it is quite unclear what happens there; there are some magic numbers in the code without any explanation. I would guess that IEEE math is switched on there? If so, could you consider switching to the C99 functions from fenv.h, which provide a portable implementation of floating-point rounding and exception handling? Or is there any compiler that you support (now or in the future) that cannot handle this?


This is a good suggestion; the current zxwhen.c code predates fenv.h and should be updated (rewritten), especially if an interface can make it simpler.


In "modern" Linux systems, multiplatform support works a bit differently from what IRAF assumes: system independent stuff goes into /usr/share/iraf/ and system dependent files into /usr/lib/ (resp. /usr/lib/arch/ on multi-arch systems for Debian and Ubuntu). So, there is not need at all to have a "bin.armhf", since the (iraf-private) executables would reside in the system dependent path /usr/lib/arch/iraf/ anyway. It may always be called "bin".


This is problematic (i.e. the use of /usr/lib) given the way IRAF constructs paths to libraries/binaries based on the iraf root and architecture names (that would all have to be thrown out). It also isn't clear how external packages would be handled; i.e. iraf can use 32-bit binaries on a 64-bit core installation to support things like STSDAS, so does package code also have to be unpacked to /usr/lib?


I would caution you against trying to "fix" the system to use gfortran/g95 as the compiler until you understand how/why the current F2C scheme works to make integer*8 the default on 64-bit systems.

Is this documented somewhere?


The 64-bit porting notes are in iraf$local/notes.64-bit. The basic strategy was to compile IRAF as an ILP64 system (because of the frequent interchange of ints and pointers in the system) at a time when the underlying GCC didn't actually support that. Changes to XPP and F2C were required to promote datatype sizes, which don't quite work with the gfortran/g95 front-ends. Tests (done on 32-bit machines where this doesn't matter) show there is no real speed advantage in switching compilers but there is extra support required.
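To illustrate the point (a contrived sketch, not actual SPP or f2c output): SPP code routinely round-trips pointers through integer variables, which is only safe when the Fortran integer is as wide as a pointer:
C Formatted Code

/* Contrived sketch (not actual SPP/f2c output) of why the integer
 * type must be pointer-sized: SPP code stores pointers in integer
 * variables and later uses them as pointers again. */
#include <assert.h>
#include <stdlib.h>

typedef long spp_int;   /* stand-in for the promoted Fortran INTEGER */

int main(void)
{
    double  *buf = malloc(16 * sizeof *buf);
    spp_int  ptr = (spp_int) buf;       /* pointer stored in an integer */

    assert(sizeof(spp_int) >= sizeof(void *));   /* else this truncates */
    ((double *) ptr)[0] = 3.14;         /* ...and dereferenced again    */

    free((double *) ptr);
    return 0;
}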


It would also be helpful for porting if there were unit test cases for each of these components, so that one can ensure that each works right: for "xpp.e" there is a small test.x script, but this is basically a "hello, world". Is this enough to test the whole xpp stuff (compiler flags etc.)? Also, for f2c it would be good to have test cases that show whether the integer size, pointer size, endianness etc. are handled correctly. I would really encourage you to put some time and effort in here to bring this into good shape.


If you develop these during your ports I would be happy to include them in the system.



 
olebole
 05/13/2014 07:54AM  
New platforms will continue to be supported provided there is both an internal need within NOAO and sufficient demand in the community, but the resource reality is that we can pursue projects like new spectroscopy tools and python interfaces, OR develop makefiles for fringe hardware, not both.

This means: make it as easy as possible for the community to take part in the development (i.e. change to a community-based development model), so that you don't have to do too much yourself.

What about a public git repository, BTW? Have a look at the astropy repository; could this be a model for a future community driven development?

(/usr/share) This is problematic (i.e. the use of /usr/lib) given the way IRAF constructs paths to libraries/binaries based on the iraf root and architecture names (that would all have to be thrown out).

Come on: /usr/share has been around for almost 20 years now, and not only on Linux. That is more than half of the lifetime of IRAF. Maybe it is time to jump on that development now (same for fenv.h).

It is also not as complicated as you suggest: I have a patch which does this (in a preliminary way, without multi-arch, and for 2.16 before 2.16.1, however). The patch (including the adjustment to apply the freedesktop.org base directory standard) is just 160 lines long, not a major change at all.

Would you consider applying something like this to IRAF?

It also isn't clear how external packages would be handled; i.e. iraf can use 32-bit binaries on a 64-bit core installation to support things like STSDAS, so does package code also have to be unpacked to /usr/lib?
There are standard installation paths for different architectures (like i386 or amd64) on Linux systems. For Debian and Ubuntu (with multiarch) these are /usr/lib/i386-linux-gnu/ and /usr/lib/x86_64-linux-gnu/ respectively; for Fedora and Red Hat it is AFAIK still /usr/lib/ for 64-bit and /usr/lib32/ for 32-bit (on 64-bit installations).

I would caution you against trying to "fix" the system to use gfortran/g95 as the compiler until you understand how/why the current F2C scheme works to make integer*8 the default on 64-bit systems.
Is this documented somewhere?
The 64-bit porting notes are in iraf$local/notes.64-bit. The basic strategy was to compile IRAF as an ILP64 system (because of the frequent interchange of ints and pointers in the system) at a time when the underlying GCC didn't actually support that. Changes to XPP and F2C were required to promote datatype sizes, which don't quite work with the gfortran/g95 front-ends. Tests (done on 32-bit machines where this doesn't matter) show there is no real speed advantage in switching compilers but there is extra support required.

gfortran has the -fdefault-integer-8 option which does this out of the box. What "changes to XPP and F2C" were required? What else is different from the standard Fortran invocation (with -ff2c) except the ILP64 support?

It would also be helpful for porting if there were unit test cases for each of these components, so that one can ensure that each works right
If you develop these during your ports I would be happy to include them in the system.

The problem here is that you know best where the pitfalls are, so you would be the right person to actually do this. It would also help you in two ways: first, it makes the adoption of new systems in the next 15 years easier for you, and second, it would enable us (who are packaging and porting the system) to do a lot of the work ourselves, actually *reducing* the workload for you.
And since IRAF is somehow going into maintenance mode, there are not so many changes, and any test case you write may stay for quite a while, probably much longer than the actual adaptations to a new system (which will be out of date again after a few years).
Putting work in here is really worth it, even for you. Wouldn't you agree?

Best regards

Ole

 
fitz
 05/13/2014 05:38PM  

This means: make it as easy as possible for the community to take part in the development (i.e. change to a community-based development model), so that you don't have to do too much yourself.

What about a public git repository, BTW? Have a look at the astropy repository; could this be a model for a future community driven development?


Yes, astropy has an admirable and active group of enthusiastic python purists, but I doubt the lack of active IRAF developers has much to do with whether or not there is a git repository (I can count on one hand the number of bug *fixes* submitted to this site in 9 years). Even at institutions that fully embrace git and python, you won't find repositories for PyRAF (to leverage other enthusiasts), STSDAS (where the 64-bit port is mostly C code not requiring IRAF/SPP expertise) or GEMINI (where the SPP changes for 64-bit support are trivial, and new instrument development is likewise python). IRAF development doesn't have the same cool factor as astropy (but the system is still used heavily).

I'm happy to create a git repository should somebody come forward to say they actually need it for a project, but even within NOAO the rate of change in the system is slow. Where we could *really* use community involvement is in writing new/updated cookbooks (e.g. see https://iraf.net/irafdocs), or even in collecting links from Google to the many tutorials available elsewhere on the web (no coding required).


Come on: /usr/share has been around for almost 20 years now, and not only on Linux. That is more than half of the lifetime of IRAF. Maybe it is time to jump on that development now (same for fenv.h).


And for 20 years you've been free to install the iraf tree in /usr/share and specify the install bin directory as /usr/bin (the public commands are things like 'cl'; there's no good reason to put e.g. x_system.e in /usr/lib just because it's a binary), just as you can have a package installer do now. However, there are many users who are not allowed to administer their own machines (no root or sudo powers) and so being able to install IRAF for personal use in v2.16.1 is arguably a bigger benefit than spreading files all over the system.

The FHS doesn't require that all software conform to its recommendations; maintaining an "iraf root" directory as we have now would satisfy the need to coordinate with distributions while maintaining the internal structure of the system. Package managers can apply patches such as yours at installation time if you choose.


gfortran has the -fdefault-integer-8 option which does this out of the box. What "changes to XPP and F2C" were required? What else is different from the standard Fortran invocation (with -ff2c) except the ILP64 support?


See iraf$local/notes.64-bit for details; as I remember, it had to do with the handling of Memr, and the size of booleans needed to be promoted as well.
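Roughly speaking -- and assuming stock f2c conventions rather than the exact IRAF headers -- the promotion works by widening the typedefs that f2c-generated C uses everywhere, with LOGICAL widened in step:
C Formatted Code

/* Sketch assuming stock f2c conventions (IRAF's actual headers may
 * differ): f2c-generated C refers to the 'integer' and 'logical'
 * typedefs, so widening them makes every Fortran INTEGER 64 bits;
 * LOGICAL must track it or the two types can no longer be safely
 * equivalenced. */
#include <stdio.h>

typedef long integer;   /* f2c.h default is the platform 'long' */
typedef long logical;   /* must match 'integer' in width        */

int main(void)
{
    printf("INTEGER: %zu bytes, LOGICAL: %zu bytes\n",
           sizeof(integer), sizeof(logical));
    return 0;
}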



The problem here is that you know best where the pitfalls are, so you would be the right person to actually do this. ....


That could be said of many things needing doing in the system, but it argues against your case that there is a community of developers anxious to work on the code who could do this just as easily. If/when there is a new CPU port I can address adding test code when porting that part of the system, but doing so right now is time I need to spend on other priorities.

As I said, I'll be happy to include any test code you develop during your ports.

 
olebole
 05/14/2014 08:33AM  
Quote by: fitz

What about a public git repository, BTW? Have a look at the astropy repository; could this be a model for a future community driven development?
Yes, astropy has an admirable and active group of enthusiastic python purists, but I doubt the lack of active IRAF developers has much to do with whether or not there is a git repository (I can count on one hand the number of bug *fixes* submitted to this site in 9 years).
This may have something to do with the openness of the IRAF development in the past.

There is a famous open source document called "The Cathedral and the Bazaar" which describes the difference between a closed development process (like IRAF's in the past) and a community-based one, and it shows why the latter model tends to be more successful.

And as far as patches are concerned: I have now submitted four of them: three new zsvjmp.s files, and the patch to support a separate /usr/share directory. Discussing these in the context of a public repository is much easier than in a forum.

I'm happy to create a git repository should somebody come forward to say they actually need it for a project
There already was a thread about a git repository here in the forum where some people expressed their interest. What else do you expect here?

BTW, using a repository would also help to create cleaner source tar files, without accidentally kept object files, bash history files, or even ssh known-hosts files or your private keys. Just have a look at your years-long collection of IRAF source tarballs; you will be surprised :-)

Where we could *really* use community involvement is in writing new/updated cookbooks (e.g. see https://iraf.net/irafdocs), or even in collecting links from Google to the many tutorials available elsewhere on the web (no coding required).
Yea, but the community is not someone who just takes work from you. If someone feels that something would be useful and worth spending time on, he will probably do so. I feel that it is useful to have IRAF in Debian and Ubuntu, so I am doing my best to make this possible.

Come on: /usr/share has been around for almost 20 years now, and not only on Linux. That is more than half of the lifetime of IRAF. Maybe it is time to jump on that development now (same for fenv.h).
And for 20 years you've been free to install the iraf tree in /usr/share and specify the install bin directory as /usr/bin (the public commands are things like 'cl'; there's no good reason to put e.g. x_system.e in /usr/lib just because it's a binary), just as you can have a package installer do now.

There is a good reason: /usr/share is meant for architecture independent data. x_system.e is not architecture independent and therefore must not go into /usr/share.

However, there are many users who are not allowed to administer their own machines (no root or sudo powers) and so being able to install IRAF for personal use in v2.16.1 is arguably a bigger benefit than spreading files all over the system.

Having IRAF in a Linux distribution would allow the system administrator to install (and update) IRAF as they do gcc or emacs, so if IRAF were included in the major Linux distributions, the need to install it yourself would be much smaller. And to bring IRAF into the distributions, it would be best if it supported /usr/share out of the box. As you can see from my patch, this is not as difficult as you fear.

And a user may always choose to re-unify them. I would, however, not see a big pain in a layout like
TEXT Formatted Code
$iraf/share ....... architecture-independent data
$iraf/lib/<arch> .. architecture-dependent executables and libraries for <arch>
$iraf/bin/ ........ binaries that are going to be called by the user (may be linked from ../lib)

Common installation scripts provide options for all these directories (and some more). Just issue "./configure --help" on a random autoconf'ed package to get an idea.

The FHS doesn't require that all software conform to its recommendations,

A standard is not just one recommendation among a dozen others. It is the rule to follow unless there are specific reasons not to do so, and for IRAF there is nothing really specific. There are other software packages as well which have binary plugins, support multiple architectures, have a long history, etc.

maintaining an "iraf root" directory as we have now would satisfy the need to coordinate with distributions while maintaing the internal structure of the system. Package managers can apply patches such as yours at installation time if you choose.

Don't you think that it would be best to discuss and maintain an IRAF directory structure that fulfills the needs of the Linux distributions in your source? Of course I can apply any patch (so can Fedora, or Gentoo) -- but to have it somehow coordinated, it would be best to do this "upstream", in the source of IRAF. Do you really disagree here???

gfortran has the -fdefault-integer-8 option which does this out of the box. What "changes to XPP and F2C" were required? What else is different from the standard Fortran invocation (with -ff2c) except the ILP64 support?
See iraf$local/notes.64-bit for details; as I remember, it had to do with the handling of Memr, and the size of booleans needed to be promoted as well.

Sorry, but notes.64-bit is not documentation at all. Maybe I am far too stupid, but from reading it I don't get even an idea of what I need to set and what to check. The file is just a list of notes like
TEXT Formatted Code
unix/gdev/sgidev/sgi2svg.x      +
    Added new SGI driver for SVG graphics.

Does this have anything to do with 64-bit porting? If yes: how important is this?

Similarly, if I want to port it to a 32-bit architecture: there is a file unix/portkit/README which actually describes where I have to look. It is dated 1986 -- is it still current? Are there additions to it?

The problem here is that you know best where the pitfalls are, so you would be the right person to actually do this. ....
That could be said of many things needing doing in the system, but it argues against your case that there is a community of developers anxious to work on the code who could do this just as easily. If/when there is a new CPU port I can address adding test code when porting that part of the system, but doing so right now is time I need to spend on other priorities.

For Debian, I am going to create quite a number of ports: ARM, MIPS, s390, Hurd-i386, kFreeBSD. Ubuntu may add some 64-bit architectures. So there are a number of ports for which you could address adding test code. Will you?

 
fitz
 05/14/2014 05:37PM  

This may have something to do with the openness of the IRAF development in the past.


I'm not going to get into a discussion of past development practices ....


And as far as patches are concerned: I have now submitted four of them: three new zsvjmp.s files, and the patch to support a separate /usr/share directory. Discussing these in the context of a public repository is much easier than in a forum.


I've included your zsvjmp code in the unix$portkit/ as reference sources for future ports; however, I could not get the jmptest.f test file to execute properly on existing systems. The procedures also do not define the Mem common address, so you will need to ensure this is set by the compiler or define the global address here.
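By way of illustration -- and as an assumption about the toolchain, since f2c-style compilers name a Fortran COMMON /MEM/ block 'mem_' -- the global could be supplied from a one-line C definition rather than from the .s file:
C Formatted Code

/* Hypothetical sketch: supplying the Mem common block's backing
 * symbol from C rather than from zsvjmp.s.  The name 'mem_' follows
 * the usual f2c naming of COMMON /MEM/; the exact name and size
 * used by IRAF are assumptions here. */
long mem_[1];                     /* backing storage for COMMON /MEM/ */

int main(void) { return (mem_[0] == 0) ? 0 : 1; }  /* link-check stub */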

The /usr/share patch is deferred since it does not address the issues of external package installation, multi-platform architectures, package environment loading, interactions with existing installation/update procedures, or the multiple /usr/lib architecture names you've pointed out, and it would also appear to break task compilation without symlinks in the existing bin.<arch> directories. Although OSX has /usr/share (and a single /usr/lib that complicates multi-platform support), it could be argued a proper OSX packaging would have IRAF under an /Applications/iraf/Contents/ tree to abide by the Apple scheme (or the use of /usr/share in a Fink/MacPorts env). The use of /usr/share is deferred to package maintainers who can better address the needs of a particular platform as part of their installation scripts.



Where we could *really* use community involvement is in writing new/updated cookbooks (e.g. see https://iraf.net/irafdocs), or even in collecting links from Google to the many tutorials available elsewhere on the web (no coding required).


Yea, but the community is not someone who just takes work from you. ...


And likewise neither do I take work from them (as the many suggestions about what I need to change to help your project would imply). This site was founded to keep open the dialogues we have with users, and I was simply giving part of a lengthy list of work we don't have time to do ourselves. Community input is valuable in setting both long- and short-term priorities, but it does not define our daily work schedule (except in extreme cases).



Don't you think that it would be best to discuss and maintain an IRAF directory structure that fulfills the needs of the Linux distributions in your source? Of course I can apply any patch (so can Fedora, or Gentoo) -- but to have it somehow coordinated, it would be best to do this "upstream", in the source of IRAF. Do you really disagree here???


As the guy who has to maintain the new structure in the long run, answer user questions caused by confusion over the new disk layout, and keep pace with Linux releases that will end up driving my schedule (long past 2016) -- then yes, I do (disagree, that is).


Sorry, but notes.64-bit is not documentation at all. Maybe I am far too stupid, but from reading it I don't get even an idea of what I need to set and what to check. ....


Nobody was questioning your intelligence, but don't expect me to be shamed into writing a more prosaic version of those notes to meet your high documentation standards either. Some of the files listed will need to be modified, others won't; you'll have to decide which changes might affect your system. That's the documentation we have; deal with it, or not.


Similarly, if I want to port it to a 32-bit architecture: there is a file unix/portkit/README which actually describes where I have to look. It is dated 1986 -- is it still current? Are there additions to it?


That README hasn't been updated since it was written but is largely still accurate (the major missing piece would be a discussion of the iraf$vendor code).


For Debian, I am going to create quite a number of ports: ARM, MIPS, s390, Hurd-i386, kFreeBSD. Ubuntu may add some 64-bit architectures. So there are a number of ports for which you could address adding test code. Will you?


Not at this time. Unit testing was not part of the original system development and there are many places where it could be added, but no resources to actually add them. We'll add test code as needed in future ports, or include code developed by others until then. Note that STScI has some sort of test framework as part of their Ureka distribution and internal builds that you could perhaps leverage to test your ports; contact help@stsci.edu for details.

 
olebole
 05/17/2014 10:52AM  
I've included your zsvjmp code in the unix$portkit/ as reference sources for future ports; however, I could not get the jmptest.f test file to execute properly on existing systems.
Could you give me the exact error message?
The problem at the moment is that it hardcodes the status as INTEGER*8, which probably has to be INTEGER*4 on 32-bit systems.

The procedures also do not define the Mem common address, so you will need to ensure this is set by the compiler or define the global address here.
Right. That is because IRAF itself states that this does not belong here.

The /usr/share patch is deferred since it does not address the issues of external package installation,
As long as you have a standard local install, like the one I proposed, this is not an issue. For the distributions, one should package the external packages as well.

multi-platform architectures,
This is solved, as I wrote in my proposal.

package environment loading,
What do you mean here?

interactions with existing installation/update procedures,
Can you be more specific here?
One of the goals of having IRAF in the major distributions is also to get rid of home-made installation procedures.

the multiple /usr/lib architecture names you've pointed out, and it would also appear to break task compilation without symlinks in the existing bin.<arch> directories.
Why?

Although OSX has /usr/share (and a single /usr/lib that complicates multi-platform support), it could be argued a proper OSX packaging would have IRAF under an /Applications/iraf/Contents/ tree to abide by the Apple scheme (or the use of /usr/share in a Fink/MacPorts env).
I don't know how proper packaging works for OS X. If there is a standard, one should follow it.
As for Linux: there is a standard, and it should be followed. The longer you wait, the more difficulties arise. If you had completed the transition 15 years ago, it would have been easier than it is today. And today it is easier than it will be tomorrow. So let us start resolving the issues today instead of waiting longer.

The use of /usr/share is deferred to package maintainers who can better address the needs of a particular platform as part of their installation scripts.
The "particular platform" is Linux, since almost all distributions adopted the FHS in one or the other way. So, I still think that the source is the best place to get this fixed.

Don't you think that it would be best to discuss and maintain an IRAF directory structure that fulfills the needs of the Linux distributions in your source? Of course I can apply any patch (so can Fedora, or Gentoo) -- but to have it somehow coordinated, it would be best to do this "upstream", in the source of IRAF. Do you really disagree here???
As the guy who has to maintain the new structure in the long run, answer user questions caused by confusion over the new disk layout, and keep pace with Linux releases that will end up driving my schedule (long past 2016) -- then yes, I do (disagree, that is).

In the end, it will be common maintenance anyway, independent of where the change is finally applied (at your place or during packaging). The advantage of doing this in the source is that we can discuss compromises which are also easier for you to maintain. If we did it independently for Fedora, openSUSE, Debian and Ubuntu, you would end up with questions which are specific to those systems. It is probably also to your advantage to keep IRAF uniform among different distributions, wouldn't you agree?

Sorry, but notes.64-bit is not documentation at all. Maybe I am far too stupid, but from reading it I don't get even an idea of what I need to set and what to check. ....
Nobody was questioning your intelligence, but don't expect me to be shamed into writing a more prosaic version of those notes to meet your high documentation standards either. Some of the files listed will need to be modified, others won't; you'll have to decide which changes might affect your system. That's the documentation we have; deal with it, or not.
My point is that the documentation is not enough, and that's why I am trying to get more information from you.

Similarly, if I want to port it to a 32-bit architecture: there is a file unix/portkit/README which actually describes where I have to look. It is dated 1986 -- is it still current? Are there additions to it?
That README hasn't been updated since it was written but is largely still accurate (the major missing piece would be a discussion of the iraf$vendor code).
Great, thanks.
Could you also shed some light on what happens in the FPU-specific stuff in zxwhen.c? I could then eventually replace that with <fenv.h> stuff that makes the code more portable.

Unit testing was not part of the original system development and there are many places where it could be added, but no resources to actually add them.

You wrote elsewhere that you are going to assign some resources to IRAF development in the next two years. It may be a good idea to designate a significant amount of that for code cleanup, documentation, and unit tests. What do you think?

Note that STScI has some sort of test framework as part of their Ureka distribution and internal builds that you could perhaps leverage to test your ports; contact help@stsci.edu for details.

Thank you for the hint.

Best regards

Ole

 
fitz
 05/17/2014 12:28PM  


package environment loading,
What do you mean here?


The MKPKG task takes a "-p" flag (e.g. "mkpkg -p noao") to load package-specific environments from the package's lib directory; this is also used to find libraries that are outside the iraf tree in the package source. Packages have their own 'zzsetenv.def' files, so some sort of package structure in /usr/share or /usr/lib must still be maintained to avoid naming conflicts.

interactions with existing installation/update procedures,
Can you be more specific here?

External package installation for one.


In the end, it will be common maintenance anyway, independent of where the change is finally applied (at your place or during packaging). The advantage of doing this in the source is that we can discuss compromises which are also easier for you to maintain. If we did it independently for Fedora, openSUSE, Debian and Ubuntu, you would end up with questions which are specific to those systems. It is probably also to your advantage to keep IRAF uniform among different distributions, wouldn't you agree?


I do, which is why we're sticking with a single, simple tarball rather than multiple .deb, .rpm and .dmg distributions.



Could you also shed some light on what happens in the FPU-specific stuff in zxwhen.c? I could then eventually replace that with <fenv.h> stuff that makes the code more portable.


Details depend on the platform; the main point of the code is to get the specific FPU exception, to distinguish a divide-by-zero error from an overflow. See also os$zzepro.c, which may no longer be required with a working <fenv.h> implementation.



You wrote elsewhere that you are going to assign some resources to IRAF development in the next two years. It may be a good idea to designate a significant amount of that for code cleanup, documentation, and unit tests. What do you think?


I think you over-estimate the resources (i.e. part-time me) available .....



 
olebole
 05/17/2014 01:45PM  
Quote by: fitz

package environment loading,
What do you mean here?
The MKPKG task takes a "-p" flag (e.g. "mkpkg -p noao") to load package-specific environments from the package's lib directory; this is also used to find libraries that are outside the iraf tree in the package source. Packages have their own 'zzsetenv.def' files, so some sort of package structure in /usr/share or /usr/lib must still be maintained to avoid naming conflicts.

Why would the proposed structure not allow this?

interactions with existing installation/update procedures,
Can you be more specific here?
External package installation for one.
As long as the packages have the same structure, I don't see a problem here.

It is probably also for advantage for you to keep IRAF uniform among different distributions, won't you agree?
I do, which is why we're sticking with a single, simple tarball rather than multiple .deb, .rpm and .dmg distributions.
This would not solve the problem, since Debian (and Ubuntu) are going to use a directory structure that follows the FHS. So there will be two different directory structures unless we develop a compromise that includes the FHS and your thoughts.

Could you also shed some light on what happens in the FPU-specific stuff in zxwhen.c? I could then eventually replace that with <fenv.h> stuff that makes the code more portable.
Details depend on the platform,
Yea, but what is it supposed to do? I don't need to know how it is implemented (this is different per platform for sure), but what it should do, independent of the platform.

the main point of the code is to get the specific FPU exception, to distinguish a divide-by-zero error from an overflow. See also os$zzepro.c, which may no longer be required with a working <fenv.h> implementation.
So, it would just define this one exception?
<fenv.h> is already used for MacOSX; is there a reason why it is not used for Linux? This would simplify things a lot and make the code much more portable, since there would be no need to do FPU-specific stuff on ARM or MIPS (or s390). And it has been in the standard for 15 years. Is there any platform today which does not support it?

You wrote elsewhere that you are going to assign some resources to IRAF development in the next two years. It may be a good idea to designate a significant amount of that for code cleanup, documentation, and unit tests. What do you think?
I think you over-estimate the resources (i.e. part-time me) available .....
Even in this case it would be good to use much of the time for documentation, cleanup, and tests, don't you think?

 
fitz
 05/18/2014 10:39AM  


The MKPKG task takes a "-p" flag (e.g. "mkpkg -p noao") to load package-specific environments from the package's lib directory; this is also used to find libraries that are outside the iraf tree in the package source. Packages have their own 'zzsetenv.def' files, so some sort of package structure in /usr/share or /usr/lib must still be maintained to avoid naming conflicts.


Why would the proposed structure not allow this?


It might, but it would require significant changes to the current idea of package structure if one were to follow the FHS to an extreme. For example, the package help database and zzsetenv.def files would probably belong in /usr/share, and to avoid name conflicts you'd have to keep something resembling a current iraf tree to keep package files separate. Package paths are built from a single environment variable, so if some files are in /usr/share (or /usr/lib) but the source is elsewhere you'd need to adjust all of these paths accordingly.

Similarly, are package parameter files considered data that belong in /usr/share? What about things like the noao$lib/obsdb.dat file or the noao$lib/onedstds directory? What about CL scripts that do (and do not) have their own parameters declared as part of the script itself? The minute you start moving these files from where the CL and iraf system expect them to be, you need to either change all the path definitions and how they're accessed, or else you have a mirror copy under /usr/share for a runtime system and another under /usr/src so you can build.

The MKPKG and XC tasks are host commands and get paths from the iraf kernel (specifically os$irafpath.c, or os$boot/bootlib during a bootstrap). These rely only on having $iraf and $IRAFARCH defined (where IRAFARCH is not as simple as 'lib' or 'lib64'). What you propose would require having $iraf, $iraf_l (for lib), $iraf_s (for share), etc., or else hardwiring the FHS paths into the build process. All of this is new complexity the users won't understand and the grumpy upstream bastard doesn't want to deal with, so I defer it to you.

Didn't the Mageia thing address this already?


interactions with existing installation/update procedures, Can you be more specific here? External package installation for one. As long as the packages have the same structure, I don't see a problem here.


The current external package installation is done with a simple

PHP Formatted Code

    % cd $iraf/extern
    % ./configure                # needed only once
    % make mscred
 


where scripts in the iraf$util directory do all the real work. That would all need to be replaced to unpack an external package as you propose.

Note also, we don't control all the available external packages and I can guarantee resistance to a major restructuring from some outside groups. Similarly, users have their own packages that aren't distributed; the current package declaration scheme will have to continue to be supported as well.

And all of this work is for an FHS standard that doesn't require apps to conform, and at best will provide a working system no better (from the user's standpoint) than what we have now, but does require that we take a step back to needing root permission to install, and relies on a lot of work being done to update code by third parties.


Yea, but what is it supposed to do? I don't need to know how it is implemented (this is different per platform for sure), but what it should do, independent of the platform.


ZXWHEN is the kernel exception handler, both for signals and hardware exceptions such as FPE. IRAF tasks can post handlers themselves to catch or ignore these errors. I'd imagine that reworking the FPU part of this code would simply be to replace the e.g. sfpucw() call (from zsvjmp.s) with the corresponding fenv routines.
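As a sketch of the direction (assuming glibc's feenableexcept() to enable the traps; the handler logic is hypothetical, not current zxwhen.c code), POSIX siginfo can say *which* arithmetic error occurred without any per-architecture FPU decoding:
C Formatted Code

/* Sketch of a portable SIGFPE handler in the zxwhen.c spirit:
 * POSIX siginfo distinguishes divide-by-zero from overflow, with
 * no architecture-specific status-word decoding.  feenableexcept()
 * is a glibc extension used here to make the operations trap. */
#define _GNU_SOURCE
#include <fenv.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void fpe_handler(int sig, siginfo_t *si, void *ctx)
{
    (void) sig; (void) ctx;
    const char *what = (si->si_code == FPE_FLTDIV) ? "float divide by zero"
                     : (si->si_code == FPE_FLTOVF) ? "float overflow"
                     : (si->si_code == FPE_INTDIV) ? "integer divide by zero"
                     : "other arithmetic error";
    fprintf(stderr, "SIGFPE: %s\n", what);   /* real code would longjmp */
    exit(1);
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = fpe_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGFPE, &sa, NULL);

    feenableexcept(FE_DIVBYZERO | FE_OVERFLOW);
    volatile double zero = 0.0;
    volatile double r = 1.0 / zero;          /* delivers SIGFPE */
    (void) r;
    return 0;
}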


<fenv.h> is already used for MacOSX; is there a reason why it is not used for Linux? This would simplify things a lot and make the code much more portable, since there would be no need to do FPU-specific stuff on ARM or MIPS (or s390). And it has been in the standard for 15 years.


In early PC/IRAF Linux ports I remember we did actually use some fenv calls; however, they didn't actually work. It isn't as simple as handling one exception: you have to catch them all or else punt on a SIGFPE and report a generic error.

Is there any platform today which does not support it?


Not that we plan to port to anytime soon. These things tend to be OS-specific rather than hardware-specific.


Even in this case it would be good to use much of the time for documentation, cleanup, and tests, don't you think?


Come on, it's not like this hasn't occurred to us in the past, and in an ideal world we could work far enough down the ToDo list to get to it. I'd suggest again that this is where the community could pitch in, but I've recently been reminded they don't actually work for me 8-) As such, I eagerly await their efforts; until then, the best we can do is address this as part of new ports, major releases, system development, etc.

 
olebole
 05/18/2014 05:21PM  
Quote by: fitz


The MKPKG task takes a "-p" flag (e.g. "mkpkg -p noao") to load package-specific environments from the package's lib directory; this is also used to find libraries that are outside the iraf tree in the package source. Packages have their own 'zzsetenv.def' files, so some sort of package structure in /usr/share or /usr/lib must still be maintained to avoid naming conflicts.
Why would the proposed structure not allow this?


It might, but it would require significant changes to the current idea of package structure if one were to follow the FHS to an extreme. For example, the package help database and zzsetenv.def files would probably belong in /usr/share, and to avoid name conflicts you'd have to keep something resembling a current iraf tree to keep package files separate.


Yes: /usr/share/iraf/ contains everything that is architecture-independent; /usr/lib/<arch>/iraf contains everything that is architecture-dependent. The latter is just the stuff that today sits in the lib.<arch> and bin.<arch> directories. The directory structure under /usr/share/iraf is as before (without bin.* and lib.*), and this is also IRAF's home. To use this system, one just has to adjust the paths for system-dependent executables and libraries to add the ones in /usr/lib/<arch>/iraf (which would have a similar structure).

Package paths are built from a single environment variable, so if some files are in /usr/share (or /usr/lib) but the source is elsewhere you'd need to adjust all of these paths accordingly.


The source is in /usr/share (if it is there at all). Path adjustment is needed only for the libraries. As long as a setup uses the standard installation environment scripts, it is fine to install.

Similarly, are package parameter files considered data that belong in /usr/share?

Yes.
What about things like the noao$lib/obsdb.dat file or the noao$lib/onedstds directory? What about CL scripts that do (and do not) have their own parameters declared as part of the script itself? The minute you start moving these files from where the CL and iraf system expect them to be, you need to either change all the path definitions and how they're accessed, or else you have a mirror copy under /usr/share for a runtime system and another under /usr/src so you can build.

All in /usr/share. Except binaries and libraries. /usr/share/iraf is the new $iraf.

The MKPKG and XC tasks are host commands and get paths from the iraf kernel (specifically os$irafpath.c, or os$boot/bootlib during a bootstrap). These rely only on having $iraf and $IRAFARCH defined (where IRAFARCH is not as simple as 'lib' or 'lib64'). What you propose would require having $iraf, $iraf_l (for lib), $iraf_s (for share), etc., or else hardwiring the FHS paths into the build process.

My patch puts them in iraf_b, which is probably not the best name.
All of this is new complexity the users won't understand

Since they just call mkpkg and xc, they don't need to understand this at all. I also don't see the additional complexity if there is another program that follows the same standard as all the others on my computer.
and the grumpy upstream bastard doesn't want to deal with, so I defer it to you.
Didn't the Mageia thing address this already?

Yes. And you will probably have to deal some day with Mageia users that have exactly that setup.
And all of this work for a FHS standard that doesn't require apps to conform

Can you give a reference for that? At least Debian requires it.


Even in this case it would be good to use much of the time for documentation, cleanup, and tests, don't you think?
Come on, it's not like this hasn't occurred to us in the past and in an ideal world we could work far enough down the ToDo list to get to this. I'd suggest again this is where the community could pitch in, but I've been recently reminded they don't actually work for me 8-) As such, I eagerly await their efforts and the best we can do is address this as part of new ports, major releases, system development, etc.

I am going to prepare a patch for the makefiles that basically replaces all the hardcoded "gcc" with $CC. Would such a patch be welcome?

Best

Ole

 
fitz
 05/19/2014 11:27AM  



The source is in /usr/share (if it is there at all). Path adjustment is needed only for the libraries. As long as a setup uses the standard installation environment scripts, it is fine to install.
:
All in /usr/share. Except binaries and libraries. /usr/share/iraf is the new $iraf.


So basically your proposal is that /usr/share/iraf is a mandated iraf root directory, but as part of the installation the bin/lib files move out of the iraf tree? And assuming external packages go into /usr/share/iraf/extern as they are now, the only issue for them is moving the bin/lib files?

Users have had their choice of iraf root dir for decades; if you want to use /usr/share, please do. If Debian wants to move the bin/lib files as part of their installation, the package script can easily do that and replace the files in the iraf tree with symlinks, and no changes are required by me. I see no real reason to modify the system to support one particular choice of iraf root path.



All of this is new complexity the users won't understand


Since they just call mkpkg and xc, they don't need to understand this at all.


Perhaps users don't, but package developers certainly do if you're requiring that parts of the package (e.g. lib) be moved outside the package tree.


Yes. And you will probably have to deal some day with Mageia users that have exactly that setup.


Oh no.... both of them? Actually, they will be referred back to the RPM packager or asked to install the release we support. That packager (or someone such as yourself) can of course reply to users who post here.



And all of this work for a FHS standard that doesn't require apps to conform


Can you give a reference for that? At least Debian requires it.


The FHS 2.3 standard itself, at http://www.pathname.com/fhs/pub/fhs-2.3.pdf. In particular, Sec. 1.1, commenting on the scope, says:


Local placement of local files is a local issue, so FHS does not attempt to usurp system administrators.


In that case IRAF would properly belong under /opt per Sec 3.13, but the same issues about (and resistance to) moving things around apply.


I am going to prepare a patch for the makefiles that basically replaces all the hardcoded "gcc" with $CC. Would such a patch be welcome?


Only so long as the change keeps gcc as the value of $CC in that Makefile and doesn't inherit an environment value. The same would apply to F77 definitions that might be 'gfortran' instead of F2C and would break linking. In the case of C there are slight differences with e.g. clang as the compiler, but major differences when using the Intel compilers, and IRAF ports are only supported for GCC.

 
olebole
 05/19/2014 12:16PM  
Quote by: fitz

All in /usr/share. Except binaries and libraries. /usr/share/iraf is the new $iraf.

So basically your proposal is that /usr/share/iraf is a mandated iraf root directory, but as part of the installation the bin/lib files move out of the iraf tree? And assuming external packages go into /usr/share/iraf/extern as they are now, the only issue for them is moving the bin/lib files?


More or less: yes. As I wrote above, the structure for a local installation could be in one tree:
TXT Formatted Code

share/ ......... everything which is for all architectures. IRAF root.
<arch>/ ........ base dir for arch (with lib/, bin/, noao/lib/, noao/bin/ etc. subdirs)

You can still have everything in one place. If you don't like the share/ folder, you could even put everything into the base directory. For FHS compatibility, it is just important that there is a single path <arch> that can be moved to the architecture-dependent directory /usr/lib/<arch>/iraf/.

I see no real reason to modify the system to support one particular choice of iraf root path.
Because it is the standard? And because it would be useful not to ignore a standard that has been established for 20 years? Especially now that you can see that the change is really minor.

Perhaps users don't, but package developers certainly do if you're requiring that parts of the package (e.g. lib) be moved outside the package tree.
As long as they use the variables provided by IRAF (one for the shared dir, one for the arch dependent dir), they won't see a difference.

Yes. And you will probably have to deal some day with Mageia users that have exactly that setup.

Oh no.... both of them? Actually, they will be referred back to the RPM packager or asked to install the release we support. That packager (or someone such as yourself) can of course reply to users who post here.

Of course you can do this. And if you act formally, you are on the safe side. If, on the other hand, you want the best user experience, it would be better to find a common solution, so that you do not get surprised by Mageia or Debian users.

The FHS 2.3 standard itself, at http://www.pathname.com/fhs/pub/fhs-2.3.pdf. In particular, Sec. 1.1, commenting on the scope, says:
Local placement of local files is a local issue, so FHS does not attempt to usurp system administrators.

This is for local installations, not for things that come through the system's package management. If I want to create a package for Debian, I cannot use this rule. If someone is packaging IRAF for openSUSE, he cannot ignore this rule.

I am going to prepare a patch for the makefiles that basically replaces all the hardcoded "gcc" with $CC. Would such a patch be welcome?
Only so long as the change keeps gcc as the value of $CC in that Makefile and doesn't inherit an environment value.

Since you (surprisingly, and without asking the user) set the value of CC in the environment: why do you care?
I would do it so that if no CC is set, gcc is used. More important is CFLAGS, since for Debian I need to add some special flags that improve security. These include more warnings (I'll also send you a patch that removes these warnings). Using a compiler other than gcc also allows checking for more code problems, which would improve code quality as well, even if it does not create a workable IRAF installation at the moment.
I would also not mind setting CC, CFLAGS etc. exactly once, but at the moment this is a complete mess: sometimes you use $CC, sometimes gcc, sometimes /usr/bin/gcc -- this is what I mean by code cleanup.

 
fitz
 05/19/2014 01:30PM  

This is for local installations, not for things that come through the system's package management. If I want to create a package for Debian, I cannot use this rule. If someone is packaging IRAF for openSUSE, he cannot ignore this rule.


People packaging for their favorite system are free to do whatever they like or whatever is required to meet the rules they impose on themselves; I wish them luck.


The FHS 2.3 standard, Sec 3.13, clearly states:


/opt is reserved for the installation of add-on application software packages.

...

The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for local system administrator use. Packages may provide "front-end" files intended to be placed in (by linking or copying) these reserved directories by the local system administrator, but must function normally in the absence of these reserved directories.


In what way does the current IRAF installation process with a user-selected /opt/iraf root fail to meet these requirements, and therefore fail to comply with the FHS standard document? If that's not sufficient for Debian's self-imposed rules on .deb files, I can't help.

 
Profile Email
 Quote
olebole
 05/19/2014 02:10PM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
Quote by: fitz

This is for local installations, not for things that come through the system's package management. If I want to create a package for Debian, I cannot use this rule. If someone is packaging IRAF for openSUSE, he cannot ignore this rule.
People packaging for their favorite system are free to do whatever they like or whatever is required to meet the rules they impose on themselves; I wish them luck.


In the end, they will create an IRAF installation. And because it is then in the distribution, it will be much easier to install and update these packages than to download from an FTP server, adjust for local needs, and then install.

How often do you install, e.g., gcc or emacs from "upstream"? Probably quite seldom (well, you will probably do it a bit more often than others to check things for other versions, but you get the idea). Distributing software with the system or similar (MacOS fink) is by far the easiest way for a user or a system administrator to get some software and to keep it updated. How long would it take to install by hand, in the traditional IRAF way (downloading the file, reading the README, and finally following some weird update procedure), all the bug fixes your favourite Linux distribution got in the last half year? How often did you do that for gcc, for the linker, for your e-mail program, for your browser? On Linux, people rely on packaging; having IRAF in the distributions would definitely be a plus, and IRAF is lucky that there are volunteers to put it into the major distributions.

The question here is just how to get support so that this packaging can be done in a way that makes upstream happy as well, and how to keep the connection between users and developers. Nobody wins if you start to divide the IRAF users into "friends" (who use IRAF in the old-fashioned way from the tarball) and "wish you good luck!" (those who just fire up an "apt-get install iraf" or so). The latter will come some day, and if we don't reach compromises, we (you as upstream, and we packagers) will have unhappy IRAF users, no matter whether they use Debian or not. I don't want that; that's why I am having this discussion.

The FHS 2.3 standard, Sec 3.13, clearly states:
The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for local system administrator use. Packages may provide "front-end" files intended to be placed in (by linking or copying) these reserved directories by the local system administrator, but must function normally in the absence of these reserved directories.
In what way does the current IRAF installation process with a user-selected /opt/iraf root fail to meet these requirements, and therefore fail to comply with the FHS standard document?

I highlighted the parts that make your proposal impossible.

 
Profile Email Website
 Quote
fitz
 05/19/2014 02:17PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040

You define "package" as meaning an RPM or DEB file, I read it to mean an "add-on application package" from the first statement, and therefore not impossible.

 
Profile Email
 Quote
fitz
 05/19/2014 02:43PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040
How often do you install, e.g., gcc or emacs from "upstream"? ....


Not often, but I've done it. I would note, however, that the GCC project itself only distributes tarballs; the convenience of package installers that keep this up to date for users is not done as part of GCC development, but by a community interested in supporting a particular distribution. As an individual user, I can override the autoconf script to specify my own e.g. 'bindir' if I want to install outside the Linux distribution's standard dirs. An RPM spec file that simply puts things in /iraf/iraf and creates the needed links for the 'cl' command is trivial, and is all that most users would want; one that creates a /usr/share/iraf tree and links for bin/lib is not much harder.
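Roughly, the %install step of such a minimal spec amounts to no more than this (paths are illustrative and depend on the release):
SHELL Formatted Code
# stage the whole tree under /iraf/iraf inside the build root
mkdir -p $RPM_BUILD_ROOT/iraf/iraf $RPM_BUILD_ROOT/usr/bin
cp -a iraf/* $RPM_BUILD_ROOT/iraf/iraf/
# put the 'cl' startup script on the PATH (exact source path varies by release)
ln -s /iraf/iraf/unix/hlib/cl.sh $RPM_BUILD_ROOT/usr/bin/cl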

 
Profile Email
 Quote
olebole
 05/19/2014 03:01PM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
Quote by: fitz


You define "package" as meaning an RPM or DEB file, I read it to mean an "add-on application package" from the first statement, and therefore not impossible.


The idea of having a package is that it is installable by the user (or the admin) in a simple, standardized way, and that it is kept up to date in the same way. You do this for gcc, for bash, for emacs, for the kernel, and for a zillion other packages. What makes IRAF so special that you insist the user has to separately download the package from some FTP server, wait for the right lunar phase (never on a new moon, as stated in the README, right?), and perform some voodoo tasks? All other packages are installed quite simply.

Want ds9?
SHELL Formatted Code
apt-get install saods9

Want astropy?
SHELL Formatted Code
apt-get install python-astropy

Want skycat?
SHELL Formatted Code
apt-get install skycat

Want iraf?
BRAINFUCK Formatted Code
>>[-]>[-]>[-]>[-]<<<<<[->>+<-[>>>]>[[<+>-]>+>>]<<<<<]

This is just not how it should work. It should work like this:
SHELL Formatted Code
apt-get install iraf

At least, if you have a Debian derivative.

How do you usually update your system, including ds9, astropy, and skycat?
SHELL Formatted Code
# lean back and relax
(at least, on my system, this is done automatically)
What does one additionally need to do if IRAF needs an update? Is this also a zero-liner? How much time does one spend on a single update of IRAF compared with a single update of, e.g., saods9 via apt-get?

To reach this, we need a Debian package (and a Fedora one, and so on).

How often do you install, e.g., gcc or emacs from "upstream"? ....
Not often, but I've done it.

You have a reason why you don't always do this, right?

I would note, however, that the GCC project itself only distributes tarballs; the convenience of package installers that keep this up to date for users is not done as part of GCC development, but by a community interested in supporting a particular distribution.
This method (upstream distributes tar files, and the user installs specific builds from the distribution) seems to work quite well, right? If there were a good ("good enough", i.e. as good as the gcc packages) IRAF package for the Linux distributions, there would be no need for upstream to distribute extra tar files.

As an individual user, I can override the autoconf script to specify my own e.g. 'bindir' if I want to install outside the Linux distribution's standard dirs.
This is basically what I want from IRAF: I want to specify the bindir, the libdir, the datadir, etc., and get a working installation that obeys these directories. Note that even gcc distinguishes between /usr/lib and /usr/share.
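Purely as illustration (IRAF has no autoconf script, so these options are hypothetical), this is what a packager's build would ideally look like:
SHELL Formatted Code
# hypothetical configure-style build, mirroring what already works for gcc:
./configure --bindir=/usr/bin \
            --libdir=/usr/lib/x86_64-linux-gnu/iraf \
            --datadir=/usr/share/iraf
make
make install DESTDIR="$PKGDIR"   # staging directory used by the packaging tool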

An RPM spec file that simply puts things in /iraf/iraf and creates the needed links for the 'cl' command is trivial, and is all that most users would want
Contradicts the FHS, since /iraf/ is not allowed there.
one that creates a /usr/share/iraf tree and links for bin/lib is not much harder.
Contradicts the FHS, since then the /usr/share tree contains architecture-dependent files.

 
Profile Email Website
 Quote
fitz
 05/19/2014 03:13PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040

This is no longer a productive discussion; if you want a Debian package, please build one.

 
Profile Email
 Quote
robsteele49
 04/04/2017 05:33PM  
++---
Junior

Status: offline


Registered: 05/03/2010
Posts: 28
Has this effort just died? I couldn't find an official repository on github and was wondering if there was one. Or was this a good starting place for one?

Rob Steele (Robert.D.Steele@jpl.nasa.gov)
 
Profile Email
 Quote
olebole
 04/05/2017 07:55AM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
Quote by: robsteele49

Has this effort just died? I couldn't find an official repository on github and was wondering if there was one. Or was this a good starting place for one?


Aside from my effort to create zsvjmp.c for a number of architectures (the official Debian ones) on github, there is IMO nothing.
Aside from this, there is still another problem for anyone trying to distribute binaries: IRAF is linked to libreadline, which is GPL, so IRAF itself needs to be GPL. This basically means that they need to offer all source code under the GPL. This is, however, impossible, since IRAF extensively uses code from the Numerical Recipes, which is not GPL and is incompatible with it.
That means any binary distribution of IRAF is illegal, which killed my Debian efforts; and if NRAO respected copyright, they would also have to remove the binary distributions of IRAF from their servers.
Unfortunately, there is no effort to solve this problem. NRAO seems to just try to ignore it.

 
Profile Email Website
 Quote
olebole
 04/05/2017 09:11AM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
NOAO of course, not NRAO. Don't want to blame the wrong institution.

 
Profile Email Website
 Quote
fitz
 04/05/2017 03:39PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040
The ONLY tasks in IRAF linked to readline are the ECL and VOCL. You can edit the iraf$pkg/mkpkg file to comment these out of the build, or add a "-DNO_READLINE" flag and modify the mkpkg file so the binaries are built without readline functionality (i.e. you lose the up-arrow history).

I refer you to the iraf$local/COPYRIGHTS file that explicitly grants permission to make and redistribute changes like this.
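For instance, anyone building a distribution can quickly check which shipped binaries actually pull in readline (the binary path here is illustrative):
SHELL Formatted Code
# list the shared-library dependencies of the ECL binary and grep for readline
ldd /iraf/iraf/bin.linux64/ecl.e | grep readline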

 
Profile Email
 Quote
olebole
 04/05/2017 03:49PM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
Quote by: fitz

The ONLY tasks in IRAF linked to readline are the ECL and VOCL. You can edit the iraf$pkg/mkpkg file to comment these out of the build, or add a "-DNO_READLINE" flag and modify the mkpkg file so the binaries are built without readline functionality (i.e. you lose the up-arrow history).


The problem is that you distribute a binary package which is linked to readline. This makes the IRAF binary package a derivative work, for which the whole source must be distributed under the GPL.

 
Profile Email Website
 Quote
fitz
 04/05/2017 03:54PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040
And I will no doubt burn for all eternity by doing so. The point is, you don't have to distribute those binaries if you choose to create your own distribution.

 
Profile Email
 Quote
olebole
 04/05/2017 04:03PM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
Quote by: fitz

And I will no doubt burn for all eternity by doing so.


Maybe I am wrong about this, but I think that license conditions are not a voluntary hint that one can obey or ignore as one wishes.
We as scientists, especially, should be aware of intellectual property and its rules. Also, I am not sure whether NOAO really wants to be involved in a license violation.

But maybe we can discuss this in public at the next ADASS?

 
Profile Email Website
 Quote
fitz
 04/05/2017 04:07PM  
AAAAA
Admin

Status: offline


Registered: 09/30/2005
Posts: 4040
So wouldn't just putting the ECL/VOCL under a separate GPL license solve your issue with readline?

 
Profile Email
 Quote
olebole
 04/06/2017 09:09AM  
++++-
Regular Member

Status: offline


Registered: 05/01/2014
Posts: 103
If they were independent tools: yes. But they are an integral part of IRAF. Also, IRAF has some other GPL sources, like healpix.x or rmturlach.x.

 
Profile Email Website
 Quote
   