16

Nowadays you can use Ansible, Chef, Salt, and other tools to manage Linux/*BSD systems, keep them updated, install software in a reproducible way, etc.

What tools were used to manage large UNIX installs (such as a data center, or an environment with hundreds of UNIX workstations) back in the 80s/90s?

  • 12
    Sneakernet in the 80s probably. Commented 2 days ago
  • 34
    Data centres full of identical Unix servers weren't really a thing yet. That 1995-vintage picture of Pixar's ~300 Sun box render farm for Toy Story is notable because of the huge scale. Remember that hard disks were very expensive even compared to expensive Unix workstations, and one approach was to just netboot machines and have them NFS-mount the same shared readonly filesystem (with some read-write space for /etc and so on). OS updates involved updating just that one filesystem. Commented 2 days ago
  • 49
    Once upon a time, software did not need to be patched every week. Systems were configured to do particular things, and mostly they just carried on doing them. Commented 2 days ago
  • 8
    The university department I was in had only about a hundred computers, not hundreds, but the single sysadmin had extensive scripting in place for everything (HP-UX), including the occasional updates. And yes, everything was on the internet (and every computer had a public IP address, no NAT; you could do X forwarding when you were at a conference at another university and work as if from home, though a bit slower). Commented yesterday
  • 9
    I did an install from about 50 5.25" floppies in the early 80s. You lose the will to live after about 4 hours. Commented yesterday

4 Answers

49

I worked at a Very Large North American Automaker in the mid-to-late '90s. We had approximately 500 Unix workstations on our site, plus handfuls of large servers. These workstations ran SunOS, Solaris, IRIX, AIX, and HP-UX.

First, @dave above is correct. Updates and patches were infrequent from the OS/hardware vendors. We'd skip all minor releases and update systems perhaps once a quarter. Software (CAD/CAM/CAE in our case) was updated at about the same pace. Vendors that had many "urgent" updates were heavily marked against during purchasing rounds the next year. "Six updates last year, SGI? Bad quality control. Maybe we go with Sun Microsystems for next year's purchase..."

When we did have to do updates, techniques varied widely. If they could be distributed over the network, we'd stage them on servers, then remotely log in to each workstation individually and execute the updates. Lots of homebrew scripts were developed to log in and update systems.
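
As an illustration only (not the author's actual scripts), such a homebrew push script might have looked something like the sketch below; the host list, patch name, and paths are all invented.

    #!/bin/sh
    # Hypothetical sketch of a homebrew update script: copy a vendor patch to
    # each workstation with rcp, run it via rsh, and append the output to a log.
    # The host-list file, patch name, and log path are invented for illustration.
    for host in `cat /usr/local/adm/workstations`
    do
        echo "==== $host ===="
        rcp /dist/patches/patch-1234.sh "$host:/tmp/patch-1234.sh" &&
            rsh "$host" "sh /tmp/patch-1234.sh && rm /tmp/patch-1234.sh" \
                >> /var/adm/patch-1234.log 2>&1
    done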

Sometimes, though, updates required single-user mode, physical media that had to be inserted, or physical switches on the front of the system that had to be set (e.g. the RS/6000's key switch). In those cases, we'd walk around and update machines one at a time over the course of several days.

9

    Nowadays you can use Ansible, Chef, Salt, and other tools to manage Linux/*BSD systems, keep them updated, install software in a reproducible way, etc.

Though I had not yet been born at the time, I've learned that similar tools existed even back in the 80s (note that my research was limited to Unix environments), although I do not know how widely they were used:

  • Before Ansible, there was rdist, which came as part of 4.3BSD in 1986 and seems to have been used to push out standard files via rcp.

    When we [University of Southern California] purchased our first workstations (Sun-3/50’s) in early 1986, there was still nothing to maintain all the distributed files. We were still muddling along with simple shell scripts for our thirty UNIX hosts. By the time rdist was released with 4.3BSD, and shortly afterwards with SunOS, we were really starting to struggle to maintain our 100 UNIX hosts.

    The release and subsequent Great Discovery of rdist in 1986-1987 came as the number of UNIX workstations started doubling every year. Over the year or so following the Great Discovery, the number of supported machines continued to dramatically increase [...]

  • There was also sup, dating back to 1985 (original article in PostScript, ASCII).

    [...] SUP is intended specifically to address these situations:

    • A large collection of system software prepared by a maintenance staff for use by a large user community. For example, the CMU-CSD UNIX software used by hundreds of workstations. In such situations, the users know absolutely nothing about how to obtain such software, but they need to keep it constantly upgraded on their machines.

    [...]

  • CFEngine is newer (1993) but technically still "90s".

  • Some sites had AFS – a clustered network filesystem with strong caching (and which I believe has some ancestral ties to Windows AD via DCE). I've read various reports of software packages being run directly out of /afs/example.com/bin/@sys/whatever rather than being deployed locally, e.g. see MIT Athena "lockers"; a small usage sketch follows this list.

  • OpenAFS itself had an 'upserver' component for AFS servers and clients to fetch updated binaries.
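
As a hypothetical sketch of the "run it straight out of AFS" idea mentioned above (the cell and package names are invented; @sys genuinely expands to the client's AFS sysname, so each architecture picks up matching binaries):

    $ fs sysname
    Current sysname is 'sun4x_58'
    $ PATH=/afs/example.com/software/acme-cad/@sys/bin:$PATH; export PATH
    $ acme-cad &    # fetched from AFS (and cached) on demand; nothing installed locally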

(I haven't studied how MIT Athena configuration itself was deployed to workstations – I don't think I came across any description – but it was one large and relatively public 'standardized' Unix workstation environment that might be worth investigating.)


Regarding "installing software in a reproducible way," my rough understanding – supported at least by the existence of Athena's "software lockers" – is that in many cases this was achieved simply by copying (often statically linked) binaries for that software to /usr/foo or a similar location (similar to /opt now), and so 'installing' wouldn't have needed to be a whole Separate Thing but merely another use case for "rdist'ing a bunch of static files".

3

First, some caveats about my experience:

  • it was solely as a user, not an operator or administrator;
  • the systems were smaller than what you are asking about (~20 workstations at high school, maybe 100 in university);
  • it dates towards the tail end of the era you are asking about (I graduated high school in 1997 and started university in 1999).

At my high school, we had a computer lab with about 20 PCs. The computer lab was originally set up for running DOS, which was kind of the standard for the time – if a teacher had any computer experience at all, it would have been with DOS or Windows; Unix was not a thing in high schools or general society, which is after all where teachers come from.

However, one of our teachers was a bit of a unicorn: he had been a maths professor and researcher at university, but he grew disillusioned with the quality of knowledge of the students. So, he decided to become a high school teacher instead to have an impact earlier in a student's education.

As a former university maths professor and researcher, he was intimately familiar with Unix, TCP/IP networking, and system administration, and he converted the whole lab to Linux. There was a slightly more powerful PC for the teacher which was running Linux, and all the student PCs would netboot either Linux or DOS off of the teacher's PC (there was no separate server, the teacher's workstation also acted as fileserver, DHCP, DNS, etc.), depending on the teacher's preference and/or the subject of the course. All filesystems were network-mounted, even in DOS. However, there was no persistent file storage for students, instead we carried around one FAT-formatted and one ext-formatted 3.5" floppy disk. OTOH, assignments were handed in by copying them onto a network share.
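
Purely as an illustration (I obviously don't know how that particular lab was configured, and every name, address, and path below is invented), a netboot client of that kind could be described to an ISC dhcpd of the era roughly like this, with the kernel coming over TFTP and the root filesystem over NFS:

    # Hypothetical dhcpd.conf fragment for one diskless student PC
    # (addresses, MAC, and paths are invented for illustration).
    subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers 192.168.1.1;
    }

    host student01 {
        hardware ethernet 00:a0:c9:12:34:56;
        fixed-address 192.168.1.101;
        next-server 192.168.1.1;                    # TFTP server (the teacher's PC)
        filename "/tftpboot/vmlinuz.nfsroot";
        option root-path "/export/root/student01";  # this client's NFS root
    }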

At university, the setup was much larger, but similar: there was a mix of RS/6000 and HP-UX workstations, and Linux PCs. Everything was netbooted and net-mounted.

So, in both cases, only one copy of the software had to be updated. In the case of the high school lab, it was only turned on during classes and there was no remote access, so scheduling updates was easy: walk into the room at the beginning of a period; if nobody's there, you have a 45-minute maintenance window.

1

For lots (tens to a hundred) of Sun boxes in a uni/research environment in the 1990s, we relied heavily on NFS-mounted disks and network booting from NFS. A network-mounted /usr partition was usable on most workstations for most workloads, and then you just had to update the NFS server image rather than every machine.
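
To sketch that layout (hostnames and paths invented, SunOS 4-style syntax, and the details varied by vendor): the server exports one read-only /usr image plus per-client writable areas, and each client simply mounts them.

    # Hypothetical /etc/exports on the NFS server: a single read-only /usr image
    # shared by every workstation, plus a private read-write root area for one
    # diskless client (names and paths invented).
    /export/exec/sun4       -ro
    /export/root/ws01       -rw=ws01,root=ws01

    # ...and the matching lines in that client's /etc/fstab:
    server:/export/root/ws01    /       nfs     rw      0 0
    server:/export/exec/sun4    /usr    nfs     ro      0 0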

We also experimented with X terminals, diskless workstations that loaded everything from NFS, which were even easier.

Network booting also made it possible to update the kernel centrally. As @dave pointed out in a comment above, updates would usually be quarterly at most. While we didn't have Ansible, Sun did ship with rlogin and friends (and excessively trusting permissions by default), so it wasn't hard to write a shell script that rebooted all your machines.
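
For instance (a minimal sketch; hostnames are invented, and the exact shutdown path and flags differed between Unix flavours):

    #!/bin/sh
    # Minimal sketch: after the shared NFS image has been updated, reboot the
    # clients. Hostnames are invented; shutdown path/flags varied per flavour.
    for host in ws01 ws02 ws03
    do
        rsh -n "$host" "/usr/etc/shutdown -r now" &
    done
    wait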
