I haven’t been around much the past few days. There are a few reasons for that. We’re starting to bind journals again for the first time in months. I’ve taken over a number of knowledge management duties for the library that are forcing all of the librarians here to rethink their work flow models. I’m putting together a proposal for the 2006 METRO Digitization Grant for delivery in about three weeks. We’re integrating PubMed into our Article Linker subscription. And I’m working with my boss on getting our journal subscriptions evaluated and renewed, so there is a lot going on right now.
That said, while I wondered about how to deal with the knowledge management responsibilities, I got to thinking about what I learned from my 20+ years working with computers and how that knowledge helps me plan work both tactically (day to day) and strategically (quarter to quarter, year to year). I’ve always liked machines, and as I got older and learned to think more abstractly I found that I liked thinking about systems in general. (I was one of those truly nerdy kids, now a nerdy adult, who would wonder who first created the laser death ray and why everybody in one sci-fi universe or another seemed to have them. Did the inventor not patent the thing? Couldn’t he have licensed it to the army? To the Time Corp? What about antimatter bombs? Heck, while we’re at it, who invented the time machine? Did people pay royalties to H.G. Wells when they built their own? If not, then why not? And how big an economy does a planet really need just to build a single Super Star Destroyer?)
The point is that in my daily interactions with the librarians here it became clear that I know more about these things than most of them do. And that’s fine–reference librarians don’t need to know every detail about the information systems they use the same way a concert violinist need not know how to build a violin. But that lack of knowledge works both ways–there are some things our network just can’t do, or at least, can’t do without enormous additional resources (time, money and staff, but mostly time . . . and money). Even our library’s director–for whom I have enormous respect as both an administrator and a librarian–sometimes thinks of the system as a magic box that works like any ship’s computer on Star Trek.
Well, the world . . . she don’t work that way. What follows are some extremely basic and blunt thoughts on the things that the planners of Libraryland might wish to take into account if they feel so inclined. Think of it as a friendly and well-meant rant.
10 Things About Electronic Resources That Need to be Understood
1. It is not magic.
I’ll say it right now for all electronic resource librarians
everywhere—you are buying (or adding to) a computer system. It is a machine, an
amazingly complex machine, with zillions of moving parts (make no mistake,
software is a collection of zillions of moving parts). It has very strict
requirements in terms of input and just as strict parameters concerning what it
will produce (output). It is not a magic show, although it may seem pretty
magical to the average library user (or even the average librarian). It’s not
powered by gnomes, pixies, elves or small toads running around the motherboard.
There’s no link to hyperspace sitting on the CPU and no remote control that aliens
are using to send it messages. It’s a machine. Please remember that, and try
not to look too surprised if a staff person gently reminds you of that now and then.
2. It is not a black box.
Your computer is not a black box. It has individual parts that work
together according to its programming. It needs to be designed, built,
maintained, and sometimes repaired. Depending on the relative complexity of the
system, some networks need more attention than others. Just because the
NYPL has one kind of system does not mean it could one day swap systems with
the University of Michigan. Each system is designed to perform a very specific
function under distinct circumstances, and these machines get very quirky and
petulant if they’re asked to do something they’ve not been designed to do. Computers
cannot improvise, maintain or improve themselves (at least not yet).
3. “Even a monkey” cannot use it.
To be fair, I’ve never heard a salesman
for Micron, Dell or any other PC vendor claim that their product was so simple
that “even a monkey can use it.” But I’ve also never encountered a computer,
either software or hardware, that worked exactly as advertised
“right out of the box.” “Plug and play” was long ago re-termed “plug and pray”
by IT workers all over the world. The machines come with cords and cables,
keyboard and mouse, formatted hard drives, operating systems (Windows or OS X
usually) and a variety of applications, and that’s pretty much all. Once your
IT people have determined that the parts fit properly and do not explode when
they turn it on, then the real work can begin.
What happens then is the process of customizing the applications you
bought for the machine. Networks need to be designed and built, then
programmed; depending on how many PCs you network and what exactly that network
is supposed to do, this can be a very time-consuming, complicated and expensive
process. Just designing the system can take weeks. The same goes for ILS
software, internet connectivity, training packages, etc. The new software needs
to be customized for your library’s use. And then once that’s done, your staff
needs to be trained in its use, and to become truly proficient with it they
need time and practice. There is no substitute for either. All this requires
time and money.
4. It is amazingly easy in wartime to build a fleet bigger than can be
maintained in peacetime.
If you play war games at all (I do) you’ll recognize this maxim.
Transposed to the world of the modern library, it means that you can budget for
all the fanciest highest-tech machinery and the software for the niftiest ILS
on the planet and neglect to budget enough money to fund the fixed cost of
running it, not to mention that of hiring someone to run it for you. The more
advanced the system becomes, the more resources it will demand to retain the
same level of functionality, much less achieve a higher level of functionality.
Then you have to count the cost of training the regular staff to understand and
remember what’s been changed; once they do, they must devise new
strategies for using the equipment with their patrons. Electronic resources, at
their best, can be amazingly productive tools for library services. They just
do not come cheap.
5. Expertise is also a resource.
Here’s a quick experiment you can try right now on your PC. Go into
Microsoft Word, create a new file and type a single line: “Hello There.” Save
it as “Web Page, Filtered.” Then open your favorite browser (I’m partial to
Firefox) and open the file you just saved. Then look at the source code (you
can see it by going to the View menu and selecting “Page Source” from the
drop-down menu). What do you see? Thirty or so lines of HTML that Word inserted
unbidden and unwanted into your document when you weren’t looking. That’s 30
lines of code that maybe you needed and maybe you didn’t need. That’s not the
worst part—had you saved Hello There as a “Web Page” you’d have ended up with
nearly 80 extra lines of code. All that code might mean nothing to you or it
might be something extra that an electronic resource librarian might have to
slog through when diagnosing a problem on a given website. Cluttered web pages
are slower and more complicated, and so are more difficult to fix than they
need to be.
My point is that simplicity in the world of computers doesn’t come
easily and it never comes out of the box. The simplest HTML page that says
“Hello There” needs only five lines of code:

<html>
<body>
Hello There
</body>
</html>

And that’s it. Everything else is extra.
I’m not going to claim that this sample web page should be a model for
librarians, but it illustrates a point that’s often overlooked on the job and
ignored completely in library school, if my experience was any guide. Expertise
is something acquired through experimentation, patience, looking under the hood
of the system and occasionally screwing around just to see if something works.
6. What seems easy is often not.
One of the most serious misconceptions about electronic resources that I
encounter over and over—often from people who are extremely intelligent, highly
educated and notably professional—is that once a resource is set up, it runs on
its own without much additional work. Nothing could be further from the truth.
The point I’ve brought up before and will probably harp on again before
I’m done with this is that computer networks are highly complex machines with
zillions (gajillions, even) of moving parts. If you can’t imagine that at first,
think of it this way. If you ever cracked the case on a PC and peered inside
you’d see what looks like a machine with almost no moving parts. Sure, the
power supply is one, the fan another, there are wires and chips and lots of
glittery bits soldered onto the motherboard, but nothing really moves, right?
Wrong. The moving parts are in the software. Software is built from lines of
code the way a tapestry is woven from lines of
thread—each line being a particular instruction for the CPU to follow. Each one
builds on the lines that came before it, so after several thousand of them,
you’ve got a set of instructions that few humans could hope to follow if handed
the same list in that format. ILS packages have hundreds of thousands of lines
of code (Microsoft’s Vista operating system reportedly runs to tens of millions). All these
instructions have to work together perfectly or you get a flawed product. In
this case a flaw could mean data lost, records sent to random places, or an
application that quits without warning at apparently random intervals.
With this in mind, consider that I’m speaking merely of a single
application—a modern library has dozens of computers, connected to each other
through a wide variety of media, shuffling data to each other all the time. In
a sizeable wide area network, we could be talking about tens of millions of
lines of code, all of which have to work together perfectly for the system to
work as a whole. I would think that the regularity with which modern networked
systems malfunction or fail to live up to the sales hype is proof enough that
it’s not easy to maintain. It only looks easy to use when it works properly.
7. Unintended consequences increase in direct proportion to complexity.
The obvious corollary to the previous point is that the more a library
PC network is expected to do, the more difficult it becomes to keep it running
at peak levels (note that I didn’t say “peak efficiency”—very few computer
networks are truly efficient in the industrial sense.) It’s a simple
progression. A machine with 10 perfectly interconnected moving parts has 100
things that can go wrong, as each part relies on the 9 other parts to work
properly to function. (I’m being illustrative here, not mathematical.) So a
machine with 100 parts can go wrong in 10,000 ways and a machine with a million
parts has literally a trillion ways to fail at any given moment. In real life,
it doesn’t actually work that way—some parts will fail more often than others
by dint of their design and the quality of materials used to build them, not to
mention frequency of maintenance and level of use. But the thing to remember
is that when you add a level of complexity to any system, you (wittingly or
not) increase the level of resources it requires to work properly. That means
money, time, and maintenance by knowledgeable staff.
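The back-of-the-envelope scaling above can be sketched in a few lines of code. This is my own illustration of the same rough pairwise-interaction logic, not a real reliability model:

```python
# Rough illustration: if every part interacts with every other part,
# the number of interactions (each a potential failure point) grows
# with the square of the part count. Back-of-the-envelope only.

def interaction_count(parts: int) -> int:
    """Return parts * parts, the rough number of pairwise interactions."""
    return parts * parts

for n in (10, 100, 1_000_000):
    print(f"{n:>9,} parts -> {interaction_count(n):,} ways to go wrong")
```

Ten parts give 100 interactions, a hundred give 10,000, and a million give a trillion, matching the progression in the text.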
8. Your IT department does not know about library resources.
Make no mistake, your IT department is staffed by highly motivated,
skilled folks who work tirelessly to make sure that your email arrives on time
(in both directions), the servers talk to their neighbors and colleagues all
over the world properly, that data is backed up regularly and often and that
new equipment and software is properly installed, tested and fixed when
something goes wrong. But they do not know what your patrons need and they
probably never will. The librarians need to know what their patrons need and
communicate this to the IT folks. The two groups are indispensable to the
library’s functioning, but they are not interchangeable. The sole exceptions
are librarians who staff very small libraries, where three people need to tend
a collection of 25,000 books. In that case, some redundancy is inevitable.
9. Programming is a skill.
“Programming” is a big word. It involves manipulating instructions in
such a way that coaxes a particular type of behavior from a computer. In more
mundane terms, it means building a web page with HTML or designing an ILL form
using PHP or some other scripting language (like Perl). Library staff, when
they program, generally use scripting languages rather than building
applications from scratch, although plenty of IT folks do both for their
institutions. It’s like writing a short story: anybody can put words down on a
page but the quality of the story depends entirely on the expertise of the
writer. The really good writers have been writing and perfecting their craft
for decades. The same applies to programming: anybody can learn how to do it
competently, but not everyone with the skill will work to master it. (Not every
writer can be Stephen King, either.)
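To make "scripting" concrete, here is a hypothetical example of the kind of small script a library staffer might write (my own illustration, not from any particular library): a Python function that validates the check digit of an ISBN-10.

```python
# Hypothetical small "library scripting" task: validate an ISBN-10.
# An ISBN-10 is valid when the weighted sum
#   d1*10 + d2*9 + ... + d10*1
# is divisible by 11; "X" in the last position stands for the value 10.

def is_valid_isbn10(isbn: str) -> bool:
    digits = isbn.replace("-", "").upper()
    if len(digits) != 10:
        return False
    total = 0
    for position, ch in enumerate(digits):
        if ch == "X" and position == 9:
            value = 10          # "X" is only legal as the check digit
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += value * (10 - position)
    return total % 11 == 0

print(is_valid_isbn10("0-306-40615-2"))  # a valid ISBN-10
print(is_valid_isbn10("0-306-40615-3"))  # wrong check digit
```

Anybody can type this in; writing it so it handles hyphens, the "X" digit, and malformed input correctly is where the craft comes in.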
10. Decide in writing where you want to go.
If there were one thing
I might point to that absolutely had the power to affect every other type of
library service, I’d say it was whether the library in question had a coherent
information technology policy that was updated every year. My MLS research
project was on this topic. The bottom line: libraries
that installed or upgraded major IT components consistently wasted less time,
burned through less money, and misallocated fewer resources when they had such
a policy in place and updated it regularly. The reason is essentially one of
prioritizing the use of limited resources. When the priorities are vague and
not well understood, they tend to shift as different staff members approach the
problem of allocation. Putting it in writing is always the best route. No
competent library director would think of winging it when it came to collection
development, after all. Yet those same directors often approach the development
of electronic resources in a far more vague fashion. A written goal has far
more power to keep you on track than one that was never written down.