
Tech Stuff

The Smartest Ones in the Room: A Review of Hidden Figures

January 16, 2017 by Jon Frater

In 1961, America was all about the mission, a directive that sounds simple but was anything but. The Space Race between the USA and the USSR was on. Both sides were engaged in a game of technological Can You Top This?, and the Russians were winning. Cold War America was held in the grip of a simple fear: the Russians had proved four years earlier that they could build a rocket capable of pushing an artificial satellite into orbit. The logic from there told a simple story: if a satellite could be pushed that far, that fast, what was to prevent them from putting a nuclear bomb on top of that rocket and flying it over to the US? World War II was only a decade and a half in the past, and the horrors of Hiroshima and Nagasaki were fresh in American minds.

Into this setting we meet Katherine Johnson, Dorothy Vaughan, and Mary Jackson (played by Taraji P. Henson, Octavia Spencer, and Janelle Monáe, respectively), three black women who work as “computers” at NASA, calculating the trajectories for Project Mercury. They are part of the West Area Computers Group at NASA’s Langley Research Center in Hampton, Virginia. Despite their clear experience, talent, and proficiency with the work–and their ambition to improve their skills–1961 Virginia is not an encouraging place. Johnson’s supervisor makes full use of her skills but won’t allow her to put her name on the reports she writes or attend briefings on mission updates. The local librarian would rather throw Vaughan out of the building than allow her to borrow a book on FORTRAN so she can learn to program the newly installed IBM mainframe. And while Jackson contributes to improving the quality of the Mercury capsule’s heat shield, she can’t be trained or hired as an engineer without taking advanced classes that are only offered at a whites-only institution.

Hidden Figures is a movie about achievement and racism. History, until relatively recently, has tended to forget or ignore the stories of individuals who contributed significantly to our national success if they didn’t fit the narrative. The film makes its point without being high-handed or manufacturing drama for the sake of conflict; the setting provides conflict enough. 1961 Virginia was a time and place where segregation was considered utterly normal, even banal. We’re shown this in a series of small but essential scenes on the NASA campus: Johnson’s most annoying problem isn’t her workload or her co-workers, it’s the fact that just going to the toilet entails a forty-minute trip from her office to the colored-only restroom on the other end of the compound. It’s not until her boss is made aware of this that he realizes just how insane the law is. His solution is to tear the whites-only signs off the building. Segregation doesn’t fit the Mission, so out it goes. Time is precious. Get back to work.

That’s really the point of the film: segregation doesn’t fit the national mission. It’s an archaic, emotional reaction to a shallow need to feel superior to those around us based on superficial differences. The decision to do away with it is one we never really made.

On that note, we could do worse than to encourage women and girls to get involved in determining our national mission.

So, be the smartest one in the room.

Be essential to the mission.

Demonstrate your ability, skill, and competence to the world.

And if the existing mission is detrimental to the country, then let’s create a new mission that isn’t.

In the meantime, make noise. Make them notice you. Make it clear to those who don’t value you that you must be valued. More importantly, show them why. Show them what you have done. Demonstrate your vision to anyone who will listen. Do it now.

Happy MLK Day. Go see this movie. Now.

 

Filed Under: Articles, Film, Science, Still True Today, Tech Stuff, Uncategorized

Digital Recovery Plans, Libraries, and Us

May 6, 2015 by Jon Frater

I’ll just say that if you couldn’t make it to NYTSL’s Spring Program last night, you missed an excellent discussion.

The program was “Disaster Recovery and the Digital Library,” and as such it brought a bit of real-world application to the often abstract world of disaster planning. NYTSL’s guest speakers were Frank J. Monaco, a retired Army Colonel and the recently retired CIO of Pace University; and Neil H. Rambo, the Director of NYU Health Science Libraries and Knowledge Informatics.

Both men spoke about events that put their training and planning to the test. In Frank Monaco’s case, it was managing the school’s recovery from the 9/11 attacks.

[Image: the Verizon facility at 140 West St., which served most of downtown Manhattan, damaged by 7 WTC’s collapse.]

Monaco has already written extensively on what he did that day, but briefly: after he transitioned from the military to become CIO of Pace U., the first part of his plan was to move the institutional data centers as far away from downtown as possible, meaning Briarcliff Manor, the site of Pace’s Westchester campus. This turned out to be a fortuitous decision. When the Internet collapsed (literally, as data transfers relied on the Verizon facility at 140 West Street, which was damaged by the 7 WTC building collapse), Pace’s CTO had to physically carry the school’s mission-critical external servers to a disaster-recovery site in Hawthorne, NY. After 24 hours to allow the new IP addresses to propagate, web pages and e-mail returned to functionality.
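
That 24-hour wait, incidentally, is a name-resolution problem: the old address records have to expire from DNS caches around the world before traffic finds the relocated servers. Here’s a minimal sketch, in Python, of the kind of check an administrator might leave running in the meantime; the hostname and address are hypothetical placeholders, not Pace’s actual configuration.

import socket
import time

# Poll DNS until the relocated host resolves to its new address.
# Both values below are made-up placeholders for illustration.
HOSTNAME = "www.example.edu"
NEW_IP = "203.0.113.10"  # TEST-NET-3 documentation address

def wait_for_propagation(hostname, expected_ip, interval=600, max_checks=144):
    """Check roughly every ten minutes for up to 24 hours."""
    for attempt in range(max_checks):
        try:
            resolved = socket.gethostbyname(hostname)
        except socket.gaierror:
            resolved = None  # not resolving at all yet
        if resolved == expected_ip:
            print(f"{hostname} -> {expected_ip}; propagation complete")
            return True
        print(f"check {attempt + 1}: {hostname} -> {resolved}; still waiting")
        time.sleep(interval)
    return False

if __name__ == "__main__":
    wait_for_propagation(HOSTNAME, NEW_IP)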

Importantly, Monaco noted that their disaster recovery plan was still incomplete at the time of the attack. He also pointed out that restoring service was a very small part of the tremendous effort exerted by Pace’s president, executive staff, administrative staff, faculty, and students.

Neil Rambo told a hair-raising tale of the events of 2012, when Hurricane Sandy heaped a few million cubic feet of water on New York City. Long story short: a 14-foot storm surge sent a wall of water powerful enough to blow steel doors off their hinges and push part of the East River into the lowermost levels of NYU’s Langone Medical Center on First Avenue. The result was a ruined library, destroyed archives, and a non-functional hospital.

Part of the problem was that NYU had weathered Hurricane Irene in 2011 with minimal damage, and had built its Sandy plan around a similar outcome. Once the response shifted to cleanup and restoration of the medical center, library staff were able to work out of facilities in a building across the street, reducing the loss of service. It has taken until now to determine that the library will be rebuilt into a superior environment, one that will devote nearly all of its space to electronic resources; it is due to open later this year.

The lessons here aren’t really that surprising: make plans while things are running well, because there won’t be a chance once things break. A great plan, properly executed, is always better than an okay plan properly executed, and light-years ahead of no plan at all. Optimally, the highest levels of administration need to be on board from the first stages of planning. Monaco’s advice on achieving this: “Scare the hell out of them.” Rambo’s advice was a bit more circumspect: “Imagine what would happen if your library was just gone, and work from that.”

And so I did. I’ll tell you what I figured out in the next post.
 


Filed Under: Conferences, Events, Library Resources, Still True Today, Tech Stuff Tagged With: disaster recovery, libraries, NYTSL, NYU, Pace

Dear FCC . . .

September 10, 2014 by Jon Frater

 

Yes, it’s a bit of slacktivism, but my concern about Net Neutrality is real enough. The Electronic Frontier Foundation has made sending a comment to the Federal Communications Commission as easy as possible.

Librarians should pay attention to this issue (and they are). We rely more than ever on internet resources for our livelihoods. As it is, we see regular downtime and slow connections on our public PCs. Being told to pay more for that level of intermittent service is just obnoxious.

But don’t listen to me. Lynne Bradley of the ALA says it better than I can:

Net neutrality is really important for libraries because we are, first of all, in the information business. Our business now is not just increasingly, but dramatically, online, using digital information and providing services in this digital environment. That means that we need to have solid and ubiquitous Internet services.

We’re interested in network neutrality for consumers at the home end, but also because it’s key to serving our public. And that means the public libraries, the academic libraries from two-year community colleges to advanced research institutions, as well as school librarians in the K-12 community.

Network neutrality issues must be resolved, and we hope to preserve as much of an open Internet policy as we possibly can. The public cannot risk losing access to important services provided by our libraries, our schools and other public institutions.

The point is that only by creating a flood of public commentary on this issue will the FCC even notice us. That’s fair and proper, considering that what we call the Internet was developed with public money for an essentially public use. You don’t have to agree with me (or anyone), but please take five minutes and send the regulators the message that public resources should stay public.


Filed Under: Angry Librarian, Tech Stuff, Web/Tech

10 Things About Electronic Resources That Librarians Need to Know

November 1, 2006 by Jon Frater

I haven’t been around much the past few days. There are a few reasons for that. We’re starting to bind journals again for the first time in months.  I’ve taken over a number of knowledge management duties for the library that are forcing all of the librarians here to rethink their work flow models.  I’m putting together a proposal for the 2006 METRO Digitization Grant for delivery in about three weeks.  We’re integrating PubMed into our Article Linker subscription.  And I’m working with my boss on getting our journal subscriptions evaluated and renewed, so there is a lot going on right now.

That said, while I wondered about how to deal with the knowledge management responsibilities, I got to thinking about what I learned from my 20+ years working with computers and how that knowledge helps me plan work both tactically (day to day) and strategically (quarter to quarter, year to year). I’ve always liked machines, and as I got older and learned to think more abstractly I found that I liked thinking about systems in general. (I was one of those truly nerdy kids–now a nerdy adult–who would wonder who first created the laser death ray and why everybody in one sci-fi universe or another seemed to have one. Did the inventor not patent the thing? Couldn’t he have licensed it to the army? To the Time Corp? What about antimatter bombs? Heck, while we’re at it, who invented the time machine? Did people pay royalties to H.G. Wells when they built their own? If not, then why not? And how big an economy does a planet really need just to build a single Super Star Destroyer?)

The point is that in my daily interactions with the librarians here, it became clear that I know more about these things than most of them do. And that’s fine–reference librarians don’t need to know every detail of the information systems they use, any more than a concert violinist needs to know how to build a violin. But that lack of knowledge works both ways–there are some things our network just can’t do, or at least can’t do without enormous additional resources (time, money, and staff, but mostly time . . . and money). Even our library’s director–for whom I have enormous respect as both an administrator and a librarian–sometimes thinks of the system as a magic box that works like any ship’s computer on Star Trek.

Well, the world . . . she don’t work that way. What follows are some extremely basic and blunt thoughts on the things that the planners of Libraryland might wish to take into account, if they feel so inclined. Think of it as a friendly and well-meant rant.


Filed Under: Tech Stuff

FactCheck.org, MeSH and TypePad Metadata

October 24, 2006 by Jon Frater

First, a big WooHOO! to FactCheck.org for receiving domestic and international awards. If you haven’t subscribed to their e-mail news listserv, you’re missing a lot. I love this site. If you’re looking for truly professional (i.e., fair and thorough) analysis of political ads and such, this is the place to go (or at least the place to start). Click on the previous links and experience the love first hand.

The 2007 MeSH is available now, so I went and bookmarked it. In truth, it has probably been available for some time and I just noticed, which is kind of sad since I use this page nearly every day for catalog work. At any rate, there it is.

And we have broken the 7,000-visitor barrier (we’re at 7,056 as I write this), for which I must thank a billion or so librarians sporting body art of various types, styles, and tastes, offering tidbits of information about themselves to satisfy my rampant curiosity. Most of the traffic this site gets comes through search engines (lots of Google, rather less Yahoo, a mix of others) bouncing hits off past metadata entries, and in an average week it’ll see about sixty or seventy web browsers pass through on their way to elsewhere. I’d love to find out how many are repeat visitors–I’d love even more to learn how many folks have linked to the RSS feed here. I’ve gone through the TypePad help area and made a few calls to the help desk, but I can’t find any support for that sort of statistic gathering. Am I looking for the wrong thing in the wrong place? Any and all advice would be appreciated.
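
If TypePad ever exposes raw access logs (an assumption on my part; it may not), counting feed fetches and repeat visitors is only a few lines of scripting away. A sketch in Python, with the log file name and feed path as made-up placeholders:

from collections import Counter

# Assumes Apache-style Common Log Format lines, where the first field
# is the client IP and the seventh is the requested path. Both the
# log file name and feed path below are hypothetical.
LOG_PATH = "access.log"
FEED_PATH = "/index.rdf"

feed_hits = Counter()
with open(LOG_PATH) as log:
    for line in log:
        parts = line.split()
        if len(parts) > 6 and parts[6] == FEED_PATH:
            feed_hits[parts[0]] += 1  # tally fetches per client IP

print(f"{len(feed_hits)} distinct clients fetched the feed")
repeats = sum(1 for n in feed_hits.values() if n > 1)
print(f"{repeats} of them came back more than once")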

Filed Under: Tech Stuff

Osmosing Data and Leaky Cell Phones

September 5, 2006 by Jon Frater

In 1990 I found a wacky (and vaguely depressing) manga anthology by Joji Manabe (whose work I love) called "Dora." One of the stories in it (actually the first few pages of one of the stories in it) contained the germ of an idea I eventually turned into a short story called "Norma" (no relation to Dora). A couple of years later I decided that Norma had a lot more to do, and so I spent the next five years turning it into a massive first novel with the pretentious title "The Electric Gods." To this day I haven’t gotten it published, and considering the quality of the writing and the manuscript’s stupendous need for editing, that’s probably just as well.

Anyway, one of the things that Norma learned in her adventures in that (vaguely depressing) universe was that information has a way of osmosing from one place to another regardless of the efforts people and their machines put into restricting it. That’s probably a function of how we utilize the stuff–we tend to organize things into disjointed bits and pieces that we call "trivia," which is a distinctly twentieth-century creation. In the nineteenth and earlier centuries, data were organized in rather more coherent forms. Little was truly disjointed, and details coalesced into a particular process. With the advent of assembly-line manufacturing, you didn’t need to know the details of the process, just your little portion of it. That little portion, from the point of view of an earlier age, might have been essentially meaningless–trivial–but to the line worker, it was everything.

Eighty years later, we are walking data banks of trivia.

So it’s no surprise to see an article like this one: Don’t Keep Secrets on Cell Phones, from USA Today. People put what might as well be classified secrets (even if only to themselves) on cell phones, and then tend to forget that they’re embedded on bits of silicon that can be mined by people who know what they’re doing. Random things: birthdays, addresses, phone numbers for cell, work, and home, as well as those of the spouse, parents, kids, coworkers, etc. Social Security numbers. God alone knows how many e-mail addresses and whose. Work and home addresses, meeting schedules, date books. Bank accounts. Credit card numbers. Every one of these has potential value to someone who knows data systems and isn’t too scrupulous about selling that data to someone else.

I have read one (slightly off-the-wall) economic theory that says money has a natural tendency to pool where it will do the least good for the fewest people. There may be a similar theory of information: information tends to pool where it will do the greatest harm to the largest number of people (Total Information Awareness, anyone?). Or maybe this is just the result of a crapload of trivialities finding critical mass.

Anyway, read the article, and fry your cell phone before giving it away.

Filed Under: Tech Stuff

Two-Tiered Internet

March 2, 2006 by Jon Frater

I like the Center for American Progress. I really do, for the sheer level of research they bring to any given bulletin they send out. I am in awe. Having said that, I’m not sure how I feel about their latest posts on what they call the Two-Tiered Internet. The NY Times has a better-written account of what it means and why it matters.

Filed Under: Tech Stuff

“Meta-Utopia”? Who Said That?

October 20, 2005 by Jon Frater

Google Alert just dumped a link in my lap that I’m a bit conflicted about: it’s called "Metacrap," and it’s an angry and obnoxious attempt by Cory Doctorow to make what should be an excellent point: namely, that not all meta-data is reliable, and the degree to which it is unreliable necessarily degrades its utility for everyone, including (especially?) libraries.

I’ll just say now that I have no idea who Doctorow is or why he’s so unhappy with the idea of meta-data–he seems really annoyed that the stuff is routinely misused by everybody from porn site designers to slick web marketers to novices who don’t know a thing about HTML. But he concludes by saying that meta-data are actually quite useful:

"Certain kinds of implicit meta-data is awfully useful, in
fact. Google exploits meta-data about the structure of the World Wide
Web: by examining the number of links pointing at a page (and the
number of links pointing at each linker), Google can derive statistics
about the number of Web-authors who believe that that page is important
enough to link to, and hence make extremely reliable guesses about how
reputable the information on that page is.

This sort of observational meta-data is far more reliable
than the stuff that human beings create for the purposes of having
their documents found. It cuts through the marketing bullshit, the
self-delusion, and the vocabulary collisions.

Taken more broadly, this kind of meta-data can be thought of
as a pedigree: who thinks that this document is valuable? How closely
correlated have this person’s value judgments been with mine in times
gone by? This kind of implicit endorsement of information is a far
better candidate for an information-retrieval panacea than all the
world’s schema combined."

To him, I say only: Dude, calm down.

I think he’s making a few errors of his own here: a popular web site may be a better source of disinformation than fact, for example, no matter how many Google links point to it or how many hits have been logged over time–and what about a completely factual site that nobody chooses to give credence to? (Good info, no coverage. He also ignores Roy Tennant’s analysis of Google’s limitations.) He’s right that as more and more pages spring up from more and more sources that nobody has first-hand knowledge of, the general quality of the information disseminated goes down, but that’s just common sense, or it should be. People have all manner of biases, and those biases invariably find their way into the work they create. No argument there.
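
For what it’s worth, the link-counting scheme Doctorow praises is recognizably the idea behind PageRank, and a toy model shows exactly the failure mode I just described. The Python sketch below runs a bare-bones, simplified PageRank over a made-up five-page link graph; it illustrates the recursive "pedigree" notion, not Google’s actual algorithm.

# Pages are ranked by who links to them, weighted by the rank of
# each linker. The graph is entirely made up for illustration.
links = {
    "popular-but-wrong": ["hub"],
    "factual-but-obscure": ["hub"],
    "hub": ["popular-but-wrong"],
    "fan1": ["popular-but-wrong"],
    "fan2": ["popular-but-wrong"],
}
pages = set(links) | {p for targets in links.values() for p in targets}
rank = {p: 1.0 / len(pages) for p in pages}

DAMPING = 0.85
for _ in range(30):  # iterate until the ranks roughly settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, targets in links.items():
        share = DAMPING * rank[page] / len(targets)
        for target in targets:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# "popular-but-wrong" wins on links alone; nothing in the math asks
# whether any of its content is true.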

Second, and perhaps more importantly: has anyone actually written anything about how the increasing use of meta-data will somehow solve all information seekers’ problems? Has anyone actually put that idea forth? Or is "meta-utopia" something Doctorow just came up with because it sounded cool? (I suspect the latter, but what do I know?)

Yes, meta-data are misused, sometimes badly abused, to the detriment of many. Of course there are problems. Show me anything created by the human race without the potential for misuse or problems. If you can’t–and we both know you can’t–stop whining and help those of us who believe in meta-data’s value as an information-locating aid to fix the problems you’ve found. Lead, follow, or get out of the way.

This concludes the sermon. As I said, his general point has value; he’s just burying it beneath a few tons of cranky hyperbole. I’ve included the link above so you can go read what he says and judge for yourself.

Filed Under: Tech Stuff
