Searching for Dummies is a column from the New York Times by Edward Tenner. He writes that many students lack the skills to structure their searches so they can find useful information quickly on search engines like Google, Yahoo, or even MSN.
I heard Dr. Tenner speak a few weeks back at the Google Library Symposium. He was a good speaker, and I think this is a thought-provoking article.
Please note this is at the New York Times. At the moment the article is freely available, but it may get moved behind a paywall in the future. Registration is currently free, so if that happens you should still be able to get access.
From the site:
While some blame reality television, MP3 players, cellphones or the multitasking that juggles them all, the big change has been the Web. Beginning in the early 1990's, schools, libraries and governments embraced the Internet as the long promised portal to information access for all. And at the heart of their hopes for a cultural and educational breakthrough were superbly efficient search engines like Google and those of its rivals Yahoo and MSN. The new search engines not only find more, they are more likely to present usable information on the first screen.
Google modestly declares its mission "to organize the world's information and make it universally accessible and useful." But convenience may be part of the problem. In the Web's early days, the most serious search engine was AltaVista. To use it well, a searcher had to learn how to construct a search statement, like, say, "Engelbert Humperdinck and not Las Vegas" for the opera composer rather than the contemporary singer. It took practice to produce usable results. Now, thanks to brilliant programming, a simple query usually produces a first page that's at least adequate — "satisficing," as the economist Herbert Simon called it.
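The AltaVista-style statement Tenner quotes ("Engelbert Humperdinck and not Las Vegas") is at heart a boolean filter over documents. A minimal sketch of that kind of AND NOT filtering, with an invented document list for illustration:

```python
# A minimal sketch of boolean "X AND NOT Y" filtering, in the spirit of
# the old AltaVista query syntax. The document snippets are invented.

docs = [
    "Engelbert Humperdinck composed the opera Hansel and Gretel",
    "Engelbert Humperdinck performs live in Las Vegas this weekend",
    "Las Vegas show listings for the spring season",
]

def matches(doc, required, excluded):
    """Keep a document that contains the required phrase but not the excluded one."""
    text = doc.lower()
    return required.lower() in text and excluded.lower() not in text

results = [d for d in docs if matches(d, "Engelbert Humperdinck", "Las Vegas")]
print(results)  # only the opera-composer document survives
```

The point of Tenner's example is exactly this narrowing: without the `NOT` clause, the singer's Las Vegas pages would swamp the composer's.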
The efficiency of today's search engines arises from their ability to analyze links among Web sites. Google led in ranking sites by how often they are linked to other highly ranked sites. It did so using an elaborate variation of a concept familiar in natural science, citation analysis. Instead of looking at which papers are cited most often in the most influential journals, it measures how often Web pages are linked to highly ranked sites — ranked by links to themselves.
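The idea in the paragraph above (a page ranks highly when it is linked to by other highly ranked pages) is circular on its face, but it can be computed by iteration. A minimal sketch in the spirit of Google's PageRank, using a standard damping factor and a tiny invented four-page web graph:

```python
# A minimal sketch of link-based ranking in the spirit of PageRank:
# each page's score is repeatedly redistributed along its outgoing
# links, so pages linked to by highly ranked pages rise in rank.

DAMPING = 0.85    # conventional damping factor from the PageRank literature
ITERATIONS = 50   # enough for this tiny graph to converge

def pagerank(links):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(ITERATIONS):
        new_rank = {p: (1.0 - DAMPING) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / n
            else:                                # share rank among outlinks
                share = DAMPING * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented example graph: "c" is linked to by three pages, including
# the highly ranked "a", so it should come out on top.
web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))  # → ['c', 'a', 'b', 'd']
```

This mirrors the citation-analysis analogy Tenner draws: just as a paper cited by influential journals gains influence, a page linked to by highly ranked pages gains rank, and the iteration lets that definition feed back on itself until it stabilizes.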