Does This XBRL Make Me Look Fat?


BEST PRACTICES SERIES

I date myself by revealing that I remember when a high-end information service offered not only bibliographic citations but also abstracts and controlled-vocabulary subject terms. Amazing! Once the full text of articles became available, the consensus within the information industry was that the focus should be on how best to support free-text searching. All those controlled fields? So last year.

We are coming full circle, armed this time with data analysis tools we hadn’t dreamed of a decade ago. One of the more striking examples of adding value to the full text of articles is an initiative called the “Article of the Future,” announced by Cell Press and Elsevier in mid-2009. The project explores ways to take the journal article beyond the constraints of print, using audio, video, graphical abstracts, animations, and data mining. Articles are easy to navigate, with hyperlinks that let readers zoom in on the portions of the content that interest them most. (See http://beta.cell.com for examples of these features.)

The U.S. Securities and Exchange Commission (SEC) is also driving the application of structure to content that was formerly viewed as unstructured. In early 2009, the SEC announced that it would begin requiring companies to provide financial statements in eXtensible Business Reporting Language (XBRL), with all public companies required to file in XBRL by 2011. The SEC’s XBRL site (http://xbrl.sec.gov) includes several prototype applications for analyzing the submitted data. As with the “Article of the Future” project, we are seeing an organization look at the content it “publishes” with an eye to making the underlying data more intelligent and the documents themselves more structured.
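To make the idea concrete, here is a minimal sketch (in Python) of what XBRL tagging buys you: every figure in a filing is labeled with a standard concept name, a reporting context, and a unit, so a program can pull exact numbers out of a financial statement instead of screen-scraping them. The instance fragment and the dollar amounts below are simplified and hypothetical, not drawn from an actual SEC filing.

    import xml.etree.ElementTree as ET

    # A simplified, hypothetical XBRL instance fragment; real filings
    # follow the same pattern at much larger scale.
    INSTANCE = """<xbrl xmlns="http://www.xbrl.org/2003/instance"
          xmlns:us-gaap="http://fasb.org/us-gaap/2009-01-31">
      <us-gaap:Revenues contextRef="FY2009" unitRef="USD">1234000000</us-gaap:Revenues>
      <us-gaap:NetIncomeLoss contextRef="FY2009" unitRef="USD">98000000</us-gaap:NetIncomeLoss>
    </xbrl>"""

    US_GAAP = "{http://fasb.org/us-gaap/2009-01-31}"  # namespace in Clark notation

    root = ET.fromstring(INSTANCE)
    for fact in root:
        if fact.tag.startswith(US_GAAP):
            concept = fact.tag[len(US_GAAP):]        # e.g., "Revenues"
            value = int(fact.text)
            print(f"{concept} ({fact.get('contextRef')}): ${value:,}")

Because each number arrives already labeled, comparing one company’s revenues with another’s becomes a lookup rather than a parsing project.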

Along the same lines, I find it even more intriguing to see the impact of Global Positioning System (GPS) technology on our interactions with, you know, the real world. Digital cameras and smartphones can geotag our photos. We can get the exact coordinates for our favorite restaurant (mine’s at 40° 6.1' north, 105° 10.0' west). Mash those two together, and you get what’s being called a reality browser.
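For the curious, the arithmetic behind those coordinates is straightforward. Geotagged photos and mapping databases generally store latitude and longitude as signed decimal degrees, so the degrees-and-minutes form quoted above has to be converted, roughly like this (a quick Python sketch):

    def to_decimal_degrees(degrees, minutes, hemisphere):
        """Convert degrees and decimal minutes to signed decimal degrees."""
        dd = degrees + minutes / 60.0
        # Southern and western hemispheres are negative by convention
        return -dd if hemisphere in ("S", "W") else dd

    # The restaurant mentioned above: 40° 6.1' north, 105° 10.0' west
    lat = to_decimal_degrees(40, 6.1, "N")     # 40.1017
    lon = to_decimal_degrees(105, 10.0, "W")   # -105.1667
    print(f"{lat:.4f}, {lon:.4f}")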

If you have a smartphone, you have a device that combines a GPS, a compass, and a camera: something that knows where it is and which direction it’s facing. Combine that information with restaurant reviews, which always include a street address, and you have a tool that lets you, well, “browse” what’s in front of you. Download an app such as Layar (for either an iPhone or an Android phone) or Google Goggles (not surprisingly, available only on Android phones). Point your phone’s camera at a landmark, storefront business, restaurant, or other public object, and the app looks up those geographic coordinates in a number of web-based databases and overlays the retrieved information on your screen. Point the phone at a nearby restaurant and read Yelp.com reviews of the restaurant. Point it down the street and find the closest subway station. Point it at the Golden Gate Bridge, and you can read the Wikipedia article on the bridge.
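Under the hood, the trick is geometry rather than magic: given the GPS fix and the compass heading, the app computes the bearing to each nearby point of interest and keeps only those that fall within the camera’s field of view. Here is a rough sketch of that calculation; the phone’s position, the points of interest, and the 60-degree field of view are invented for illustration, not taken from any actual app.

    import math

    def bearing(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360

    def in_view(heading, target_bearing, fov=60.0):
        """True if a target's bearing falls inside the camera's field of view."""
        diff = (target_bearing - heading + 180) % 360 - 180
        return abs(diff) <= fov / 2

    phone_lat, phone_lon = 40.1017, -105.1667   # where the GPS says we are
    heading = 90.0                              # compass says we face due east

    # Hypothetical nearby points of interest: (name, latitude, longitude)
    pois = [("Subway station", 40.1020, -105.1600),
            ("Pizza place", 40.0950, -105.1700)]

    for name, lat, lon in pois:
        b = bearing(phone_lat, phone_lon, lat, lon)
        if in_view(heading, b):
            print(f"{name} at bearing {b:.0f} degrees: overlay its info here")

The app’s remaining work, fetching reviews or Wikipedia entries for whatever survives the filter, is ordinary database lookup.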

In fact, Google Goggles functions as a “visual search engine.” Point your Android phone at a work of art, and Google Goggles identifies the work, executes a search, and displays the results on the screen. Point it at the cover of a book you are thinking of buying, and it displays Amazon.com reviews to help you decide whether to buy a copy. No typing required.

Of course, the best application this Baby Boomer sees for any reality browser is for it to recognize that person walking up to me, smiling like she knows me, and remind me who she is and what we last talked about. I’ll be the first in line to download that app. In the meantime, I have been thinking about other ways that digital content can be encoded so that we can get far more use out of it. Imagine having all the statistical content in articles and reports formatted for direct import into a spreadsheet, or three-dimensional images of patented technologies that could be virtually handled and viewed from various angles.

No, I don’t expect to be searching LexisNexis any time soon by waving my hands to sift through data as Tom Cruise did in Minority Report. But I’m watching for new offerings from content providers that let us do more with the information we find and extract more meaning and insight from it.