Editorial Content, UGC, and Social Media: The Triangle of Content Success

Page 3 of 4


BEST PRACTICES SERIES

Synergizing Content

Some media sites try grouping and synergizing all types of content, such as activity streams, editorial, and user-generated content, in one place. On her blog, Sarah Lacy, author of the book Once You’re Lucky, Twice You’re Good: The Rebirth of Silicon Valley and Rise of Web 2.0, has noted that UGC is good for bringing in users and content yet remains challenging to monetize.

However, crowdsourcing can also be used to reduce costs and, in that way, improve the bottom line. Old-school media such as CNN and Encyclopaedia Britannica are using UGC to make content’s tail longer and thicker. CNN’s iReport is a user-generated site where anyone can submit a story without its being edited, fact-checked, or otherwise screened before it is posted. Only selected stories are then vetted for use in CNN news coverage. According to Ian Grant, managing director of Encyclopaedia Britannica, even Britannica will open its entries so that people can comment on the articles, update or check facts, and write additions. But, he says, the “entries will have to be fact-checked by our staff.” Although it is opening the encyclopedia to a more Wikipedia-like style, Britannica still chooses to keep a rather traditional process for adding content.

However, this shift in process goes both ways; some typical UGC sites are trying more editorial approaches. For example, blogs are converging toward more portal-like media. Some trends in blogging include multiple-authored or multitopic blogs. A similar trend can be observed in other user-generated sites: Video sharing sites use channels and playlists where users can take some kind of an editor’s role. The ever-increasing amount of fragmented content has triggered the creation of content aggregation sites such as Digg and StumbleUpon. In a video post on his blog, Jeff Jarvis, the author of What Would Google Do?, says he “sees a great value in the aggregation” and contends that it is really “a form of editing.”

If manual (editorial or user-based) aggregation isn’t appropriate, there are options such as Zemanta that automatically analyze the content of a page and then suggest additional links, tags, related pictures, and related articles that can be freely used. This not only improves SEO but also streamlines and shortens the publication process.
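The content analysis behind such suggestion tools can be illustrated in miniature. The sketch below simply ranks the most frequent non-stopword terms in an article as candidate tags; this is an illustrative assumption, as services like Zemanta rely on much larger corpora and entity recognition rather than raw term frequency.

```python
import re
from collections import Counter

# Minimal stopword list for the sketch; real systems use far larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "for", "on", "that", "with", "it", "as", "this", "by", "but"}

def suggest_tags(text, k=5):
    """Return up to k candidate tags, ranked by term frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(k)]

article = ("User-generated content helps media sites grow, but media "
           "companies still struggle to monetize user content at scale.")
print(suggest_tags(article, 3))
```

Run against the sample sentence, the three repeated topical terms ("user", "content", "media") surface as suggested tags.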

Improving Content Discoverability

When online, users have traditionally had two options for discovering content: Someone recommends it to them, or they search for it. While manual recommendations have limited reach, there is a great deal of activity aimed at improving search—for example, better search technology (hakia, True Knowledge) or harnessing the knowledge of crowds to improve search results (Google’s SearchWiki). Microsoft is even using groups and idiosyncrasies of search terms to improve the quality of the results. Yet the quintessential problem with search is that users have to feel the need to perform a search in the first place. They have to request (pull) information rather than having it pushed toward them. Search engines only respond to needs; they don’t stimulate them.

Websites are interested in creating a need rather than just responding to one that is expressed and acted upon. Therefore, all non-search-oriented sites must focus on proactive recommendations—pushing content toward the user. The key issue for content providers, service providers, and users alike is discovering the right content. Techniques such as collaborative filtering, link-structure analysis, and content analysis come into play. This is complicated further by how computationally intensive these techniques are: the more data and the more users you have, the more problems you (and they) face.
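To make the collaborative-filtering idea concrete, here is a toy item-based sketch ("users who liked X also liked Y"). The user names and article IDs are invented for illustration, and the scaling problem described above is visible even here: similarity must be computed between every unseen item and everything the user has already liked.

```python
# Toy data: which articles each user has liked. Invented for illustration.
likes = {
    "ana":  {"article-a", "article-b"},
    "ben":  {"article-a", "article-b", "article-c"},
    "carl": {"article-b", "article-c"},
}

def jaccard(a, b):
    """Similarity of two items, measured by overlap of the users who liked them."""
    return len(a & b) / len(a | b)

def recommend(user):
    """Rank items the user hasn't seen by similarity to items they liked."""
    seen = likes[user]
    # Invert the data: item -> set of users who liked it.
    fans = {}
    for u, items in likes.items():
        for item in items:
            fans.setdefault(item, set()).add(u)
    scores = {}
    for item, item_fans in fans.items():
        if item in seen:
            continue
        scores[item] = max(jaccard(item_fans, fans[s]) for s in seen)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # ['article-c']
```

Because ben liked both of ana's articles plus article-c, article-c is recommended to ana; with millions of users and items, this pairwise comparison is exactly the computational burden the paragraph above alludes to.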

Publishers can use traditional editors to expose relevant content of a good quality. Aggregators approach the question of how interesting content is in a different way. Digg uses “Diggers” to find content of the best quality. According to its founder Kevin Rose, who was interviewed by the Los Angeles Times, at first, some of the hard-core Diggers went through every story, but soon they realized that they “needed a better way to comb through all the stories and present users with relevant information.” They created recommendation algorithms that push appropriate content in front of the user. CNN’s iReport uses a “newsiest” factor to expose content. This factor is a calculation that combines freshness, popularity, activity, and ratings. fav.or.it additionally tracks the time each user spends on an individual article.
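iReport's actual "newsiest" formula is not public, but a score combining the four named ingredients can be sketched as follows. The weights, the logarithmic damping, and the 24-hour freshness half-life are all assumptions made for illustration.

```python
import math
import time

def newsiest(published_ts, views, comments, avg_rating, now=None,
             half_life_hours=24.0):
    """Hypothetical 'newsiest' score: freshness + popularity + activity + rating.

    Weights and decay are illustrative guesses, not iReport's real formula.
    """
    now = now if now is not None else time.time()
    age_hours = (now - published_ts) / 3600.0
    freshness = 0.5 ** (age_hours / half_life_hours)  # halves every 24 hours
    popularity = math.log1p(views)                    # dampen runaway view counts
    activity = math.log1p(comments)
    rating = avg_rating / 5.0                         # normalize 0-5 stars to 0-1
    return 0.4 * freshness + 0.3 * popularity + 0.2 * activity + 0.1 * rating

now = time.time()
fresh_story = newsiest(now - 3600, views=200, comments=10, avg_rating=4.0, now=now)
stale_story = newsiest(now - 72 * 3600, views=200, comments=10, avg_rating=4.0, now=now)
print(fresh_story > stale_story)  # fresher content ranks higher, all else equal
```

A time-on-article signal like fav.or.it's could be folded in as a fifth weighted term in the same way.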

An effective system needs to consider content relevance. Is the content in context with what the user is reading at that time? Platforms such as Zemanta’s can automatically add relevant content. On the other hand, publishers can leverage social context closely related to users’ roles. Every person has several roles in life—mom, boss, president of a book club, etc. In these roles, users have different needs and interests. Moli was an unsuccessful attempt at trying to solve the problem of mixing a person’s different roles. With it, users could create and manage several personal profiles and decide which of their profiles were shared and with whom. StumbleUpon takes another approach; it gives users information about how similar they are with the author of any given piece of content. Even websites without social connections can bring social context to sites through standards such as OpenSocial and other interfaces.
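StumbleUpon's reader-author similarity can be sketched with a simple profile comparison. The topic-weight profiles and the choice of cosine similarity are illustrative assumptions; StumbleUpon's actual signals are not public.

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic-interest profiles (dicts of weights)."""
    topics = set(p) | set(q)
    dot = sum(p.get(t, 0.0) * q.get(t, 0.0) for t in topics)
    norm = (math.sqrt(sum(v * v for v in p.values())) *
            math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Hypothetical interest profiles for a reader and a content author.
reader = {"tech": 0.9, "politics": 0.1, "cooking": 0.4}
author = {"tech": 0.8, "cooking": 0.6}
print(f"{cosine(reader, author):.0%} similar")
```

The same comparison could run against each of a user's separate role profiles (the "mom" profile versus the "boss" profile), which is essentially the problem Moli tried to address.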

Choosing socially and contextually relevant content of good quality is not enough, because it may still result in information overload—too much good content is still too much. Social connections can be used to filter the content that users’ peers, friends, and even idols are creating, consuming, commenting on, or evaluating. Google has a patent pending on technology that ranks the most influential people on social networking sites; it seems to be trying to apply the same approach to social networks as it used to dominate the online search business.
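The parallel to Google's search ranking suggests a PageRank-style computation over the social graph. The sketch below is a minimal power-iteration version; the follower graph and damping factor are invented for illustration, and the pending patent's actual method is not public.

```python
# Hypothetical follower graph: edges point from follower to followed.
follows = {
    "amy": ["bo", "cy"],
    "bo":  ["cy"],
    "cy":  ["amy"],
    "dee": ["cy"],
}

def influence(graph, damping=0.85, iters=50):
    """Rank people by PageRank-style influence: being followed by the
    influential makes you influential."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            share = rank[n] / len(outs)  # split this person's rank among followees
            for m in outs:
                nxt[m] += damping * share
        rank = nxt
    return sorted(rank, key=rank.get, reverse=True)

print(influence(follows))  # "cy" is followed most and ranks first
```

In a filtering system, content from high-ranked people would then be promoted in their followers' streams.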

As with search, however, default automatic algorithms can only go so far. On Facebook, users can simply click a link to receive more or less information about some type of content, about somebody, or from some application. StumbleUpon combines user opinions with machine learning of personal preferences.

StumbleUpon goes a step further based on the knowledge it collects. In an Amazon-like way, even first-time users, about whom no prior knowledge exists, quickly get personalized content: StumbleUpon adjusts the page on a first visit, even before the user clicks anything, by using location to choose people who might be nearby.
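This cold-start trick can be sketched very simply. The story data and the idea of tagging stories with a city are illustrative assumptions; a real site would geolocate the visitor by IP address before the first click.

```python
# Hypothetical story pool; "city" marks a local angle, None means no local tie.
stories = [
    {"title": "Transit strike", "city": "london"},
    {"title": "Ferry schedule", "city": "seattle"},
    {"title": "Global markets", "city": None},
]

def first_visit_feed(visitor_city):
    """Cold-start feed: local stories first, then location-neutral ones."""
    local = [s for s in stories if s["city"] == visitor_city]
    neutral = [s for s in stories if s["city"] is None]
    return [s["title"] for s in local + neutral]

print(first_visit_feed("seattle"))  # ['Ferry schedule', 'Global markets']
```

Once the user starts clicking, those explicit signals can replace the location guess, exactly as the Facebook and StumbleUpon feedback mechanisms described above do.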
