What Makes Good Journalism?

Journalists and others concerned about the state of the news industry in North America and Europe keep arguing that its economic condition is producing poorer journalism. But when you ask them “what makes good journalism?” they find it nearly impossible to articulate the concept.

Those trying to articulate the elements of good journalism tend to fall back on comforting, immeasurable platitudes and to describe it through attributes based on professional practices: pursuit of truth, fairness, completeness, accuracy, verification, and coherence. These are not a definition of quality, but a list of contributors to or elements of quality practices. No attribute alone is sufficient for good journalism, and the degree to which each contributes is unclear.

In practice, most of us settle for identifying journalistic quality by its absence or by comparison to poor or average journalism. Thus we know it when we don’t see it, or we describe it by giving examples of excellent journalism.

Other industries are far better at establishing their definitions of quality. Ask what quality means in washing machines, and the answer is that quality machines clean clothing effectively, operate quietly, are safe, and are durable and reliable. All of these can be measured by specific indicators: dirt and stain removal, water and energy use, decibels of noise generated, user injury rates, and breakdown rates. A quality manufacturer strives for better performance on those measures, provides effective support and service, handles feedback and complaints well, and strives for high customer satisfaction.

The reason quality journalism is difficult to describe is that it involves a body of practices and the mental activity that goes into them. Good journalism results from information gathering and processing activities, PLUS the knowledge and mental processes applied to them.

It is thus labor intensive; it involves collecting, analysing, structuring and presenting information. The best journalism comes from knowledgeable and critical individuals determining what information is significant, backgrounding and contextualizing it, and thinking about and explaining its meaning. It is a creative and cognitive activity. It is difficult to articulate what makes good creative and cognitive activity and nearly impossible to measure these mental processes. Thus, we are forced to use surrogate measures of quality journalism.

Good journalism involves engaging language and fluid prose, but it is not merely a well written and good story; it is not necessarily evident in stories that make the most popular list of stories or are most shared on social media. Good journalism involves stories that have import, impact, and elements of exclusivity and uniqueness; it wrestles with issues of the day, elucidates social conditions, facilitates society in finding solutions to challenges, and is independent of all forms of power. Good journalism is rational and critical; it is infused with scepticism, but not cynicism.

Although it is difficult to effectively measure such attributes of quality journalism, it should be much easier to define and identify quality journalism providers. There are some surrogate and attribute measures available to rate them, such as the percentage of total costs devoted to editorial costs, the amount of serious news content, the percentage of content originated rather than acquired, the amount and handling of errors, levels of reader satisfaction, and brand reputation.

In the end, however, the question of what makes good journalism has to be answered by answering the queries: Good or valuable to WHOM? Good or valuable for WHAT? Only then can one begin to establish direct measures that determine the effectiveness of journalism in achieving those objectives.

Google, Newspaper Archives, and the Business of Cultural Heritage

Google announced this month that it is ending its ambitious project to digitally archive newspapers. The project to scan the archives of the nation’s newspapers and make them available online as a searchable historical record was announced in 2008 with the level of hubris only found in online enterprises.

“Our objective is to bring all the world’s historical newspaper information online,” said Adam Smith, director of product management at Google, announcing the project. Those lofty aims were echoed by Punit Soni, manager of the newspaper initiative: “As we work with more and more publishers, we’ll move closer towards our goal of making those billions of pages of newsprint from around the world searchable, discoverable, and accessible online…. Over time, as we scan more articles and our index grows, we’ll also start blending these archives into our main search results so that when you search Google.com, you’ll be searching the full text of these newspapers as well.”

After scanning about 60 million pages and beginning to make them available as full-page images (because the costs of disaggregating and indexing were too high and copyright clearances were difficult to obtain for older material), the company announced that it will quit scanning pages but continue offering the existing pages on its Google News Archive site. It said it would not invest any new effort to improve indexing or add tools to better search and manage the archive.

The project may have been well-intentioned, but it was not well thought out. It was a free service designed to use search traffic at the site to raise revenue through advertising Google would place on the site. The scale of the project was enormous, requiring finding, scanning, and indexing thousands of daily and weekly newspapers, many no longer in existence. It required a long-term commitment of funds, personnel, and server capacity to catalogue and scan the material and to provide and maintain search functions. The project ultimately incorporated only a fraction of the papers it had hoped to scan, did so spottily in many cases, and its usability was poor because it never mastered the problems of handling so much content. Worse yet, Google discovered that history was not a money-making business.

The exit announcement is not a surprise, and it is another sign that players in the virtual world are ceasing to delude themselves that they are replacing the entire world and that the laws of economics and finance do not apply to them.

As laudable as the preservation of newspaper archives might be, expecting it to be completed and maintained by a commercial firm defied sense and historical experience. For centuries, the most important historical records, books, and art have been maintained in governmentally and charitably funded collections because commercial enterprises were either unwilling to bear the costs or unwilling to let the large-scale efforts required to preserve, catalogue, index, and make available cultural heritage materials distract them from their business activities.

Why would anyone expect Google to act otherwise?

As Google increasingly acts as a mature business, it will increasingly shed activities that were launched as goodwill gestures, because the costs of operating them reduce the company’s financial performance and diminish the value of its stock compared to other tech firms. Over time it will be harder for the firm to maintain the stance that it is not self-interested and is motivated only by opportunities to improve the lives of the public by providing access to all the world’s information.

The tentacles of its operations that have reached into so many fields will increasingly be pulled back if they do not yield financial results. And fears that Google will rule the world will diminish. Google, Microsoft, Amazon, and the other big players of the digital world all have limits, just as did the handful of firms that once controlled steel, oil, and shipping through cartels. At some point even mammoth, wealthy companies lack the resources and capabilities to keep expanding endlessly; their performance declines, leading shareholders to rein them in and competitors to find opportunities.

International Protection for Broadcasts Gaining New Momentum

The proposed international treaty on the protection of broadcasters is inching forward after nearly 10 years of consideration, and member states of the World Intellectual Property Organization and other stakeholders are moving toward consensus on the central elements of what the treaty is to do and what the object of the protection is.

Much of the rhetoric of stakeholders—particularly pay TV channels and sports rights organisations—has led many to believe it is about protecting their business models and revenue. They have done the proposed treaty a disservice.

It is about protecting the value-creating activities of broadcasters in content selection, packaging, and distribution—something that is not protected by copyright but can be protected with a neighboring right. What the treaty is intended to do is protect the broadcast—the signal and derivatives of the signal—which embodies the broadcaster’s value-creating activities and is the object of the proposed protection.

The result may assist revenue generation and strengthen the business model of rights holders, licensers, and broadcasters, but it does not directly protect those.

What it will do is provide a streamlined mechanism for broadcasters to enforce their rights internationally when other broadcasters and cablecasters engage in unauthorised reception, decryption, retransmission, or rebroadcast of their signals. Such practices regularly occur in some countries and sometimes involve the second broadcaster substituting its own advertising and charging fees for access to the broadcast.

The treaty essentially gives broadcasters the right to license other uses of their broadcasts and halt uses they have not licensed, but does not give them rights to the content in the broadcasts that they do not own.

The proposed treaty includes some protection of public interests by permitting national limitations and exceptions for clearly public purposes, such as education and services to visually or hearing impaired persons.

Some scepticism about the proposals exists in developing nations, because most of the benefits will accrue to broadcasters in high-income and upper-middle-income nations, and only limited benefits will accrue in other states.

The thorns on the rose bush, however, are that many of the nations where egregious reuses of broadcasts have occurred have never enforced copyright well, so one must be highly optimistic to believe that passage of the treaty will solve the problem.

Editing, the Richness of Content, and the Current Limits of Web and Social Media

Editors matter.

The March 28–April 4, 2011, edition of the struggling news magazine Newsweek—which I admittedly have not read in years—provides some of the finest articles I have read in many months, illustrates the limits of online and social media, and shows why editors matter.

There is great benefit in both edited and unedited media, and I don’t believe they must be seen as dichotomous choices for the future of media. But I believe those who argue they don’t need edited media doom themselves to narrowness and ignorance.

If I relied only on the links I receive daily from colleagues on Facebook, my news alerts for topics of interest, or digital listings of stories, I would miss the most important contribution of edited media—the service editors provide by reviewing and thinking about the world and putting journalists to work to provide a coordinated understanding of the available information. This week’s Newsweek epitomises that reality.

Although I often have my attention drawn to information and stories of interest from my social media, the pattern of stories and information sent to me would not have led me to Bill Emmott’s Newsweek story on the impact of disasters on politics, economics, and national psychology or Paul Theroux’s explanation of how Japan’s history has shaped its culture and how the generous global response to the earthquake and tsunami is forcing it to confront the fact that it is not alone and isolated in the face of geographical and physical constraints.

Had I relied on the multiple news websites I peruse weekly, the ways their stories are presented and the ways that I search for news on them would not have led me to Newsweek’s fascinating story of the nuclear disaster at an Idaho test station in 1961 that may have been the result of a murder-suicide, its account of why a London murder has led to a boycott of Coca-Cola, or its account of why political ignorance in America is higher than that in European countries.

My point here is not that we should all rush out to subscribe to Newsweek (my apologies to Sidney Harman, Barry Diller, and Tina Brown), but that the functions of editors matter. Having someone look at the world and see ways that it fits together, having editors coordinate and incentivise talented writers, and having editors create a collection of stories and information continues to produce value.

Those who believe that news, information, and understanding of the world can come through a disaggregated and uncoordinated flow of information and stories, much of which is not prepared by professional writers on a regular basis, miss the entire reason for the success of edited media over the past 300 years.

I do not wish to be construed as saying that online and social media do not make enormous contributions to our communications ability, but until they mature to the point they can support regular oversight and thought about the world and compensate professionals for whom investigating and reporting developments is their primary employment, digital media will not be able to replace the contributions of well edited print media.

After a decade and a half of digital media it is clear that we are able to move news and information to those platforms, but we are nowhere near the point where we can shut off the presses without a great loss of oversight and understanding of the world around us.

New Community Radio Opportunities to Increase Provision of Local Services and Information

Community radio in the U.S. received a large boost in January when President Obama signed a bill that will permit establishment of an estimated 800 to 1,200 new local community radio stations.

About 800 of the non-commercial community stations are already operating and providing music, health, education, and local information, news, and sports. The stations are run by community organizations, churches, and other civic groups, typically staffed by volunteers, and dependent upon donations from organizations and listeners.

Community radio operations tend to provide information about community and civic organizations that are overlooked by commercial broadcasting, to focus on social issues in communities, and to provide services to minority, ethnic, and immigrant groups. Programming on community radio is distinctively different from that of commercial radio and tends to be more local than, and to provide alternative content to, that of public radio stations.

The stations operate on low power, making them useful for serving small towns, counties, metropolitan suburbs, and neighborhoods.

The expansion of spectrum devoted to community radio had been sought for several decades and the Local Community Radio Act signed by the president directs the Federal Communications Commission to make provision for the additional services. Some disputes with commercial channels over spectrum are expected in large metropolitan areas during that process.

FCC Moves to Halt Internet Service Provider Content Discrimination and Preferences

The Federal Communications Commission has moved to keep Internet service providers from limiting or unreasonably discriminating against content provided by competing services.

The regulations are designed to keep telephone and cable companies that provide phone services from using their Internet services to limit use of Skype and other online telephone services. It is also intended to halt them from making content provided by audio and video service providers they do not own less desirable by limiting downloads from firms such as Netflix or Hulu or providing faster service only for their own content.

The rules are designed to maintain a level competitive playing field on the Internet and to keep companies that dominate Internet access from using oligopolistic control of service points to harm content competitors.

The regulations require that services allow their customers equal access to all online content and services, but give providers some flexibility to manage network congestion and spam as long as their rules are clear and not anti-competitive.

The rules apply to fixed-line services but do not apply equally to wireless telephony, which is becoming the primary means of Internet access through smartphones, tablets, and e-readers. Mobile phone providers are permitted to provide preferential access to their own services or those of selected partners, but the rules forbid mobile providers from blocking access to competing sites and services. Mobile services are given more leeway to manage their networks because their capacity is more limited than that of fixed-line networks.

The regulations are an important step in ensuring that major service providers such as Comcast and Verizon cannot use their dominance in service provision to harm other companies, and the FCC should be applauded for its efforts. Such companies have in the past shown their willingness to take advantage of their monopoly power and are not widely noted for their consumer friendliness.

Major service providers and Republicans are vowing to fight the move, arguing that the FCC does not have the authority to issue such regulations. If the courts side with them on the issue, Congress should explicitly give the FCC the authority or empower the Federal Trade Commission to ensure competitiveness online.

Content Farms and the Exploitation of Information

A growing number of firms are aggressively pursuing the market for information by providing material that answers online searches and employing strategies so their material appears high in search results.

These enterprises provide high-quantity, low-quality material on topics designed to produce many search hits, driven by the desire to make money from the advertising that high-traffic sites attract. Some are proving quite successful.

Demand Media, for example, uses about 13,000 freelance writers to produce about 4,000 articles a day, which gain it about 95 million unique visitors and more than 620 million page views monthly. Its eHow.com site alone gets about 50 million users. Ask.com, Yahoo, and AOL are also entering the market.

When you run a search and are taken to answer.com, dictionary.com, wikianswers.com, or hundreds of other sites providing such information to the public, you encounter this mass-produced content. The business strategy is working, and many of the sites are among the top 25 sites in the U.S.

These producers and a whole range of similar organizations are producing material in content farms that rely on freelancers who are paid as little as $1 an article, or who receive no payment other than a share based on the number of page views their specific work receives. It is a throwback to the penny-a-word days of journalism in the 19th century. The firms are increasingly seeking video producers, photographers, and graphic artists to provide similar material at similar levels of compensation.

Even established news organizations and other enterprises are starting to use syndicated material produced by such content farms. Organizations such as Hearst publications and the National Football League are relying on them for some of the content that appears on their sites, for example.

The implications of these developments for the quality of Internet information and the prospects for professional writers are clear and hardly encouraging.