02 January 2006

And the Gold Medal in Conclusion Jumping Goes to…

…unpublished authors everywhere—encouraged, no doubt, by self-publishing boosters and outright con artists—who will conclude, on the basis of this retread of a Doris Lessing "experiment," that they might as well just go ahead and self-publish their novels because their judgment is better than that of agents and commercial publishers. I'm in a rather foul mood this morning, so I think I'll just content myself with tearing the conclusions apart instead of the boosters themselves. Today, anyway.

First, a couple of words about the form of the "experiment." I'm sorry if bringing some academic rigor into the area seems excessive, but nonetheless…

  1. Leaving aside every other criticism—and there are more than a few—the sample sizes were grossly inadequate. A sample size of "two" is not representative of a population of more than 30 Booker Prize winners, nor of great post-Sterling1 novels in English. Neither is a sample of 22 publishers (actually, individual editors) and agents representative of the field "editors" or the field "agents". Then, too, the particular editors and agents may very well not have been assigned to acquire that kind of material in the first place, as noted by the Grumpy Old Book Man; in other words, it wasn't even in their area of expertise to begin with.
  2. It's all well and good to say that a 5% (or 9%, being generous) recognition rate of a minor classic and a moderate classic—in manuscript form, no less—"isn't good enough." Compared, however, to whom? Statistically, one cannot infer2 a causal relationship from a single uncontrolled test run. Instead, The Times should have established control populations of at least three, and preferably all, of:

    • Book critics who have reviewed at least 20 works of post-Sterling fiction in print in the last decade
    • University professors whose research area includes post-Sterling fiction
    • Publishing executives who do not have substantial editorial experience
    • University graduates who have read at least 20 works of post-Sterling fiction in the past decade in a variety of publishing categories
    • Derivative-rights acquirers, such as film producers, drama producers, and so on, who frequently acquire rights in works of post-Sterling fiction
    • Self-publishing gurus

    and drawn conclusions if, and only if, the results demonstrated a statistically significant difference among these populations.

  3. We don't know whether those who did recognize the works in question responded with rejections or simply stayed silent. (More on this tomorrow.)
  4. More worryingly, the response rate is unverified. The article notes that its conclusions are based upon 21 responses. Given that this is hardly a randomized sample to begin with, it is important to know how many of these "submissions" were actually made. One would be much more confident of whatever conclusions one can draw—and, as should be clear by now, I think that is "none," but let's pretend that the previous three items had all been handled satisfactorily—if the 21 responses came from a total of, say, 30 submissions than from a total of 100.
  5. Last—or, at least, last of the pure design problems I'll point out at this time—there is the "comparable markets" issue. Bluntly, what "sells today" as fiction is not the same as what sold in the 1970s. This "experiment" essentially assumes that "good writing in the 1970s is good writing now, and vice versa." One excellent counterexample of this is Susanna Clarke's Jonathan Strange and Mr Norrell, widely praised for evoking the nineteenth-century novel of manners—but, if one actually compared its style to that of Jane Austen or any of the Brontës (and particularly first chapters), it would be immediately obvious that it had been written much more recently. In short, good writing does change, and thirty years is long enough for that to be apparent. At minimum, Naipaul's novel would have received a different copyedit than it did thirty years ago; and that is more than enough to call this particular "experiment" into question, particularly if one accepts the premise that "editors do less editing now."
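To make the sample-size objection concrete: with roughly 21 responses and only one or two "recognitions," any estimate of a recognition rate carries enormous uncertainty. The following sketch is mine, not The Times's—the counts come from the article, and the Wilson score interval is a textbook construction for a binomial proportion—but it shows just how little those numbers constrain:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, centre - half), min(1.0, centre + half))

# The article reports about 21 responses and, at most, one or two recognitions.
for k, n in [(1, 21), (2, 22)]:
    lo, hi = wilson_interval(k, n)
    print(f"{k}/{n} recognised: point estimate {k/n:.0%}, "
          f"95% interval roughly {lo:.0%} to {hi:.0%}")
```

The interval for one recognition in 21 responses runs from about 1% to over 20%; in other words, the data are consistent both with editors who almost never recognize a classic in the slush and with editors who spot one nearly a quarter of the time. An "experiment" that cannot distinguish those two worlds supports no conclusion about editorial expertise at all.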

Enough of form. How about the substance? In no particular order (numbered only for convenience later on):

  1. Why these two particular works? The Naipaul represents a style of work that simply isn't being published by the big commercial houses at this time. I'm not familiar enough with the Middleton to judge—it was out of print even by the time I was living in England, a decade after it won the Booker Prize—but I'd be far from surprised to find out something similar. And, of course, that presumes that selection for the Booker Prize—a notoriously fractious achievement—accurately measures the "true worth" of a work.
  2. The flip side of that question is also an obvious problem. It seems fairly clear from the tenor of the article that the works were sent to something perceived as a random sampling of editors and agents. That, however, is not how an intelligent author targets his/her submissions. Instead, working from admittedly vague and unsatisfactory data, an intelligent author tries to send his/her submissions to editors and agents who both have a track record of acquiring that kind of book and who appear to be actively seeking new authors. That probably rules out most of the publishers The Times approached directly in the first place—as the article itself noted, "Most large publishers no longer accept unsolicited manuscripts from first-time authors, leaving the literary agencies to discover new talent," and putting oneself in that category (as the "submission" no doubt did) virtually predicted this result.3
  3. Item 2 points out a more-subtle problem: Who, at the publishers in question, actually rejected the work? There is a substantial probability that at least half of the rejections came from low-level editorial assistants and assistant editors—uniformly with less than five years' experience—merely trying to get through the slush pile. It is virtually certain that at least some of the rejections came on non-editorial bases, such as "we've already got something similar under contract" or the S&M (sales and marketing) dorks being unsure how to sell the work. In short, this particular "experiment" at best tested black-box publishing organizations, not the editors whose "expertise" is now being questioned.
  4. The converse of item 3 applies to agents. Many top agents—from what I have seen, even more so in the UK than in the US—no longer acquire manuscripts from unproven writers without an introduction or referral from someone whose judgment they trust, or on occasion (in category fiction, anyway) having met the writer in question face-to-face. The process described for the "experiment" thus does not measure how agents actually acquire new fiction properties.
  5. Speaking of which, the "submission" as described would probably be rejected by most US publishers and agents for failure to conform to the submission guidelines. The "standard" sample of a work of fiction is three chapters, around 50 manuscript pages; further, that is part of a query package that also includes a synopsis.4

In the end, this is going to become another "See? Publishers can't tell what is good, so you might as well jump right to self-publication for your novel. And I can help you do so (and make a profit myself) by…" from various of the self-publishing gurus—and from vanity presses masquerading as self-publishing gurus. I will put a fiver on each of the following propositions:

  • On or before 15 February 2006, one of the recognized "self-publishing gurus" not otherwise affiliated with a publisher will cite this work as "further proof" that self-publishing a novel is a viable alternative.
  • On or before 15 February 2006, a branch of a major vanity publisher (more than 5,000 titles in print this century) will do the same, probably while mislabelling its service as "self-publishing" and using a name different from its recognized vanity-press parent.
  • On or before 15 February 2006, a major writers' conference will cite this "experiment" as part of a program or panel purporting to tell authors how to do better themselves.

As Don Maass noted not too long ago:

The fact is that roughly two-thirds of all fiction purchases are made because the consumer is already familiar with the author. In other words, readers are buying brand-name authors whose work they have already read and enjoyed. The next biggest reason folks buy fiction is that it has been personally recommended to them by a friend, family member or bookstore employee. That process is called word of mouth. Savvy publishers understand its power and try to facilitate its effect with advance reading copies (ARCs), samplers, first chapters circulated by e-mail, Web sites and the like.

Writing the Breakout Novel 24 (2001) (emphasis added).

Last, and far from least, The Times's "experiment" ignores one truth about publishing: It takes only one acceptance to have a publishing career. Just ask Joanne Rowling. But that's for another time.

  1. It's always awkward naming "literary eras" when they are based upon time, and not content or auctorial lifespan. The inception of the Booker Prize was at roughly the time the pound Sterling converted from shillings and pence to pence-out-of-a-hundred, so that's what I'll call it. In publishing terms, one might well call it the "post-type era," as the same period marks the abandonment of metal type for commercial books; or, perhaps, the "fantasy era," as the recently written non-child-oriented fantasy novel became a viable publishing category at about that time; or even the "MFA era," given that MFAs in writing began getting any kind of serious attention around that time. Being in an especially snarky mood, and tipping my hat to the British origin of the study, though, I'll stick with "post-Sterling."
  2. Not prove. Statistics can disprove a relationship, but they can at best support an inference. They are otherwise merely post hoc reasoning. That can be valuable in narrowing areas for further inquiry, but not in proving causation.
  3. A better approach would have been to bring a mid-level agent in on the experiment to run a third track: agented submissions of the work. That, however, would have required actual experimental design… something clearly absent from this "experiment." Even better, there should have been a fourth track for an out-of-print prizewinner seeking republication.
  4. And this, in the end, demonstrates that the "experiment" was designed to produce the "results" that it did. I strongly suspect that, had the design been valid in the first place, including a synopsis would have resulted in a far higher recognition rate than an opening chapter. Although the publishing industry seems to believe that one can tell a great deal from a novel's beginning, that's not what people tend to take away from a book once they've read it.