If
there is one thing the finger or two who know me well out of the handful of
people who know me at all know about me, it is that I take a rather dim view of
the most ballyhooed technological innovations of our time. This is not to
say (and indeed the just-mentioned finger or two would not say) that I am by
any stretch of the imagination a Luddite even in the bastardized,
hyper-stultified sense in which that word is now understood—viz., a person who
categorically refuses intercourse with all electronically dependent phenomena
of less than, say, ten years’ antiquity. No: I simply maintain that
scarcely any electronically dependent phenomenon of less than, say, sixty
years’ antiquity has changed
our lives to an extent that
merits a fraction of the paper, printer-toner, electricity, breath, or (let us
not mince words) semen and vaginal ejaculate expended in celebration or
lamentation of it. There have been, I would argue, several great falls
(although there is only one Great Falls) since the dawn of the so-called
industrial age: the first was probably precipitated by the development of the
railroad networks and steamships, when it first became impossible for a
would-be civilized person to get away with living out the full duration of his
natural without undertaking some pointless trip to some faraway place; and the
most recent by the proliferation of television, when it first became impossible
for the would-be civilized person to get away with segregating himself from the
importunities of visible and corporeally non-present humans, when ghosts became
a species of vermin as quotidian and ineradicable as rats and
cockroaches. (The movies were bad enough, but one could escape from them
at home; and the radio, while domestically inescapable, was mercifully apictorial
[one has only to do the arithmetic: if a picture says a thousand words, a
motion-picture moves at 24 frames per second, and a speaking voice manages
perhaps ninety words per minute, then a moving image of a person imploring you
to buy Colgate toothpaste, at some 24,000 words per second to the voice's 1.5,
is about 16,000 times as eloquent as the naked voice of a person delivering
the same message]). Compared with
the televisual plunge, the descent collectively catalyzed by the WYSIWYG
operating-systemed personal computer, the internet, and the mobile telephone
has, according to my lights, been a mere physiologically untraumatic down-jump,
like the one one has to perform in alighting from a bus or train. My
gorge rises with especial rapidity in response to any sort of tirade decrying
the impersonality and brusqueness of email, texting, and tweeting, as against
the hyperpersonal warmth and languorous, mint julep-sippin’ attitude to time of
the old-fashioned handwritten letter. I am heartily offended by such
polemics because I remember quite vividly what a sorrily moribund horse the
practice of letter-writing was in the later pre-email days. To be sure, I
myself then corresponded regularly and not-unlengthily with several friends
(indeed, almost a full handful of them!), but the habit had a rather pungent
whiff of campy anachronism about it, as if my correspondents and I were
collaboratively penning an epistolary novel: we wrote letters to each other for
the pleasure of writing letters, and of following in the footsteps of the great
letter-writing friends of old. The idea of exchanging letters with people
to whom I was bound by ties that I did not aestheticize—for example, my parents
or my younger brother—never crossed my mind. If I needed or wanted to say
anything to them, I gave them a phone call. And as near as I could tell,
it was solely by telephone that the vast majority of my contemporaries and
elders transacted the entirety of their non face-to-face business with all
other people, from total strangers to those supposedly nearest and dearest to
them. “But surely it was highly impractical to be on the telephone 24/7,
7/52 in those station-to-station, pre-flat-rate long-distance, landline-only
days.” Indeed it was, and people dealt with that impracticality by
generally not being on the phone, which was easy to
do, given that they generally had little interest in and often a positive
aversion to the people whom they knew to be immediately accessible via that
engine. Perhaps even in the most intellectually fertile and socially
integrated ages, the default attitude of one human being to another—whether friend,
stranger, or foe—is mild to severe hostility. When other people are
present, the most efficient medium of expression of this hostility is verbal
abuse; when they are absent it is silence. When paper letters were the
only affordable means of communication, most people did not write or send them,
because not writing to your awful cousin in Poughkeepsie was a much easier and
more eloquent way of saying “Fuck you!” to her than actually sending her a
letter reading “Fuck you!” would have been. The new or pseudo-new media
of communication, by effectively bringing us into one another’s presence,
encourage us to adopt the present-specific mode of expressing our
hostility. Thus the so-called social media would be more properly termed
the anti-social media, and the rudeness the historically purblind decry in
these fora is merely humanity’s native tone. Thankfully, Moore’s
so-called Law assures us that the days of such ubiquitously smellable
shit-talking and typing are numbered; for on the day when the latest mobilephonetablet
is no faster or smarter (except perhaps in a stylistic sense) than the latest
but one, the thrill of using such an engine will summarily and permanently
evaporate, the mobilephonetablet will take its place alongside the post-it note
and disposable razor in the class of commodities we take utterly
unsentimentally for granted, and humanity without the walls of the domestic
dwelling unit will mercifully revert to being a race of close-mouthed
hermits. In the meantime, the mobilephonetablet and the various
proprietarily named fripperies it has engendered and facilitated will join
company with Justin Bieber and global warming as mild but constant vexations
visited upon the would-be civilized person via the mouths (a.k.a. north anuses) of other people
decrying his freakishness in not taking them seriously enough in some
fashion. But the would-be civilized person cannot take these ephemera
seriously in any fashion, and least of all a polemical one. That the
world has for quite a long time been getting ever shittier he vehemently
asserts, that it is measurably shittier now than ten or twenty or thirty years
ago he willingly acknowledges, but that it is significantly shittier now than then is a notion
that he will never deign to grace with the soupçon of a smile, for in doing so
he would effectively be hanging out his card and setting up shop as a mere
laudator of the already long-since barbarous auld lang syne of living memory,
as a mountebank trading on the fraudulent notion of the 1970s as a prelapsarian
epoch, when he regards his true vocation as that of an Entferntervergangenheitsreinheitsseher,
a communicant with the purity of the distant past of pre-living
memory.
That
said-stroke-having said that (the that in both cases effectively being the
very first sentence of the preceding paragraph, to which the remaining
sentences merely add much needful if tedious illustrative evidence), there is
at least one technologically dependent transformation of recent years that has
caught me completely off-guard and left me utterly gobsmacked (I am no
fan of this g word, but it seems by now to stand in much the same relation to surprised or overwhelmed as “bullshit”
has long stood in relation to humbug or balderdash,
namely as the only word forceful enough to convey the meaning nominally contained
in the more genteel term). The change I am referring to is the by now
more or less fully accomplished digitization of all new or newly reproduced
mechanically reproducible images, and in particular moving images, the transformation of movies
and TV shows from phenomena originating directly from the chemical, electrical,
or magnetic excitation of corporeal matter into phenomena originating only
mediatedly from such excitation, and directly from the immaterial empatternment
of entities purported to be ones and zeros. It was a change that took me
completely aback (or, rather, genuinely aback,
the distinction between afrontness and abackness being a binary or
digital-esque one that does not admit of gradations) and has left me finding
the world not merely a more depressing place (that progression is par for the temporal
course) but also a stranger one. And as vis-à-vis probably all such
changes, in hindsight one feels remarkably obtuse in not having seen it
coming. One had after all known for decades that the mimetic sector of
the phenomenal world was becoming ever more digitized. When was it that
one heard one’s first demonstration of the hyperphonographic properties of the
compact disc, via that J. C. Penney's (or Sears) floor-model player, and its
ostentatiously ffr-spanning rendition of some unnamable orchestral
warhorse? (While I do not remember the identity of the piece, I somehow
recall that the disc—which could be viewed spinning all those seemingly
impossibly numerous hundreds of revolutions per minute through the player’s
transparent lid—was at least partially robin’s-egg blue in color [but surely
the record labels weren’t yet bothering with painted upper surfaces, trusting
the public to be seduced and dazzled by the naked prismatizing silver].)
1983? At the very latest, 1984, when one was twelve, presumably still
young enough to take such a transformation in one’s stride. And even
one’s first acquaintance with digital moving pictures scarcely postdates one’s
attainment of one’s majority: how vividly (albeit dimly) one remembers that
postage stamp-sized Q*******e realization of the video for Radiohead’s “Creep”
that came bundled with the system software for certain M*******h computers in
the autumn of 1993. Whence I can only conclude that quasi-paradoxically
it was the early saturation of so many subsectors of the mimetic sector of the
phenomenal world with digitized things that left me unprepared for the
digitization of that final (or is it perchance only the penultimate or even the
antepenultimate?) one. Because the digital LP replacement and the digital
nickelodeon flick or flipbook were not immediately followed by the digital full-fledged
studio-produced motion picture or television program, it was easy enough to
assume that there would never be digital FFSPMPs or TV programs,
that producing audiovisual records of such gloss and sheen and depth and range
and sheer, ineluctable bewitchingness was one of those “things that computers
can’t do,” that it involved some arcane artisanal mystery akin to that entailed
by the manufacture of authentic Delft china or Stradivarius violins, as if
every frame of the master of a Hollywood blockbuster had to be individually
prepared by people in surgical caps and masks laying on layer upon
parti-colored layer of powdered chlorides and nitrates with apparatuses
resembling pepper-grinders, or every micrometer of videotape for the latest
episode of Home Improvement had to be inscribed with its
corresponding micrometer of pseudo-sine-wavage by people in radiation hoods and
gowns waving their arms about in front of the to-all-appearances completely
motionless capstans like virtuosos of that early electronic musical instrument
known as the Theremin. Obviously (in hindsight), though, movies and TV
programs have always been manufactured by means of industrial processes relying
on lightning-fast image-registrations with which the producer (by which term
I mean not only or mainly the person bearing that job title, but also the
director, cinematographer, script-writer, et al.) needed to have no conscious
involvement, and therefore the delay in the emergence of the digifilm motion
picture or TV program could only have been owing to the temporary inferiority
of the digital moving image, which could only have been owing to the
(temporarily) inadequate processing speeds of existing computers and storage
capacities of existing storage systems (CD-ROMs, Bernoulli drives, or whatever
else happened to be state of the art in any given year). Once the
digitally produced moving image was technically not only equivalent but
superior to the best-generatable analogue one, the supersession became a
foregone conclusion. And on the score of this superiority I am under no
illusions. The reader may rest assured that the balance of the present
essay is not going to be one of those willfully ill-informed or disingenuous
tributes to the rich, lanolin slathered Corinthian leather-like, life-bearing
organicity of the older medium, of the sort that one associates with, say, the
lovers of gramophone records. In the case of the televisual image, its
digitization-worthiness has always been intuitively obvious even to the naked
eye. One had only to come within a foot of the television screen to see
that its picture was composed of a finite and in principle countable number of
indivisible squares of light (specifically red, green, or blue light), such
that any computer-generated televisual image, however egregious its
limitations, could never fall short of a standard that one had ever believed to
be perfect. Of the warts-and-all-ishness of movies, on the other and more
prominent hand, one was blissfully ignorant: one fancied that every filmed
image was made up of wholly analogic lines and splashes of color (or grayscale),
and that no matter how closely one drew to that image, or how many times one
enlarged it, one would always end up seeing some thing or collection of things
that lacked a border along at least one geometrical axis; that at no point
would a film image resolve itself into a mosaic of mutually tangent but
isolated squares (or hexagons, or octagons, or what have you). Of course,
one was familiar with the notion of a “grainy” film image, of a film image that
was at least partially made up of visibly discrete units, and one was familiar
with specific movies—mostly older ones—that were prevailingly composed of such
images. But one assumed that these were aberrations, no different in kind
from out-of-focus shots or wear-and-tear-induced scratches on the celluloid, neither
of which impugned the mimetic prowess of the medium itself. Little did
one know that the intrinsically, infinitely analogic properties of film were
likewise an illusion. I have not investigated the chemistry or physics
behind this phenomenon at any length or in any detail—or at all as a matter of
fact—but I have been told or led to believe by people in the habit of crediting
only creditable sources that these analogic properties break down at a
dispiritingly coarse resolution, that conventional celluloid film stock, be it
of even the most recent laboratory standard (i.e., the latest-day equivalents
of Technicolor and Eastmancolor), can register only a dozen or so gradations of
hue or grayscale, gradations that cease to appear to blend into each other when
viewed, if not by the naked eye at front-row viewing distance, then at any rate by the eye
aided by a magnifying glass placed a few inches from the projection
screen. Needless to say, by the early years of the previous decade
digital moving images that did not break down into their constituent pixels
when subjected to such scrutiny were rolling off the assembly lines of Palo
Alto, and by then the celluloid film's days were numbered in a more than
technically fanciful sense.
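For the quantitatively minded, here is a minimal back-of-the-envelope sketch in
Python of the disparity just described. The figures are assumptions rather than
measurements: roughly 480 visible scanlines (at a notional 640-column width)
for an NTSC television raster, and the commonly cited estimate that a well-shot
35 mm frame carries detail on the order of a full-aperture 4K scan.

```python
# Ballpark comparison (assumed figures, not measurements): an NTSC television
# raster vs. the detail commonly attributed to a 35 mm cine frame.

ntsc_cols, ntsc_lines = 640, 480    # notional 4:3 NTSC raster (approx.)
film_cols, film_lines = 4096, 3112  # full-aperture 4K scan of 35 mm (approx.)

tv_elements = ntsc_cols * ntsc_lines
film_elements = film_cols * film_lines

print(f"TV picture elements:   ~{tv_elements:,}")    # ~307,200
print(f"35 mm frame elements:  ~{film_elements:,}")  # ~12,746,752
print(f"ratio (film : TV):     ~{film_elements / tv_elements:.0f} : 1")
```

On those assumed figures the filmed frame packs some forty times as many
distinguishable elements as the televisual one, which goes some way toward
explaining why the latter's discreteness needed only a close lean-in to expose
while the former's could hold out until the magnifying glass.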
This
is of course not to say that even now a majority of the digital moving images
one comes across in the daily round of commerce with electronic media are of
Alpine-standard (that's the antithesis of b*g-standard) 1990s' Hollywood
cinefilmic quality. The minuscule frame size of the standard Y** T**e
video, for example, is imposed not by the visual scope of its typical
displaying medium—say, the fourteen-inch screen of a largish laptop—but by the
incredibly low resolution of the video itself, which is such that it is only
when its images are viewed as if from across the width of a football stadium
that they can trick us into mistaking them for faithful copies of real-world
entities—an illusion that even an old nineteen-inch low-definition TV set
displaying over-the-air analogue reruns of Gilligan’s
Island managed to pull off by
seeming to keep its images only as far away as the length of a ping-pong table
of perfectly ordinary, garage-friendly dimensions. There is no inherent or invariable superiority of fidelity in a
digital image, any more than there has ever been any inherent or invariable
superiority of fidelity in a digital audio recording, although the history of
the reception of both digital sense-registrations
leads one to believe that this superiority has been and continues to be generally
taken for granted. One recalls that way back in that millennium-closing
year 2000, when the first, pre-castrated version of Napster went viral (as one would say nowadays), much
of the hysteria about the platform on the part of the so-called music industry
and its millions of disinterested partisans (one must never underestimate the
culturally industrialized other-directed human being’s proclivity for
identifying with the oppressor [although I think some nicer word than
“oppressor” should be come up with to designate people who merely want you to
give them your money and are using much less coercive means of trying to get
you to part with it than a whip or gun]) emanated from the notion that the
recordings disseminated gratis by it (Napster) were perfect copies of the CD, LP, or audio
cassette tracks from which they had been derived. These recordings were
so conceived because in those days, when cassettes remained the only medium of
home recording that was both reliable and affordable, the only sort of
imperfection of audio fidelity that anybody had any notion of was analogue and
contingent in character—surface noise, tape hiss, wow and flutter—all of them
successful realizations of the contingent corporeal world's effort to
contribute its own unruly two cents’ worth to the would-be necessity-driven
message of spirit. Little did anybody then seem to realize that digital
recordings were subject to their own sui
generis form of imperfection,
an imperfection imposed not by chance and the medium itself but deliberately by
the curators of digital archives both amateur and professional; that the degree
of this imperfection, visited on the recording by a process known as compression, depended on how
much—or, rather, how little—storage space the curator was willing to allocate
to a given chunk of music; that even the audio standard of the original compact
disc, with its pathetic 16-bit sample depth and 44.1-kilohertz sampling rate, had been a
data-capacity-per-square-unit-of-space-imposed compromise that fell well below
the threshold of apparent perfection of fidelity (presumably the fact that CDs
were so much smaller than LPs was owing entirely to problems of torque and
whatnot imposed by their rotation speed, as caeteris paribus a 12-inch laser
disc would have yielded appreciably higher fidelity than a four-inch one); that
what with the average Napster user’s hard drive’s possessing the storage
capacity of roughly two CDs, and the library stored thereon of several
thousands of minutes of music, in point of median sound quality the average
Napster download was barely a match for the likewise telephonic théâtrophone
broadcasts that Marcel Proust had reveled in during the previous fin de siècle. On the evidence
of our collective experience of similar switchovers in more remote times, one
would have every reason to suppose that by now, fourteen years into the
millennium, we would have all grown savvy to the limitations on aural
experience imposed by compression, and moved on to a standard of audio
reproduction undreamt of in the original Napster days, such that CD-quality
sound would have long since been demoted to the status of a pathetic pis-aller suitable for only the most prosaically
quotidian communications—occupying a niche of the prestige level of AM radio
(which, by the way, despite having been deprived of its crown as the
predominant purveyor of live audio content nearly forty years ago, has not yet
disappeared and is unlikely to do so until analog radio en bloc is phased out [but who knows how far
off that day is, given that by all rational rights analog radio, being the
bearer of a less complex signal, should have been forced to give up the ghost
long before analog TV, and yet analog TV has been dead a full five years
now]). Instead, the MP3, an avowedly sub-CD quality medium, is both the
industry standard and consumers’ format of choice for downloaded music.
What gives? Whence comes this voluntary regression? Or is it even
voluntary? Is it possible that the very idea of a distinction between
superior and inferior sound-images is indissolubly tied to the old analogue
recording media—or, to be more precise, a phenomenology of listening attuned to
the potential shortcomings of such media—such that only those old enough to
remember when LPs and cassettes were the state-of-the-available
sound-reproducing art are still capable (supposing it to be a capability and
not a disability) of caring about or noticing deficiencies in sound
reproduction, and that when shove is reached by push, anybody under the age of,
say, 35, will reflexively, unthinkingly go for the maximum time-span and number
of tracks even if the sound quality delivered therein is of sub-Edison-wax-cylindrical
poverty?
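To put rough numbers on the compression in question, here is a minimal sketch
in Python. The CD figures are the published Red Book parameters; the 128
kbit/s MP3 rate is assumed here as typical of the Napster era, not asserted as
what any particular download used.

```python
# Red Book CD audio vs. a typical (assumed) Napster-era MP3 encoding.

SAMPLE_RATE = 44_100  # samples per second per channel (Red Book standard)
SAMPLE_DEPTH = 16     # bits per sample
CHANNELS = 2          # stereo

cd_bitrate = SAMPLE_RATE * SAMPLE_DEPTH * CHANNELS  # 1,411,200 bit/s
mp3_bitrate = 128_000                               # assumed typical MP3 rate

print(f"CD:  {cd_bitrate:,} bit/s (~{cd_bitrate * 60 / 8 / 1e6:.1f} MB per minute)")
print(f"MP3: {mp3_bitrate:,} bit/s (~{mp3_bitrate * 60 / 8 / 1e6:.1f} MB per minute)")
print(f"reduction: ~{cd_bitrate / mp3_bitrate:.0f}x")  # ~11x
```

An elevenfold saving in storage space, in other words, bought at the price of a
difference in sound that, if the foregoing paragraph is to be believed, hardly
anybody under 35 is any longer equipped to hear.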
“Who
knows? But in any case, this is supposed to be an essay on visual media, from which everything you have
been saying about all these entities and phenomena pertaining to sonic media cannot but
constitute a pure digression.” Actually, I’m not sure that it is a pure
digression. For whether we are talking about the reproduction of sight or
the reproduction of sound, it would seem that the past quarter-century has
witnessed some sort of radical reorientation of the viewer’s or listener’s
horizon of expectations. Formerly, from that hot or cold, rainy or
bone-dry, blustery or sultry day in 1830-something when Louis Daguerre legged
it down to the patent office with his big glass plate tucked under one arm,
right up until that hot or cold &c. 2000-something day when the last pop
album was released not only in CD but also in cassette format, everyone seemed to
be under and revel in the impression that mankind’s means of representing to
itself the world around it(self) were constantly improving and asymptotically
approaching the moment of indistinguishability from the real McCoy. From
black-and-white still photography we moved on to color still photography; from
black-and-white 16 mm motion picture film we moved on to color 35 mm motion
picture film and thence to color 70 mm motion picture film and thence (in
selected cinemas) to that sideways-running 70-millimeter'd monstrosity known as Imax;
from acoustically recorded records we moved on to electrically recorded records
and thence to CDs; from mono sound we moved on to stereo sound and thence
(eventually, a decade or two after the false start known as quadraphony) to
surround sound. Every day and in every way we expected ourselves to be
more fully, richly, and accurately in touch with the world accessible by
the senses than we had been the day before. Now—and I say this while
candidly acknowledging the legitimacy of the interjection of a
Mixalotian-dimensioned but occasioned by the recent and for the
moment still-current recrudescence of pandemic hope in the possibility of
universal 3-D cinema and television—all that seems to have changed.
Now[adays], we (or should I say “they” or “you all” [or “you guys” or “yins”])
seem to care only about the accessibility and fluency of the mimeme, and no longer to give a
fig about its fidelity to the mimed phenomenon. If, when streaming (how I pine for an equivalent of so-called for grammatical entities other than
count nouns!) a movie, we (or “they” et al.) are forced to wait so long as two
seconds for the picture to refresh—in other words, to endure the briefest spell
of still imagery or blank screenage—we stamp our feet and shake our fists with
all the apoplectic fury of an early late twentieth-century paterfamilias
protesting a meteorologically induced interruption of The Wide World of Sports by a simultaneous catch-as-catch-can
assault on the horizontal and vertical holds of his rooftop aerial-attuned
living room hog of a 24-inch console set. But provided the transmission
elapses without a single hitch of this kind, we spectate upon its fruits as
fully satisfied customers, devoting not a micrometer of our so-called attention
span to the matter of whether they are every bit as verisimilitudinous, as
still-life-in-motion-like, as their counterparts on DVD or Blu-ray.
The
whole of the preceding two paragraphs, I blush to admit, is intended to
function as one big, walloping, waddling disclaimer of sorts, one that in
hindsight I now see would perhaps have been more seasonably sited before the
essay proper rather than within it—but fudge it, what’s done is dun,
&c. You see, when I alluded or referred some pages ago to cinema projection-screen
images “that do not break down into their constituent pixels” when viewed with
a magnifying glass, I was alluding or referring to a phenomenon that I had not
as yet experienced at first eye, and that indeed I have not as yet experienced
as of this writing. When I hear from those who have been to screenings of
digital films in chain multiplex cinemas that during such screenings “you can
see the pores on the actors’ faces,” I must assume that the state-of-the-art
technical standard of cinematographic mimesis has surpassed anything obtainable
in the most upmarket and recent-vintaged pre-digital cinematic setting.
But I myself have not been to a screening of any movie at a multiplex cinema
since, at the latest, 2003, when, if I am not mistaken (though I may very well
be) old-fashioned analog cinefilm was still the universal standard at the
consumer end. The virtual entirety of my acquaintance with the
post-analogue cinemascape has in fact been mediated by, at the quasi-immediate
or gateway level, my 14-inch 2010 laptop screen with allegedly high-definition
capabilities and my 2008 19-inch low-definition non-flat ordinary television
screen; and at the unabashedly mediate level by the reproductive limitations of
the original DVD format and the nearly equally ancient DSL interweb connection
system-cum-protocol, together with whatever such limitations are imposed by
N*****x et al./&c. on the so-called server end of the inline delivery
service(s). Accordingly any animadversions I may subsequently formulate
(and I plan to formulate scads of them) on the unsatisfactory character of the
new digifilm standard by comparison with certain standards of yore may for all
I know be vitiated by the impurities imposed by the intervening media just
mentioned. The reader may very well believe that it is incumbent upon me,
before I go off half-c***edly shooting my mouth (or, rather, fingers) off about
digital movies, to take the trouble to see one in all its blackhead-infested
pore-exposing glory; if so, I no less abjectly than warmly entreat him or her
to remit to me the ninety dollars I conservatively estimate would be required
to convey me by taxi to and from the nearest digitally equipped multiplex,
which local lore informs me is sited somewhere in the remote hinterland of
central Anne Arundel County, well past the airport. Failing such an
offer, this essay will at least have as its standard of reference for digital
motion photography a composite version thereof that I fancy is not radically
dissimilar to that carried in the heads of the majority of present-day
Americans, who, even if they do make it to the proper digicinema a couple
of times a month, still do the bulk of their moving image watching at home, on
their televisions, or at work, on their computer-screens; a standard, that,
moreover, will perhaps be ideally apposite in that its historical foil (i.e.,
the standard with which it will be unfavorably compared in the aforementioned
animadversions) was one with which I was likewise most familiar through the domestic
media.
*
Now
begins the essay proper, and with it our embarkation on the journey back to its
eponymous golden age, the (I repeat the title to spare you the admittedly more
than negligible effort of scrolling back up to it) “golden age of telecast videotape
and 16mm film.” In introducing a reader to an age it is customary—and
indeed held to be but the barest degree of civility—to supply him or her at the
very start with a pair of book-ending years, the year marking the beginning of
the age in question, and the year marking the end thereof, respectively.
And that I might not be thought to be breaking with this custom out of sheer
frowardness or wantonness I shall supply such a pair—viz., 1970 and 1986—right
here and now (or by now there and then), well before the full stop terminating
the present sentence. But before I employ either of these years in its
civilly mandated bookenderly function, I must in all candor and frankness
confess that there is more than a whiff of if not arbitrariness then certainly
factitiousness about putting them to such a use. You see, DGR, as this is
a subjectively-grounded chronology-cum-analysis (I point this out explicitly
just in case you’ve only barely figuratively been sleeping so far), in
tentatively situating the left or earlier bookend at 1970, I am bound by
default to mislead you into supposing that I was watching videotaped or 16
mm-filmed moving images in that year, which is not true, as I was not even born
until 1972. “So then this year marks an uncharacteristically objective watershed.” That’s not quite
true either, for by 1970 both videotape and 16mm film had been widely used in
television for many years. Vis-à-vis videotape, at least, one has only to
think of the funeral of President Kennedy in 1963. But I did not see a
second of the footage (if footage is a word that may be as aptly applied to
videotape as to film) of that funeral until its twenty-fifth anniversary, in
1988—in other words, a full two years after the right bookend year of
1986. Which brings me to the significance of that year—a mite
prematurely, though, as I have not yet fully elucidated the significance of the
left bookend year, 1970. And so in full the significance of this year is
that no television program originally filmed or recorded before that year has
ever been surrounded by the aura or imbued by the perfume of romance that
surrounds and imbues virtually every 16mm or videotaped (or
16mm-cum-videotaped) program made from that year onwards. But this
significance has in turn some apparent objective basis, in that it very
probably was in 1970 that videotaped television really took off as the industry
standard—or, at any rate, took its place as a co-standard alongside 16mm
film—owing very probably to its new encoloration. To be sure, color
videotape was not actually invented in 1970, but to my young (ca. 8 to
14-year-old) eyes it might as well have been, in that as soon as I acquired the
habit of looking out for copyright dates on television programs, that was the
earliest year I ever espied, and indeed, I can remember the very program in
whose credits I espied it, namely Let’s
All Sing a Song, a
musical-educational series
that was hosted by the redoubtable banjoist-cum-folksinger Tony Saletan, and
that I along with thirty or so other eight-or-nine year olds was regularly (or
at any rate not infrequently) compelled to watch by my second grade teacher,
Mrs. Foster. Let’s All
Sing a Song was for me a
watershed or milestone or whichever other clichéd metaphoric vehicle is most
appropriate for designating something separating two (of only two) historical
eras (yes, yes, yes: cf. the B.C. / A.D. divide if you must): in being in color
and on videotape it was a segment of the modern world, yes, but it was also the oldest segment thereof; any more ancient
soundtracked moving image segment, in being on film (as I then assumed all
pre-1970 moving image segments were), might as well have been generated in the
same month as the first Laurel and Hardy talkie or The Wizard of Oz (which one of course depended on
whether it was in black and white or color), even if its last frames had still
been being exposed at 11:59 p.m. PST on December 31st, 1969.
By the time I became acquainted with a single item from the slender corpus of
pre-1970 color videotaped television—I suppose it must have been a Laugh-In rerun—I was well to the right of the
right bookend year (1986, in case you’ve already forgotten) and so the
watershed or milestone could no longer be budged back a single inch. So
anyway, what happened in 1986 that made it epoch-breaking? (I write epoch-breaking and not era-breaking because just as in
a sense or arguably we are still living in the iron age [or era], in that all
subsequent eras [space, nuclear, information, &c.] are in a sense or
arguably but pseudo-ages, so I [if only I] am in a sense or arguably still
living in the modern, color-videotape post-Saletanian era, to which, as I have
already hinted, all other eras [chief among them the digifilm one] are in a
sense &c.) Certainly it was not the supersession of videotape by some
other medium, for television shows continued to be recorded on videotape in the
thousands for at least another decade and a half, and indeed for certain genres
of television program—talk shows, news magazines, and every sort of show
centering on a public performance, were it of the Rolling Stones at NFL
Football Stadium X or the Metropolitan Opera at Lincoln Center—there was no
practical alternative to it. (To be sure, during this period such spectacles
were occasionally filmed, but most often for release in cinemas [consider,
e.g., The Last Waltz and Monty Python Live at the
Hollywood Bowl]—doubtlessly because the typical television production
budget was not large enough to recoup the costs of all the hundreds of hours of
chronographically redundant film [to say nothing of the parts and labor
involved in running all those multiply-stationed cameras simultaneously], most
of it ultimately unused, that were requisite to adequate realizations of the
busy-ness and spontaneity of these events.) Nor did 16mm film disappear from
the televisual scene before the advent of the millennium; for although by the
mid-1970s videotape had usurped many of its old offices, it remained the
most-favored medium for the more expensive situation comedies and the less
expensive drama series and made-for-TV movies; not to mention its impressive
career in the service of that most quintessentially fin du vingtième siècle televisual genre, the pop music
video. (Indeed, throughout the 80s and 90s, there was no surer sign of a
band’s accession to the big time than its switching from video-making on
videotape to video-making on 16mm film.) What happened in the
mid-eighties, rather, was that the symbiotic marriage of video and 16mm film appeared to
end, that television programs that made mutually complementary use of both
media ceased to appear to be made; such that when I try to recall dual Vid-16mm
shows that I have become smitten with since then, I inevitably alight on some
pre-1986 production I first saw as a rerun—The Sandbaggers, for example,
or Van der Valk, both of
which were aired on my home media market’s second public television station in
the early nineties. From 1970 to 1986 it seemed that on switching on the
telly one had a good chance of spectating on a recently produced dual Vid-16mm
mixed media program; from 1986 onwards it seemed that one could not reasonably
expect to come upon anything new or newish that was not either all video or all
16mm in medial provenance. “But why, given that the title of this essay
alone proves that you held both media in high regard, should you have been
disappointed in a transition that seems to have left you liberally supplied
with fresh instantiations of both of them?” Ah, to answer that question,
“I must,” in the words of a writer whom I dare not name, lest I mislead you
into thinking I know much more of his oeuvre than the sentence I am now in the
middle of quoting, “trouble you with a bit of history,” the history of my early
(some of it very early—i.e., pre second grade) reception of television, which
will in turn require me to trouble you with a bit of history of television
itself (or, rather, television as it was for the world at large, for I would
not hypostatize that television as a Ding an sich at the expense of my television.) (I hope you realize that
throughout the preceding sentence, including its concluding parenthesis, I was
talking about a viewing-phenomenology-cum-productive apparatus and not about a
cluster of individual television sets, although what with television being an OED-accredited count noun,
you are certainly well within your rights to misconstrue me along those second
lines). This history-fragment must begin (or, at any rate, is most
conveniently begun) with the introduction or undraping of a certain elephant,
the elephant in the room (i.e., the room that is this essay) that is 35 mm-and-upwards film. Of course, in a
certain sense, the reader has already met this elephant, which in a certain
sense has never been concealed, for I have indeed already both explicitly
mentioned and implicitly alluded to several 16mm+ gauges of film. (The mentions
may be tracked down by a verbatim search query; the allusions are to be found
in the droppings [ugh!] of names fairly tightly associated with 35mm and up
[e.g. Eastmancolor].) But heretofore I have either explicitly or
implicitly ascribed to them an exclusively non-televisual and narrowly
cinematic bailiwick; in other words, I have treated them as though they were
media one only ever encountered (or, perhaps more appositely, had ever encountered) at official Milk-Dud
and popcorn-purveying movie theaters. The truth about 35mm+ media’s
presence on television is in fact rather more complex—or, to put it less
gently, completely different. In the first place, in deference to the
antiquity of these interlopers’ residence in the headquarters or palace (albeit
also in defiance of the modernity of one’s discovery thereof)—one must
acknowledge that beginning at least as far back as the early 1960s, 16+ mm films
were specially commissioned for broadcast on television. Ten or perhaps
even five years ago, I would have been not merely loath but positively
defiantly unwilling to make such an acknowledgment. Until ten or perhaps
even five years ago, I assumed that all film destined for televisual broadcast
had been of 16mm gauge. But then, one sultry or bone-dry night between
2004 and 2009, I was watching my Ruscico DVD of Grigori Kozintsev's masterly
(and some would say definitive) cinematic realization of Hamlet, or perhaps his equally masterly
(and equally said to be definitive by some) cinematic realization of King Lear—in either case, a
black-and-white movie presented under the aesthetically equivocal offices of
the letterbox (remember that my TV set is an old-school square model, via which
all gains in authenticity of aspect ratio come at the cost of a reduction in
size of the image)—and I was struck by the indoor (or studio-set) scenes’
cinematographic dead-ringerly resemblance to the black-and-white episodes of
the early 1960s’ American television series The
Wild, Wild West, which in turn had mutato
mutando always struck me as
cinematographic dead ringers for the full color run of the original
Shatner-centered Star Trek series. But how could this be,
given that the Kozintsev Shakespeare movies, being letterboxed, had certainly
been shot on 16mm+ film? The seeming contradiction could only be resolved
by the supposition that The
Wild Wild West and Star Trek had been shot on 16mm+ film—presumably
on 35mm film. Of course, that supposition immediately elicited (N.B.,
DGR: elicited, not begged) the question “What did
the producers of The Wild
Wild West and Star Trek do with those extra 19 mm of
film?”—i.e., with the 9.5 millimeters on each side of the frame that, falling beyond
the blinders of those old-school square TV sets, would have been exposed in
vain? To this day, that question remains as unanswered as the one
musically posed by Charles Ives probably roughly and possibly exactly a century
ago. But this unansweredness need not concern you or me right now (or,
indeed, perhaps even ever); for our present purposes it suffices to hone the
above presumption into an assertion, and from that assertion move back into
anecdotal mode as a propaedeutic to a mildly modified reiteration of a slightly
f(a/u)rther-above generalization, as follows: Star
Trek and The Wild Wild West had been filmed on 35mm stock, and the
moment I realized this I began performing a kind of comparative mental
split-screen screening of ST and TWWW alongside as many other filmed TV
shows as I could remember, towards the end of ascertaining which of them had
likewise been given the full (or at any rate a fullish) Hollywood treatment
between the sprockets (“The Columbo so-called mystery movies? Almost
certainly. The Partridge Family? Possibly. But Mary Tyler Moore, The Bob Newhart Show, and Eight Is Enough? Almost
certainly not. Indeed, that troika of programs practically defined the look of stateside commercial 16mm
television, aFaIWC. But what about those big-budget weekend
second-tier-celebrity-cowcatchers and weekday evening soap operas, the likes of The Love Boat and Fantasy Island, and Knots Landing and Dynasty,
respectively? Possibly but by no means almost certainly.” And so
[albeit not much f(a/u)rther] on.). So, as I said, a great deal of
expressly televisual television in our period (remember: 1970-1986)—and more
specifically, as we have just seen in the preceding parenthesis, prime-time television—was evidently 35-mm in
provenance. But in addition to this swath of properly televisual 35-mm
offerings, during our period prime time, together with its late-night schedule
follower, was perennial host to movies originally and relatively recently
produced for and released via the cinema—in other words, very nearly
axiomatically, movies shot on film gauges of 35 mm and upwards. Now,
because, as mentioned before, in those days television sets were not shaped in
such a way as to accommodate the geometrical proportions of a widescreen film
image, and because, as not mentioned before, letterboxing had either not yet
been thought up or (as is more likely) thought up and immediately dismissed as
a piece of poncey arthouse cinema-fannish wankery, the preparers of
cinemagenetic movies for television broadcast (or perhaps even the broadcasters
themselves, at the moment of transmission; I suppose either would have been
capable of it) would simply zoom in on the film-frame in order to make its
complete vertical aspect commensurate with that of the television screen, thus
cropping the aforementioned 9.5 millimeters of horizontal aspect on each side; then,
through a repertoire of camera maneuvers known as pan-and-scan, they would every
now and then shunt bits of the frame aside, to the left or to the right, in
order to make room for other bits that they supposed the viewer would take more
interest in. Thus, the televisual preparer effectively served as a second
director, subdividing each shot (actually, perhaps only most shots, as presumably there were plenty
of shots in which from beginning to end nothing interesting was deemed to be
happening in the margins) into a series of sub-shots.
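To make the cropping geometry concrete, here is a minimal sketch in Python.
The aspect ratios are the standard theatrical ones; the function and its name
are illustrative inventions, not anything a broadcast facility actually ran.

```python
# What fraction of a widescreen frame survives a zoom to a 4:3 television
# raster? The crop keeps the full frame height and sheds width on both sides.

TV_ASPECT = 4 / 3  # width:height of the old square-ish television screen

def width_retained(film_aspect: float, tv_aspect: float = TV_ASPECT) -> float:
    """Fraction of the film frame's width still visible after the crop."""
    return tv_aspect / film_aspect

for name, aspect in [("Academy (1.37:1)", 1.37),
                     ("flat widescreen (1.85:1)", 1.85),
                     ("anamorphic 'Scope (2.35:1)", 2.35)]:
    kept = width_retained(aspect)
    print(f"{name}: {kept:.0%} of the width kept, {1 - kept:.0%} cropped away")
```

On these figures a 'Scope picture surrenders more than two-fifths of its image
to the crop, and it is exactly that forfeited real estate the pan-and-scan
operator was forever shunting into and out of view.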
one didn’t notice these interventions, but occasionally one did and was
unsettled by them (without at all knowing why, as the incommensurability of
aspect-ratios was a discovery one made only long
after 1986): for instance, in a scene of a profile tête-à-tête across, say,
a broad table, each of the interlocutors would be seen only while he was
speaking, such that one was repeatedly and bemusingly denied the pleasure and
intelligence afforded by a so-called reaction shot. Surely, one reasoned,
it would have made much more sense to film the chinwag as a static shot, with
both waggers in view from beginning to end. Little did one know that it
had in fact been filmed in just such a manner. But probably more
disruptive of one’s so-called viewing experience in the case of these
retrofitted movies was the palpable impoverishment of resolution effected by
the aforementioned in-zooming. Even mutato
mutando—that is to say, with all due regard for the inferiority of
resolution of pre high-definition television screens—as seen on television, and
even when hosted by the primest bits of real estate in the reigning network’s
prime time schedule, cinemagenetic movies looked much less sharp, much less
crisp, much less properly cinematic,
than in the cinema. And the f(a/u)rther the hosting was sited from these
A-listed sites, the more egregious the pan-and-scan-induced shortcomings, limitations,
and distortions tended to be, and the more often were they augmented and
compounded by other, non pan-and-scan-induced shortcomings &c. that were
equally or even more egregious. Take, as something approaching a limit
case of viewerly awfulness, the example of a movie shown on a local,
independent, non network-affiliated station at 2:00 on a weekday morning [not
that it was at all common in the most usual present-day sense for a station to
be on the air at such an hour during our period; and for this very reason any
station that was so was to be regarded as common in the most usual pre-ca. 1920
sense]. In the first place, the movie was likely not to be anywhere near
new. This is not to say that it was anywhere near likely to be a classic from the so-called golden age of
Hollywood—an early Marx Brothers vehicle, say, or a Bogart-centered noir
picture, or a Powell and Loy-powered screwball comedy—no: in those pre-TCM days
such films—which, being black-and-white and 16 mm-gauged, were largely immune
to the depredations now in point (albeit also subject to their own, more
aristocratic, strains of corruption)—tended to be aired on Sunday afternoons,
either on one of these independent stations or on one of the two quasi network
(i.e., PBS)-affiliated public stations. (The independent channels’ Sunday morning schedules, on the other hand, were the
preserve of golden-age shorts—The Little Rascals (a.k.a. Our Gang), newsreels, the
original Lone Ranger series,
Laurel and Hardy and Three Stooges one-reelers, and the like. [Among the many
praiseworthy reasons that I have never been able to bring myself to jump on to
the seemingly backless bandwagon of microgenerational solidarity—the peremptory
fiat that as a nominal adult one should tog oneself out cap-a-pie in the cultural
bric-a-brac that was explicitly fabricated by the culture industry of old for
the consumption of children within two or at most three years of one’s own age
(such that I, being born in 1972, am required to collect memorabilia of the live-action Incredible Hulk show but prohibited from dropping
references to H. R.
Pufnstuf [target birth-year-swathe: 1965-1969] or He-Man
[TBY-S: 1974-1978]), not the least compelling (or praiseworthy) has been my
awareness that courtesy of the sheer dumb luck of the draw a significant
proportion of my childhood and teenage television viewing was devoted to movies
and programs that had been produced years or even decades before I was
born—that, indeed, I actually found it harder to avoid The Little Rascals &c./et al. than most of the
official kiddie-targeted programming of that time; an awareness that is
naturally consubstantial with the surmise that most of my exact and near-exact
contemporaries were likewise all too familiar with many of the cinematic and
televisual mainstays of (in the words of the voiceover lead-in to one of the
newsreel-rebroadcast series) “those thrilling days of yesteryear,” and that
their garish display of enthusiasm for their own microgenerational niche is
made not entirely in good faith.]) The late-night offerings of the
independent stations tended to be drawn indiscriminately from the vast but
finite pool of non Oscar-awarded R-rated movies released between five and
fifteen years earlier. By this I mean not that they were necessarily bad
movies or even good movies that required any Golden Gate Bridge-dimensioned
suspension of disbelief to appreciate—I am not talking here of, for example,
the stereotypical late-night B-grade horror movie (e.g., Night of the Living Dead), the
likes of which in our market tended to be shown only on or around Halloween [in
contrast to the stereotypical daytime B-grade horror movie (your Peter Cushing
‘60s Hammer anvils, Stateside ’50s Werewolf operas, &c.) which were
regularly seen on the Saturday afternoon Creature Feature slot presided over by the imitable yet
irreplaceable Dr. Paul Bearer]—but merely that, having generally not been
graced with the most lavish budgets, they tended not to sport the flashiest
cinematography, being prevailingly composed of mid-focus interiors rather than
deep-focus outdoor panoramas. Moreover, they tended to be presented with
a minimum of broadcasterly second-directorly intervention: only the predictable
recurrence of the usual commercial breaks dissuaded one from believing that
the entire screening was being superintended by a single feckless dogsbody of a
station graveyard-shifter who had simply pointed a camera at a projection
screen and stepped out for a two-hour cigarette break. But beyond and
probably above this, the relative age of the films imparted a peculiar aura of
insalubriousness to them. I am not talking here in the main about the
films qua medium-alienable documents of a
particular historical microepoch—although that quaness certainly did come into
play—but about the age of the physical print, the three or four canisters of
celluloid that served as the material basis of the broadcast. You see,
over time—and even a fairly brief time at that—old-school color film has a
tendency if not exactly to fade then at least to apply to itself a treatment of
spectroscopic selection, as a consequence of which greens and blues soon find
themselves being crowded out by reds, yellows, browns, and oranges. Even
in the all analogue days this process could be sharply retarded if not quite
arrested by scrupulous storage within certain humidity and temperature
thresholds, but only a small fraction of the total volume of cinema-ready
film-stock was ever vouchsafed such storage, and the fraction of that fraction
that ever made it on to the late-night non network-bolstered airwaves must have
been infinitesimal. One assumes that in 1977 George Lucas took whatever
pains preservationists assured him would be necessary to keep the master prints
of Star Wars in pristine condition for the six(!)
years that remained until its first television broadcast, such that throughout
that broadcast I felt more or less as though I were reprising my only previous
viewing of the film, in some north Tampa cinema, half a lifetime (as far as an
eleven-year-old was concerned [the phenomenon is beautifully encapsulated by
Dean Stockwell in Paris, Texas])
earlier. One likewise assumes
that in 1980, Michael Ritchie, the director of The Island, a minor thriller
starring Michael Caine, took no such pains over that film’s televisual destiny,
such that throughout my first and only viewing of that film, at 11 p.m. to 1:00
a.m. or thenabouts on Channel 28 or 44 on a ca. 1985 Friday or Saturday
night-cum-Saturday or Sunday morning, the tropical mugginess that was its genius
loci seemed to emanate principally not from its eponymous setting, but rather
from the material tissue of its celluloid base, which looked as if it had been
liberally smeared with a preparation of Red Dye No. 5-infused Vaseline.
And generally speaking, if one were pressed to choose one adjective to describe
the look of televised 35-mm cinema during our period, one would unhesitatingly
opt for hot—provided, of
course, that that word were purged beforehand of every last dram of
extra-thermometric connotation (i.e., of all its associations with sex, pre-bop
jazz, &c.). Supersaturated as these moving images were with the
aforementioned reds, browns, and oranges, they could not fail to give the
impression that the world they depicted was one of oppressive, and indeed
life-extinguishing, warmth. “But are not red, brown, and orange the
signature hues of autumn—of the season of ever-crescent coolness?” Indeed
they are, but in order to impart to the viewer a sense of this E-CC these hues
must participate in an image of superlative crispness,
an image in which each and every fallen leaf and newly bared tree branch (or
pumpkin or turkey wattle) is sharply and cleanly set apart from its neighbor,
as if these items have just been freeze-dried out of all tendency to fraternize
with one another. When the constituents of an autumnal-paletted image are
allowed to bleed into each other, the effect is one of ever-crescent hotness abetted by an equally
ever-crescent humidity; as
if the entire composition is about to implode into a single undifferentiated
glutinous mass of infernally superheated goo. And owing to the
axiomatically blurrier resolution of panned-and-scanned 35-mm film (and
possibly also to the remoteness of the print used from the master print [I
remember reading somewhere long ago that the films shown by local TV stations
were copies of copies of copies of &c.]), it was this second genre of
autumnal-paletted image that the typical late-night independent
station-screened movie habitually presented. Watching one of these movies
from beginning to end was like sitting for two hours in a sort of sauna with a
Victorian dress code, but frequented exclusively by people hailing from the
1970s and early 1980s. I suppose the spiritual nadir of my late-night
indie-station movie-watching career came at the very end of our period, or
perhaps even slightly after that end, during my spectation of Hardcore, a relentlessly grim
1979 release in which George C. Scott plays some sort of ultra-conservative
Bible-thumper (a member of the Dutch Utterly Unregenerately Unreformed
Protestant Church of Christ the Unregenerate Underwear-Nonchanger, as I recall)
trying to retrieve his runaway teenage daughter from the sub-demimonde of
illegal pornographic movie production. Perhaps not quite needless to say
[for I understand that nowadays on most channels almost anything goes after a
certain hour of the night], the film had been edited to such an extent as was
intended to forestall its bringing the blush to the cheek or shudder to the spine
of a person even younger than myself, such that what survived of the sex and
violence could not on its own have been enough to traumatize me. But
combined with the saunified and infernalized autumnal palette, it proved quite
unbearable. In scene after scene, the heroine was forced to submit to
some unspeakably terrifying or degrading act while cooped up under lock and key
in a “King-of-the-Road”-ishly small closet of a room carpeted in a fiery ochre
shag fabric whose every shageme seemed to be straining to lick one’s eyeballs;
a room whose hazily white windowless walls deprived one of even the hope of
escaping to cooler, freer, safer, kinder, or otherwise more wholesome environs.
But
all the afore-described visual deformations, for all their oppressiveness, by
no means exhausted the reservoir or mine of misery supplied to me by the
televisual screening of 35-and-up-mm cinemagenetic movies, for there was also
an auditory (or audiogenetic) component to this misery. (Incidentally, the
present paragraph even less arguably constitutes a digression than the earlier
audiocentric passage does or did, although—and here the reader will simply have
to take my word for it—the arguing to that effect is best postponed to a later,
if it-is-to-be-hoped not too distant, paragraph.) Even in the pre-Dolby
all mono days of ca. 1950-1980, movies produced for the widescreen cinema were
soundtracked with a three or four digit-watted sound reproduction system in
mind, a sound reproduction system capable of re-delivering the timbres of steam
whistle and howitzer blasts, quadruple-fortissimo orchestral tutti, and Niagara
Falls at their original volumes and then some, and (what was even more
important) with a faithfully uneven distribution of those volumes among the
panoply of frequencies each of these timbres idiosyncratically participated in
(such that, for example, the steam whistle would be expected to have a very
loud high or treble end, and the howitzer blast a very loud low or bass one);
such movies were also, it should also be mentioned, soundtracked with a large
and happily wide-awake audience in mind, an audience that was positively aching
to have its eardrums pummeled with such larger-than-life mimeses of such
louder-than (everyday)-life phenomena. The audio-reproductive powers of
the average and indeed even the highest above-average television set, on the
other hand, were confined to a single two to four inch-diameter’d loud (or
rather soft) speaker cone powered by a one or at most low two-digit watted
amplifier. It was a setup roughly consubstantial in strength with a
largish but still eminently portable transistor radio, and ideally suited to
conveying ordinary conversational speech at the volume it would be heard in an
ordinary actual chinwag, and hopelessly un-cut out for conveying any sound even
slightly louder or bassier or treblier than that. One assumes that
shortly before or after the introduction of stereo TV at the very end of our
period, television sets with more forceful and capacious pipes began to be manufactured,
but none of these made it into our house until long after I had moved
away—indeed, possibly not even until the present millennium. For the
entire duration of our period, the vanguard or upper threshold of my domestic
television-programming consumption was marked by a nineteen-inch set with a
sound-reproducing apparatus that one could efficiently muffle with the palm of
one’s hand (yes, even one’s not-yet-full-sized child’s hand). Only once,
twice, or at most thrice a year was this pauperly audio regimen enlivened by
louder and higher fidelity fare: these were the occasions of the so-called
stereo simulcasts, when, with the laudable aim of giving the best possible
presentation to some (usually live) broadcast event of nationwide significance
or historical importance—e.g., the National Symphony Orchestra’s annual
Independence Day concerts, or an operatic or orchestral performance
commemorating some milestone in the career of some eminent singer, composer, or
conductor—one of the public television stations would team up, as they say,
with their shared so-called sister station on radio; such that the radio
station would double the soundtrack of the broadcast in its usual high-quality
FM stereo sound. On these occasions, and these occasions only, one got a
televisual earful that one was more inclined to invite in than to block out,
and that was indeed not markedly inferior to the best sort of televisual earful
I am now capable of wresting from the 1990s-spanning sound-reproduction system
to which my television and laptop are both connected; or, indeed, I suspect, to
the best sort of televisual earful the average present-day American household
is capable of wresting from whatever combination of gadgets it employs for the
conveyance of the sonic part of its motion-picturely genres of choice.
You see, my family’s living room high fidelity system, although hardly
exorbitantly high-budget, was of borderline audiophile quality, in carrying and
emitting, as I recall, a full hundred watts per channel, and as near as I can
tell the amplifying and emitting side of audio reproduction has not improved
much or perhaps even a jot in the intervening three decades, except perhaps at
the bottom end, such that while in 1987 I would have preferred silence to an
audition of, for example, a Mahler symphony over a portable monophonic cassette
recorder, I will now grudgingly if not quite cheerfully in a pinch (i.e.,
essentially, the pinch imposed by travel) stoop to listening to a conductor’s
complete discography of Mahler over my laptop’s invisible built-in
speakers—this because amazingly enough the engineers have somehow contrived to
impart to the laptop the sound-reproductive capabilities of a mid-1980s
ten-watt stereo so-called boom-box. And as for analogue FM sound—why, to this
day, when the weather cooperates, I will always record the Saturday matinee
Metropolitan Opera broadcasts from the FM analogue over-the-air signal provided
by my local so-called classical music station (WBJC) in preference to BBC Radio
3’s simultaneous digital online feed. But here I really am on the verge
of beginning to digress, because any further speculation on the merits of an
analogue versus a digital audio signal will inevitably bring us back to the vexed
(if not necessarily Ivesianly unanswerable) question of the weightiness
of the detractions introduced by digital compression. Let it suffice for
us to take away from this mini-excursus on the stereo-simulcasts the inference
that if such simulcasts had been the norm rather than the very rare exception,
and more specifically, if they had by some economically incomprehensible logic
been extended to broadcasts of 35mm+ movies, my infantile and early-youthful
disposition to such movies might very well have been very different, which is
to say much more favorable. As it was and so happened, the audio segment
of each and every inch of 35mm+ film footage I became acquainted with outside
the cinema before the age of 14 at the very earliest was conveyed to me via the
impossibly small medium of the aforementioned child’s hand-sized single
television speaker. If I had only ever taken in such audio segments in a
state of bright eyed-cum-bushy tailed wide-awakeness, while seated bolt upright
on the living room sofa and bathed in the Kellogg’s raisin bran-worthy light of
the matutinal sun streaming through the living room curtains (for our living
room faced and indeed still faces east), I might (again) have been much more
favorably disposed to them. But owing to the typically nocturnal
scheduling of 35mm+ offerings, I was perforce obliged most of the time to hear
these segments at night, and owing to the double-digit hour sleeping schedule
imposed on me by nature and nurture working in tandem, I was perforce obliged
to hear them in a state of sub-alertness usually verging on or decaying into
somnolence. Naturally, my mandatory retirement hour was not static
throughout our period: at this period’s beginning (or rather the portion of its
beginning that begins with my earliest memories—i.e., ca. 1976) I suppose I was
made on so-called school nights (i.e., Sundays through Thursdays) to go to bed
by seven, while by its end I am pretty sure I was allowed to stay up until
ten. And on each of these school nights, I was required to sleep in my
bedroom, which did not have a television. (Whether or not I have any
right at this point to thumb the underside of my suspenders [i.e., the things
that hold your pants—or, rather, trousers—up, not the things that hold your
stockings up] and nod with the unsmilingly smug equanimity of a bow tie and
boater-sporting nonagenarian male member [!] of the minor New England gentry
naturally depends on whether the reader has reacted to the preceding sentence
with a look of appallment of the sort the most polite Anglo-Saxon does not
blush from obtruding upon a fellow Anglo-Saxon who confesses to having grown to
a full and healthy adulthood without having seen, let alone used, so much as a
square of toilet paper [I understand the Continentals have their own apparatus
for cleaning up en bas];
which of course in turn depends on whether or not a television is now something
children are universally expected to have as readily to hand as toilet paper, a
disjunction that, being a childless single man who for the past twenty years has
made a so-called beeline for the nearest open or force-openable exit the moment
anybody has mentioned his or her child or children, I am naturally incapable of
resolving. But I think I should at least mention the look and the
counterlook, lest, in case the television-owning child is now indeed the norm,
the reader should suspect I grew up without toilet paper as well.) On Friday and Saturday nights, a more relaxed
dormitory dispensation tended to prevail: while I was invariably still required
to turn in at an hour that was early by adult standards (albeit perhaps a good
two hours later than the school-night hour), I was often allowed to take my
rest in a sleeping bag laid out on the floor of the living room. And
because at least until midnight on each and every one of these Friday and
Saturday nights—or, rather Friday nights-cum-Saturday mornings and Saturday
nights-cum-Sunday mornings—the television was both on and turned up to full (or
at softest half) volume, my sleep was then frequently interrupted (or, during
dreams, pervaded) by the din of howitzer blasts, Niagara Falls, quadruple-forte
orchestral tutti, and the like, as filtered through the distortion-ridden
medium of a single digit-watt soft-speaker cone being violently shaken almost
to the shredding point. On the whole, it was awful. Mind you, I do
not wish to blame my parents for this awfulness, for the author of my misery
was without a doubt myself, as I regarded sleeping in the living room with the
television on (and with the lino-shelled concrete living room floor for a
mattress!) as an unalloyed treat,
and would have been inconsolably disappointed (as indeed I very probably was on
more than one occasion) if I had ever been deprived of that treat (as indeed I
very probably was OMTOC). But what phenomenon in life is more common than
the treat that is really a trick? The living room weekend sleepover of my
late single and early double digits now strikes me as a sort of
practical-aesthetic forebear of the weekend bar-hopping of my late twenties and
early thirties (or, to record the chronology more frankly, mid-twenties through
mid-thirties): a genre of situation one sought out over and over again even
though it seldom if ever yielded any moments of pleasure. I suppose the
compulsion in each case was catalyzed principally not by the expectation of
pleasure but rather by the dread of pain at not finding oneself where the action was—which
essentially meant finding oneself not behaving in conformity with the
species-being of one’s then-current Shakespearean age of man, of being cooped
up indoors (whether the door was that of one’s childhood bedroom or of one’s
adulthood apartment) when one was simply supposed as a red-blooded nine-year-old to be
staying up as late and catching as much late-night television as one could; or
as a red-blooded twenty-nine-year-old to be staying up late and downing as many
400 percent-upmarked beers as one could; such that it is probably unjust to
upbraid one’s former self for being unduly perverse in his choice of
recreations, inasmuch as one’s present self’s aversion to such recreations most
likely springs in the main not from any subsequently acquired insight into
their essential pointlessness and pleasurelessness, but rather from one’s sense of
their unseemliness in a register that in virtue of one’s
place on the amusement park-ride conveyor belt of life one simply cannot avoid feeling. Another way of putting
this is to say that were I now suddenly presented with some document attesting
with indefeasible officialdom to my being nine or twenty-nine years of age, I
would very probably immediately (or, rather, at the next Friday or Saturday
night) betake myself to the nearest living room floor or drinking
establishment, respectively, in flagrant contradiction of my
forty-one-year-old’s flagrant lack of interest in frequenting such a locale.
Speaking
of flagrancy, the past dozen or so sentences have indeed amounted to a flagrant
digression from our topic; and yet despite their flagrancy, I make no apology
for them, although I will tender the following apologia for them: that they
amount to something that I would very much like to say in some setting, and the
present setting seems better than any other that I can imagine. “But why
not,” you ask, “simply write a separate essay on the factitiousness and experiential
imperviousness of age-designated recreations?” Because, I retort, such an
essay would perforce have to include an exposition of my childhood self’s
late-night weekend routine, an exposition that would perforce bore the pants
(or trousers) off anybody who had already read the present essay; and,
moreover, make me look like a right besonnenheitslos git for banging on verbatim or nearly
verbatim about something I had banged on about some weeks, months, or years
earlier.
But
anyway, by this point both the digression and meta-digression have spent
themselves, and so let us return to the topic digressed and meta-digressed
from, namely the late-night audiophenomenology of my childhood, via an
exemplum, an aural analogue to the above-described George C. Scott movie,
namely the lead-ins and lead-outs of the network Given-Night Movies.
Whether any particular network or night is more to blame than any of the others
for these lead-ins is beyond my ken, or, rather, my will to find out. All I
know is that throughout our period it seemed as though no cinemagenetic movie
could be shown at night on a network in the absence of a veritable battalion of
heralds seemingly equipped with the full panoply of orchestral or wind-bandial
brass instruments (trumpets, cornets, trombones, French horns, sousaphones,
etc.); a battalion that thrice or four times an hour would usher the movie into
or out of a commercial break with a raucous and extensive quintuple (sic)
fortissimo tucket, sennet, or fanfare; a tucket, sennet, or fanfare of such
loudness and length that no matter how drowsy I had been before I laid me down
to sleep, I could count on being roused every ten or fifteen minutes from that
sleep as thoroughly as if by the last trump. How could I fail to develop an
animus against such an interlude, and against the medium to which it served as
a wrapper, namely telecast 35mm+ film?
Whence (i.e., from this animus), at long last, to my
itemization of the virtues of the first of my two eponyms, and simultaneously
to a very probably gratuitous resolution of what should be a glaringly apparent
contradiction in my argument, although TBT/TTTT, I strongly doubt any empirical
reader of this essay will yet have noticed it. (Such is the helotophilic
myopia of empirical readers nowadays [i.e., roughly since the last turn of the
century but one]: refer to a historical microepoch ever so passingly without
appending the full statute book of rights that were then denied to so-called
underprivileged groups {for my unabridged assault on this bugbear, please see
my essay “Gluttony and Panpsychism”} and they will call the police on you, but
maintain that orange is puce in one sentence and puce green in the next and
they will scan the two sentences in succession as unperturbedly as a steamroller
gliding over two adjacent squares {or whatever they’re called} of
sidewalk.) The apparent contradiction becomes nascently apparent as early
as the second sentence of my second paragraph, when after having described the
advent of television as the most recent “great fall” “since the dawn of the
industrial age,” I complain of having been “gobsmacked” by the digitization of, inter alia, the moving images
shown on television. “Why,” the attentive and empirically very probably
nonexistent reader will query at this point, “should you be ‘gobsmacked’ by any
change impinging on television when you have effectively denounced television
as a barbarous medium?” The answer to this question may smack of willful
frowardness in virtue of its cursed circuitousness, but as it happens to be the
truth—or, at any rate, as near as I can get thereto—it deserves a full airing,
as follows: “Like any person born in the developed world (and, indeed the
developed world plus perhaps the better part of the undeveloped world) in the
past half-century, I was at least diurnally bombarded by televisuogenetic
sounds in the womb, and by televisuogenetic sounds combined with
televisuogenetic images from the earliest moments of my egress from that
chamber and thence for many years, certainly long past my attainment of the age
of discretion. From this fact it follows that I could never have started
out by regarding television as barbaric and that any sense of the barbarity of
the medium I have since acquired has arisen via some sort of process of subtraction, of a sudden or
gradual (and in either case sustained or recurring) withdrawal from the regimen
of glare and din that I was initially perforce compelled to regard as
altogether natural and civilized. And the initial and as it turned out
definitive moment of such subtraction was provided by videotape. I have already made passing mention of the
videotaped daytime educational children’s programming of my late single-digit
years; and it was almost certainly from the schedule of those offerings—in
other words, the daytime schedule of our market’s older and more prestigious
public television channel, WEDU, which broadcast nothing but educational shows
for children from summer-solstice dawn to winter-solstice dusk, Monday through
Friday—that I first came to appreciate the relative coolness, quietness, and
politeness of videotape—albeit probably not at school itself, as there I was
forced to encounter these offerings via a television set that in all its
essential technical specifications was identical (or perhaps even slightly
inferior) to the crummy nineteen-inch one back home, and the so-called
classroom setting introduced its own cluster of distractions, chief among them
the distance imposed by the necessity of making the image visible (at least
barely) to twenty or thirty viewers at once as against three or four; and the
off-putting glare imposed by the in-hindsight inscrutable policy of keeping the
overhead fluorescent lights on during the viewing session (I call the policy inscrutable because the lights were always
switched off for the viewing of a filmstrip or
proper projected [16mm!] movie.) In any event, I am certain that my
primal moment of WEDU daytime educational programming videotape-engendered Ruhe is or was sited not in any schoolroom
but in the living room (or parlor as
it was called) of my grandparents’ house, specifically a spot thereof lying a
few inches in front of their wood-encased 24-inch console set (yes—the very
same 24-inch console set that served as the model of the set mentioned in the
above exemplum of the early late twentieth-century paterfamilias). What
business I had spectating on that programming block in a place sited (I refuse to stoop to using the trisyllabic
ell-word for mere elegant variation’s sake) a full fifteen miles east-southeast
of my school I do not know: perhaps I was ill with a fever, as I seemed to be
every other week throughout my single-digit years, or perhaps WEDU did not
alter its daytime schedule during the summer vacation, either in deference to
summer school students (of which, I am smug to say, I never was one) or out of
a lack of will or funds to scare up more seasonable air-fare. In any
event but the last one, I remember finding a particular image or rather pair of
images from that particular programming block, as seen through that particular
set, particularly soothing. The pair of images participate in a genre
that I believe is known in the so-called business as an ID card, as if to cause maximum
confusion with the other sort of ID card, the one one flashes at security
guards, bartenders and the like—I mean something that was broadcast between
programs or program breaks and that displayed the station’s channel number,
call letters, and region of service, all for the benefit of the presumably
quasi-literal handful of people—out-of-towners or locals who watched TV only
very infrequently—who had not committed such coordinates to memory for all six
or seven broadcast stations in the area. As I recall—and I invite the
statistically non-existent reader with documentary evidence to correct me if
I’m wrong (yes: I’ve already checked Y** T**e, and their earliest WEDU ID card
appears to date from the mid-1980s)—the foundation of the WEDU daytime weekday
ID was apportioned among vertical swathes of two colors—yellow and blue, a
yellow tending conspicuously towards neither lemon nor eggshell, and a blue
tending equally conspicuously towards neither navy nor powder. The left
edge of the screen commenced with a band of blue extending rightwards no farther
than an eighth of its (i.e., the screen’s) total breadth, at which point its
work was taken over by a band of yellow occupying perhaps the remainder of the
left third of the screen, the remaining two-thirds (or thereabouts) thereof
being re-ceded to blue. This rightmost blue field hosted the business end
of the card’s business—viz. displaying the above-mentioned coordinates, plus
some sort of schedule-specific motto along the lines of “In-School Programming
Service.” The strip of yellow, on the other hand, served to set off a
purely pictorial element, an unshaded black line drawing-like graphic whose
original might have been (meaning actually may have been) executed
in charcoal. Of the two of these graphics that I remember, one was of a
world-mapping globe set in one of those stands that look roughly like an
inverted umbrella minus the actual rain-shielding bits, and the other of a
studious-looking and possibly periwigged gentleman sitting at an elevated
writing desk of the sort one imagines Bob Cratchit sitting at, a desk whose
upper surface was entirely occluded by an open folio-sized book (Yes: perchance
it was even the First Folio). Some might call
this or these a rather stuffy and prosaic starting-point for a lifelong (or, at
any rate, thirty-something years and counting-long) romance. But in
hindsight the weekday ID card of WEDU of the early late 1970s seems almost
calculated to showcase the signature visual virtues of the medium of
videotape—a more than figuratively electric vividness of color, a sharp and
completely untransgressable division between fields of specific color, and a
pin-drop disclosing quietness (for this card was as static and
silent as any ordinary, tangible card of any genre [although, to be sure, its
not-much-younger successors in later years would be enlivened and cluttered
with all sorts of Named-Night Movie-ish bric-a-brac—bumptious basso voiceovers,
space-operatic digital animation sequences, and, yes, tuckets, sennets, or
fanfares {although these tended to be more dynamically understated than their
nationally broadcast counterparts, doubtlessly because they were obliged to
exploit a less expensive and hence smaller complement of session musicians})—in
short, the very virtues I had come to find so signally lacking in telecast 35mm+
film. Naturally, as the statistically nonexistent empirical reader
will have surmised, the interval between my initial smittenness by these
virtues and their incorporation into my organism as my personal desiderata and exspectanda for any session of television viewing
was a long one. The chief back-holder of this incorporation was the fact
that for many years the WEDU daytime weekday schedule was the only
uninterrupted multi-hour stretch of videotaped television programming I knew of
or had access to and that this access was, in the very nature of things
occurring routinely in the Alltag of an elementary-school pupil,
intermittent and, as already mentioned, generally imposed in an
even-less-than-ordinarily-less-than-ideal setting. I would conservatively
and roughly date the moment of incorporation to my early pre-teen double-digits
(“In other words when you were aged either ten or eleven?” Yes, I suppose so.)
when, for reasons best known to themselves (if even poorly known by anyone at
all now) my parents began at least occasionally taking in the Saturday-night
prime-time offerings of WEDU. On these WEDU-dominated evenings, virtually every
minute of television I saw, from Washington Week in Review starting at about
7:00 to the test pattern in-ushering Star Hustler at 11:00, 11:30, or even
slightly after midnight, depending on the length of that night’s Doctor Who serial, was founded in
videotape. The centerpiece of this schedule was undoubtedly the so (or
should have been)-called British Block, consisting of Doctor Who followed by a half-hour episode of
some east-of-the-pond-originating comedy program—Monty Python’s Flying
Circus, Fawlty Towers, Not the Nine O’Clock News, Sorry (a show ridden with especial pathos
for me at this moment, only 28 hours before my forty-second birthday, as I
recall its hero Timothy Lumsden [played by Ronnie Corbett]’s non-eponymous
signature, “I’m forty-one years old, Mother!”), To the Manor Born, and Only
When I Laugh pretty much
comprise the complete Golden Age catalog thereof. (’Allo ’Allo!, Are You Being Served?, and Keeping Up Appearances did not begin airing on WEDU until the
early nineties—which was just as well, as I’ve never cared much for any of
them.) But it would be entirely wrong to view my infatuation with
videotape as an epiphenomenon of my coevally nascent Anglophilia, for some of
my most treasured memories of the prime-time WEDU schedule hail from its
Stateside fringe: for example, a segment from a show called Alive from Off-Center hosted by
National Public Radio’s Susan Stamberg, a segment nominally centered on a dog
named Man Ray that made especially arresting use of the canine’s namesake’s
prevailingly monochrome photocollages-cum-self portraits in conjunction with a
color-separation-overlay background of electric magenta; or the general mises-en-scène of such shows as the aforementioned Washington Week in Review, the
first show I ever saw set in the eventually quite commonplace setless environs
of the black void (its butcher’s half-dozen strong panel
of journalists hebdomadally clustered around a conference table that might as
well have been a gigantic raft-shaped piece of space junk, for where the next
tangible object beyond that table lay was any viewer’s guess), and Austin City Limits, PBS’s
answer to the Grand Ole Opry, in which a stage gel-empurpled butcher’s
half-dozen strong band of beardy, Stetson-sporting ladies (sic) and gents would
sit leisurely-ly pickin’ ‘n’ grinnin’ for ninety straight minutes against the
starkly naturalistic background of its Texan eponym’s low-slung starlit (and
hence patently nighttime) skyline. And
the leisureliness of ACL puts me in mind of yet another virtue of
videotape-founded television, and indeed perhaps its greatest virtue, which I
have so far neglected to mention perhaps only because it is (perhaps) the
hardest to substantiate in terms of any demonstrable intrinsic properties of
the medium itself—namely, its nobly relaxed pacing, its sublimely unhurried way
of dealing with time. While viewing
virtually any film-founded telecast one could not escape (and indeed can still
not escape) the feeling that one was being constantly (and indeed &c.)
jostled along willy-nilly from one shot to another like some budget package
tourist through a museum, and catching only a succession of all-too-brief,
fugitive glimpses of the world ostensibly being captured by the camera. In contrast, while viewing virtually any
video-founded telecast, from the most ineptly produced local university credit
lecture to the BBC Shakespeare’s realization of The Tempest, one invariably
seemed to be afforded the privilege or luxury of lingering over every segment
of that world as long as one needed to in order to grasp its essence, both in
relation to the immediately ambient diegesis, and as a thing-in-itself. Not until I became acquainted with the films
of Tarkovsky, in 1999 or 2000, did I meet with a cinefilmic mise-en-scène-ic
habitus that rivaled this videotapic one in point of this virtue, and I can
count on two-fifths of one hand the number I have met with since—viz. those
of Cassavetes and Tarr. The question is,
whence did this virtuous lackadaisicalness hail, and why should it have been so
widespread--nay, essentially universal—in the world of televisual videotape? Obviously television attracts its share, fair
or otherwise, of great talents, such that it would not be at all surprising to
learn that at some point in the 1960s or -70s some humble assistant director or
floor manager at Yorkshire Television or WBFE Poughkeepsie alighted upon and
mastered the same means and manner of treating old kronos as were concurrently
being alighted upon and mastered by big-name (albeit lone-wolfishly
idiosyncratic) cinefilmic regisseurs in Moscow and Los Angeles. But as for the notion that, say, ten thousand
such obscure persons simultaneously alighted upon and mastered such a means and
manner, and simultaneously voluntarily kept mum about the discovery, and
subsequently contented themselves with toiling away in continued and presumably
indefinite obscurity—that is obviously as absurd a scenario as the proverbial
one about the typing monkeys. Clearly
the relaxed chronomancy has to be owing to some property intrinsic to videotape
(itself), or to the way this medium is (or, rather, by now, was) universally
and reflexively handled in television studios.
For a long time I used to think I had discovered this property via a
certain discovery about cinefilm that I made at an embarrassingly late age (an
age that I will not reveal—though the revelation that I made it via an
essay on Béla Tarr will at least provide the reader with a terminus a quo
thereof)—the discovery that at the production or [to appropriate a seemingly
indispensable word from the videotapic lexicon] recording end all gauges
of film—from Super-8 through 16 and 35 mm right on up to IMAX—were restricted
to a maximum reel length of ten minutes.
(Incidentally, the reader may get a sense of the level and flavor of
disillusionment this discovery induced in me by picturing himself or herself
opening a state-of-the-art digital camera to find it containing a wee bird
whose job is to peck out in Flintstonian manner a likeness of the image seen
through the viewfinder.) But of course at
first blush it seems implausible that I of all people should be disillusioned
by the discovery that every feature-length cinefilm-based film ever made has
been pieced together basically literally with scissors and sticky tape out of
several-to-many ten minute-long little bits; for after all, wasn’t my love for
pre-digital cinema predicated from the beginning on its presumptively more
intimate relationship with the human mind and hand? At first blush it seems that the discovery of
such piecemeal-ishness should make up for (and perhaps even then some) my
disappointment in the discovery of the fundamental quasi-digital properties of
cinefilm, in rightly rendering back to God-like man what had wrongly been rendered
to the Caesar-like machine. But in truth
there is no paradox here. For in what
passes for the mind of the early twenty-first century would-be subject (e.g., I
say neither proudly nor shamefacedly, in the mind of the present writer), there
is a firmly defined and Hindoo caste-like hierarchy of labor. In such a quasi-mind, any kind of labor plausibly
answering to the name of artisanship is prized. This is labor regulated by a ritualized set
of codes and prescriptions handed down from generation to generation, as they
say. It is above all a kind of
labor that is only seen from the outside, that is opaque, inscrutable,
mysterious—whence the medieval and early modern synonymity of the words mystery and profession—and
therefore imparts an aura of authority to its process and an aura of smoothness
and infallibility to its products.
On the lower stratum is a mode of labor known in French as bricolage,
and in English as Do-It-Yourself or handyman-work. This is labor as seen from the inside, labor
that is improvised as needed, on a so-called ad hoc basis, out of available
materials and whatever ingenuity happens to suggest itself at the exigent
moment, the labor of shoestring and bubblegum, of sticky tape and scissors—of
any number of dyadic idioms that in virtue of their heterogeneousness impart an
aura of incompetence to this version of labor’s process and an aura of roughness
and ramshackleness to its product. That
one man’s artisanship is another man’s bricolage ought to go both
without saying and without provoking any finger-wagging or nose-pinching: one by
all rights ought not to be scandalized to find the airline pilot, the
wheelwright, the surgeon, the silversmith or indeed the film editor, making
shift with the same sorts of makeshifts one recurrently makes shift with in the
diurnal course of plying one’s own trade or managing one’s own Alltag. But regrettably one is thus scandalized, and
by this universal and universally selective scandalization the universal and
universally unselective fetishization of the commodity—a fetishization that is
the mainstay and principal prop of the present system of life—is safeguarded. End of digressive soapbox speech—and
beginning of resumption of main argumentative thread, as follows: if there was
one thing I had learned about videotape during my by then already
embarrassingly ancient period of acquaintance with the medium, it was that it
was not apportioned in intervals capable of registering no more than ten
minutes of time—that, indeed, even in its humbly domesticated format of the VHS
cassette, the smallest length of recording time allotted to videotape
was thirty minutes, and that the most common duration in that format was 120
minutes, which, by dint of a retardation of the recording speed, could be
extended to as many as 360 minutes, or six whole whopping hours. Whether these luxuriously protracted
recording intervals were even matched, let alone exceeded, by the stretches of
videotape used in the recording of television shows I did not know and indeed
still do not know for certain. But it
seemed and indeed still seems reasonable to assume that what was attainable in
the dumbed-down domestic version of the medium was attainable on an even
grander scale in its professional version (This seems like an instance of a
phenomenon worthy of a surname-owned law, does it not? “Then why not make it your own?” Because you don’t get two shots at having a
law named after you, and I would very much like Robertson’s Law to center on
something at least a bit less parochial and prosaic than the domestication of
technology.), that the producers of videotaped television programs had an even greater
durational edge over their cinefilmic counterparts, that they had at their
disposal not merely twelve or thirty-six times but fifty or a hundred times as
much uninterrupted time as the producers of proper filmed films. And such being presumably the case, and all
other things being equal, one would have expected and indeed would still expect
a videotaped film-like entity to have a more relaxed look or feel than an
actual filmed film. For after all, a
cinefilm-based film can record no complete, uninterrupted so-called real-time
action of longer than ten minutes’ duration.
Consequently the user of cinefilm cannot but feel a certain amount of
pressure to get as much as possible into a single ten-minute shot. To be sure, if his budget permits (although of
course it often doesn’t), once one camera-load of film has run out he can
simply resume filming with a fresh camera-load, but assuming he is working with
subjects more active than boulders, even if he manages to keep the camera
perfectly still throughout the interval of reloading, some trace of movement in
the finished product will always betray the composite shot’s origins in two
discrete time-stretches. So he will
always find it most expedient to get everything that needs to be seen from a
given angle in before the end of the reel, and to begin the new reel by filming
from a different angle. Sometimes the
shift in angle will correspond to a shift in locale, a shift to an entirely
different scene; in such cases there will always be a jarring or, if you
insist, a “stressful” sense of temporal dislocation. At other times this shift will occur within a
given scene; in such cases, provided the director has a good continuity
manager, the shift will quasi-paradoxically afford a smoother sense of a
unified time-strand than continuing to film from the same angle would have
done. Nonetheless, the shift will not be
able to avoid imparting a sense of restlessness to the proceedings; the sense
that although everything is happening in one time and place, no part of it is
worth lingering over for very long—as though it envisages the whole state of
affairs as being registered through the eyes of somebody who is incorrigibly
anxious or bore-able, and who consequently makes the viewer feel like the
above-mentioned budget package tourist. For
the worker with videotape, in contrast, there are presumably no such built-in
pressures, and consequently there presumably need be no such built-in
disturbances of the organism of the viewer of his productions. If a videotape-using cineaste’s actor needs
to utter a veritable two-hour filibusterer’s soliloquy, why should he not be
allowed to do so? After all, the whole
damn thing will only amount to a few thousand inches out of the million or so
remaining on the left spool. “And of
course,” our V-UC may be expected to remark as a jovially smug afterthought,
“if he flubs a line we can always go back and record the entire speech over
again, because videotape, unlike film, is erasable and consequently
reusable.” Such, I say, are the improvements (or, at any rate, things
that strike me as improvements) that, all other things being equal, must have
automatically been gleaned from any switchover from cinefilm to videotape. And there are indeed cases in the history of
the moving visual media in which because all other things really were equal,
such benefits were unmistakably embraced.
Take for instance the oeuvre of the almost invisibly obscure yet for all
that perhaps still grossly overrated American so-called underground cinematic auteur
George Kuchar. His early movies, shot on
sub-35mm gauges of cinefilm in the 1960s and ’70s, are highly compressed
melodramas generally consisting of extremely brief shots in which
world-convulsing reversals of plot often occur within a span of a few seconds. His signature productions of the ’80s and ’90s,
by contrast, are travel diaries, long, lazy documentary snoozefests in which Kuchar
is seen, for example, sitting at a table writing a letter, or sunning himself
on the grass, for a quarter of a static camera-shot hour at a stretch—the whole
new-model kit and caboodle being made both possible and retrospectively
unthinkable by the magic of VHS videotape.
Then there is the abovementioned Béla Tarr’s 1982 all-videotape version
of Macbeth, recorded in just two takes (the second of them roughly an hour long), in which the scene
changes were accommodated through the construction of the entire set as a
single tunnel-like structure, through which the camera processed in tandem with
the action of the play. But both the
Kuchar and Tarr examples are marginal instances, at least with respect to the eponym
of this essay, and the more I learn about the technical side of television
production during the golden age of telecast videotape and 16mm film, the more
I am inclined to see these improvements as they were evinced in my most beloved
golden age programs as a collective epiphenomenon not of the immediate medium of
videotape itself but rather of certain more contingent and evanescent
intermediating phenomena—the high cost of time spent with the recording
machines, and of the mainly unionized technicians authorized to operate them, and,
perhaps above all, the cussed lumbering unwieldiness of the videotape-destining
TV studio camera. Any student of the
secondary literature on the great cinefilmic auteurs of the 1950s, ’60s, and
’70s will have come across scads of paeans to this or that fringe director’s
“pioneering use of the Steadicam” or “…of the hand-held camera,” and from them
inferred that the traditional, established unsteady unhandholdable camera of
mainstream Hollywood cinema of those decades was some sort of Tyrannosaurus
Rex-sized contraption with the mobility of a Brontosaurus stranded in the La
Brea tar pits. But actually in point of
fleetness and flexibility of access that traditional Hollywood
camera was like one of those present-day swallowable pea-sized surgical cameras
compared with the contemporaneous television studio video camera. The TV studio video cameras of the 1960s, ’70s,
and early 1980s, I have recently learned, were like little windows propped up
on massive half-ton concrete blocks on wheels.
You wheeled them into place very slowly and laboriously at the beginning
of a scene, and to that place they were effectively rooted until the shooting
of that scene was over and it was time to shift the production to another set. Thus, for anywhere between, say, five minutes
and an hour at a time the TV studio-bound director was both graced and stuck
with between (at the WBFE-Poughkeepsiean end) two and (at the BBC Television
Centric end) six more or less fixed points of view. Between or among these fixed points of view
he could cut more or less ad libitum infinitumque, but any bit of action
that happened to take place outside the frame of all these viewpoints was
destined to suffer the fate of the unheard falling tree or s***ting bear or
pope. The only practicable means of both
fully exploiting this set-up’s assets and fully eluding its liabilities was to
make sure that the sequence of camera angles was planned out in advance and
that the actors were prepared to perform the scene from beginning to end and
knew exactly where they were supposed to be standing (or sitting, etc. [I
believe the industry’s term for such a place is a mark]) at every line
of the script. Such a system effectively
required the actors to treat each and every scene as they would have treated a
scene in a stage play—in other words, to act on and react to nothing but each
other and the ambient scenery and stage props over the course of some rather
lengthy stretches of time—and consequently to begin to get used to these people
and things, and sub-consequently to begin to comport themselves to them more or
less as they would their counterparts in the real world, and
sub-sub-consequently by default (i.e., when the scene in question is not
centered on a genre of event in which panic is de rigueur) to deal with them in
a fairly and ever-crescently relaxed way. The uniformly resultant results of
such a chronomantic dispensation are (or, rather, were) extraordinary: namely,
that on the one hand a measured, leisurely, understated delivery of dialogue
comes into its own, and on the other even the hammiest overacting comes to seem
an expression of the natural vulnerability of the performer rather than of the
megalomania of the director. Just
picture to yourself the treatment of a typical shot, which is to say a
sub-scene, on a proper film set of the high cinefilmic epoch: A woman walks on
to the set in some sort of undress, be it a teddy or a nightgown, sits down at
a nightstand with a mirror, and begins brushing her hair. She continues brushing her hair and looking
at herself in the mirror for ten seconds.
The director yells “Cut!”; the woman stands up, walks off the set, dons
an overcoat from whose pockets she extracts a book of matches and a pack of
cigarettes, and lights up. Doubtlessly
the woman has been informed by the director that while gazing at herself in the
mirror she is supposed to give the impression that she is thinking apprehensively
about an impending confrontation with her angry or depressed spouse or lover or
son or daughter (or, if you insist, statistically impossible housemate), while
at the same time remarking to herself something to the effect of “Gosh, I’m
looking old. Where have all the years
gone?”; and doubtlessly because she is a highly skilled and experienced actor
of the Stanislavskian stamp, she manages to force herself to think those very
thoughts at that very moment. All the
same, being only human, as they say, and having the boredom and jitters of the
wait for her scene only seconds behind her and the soothing warmth of the
overcoat and the cigarette only seconds ahead of her, she will be unable to
convey a truly convincing representation of an actual woman brushing her actual
hair with an actual brush in front of an actual mirror in her own actual
bedroom, let alone thinking the thoughts that a woman in such a situation may
be expected to think. In contrast even a
mediocre or novice female actor in a golden-age videotaped television production
making use of the very same script as the one used in the above-described shot
will have had little trouble conveying such a sense of naturalness, because the
production routine will have required her to spend much more uninterrupted time
literally in the shoes (or, rather, I suppose, slippers) of the character she
is portraying. Before taking up her
station at the nightstand, she will perhaps have spent a minute or two pacing
about the room or reclining in bed with a book, and immediately after finishing
her toilette she will certainly have to confront an angry or depressed spouse
or lover or son or daughter (or statistically impossible housemate) whose knock
will perhaps be heard just as she is standing up and getting ready to return to
bed. Whether this videotapic actor is as
they say utterly committed to the role or merely regards it as the most
evanescent of meal tickets, whether she has been swotting up on it for months
or has only just read its first line of dialogue today, she is forced to treat
it in a manner that is intrinsically more involved and more serious than that
enjoined on her cinefilmic counterpart.
She cannot think about the cigarette break with the same frisson of
anticipatory pleasure because it is much further in the future, and because
standing between it and the present in so-called real time is a confrontation
with another person. Of course, unless
some uncommon but by no means unheard-of quirk of casting or bit of off-set
drama has decreed otherwise, the person she will be confronted with does not
enjoy or suffer the degree of intimacy with her that the character she is
playing enjoys or suffers with the character her prospective confronter is
playing. Nevertheless, a confrontation
with another human being, be he or she a bosom intimate or a total stranger, is
a confrontation with another human being, and entails a quantum of subjective
trauma that is invariably much greater than that entailed by a confrontation
with even the most intractable cellophane wrapping of a pack of cigarettes.
I
realize that in positing this hierarchical dichotomy of natural televisual
videotape versus artificial cinematic cinefilm I am laying myself open to all
sorts of demurrals that I cannot with a clear conscience (or rather, perhaps,
with a sense that I have not made myself out to be a hayseed) dismiss as
frivolous nitpicking. In the first and
least reprehensible place it will be objected that the studio-centered shows of
the golden age of telecast videotape and 16mm film were not simply Old Vic-realizable
stage plays on which the video cameras happened to eavesdrop, that in the
preparation of these shows there were often edits and retakes and that their
casts and crews were also afforded fourth wall-breaking cigarette breaks. To this demurral I can only counterdemur that
it is all a matter of degree, that in the course of a performance even a stage
actor will be granted several breaks centering on the same sorts of
character-breaking luxuries indulged in by his cinefilmic counterpart. Nevertheless, he or she will be forced to
treat each and every one of his or her scenes as a self-contained world unto
itself, and the same intradiegetic immersion on an admittedly smaller scale
seems to have been habitually expected of the videotaped television actor;
whereas it seems almost never to have been expected of the Hollywood or
Pinewood cinefilm actor. Next, from
certain people who have no problem with the notion that all the videotaped
televisual world is (or was) a stage, it will be objected that there is nothing
less aesthetically interesting than a ‘filmed stage play’—I put the phrase in
inverted commas (a.k.a. uninverted quotation marks) and minus a (sic)
after the filmed because it—i.e., the phrase, i.e., ‘filmed stage
play’—was a sort of indissoluble bit of boilerplate invariably availed of by
the enemies of the great videotape-centered productions of our golden age when
they were trying to encapsulate the essence of their aversion to these
productions, for it seemed that in virtue of an obdurate gormlessness that
cannot have been coincidental, those people who disliked videotaped television
the most usually seemed never to have registered the vast divergence in look
between the two media, and—what was even more baffling and frustrating—to be
impervious to any attempt to draw their attention to this divergence. Time and again, one would find oneself, in
attempting to account for a certain discrepancy in production styles between,
for example, two situation comedies, referring with a sort of tweedy chumminess
to this divergence, would find oneself saying, “Well, of course, you must
remember—as I’m sure you do—that Cheers is filmed, whereas Family
Ties is videotaped,” only to be met with the proverbial blank stare
seasoned with equal dashes of unproverbial stroppiness and suspicion. “No, I don’t remember any such fing,” one’s
interlocutor would coldly rejoin, while jerking his or her eyes to one side
once or twice as if in confirmation of the accessibility of a certain baseball
bat that he expected to have to make use of very soon: “What on f**k’s green
earth are you talking about? Surely a
sitcom’s just a sitcom. They all look
the same, don’t they? Surely the
stchewdios just shit them all out of a single machine, like so many
sausage-like turds (or turd-like sausages).”
“Indeed they don’t, my good man or woman,” one would counterrejoin, no
less tweedily, if a trifle less chummily, than before, “as within each of the
networks that commission sitcoms there prevails a kind of hierarchy, a Hindoo
caste system-like system, if you will, that determines which situation comedies
shall be filmed, and which videotaped.
Now, in the case of a flagship ensemble cast program(me) like Cheers
featuring two bona fide B-list Hollywood stars like Ted Danson and Shelley Long,
videotape has always been hors de—”
Then—chez one, not chez one’s interlocutor—everything
would go proverbially black; and the next thing one knew one would be waking up
face down in the proverbial lawn while clad in nothing but the proverbial
assless chaps. The local bottom line or
gist is that at least vis-à-vis the so-called non-factual sector of
moving-pictural production (a sector that comprehends both television and
cinema-destined documents), the vast mobility of viewers in the golden age of video
tape and 16mm film seemed to conceive of the various genres solely as
expressions of difference in duration: in their eyes a seven-minute-long motion
picture was a cartoon, a half-hour-long one was a sitcom, an hour-long one
was a so-called drama or soap opera, and a 90+ minute one was, or should have
been, a movie. A thirty-minute chunk of
videotaped so-called non-factual television subject to all the above-delineated
constraints on camera placement the mobility could watch with cheerful complacency,
in that, it not being a movie, they did not expect any cinematic flair from
it. And corollarily, as a consequence of
harboring the same lack of expectations, they could spectate on a thirty-minute
chunk of filmed television in cheerlessly blasé heedlessness of its
exploitation of all the above-delineated flexibility of camera movement. But when forced to sit for ninety minutes or
more before a screen of any kind, they peremptorily demanded to be
unremittingly regaled by every trick in the cinefilmic cineaste’s hat, and
kicked up a right poltergeist-worthy fuss at the briefest reminder that the
foundation of what they were seeing was an event of finite situation and
duration, an event that had actually been enacted in some actual place over the
course of an actual sequence of minutes or hours. For that is what the objection to ‘filmed
stage plays’ effectively amounts to: an objection that the long-form videotaped
dramatic production is too realistic, that it reminds one too forcefully
that the sub-world it has registered is a part of our own world and is subject
to all its embarrassing contingencies; that it is not some sort of fairytale
realm beholden to no law other than the intentional fiat of the Godlike
regisseur. When watching one of the classic
videotape-based dramatizations of our golden age—an installment of the BBC
Shakespeare, or a Doctor Who serial, or an episode of I, Claudius—I
feel as though I am spectating on the events enacted before me more or less not
merely as I would spectate on them from the vantage point of the stalls of a
theater, but as I would spectate on them from the perspective of a so-called
fly on the (fourth) wall in so-(and in this case justly)called real life, or,
rather, life as it is being reenacted in a stage-like setting. To be sure, I am not afforded a complete view
of the entire tableau at all or even most times; to be sure, the director is
both restricting and enhancing my view of the scene, mainly by cutting to and
zooming in on whoever happens to be speaking at a given moment, but these are
mostly natural restrictions and enhancements—natural in that they
correspond to the choices I would make if I had eyes equipped like a video
camera, or, to take a more feasible example, if I were viewing the scene
through a pair of opera glasses. But of
course dropping the n-word even a single time is enough to let slip the dogs of
the cinephile intellectual petit bourgeoisie, who will hasten to point out with
rabid smugness that there is no such thing as “natural” movie- or television
show-making, that cinema and television are inherently and fundamentally
artificial media, and that accordingly in the entire history of moving
image-making there has never been a single natural movie or television show
ever made. And nat-, erm, unsurprisingly,
this will not be the first time that I have encountered their ranks or
their objections. I
made their acquaintance no later than 1992, during a college course on the
French so-called New Wave. From the
supplementary reading for the course, I learned of this bloke name of André
Bazin (not to be confused with the Franco-American cultural historian Jacques
Barzun), the founder of this French cinema buff’s fanzine name of (Les?)
Cahiers du Cinéma. Anyway, this
Bazin dude, while not a filmmaker himself, was in the late forties and early
fifties a sort of spiritual mentor to all the jeunes turcs (or turcs
jeunes) who would later morph into the celebrated auteurs of the
(so-called) New Wave—Godard, Truffaut, Chabrol, et five or six al. According to the supplementary reading,
Bazin’s cardinal rule or one commandment of filmmaking was “Mise-en-scène tells
the truth, while montage lies.” By
mise-en-scène the ’Zin meant any stretch of film footage composed of a single
unedited take, and by montage he meant any stretch of film footage composed of
two or more edited-together takes. At
first, so the supplementary gospel reported, the young sparks who were the
future New Wavers (had) lapped up this prescription as though it were mother’s
milk (I apologize for any dissonant overtones of milking the bull produced
by my employment of this hackneyed but unimprovable simile), but then they saw
their first Hitchcock movie (I admit I cannot recall whether it was actually
just one movie and the lot of them actually took it in as a gang), the scales
fell from their eyes, and everything changed—that is to say, from Hitch’s use
of montage, they discovered the marvelous truth-bearing capabilities of this
technique. This supposed supersession of
Bazin by Hitchcock was presented as both monumental and apodictic, as a
(supposed) supersession on the order of Lamarck’s (supposed?) supersession of
Darwin and Einstein’s (supposed!) supersession of Newton (or indeed-stroke-of-course, J. Christ’s
[supposed?!] supersession of Moses). And
na-, erm, unsurprisingly, like all evangelists, the authors of the
secondary gospel of the French New Wave “sound[ed] certain” the reader would
“approve this audacious, purifying, elemental move” (P. Larkin, “Poetry of
Departures”). And unsurprisingly, I did
approve, without at all understanding why, and thereupon, for about the next
ten years, believed that the proper way to “read” a movie consisted in decoding
or analyzing each and every one of its camera angles and editorial transitions,
all of course as a propaedeutic to a grand synthesis of these decipherments or
analyses that would yield up the film’s metaphysical essence. But at some point slightly more than ten
years ago, a point that I unregretfully cannot manage to pin to a specific
epiphanic moment, I came to regard this meta-hermeneutic notion as downright
and arrant claptrap—or, rather, and more broadly, I came to view with contempt
the notion of film as a “language,” on a par with language as such (and no: I am
not going to call it “verbal” or “discursive” language, because that would be
to concede the enemy’s point; for language in a non-figurative sense is always
verbal and discursive) either in point of transparency or of truthfulness. To be sure, all the pre- and extra-cinematic
elements involved in the production of a moving picture are intrinsically
artificial, in that they all involve a calculated censorship of certain
impulses and an equally calculated heightening or accentuation of certain other
impulses, as well as an appeal to experiences that diverge in certain salient
respects from the phenomenon being represented or dramatized. We don’t really believe that any fresh uxoricide
has ever held forth as eloquently about his crime as Othello does, or that
Shakespeare could not have written Othello’s post-uxoricidal lines just because
Anne Hathaway was still alive at the terminus ad quem of their
composition (although I can well imagine that the Oxfordians and Baconians have
tendered this very thesis). But these
artifices are of an altogether different and more redeemable sort than the
artifices of montage-driven cinema, inasmuch as when they are competently
realized, we regard their divergence from their real-worldial counterparts as
contingent rather than essential. Admittedly, no fresh uxoricide has ever held
forth as eloquently about his crime as Othello does, but presumably many a
uxoricide has held forth fairly eloquently thereupon, and considerably more eloquently than the inarticulate monosyllables of history’s anonymous least eloquent uxoricide. And the imparter of
this eloquence, Mr. Shakespeare, despite having never committed uxoricide
himself, has presumably read enough sufficiently empirically trustworthy
accounts of people who have committed this crime to acquire some inkling of the
specific gravity of its drag on the conscience.
In short, the artifice in Othello, qua exemplary piece of pre-cinematic diegesis, is aiming at something that just might be attainable in the real world if we had but “world enough, and time.” “And I suppose your principal objection to
cinematic montage is that it produces something that would be unattainable even
if we had all the world and time in the world and then some: for example, a
sequence composed of an image of two chimpanzees coiting in a jungle followed
immediately by launch-pad footage of a NASA rocket taking off.” No, that is not my principal objection to
montage. To be sure, it would be
incredibly annoying to have to watch a movie composed entirely of nothing but
mutually incompossible images such as the two you just mentioned, but it is not
the incompossibility of the images as such that irks me; nor need a pair of
montage images imply such an unfeasible scenario in order to irk me. The irksome genre of montage sequence I have
in mind is one pioneered in the wee, small, and silent years of cinema by
Charles Chaplin, in one of his early features, A Woman of Paris. Chaplin’s sequence begins with what I am
recalling as a two-shot of the eponymous woman chatting in her boudoir with one
of her lovers. This shot is followed by
a cut to a semi-close-up of the lover, then a cut to a flower lying on the
floor of the boudoir, and finally a cut back to the lover’s face. The viewer is meant to recognize the flower
as one carried by the lover’s rival in an earlier scene and thereby, by the end
of the sequence, to infer that the lover has just realized that he is being
two-timed. To the sequence itself I have
no objection, as the point it makes is a simple one, and it gets it across
unequivocally. But it unfortunately
inaugurated a rich or rather feculent tradition of the wantonly habitual, and
indeed almost compulsive, use of montage as a device for exposition, above all
and most objectionably the exposition of the psychological states of
characters. Scarcely any feature-length
cinefilmic movie is devoid of a sequence in which a character’s beliefs and
intentions on a pivotal plot-point are meant to be inferred via a montage
sequence, often one far more complicated and hermeneutically vexed than the
Chaplin boudoir-flower sequence. And the
upshot of decades of movies ridden with such sequences—and of film commentaries
extolling them—is that viewers are wrongly far less interested in what
characters are unquestionably saying than in what they are supposedly thinking,
and are satisfied with inferring their motivations principally from non-verbal
cues. Such a directorily-cum-viewerly habitus
is problematic enough when applied to sequences involving human beings
exclusively, e.g., as follows: “Cowboy A’s left eyebrow is arching; Cowboy B’s
upper lip is twitching; Cowboy A’s hand (which I recognize as Cowboy A’s
because it is bordered by a blue shirt-cuff) gives a slight jerk…Eureka! I now know Cowboy
A is about to shoot Cowboy B, even though neither of the two cowboys has said a
single word in the entire fifty minutes of the film so far.” (The present
writer, daring to ask an eminently sensible question: “Then why does the one
cowboy want to kill the other one in the first place?” This is a tiresome characteristic of Westerns
in general: that their superabundance of silent swaggering and sneering is
mostly doing duty for the lack of any intelligible impetus to action beyond the
pseudo-one supplied by a few mumbled bullshittic platitudes about “the
frontier” or “family honor.”) But when
it is applied to sequences involving both human beings and inanimate objects (I
don’t even want to think about human-cum-animal-centered sequences), it implicitly
attributes a degree of instantaneous agency to trivial external stimuli that,
as Dr. Johnson would say, has never prevailed in any system of life (or unlife
[as Dr. Johnson would not say]). I’m
thinking here of the following sort of sequence: a fairly handsome and respectable-looking
middle-aged man is first seen jauntily walking along a city street and
whistling a merry choon like, I dunno, “I’m Sitting on Top of the World”; his
hat is cocked at a jauntily rakish, or rakishly jaunty, angle; thanks to the
vigor and buoyancy of his gait, he is fairly ice-picking the air behind him
with the spike of the upfolded umbrella he holds in his right hand—in short, he
is the very picture of high spirits. After a dozen or so seconds of this he is
seen stopping for a breather on a bridge, over one of whose parapets he takes a
seemingly absent-minded gander. Then
there is a cut to a view of some water whose surface is marred by a visible oil
slick and several unsalubrious bits of litter—used condoms, empty Styrofoam
soft-drink cups, discarded Kleenexes, etc.
Then there is a cut to a shot of the man climbing over the parapet, and
a few seconds after he has vanished one hears a splash. Poppycock!
Sheer, unregenerate, unadulterated, grade-AAA, super turbo-charged
poppycock! “Do you seriously wish to
have me believe that no man or woman has ever looked at some inanimate object
or collection of such objects and consequently thought, ‘My life is meaningless’?” Well, I suppose a naked ever would slightly overstate the point, but hardly ever probably wouldn’t. Well, all right: hardly ever, plus the necessary grammatical modifications to has and looked, would certainly cinch it. By which I
mean that I can well imagine that there are and have been people who take in
such sights and consequently say such things to themselves, but certainly not
before, say, 1925 (i.e., two or three years after the release of Chaplin’s Woman
of Paris), in that they could have acquired the habit of doing so only
thanks to the movies and to the commentators on movies. And I cannot imagine any even vaguely sane and
minimally reflective person of our time actually allowing such thoughts to be
occasioned by such sights, or, at any rate, allowing them to be occasioned qua thoughts that he considers his
own. On suddenly catching sight of a litter-strewn body of water, the at
least vaguely sane and minimally reflective person of our time will at most
find himself thinking “I guess I’m now supposed to be thinking ‘My life is
meaningless’,” recognizing as he will do that the association of litter-strewn
water with anomie has been inculcated in him by his movie-viewing experience,
even if he does not remember the specific movie or movies via which this
association was established. And having
digested and excreted this thought, perhaps over the very faintest of very
faintly sardonic smiles, he will continue on his merry—or, mayhap, dejected
(albeit not suicidally dejected)—way.
Mind you, DGR, I would not be thought to intimate that I believe that
the inanimate world exerts no force whatsoever on our psychic constitutions, or
even that no human being has ever been driven to commit suicide at the
principal or even exclusive behest of his unhuman surroundings. But the force in question is virtually
impervious to pictorial representation, mainly because it requires so gosh-damn
much time to come into its own qua force—I am talking here about weeks,
months, years, and sometimes even decades; time-stretches that at minimum dwarf
and most typically mite or perhaps even amoeba the ten-minute maximum allowable
by cinefilm. Only once have I seen (or rather felt) it captured effectively in
a cinefilmic context—namely, in Tarkovsky’s Solaris, specifically in the
presentation of a certain room of the space station in orbit around the
eponymous planet, a room whose centerpiece is a large piece of electronic
lumber, a disused computer or something of that kind. Anyway, by dint of showing the viewer this
room a dozen or so times over the course of the movie, in takes varying in
length from several seconds to several minutes, Tarkovsky somehow makes it seem
not merely familiar, but cosy, like the den in the house one grew up in (or
rather, in the present viewer’s case, the dens of the houses certain childhood
friends of his grew up in); and this in turn compels us to believe that for
certain of the characters of the film the space station really is home, the
place in which they are most contented to reside. I concede that all this effective exposure of
the space station’s lumber-room comes at the direct and extensive cost of what
most filmmakers and viewers would regard as narrative efficiency: Solaris is one of the longest movies
in the cinefilmic canon (such as it is), and it is hardly jam-packed with
incidents. Indeed, one could fairly
serviceably summarize its plot in about twenty words: “Man goes into space. Man decides space is not all it was cracked
up to be. Man returns to earth (or, at
any rate, the next best thing thereto).”
I, for my part, could easily deal with a
cinefilmscape in which all movies were as long, scantily plotted, and lavish of
their facetime with hyperlocal space as Solaris is. I thrill at the idea of,
say, a Wizard of Oz in which I would be afforded the
luxury of hanging mightily with the Wizard in his throne room for a dozen or so
minutes before the intrusive, train-of-thought-breaking arrival of those
importunate vagabonds Dorothy & co., or a first Superman movie in which the so-called fortress
of solitude lived up to its name for a few blessed instants, in which
life-sized holograms of Marlon Brando and Susannah York weren’t constantly
lecturing the Übermensch about how to be a
streetwise earthling and Mr. El the younger was suffered at least briefly
to twiddle his thumbs in silent contemplation of the sub-arctic and
superterrestrial vastnesses. But of course I am never going to be
vouchsafed such topo-temporal treatment from a significant segment of the
cinefilmic canon (such as it is), because it was not in that leisurely,
slow-motion, deep-focus manner that the cinefilmic cookie ultimately crumbled.
Another,
less conspicuous, but perhaps equally pernicious, abuse of montage may be
termed ethical mishandling (a.k.a. snubbage) through
displacement.
A few days
ago (a.k.a. on July 5, 2014) I noticed a particularly irritating
instance of this vice in François Truffaut’s The Last Metro, in many
ways a fine movie, but mise-en-scenically more nearly akin to Truffaut’s best American
friend Spielberg’s blockbusters than to FT’s own (The) 400 Blows. The film’s central character, played by
Catherine Deneuve, is the wife of a Jewish theater owner-general manager in Paris during the Nazi
Occupation. This man, played by an
obscure German actor (who despite his obscurity is presumably the ninth-most
famous German film actor in history), has ostensibly fled the country, but in
reality he is hiding out in the theater’s cellar, where Deneuve pays him
regular visits, although she officially resides at a hotel where she must turn
in each night so as to preserve the illusion that her husband is gone. A goodly portion of the film’s first half centers
on the couple’s rendezvous in the underground lair—on their touching and
character-revealing confabs, spats, and petting sessions—along with the
husband’s various ingenious contrivances for running the show upstairs in
absentia. This whole part of the movie
really is quite a bit, and in a good way, like The Diary of Anne Frank
(both the book and the play, and perhaps even like the movie, which I have not
seen). But then Truffaut has to go and perversely
upset this Frankian apple cart. Deneuve
inexplicably decides she needs a night out and takes the cast of her current
play to a (night) club, where she is compelled to make nice to certain Nazi
bigwigs. One of the actors (Gerard
Depardieu) is too politically conscientious to join in the fellation-fest and
walks out; Deneuve, having been shamed into solidarity with him, soon also
leaves. The camera now cuts to a maid
knocking on the door of Deneuve’s hotel room.
Receiving no answer, she opens it and peeks in to see a still-made bed
surmounted by a still-folded nightgown.
From this one is plainly expected to gather that in a gesture of political-caution-to-the-wind-throwing
Deneuve has broken with her routine and spent the entire night in the cellar of
the theater with her husband. But of
that subterranean night itself one sees nothing. “But isn’t this a masterful (sic) stroke of
concision on Truffaut’s part—his conveying solely through images a point that
would have taken dozens of minutes, or even hours, to convey in words?” No, it isn’t.
The very same point could have been conveyed every bit as concisely and masterfully (and a good deal less expensively) through Deneuve’s muttering to
her husband, “I just couldn’t take another second of the company of those Nazi connards,”
immediately after entering the lair and immediately before exchanging her first
pre-coital kiss with him. The
subterranean trysts between Deneuve and Was’st-seine-Nüsse are the viewer’s
spiritual home base in this movie, and in flagrantly omitting one of them
Truffaut evinces an irredeemable callousness towards both the viewer and these
two characters. “But surely to the
contrary he is evincing a remarkable tenderness towards those two characters,
in omitting a scene that presumably was too intimate to be exposed to the
prurient (de facto male) gaze of the viewer?”
Why presumably? Truffaut
has given us no reason to believe that this particular rendezvous is any more special
than the others. And in any case, who in
the fudge is he to judge it too intimate for our (incidentally by-no-means de
facto prurient or blokey) okies? Such
IPBPC-ic drivel shows with what ridiculous wantonness the notion of the
so-called artist as creator has been allowed to run off half-c***’d in
ultra-modern times thanks to the Golden Calf-worthy mesmerism of the cinematic
image. A so-called artist has a duty to
his so-called fictional characters, a duty every bit as binding as his duty to
other people in rightly called real life (and yes: the reader is right to
detect a paradox in the coexistence of wrongly termed fictive entities and
rightly termed factual ones). Once he
has made a certain person or cluster of people his center of attention, he must
either keep him, her, or them in that privileged spot and not shunt them out of
the picture (literally so in the case of films), however briefly, for any
reason, least redeemably of all such reasons the realization of a piece of
would-be-clever ellipticism; or, if he does thus shunt him, her, or them, he
must keep them shunted by way of showing retrospectively that they were
never quite as important as they seemed.
As an instance of this second, appropriate sort of shunting I cite yet
again an entry in the filmography of Bela Tarr, his early masterpiece The
Outsider, wherein the eponym, a struggling young musician, simply slips out
of camera range in the film’s last ten minutes or so, his place being taken by
an eloquently raving middle-aged drunkard, as if by way of informing the viewer
that the young man’s predicament, while genuine, is hardly dire enough to be
consecrated as tragedy by the final curtain.
Truffaut plainly has no such deflationary designs on Deneuve and the
German in The Last Metro; indeed, by keeping the two of them alive and
together through the end of the picture despite the casus belli and the
intrusion of Depardieu as a fleetingly successful homewrecker, he obviously wants
us to regard them as a very special couple indeed. (Incidentally, this unethical sort of
shunting did not originate in the cinema, but rather in the so-called realist
novel of the early nineteenth century.
Balzac’s inconsistent treatment of Lucien Chardon de Rubempré is a case
in point. At the end of Lost
Illusions Lucien is unquestionably the central figure, and everything is
presented more or less from his point of view.
By the beginning of Splendors and Miseries of Courtesans his
place has been taken by the courtesan Esther, and we see him entirely from the
outside, as a reliably functioning if unimportant cog in the machinations of the
criminal mastermind Vautrin. But no
sooner—towards the end of S&M’s second third—have these machinations
landed him in prison than he is restored to his former pride of place and dies
a death of sufficient pathos to wrest oceans of tears from the
should-have-known-betterly likes of Oscar Wilde. As the novel became more adept at corralling its subjective energies in the second half of the century, in the high realist (or proto-modernist) novels of James and Flaubert, this sort of scatterbrained,
Lahore power grid-esque juggling of the author’s sympathies gave way to a more
ethical dispensation in which the author identified consistently [albeit often
ironically] with a single character—e.g. Emma Bovary or Isabel Archer. Hence the revival of the scatterbrained
ethical dispensation in the cinema—and in the work of a so-called art house
director at that—is yet another example of that medium or mode’s tendency to
regressiveness [q.v. under the auspices of “middlebrowism-cum-naturalism”], to
an unlearning of the truths taught via other modes and media long ago.)
Fortunately,
while cinefilm movies that both practice the Tarkovskian virtue of giving repeated
and lingering shots of specific locales and forbear committing the late
Truffautian vice of ethical mishandling are as rare as diamonds, videotapic
productions that do the same are as common as pebbles. To be sure, though, such ripeness of virtuous
videotapism is hardly owing to anything in the way of a higher average moral
fiber content chez the makers of videotaped television.
Without a
doubt the largely and perhaps justly (cf. George Kuchar) unsung semi- and sub-auteurs of videotape-based television—Douglas Camfield, Lenny Mayne, Graeme
Harper, Christopher Barry, et mil. al.—would have loved to beat me about the
eyes and ears with montage as copiously and egregiously as their more
illustrious colleagues in the cinefilmic world did. But editorially hamstrung as they were not so
much by the medium in which they were working as by the impedimential intermedia
via which they were obliged to work in it, they instead presented the inanimate
world to me in long, static, room-sized, mise-en-scenic chunks, such that in
some cases at least a soupcon of the chronophagic mesmerism of that world
managed to creep in; and they filled these chunks with actors forced to avail
themselves of little more than the only substance less spectacular than
dynamite that really is capable of provoking instantaneous
transformations in the human organism—viz., language. Likewise, these same semi- and sub-auteurs
undoubtedly would have loved to snub their principal personages left and right
by cutting every thirty seconds to some gewgaw or piece of luggage purportedly
sited on the other side of the galaxy from the main event. But within the confines of a
videotape-centered television studio, one hardly had room, let alone money, to
build a separate set for the recording of ten or twenty seconds of footage: one
was compelled, rather, to confine the mise-en-scene to the sub-handful of sets
in which the main business of the script was transacted, and consequently
further compelled to keep the viewer’s attention focused like the proverbial laser on the
sub-handful of actors dedicated to the transaction of that business.
But now, admittedly more than a quarter-century after the Untergang
of our golden age, all of that is, as the French say, foutu. Now in the hand-holdable (or indeed even
lapel-bearable) digital moving-picture camera even the smallest-budgeted
television production has at its disposal an apparatus that augments the
flexibility of the cinefilm camera perhaps a thousandfold. To be sure, it also presumably augments the
duration-capacity of a cinefilm reel a thousandfold (inasmuch as ten thousand
minutes does not sound like a duration beyond the capacity of today’s
mid-priced hard drives), hence presumably more than matching the duration-capacity
of a 1970s television studio videotape reel, and thereby facilitating the
slow-paced, long-take, dialogue-dependent style of moving image-making that I
prize so highly. But in practice it has
been the hyper-cinefilmic flexibility alone that televisual auteurs have
availed themselves of, at least according to the lights of my very spotty
though by no means unextensive survey of the televisual offerings of the
post-cinefilmic-cum-videotapic epoch. Like
its 35mm-and-up cinefilmic predecessor, the digital television camera is always
on the move, and like that predecessor, it is always being pressed into the
service of some melodramatic revelation or plot-reversal. But whereas the movements of the old
35mm-and-up cinefilm camera seemed to be actuated by the merely neurotic
jitters of a chain-smoking coffee tippler, the digifilm camera behaves as
though it is in the hands (or on the lapel) of some sort of psychotic, perhaps
even schizophrenic, gourmandizer of LSD-laced cocaine. Digifilm-based TV shows do not even allow the
rationales for their revelations and reversals to sink in: one second—yes,
literally, one second—the viewer is attempting to make sense of the state of
affairs implied by one image, and the next he or she is being whisked away to
another image whose relation to the previous one he is axiomatically unable to
decipher, having not yet deciphered the significance of that previous image. I think back with especial vertiginous
irritation on a certain episode of the new version of Doctor Who wherein
the Doctor (Matt Smith) was seen jawing with an assortment of friendly humans
and aliens on the street in his usual Jimmy Olsenesque mufti, then dressed as
Sherlock Holmes and squaring off indoors with the villain-cum-guest star
(Richard E. Grant), then back in his own clothes and chinwagging with a
different assortment of friends in a different interior—all within (at least so
it seems now) the timespace of a single minute.
Wherefrom and when the Doctor got the idea to dress as Sherlock Holmes;
where he found the outfit; where and when he donned it; and where, when, and
why he doffed it, are all questions that are suffered to go unanswered during
this sequence. Of course, though, the
classic intellectually petit bourgeois professional cinephile will hasten to
point out, I am being a philistine and a pedant in smarting from the want of such
answers; for if I had the true esprit de finesse du cinema, says the
IPBPC, I would recognize these epistemological lacunae as worthy homages to the
classic Warner Brothers cartoon shorts, to the inexplicable and impossible
quick-change artistry of the likes of Daffy Duck, Bugs Bunny, and Foghorn
Leghorn. Moreover, he will add, as an
admirer of Doctor Who in particular, a show conceived in
and dedicated to unfettered whimsy, I should positively revel in that show’s
newfound ability to indulge in that whimsy uninhibitedly now that it is no
longer literally chained to the studio floor by those mammoth videotape
cameras. To this I would rejoin, first, that the recent endowment of
every director with a Tex Averyesque capacity for flouting the laws of physics
and metaphysics is by no means a cinematic analogue to the proverbial pair of huevos del perro, that from the
fact that one now can direct every prevailingly live-action
movie in the style of a cartoon it by no means follows that one should direct any of them in that style. I know that the current intellectually
petit-bourgeois orthodoxy holds that there is nothing intrinsically infantile
about cartoons, that even the tweest animated rodent operetta, provided it is
liberally enough larded with nudge-nudge, winkish allusions to such adultiana as the most
favored coitional positions of the great porn stars of those thrilling days of yesteryear (q.v.), may
be as worthy of the attention of a grown-up as Citizen Kane (or, indeed,
the Odyssey, The Divine Comedy, or Hamlet), but I
bumptiously must demand to disagree therewith, although not for the reasons a
pedant or philistine might be expected to adduce (not that I expect any
empirical reader to be able to distinguish these reasons from those of the
pedant or the philistine). For I am not
in principle against suspensions of the outward-oriented, linear narrative
thread of so-called realistic drama; in fact, I far prefer movies in which a
remorselessly outward-looking presentation of space and a relentlessly
forward-moving presentation of time are the dramaturgical exception rather than
the rule—movies like, for obvious and therefore unavoidable example, Raul Ruiz’s Time Regained. In other and
more general words: I certainly have nothing against whimsy, and I emphatically
do not believe that a capacity to indulge whimsy is among the proverbial (or,
more properly, epistolary [q.v.]) childish things one is obliged to put away as
an adult. Something I do believe one
must give up as an adult—and, indeed, well before the onset of puberty—is the
fantasy that one’s complete catalogue of whimsical whims can ever automatically
and en bloc be reconciled with the phenomenal world in which we are all to a
greater or lesser extent obliged to participate. In other and equally general words: while
childhood is quite rightly thought of as the headquarters of whimsy, it is quite wrongly thought of as the antipode of realism and positivism. The true hybrid of whimsy and anti-realism-cum-positivism, namely, idealism, is a Weltansichtsart
that is capable of arising only after the attainment of the age of
discretion. The idealist is well enough
attuned to the phenomenal world to realize that the things that he values most
are not to be found there—whence his tendency to daydreaming and general absent-mindedness. The young child, in contrast, for all his
imaginative immersion in unrealizable scenarios addresses the phenomenal world
with all the ruthless bottom line-orientedness of a corporate executive primed
for the exigencies of just-in-time production.
(Those who laud certain artists and so-called artists for having
“retained the ability to see the world as a child does,” know not whereof they
speak, as that very childlike outlook is the one best suited to meet the most
ruthless, and cynical exigencies of the present system of life, whether one is
participating in it as a consumer or as a producer.) Hence, when he contrives to have his “cool
guy” of an alter ego crash his Matchbox formula racing car into a toy medieval
castle in an attempt to kill the king, he is not enacting a version of the
world he yearns for, a version of the world that he knows to be inaccessible;
he is, rather, rehearsing for his entrance into the world as he believes it to
be—he believes that actual, tangible formula racing cars are actually capable
of driving into actual, tangible medieval castles inhabited by actual, tangible
(and non-constitutional) monarchs. And
the storyboard or script of any cartoon may be seen as either conforming or
catering to this positivist, Fordian infantile worldview-cum-libido, as
delivering in hyper-tangible form the goods of a narrative the instant they are
desired, in ruthless disregard of their compatibility with the other elements
of the story as things-in-themselves. This
is why cartoons are evil and beneath the notice of adults. Tarkovsky once said something to the effect
that the purpose of any authentic work of art is to reconcile us to our own
deaths. I think he was really on to
something there. At any rate, I would
argue (and hope to find the bosom of every adult reader returning an echo to my
argument) that at minimum any even vaguely art-like object should try to give
us a keener sense of our personal
indissociability from the broader course of the world, of the necessity
of our submitting to the force that Proust (or perhaps Samuel Beckett in the
name of Proust) called Time the Destroyer; to impress on us the justness of the
motto of Haydn’s 64th symphony, “Tempora mutantur, nos et mutamur in illis,”
i.e., times change and we change with the times. And under this umbrella of the
at-least-vaguely-art-like I would unreservedly place Doctor Who, a show
that for all its intrinsic whimsicality has always at least purported to be
based in the existing phenomenal world—and, of course, to claim as its especial
bailiwick the temporal aspect of that world.
And it is the catering-to of the above-mentioned infantile fantasy—the fantasy of the automatic and wholesale reconcilability of one’s whims with the phenomenal world—that is the very raison d’être of the motion picture cartoon, the fantasy to which any self-respecting cartoon must be dedicated.
Perhaps no
less dispiriting than televisual digifilm’s upratcheting of cinefilm’s most
flashily regressive editorial trick is its perversion through ubiquitization of
what was, in the setting of the old cinefilmic dispensation, a prevailingly
progressive directorial technique—namely, the encouragement of improvisation by the
actors. In the old days, when employed
judiciously—and, let it be remembered, within the ineluctably regimenting
constraint of the ten-minute limit—by a true, besonnenheitsvoll master
director, a Cassavetes or a Tarr, a director who could, as it were, carry a
start-to-finish epitome of the film’s diegesis and metaphysical purport in his
head, and consequently preempt, via his preliminary instructions to the actors,
any ill-conceived improvisational sallies; in those days and in those hands, I
say, this technique could and often did produce miraculous results. But when employed as it is nowadays, as the
very Grundtechnik for entire multi-year-spanning series (in other words, stretches of diegesis that would have taxed the most besonnenheitsvoll
of the directors of cinefilm features, and that are certainly much more than a
match for the lightweight pseudo-auteurs ostensibly in charge of the most
celebrated drama and comedy series of our day [The honorific “executive
producer,” while conferring Louis XIV-esque veto power, can scarcely confer Godlike
omniscience.]), it almost invariably conveys an impression of wanton slovenliness
and aimlessness. It reminds one of Mike
Watt’s perhaps unwittingly compendious conspectus of the shortcomings of jazz
(and yes, I know, the judiciously improvisational auteurs of the cinefilm era
are forever being compared to their contemporaries in the jazz scene—but in
fact their methods have much more in common with those of J. S. Bach than with those
of Ornette Coleman): “Two dudes talking.
But everything ain’t just two dudes talking.” And to this one should add: “—let alone two
dudes talking while being chased around by an LSD-laced cocaine-gourmandizing
paparazzo.”
Finally, but
not necessarily least fatally, I must take exception to how unfavorably the basic
or overall look of digifilm compares to that of videotape. Some paragraphs ago I praised the breadth and
vividness of videotape’s color palette, as well as its sharpness of delineation
of mutually adjacent hues. All of these
qualities are consistently absent in telecast digifilm. In their place is a kind of rich man’s
version of the “Victorian-stroke 1970s sauna”-like color effect I complained of
in televised cinefilm: while the blurriness of that effect is mercifully
absent, it shares that effect’s dispiriting preponderance of brown, red, and orange. That the medium itself should be inherently
prejudiced against blue and green seems almost unthinkable: one would as soon
imagine a word-processing program that was incapable of reliably registering
the letters R through Z. I can therefore
only imagine that this prevailing rubricity is owing to digital film-makers’ habitual
treatment of the elements of their composition before filming—to their shared choices
of decor, costumes, makeup, or, what seems to me the most likely culprit,
lighting. I gather that lighting is to
blame from a remark that some director who flourished during our golden age let
fall on the commentary track of the DVD of one of his projects, a remark to the
effect that for studio-bound videotape-based productions you were obliged to
apply lighting effects to the entire set with an eye to the overall effect as
taken in by the viewer over the full course of the scene; and that consequently
you could not afford to be picky about the lighting of specific portions of the
set at specific moments—notably during medium shots and close-ups centered on
the upper bodies and faces of actors. If
only, this gormless or perfidious hack lugubriously continued, he had been
vouchsafed the use of film, he could and assuredly would have applied a lovingly
customized, Skryabinian color-organesque complement of bulbs and gels to each
and every angled take on each and every actor’s person! But anyway: this remark constituted a
revelation that cast every bit as much unfavorable light (pun unintended but
unavoidable) on the world of old-school cinefilm production as the earlier one
about the ten-minute reel-length limit had done. To promulgate my beef bluntly: in the course
of my two-score-year-plus-long trek or peregrination through this tale of
Vere’s (don’t ask: it’s quite literally a long story) I have yet to notice any crew
of lighting technicians following me around like a passel of guardian angels and
ensuring via a judicious apportionment of chiaro and oscuro that I cut the most
seductive figure possible in the eyes of my fellow citizens, coworkers et (enim
vero pauci) al. Surely, for the sake of
verisimilitude and the viewer’s mental hygiene the photogenicity of individual
actors and props must yield place to the photogenicity of sets and locations. Surely, if one is shooting a scene set in “a
poorly lit cave” one is best advised to light the cave or polystyrene
succedaneum therefor both dimly and sparingly, and force certain actors to
sacrifice the dazzling alabaster-like albinity of their smiles and complexions
to the general gloom; surely, if one is shooting a scene set in “a brightly lit,
white-walled dining hall,” one is best advised to multiply and turn up the
lights and force certain actors to sacrifice the craggy, high-relief expressiveness of their
cheek and brow-lines to the prevailing blancheur. Surely the alternative cannot but result in a
sequence that utterly fails to give the impression that it was shot in any
specific kind of place. “And surely this
is an impression that has likewise failed to be made in every cinefilm-based
film ever produced, and yet you somehow neglected to or chose not to include
this failure in your catalogue of the shortcomings of cinefilm.” Touché, I suppose, up to a point. But in truth I think the above-quoted
videotapic sub-auteur was rather overstating the differences between the
videotapic and cinefilmic treatments of lighting, that in fact most cinefilmic
directors were required and content to impart a general genius lucis loci
to their sets and locations, and that in practice the hyperselective lighting
schemes he salivated over were the mandatory desiderata of a mere smattering or
handful of inordinately fussy regisseurial precieuses, whom the current
digifilm-based crop of directors are reflexively imitating because they
foolishly regard spot-lighting as the above-mentioned proverbial huevos del
perro. I admit that when I write of
“a…handful of… precieuses” I am really only thinking of one, and indeed
of only one item in his corpus, namely Stanley Kubrick and his 1975 snoozefest
of a Thackeray adaptation, Barry Lyndon.
This movie was famously or notoriously shot using only natural lighting,
meaning lighting available to the characters depicted in it, meaning in
turn—what with Thackeray’s novel being set in the eighteenth century—candles,
oil lamps, and fireplaces. In principle
the application of such a constraint need not result in a picture that is not
eminently viewable from start to finish: for, after all, the residents of the
eighteenth century liked a well-lighted room every bit as much as those of the
twentieth, and although the brightest of their lighting apparatuses quasi-literally
could not hold a candle to the dimmest of Mr. Edison’s bulbs, what they lacked
in wattage they made up for in the distribution of these apparatuses, in the
placement of just enough of them in this corner and that to impart a more or
less uniform and unwavering glow to the entire space. Mr. Kubrick, though, seemed not to find such
a lighting schema dramatically interesting enough, and so he filmed each of his
nocturnal interiors with just one source of light emanating from just one
apparatus placed in just one part of the room, with the result that half
the scenes in Barry Lyndon come across as brazenly sycophantic pastiches
en tableau-vivant (and -mouvant) of Wright of Derby’s Experiment
with the Air Pump or The Orrery.
Now I certainly have nothing against Signor Giuseppe da Derby’s chefs
d’oeuvre; to the contrary, they are two of my favorite paintings (not that
I know whether the rest of my favorite paintings number in the tens or in the
milliards). But I’m not sure the portentous
aura of Frankensteinian discovery emanated by both of them—specifically by
their lone well-illuminated figures of the astronomer and the experimental
biophysicist standing out magisterially against a background of unenlightened
pitch night—was appropriate for the kinetic depiction of the quotidian
transactions of a man of the world like Barry Lyndon. And what is bad for the imperturbably dignified
eighteenth-century gander is perforce worse for the brazenly undignified
twenty-first century goose. And yet we
are now forced to spectate on the fatuous fulminations, the borderline aphasiac
rodomontades, of such geese—all those footballers’ wives and stockjobbers’
husbands—as if by the fugitive and inexhaustibly eloquent light of an
infinitely portable Franklin
stove. On watching a run-of-the-mill
digifilm production, one is invariably reminded of Roland Barthes’s
rib-splitting apercu about (and in) “The Romans in Films”—viz. that Roman
conspirators always have to be filmed with rivulets of sweat streaming down
their foreheads lest the viewer fail to realize that “thinking is work.” The tongues of virtual flame that perpetually
lick the temples of digifilm actors convey essentially the same message—or rather,
perhaps, given the sentimental cast of the Zeitgeist, the kindred message that feeling
is work. Either way, the effect is
consistently fatuous to the point of embarrassment. For fudge’s sake: at least those human Schwitzbrunnen
of the mid-twentieth century were playing Roman patricians whose
deliberations were purposive enough to merit a drop or two of sweat. A clinically obese early twenty-first-century
housemum’s deliberations on whether to make the big switch to cloth diapers or
to serve her guests a kale or an arugula salad are clearly unworthy of any
dramaturgical helpmeets.
In summary
and short: in its barely decade-old history digifilm has already managed in
classic Horkheimer-and-Adornonian form both to realize cinefilm’s most ignoble
ambitions and to pervert its most utopian achievements into agents of dystopia. “And naturally, in your view all that is
required for the realization of cinefilm’s noblest and utopian ambitions is for
the makers of television programs to revert to the use of videotape.” Not quite or necessarily: first, because, as
I’ve already pointed out, most of the virtues of videotape were imposed rather
than chosen; consequently they were never appreciated by the videotapic
directors qua virtues; and sub-consequently those unwilling golden-age masters
of the videotapic medium young enough to survive into the digifilm epoch have
gone on to make digifilmic programs that are technically indistinguishable from
the productions of their wettest-behind-the-ears contemporaries. What I am trying to get across here is the intrinsic,
fundamental, and therefore utterly demoralizing subject-lessness of the
superannuated videotapic norm. When
Adorno reminds us that a composer like Satie is worthy of our attention despite
not having been as technically au courant as his contemporary Schoenberg,
he is taking for granted that Satie thought he was on to something, that he was
not writing tonal piano miniatures merely because he could not afford the extra
capital outlay on score paper required for longer orchestral pieces, or because
writer’s cramp prevented him from writing in all those extra accidentals
necessitated by the absence of a key signature. Secondly, I can’t pretend that the digifilmic
void does not participate in a much graver catastrophe, a catastrophe from
which the more austere technical dispensation of videotapic moving-image making
would ultimately be powerless to save us—namely, the senescence of the entire
dramaturgical-cum-narrative quasi- or pseudo-tradition. One often hears cinema described as “a young
art form.” In the first place this is
not really true: a hundred-and-twenty years is a long term of existence for any
art form. At barely a generation old,
tragedy was a brand-spanking new art form to the Elizabethans when Shakespeare
wrote his first works in the genre; by the time he wrote his last it was an art
form in plain and irreversible decline.
The collateral stretch of time in cinematic history is that separating Birth
of a Nation from Casablanca, which makes the pseudo-masterpieces of the
1970s—Taxi Driver, Chinatown, The Godfather,
&c.—plausible analogues of the bathetic Shakespearean pastiches of Otway
and Dryden, and the admittedly ingratiating but ultimately dispensable
noodlings of Wes Anderson, Alexander Payne, et al. the genre-historical contemporaries of
the borderline farcical and unrevivably topical comedies of the young Colley
Cibber. Mindful as I am of these
chronological parallels, I can seldom repress a groan no matter what company I
am in whenever I hear the Bard’s métier, ethos, habitus, or techne likened to
that of a film director of the present day: to call Shakespeare “a hard-nosed,
no-nonsense, box office-minded, jobbing playwright” is to besmirch him with
the decades-thick accretion of glad-handing, fist-brandishing, self-mythifying
bullshit that comes ready-bundled with the very desire, the most guilelessly
boyish impulse, to be a movie director nowadays. In the second place, cinema is not an autochthonous
art form: it did not spring ready-made from the aesthetic ether with its own
completely self-contained bailiwick of phenomena to be treated of and means of
treating of them; rather, it is highly and deeply parasitic on a number of much older art
forms, notably every mode, strand, and genre of drama—plus the novel, of
course. Preeminently and specifically,
and even in its most esoteric and so-called artistic sectors, cinema is
inextricably beholden to a kind of dumbed-down naturalism characteristic of the
middlebrow early twentieth-century novel.
For many decades and indeed one or two centuries, demographic statisticians
have been plotting what they call the United States’ center of population, the
geographical point north, south, east, and west of which the number of
inhabitants is exactly the same, and which may accordingly be held to be the
epicenter of Middle America. This point
started out somewhere in the mid-Atlantic and along what would eventually be
the I-95 corridor, and it has been drifting steadily westward and southward
ever since. But of course this
westward-cum-southward drift is neither unstoppable nor irreversible: for,
according to its tendency the entire U.S. population must eventually end
up in the Pacific Ocean to the immediate west of Tijuana, and
long before that happens the preponderance of intranational migration is bound
to be vectored back towards the north and the east. By a none-too-licentious analogy, one may
speak of an equally—and hence by no means infinitely—portable center of
literary-historical aesthetic acclimatization, one that started out at the
novels of Richardson, Sterne, and Fielding and that for the past eight decades
has remained stalled at those of Dreiser, Orwell, and Steinbeck. These are novels in which the characters are
rightly—insofar as the trajectory of history is concerned—held to be at the
mercy of various (rightly so-called and hence quotation mark-bereft)
dehumanizing elements—capitalism, scientism, totalitarianism, and so on—and yet
are wrongly (vis-à-vis this selfsame historical trajectory) allowed to
luxuriate in introspective ruminations with all the multi-mansard-gabled, mahogany-wainscoted abandon of a Henry James heroine. Accordingly, the mise-en-conte of such
texts perpetually oscillates in neatly delineated jerks between bouts of
solitary brooding by the protagonist (or on behalf of the protagonist via the
irredeemably perfidious device of style indirect libre), and stretches
of objective description in which the protagonist passively submits to the
slings and arrows of the objective world that he is supposed to be struggling
against. Orwell figuratively represented
the totalitarianism to which his Nineteen Eighty-Four was intended to be
a two-finger salute as “a boot stamping on a human face,” but the face in question, at least insofar as his Winston Smith is to be taken as an instance of
it, was merely that of an ineptly executed wax dummy, the panegyrical
complement of a straw man—as all depictions of character that still rely on the
nineteenth-century haut-bourgeois cartography of psychology are bound to be. But a poetics abjectly beholden to such a
cartography has constituted the aesthetic ne plus ultra of even the most
ostensibly sophisticated readers for the past three-quarters of a century, and
it is mirrored in the semantic (or, if
you prefer, semiotic) norms of cinefilm and its successor, digifilm. This is not to say that the novel—or at least
some novels—has/have not moved on from The Grapes of Wrath and Nineteen
Eighty-Four—indeed, stroke-of-course, it/they already had moved on therefrom
many decades before the 1940s, perhaps beginning with Lautréamont’s marvelous
encapsulization of commodity-mediated intersubjectivity as “the chance
encounter of a sewing machine and an umbrella.” But such novels have never had a seminal
effect on the general readerly outlook, and indeed the middlebrow naturalistic
novel has constituted the point of departure for all readers of
light and (supposedly) serious literature alike, just as the classic
montage-driven Hollywood cinefilm has constituted the point of departure for all
viewers of commercial and (supposedly) artistic movies alike; from birth (or,
indeed even before, as previously asserted) we are corrupted by the middlebrow
naturalistic outlook, and the proverbial vast or overwhelming majority of us
never defecate ourselves of this middlebrow naturalistic corruption no matter
how many post-naturalistic movies we see or post-naturalistic novels we read
(witness the recent pullulation of Proust-inspired self-help manuals and
cookbooks). By default when we read a
book or see a movie we expect to sit through spells of introspective posturing
punctuated by ego-annihilating encounters with the mediated or unmediated
natural world. In a book the
introspection will be registered (note that I do not say “expressed”) verbally,
via a succession of “on the one hand he thought…and on the other hand, he
thought” constructions, and in the movie visually via much whistling,
arm-swinging, sweating or brow-furrowing, but the purpose and the effect are
the same in each case. And yet—and
this is a very big “and yet,” the biggest, indeed, to appear so far in this
essay—and yet, I say, some of us have managed, if not to extirpate the
middlebrow naturalistic corruption within ourselves entirely then at least to
ferret out an alternative way of conceiving of the world—of our world, yes, and
also of the world of our forefathers and foremothers, as well as of the
differences between those worlds and of the way and extent in and to which they
all participate in the single and indivisible World (as against imagining ancient Rome as Oceania with togas instead of boiler suits or medieval France as
1930s Oklahoma with tights and lutes instead of overalls and banjos). How did we manage this at-times seemingly
miraculous feat? It is tempting for our
intellectual pedigree’s sake to ascribe one hundred percent of the requisite
steroidal kick or adrenaline rush thereunto to our acquaintance with the canon
of great non-middlebrow-cum-naturalistic literature. And those of this some of us—if they
exist—who have attained the age of discretion since the dawn of the millennium
and hence of the digifilm era may very well be able to make such an ascription
in good faith. But the present writer is
not so able, for he can date the beginning of his acquaintance with that canon
only as far back as his fifteenth year, when he came across an English
translation of Voltaire’s Candide that convinced him to put aside the
middlebrow naturalistic blandishments of science fiction and so-called fantasy literature
for good. To be sure, he had known of
and even owned specimens of the non-middlebrow-cum-naturalistic canon long
before then, but these specimens had theretofore failed to capture his
imagination, as they say. He remembers
as a particularly significant lost opportunity in his personal cosmology the
moment of his first attempt at Dickens’s A Tale of Two Cities, at the age
of seven or eight. A recently aired
made-for-TV movie adaptation of the novel—a 35 mm cinefilm production—had impelled me to seek it out. I had
been hoping to see cobblestone streets swimming with wine, guillotine blades
slicing off periwigged heads, from the very first page. What I got instead was a lot of tedious
attitudinizing about the physiognomies of a passel of kings and queens. Certainly well into the first chapter nothing of any kind had actually happened.
“FTS,” I said—or would have said, if such a formula had then been
current—and made a beeline for the four-volume Ballantine edition of The
Hobbit plus The Lord of the Rings, which I was content to make my
Bible and Baedeker for the next four or five years. What event that befell me during those four
or five years was of a sufficiently Weltansicht-altering character to
prepare me to immerse myself in the profoundly uncinematic likes of Candide
with a self-outraged forehead-smite and an exclamation of What the heck have I been wasting my extracurricular
time on for the past four or five years?
“Would this event by any chance have been your acclimatization to the
usages of telecast videotape?” Ding-ding!
as they used to say—not that on reading the first sentence of that translation
of Candide I delightedly ejaculated to myself, “Why, this here
book is exactly like an installment of Only When I Laugh or The Six
Wives of Henry VIII!,” but that having grown accustomed to a televisual
dramatic presentation that was neither visually ablaze with lightning-fast cuts
from furrowed brows to litter-strewn rivers to twitchy lips to beetling cliff
faces, nor sonically awash with the leitmotivic effluvia of some conjectural
pastiche of a Richard Strauss tone poem, I very probably found it much easier
to take in a narrative presentation that did not call for immediate realization
in conventional cinefilmic terms, a narrative presentation that seemed to imply
a world dominated by conversation between pairs or among handfuls of
individuals in room-sized settings only very intermittently visited by sounds
more obtrusive or less conventionally articulate than the conversers’ voices. “Hmm.
I see. Well, this is all
plausible enough as a genealogy of your own modus legendi. But as a genealogy of the populi modus
legendi it is, to put it bluntly, barmy enough to secure you a
certification of barminess; for it implies that the only living people who possess
even the minimum physical qualifications for appreciating the beauties of
non-middlebrow-cum-naturalistic literature are people whose childhoods ended
between 1970 and 1986 (i.e., between the dawn and dusk of our eponymous Golden
Age); in other words, a collection of persons that cannot by much exceed a
tenth of the present world’s population.”
Yes, I suppose prima facie it does rather tend to imply as much, which
prima facie is rather barmy. Of course,
vis-à-vis those whose childhoods ended before 1970—which is to say the people
of my elder cousins’, parents’, and grandparents’ generations—the implication
flies in the face of received anti-whiggish wisdom, of which, it must be
remembered, I professed myself an unabashed receiver in the opening paragraph
of this essay. Surely the fact that
these people were weaned on a motion-pictural diet whose every last ingredient
was dictated by the montage-happy Hollywood studios and were denied formative
acquaintance with The Carol Burnett Show’s gloriously lumbering
videotaped parodies of those studios’ flagship productions—surely this fact
cannot have made it any less easy for them to put aside their Steinbeck and Orwell
and take up their Sir Thomas Browne and Laszlo Krasznahorkai? Surely not.
And yet (and this is a merely medium-sized but by no means negligible
“and yet”), perhaps some scintilla of a portion of my comparative lack of
enthusiasm for even the most brilliant of the novelists of those generations
may be owing to a more thoroughgoing enculturation by cinefilm on their
part. At the very least, the madcap,
relentless barrage of so-called pop cultural references in the novels of, say,
Thomas Pynchon and Don DeLillo, bespeaks a zealous attunement to the rhythms of
cinefilm that I, for all my posterior naissance (a posteriority that received
anti-whiggish wisdom equates with a shorter attention span) am unable to share
or keep pace with. Perhaps the only
tenable generalization to be made about the aesthetic habitus of these
generations is the admittedly empirically useless one that the amount of
benefit they were able to reap from the absence or lower-key presence of
television in their childhoods varied in inverse proportion to the amount of
time they spent at the cinemas. And so:
those of them who grew up in the sort of household in which (as in mine)
children did not go to the movies unescorted, and in which a trip to the cinema
was a comparatively rare treat (one awarded, say, roughly once every two
months), may rightly conjecture themselves more lightly scathed by the
cinefilmic scourge than the present writer.
Those of them, on the other hand, who as children spent—because allowed
to spend—their every last free waking moment at the movies, and who, indeed,
would routinely play hooky to take in the latest Tom Mix or Gene Autry horse
opera for a gratuitous seventeenth time (examples of such precocious
flickaholics are of course legion in memoirs and autobiopics published and
released between the 1970s and 1990s), are probably very nearly as
unsalvageable as the most mentally goldfishlike teenager weaned on the
hypercinefilmic antics of SpongeBob SquarePants. But what about this selfsame teenager, or, to
put it appropriately more broadly, about him and his contemporaries and
immediate elders? Surely by the logic of
my conjecture every man Jack and woman Jill of them, who by default have never
known any other motion-pictural dispensation than the hypercinefilmic post-cinefilmic
one, must be past all hope? Regrettably
vis-à-vis the present essay (if perhaps unregrettably vis-à-vis my personal mental
hygiene), I do not know enough about this/these post-Golden Age generation(s)
to form any sort of judgment about it/them.
Certainly the abundance of interwebbial traffic centered in one way or
another on non-middlebrow-cum-naturalistic texts suggests that the
hypercinefilmic habitus has not won out completely. On the other hand, the geometrically greater
abundance of interwebbial traffic devoted to phenomena patently founded in the
base middlebrow-cum-naturalistic literary-cum-cinematic tradition suggests that
the hypercinefilmic habitus is much more securely hegemonic than it was in my
day. “But haven’t you already asserted
that this same middlebrow-cum-naturalistic literary-cum-cinematic tradition was
already hegemonic by the third decade of the last century? And such being the case, must we not regard
this geometric preponderance of interwebbial middlebrowism-cum-naturalism
merely as a transposition of this hegemony into a new register?” Perhaps.
But perhaps it is something more or else. One somehow gets the feeling that the
middlebrow-cum-naturalistic habitus is much more respectable than it was
in my day. In my day, one got the sense
that the middlebrow-cum-naturalistic habitus was something one outgrew,
and that once one had outgrown it, one could count on anyone who appreciated
any sector of the non-middlebrow-cum-naturalistic canon to have sloughed off
every vestige of the middlebrow-cum-naturalistic habitus; one assumed, for
example, that the entire body of a text in the narrative mode was generally
fair game for discussion, that provided one avoided bingo halls and nursery
school playgrounds, one could refer to the events at the conclusion of a novel
or movie without provoking a duel-challenge.
But now the infantilism of the plot-humper has been institutionalized in
the spoiler alert, and DVDs of even the most esoteric, room-clearing
movies preface their extra features with the warning, “Certain details of the
plot of the film will be revealed in the following interview/documentary/autc.” Then there is the greater respectability of
videogames, which since the late 1990s have been uniformly both capable of and
addicted to the replication of the representational mode of classic
cinefilm. To play a videogame, any
videogame, no matter how highbrow or low-key its scenario, and be the game in
question Grand Theft Auto or Afternoon Nap at the Library, is to step into a Vorstellungsschaft
governed from soup to nuts by the norms of classic cinefilm. And so one is not surprised—appalled, yes, but
not surprised—to hear beard-stroking alumni of Oxbridge and Ivy League
humanities departments calling for the recognition of the videogame as an “art
form.” After all, so their argument
goes, certain cinefilm movies like Taxi Driver and The Godfather were
works of art, and what made them great works of art but their relentless
manipulation of the viewer with flashy camera placement and editing, the same
flashy camera placement and editing that are now the stock-in-trade of the
designers of videogames? “And all this niaiserie, you really do submit,
is owing to the younger generation’s/s’ unfamiliarity with televised
videotape?” I do not submit, but I do
conjecture. Certainly I think it would
do the entire non-blind, viewing-device-owning population of the world—including
my fellow Golden Agers, most of whom, it is reasonable to suppose, have long
since become inured to the hypercinefilmic digifilmic norm—a great deal of
ethical and mental-hygienic good to watch a pair or trio of hours of some
Golden Age-originating videotaped programming each week. “Try just to sit still for a quarter of an
hour,” I would say to each of them just before the first of his or her videotapic
viewing-sessions, “and take in either for once or for the first time in
twenty-something years a Vorstellungsschaft that invites you to pay
attention to what is happening right now in the place you are beholding,
and between or among the people inhabiting—and more than figuratively
inhabiting—that place. Take, why
don’t you?, a look, a long and careful look, at the furniture of this room, at
the clothes the actors are wearing; take, why don’t you?, a listen, a long and
careful listen, to what the actors are saying, and reflect on why they
are saying it now, while you and they are both privileged to linger here, in
this place, this obdurately unbudging place. And after six months of doing this a few
hours a week, report back to me, why don’t you?, and let me know if you aren’t a
trifle less enthusiastic about the latest course of Star Wars
suppositories, or the latest Michael Chabon kitschfest, or the upcoming season
of Game of Commodes.” To give my
conjecture a proper test, though, I would of course need to have at my disposal
the resources of a Victorian paterfamilias.
I would need to have an infant son or daughter and a James Mill- or
Thomas Arnold-esque monopoly on every facet of this child’s upbringing: I would
need to be allowed not only to impart to him or her any form or body of
knowledge I deemed instructive or improving but also to insulate and indeed
isolate him or her from any stimulus I deemed deleterious in any way. Having been granted such a charter, I
naturally would make sure the child never saw a single millisecond of a single
digifilm or wide-screen cinefilm-based motion picture, and I naturally would
confine his or her viewing to Golden Age-originating videotaped
productions—starting out, of course, with the old Mr. Hooper-epoch Sesame
Street, and proceeding thence to Mister Rogers’(s) Neighborhood, and
thence to The Electric Company, and thence to so-called classic Doctor
Who and some of the better British and American videotaped comedy programs,
and finishing up with I, Claudius and the BBC Shakespeare, along with a
few of the lesser-known short drama series like The Duchess of Duke Street
and Oppenheimer; and naturally, the inculcation of this visual
curriculum would proceed in lock-step with that of a literary curriculum
consisting entirely of non-naturalistic texts—such that, for example, at the early-middle
part of the syllabus E. T. A. Hoffmann, Lewis Carroll, and Hans Christian
Andersen would be well within bounds, but Black Beauty, the Hardy Boys
adventures, and the novels of Judy Blume would be right out. But though this course of study would
perforce be centered on discrete chunks of discourse or moving peinture
bearing specific titles, it would by no means be intended to culminate in the
establishment of a kind of personal canon or private museum gallery of
cherished texts and visual artifacts; I would by no means wish my bairn to
emerge from his or her Lehrlingsausbildung possessed of and by the
notion that Jonathan Miller’s televisual realization of Troilus and Cressida,
or even Troilus and Cressida itself, was a so-called masterpiece or
great work of art. I would, rather, wish
him or her to emerge therefrom possessed of and by the notion that Troilus
and Cressida was an impressive but by no means immaculate or necessarily
unsurpassable effort of one man’s imagination to synthesize two-and-a-half
millennia’s reflections on love and war, and that Mr. Miller’s realization of
it was the impressive but by no means immaculate or necessarily unsurpassable effort
of twenty-odd people’s collective imagination to make the word of this
synthesis flesh. For I believe that it
is only through the de-fetishization of the artwork that any slender extant
remnants of that much nauseatingly (because utterly wrongly) taken-for-granted
human faculty known as creativity can attain what vulgarians style its true
potential, itself woefully slender at this pseudo-historical pseudo-moment. Perhaps not quite needless to say, such an
agenda of de-fetishization is by no means tantamount to or coextensive with
the umpteen-times warmed-over agenda of the fetishization of the aesthetic
grandeur of ordinary productions or utterances made by ordinary
people: to the contrary, it is meant to impart the greatest catalytic potency
to “the best that has been thought and said” by the most extraordinary
people by filtering therefrom the centuries-old metastasized honeycomb of modal
and generic categorization that at its naissance may have come within spitting
distance of some genuinely numinous distinctions, but that by now (at this
pseudo-historical pseudo-moment) has become indistinguishable and inextricable
from the hyperfactitious taxonomy of marketing consultants. And it seems to me that the Golden Age
videotapic corpus, in virtue of its clunkily eloquent simplicity, affords just
the right sort of sturdy, fibrous, yet pliant material requisite in such a
filter.
THE END
“Hang
about. It seems to me that you’ve
neglected to fulfill all but, say, by a generous estimate, a tenth of the right
half of your bargain.”
Come
again? as they probably still say.
“Your
title implicitly promises a 50/50 split between videotape and 16mm film, and
yet you stop talking about 16mm film altogether at about the 2,000-word
mark. I admit I shouldn’t mind the
falsity in advertising all that much if your retirement from the 16mm-boosting
métier hadn’t coincided with your explicitly avowing your distaste for
videotape-only (i.e., 16mm-free) televisual productions, and, indeed, your
demarcation of the end of the Golden Age at the point when such productions
ceased being the exception and started being the rule.”
This is,
as they used to say in Monty Python’s Flying Circus, a fair cop, and
unlike the Monty Python nabee, I shan’t blame society for my criminality. The truth is that, although my aversion to 16mm-free videotape productions was and remains genuine, it is likewise
irreconcilable with the case for the estimability of videotape I have so far
managed to muster. Seriously-stroke-in
all frankness: if I admire videotape so much for reminding me of the former
actuality and presentness of the viewed image and loathe 35-and-up-mm film for
its occlusion of this selfsame actuality and presentness, it inexorably follows
that I should at least mildly dislike 16-mm film as an actuality- and presentness-occluding medium, much as a vegetarian, although abhorring chicken and
fish less than beef and pork, must still eschew both of them and would be
accounted mad or a traitor to the cause if he or she were to reject a Caesar
salad because it had no chicken or anchovies in it. Having made a clean breast of my
semi-apostasy, I might as well explain what it is that I find videotape ill
suited to do and that I believe 16-mm film does better. In brief yet toto: I find videotape an extremely
unsatisfactory medium, and 16-mm an ideal one, for exteriors, especially
exteriors in classically natural settings—forests, moors, mountainsides,
and the like. A mass of treetops
kaleidoscopically refracting the light of a midday summer sun, or a cluster of
hilltops blanketed in fog, somehow simply seems to look exactly like what it is,
in all its Burkean-cum-Kantian sublimity, on 16-mm film, and to look like the
most drably prosaic footage from a 1980s family camping trip (at every instant
one genuinely expects somebody’s mum or dad suddenly to shout out: “Did you
kids remember to put your sunscreen on?” or “Rain’s on the way, time to pull up
the tents!”) on videotape. As exhibits
in proof of this assertion, I cite two videotexts: the BBC Shakespeare’s
production of As You Like It and ITV’s (or was it Channel 4’s?)
Shakespeare biominiseries Young Will Shakespeare. In the play-production, Helen Mirren is the
perfect (and for all I know the best-ever) Rosalind, and her first-act verbal
tussles with her cousin Celia within the walls of Duke Frederick’s castle are
intoxicatingly fetching. But no sooner
has she pitched up in the forest of Arden—some sort of National Trust-bolstered
nature preserve indistinguishable from the genuine article—than she ceases to
fascinate. In Young Will Shakespeare,
Tim Curry—to the immense surprise of those of us who count our sole viewing of The
Rocky Horror Picture Show among the most demoralizing ninety-minute
stretches of our lives, and Dr. Frank-N-Furter’s contribution thereunto as the
most demoralizing portion thereof—both shines and compels as the budding Bard, and
his studio-bound tête-à-têtes with the young Earl of Southampton (here posited
as the principal model of Prince Hal in the Henry IV plays) really do
give one the sense that this was how it might actually have happened, that
Shakespeare might have become Shakespeare in just this sort of bedchamber or
banqueting hall or cockpit. But the
moment the scene switches to a predictably grubby view of Elizabethan street
life—of uncanopied horse carts trundling over mud-logged cobblestone streets
flanked by half-timbered houses, of spastic one-eyed costermongers stridently
hawking their wares in various inscrutable patois(es), and so on—one finds
oneself instantly transported back to the prosaically kitschy environs of the
local Renaissance festival of one’s middle-school years, or even further back,
to the hyperprosaically twee environs of one’s summer day-camp’s mock-up of an
Old West town—or, rather, to the thirty-odd seconds of videotapic footage of this
mock-up that made it on to one of the local breakfast news shows. But as to why videotape should be such
a prosaifying agent without doors and only without doors—I am afraid the answer
to that question is as yet unknown to me.
Certainly I am loath to throw the towel of my analysis into the laundry
hamper with the lid labeled “METONYMY-INDUCED PAVLOVIANISM”—in other words, to
assume that I am organically incapable of admiring videotaped exteriors merely
because I got used to seeing them in broadcast television programs at precisely
the moment I got used to seeing them in closed-circuit replays of footage
gathered at my friends’ birthday picnics, and thereby to capitulate to the
corollary assumption that had I never seen videotape being put to such prosaic
out-of-doors uses I would now be capable of championing televisual videotape
exteriors as ardently and sincerely as I do televisual videotape interiors. So let me at least hazard an admittedly
utterly technically uninformed guess as to the reason, or, as it will turn out,
reasons, for videotape’s inadequacy as a medium for exterior shots. The first properly appertains to the medium
itself and goes as follows: somehow videotape can realize its unexcelled
capacity for bariolage, for superlative vibrancy and brilliance of
color, only when four walls and a ceiling are present as a container of all
those resplendent hues. The outdoor
color videotapic image is invariably muddy, matte, and washed-out looking. Secondarily, and extrinsically, the
sound-recording devices used in tandem with outdoor-ready videotape cameras,
not unlike the glorified ear-trumpets used at the receiving end in the wax-only
acoustic days of early audio recording,
seem(ed) unable to take in sound from more than a single, targeted,
stationary source. Actors on outdoor videotaped
shoots always seem(ed) to be shouting in order to force their voices through
the intervening air currents, which, however gentle and zephyr-like, always
possess(ed) enough Beaufortial force to carry a conversationally decibled
utterance well beyond microphone range.
So much for the demerits of outdoors videotape. What, then, of the merits of outdoors 16-mm
cinefilm? Clearly its superiority to
35-mm film cannot inhere in any greater fidelity of image registration, for in
virtue of its narrower gauge it is less than half (i.e., 16/35ths) as faithful
in those terms. The answer here—as in
the case of indoors videotape, but a fortiori—seems to lie more in the institutional
restrictions imposed on the employment of the medium than in the medium qua
medium. From DVD commentary tracks I
have learned that at least at the BBC of the 1970s it was standard
practice to allocate only one camera to an exterior 16mm film shoot.
The most intuitively obvious way of filming a scene with only one camera
is of course to start out with all the actors (and if possible the ambient
props, flora, etc.) in the frame together and thereafter to pan over to and
zoom in on specific actors as dramaturgically needed, thereby imparting an
informal, photojournalistic look to the finished sequence. The 16mm cameras of the BBC were presumably
portable enough to permit such a cinematographic treatment (they certainly look
to be in production stills), but the house style guide seems to have dictated a
treatment approaching as nearly as possible that of the indoor, studio-bound,
videotapic one—a sensible enough prescription given that the difference in look
between videotape and 16 mm cinefilm is quite glaring to begin with. “But in the videotape studios didn’t the BBC
directors have as many as eight cameras at their disposal?” Indeed they did, and the only way of
approximating such multiperspectivity with one camera was to film each scene in
its entirety as many as eight times in succession, with the camera being fixed
at a certain angle and centered on a certain actor, prop, bit of flora, autc., during
each take; and thereafter to splice bits of all the takes together in such a
way that only who or what was meant to be seen at each given moment ultimately
appeared onscreen at that moment. “But
surely then only a fraction of each take’s footage would end up being used,
and the majority of the accumulated footage would end up in the dustbin.” Surely.
“It sounds awfully wasteful.” So
it indeed does. But evidently the Beeb
found it more economically viable to throw away five or six hours of film
footage than to hire a single extra cameraman.
Anyway, the most immediate and inevitable effect of this modus
filmendi was to reproduce to some degree the dégagé, stage play-like
chronomantic atmosphere, the relaxed attitude to time, of the studio-bound
videotaped sequences. To be sure, as the
final cut had always been assembled out of several discrete time-stretches, one
never got quite the same temporal mood as one did from a multi-camera
videotape scene shot almost as live, in one or two takes. Still, inasmuch as the cut had been assembled
from a series of more or less complete and uninterrupted performances—inasmuch,
basically and in other words, as the actors had consistently been afforded the
luxury or constraint of playing off each other in so-called real time—one never
entirely lost the sense that the sequence had its origin in a single event
occurring in a single place. This
wholesale-cum-piecemeal approach may also have had much to do with exterior
16mm’s salutarily heavy reliance on close-ups.
Presumably the first and last run-throughs of a scene on a 16-mm shoot
were typically separated by an interval of several hours, an interval during
which early morning might pass into mid-afternoon or late morning into early evening. A lot can of course happen in one place
during such an interval, especially outdoors—the sun can dramatically change
position, or be completely obscured by clouds, a hundred-strong roost of birds
can take flight, an autumn tree can lose half its leaves, autc.—and such
changes can of course lead to risible breaches of continuity. But the 16mm-wielding director may have tried
to minimize and marginalize such breaches by filling the camera frame as often
as he could (i.e., as often as such fillage did not contradict the script) with
elements of his composition whose appearance he could best control—preeminent
among them the bodies and especially the faces of his actors. Unlike most of the set-door Johnnish
commentary in this essay, this last bit is genuine and unadulterated
speculation: I have never heard tell of any 16mm-wielding director’s rationalizing
his use of close-ups in such terms.
Whatever the cause of this prevalence of close-ups in 16-mm exterior production-slices, a prevalence even greater than that witnessed in their close-up-heavy interior videotape counterparts, there is no denying its salutariness
from a dramaturgical—and hence, ethical—point of view, for in concentrating the
viewer’s attention on the vocally articulate human face by excluding semantic
(or, if you will, semiotic) competitors therewith, it emphasizes the causal
efficiency of language—“—i.e., ‘the only substance less spectacular than
dynamite that really is capable of provoking instantaneous
transformations in the human organism’?”
Indeed. You have learnt well,
grasshopper. And as you have learnt
well, you doubtlessly will be able to anticipate the substance of my two cents’
worth on the legacy of exterior 16-mm practice in today’s digifilm-only
productions. “You mean, I take it, that
this legacy therein ‘is virtually nonexistent, for what goldfish-attention-spanned digifilmic pseudo-auteur, despite having image-storing space to thermonuclear-annihilate, would be so inventively perverse as to shoot a ten-minute scene chock-full of tiresome chit-chat five or six times in its
entirety? No: such a pseudo-auteur
obviously finds it far more prudential to devote that selfsame unlimited
storage space to a thousand takes of a ten-second shot of an actor wordlessly
(but ever so expressively!) scratching his or her crotch.’?” Even so.
THE (ACTUAL) END