It has been forever since I wrote on this blog! Today I felt like writing and thought I’d lay down some of the thoughts and feelings going around in my head, being in my mid-twenties. I turn 27 later this year, and my main goal is to work towards a career in an area I could feasibly thrive in, and to get my live performing off the ground. A few of my friends are now deciding to settle down and start families, and a few are engaged or married already. The last few years since university have flown by, but kids are now also on the menu for many. I myself have decided never to have children, but when I try to condense the reason, I end up thinking of loads:

1. I love my financial freedom, and freedom generally. I’ve sort of carried on the student lifestyle since leaving university, and will carry on doing so if I get the place I hope to apply for in September, studying librarianship. I like to have my own money and the freedom to go after my creative pursuits, which I could never do if I had kids.

2. I’ve never had a paternal instinct. That’s it, really! I’ve never felt like I should be a parent or would be a good one. I think I’m okay with kids, but don’t really know what to say to them.

3. Risk of potential problems. Obviously if you have a child, there is an inherent possibility that something will happen to them, or be caused by them. There is a lot of responsibility that comes with raising them, and worry which comes with it, for at least 18 years. This is basically something I’m happy to miss out on!

4. People without kids are generally happier. It’s true, and all the surveys show it. There are better prospects for long-term relationships for those couples who don’t reproduce.

5. Global overpopulation. Obviously the world and the country are overpopulated, and it’s only getting more condensed all the time. Not reproducing is better for the environment and the world as a whole. The last thing my country needs is an injection of a bunch of geeky white kids!

Thankfully nobody, including my family, particularly expects me to have a family of my own, being in a same-sex relationship, so adoption or surrogacy would, at present, be my only options. I’m very lucky to have that natural barrier. But I love being childfree and would be interested to see if my attitude will change at all in five or ten years’ time. That’s one of the reasons I’m putting this here, and it’s a good way to mark the resurgence of my blog as I inch towards 30. More posts to come!


If you find yourself in Amsterdam at some point before April and need a break from the freezing wind or gliding bicycles, be sure to visit the Stedelijk museum. A three-minute walk from the much celebrated Rijksmuseum, De Stedelijk is the curious setting which houses the late American artist Mike Kelley’s retrospective exhibition. Included in the price of a general museum admission ticket (around 7 euros) is access to several rooms and galleries pertaining to Kelley’s life lived through art.

Mike Kelley’s exhibition presents an audio and visual experience accessible to many, with his unique and bizarre take on America and its values, culture, ideology, ‘constructions in all their messy contradictions’ and crude representations of pop. Nothing is covered up from Kelley’s inspective eye, his dark and drastic sense of humour. An artist who freely interplayed with a vast array of forms, the legacy of Kelley on show here features both the age-old practice of nude drawing and modern sculpture and light projection, as well as television constructions broadcasting a loop of unsettling YouTube clips – itself interspersed with minimalist dots and clinical beeps. It seems Kelley was a master of portraying 20th-century popular culture in art. Who else would burst the mythic bubble surrounding the question of suppressed memory in his work (‘People tend to think about these works in a very generic way as, somehow, being about childhood. That was not my intent’[1]) but Kelley himself? Critics have made comparisons to the 1960s avant-garde, but if Mike Kelley was the Frank Zappa of the art world, then their similarities end where deep emotion and personal anguish are concerned; the serious decorum of authenticity, that of the oblique creator, is present throughout the show.

Sadly, following a bout of depression, Kelley committed suicide in January 2012 at the age of 57. This has enhanced his status as a troubled figure (as it undoubtedly would), but the work Kelley left us was always slightly alarming and at the very least interesting (I didn’t know about the nature of his tragic demise at the time of visiting). Whilst security will have to hear the sounds of feminine shrieks and indefinable circular clangs on a daily basis for the next couple of months, it is slightly irksome to know that this distinctive collection will never be added to, enhanced or shown in a new capacity. When cruising around the gallery in De Stedelijk, I noticed some of the pieces on parade consisted solely of words on a page, affirming any suspicion that writing can be a form of art too. Where, though, is the line drawn, so to speak, between literature, or simply empty words, and art itself?

Words and the practice of using them can only ever be conservative, in that one is restricted by the ‘box’ of the word’s meaning and the limitations of actual speech. Art by its nature rejects this limitation, so where words fail, art can often suffice – permeating those depths of the soul that letters can never reach. But I’m convinced these magisteria can and do overlap, as I’m sure Kelley was; the written piece may not simply be a craft. Who, that takes an interest in the practice, has failed to be affected by a high-calibre essay or a timeless work of fiction in the same way that they are by music or painting? Kelley himself trod the fine line between the two positions, and between the roles of spectator and partaker.

As I was backtracking through Kelley’s life in words, I came upon an interesting quote about critics in an interview on the Guardian’s pages: “Artist and critic are dependent on each other but have fundamentally different social positions and world views. As the story goes, the artist is uneducated but has a kind of innate gift for visual expression, which the educated and socialised critic must decode for the general population.”[2]

Kelley himself attained the unique position of being both artist and critic, releasing a book of art criticism called ‘Foul Perfection: Essays and Criticism’ with John Welchman in 2003. What’s more, he could remove himself further from both positions to gain perspective on the whole ordeal itself (hence the above quote); Kelley undermines his own words through his actions. Historically, the two positions are not as separate as is made out. The poet and writer John Betjeman divided his life’s work between the two occupations: a profound lover of the English countryside and Anglican churches, you could never be sure whether he was about to write a verse of accolade for the country in its days of old, or a structured critique, albeit one full of adoration. It also has a strange effect on the observer.

In his general idiolect Betjeman sort of spoke in the manner of the poet, an odd halfway position between general conversation and metaphorical interjection; videos of him speaking show him gazing up in slow candour, almost as though he were picking the words from some noble encyclopaedia in the sky. In an interview conducted by Betjeman for Down Cemetery Road[3] (1964), Philip Larkin makes the confession that “really one agrees with them that what one writes is based so much on the kind of person one is”. Larkin too was a critic, particularly of early jazz records and modernist literature; his polemical prose is collected in the fine publication ‘Required Writing’, though he asserted that he didn’t much enjoy this hack work.

Is this split of roles, this cognitive dissonance, at all praiseworthy? It is certain that one is not required to be detached from the world of creativity to be able to assess it oneself. It can produce some odd or even negative results; apparently Martin Amis’s ‘Koba the Dread’, which deals with Stalinism, fails boorishly where the writer neglects his familiar output of fiction. If the writer is painting with words, then Kelley’s own words must be wrong. The counter-examples show that even if the results are disingenuous or quirky (or brilliant), the two worlds do collide and often produce a lovely light in return.

Fifty years have passed almost to the day since Sylvia Plath was writing the last of her journals. Plath’s thoughts, feelings, stories and personal confessions were committed to paper in the last weeks of her life, only to be destroyed by her late husband Ted Hughes, in respect of the maxim that forgetfulness is ‘an essential part of survival’[1]. It will come as no surprise to the reader to learn that he himself was a poet, the artist lovers famously entwined in books, separated by their passion.

Plath’s suicide at the age of thirty in 1963 secured her reputation as the archetypal modern ‘suicide writer’, a hero of deep prose for the younger generation. So what is it about the enigma surrounding Plath that remains? A quick glance at the history of literature shows that she was hardly the first ‘troubled’ female author to gain such a reputation. Virginia Woolf had preceded Plath by about 35 years and was twice her age when she finally killed herself in 1941, though not before declaring “A woman must have money and a room of her own if she is to write fiction.” Plath built upon this affirmation with her tale of the conflicting desires inherent in the young writer in The Bell Jar, published a month before her death in 1963, sometime ‘Between the end of the Chatterley ban and the Beatles’ first LP’ (if we are to take Larkin’s account of the year).

The Bell Jar, arguably Plath’s most acclaimed work and the only novel she actually completed, is constructed in remarkably readable prose, with each turn of the page seeming more lucid than the last. Plath drew on her own youthful experiences for the novel, and the protagonist is undeniably a version of herself; Plath, like her doppelganger Esther Greenwood, lost her father at the tender age of eight and was something of a wandering nomad, moving around different parts of Massachusetts throughout her childhood before securing a position as editor at Mademoiselle in New York City following her college education. Born to an Austrian mother and a German father, Plath seemed to carry a sense of Holocaust guilt and a minor preoccupation with Jewishness, mentioned several times in the novel and in some of her seminal poems in Ariel (1965). In fact Plath’s father, Otto, shared his first name with that of the tragic Anne Frank’s father, a name that might ring odd in the ears of the modern-day observer. Plath moved to Cambridge, England to study, then to Devon, finally settling in London, burying herself in the English winters.

One factor which makes The Bell Jar and Plath’s work in general widely accessible is her ability to relay both the obscure aspects of a life lived in capitalism and the more general, the universal. An example of this realism is contained on page 53, where Esther describes the predicament of having wished she’d said something different in reply to a comment or jibe: ‘These conversations I had in my mind usually repeated the beginnings of conversations I’d had with Buddy, only they finished with me answering him back quite sharply, instead of just sitting around and saying “I guess so”.’ The metaphors employed by Plath for casual observations are succinct and mentally pleasing – ‘My secret hope of spending the afternoon alone in Central Park died in the glass egg-beater of Ladies’ Day’s revolving doors.’ – raising the question of why Larkin (a contemporary poet, though slightly older) never conjured such an image, considering his inevitable daily entry to Hull University’s Brynmor Jones Library.

Plath’s prose reads well on the page and gives one the impression that Plath barely had to try once pen had hit paper, an achievement most writers will understand to be harder than it looks. Plath’s writing contains mountains of clarity in the manner of George Orwell, though she leaves Orwell behind in her sometimes naive worldly observations and attitudes surrounding mental illness, the charming nihilism for which she is often famous. Plath is rejected for this too often by those who see fit to be critics, and the troubled Manic Street Preacher Richey Edwards even had the audacity to proclaim ‘I spat out Plath and Pinter’ in 1994, though he was undoubtedly influenced by her literary clout. One must keep in mind that at the time of writing, manic-depressive heroes were in short supply and the approach of emptying the contents of the mind, un-judged, was relatively new in the pre-sexual-revolution world of the early 1960s – the dark split between the social norm and personal reality.

A useful but humorous comparison may be drawn between The Bell Jar and Jean-Paul Sartre’s Nausea (1938), in that both contain characters dealing with the oppressive sensation of nausea. Sartre’s existential ‘masterpiece’ draws its strength from the sense of displacement one feels through the character of the lone wanderer, the individual who is incurably sick at the comprehension of nature’s reality and the movement away from a solipsism previously held (an idea which had historically plagued so much philosophical thought). Esther too becomes sick, in the social arena of hotels and cinemas. One presumes the reason for this is a similar sort of existential crisis – but we soon learn that flamboyant Esther had simply consumed too much bad crabmeat at someone else’s expense, as did many other girls in her company. The resulting impression (one that is welcomed) is that she too is a material being who is not to be singled out as any special exception, which relates in a curious way to the reader, who might have noticed such a distinction between fiction and reality of their own accord.

Plath’s poetry, for which she was best known in her lifetime, is notable too for its fractured imagery and its ability to incorporate life experience into a simple turn of phrase. She returns to her upbringing and family in what is arguably her most famous poem (published posthumously), ‘Daddy’, with its displacing opener:

You do not do, you do not do,

Any more, black shoe.

Reading the stanzas for a BBC programme in October 1962, following a creative spurt in which she produced around 50 poems in just a few months, Plath has a remarkable tone of voice, recalling Judy Garland’s Dorothy in point of directness and sweetness. This clashes irresistibly with the ghoulish content that permeates Plath’s work, establishing her as the first mentally ill housewife who wrote poetry for the masses. Plath fell forever in love with Ted Hughes and, following their fateful split, was consigned to a miserable last few months looking after two children on her own in a cold London flat. And what was left of Plath’s riddled conscience?

‘And I

Am the arrow,

The dew that flies

Suicidal, at one with the drive

Into the red

Eye, the cauldron of morning.’[2]

If there is any social statement which oversees Plath’s work, it is surely a critique of the American dream, the Western dream: the fact that even pretty suburban girls like herself can fall far from grace. This is the real face of depression, a girl with the need to die. Plath wanted badly to cast out the monster of mental illness and portray it in the English language.

Half a century since her death, Plath is remembered for achieving this above all things, and The Bell Jar is a novel I’d recommend to anybody in pursuit of either education or pleasure. This type of writing had been done before and has been done since; only Plath did it with such a distinctive style.



In recent years the push-back against those who argue against religious faith in public arenas (those people commonly classed as the ‘new atheists’) has become clouded by what I class as a pseudo-intellectual way of thinking, where all too often the person arguing on behalf of faith will turn the tables on the sceptic and equate their rational, scientific beliefs with their own faith in the gods and the heavens. It is not uncommon to hear these people say things like ‘trust in science involves just as much faith and susceptibility to dogma as religion’; such statements are not made only by undergraduates. In the last couple of months, two respectably written articles have appeared in the New Statesman based around the topics of religion, faith, evidence and reason, both of which I argue are essentially guilty of what I have just described: one is titled ‘Giant Leaps for Mankind’[1] by John Gray; the other is ‘The Goebbels of the English Language’[2] by Alan Moore.

In his review of Brian Leiter’s book ‘Why Tolerate Religion?’, John Gray discusses the difficulty in defining religious belief: ‘there is nothing particularly irrational or otherwise lacking in religious belief. After all, what counts as a religious belief?’. Defining a nuanced idea of religious belief may certainly be no easy task, but we can at least form an idea of some of its necessary conditions if we are to get anywhere in the English language: religion must involve some belief in a supernatural creator of the world and/or universe. If this is not so, the belief does not accord with any recognisable or traditional interpretation of the original three monotheisms, the ones with which I’m sure Gray is primarily preoccupied. Gray then goes on, rather strangely and irrelevantly, to conflate the motivations behind certain acts and events in history with those acts committed by people out of religious motivation. For instance, he says that the horrors of Soviet Russia imply that ‘faith’ claims about the workings of communism are flawed, and that the 2003 American intervention in Iraq was a secular, ‘faith’-driven adventure. Meanwhile he also invokes the ‘hunger for oil’ argument. But surely there either was an evidential reason to go into Iraq or there wasn’t, regardless of whether it was the right moral decision; Gray wants to affirm both at once, and in addition seems greatly confused about what we might term the ‘a-religious’ faith that supposedly lay behind it. The arguments have nothing to do with what secularism in the philosophical or intellectual sense means, and Gray is determined not to acknowledge that some ‘faith’ is more justified than others. This may be because he doesn’t believe it to be true.
But the point is elementary: the faith I have that I shall be nourished by my lunch today carries far more merit than the faith that an overseeing, all-powerful spaghetti monster awaits my death so that I can pass into heaven (just for example). So there are different kinds of faith, and they can be judged on their weight and merits on a case-by-case basis.

Is Gray seriously claiming that belief in a God who created the world and everything in it, observes our earthly movements and judges us upon our death (for sins which were brought upon us without our having any say in the matter), contains the same level of rationality or faith as the study of empirical, observable evidence to make judgements and decisions in the here and now? Gray, to me, rather condescends to the layman in bringing what are often absurd religious claims onto a par with complex but reliable scientific ones (that is to say, claims arrived at through a reliable method). Aside from annoying this ‘militant’, ‘new’ atheist, mainly by employing the facile oxymoron in the first place (how can one be ‘militant’ in one’s unbelief of something? What counts as religious militarism and atheistic militarism is considerably different in public terminology), Gray never actually explains how and why ‘most of our beliefs are always going to be unwarranted’, one of the main failures of the article.

This leads me on to the second article I mentioned, by Alan Moore. The subtitle of Moore’s piece is ‘We cannot state conclusively that anything is true’; this is a fairly accurate summary of the theme of the piece and the intentions behind it. His main beef with the concept of evidence seems to be that its validity relies on, well, evidence. This appears at first to be true – such a proclamation is indeed self-evident and in a sense grants itself – but in terms of pragmatics, real-life day-to-day stuff, the concept is not so circular. We could not live without evidence. We need it to help solve crimes, create life-saving medicines and conduct scientific experiments. And yet Moore seems to define the concept of evidence in strange, anthropomorphic terms, as though it were an individual event or quantifiable foe: ‘A glance at evidence’s back-story reveals a seemingly impeccable and spotless record sheet…’. What? Anything in the world can be evidence; literally anything. Precisely what is he pointing to when he says ‘evidence’s back-story’? Is it evidence for things he doesn’t like?

Moore is within his rights to make the distinction between ‘evidence’ and ‘proof’ (though the former often constitutes the latter), because proof can be had without evidence. But when Moore invokes the philosopher Karl Popper’s theory of falsifiability, he commits a category error. It is certainly true that nothing can conclusively be proven to be true, for we would need infinite time and ability to assess it all, and that the principle of falsifiability – that we can only demonstrate at most that a hypothesis has not yet been falsified – is the best way to go about conducting scientific enquiry. However, religion is primarily a scientific question, for it makes bold empirical and, perhaps eventually, testable claims; one should not leap to a truth claim about God’s existence simply because it hasn’t been proved that he does not exist. This is the principle of falsifiability in action: the burden of proof is not to show that things do not exist but that they do. Doubly so with grandiose claims about the nature of the universe and the things that happen to us when we die. Again, there is nothing new about what is being said here. Evidence is crucial, and it is absolutely right to ask for its consideration, especially when so much is at stake as it surely is with religion.

The verification principle is useful for questions of scientific enquiry, but it cannot really be put into practice with regard to supernatural claims. Such questions certainly are not meaningless (there has to be a truth-function to these claims, unless you are a relativist), but by ignoring the distinction between ‘faith’ and beliefs based on reason, Moore falls into the same trap as Gray. Do these two gents not trust science, or the concept of evidence? If not, they are kindly invited to climb to the top of a ladder and jump off to test their ‘unreasonable’ faith in gravity – ah, but of course neither would want to do any such thing. The basis for their attempts to create a level playing field for reason and faith, for scepticism and credulity, is flawed, and they really ought not to be so disingenuous. If they want to be relativists about truth, they should be consistent and come out with what they really mean.

The correlation between military combat and the writing of poetry has a long and historic tradition. There are many suggested reasons for this: war unfailingly touches the hearts of everyone involved in it and raises life’s deeper questions about death, justice and the nature of humanity. It is also about incorporating the ‘unchanging aspect’ of the primitive fight into a literary tradition, according to former Army Captain Patrick Bury, of the Royal Irish Regiment.

Bury, 31, was at Hull University last week to discuss his book Callsign Hades (2011), an account of his time fighting the Taliban in Helmand Province, Afghanistan over a period of four years. The presentation, which lasted around an hour, featured anecdotes about his regiment in combat in Sangin (a Taliban stronghold), his personal admiration of the war poets gone by, and the eternal relationship between politics and prose.

The tragedy of the First World War is imprinted in the minds of all British schoolchildren through the storytelling of Wilfred Owen, Siegfried Sassoon and Rupert Brooke, and for Bury at least, it seems to have had a lasting impact. I asked whether the mythology of living up to these literary giants was what spurred him to begin his project:  ‘It was definitely something I decided to do after I went in. I’m not the biggest fan of war poetry but I felt a need to record the events I was witnessing in some way’.

Bury accordingly touched on how he sometimes felt as though he was signing up for the ‘old lie’, an idea immortalised in Owen’s Dulce et Decorum Est (1920), which deals with justice amidst the fighting and the heart-breaking reality of conflict. Further scepticism crept in when he discussed the ‘corrosion of combat at a moral level, leading to the realisation that young men were sacrificed for the policies of old men’. Bury used this sweeping maxim to bring the issue up to date, making important points regarding the severe lack of equipment and shortage of soldiers in the Afghanistan War.

In contrasting again the wars of old with our modern clash between Western ideals and the theocratic terrorists of the Middle East, I asked Bury whether he felt a combative distinction between ‘localised’ wars like the Great War and our current ideological battle: ‘I don’t really think there’s much difference – except for the lives of those back at home’. Bury is clearly not too interested in getting into a political debate about war, for he wants to channel the unchanging nature of conflict and incorporate it into creating ‘self-identifying’ poetry for soldiers.

After a discussion of Bury’s childhood, it seems obvious that his early ambition to join the military, rather than any political or literary ambition, was the deciding factor in his going to war. This very ‘masculine’ desire permeated the young Patrick Bury growing up during The Troubles in Ireland (with a couple of ‘hippies’ for parents), as did the relatively ‘simple’, masculine poetry of war, following the template of poets like Alan Seeger and Wilfred Owen.

This macho drive, it seems, was far-reaching across the army. At one bizarre point in the presentation, Bury played a YouTube video of the AC/DC song ‘Hells Bells’, claiming that he and his comrades in the regiment would play the song loudly before combat, to get them pumped up for the fight. This recalls perhaps one of the worst clichés about the US and British Army, encapsulated in Francis Ford Coppola’s classic Apocalypse Now (which Bury briefly mentioned), in the scene where the helicopters blare out music whilst blasting a Vietnamese village with gunfire. Perhaps, in light of the earlier discussion, ‘War Pigs’ by Black Sabbath would have been more appropriate, considering the lyrics: ‘politicians hide themselves away, they only started the war, why should they go out to fight, they leave that role to the poor’.

At moments like this, and when Bury tries to fuse together cross-generational war poets by using generalisations, the presentation can feel more like an amateur high-school assignment than a university guest lecture. But first and foremost, Patrick Bury is an ex-soldier with a clear and valuable insight into a relatively normal soldier’s life in the army, and into the traditional relationship between this experience and literature. And I have learned about the unbridled motivation behind Bury’s, and probably countless other soldiers’, passion for warfare: a pressing sense of duty, summed up best by Alan Seeger himself: ‘I have a rendezvous with Death’.

It is fair to say that spectators and speculators on Philip Larkin, from Anthony Thwaite to Christopher Hitchens, reveal in their prose the idea that Hull says more about Larkin than the other way round. This might be only partially true. By studying Hull and its literary tradition, we can try to understand how this popularly neglected, post-industrial northern town has provided fertile ground for some of our country’s great writers.

The old King’s Town of Hull produced its earliest significant literary father, Andrew Marvell, in the early 1600s. Marvell was a republican hero as well as a poet, and was elected to Parliament for Hull in 1659, a kind of predecessor to the much celebrated William Wilberforce. The obscure Marvell revelled in the tradition of metaphysical love poetry, developing his craft at the intellectual haven of Trinity College, Cambridge, alma mater of Newton, Russell and, more recently, Stephen Hawking. He happily served Hull from Westminster for the rest of his life and left a significant legacy to those studying such periods in English poetry.

Most, if not all, of the popular writers associated with Hull are not part of any scene or movement. I sometimes realise with mild annoyance that we are so decidedly out of the popular music circuit in England, apart from a brief spell in the 90s with the Adelphi Club, and it would probably be unfashionable to be otherwise, for the reasons illustrated below. None of Hull’s poets really form any meaningful collective – Motion was just starting to publish his poetry toward the end of Larkin’s career and, indeed, life – and they have tended to reside at the University, a place which also attracted Professor David Wheatley.

Despite Hull’s distance from such musical endeavours, one of the wittiest and most lyrically provocative songwriters in pop music, Paul Heaton, also made Hull his adopted home as frontman of The Housemartins and The Beautiful South. As a modern champion of left-wing causes in a town one feels would otherwise be unwilling to propagate him, Heaton encapsulates the idea that the mundane can be beautiful and must be celebrated with a pinch of irony. Hull, it seemed, was the perfect catalyst for this spirit.

Hull certainly feels decorated by the baby boomers, with its long concrete 60s department stores interspersed with Victorian structures like the Station Hotel and the old theatre. Fashion, fame and glamour somehow clash with this, perched as it is on the rugged coastline reaching out toward Scandinavia. The working-class character and blunt demeanour of the people supply ample opportunity for anyone learning the craft of kitchen-sink realism, demonstrated through the writers I have included. The attraction seems plausible; what else by way of art and reflection can be done but observing, writing, residing in the cocoon? Larkin sums up this unlikely breeding ground in a letter to Robert Conquest: ‘Hull is like a backdrop for a ballet about industrialism crushing the natural goodness of man, a good, swingeing, left-wing ballet’.[1]

Pursuing the location further, Hull is unique because it is awkwardly positioned, and so, if you come here, you come for a reason. Cold, rugged, halfway to Scotland from London, numbingly unexciting, it maintains its satisfying quirks: the cream telephone boxes, its artistic flair, the immense suspension bridge by which it is approached, a good university. There couldn’t be a more subtle place for a writer to be, particularly one desiring to be shaped by their surroundings. Interestingly, many such writers, like Larkin, Roger McGough and Andrew Motion, chose to live here rather than originating here. Larkin, unlike the others, never became emancipated from the place and eventually immortalised Pearson Park in ‘High Windows’ in 1967.

To make sense of this, one must conclude that writers require a blank setting to fill with their imagination, deprivation of high luxuries being a familiar virtue, in particular for those inclined toward poetry. It was of course the deep isolation of the trench which led poets Siegfried Sassoon and Wilfred Owen to record the tragedy they witnessed forever on paper. Terry Eagleton says ‘The Hull setting was symbolically apt for Larkin: as the 20th Century unfolded its wars and revolutions, he cowered behind the book stacks in this remote provincial outpost’[2]. Whilst I think that last description is a little unfair, Larkin truly did create a fragile nest, and often complained about Hull and the political state of his beloved England in his letters to, among others (like lifelong friend Kingsley Amis), his fellow librarians. In contrast, George Orwell actively engaged in politics, writing extensively on the Soviet revolution and WWII, having served with the Imperial Police in Burma in his earlier days. One cannot imagine such a ‘street fighter’ choosing Hull as an apt setting for his life’s work.

There is therefore hope that Hull's literary tradition will continue, and that the same attractions will bring a few more significant novelists, poets, essayists and intellectuals here. The 'Larkin 25' celebrations a couple of years ago officially acknowledged Hull's importance to modern literature, but to me they sent out a more profound message: there's inspiration in this soil yet.

[1] Unacknowledged Legislation: Writers in the Public Sphere, Christopher Hitchens, first published in 2000 by Verso.

[2] Selected Letters of Philip Larkin 1940–1985, edited by Anthony Thwaite, first published in 1992 by Faber & Faber Ltd.

Human privacy is something we don't tend to think about much, other than when faced with the prospect of having it taken from us. Except I did, the other day, walking past a window with some particularly short and shoddy curtains which, whilst helping to illuminate the room somewhat, also drew my attention to the student-standard MacBook, football posters and so on, there for all to see. It may also have been because that day I had read Andrew Sullivan's 'classic Tory' views on transparency in American government: he claimed that whilst the government should certainly be accountable for its behaviour, it shouldn't be totally exposed, nor should the content of meeting discussions be disclosed. It must have its privacy. So is privacy perhaps the defining concept in the life of the free individual in a functioning society, for those of us in the West?

Retreating to a Marxist perspective, one can muse that the private sphere is the product of an oppressive capitalist society: that we are social creatures by nature, and that a free-market global economy has isolated us from that nature for hundreds of years. But we seem to have a more tenacious and inherent desire for privacy than for almost anything else in Western society. We will fight for it at the risk of losing our lives, and overall it has clearly triumphed. Perhaps, then, privacy is the embodiment of freedom itself. For those cynical about what this life virtue can produce, put on Malvina Reynolds' 'Little Boxes' as the soundtrack to your reading and you'll hear a caricature of the perils of isolation, centred on the private home.

Surely the societies we consider the most oppressive in the world are those which do not permit the individual a private life. North Korea is probably the best example: the capital, Pyongyang, has a strict 'lights out' blackout at around 10 p.m., and all phone calls are monitored, so private discussion is difficult to have at all, an unfailing reminder of Orwell's Nineteen Eighty-Four, published, as it happens, one year after the founding of this horrific state. A North Korean citizen, however, probably could not think like Winston Smith; from a young age the confines of their social environment (and language, as Orwell's Newspeak again demonstrates) would constrict their concept of freedom or privacy, or the possibility of such ideas even existing, let alone being attainable.

Again, I seem to have made 'privacy' synonymous with 'freedom', but can the two ideas actually conflict? Those who feel uncomfortable around public toilets (perhaps those who are not cisgender) may dislike the social pressure of having to enter a cubicle designated male or female, in which case their freedom is limited by having to make a choice, even while their business remains private. Social critics and Marxists will point out that our structures for upholding privacy, like this example, or the pressure to move into our own houses and flats from a young age, make us prisoners of ourselves and of our true human nature. I see some aspects of this in society (in things like schooling), but reject the overall interpretation of a great social human nature. If we have any 'natural' traits as humans, it is surely that we are curious and envious. These traits show in our fanaticism for tabloid newspapers in Britain, and in the stoning in Saudi Arabia of a woman who might have had a secret love affair. Privacy, it seems, must be retained to a significant extent in the name of individual benefit.

We like our privacy, but I wonder whether we are so quick to permit others the same right. The sensation surrounding Julian Assange over the last two years is an intriguing case. Given that he and his associates leaked private files containing sensitive government information, does he still have the right to move freely without interruption, or to defend himself in court against a rape charge? Those who think not have to admit that, for them, privacy is not a content-free construction: if it were, permitting it for one body would mean permitting it for everybody. Bearing this in mind, I wonder whether I agree with Andrew Sullivan's view after all. It should be pointed out that when we punish individuals, we do so partly by taking away their privacy, something I don't necessarily disagree with, and a thought I think we relish where 'moral' or actual criminals are concerned. Assange, if convicted, might face those consequences in prison, and it is for the law to take the full details of the case into account. Still, I am sure we like to delve into the private lives of others more than we might admit. Trans* people, if asked, often say that the reason so many people feel uncomfortable when they cannot immediately gender-type someone is that they want to know about the genitalia of strangers. And yet we would hardly say a stranger has the 'right' to such personal details.

If we are going to learn about humans and their habits and behaviour for anthropological, socially progressive and historical ends, a good degree of transparency seems a must. There is a clear split between personal and social privacy, and this argument elaborates on the former. There is a reason we feel horrified at the extreme invasions of privacy depicted in The Lives of Others (2006): we are so used to having our private lives left relatively untampered with. But the more we shut ourselves off from the rest, the quicker society follows the trend.