Categories
Arts & Humanities History

In the twenty-first century, what is monarchy for?

This long-read article was written by Edward Eves for the 2022 Robinson College essay competition, and was highly commended.

Estimated read time of essay: 15 minutes

The twentieth century was a disaster for royalty. In July 1900, King Umberto I of Italy was assassinated at Monza, and in 1946 his son, Vittorio Emanuele III, abdicated the throne. In 1908, King Carlos of Portugal suffered the same fate as Umberto, and two years later the Portuguese monarchy was abolished. In 1913, the King of Greece was also murdered. As Leonard Woolf writes, the period was “a holocaust of emperors, kings, princes, archdukes and hereditary grand dukes” [1]. The most notable extirpation was in Russia, where the last Tsar and his family were murdered in the basement of their home on July 17, 1918. In each case, the killings were presented as prerequisites for the building of democracy in Europe. However, in almost every European nation that had been ruled by a monarch, the void left behind was filled by an ambitious dictator. Was this really the solution the people envisaged when they ousted their monarchies? Now, in the twenty-first century, only 12 European monarchies remain. What do these monarchies stand for? What did they stand for in the past? And what will they stand for in the future?

Since Charles I, often considered the last absolute monarch of England, the monarchy in the United Kingdom has suffered a steady decline in political power. Today, its role is to offer unbending support to the party in power, whoever that may be, alongside various smaller duties such as the weekly meeting between monarch and Prime Minister. Such meetings have been fruitful for Prime Ministers; Clement Attlee, a former Prime Minister, commented, “Yet another advantage is that the Monarchy is continuously in touch with public affairs, acquires great experience, whereas the Prime Minister might have been out of office for some years.” [2] This experience and knowledge from an alternative perspective remains valuable as advice and counsel for each Prime Minister who holds office. Another common characteristic amongst surviving monarchies is subtlety: the ability to exist as symbols of the state while, in almost every nation, the people concern themselves more with government than with the monarchs themselves. As Amman, editor of The Economist, writes of Queen Elizabeth II, “Indeed, one of her greatest achievements is that she has never said anything of any interest in public.” [3] Amman refers here to the Queen’s propensity for remaining unremarkable in opinion, and thus unoffending to the people. The monarchies that remain do so as emblems of the nations they shepherd. Their role is not to disturb public opinion but to remain neutral and in full support of their nation’s political leaders. In doing so, they allow democracy to take its proper shape while their position on the throne avoids the threat of dictatorship seen in other European nations.

Alongside political discretion, the pageantry associated with monarchy provides tremendous joy and a vivid sense of community every year. From performers to military parades, street parties and live music, events such as Queen Elizabeth’s Platinum Jubilee this year produce an enormous feeling of patriotism and pride in one’s nation. Johnathan Welch of The Critic sums up the essence of these events, writing, “Yet our need for solidarity remains. For want of battle, we can find such solidarity in festival. In days such as these, that need is met through celebration of a person who embodies our whole nation” [4]. In bringing the nation together in celebration and festival, the monarchy underlines its importance to national pride. Inevitably, there are those who do not buy into the occasion and the collective buzz of pageants. The same article in The Critic continues, “To the cynic, perhaps this all tells the sorry tale of cultural decline. Of a people lost at sea, searching without rudder for cohesive cultural identity” [5]. A growing portion of the population has fallen out of touch with the monarchy, and Welch highlights the increasing feeling of cultural uncertainty around the surviving monarchy and its seemingly unnecessary ceremonies. Nonetheless, whether foraging for a cohesive cultural identity or painting a thick facade over a nation’s cultural decline, the value of national parties in celebration of the monarchy cannot be overstated.

In the UK, the Queen and other members of the royal family are patrons of over one thousand charities and organisations across the UK and the Commonwealth [6]. This staggering number highlights the royal family’s desire to aid those who are less fortunate. The late Prince Philip alone was associated with 992 charities in one capacity or another throughout his 99-year-long life: a testament to his remarkable devotion to his people, and a strong example of surviving monarchies’ capacity for charity. A strong argument amongst those opposed to the continuation of monarchy in Europe is its extreme wealth, and the flaunting of that wealth while others live in desperate poverty. However, the distribution of this wealth through charity and the support of over a thousand organisations helps to dispel this animosity. In addition, it must be noted that whilst monarchies are a burden on the average taxpayer, in the UK, for example, the Sovereign Grant to the monarchy in 2021 was £86 million [7], while the estimated annual income from tourism attributable to the monarchy is around £500 million. Therefore, while growing numbers of republicans are offended by the wealth and extravagance of the monarchy, there is strong evidence to suggest that the monarchy is highly profitable, especially in the UK. Considering these figures and the sheer number of charities that the British royals continue to support, the monarchy’s influence over social and financial equality is a crucial element of its existence in modern times.

Moreover, Britain’s Royal Family and its Armed Forces have a special relationship that goes back centuries. As sovereign, the Queen is the official Head of the British Armed Forces, and this is extremely important. Paxman emphasises this: “armies work by cultivating emotion…To do so requires the development of an instinctive loyalty. Military organisations act upon commands, so they need a hierarchy, at the top of which will inevitably sit an individual – the monarch.” [8] The army has a very strong commitment to the Queen, and its loyalty towards her is unmatched. Paxman further quotes the Commandant of Sandhurst, who states that he has “never, ever heard a soldier say that he is fighting for Britain. They’re fighting for the Queen.” [9] For many soldiers, simply fighting for their country is not motivation enough, but to fight and defend their Queen and monarch is an honour. The Royal Navy has particularly close ties with the Royal Family, with numerous members of the House of Windsor, such as George V, Edward VIII and Prince Charles, attending naval college and training as sailors. The idea was for these future monarchs to gain important virtues, such as punctuality and self-reliance, that would be required of future kings. This highlights the monarchy’s respect for the Armed Forces, and the close connections that continue to exist between the two prove extremely beneficial for the spirit of the Armed Forces and the defence of the United Kingdom.

Similarly, the monarchy is hugely beneficial to the Church of England. As ‘Defender of the Faith and Supreme Governor of the Church of England’, the Queen is a vital figure, essential to the promotion of faith in the UK and the Commonwealth. The Queen’s strong association with the Church was symbolised at her coronation, when she was anointed by the Archbishop of Canterbury and took an oath to “maintain and preserve inviolably the settlement of the Church of England” [10]. These strong ties to religion and faith add to the mystical aspect of the monarchy, which convinces many that its members are almost “above human”. Additionally, on the advice of the Prime Minister, the Queen appoints Archbishops, Bishops and Deans of the Church of England, who then swear an oath of allegiance to her. Furthermore, the Queen is known to be a devout Christian who regularly attends services with other members of the Royal Family. This is well documented and serves as the perfect advertisement for Christianity and other faiths in an England where atheism is on the rise. Once more, close ties to the monarchy bring huge benefits for religion, particularly the Church of England, with the Queen a perfect image of hope and faith.

However, besides all their current roles and functions, one question still remains: what will the monarchy be for in the future? With Queen Elizabeth II turning 96 this year, it is only a matter of time until her legacy passes to her eldest son, Charles. The Queen has enjoyed the longest reign ever seen in the United Kingdom and has rightfully earned the undying support of the majority of her people. How will this change when she dies? The Queen’s death will mark a new age for the monarchy, and Prince Charles will have to adapt its function in order to remain relevant and maintain majority backing from the British public. In his support for global conservation and as a champion of environmental causes, Charles has found his cause. In a world where global warming poses a serious threat to human existence, Charles will garner huge support for his campaigning to protect our planet. The most important balancing act for the monarchy remains staying in touch with the people whilst staying far enough above them to be marvelled at and admired. In championing conservation across the globe, Charles can connect with the people whilst remaining an idealised figurehead of the nation. This is only one example of how King Charles may wish to lead his country when the crown arrives on his head, but the central problem, which will no doubt outlive his reign, will be how the monarchy, an ancient system of rule, can survive in the twenty-first century.

To draw to a close, in the twenty-first century there are still many roles that remain for monarchies. Arguably the most important is the devoted support for government, together with the growing modesty and discretion of modern monarchies, which allow democracy to thrive. In the UK, the Queen’s roles as Supreme Governor of the Church of England and Head of the Armed Forces prove vitally important for the sustained popularity of the Church and the strength of will of the British Armed Forces. Lastly, monarchies’ numerous charitable roles and their importance for tourism continue to provide enormous benefits to modern economies and contribute heavily towards social equality. As we look ahead to the rest of the twenty-first century, the problem that surviving monarchies face is their relevance and suitability within modern governments and economies: their ability to modernise and adapt to the current climate whilst preserving the enchantment of their people. For now, however, the monarchies prove to be tremendous symbols of devotion and commitment to their nations, and whether or not they are removed, they will remain hugely important figures of history.


References

[1] Leonard Woolf, After the Deluge (London: Penguin, 1937), pp. 71-2.

[2] https://www.politicalscienceview.com/what-is-the-role-of-the-monarchy/

[3] https://www.economist.com/international/2019/04/27/how-monarchies-survive-modernity

[4] https://thecritic.co.uk/the-power-of-the-pageant/

[5] https://thecritic.co.uk/the-power-of-the-pageant/

[6] https://www.ucl.ac.uk/constitution-unit/explainers/what-role-monarchy

[7] https://www.instituteforgovernment.org.uk/explainers/royal-finances

[8] Jeremy Paxman, On Royalty (London: Penguin, 2006), p. 112.

[9] Jeremy Paxman, On Royalty (London: Penguin, 2006), p. 113.

[10] https://www.royal.uk/queens-relationship-churches-england-and-scotland-and-other-faiths

Bibliography

Amman, ‘How monarchies survive modernity’, The Economist, 27 April 2019; online edition, [https://www.economist.com/international/2019/04/27/how-monarchies-survive-modernity, accessed 27 July 2022]

Johnathan Welch, ‘The power of the pageant’, The Critic, 5 June 2022; online edition, [https://thecritic.co.uk/the-power-of-the-pageant/, accessed 25 July 2022]

‘What is the role of the monarchy?’, UCL, The Constitution Unit, [https://www.ucl.ac.uk/constitution-unit/explainers/what-role-monarchy, accessed 27 July 2022]

‘Royal Finances’, Institute for Government, 1 June 2022, [https://www.instituteforgovernment.org.uk/explainers/royal-finances, accessed 28 July 2022]

The Week Staff, ‘How the world’s monarchs are adapting to modern times’, 16 June 2019; online edition, [https://theweek.com/articles/847076/how-worlds-monarchs-are-adapting-modern-times, accessed 23 July 2022]

Jeremy Paxman, On Royalty (London: Penguin, 2006)

A. Purdue, Unsteady Crowns – Why the World’s Monarchies Are Struggling for Survival (Cheltenham: The History Press, 2022)

Andrew Marr, Elizabethans – A History of How Modern Britain Was Forged (London: William Collins, 2020)

Categories
Arts & Humanities Classics History

Did racism exist in the ancient world?

This long-read article was written by upper sixth former Sebastian Norris.

Estimated read time of introduction: 1 minute

Estimated read time of essay: 11 minutes

Introduction

Setting aside preconceptions and a modern, judgemental approach is difficult when discussing racism in the ancient world, due to the pertinence of the issue in our society. However, it is also easy to justify ancient attitudes by the argument that racism as we know it has only existed in more modern times. Therefore, to avoid getting involved in debate about how to define racism, I am regarding racism as the belief that one’s race is superior in some way to others, as such beliefs are at the very heart of racial discrimination.

In order to discuss the question in some depth, I am only considering the ancient Greeks, as their beliefs and prejudices have been so influential, especially since they were formative for the Romans’ views and therefore endured for a long time. However, it is also important to consider the difference between racism and xenophobia, as despite the common use of the word racism describing hate crimes or speech, it is fundamentally the ideology of superiority of a race, whereas xenophobia is the fear or hatred of foreigners.

Within these parameters, clearly some form of racism did exist in the ancient world, as the ancient Greeks had a strong sense of superiority over non-Greeks, seen both in their attitude towards foreigners, for whom they had a collective term (‘βαρβαροι’) to signify that they were not Greek, and in how they saw themselves, that is, as a pure race descended from the Earth itself, giving them a sense of superiority over other races.

To view Seb’s full article, follow this link below.

Categories
Arts & Humanities FTRP History Law & Politics

To what extent does Mao Zedong deserve his reputation as one of history’s most notorious dictators?


This essay was written by lower-sixth former Austin Humphrey, and shortlisted for the 2020 Fifth Form Transitional Research Project. The following provides a short abstract to the full essay, which can be found at the bottom.

Estimated read time of abstract: 2 minutes
Estimated read time of essay: 11 minutes

Mao Zedong was Chairman of the People’s Republic of China from 1949 until his death in 1976. Mao was a Communist revolutionary, described as having an ‘emphatic aura’ and ‘exuding overwhelming power’. He is known globally as an infamous killer, responsible for the deaths of millions, but can he be compared to the likes of Hitler or Stalin? Another question to consider is what makes a dictator notorious; by weighing these factors we can determine whether or not Mao deserves his reputation.

Firstly, we can examine Mao’s death toll. In 1958, Mao’s ‘Great Leap Forward’ killed approximately forty million people by forcing peasants to stop work on farms and begin producing steel. Mao took over all agriculture in China with no farming experience. He demanded that farmers kill sparrows to stop them eating the crops; however, the sparrows had been eating pests, and so had been improving crop yields. Hence Mao’s arrogance and ignorance caused one of the most devastating famines in history.

To put Mao’s numbers in context, we can look at Pol Pot, the former dictator of Cambodia. Pol Pot killed two million people, which seems small compared to Mao. With perspective, however, Pol Pot was responsible for the death of a quarter of his whole country, while Mao killed roughly 6% of his. Therefore one reason for the extraordinarily high number of deaths is simply that China’s population, at 670 million, was so much greater than other nations’.
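The proportions above can be checked with quick arithmetic; a minimal sketch in Python, assuming the essay's round figures (two million deaths out of a Cambodian population of roughly eight million, and forty million out of 670 million for China):

```python
# Death tolls as a share of each country's population.
# Population figures are the essay's round estimates, not census data.

def death_share(deaths: int, population: int) -> float:
    """Return deaths as a fraction of the total population."""
    return deaths / population

cambodia = death_share(2_000_000, 8_000_000)     # Pol Pot's Cambodia
china = death_share(40_000_000, 670_000_000)     # Mao's China

print(f"Cambodia: {cambodia:.0%}")  # a quarter of the country
print(f"China:    {china:.0%}")     # about 6% of the country
```

The absolute numbers point one way and the proportions another, which is exactly the essay's argument.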

Just examining the number of deaths may not be as important as analysing the intent behind them. Whilst the number of people Mao killed was almost double what Hitler and Stalin killed together, his primary intention was to increase China’s industry to make it a world superpower. This highlights Mao’s noble intentions whilst in power. 

On the other hand most would agree that Adolf Hitler’s intentions were horrific. When he murdered eleven million people in death camps, he singled out groups in society as second class humans, then purposely slaughtered them. Therefore, as death by ill-judgement is not the same as death by ill-intent, Mao doesn’t deserve to be compared with the likes of Hitler, who set out with the aim of genocide.  

Mao wasn’t completely innocent of malicious aspirations. In 1956 he launched ‘The Hundred Flowers Campaign’, an opportunity for everyone to present ideas on how to improve China. After a few months, the campaign stopped, and anyone who had criticised the government was persecuted. Many people, including Sir Gerry Warner, former deputy chief of MI6 and a British diplomat in China at the time, believe that ‘The campaign was a deliberate attempt to flush out those who opposed Mao and Communism’.

In conclusion, there are many ways to judge notoriety, the most important of which, I believe, is intent. Due to the vast number of his killings, Mao deserves his reputation as one of history’s most notorious dictators, but his largely honourable intentions mean he falls short of the notoriety of those who set out to harm others.

To view Austin’s full article, follow this link below.

Categories
Arts & Humanities FTRP History

What was the most important initiative carried out in response to problems posed by the Crimean War in improving healthcare for infantry soldiers?

This essay was written by lower-sixth former Sebastian Evans, and shortlisted for the 2020 Fifth Form Transitional Research Project. The following provides a short abstract to the full essay, which can be found at the bottom.

Estimated read time of abstract: 1 minute
Estimated read time of essay: 9 minutes

This FTRP is about the main medical advances in healthcare for British infantry units during the Crimean War, which lasted from 1853 to 1856, and how these advances helped the British infantry in future wars fought by the British Empire. The essay discusses advances and improvements in hospitals, improvements in soldiers’ diets, the introduction of ambulance trains and ambulance ships, and the strengthening of the medical staff working to save wounded soldiers. It looks not only at the achievements of Florence Nightingale, but also at other factors and initiatives that improved the chances of survival for sick or wounded infantry during the war. While all the initiatives mentioned were important, the single biggest one was undoubtedly the improvement of hospitals during the course of the war.

To view Sebastian’s full article, follow this link below.

Categories
Economics FTRP History STEM

How Gambling in the 17th Century has shaped insurance markets in the 21st century

This essay was written by lower-sixth former Moog Clyde, and shortlisted for the 2020 Fifth Form Transitional Research Project. The following provides a short abstract to the full essay, which can be found at the bottom.

Estimated read time of abstract: 1 minute
Estimated read time of essay: 11 minutes

In 1654, the Chevalier de Méré, a French nobleman, posed the notorious ‘Problem of the Points’ to Blaise Pascal, an esteemed mathematician. The Problem of the Points concerns a game of chance between two players with equal chances of winning any given round, and asks how to split the stakes if one gambler has to leave the game prematurely. Despite several attempts, finding a definitive solution had stumped even the greatest minds of the previous two hundred years, most notably Luca Pacioli (the ‘Father of Accounting’) in 1494 and Niccolò Tartaglia (solver of cubic equations and the first to apply maths to the paths of cannonballs, otherwise known as ballistics) in 1556. Even the great Galileo failed to discover a reasonable solution. Pascal was determined to find a logical and fair answer, and thus reached out to Pierre de Fermat, a brilliant mathematician himself. In their resulting correspondence, the pair developed the first explicit reasoning about what is today known as ‘expected value’ and laid the groundwork of probability theory, earning them the joint title of ‘the Fathers of Probability’.
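The reasoning Pascal and Fermat developed can be illustrated with a small enumeration; a sketch in Python, assuming the classic textbook instance in which one player needs one more round to win and the other needs two, each round being a fair toss:

```python
from itertools import product

def fair_split(needs_a: int, needs_b: int, stake: float = 1.0):
    """Split the stake by each player's chance of winning from here.

    Fermat's method: imagine all remaining rounds are played out
    regardless (at most needs_a + needs_b - 1 are ever required),
    enumerate every equally likely sequence, and count those in
    which player A collects enough wins.
    """
    rounds = needs_a + needs_b - 1
    a_wins = sum(
        1 for outcome in product("AB", repeat=rounds)
        if outcome.count("A") >= needs_a
    )
    p_a = a_wins / 2 ** rounds
    return p_a * stake, (1 - p_a) * stake

# A needs 1 more win, B needs 2: the fair division is 3/4 to 1/4.
share_a, share_b = fair_split(1, 2)
print(share_a, share_b)  # 0.75 0.25
```

Each player's share is the stake multiplied by their probability of winning the interrupted game, which is precisely the modern notion of expected value the essay goes on to describe.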

Although it is easy to underplay the significance of this breakthrough as merely a clever, tidy solution to appease opposing gamblers, in reality it was truly revolutionary. It is difficult to overstate how vast and significant the cognitive shift across Europe was that followed this solution. The notion that you could attach numbers to the future had been alien to mathematicians merely years before the solution was proposed. Soon, others began to see the possibilities that this concept generated.

Within three years, Christiaan Huygens adapted Fermat’s theory into a coherent pamphlet entitled ‘De Ratiociniis in ludo aleae’, which was used as the standard text on probability for the next 50 years. Huygens attributed his developments to “some of the best mathematicians of France” (i.e. Pascal and Fermat). The text spread like wildfire through the academic community, as it was evident that the new science of probability had the potential to transform the world. In the following years, Huygens’ work was ripped out of the context of gambling and thrust into several aspects of life, including law and mathematics. In particular, it was applied to a very different, brand-new data set: mortality tables. Almost immediately, by using specific, intricate data, insurance shifted from a form of blind gambling, based on hunches and guesswork, to a remarkably accurate science.

It is now clear that this rapid chain reaction of discovery, which underpins all modern notions of mathematical ‘expected value’ and insurance, came not from savvy merchants but from avid gamblers eager to improve their craft.

To view Moog’s full article, follow this link below.

Categories
Arts & Humanities History Law & Politics Social Sciences

Irving vs Lipstadt- The Precedent on History

This article was written by sixth-former Omeet Atara.

Estimated read time: 3 minutes

In the case of Irving vs Penguin Books Ltd, the law was embroiled in a difficult case which forced it to decide on the validity of a historical claim. Whilst it was labelled a libel case, this was a fundamental question about history. Experts including Richard J. Evans were called to the stand to serve as witnesses throughout the trial. The significance of the trial lies not in the actual arguments, but in the result delivered by the judge and the historical judgements made.

History is a complex subject, concerned with interpreting and understanding the past. Historians use a variety of primary and secondary sources to, put colloquially, “work out what happened”. By using these sources, they can justify arguments and theories about past actions. However, historians do disagree, and in this case the argument was over the Holocaust. David Irving brought a British libel suit against Deborah Lipstadt and her publisher, Penguin Books, for claiming he was a Holocaust denier in her book, Denying the Holocaust. Significantly, the case was brought in Britain rather than America, where Lipstadt was based; in a British libel suit, the defendant holds the burden of proof, whilst in America it is the other way around. Hence, Lipstadt was forced to show, legally and historically, that her claim about Irving was true. The mixing of historical information and legal complexities won the trial widespread coverage within historical circles and the academic media.

The case itself was a bench trial, and both sides hired reputable lawyers in what was not just a legal case but a defining moment in academic history. The lawyers for Lipstadt spent significant periods, with expert historians, trawling through the works of Irving. They were ultimately required to prove that Irving was historically incorrect, and they did this by reading the footnotes. They searched through each of his sources to check that it supported the view Irving took. What they found was a mass of misused and distorted historical sources. They were thus able to show that Deborah Lipstadt’s comments were true. This proof made the libel claim impossible to sustain: it was not libel but academic truth.

The defence also asked key historians such as Richard J. Evans to examine the work of Lipstadt and Irving and give an expert opinion. This brings in the idea of historiography, which is simply the study of written history. Evans wrote the book In Defence of History, which explores the value of history and historiography in the modern age; how we should use this skill has been a key debate at university and in academic history. As the expert witness, he concluded that Irving had been factually and intellectually incorrect in denying the Holocaust, comparing the reasoning and the factual evidence provided in order to make this judgement. He presented written and oral testimony to the court and was also subject to cross-examination. This formed the basis of the Lipstadt defence, which can be described as the justification defence: rather than resort to legal escapism, she simply ensured her actions were shown to be fair and justified.

Irving and his lawyers began with the advantage conferred by the burden of proof. However, his indefensible manipulation of sources inevitably caused significant difficulties when he came to argue his side. Ultimately, his case was doomed because there was no libel: what Lipstadt had said was blatantly true once the sources had been explored.

The judge delivered a crushing 397-page verdict in which he ruled in favour of Lipstadt and gave a damning report of Irving, concluding that he was a Holocaust denier and a discredited historian, and that the defence was entirely correct. This judgment has set an important legal and historical precedent for the future.

The law and history interacted in a case of incredible interest and importance. David Irving was proven to be factually incorrect, and the case established the value of evidence in historical law. Despite Irving’s claims about the personal, economic and academic hardship he suffered, the truth and history remained the priority. The competition between historians over finding the truth makes it an interesting discipline. Regardless of the topic or personalities involved, the history and evidence should come first, rather than personal and economic disputes. Academic history, which relies upon evidence, was strengthened once again.

Further to this, the law was integrated with historical debate. Legally, the precedent was set for the value of evidence, removing the potential for other historical libel cases. This is a topic with no legislative agenda, and hence civil cases rely entirely on precedent; this ruling will therefore be significant for years to come. The law also proved the strength of evidence not only in academia but also in legal cases.

History and law are both academic and complex subjects, and in this example they have been discussed and debated together. The intertwining of the two has caused civil law to address historical issues, and it is impressive to see how the law controlled and acted upon them. The Holocaust was a tragedy, and to debate its existence is disgusting; but that is not the significant thought here. It is that the law has set a precedent for judging historical works on evidence, not personality.

Categories
Arts & Humanities History

Slavery: A Catalyst for the Civil War?

This long-read article was written by sixth-former Jack Farrant.

Estimated read time: 8 minutes

The influence of slavery has long been considered the most important contributor to the start of the American Civil War. Historians since the days of the war itself have often cited slavery as the primary, or even singular, point of tension. This view, although valid up to a point, is a gross simplification of what was in reality a far more complex situation. The government of South Carolina, the first of eleven states to leave the Union, chose slavery as the main cause for secession in its 1860 Declaration of Secession, saying that there was ‘increasing hostility on the part of the non-slaveholding States to the Institution of Slavery’. Although it is clear that the tensions of slavery were a factor in the outbreak of the Civil War, it is no doubt useful to also take a more Revisionist point of view. Indeed, while the divisive issue of slavery was a cause of tension among States, the problem of inherent disunity between those States encompasses much more than the dispute over slavery. It is more fitting to argue that it was the role of slavery within larger, more complex issues of economy, demography and geography that was the greater factor in the outbreak of the Civil War, as opposed to declaring slavery the sole source of tension.

Historiographical debate over the last two centuries has provided the framework of modern opinion about the outbreak of the Civil War. It has long been understood that the origins of the war cannot be examined without also looking at the wider context of international affairs and domestic tension within American society at the time. In addition, it is important to take into account the difference between Northern and Southern accounts of the war’s origins. Especially in the years directly following the end of the Civil War, and into the Reconstruction Era at the end of the 19th Century, general Southern collective memory held that States’ Rights and Northern Aggression were the key factors in the outbreak of the war. On the other hand, Northern abolitionists, as well as the majority of today’s professional historians, point to the institution of slavery as the primary cause.

On the 20th August 1619, an English trade ship, The White Lion, arrived at Point Comfort near Jamestown, Virginia. It carried with it approximately twenty Africans, who became the first slaves to arrive in the British Colonies in America. By 1860, the slave population was four million. Although Revisionism is appropriate when considering the causes of the Civil War, it is still pertinent to acknowledge the importance of slavery as a source of tension. The uncomfortable question of slavery had remained unanswered since the early days of the Revolutionary War; a shortcoming of the revered Founding Fathers. Slavery had been practised in America for as long as it had been a colony, and so became a contentious issue in the new Union. George Washington, despite being a slave-owner himself, claimed that ‘There is not a man living who wishes more sincerely than I do, to see a plan adopted for the abolition of it.’ This duplicitous idea is at the core of early-Union hypocrisy over the morality and legality of slavery. The nationalistic sentiments of the Declaration of Independence and the United States Constitution affirmed ‘that all men are created equal’, but despite these claims, slavery would remain legal in the former colonies for the time being. In the years following the Revolutionary War, certain States began to prohibit slavery within their territory, creating a great divide within the Union. The politicians of the early Union were far more content to compromise than to take on the problem of slavery outright, a sentiment emphasised in the inclusion of the Three-Fifths Compromise, which decreed that each slave would be counted as three-fifths of a person, in order to increase House representation for slave-holding States. Further legislation, such as the 1793 Fugitive Slave Act, allowed for escaped slaves in free States to be returned to their masters in slaveholding territories. 
The sentiment of compromise rather than action confirms slavery as a cause for tension in America, and so it is clear that the response to early-Union slavery was an important factor in the lead-up to the Civil War.  

Political mismanagement of slavery no doubt also contributed to the inter-State tensions preceding the Civil War. Throughout the early 19th Century, a careful balancing act between slaveholding and free States was undertaken to ensure that neither side of the argument had majority representation in Congress. Just as in the 18th Century, a number of compromises were made to try to preserve the Union. The 1820 Missouri Compromise admitted Maine as a free State to counteract the admission of Missouri as a slave State, and banned slavery in Louisiana Purchase territory north of the 36° 30' parallel, excluding the State of Missouri itself. This was contentious legislation; contemporary writers such as ex-President Thomas Jefferson claimed that the division of the country along sectional lines would lead to the breakdown of the Union. Although the Missouri Compromise undoubtedly delayed the outbreak of war, Jefferson was proven correct just forty years later.

The concessional nature of the legislation did nothing but delay the inevitable confrontation between North and South, rather than avoid it entirely. The Compromise of 1850 not only enhanced the power of the Fugitive Slave Act, but also defused a confrontation over slavery in the recently acquired New Mexico Territory. While this bill lessened tensions in the short term, it was yet another example of compromise rather than pragmatism, and so did nothing in the longer term to quell the confrontation. Further events such as the Kansas-Nebraska Act of 1854, which allowed self-determination over slavery in the new territories of Kansas and Nebraska, and the infamous Dred Scott v. Sandford court ruling, which undermined the Missouri Compromise of 1820 by forcing Scott to remain a slave even though he had lived in free territory for four years, brought the country closer and closer to war, arguably dooming the Union to its impending division. It is clear that the lack of political pragmatism, and the attendant preference for compromise, did nothing to stop the inter-State problems that had existed since the days of the Revolutionary War. In this way, political mismanagement of the institution of slavery caused just as much tension as the existence of that institution.

For many years, the President of the United States has been one of the world's most influential and powerful political figures, expected to act as the defender of the Constitution and of liberty across the world. The office has been held by some of America's greatest leaders, Abraham Lincoln among them. Despite this, the Presidency in the years leading up to the Civil War was not nearly as assured or steadfast as it had been. The Election of 1856 saw Pennsylvania Democrat James Buchanan carry almost every Southern State, while Republican candidate John Fremont, who arguably would have taken a more pragmatic stance on slavery, won almost every Northern State. In his inaugural address, Buchanan left the question of slavery up to the individual States, perpetuating the passive approach to slavery that was common at the time. He was a highly divisive figure, and as an advocate of the continuation of slavery he alienated many Northern abolitionists. The results of 1856 offered some of the clearest evidence of the divisions within America, and were symbolic of the regionalised nature of American society in the years preceding the Civil War. The anti-abolitionist ideology of President Buchanan was generally popular among Southern voters and deeply unpopular among Northern ones. His politics, just like the Kansas-Nebraska Act, divided the Union along sectional lines. The Election of 1856 was a microcosm of a wider split in the Union; a situation drawing ever closer to Civil War. Overall, the influence of the Presidency during this period did nothing to quell tension within the rapidly failing Union.
General historical opinion tends to disregard this factor, but it can be argued that the divisions within American politics were just as important to the outbreak of the Civil War as the existence of slavery is usually considered to be.

Ever since the early days of post-Revolution America, the issue of States' Rights had been highly contentious, and a source of constant debate among Northern and Southern politicians. For decades, the split between North and South was obvious, encompassing economics, politics, and society. Many in the Southern States argued that Congress favoured the North, and, having been proven correct on multiple occasions, the resulting sense of grievance fuelled inter-State rivalry in the Antebellum Union. The Articles of Confederation, drafted in the days of the Revolutionary War, had allowed the central government little authority over the running of individual States, instead allowing the Union's constituents to govern themselves on a self-determinist basis. The Constitution a few years later strengthened the government, decreeing that Federal Law was 'the supreme Law of the Land'. Despite these efforts to strengthen central government, the federalism present in the early Union meant that post-Revolutionary America was little more than a loose confederation of individual entities. This lack of complete unity continued through Antebellum America; it can be argued that the Constitution itself split the country along sectional and regional lines, with each constituent member of the Union governing largely separately from the central government. The problem with federalism became most obvious in the early 19th Century, particularly during the Nullification Crisis of 1832. This event highlighted, more than anything else, the innate differences between the Northern and Southern States. The industrialised North, with its largely domestic economy, viewed overseas trade with suspicion, while the far more rural and agrarian South relied heavily on international trade, owing to its greater emphasis on agriculture and exportation.
In the late 1820s and early 1830s, Congress passed a series of tariffs that clearly favoured the Northern economy over that of the South, and the divisive Nullification Crisis began in 1832 when South Carolina declared the tariffs of 1828 and 1832 void within the State, prompting President Andrew Jackson to threaten military force. This brief display of anti-Union sentiment proved to be a precursor to the events directly preceding the Civil War, with South Carolina the first State to secede from the Union in 1860. Overall, the existence of anti-Union sentiment in the Southern States, and the popular Southern belief that the government favoured the North, helped to fuel tensions between the constituent States of the Union, at the time a broad confederation of entities rather than a single united body. The inherent split between North and South highlighted the single largest problem with creating such a Union: the political, economic, and social situations of the two sides of the country were fundamentally different.

In conclusion, the influence of slavery on the outbreak of the Civil War should not be understated. Its continued legality in parts of the Union fuelled debate and division for decades after the Revolution, and in time tore the Union apart along sectional lines. However, from a Revisionist frame of reference, it is vital to understand that slavery as a part of American society was not wholly to blame for the start of the war. Indeed, the split legality of slavery depending on the State in which one lived was symbolic of the innate problems within the early Union, as was the lack of pragmatism from politicians far more willing to compromise than to confront issues. The multifaceted split between North and South was as much a problem of economy and society as it was of slavery, with the Antebellum Union arguably trying to hold together what should really have been separate nations in the first place. Regardless, the most important factor in the lead-up to war in 1861 was not slavery itself, but rather the divisions in the Union caused in part by slavery, and the half-hearted attempts to reconcile its problems. The fundamental differences between North and South, and the inability of politicians to effectively reconcile the problems those differences caused, were more influential to the outbreak of the Civil War than American slavery itself. In the opinion of President Lincoln, the goal of the Civil War was to preserve the Union, not to end slavery, and so it is clear that the Union fell apart due to its own incompetence in dealing with slavery and the other issues dividing North and South, not due to the outright existence of slavery in post-Revolutionary America.