
misreportedandmisremembered

Bringing context and perspective to the chaos

Man Up

One of the more curious aspects of modern Internet culture has been the spread of the masculinism movement, wherein groups come together to preach the gospel of men’s rights and speak out against movements focused on addressing inequity within social groups. The most visible way this manifests is in the “anti-feminist” movement, where members of forums or comment sections speak out in favour of holding both genders to the same legal and cultural standards, oftentimes falling back on unconsciously or overtly sexist rhetoric to reinforce key messaging.

However, when examined in greater depth, an interesting pattern emerges: the majority of individuals who use these discussion forums will overtly claim to be in favour of equality between genders. Exceptions obviously exist, but the discussion condemning feminism often contains a variant on the phrase “Of course men and women should have equal rights”. So if the conversation is not about why one gender should dominate another, where does the supposed inequality of feminism arise?

Gender as a societal construct is well-documented, particularly in more recent years with the growth of the transgender movement. But, as discussed in a previous post, societal constructs are developed as direct indicators of status within society. And much like being white has always been an indicator of status, being male has equally been an indicator of superior social standing. Men have traditionally held dominant roles, a practice explored in greater depth in third-wave feminist theory, which focuses less on the disparity between sexes or genders and more on the manner in which power is assigned across cultures. And if gender is a construct, then being a man is separate from being born a member of the male sex, and can be deconstructed along social and biological lines accordingly.

In simple terms, to be a man is to be masculine, and masculinity in the Western world is defined by three core constructs – physical dominance, wealth and sexual conquests. There are indicators that these are cultural, and not biological, factors, given that they have varied throughout history. For example, hegemonic masculinity has been at odds with male homosexuality since the Italian Renaissance, yet homosexuality was readily practised before then and was a cultural norm in earlier European societies. And, due to the role men have played in society, these three core traits were associated not only with being male but also with being powerful – indicators of power and influence became directly tied to the accumulation of wealth and the dominance of others. Myths and legends across the globe are filled with tales of heroes striking down their foes and bedding beautiful women, all while demonising the non-intimidating intellectual as spiteful, cowardly or simply not a real man.

Being male in the Western world of today is also nuanced, because that status is made up of a series of privileges rather than a designated set of explicit benefits. Privilege is often invisible to those who hold it, given that it manifests as a lack of barriers encountered in day-to-day life. Yet, oddly enough, the lack of systemic bias against males in Western culture can lead to difficulties as well. Removal of an individual’s status as a man, due to a lack of sexual conquests or a perceived physical weakness, can result in being exiled from one’s peer group and not being accepted by surrounding groups. Because no systemic injustice is ingrained into the cultural DNA of Western males, there is no ready narrative of support in times of exile – to fail is to fail with every advantage, and is read as a reflection of the individual’s lack of capacity, not of any ingrained bias in the system. This can mean that men who lose their status find they have nowhere to turn, and can become increasingly isolated from social circles before attempting to assert their supposed masculinity in potentially harmful ways (in the last three decades, 97% of school shooters have been male, and 79% have been white, a startling indicator of the isolation felt by young males).

This systemic notion of manhood can also be seen in more mainstream terms in today’s world: men are meant to be breadwinners and the heads of their respective households. But as times evolve, and other minority groups begin to see their own status rise and their definitions evolve in societal terms, men in the Western world have not seen the same progress. An example: the modern woman can have it all. She can be career-oriented and a mother, and can do anything a man can do. While barriers and stigma remain, it is no longer socially unacceptable for a woman to be dominant in her personal and professional relationships. It remains unacceptable amongst men, however, for a man to be dominated – such a thing sees his status as a man revoked, and the individual is feminized by his peer group, losing the status and power associated with being a man.

This fear of being shamed or exiled can create feelings of strong animosity towards grassroots movements of minorities attempting to increase their own social status, thereby seeming to encroach on the relative dominance of men. As the definition of what it means to belong to a minority group in the West has evolved (Black civil rights movements, first- and second-wave feminism, same-sex marriage equality movements), the definition of what it means to be a man has not. Men are still expected, for the most part, to be economic breadwinners and to earn the bulk of their household incomes. Men are also expected to defer child-caring to their spouses. These norms are shifting, but they nevertheless remain the current norms of the way society interacts. And while the perception of women who choose to pursue a career instead of remaining at home to care for children is changing positively, the view of a man who sacrifices his career for his children is not evolving at the same pace, resulting in a perceived inequality when the question becomes who stands to lose more status by undertaking a given action.

Studies have shown that with men, the simplest way to change embedded behaviours is to instil a sense of shame. The isolation felt by individuals when their status of manhood is removed is palpable, and has been known to inspire action in more positive directions. This strategy has been used in education systems to encourage young men to stop bullying and be more accepting – the hope is that the lessons learned at a young age are not lost in adolescence, as social acceptance grows in importance within the minds of young people, and so are not carried into the next generation of grown men.

There is no debate around whether it is easier to be a man or a woman in the Western world – across the board, men have distinct advantages and still occupy a higher position of status than women do, shown every day through episodes of sexism in personal and professional environments that men need never encounter. But when deconstructed as a symbol of status, it is easier to understand why men in chat forums would feel animosity towards the opposing gender while still ostensibly endorsing their legal rights – the appearance of acceptance of empowerment is perhaps the only privilege men have not managed to take for themselves.

The Ivory Coin Pt. 2

This is a sequel to an earlier post titled “The Ivory Coin”, focused on dissecting the dual perspectives of white and non-white residents of the United States. That post sought to outline certain elements of mainstream views, but discussed race through a lens that presented opinion as fact. This post hopes to move beyond that subjectivity and take a more constructivist approach.

Countries in the Western world have been brought together through a shared set of values and interests, which have been codified through organizations, institutions and norms. While many of these are regarded amongst democratic countries as bastions and protectors of enlightened views (i.e. NATO, the OECD and the World Bank), each has an embedded duality. NATO may serve to protect American interests in Europe, but some Eastern groups view it as an existential threat. The World Bank may provide a funding mechanism for indebted or developing countries, but its reformist and austerity terms are often prescriptive and can create an unwanted dependence. As a duality exists in the West’s institutions, so too can it be found in its norms. Western cultures have traditionally been pro-democracy, on the leading edge of technological innovation and great creators of the arts. They have also been hell-bent on assimilation, religiously monotheistic and, since the spread of Europe across the world stage, predominantly white.

The role of race in history is as a construct wielded to divide, often with the aim of oppression. From the caricaturization of groups with certain skin colours to the enslavement and oppression of those same peoples, race (the classification of human beings based on physical traits, geography, familial relations or ancestry) has served as a useful construct for those seeking to divide. This division gave rise to a norm amongst Western societies, even as they grew more diverse in the modern era: being white was a symbol of status, a form of non-purchasable wealth and power. Although whiteness once overtly offered benefits simply by creating an avenue in which one was not persecuted for one’s racial identity, the introduction and subsequent success of civil rights and racial equality movements throughout the 20th century sought to eliminate the idea that one race was explicitly superior to another by introducing laws to treat all as equals.

The introduction of these laws across the Western world was a symbol of hope and progress for the freedom of all peoples. But upon their passage, some misconstrued their impact as having closed the issue. An example can be found in the Civil Rights Movement in the United States in the 1950s and 60s. Although the 14th and 15th amendments of the American Constitution guaranteed basic civil rights to African Americans, a grassroots struggle emerged to ensure federal and legal recognition of those rights in the eyes of the law. The civil rights movement served to break the pattern, entrenched at the community level, that continued to separate groups based upon race. Racism was certainly involved in the pushback that emerged against the movement, but that pushback was also a result of what was perceived as a shift in status. While feeling threatened by a perceived loss of status is not in itself racist, race was a primary factor in dictating how status was assigned at the individual level. When one group appears to be raising its cultural status to match that of another, the latter group will naturally feel threatened.

That sense of loss of relative status can also be found today, in the opposition towards grassroots movements organized by groups traditionally perceived as minorities. When polled by the Economist in 2017, conservative voters from a sampling in the American beltway failed to cite race as a major concern, instead voicing concerns about a loss of status within their communities. The clan instinct of sympathising more with those in your community (geographic, racial or otherwise) has manifested in a focus upon issues plaguing primarily white communities, like the opioid crisis and a lack of manufacturing jobs. While these issues plague other communities as well, a majority of those spoken to cited a feeling that life had become “unfair”, and viewed the election as an opportunity to tip the national focus further towards their issues. While this can be dismissed as simple politics, a deeper trend can also be seen in the desire to preserve the status of white individuals relative to other groups, often by denigrating the attempts of emerging grassroots groups to address institutional bias.

Race has long been a construct used to assign societal value, heavily skewed in favour of the historically powerful. But race as a reflection of status shows a more individualistic underbelly, one where a loss of relative status (or privilege) is equated with a loss of absolute position. This is a dangerous and false characterization. It must be understood that the benefit of some is not always to the detriment of others – race relations cannot be summed neatly into game theory, nor can racism be ascribed with certainty as the primary reason why tension plagues racially homogeneous communities. A lack of economic opportunity, poor job prospects and health crises are universal issues, and addressing them will go a long way towards creating a greater benefit for all parties. But when we only look at how the actions of others affect us, we risk perceiving diversity as a weakness instead of a strength. Hardly a claim that can be made proudly in an enlightened world.

 

The cost of sharing

A popular idiom in Western culture is that possession is nine-tenths of the law. While this may hold true for civil or domestic disputes about who gets to keep the kettle in divorce proceedings, there is one notable exception: land. In every country across the globe, throughout all of history, issues of land rights have been discussed, questioned, disputed and protested, and have typically ended in either a settlement or a war. Few topics are as universally inadvisable to bring up in a community discussion as who owns disputed territory, a testament to our similarities across geographical and cultural boundaries. If only we had more in common than wanting to kill each other over bits of grass and dirt.

The fate of humanity has been tied to the land since we first established settlements. Land is used for agriculture and for building homes, and the resources grown on it or buried within it are extracted and used to build and develop – without the ability to nurture the land, we would quite literally not have evolved past roving groups of nomads. As our societies grew more complex, so did our numbers and our demands – land that was once freely available began to hold greater value due to resource richness, proximity to other regions, or size. Societies all offered a similar recourse for remediating disputes over which land was owned by whom: contested land could be placed under the guardianship or ownership of a particular individual, group or community, with the idea being that the stewards of that land would tend to it and they alone would be able to exploit whatever resources lay within their set geographical boundary.

Earlier cultures were less focused on drawing specific or arbitrary territorial lines upon maps, but groups across the globe (religious, cultural, familial) typically found themselves operating in set territories or regions, and any interaction between groups required rough boundaries to be defined to prevent competition for resources and potential escalation into conflict. As societies evolved, and distinct cultures began to interact in greater and more violent ways, land was often taken by force, and the land-use rules of whichever culture was victorious were implemented. A typical example is the imposition of land rights and land-use agreements upon aboriginal cultures throughout the world by Western colonial powers, wherein treaties and agreements were typically signed without a firm understanding of shared versus individual ownership. Cultures in North America, Australia and Africa all tell similar stories of “placelessness”, wherein they were removed from their traditional territories and forced into environments with different cultures and customs. Some were able to stay on their original lands, often at the expense of safety or freedom. Many of these tales end in assimilation, slavery or extinction.

Today, the issue of land rights has only become more complex. Central governance has simplified the codification of land rights into law, but the same issues that plagued original communities exist today, echoes of the flawed systems installed in eras past. Compounding the problem, there are more humans than ever before, and land is becoming a scarcer and more valuable resource. Agriculture, urban development, construction and industry all require resources and land, painting a picture in which we currently inhabit or use 90% of the world’s land. This resource limitation has led to increased conflicts, often between the same structures as before: centralized authority (governments) and communities. Specifically, land rights have arisen time and time again in issues of development in countries that bear the scars of recent conflicts.

Take Vietnam and Kenya. Vietnam’s Communist government operates under a system where the government ascribes land-usage rights but maintains that all land remains the property of the state. Vietnam’s current economic growth rate is around 6%, and cities and urban areas are growing rapidly, eating into surrounding farmland to satisfy increasing demand. As a central government hoping to avoid conflict, Vietnam must offer recompense to farmers who have lost their land, host consultations to discuss land-use plans with the public, offer avenues for legal recourse if civil disputes arise, and ensure all land is maintained in a registry so that all of this can be tracked. Protesting farming communities have offered much feedback on this – specifically, that recompense does not match the value of the land, that consultations are often ignored and civil suits dismissed, and that land registries are so poorly maintained by corrupt officials that when issues are raised, no one can even agree upon who owns what land in the first place. All of this risks disrupting economic momentum and fostering discontent within the populace, seemingly creating the exact problem it set out to avoid.

In Kenya, each of these same issues arises, with the added dimension of the land claims of indigenous groups who have traditionally occupied territories. More than two-thirds of Africa’s land is still held informally by communities, not written down or legally recognized, meaning governments and communities repeatedly clash when the issue arises (one is viewed as conquerors, the other as squatters). Kenya’s government has attempted to address this through a large-scale titling initiative to ensure those who claimed their land could be entitled to the legal rights of ownership. But even while a third of Kenyans can claim to legally own the land on which they reside, insufficient infrastructure and endemic corruption, as well as cultural issues surrounding the rights of women and minority groups, pose severe threats to enforceability – the registry may end up as just a long list in a back closet if landowners are not able to take meaningful recourse when their rights are infringed upon.

Even in rich countries, where property rights are secure and land can be used as collateral in financing, land rights issues emerge at the community level. Municipal governments in Canada hold broad powers to expropriate property for “municipal purposes”, and the land rights issues of indigenous peoples continue to plague communities faced with poverty and an endemic lack of economic opportunity. Greater steps, including the development of land-use registries accessible to all (blockchain technologies offer interesting opportunities here; see the sketch below), the further titling of rural properties to prevent government land grabs, and the greater codifying of the rights of communities into land-use charters, should be taken to ensure transparency and fairness for all involved. If not, the risk of impeding legitimately needed development while squabbling over patches of dirt grows ever greater. Then it won’t matter who possesses what – we’ll all be stuck squabbling in the mud.
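To make that registry suggestion less abstract, here is a minimal sketch of the idea behind a tamper-evident, append-only land registry – the property that makes blockchain-style designs interesting for this problem. It is an illustration only, not a description of any deployed system, and the entry fields and names are invented.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of a registry entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(registry: list, parcel_id: str, owner: str) -> None:
    """Append a transfer record that commits to the previous entry's hash."""
    prev = registry[-1]["hash"] if registry else "genesis"
    entry = {"parcel_id": parcel_id, "owner": owner, "prev_hash": prev}
    entry["hash"] = entry_hash(entry)
    registry.append(entry)

def verify(registry: list) -> bool:
    """Recompute every hash and link; any tampering breaks the chain."""
    prev = "genesis"
    for entry in registry:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev or entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Hypothetical transfers for illustration only.
registry: list = []
append_entry(registry, "parcel-001", "Community Trust A")
append_entry(registry, "parcel-001", "Smallholder B")
print(verify(registry))                        # True
registry[0]["owner"] = "Well-Connected Firm C" # a quiet after-the-fact edit...
print(verify(registry))                        # False: the tampering is detectable
```

The point is not the cryptography itself but the linkage: every later entry depends on every earlier one, so a corrupt official cannot quietly rewrite who owned a parcel five transfers ago without the change being visible to anyone who checks.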

Coming to a Consensus

The 1970s saw the end of much throughout the Western world: the Vietnam War came to an official close, numerous other proxy wars were launched around the globe, and mainstream society began to adjust to the countless cultural revolutions of the previous decade. But for all their similarities, the world’s two brightest liberal superpowers each underwent a series of minor economic crises of identity, with the decade proving the worst for industrialized countries since the Great Depression. Low growth rates and the oil shocks of 1973 and 1979 rang through the global order, reverberating to signal change in both the United States and the United Kingdom.

The United States experienced its last trade surplus in 1975, and the latter half of the decade saw the economy undergo a period of stagflation, in which both inflation and unemployment rose steadily. Trust in government was at a low point following the Vietnam War and the Watergate scandal under President Nixon, and the solutions attempted (broadening the money supply as a stimulus measure and phasing out Nixon-imposed price controls) had mixed impacts in the short term. Great Britain experienced a parallel slowdown, as Butskellism (an economic strategy favouring public ownership and public/private sector collaboration towards a grand “Industrial Strategy”) saw the welfare state expand to become burdensome, reducing the competitive advantage Britain once held over West Germany and Japan.

The problems of the 1970s were well analyzed and researched by regional think tanks and experts, the majority of whom favoured different policies with a few consistent themes: greater liberalization of markets, reductions in taxes and regulation, and increased competition to foster innovation and growth. Each nation saw a champion of this cause rise in the form of Ronald Reagan and Margaret Thatcher, each with their own plans and complex legacies that helped give shape to the world of today.

Ronald Reagan was elected on the promise of reducing taxes, promising a return to the growth levels of the industrial era before FDR’s New Deal. His combination of removing price controls from petroleum, slashing corporate and individual taxes, simplifying the tax code, investing heavily in the military, increasing government borrowing and lending, and a new federalism that delegated greater autonomy to the states served to raise spending levels, both individual and institutional, reinvigorate growth and boost employment. His economic policies, grouped under the banner of Reaganomics, have been credited with leading to the second-longest period of peacetime expansion in the history of the United States.

Margaret Thatcher was carved from a similar mold of conservatism, formed by a belief in free markets and small government. More than Reaganomics, Thatcherism encapsulated the age into which she dragged the UK. By transitioning the national economic focus from managing unemployment to managing inflation, Thatcher prioritized monetary objectives, warring with unions and industrial strategists to achieve her goals of reducing the tax burden, cutting state subsidization programs, and exerting control over monetary policy. Thatcher also embodied the toughness with which she governed, often centralizing decision-making around herself to deal with crises and bringing together a Britain once thought ungovernable. Under Thatcher, the inflation rate fell from almost 15% to below 5% and growth rates exploded.

But each leader’s policies of small-state governance had disenfranchising impacts as well. Reagan’s policies saw poverty rates rise, with trickle-down economics proving largely ineffective at spurring growth. Thatcher’s policies raised unemployment in the UK by over 1.5 million people in her first five years. The international aid programmes of each began to focus upon structural reform, with the IMF and World Bank growing stronger in their urging of austerity programs in struggling states – programs that are nothing if not unpopular amongst citizens. Each leader’s brand of social conservatism, military hawkishness and economic libertarianism further polarized voters, and their support of traditional marriage, penalization of criminality and disdain for the needy cast them as figures for the mold of Conservatism currently being shaken off today. But even in today’s environment, each leader would find a place: both were tough straight-talkers who used populist speech to mobilize supporters and justify a values-driven agenda. Reagan’s pursuit of the Soviet Union and Thatcher’s disembowelment of the trade unions were fuelled by rhetoric of “welfare chisellers” and the scapegoating of opponents often not capable of speaking for themselves.

The legacies of economic policy are best measured not in quarters but in decades. Many credit Reaganomics with facilitating the capital allocation that allowed for the tech boom of the 1990s, and Thatcherism saw London return as the world’s financial hub, prompting Great Britain’s economic evolution past the age of manufacturing. Each nation saw its role shift towards consuming goods rather than producing them, enriching some and leaving many unemployed. The policy themes in each platform are credited with influencing similar actions taken in both Australia and New Zealand, ensuring that the world’s Anglo-Saxon colonial powers all remained allies on similar footing moving forward.

The economic objectives of each (lowering tax rates, reducing regulation, restraining government spending and pursuing noninflationary monetary measures) spurred growth rates and reshaped the economic identity of each nation for years to come. However, their legacies lie not in their success at growth, but in the side effects of these policies. Milton Friedman, an economic advisor to both and cut from the mold of the Chicago School of neoliberalism, admitted that trade deals and deregulation devastated the working poor across the globe. Increased inequality, exacerbated by falling wages and rising unemployment, is now what defines the economic legacy of these two leaders who each managed to resurrect their countries from struggle. Each was followed by conservatives who doubled down on their models, fuelling further inequality and successive boom-and-bust bubbles. And each dismantled systems in ways that laid the groundwork for the modern populism currently sweeping the US and the UK. Trickle-down indeed.

 

Silent but Deadly

The modern environmental movement owes much of its mainstream awareness in the Western world to Rachel Carson’s Silent Spring, an environmental science book that focused primarily upon ascribing the visible loss of biodiversity and environmental degradation to the use of artificial pesticides in agriculture. The book accused chemical companies of spreading misinformation concerning the use of certain compounds, and is credited as a pivotal turning point in the creation of the US Environmental Protection Agency and the reclassification of the chemical compound DDT, once sprayed liberally on crops, as a pesticide unsafe for regular use.

The movement also brought into the mainstream consciousness a new concept – that our impacts upon the environment were tangible, attributable and could be traced back to externalities from human activity. This conceptual leap facilitated the mainstream adoption of the idea that carbon emissions from industrial activity were harming the environment, even if individual people had difficulty connecting the impacts of this change to their daily lives. But describing a pollutant – defined as a substance introduced by human activity that has a detrimental impact on the surrounding environment – often conjures images of sea turtles eating plastic bags, landfills of scrap metal and used electronics, and noxious smokestacks belching black smoke over the stockyards of 1900s Chicago.

It is this variant of pollution – physical – that actually poses a greater risk than many anticipate. Simply understood as the introduction of discarded materials into the environment, its scope is enormous. Studies have shown that in plastic alone, 4-12 million tonnes are dumped into the ocean every year, with concentrations currently found as high as 580,000 pieces of plastic per square kilometre of ocean. Exact figures are notoriously hard to come by when calculating dumping, as it is a practice conducted by individuals, groups, industries, organizations and governments in every single coastal nation on Earth. But the average annual wastage of 8 million tonnes (over 17.6 billion pounds) has the “triple threat” impact of interfering with the feeding patterns of animals, increasing the likelihood of entanglement and drowning for marine life, and leaching toxins into the environment as degradation occurs over thousands of years.
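As a quick sanity check on that scale (a rough conversion, using the standard factor of roughly 2,205 pounds per metric tonne):

```python
# Rough scale check: convert annual marine plastic dumping from tonnes to pounds.
POUNDS_PER_TONNE = 2204.62          # pounds in one metric tonne
annual_plastic_tonnes = 8_000_000   # mid-range estimate cited above

annual_plastic_pounds = annual_plastic_tonnes * POUNDS_PER_TONNE
print(f"{annual_plastic_pounds:,.0f} lb")  # ~17.6 billion pounds per year
```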

Let’s start with plastic: plastic production has doubled every 11 years since mass production began in the 1950s, with corresponding increases in the concentrations found in the marine environment. Since the 1970s, trends have shown that over half of all seabird species are in decline, with a 2015 study in the Proceedings of the National Academy of Sciences estimating that over 90% of all seabirds are now contaminated with plastic, a figure projected to rise to 99% by 2050. Most startlingly for humans, the areas with the highest concentrations were not the expected regions, such as the infamous Great Pacific garbage patch, estimated to cover as much as 8% of the entire Pacific Ocean, but rather southern coastal regions where the ecological diversity of species is highest. These findings show that areas once thought pristine are at elevated risk, and with one in six humans depending on fish as the primary protein source in their diet, the trophic repercussions of excess plastics could see the degradation of 16% of the global food supply faster than expected.

Plastic is far from the only problem: mass consumption of electronics, along with an average replacement rate of 3-5 years, has seen the disposal of electronics and manufactured goods skyrocket. Once disposed of, electronic goods can be recycled – the United States claims to recycle 15% of all electronic goods disposed of nationally. But with recycling programs for used electronics only introduced some 70 years after the commercial popularization of the goods themselves, and the EPA estimating that over 2.17 million tonnes of electronics were disposed of without being recycled in 2015, recycling is a far cry from being the norm. Additionally, a 2015 UN report found that over 90% of global electronic waste (approximately 41 million tonnes) is illegally dumped annually by developed nations into developing nations, costing $52M USD and bypassing the recycling market entirely.

The danger here is twofold: electronics do not degrade in natural environments, creating vast landfills of e-waste, and complex machinery leaches chemicals into the environment, including volatile compounds and heavy metals. Lead, cadmium and mercury have been found in e-waste dumps in quantities harmful to human health. And unlike in developed nations, where worker safety regulations exist to protect employees from contact with harmful substances, e-waste dumps are manned by informal recyclers, often children, who burn plastics to access the valuable copper and rare earth minerals contained in circuitry and hardware. A similar problem exists with the disposal of scrap metal and used appliances: high levels of zinc, cadmium, mercury and copper have been found in yards mined by children working gloveless and inhaling the fumes from stacks of burning garbage.

Nations have taken steps towards regulating and penalizing the dumping of physical pollution, which has served to change some behaviour. Recycling rates for electronics are increasing year over year for both consumers and industry members. Technologies focused on improving recycling methods improve annually, and the global recycling industry for electronics alone is now valued at $410B USD. But the clearest indication of whether to panic may come from Canadian scientists who in 2015 declared marine plastics “the new DDT”, posing the same level of systemic risk that the chemical compound once did. One can only hope that such warnings inspire the same level of action that Carson drew with her plea for a better world.

 

Inflated Importance

The year is 2045 and life bears little resemblance to today. The modern economy collapsed following the destruction of the American empire in the depression of the 2030s, which followed the third great war of 2026. Pockets of humanity have begun to rebuild from the ashes as feudal societies emerge in the craters of ancient cities. You are a high-ranking official put in charge of overseeing the economic growth of your new community, and you immediately identify a problem that may hamper growth: your town uses a barter system, trading goods for services directly.

You, an intrepid sort, take it upon yourself to find a more efficient system of exchange, and invent a monetary system around units of currency called dulers (unfortunately, literacy rates aren’t what they used to be; for the sake of efficiency, we’ll call it a dollar). As with the birth of any new currency, you must first create the value this dollar represents. Lacking both gold itself and a use for shiny nonsense, you choose instead a fiat system, meaning the currency is backed by governing decree. This allows a store of value to exist because, even in times of scarcity, the dollar’s value is tied neither to a finite good (commodity money) nor to a claim on one (representative money), but to the decree itself.

At first, your experiment is a great success – the simplified medium of exchange no longer requires individuals to cede tracts of land for services rendered. But a problem emerges when the village experiences a shortage of cows, and the price of beef rises to unbearable heights. All of a sudden, the value of a cow skyrockets while the value of money fails to rise accordingly. You scramble to adjust, raising interest rates to make money more expensive to borrow, and therefore more valuable, but it is too late: your precious duler has lost all semblance of sensibility, and you must watch as villagers push wheelbarrows of paper currency to stores in an effort to buy a quarter pound of grass-fed German Angus.

Inflation, the rate at which the prices of goods and services rise and the purchasing power of a single unit of currency falls, is inherently difficult to understand and even tougher to manage. It is tracked via an inflation rate, measured through the Consumer Price Index (CPI), an index of the market price of a basket of consumer goods and services purchased by households. Core versions of the index exclude volatile items, such as food and energy prices, to ensure the data is not skewed. The relationship is simple: as goods and services become more expensive, the relative value of a dollar falls. It is the responsibility of the central bank to ensure that inflation, or deflation (where the value of goods decreases relative to the value of the dollar), is managed to prevent a positive feedback loop that ends with wheelbarrows full of currency being needed to purchase pieces of short-horned cattle.
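In its simplest form, the headline inflation rate is just the year-over-year percentage change in that index. A minimal sketch, using made-up index values rather than real CPI data:

```python
# Hypothetical CPI values for illustration only -- not real data.
cpi_last_year = 242.8   # index level of the consumer basket a year ago
cpi_this_year = 247.9   # index level of the same basket today

# Year-over-year inflation: percentage change in the price of the basket.
inflation_rate = (cpi_this_year - cpi_last_year) / cpi_last_year * 100
print(f"Inflation rate: {inflation_rate:.1f}%")  # ~2.1%, near a typical 2% target
```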

Central banks manage a variety of tasks, all of which centre around creating long-term, stable growth and wealth for the highest possible number of people. The key word is stable – maintaining a consistent inflation rate means focusing on the supply of money itself. Preventing large volumes of capital from entering or exiting the financial system at any one time is the simplest way of doing this – history abounds with examples of empires going bankrupt from acquiring too much wealth at once, flooding the market with value and reducing the worth of each unit of currency (what is the use of one dollar when you are surrounded by piles of gold?). Examples also exist of the opposite – the recent demonetization of 86% of circulating rupees in India saw consumer spending nosedive and inflation hit its lowest rate in two years (an economy with 7% growth targets can scarcely afford a lack of inflationary stability).

This swift entrance and exit of value was a bigger problem when currencies were backed by precious metals. But given that fluctuations in the money supply still affect value, and money is created by government decree, policymakers can use monetary policy tools to push value in one direction or another when desired. Doing so has its benefits: moderate inflation, manifesting in the form of increased investment and economic growth, can be an indicator of a strong economy and a prompt to adjust wages and prices, furthering growth for all parties and avoiding stagnation. It is also incredibly useful for borrowers, since inflation erodes the real value of a fixed debt over time, meaning loans can ideally be repaid in dollars worth less than the ones originally borrowed. Deflation, the decrease in the price of goods relative to a dollar, can be advantageous for consumers in the short term, but is indicative of a lack of spending or investment overall, meaning that little wealth is being created and that wages will face downward pressure in the long run (hardly a good thing for individuals or economies).
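A toy illustration of why modest inflation favours borrowers – the loan terms and inflation figure below are invented for the example:

```python
# Toy numbers, invented for illustration.
principal = 10_000.00   # amount borrowed today
nominal_rate = 0.03     # 3% interest charged by the lender
inflation = 0.02        # 2% inflation over the year

repayment = principal * (1 + nominal_rate)           # dollars handed back after one year
real_cost = repayment / (1 + inflation)              # those dollars in today's purchasing power
real_rate = (1 + nominal_rate) / (1 + inflation) - 1 # what the lender really earns

print(f"Nominal repayment: ${repayment:,.2f}")             # $10,300.00
print(f"Real cost in today's dollars: ${real_cost:,.2f}")  # ~$10,098.04
print(f"Real interest rate: {real_rate * 100:.2f}%")       # ~0.98%
```

The borrower hands back more dollars than were lent, but each of those dollars buys less than it did a year earlier, so the real burden of the debt is lighter than the nominal figure suggests.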

A typical target rate of inflation for central banks the world over is 2%. At the extremes, each phenomenon earns the prefix “hyper”. Hyperdeflation can see the prices of goods driven so low that each unit of money becomes incredibly expensive to acquire, fuelling periods such as the Great Depression and the Asian Tiger crisis of the late 1990s. Hyperinflation, where the prices of goods skyrocket relative to the value of a dollar, has produced scenarios such as late-2000s Zimbabwe, where in 2008 a single US dollar could be converted into the local currency for the measly sum of $2.6T ZWD. Comical images and economic collapse aside, the country fully switched over to using the American dollar in 2015.

Inflation has such wide-ranging impacts that we often fail to recognize it as a fundamental reason our economy can exist: investment in equity, such as housing, or in debt, such as bond purchases, would be nonsensical were it not for the expectation that the value of money changes over time. Hyperinflation sinks entire nations, and hyperdeflation can cause depressions that sink entire regions. The very nature of economics, and of money having value at all, is tied to this concept – without it, stagnant currency would long ago have been done away with, having failed to suit the needs of today’s economy, and present value would be all there was to financial calculations. All you would have is cash without the cows.
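That last point about present value can be made concrete with the standard discounting formula, sketched below with an arbitrarily chosen 5% annual rate:

```python
# Present value of a future payment, using an arbitrary 5% annual discount rate.
def present_value(future_amount: float, rate: float, years: int) -> float:
    """Discount a future cash flow back to today's dollars."""
    return future_amount / (1 + rate) ** years

# $1,000 received ten years from now is worth far less than $1,000 today.
print(f"${present_value(1_000, 0.05, 10):,.2f}")  # ~$613.91
```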

 

Mind the Gap

Introductory-level economics is based on the principle that perfect competition is theoretically possible – meaning that all buyers and suppliers are equally well informed, that all firms hold equal degrees of market power, and that entry into and exit from the market can occur at any time without barriers. In the real world, this concept is laughable – any competitive industry consists of firms of varying sizes (the exceptions being oligopolies or monopolies) with varying degrees of market power. There are also external actors looking to ease or impose pressure, primarily by reducing the monetary burden placed upon suppliers or consumers to make specific products more appealing to purchase or produce. The best-known such instruments are subsidies, typically offered by the state, which provide direct support to incentivize certain behaviours.

Economic theory holds that markets, within a free-market system, will self-regulate as supply adjusts to changes in demand and vice versa. The provision of subsidies is one of the more minor ways of distorting this “balance” (a more extreme version being currency manipulation, or any other action taken in a planned economy), as the reality of free-market systems is that they are riddled with power structures that reduce the buying power of consumers, especially in larger global markets. Often, subsidies and supply-side support are directed towards essential but non-competitive sectors (agriculture) or industries in their infancy (clean technology). By artificially lowering the cost of production and adoption, governments can encourage certain behaviours or reduce their cost, often for essential services like providing education, lowering the cost of health insurance, or offering rebates on installing solar panels.

But here a snag arises – subsidies are funded through taxpayer dollars, meaning that the effectiveness of a program must be judged against the potential return of investing in a venture with a lower opportunity cost. This is part of the fundamental divide between fiscal hawks and doves – ensuring the finite resource of taxpayer funding is not wasted in areas where greater utility could be generated elsewhere (a true conservative often believes the greatest utility of a dollar is to be had in the hands of the consumer). The line between direct and indirect subsidies is often that the government can either provide support directly and recoup a percentage of its investment, or provide funding indirectly and see no fiscal return, hoping the cost is recouped through increases in GDP or employment growth in the targeted sectors. So, government subsidies play an essential role in ironing out the gaps left by the free market – right?

To simplify a technical explanation, a subsidy to a firm acts as revenue, which mimics increased demand for a product. It allows the producer to reduce its cost of production per unit and sell on the market at a reduced price, passing savings on to consumers. Governments often claim to operate with a mandate of only providing support to industries with a societal benefit – but the vagueness of this definition makes it easily exploitable by political opportunists. Deciding where subsidies are allocated can be the result of careful policy decisions or of “pork barrel spending”, an American term coined to describe government spending directed at a specific congressional representative’s district. Subsidies can be provided either to sectors – oil & gas, telecoms, utilities and the financial sector received 56% of total US government support from 2008 to 2010 – or to individual companies, including such struggling outfits as Alphabet/Google and Walt Disney, with the former receiving over $630M USD since 2010.
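A stylized sketch of that mechanism, with invented figures and the simplifying assumption that the producer passes the full per-unit subsidy through to the sale price:

```python
# Stylized example with invented figures -- real subsidy programs are far messier.
unit_cost = 10.00       # producer's cost to make one unit
margin = 0.20           # producer keeps a 20% markup either way
subsidy_per_unit = 2.00 # government support received for each unit sold

price_without_subsidy = unit_cost * (1 + margin)
price_with_subsidy = (unit_cost - subsidy_per_unit) * (1 + margin)

print(f"Price without subsidy: ${price_without_subsidy:.2f}")  # $12.00
print(f"Price with subsidy:    ${price_with_subsidy:.2f}")     # $9.60
print(f"Savings passed to the consumer: ${price_without_subsidy - price_with_subsidy:.2f}")
```

In practice, how much of that saving actually reaches the consumer, rather than padding the producer's margin, is precisely the kind of question that makes subsidy debates contentious.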

But the aid provided to organizations does trickle down to employees and consumers – the USDA offers rebates on insurance for farms to make it more affordable, supporting domestic industries that might otherwise be outcompeted on the world stage. And until recently, the same theory was used to lower premiums and out-of-pocket costs for health insurance for low- and middle-income households in the United States. These examples of how supporting industries affects employment and lowers costs for consumers illustrate just how complex these discussions can be – even Alphabet/Google, beneficiary of potentially unneeded support, has used it to grow into the world’s second most valuable firm, become a cutting-edge innovator and employ over 72,000 people.

Theoretically, in the case of successful organizations or industries, subsidies can be removed once recipients are established enough to no longer require support. But once a subsidy exists, removing it is a tricky process. In theory, industries are supported until they are capable of competing in a free market. In practice, the transition can see the costs of certain goods rise above market value, since subsidized firms were able to remain profitable without implementing the same efficiencies as their competitors. Additionally, continued support of an industry can result in lower mobility amongst the labour force, as workers become dependent on subsidies in highly specialized industries and can lack the education needed to find another job when those subsidies are removed. This can create a whole new set of policy problems, where the marginal benefit enjoyed by many is offset by the severe cost to a smaller few. Saudi Arabia’s recent struggles illustrate how easily this progress can be reversed – the Crown Prince, Muhammad bin Nayef, eliminated a layer of government support before reinstating it only months later, once he realized it was not politically favourable to attempt to break down the deep state.

So the provision of government subsidies, a monetary tool used to level an unequal playing field, is subject to political whim, is difficult to eliminate without regressive ripple effects, and can result in taxpayers directly funding discounted land taxes for conglomerates. But that does not mean mistakes cannot be rectified – the key is ensuring that support can justify a societally favourable return (often generated in non-monetary terms) and can be readily eliminated when the time is right. Take Nigeria: a dip in oil prices saw the government remove the national fuel subsidy and increase the price of fuel by almost 60% overnight. While previous such attempts had resulted in protests, the response this time was more measured, with the majority of Nigerians accepting that higher costs were necessary to relieve the fuel shortages that had arisen. The lesson? Waiting until wasteful subsidies cripple the economy appears to be the only way to remove them without pissing everyone off.

Moving South for the Winter

Gardening is one of the few hobbies in which sitting back and patiently watching plants grow constitutes an activity. Effective gardeners know that the best gardens require the most planning – focusing on a specific design means early plantings, seasonal alignment of florals and a rich tapestry of species to truly see an anthropogenic ecosystem flourish. In chaotic conditions, where digging rodents and dry weather wreak havoc on predictability, growth can be more difficult to manage. But effective planning can help flora mitigate or adapt to even some of the harshest conditions.

As with flowers, so too with businesses – the fostering of a friendly business climate is one of the core tenets of economic liberalization, and policy planning must involve the balancing and execution of multiple regulatory actions to create a climate that removes hurdles to starting businesses, encourages investment and promotes infrastructure growth to support new ventures. But, as with gardens, a difficulty emerges in fostering a business-friendly economic climate in developing nations, owing to higher-than-average levels of systemic corruption and poverty, laughably poor infrastructure and a lack of governmental stability. The World Bank’s Ease of Doing Business rankings outline ten criteria by which economic climates can be benchmarked, and Malaysia was the only developing economy to place in the top 30 internationally (it landed in 23rd place, bolstered by existing protections for minority investors and strong energy transmission infrastructure).

The ten factors by which nations are measured cover themes of dealing with governments (ease of starting a business, dealing with construction permits, registering property and paying taxes), attractiveness for investment (trading across borders and access to credit) and legal protection from chicanery (protection of minority investors, the ability to enforce contracts and the capacity to resolve insolvency). Access to electricity and basic services is also measured, as the ability to tap into existing infrastructure greatly reduces the capital investment needed for new ventures. As expected, 20 of the top 30 most attractive climates could be found in the OECD, a club of mostly rich countries. The first countries from Africa and Latin America to appear were Rwanda, in 56th place, and Mexico, in 47th.

These two continental regions, often viewed as hubs for resource extraction and neo-mercantilist foreign investment, present an odd double standard that makes them more difficult to evaluate – the rankings are based not on polled data from firms, but on evaluations of government policies. And strict rules in weak states, found more often in the Global South, mean the way business operates in theory and in practice can differ drastically. Well-connected firms and organizations in low-ranked countries have seen permit approval speeds and real tax rates similar to those of firms in more attractive economic bastions (in one state, policies outline a 177-day process for construction permits that local firms say can take as little as 30 days) – leading to the question: to what degree does corruption factor into the unattractiveness of economic climates in the developing world?

Nations with higher rankings within the index are consistently found to have lower rates of graft and bribery than those placed lower – partially a result of stable and functional bureaucratic infrastructure. Transparency International’s annually published Corruption Perceptions Index provides a similar benchmark to the Ease of Doing Business rankings, except that it examines endemic corruption in a given country’s public sector. In the 2016 index, African and South American countries did not fare well, though the issue of corruption was much discussed in the public sphere. Corruption scandals have plagued governments across Sub-Saharan Africa, with democratic elections seeing graft pushed to the forefront of campaigns in the DRC, Ghana and South Africa. Administrative anti-corruption efforts in Nigeria and Kenya have made little progress, with voters universally expressing frustration and dissatisfaction at the lack of progress. A common practice for politicians in the region is to run on a platform of “anti-corruption” – laudable, but ultimately meaningless if change never materializes.

South America saw corruption generate a similar volume of headlines through the year, with an average state score of 44/100 (anything less than 50 signifies that no effective action is being taken to address graft). The Panama Papers, Brazil’s Lava Jato scandal, and a widespread leadership crisis – whether a lack of capacity to tackle systemic issues or repeated, visible attempts to amass personal power beyond legislative mandates – made news consistently throughout the region, with voters crying out for judicial action against lawmakers. But untangling systemic corruption takes time, and while much has been made of the efforts, unravelling these complex webs has so far proven an act of much posturing and little progress. A universal suggestion in both regions is the strengthening of institutions that hold governments accountable and provide a more equitable playing field for citizens.

Therein lies the answer: corruption within a system creates an uneven playing field, where entrenched and well-connected incumbents see their performance continuously rise while competition from smaller firms is suppressed. Regions with poor business climates treat competition as a more fluid concept, with larger firms being offered 30-day permits and less well-connected newcomers being forced to play by the established rules – or worse, facing extended delays as the penalty for avoiding bribery and obeying the law.

Hope exists – placing corruption under the spotlight does ensure the issue is discussed, and government promises to address it can have significance. President Temer’s support of an independent investigation in Brazil has lent credence to its findings, and two SSA countries held democratic elections that were ruled exemplary by third-party observers (disappointingly, out of 53 nations, this constitutes progress). But lawmakers must crack down on corruption to eliminate the internal barriers to growth and policy implementation that too often see developing nations get in their own way when attempting to attract foreign investment. You can’t expect to grow flowers out of a garden that you keep stomping on – and it takes time and patience to wait for a garden to bloom.

 

