Quadrant opinion article
In Denmark, a Bruising Multiculturalism
The statistic is shocking: some 70% of Danish youths in detention are Muslim. As psychologist Nicolai Sennels learned when he set out to discover why, the reasons have much to do with an insular community's disdain for what, other than welfare payments, a modern and liberal Western nation represents
As a regular visitor to Europe I have long pondered the delicate question of Muslim immigrants’ ability to integrate in the West, particularly in France, the Netherlands, Great Britain and Scandinavia. In preparing for a planned trip to Denmark last year, I came across the outstanding work of Nicolai Sennels, a clinical psychologist working in youth prisons in Copenhagen, and his recent, thought-provoking book, Among Criminal Muslims. A Psychologist’s Experiences from Copenhagen Municipality.
Not yet published in English, the book is based on ten years’ intensive clinical work with around 150 Muslim and 100 non-Muslim Danish youths. It provides a unique understanding of the culture and minds of young Muslim offenders, their often violent behaviour and the high crime rates that characterise their communities.
The highly controversial publication by Jyllands-Posten of satirical cartoons of Mohamed put Denmark on the world stage overnight, but that occurred nearly a decade ago, so I went to Copenhagen to get a closer look at how things have developed since then. I was able to speak briefly with Sennels by phone, following up that encounter with emailed questions. After our brief discussion and subsequent exchange of emails, I find it hard not to think of Shakespeare’s Marcellus and his observation that there are indeed some things rotting in the state of Denmark.
Sennels set himself the mission of learning why violence and criminality figure so prominently in the Muslim community, and further, why Muslims appear to have difficulty integrating into Western society generally. According to Denmark’s Bureau of Statistics, some 70% of inmates in Danish youth prisons come from immigrant backgrounds, and almost all of those were raised in Muslim families. In terms of numbers, the top seven nationalities listed for criminal behaviour come from Muslim countries.
Through hundreds of hours in a clinical setting, Sennels came to realise that he had to understand the psychological differences between Muslims and Westerners in order to understand such disproportionate statistics. He sets out his analysis of the behavioural problems under four headings:
anger versus weakness
honour versus security
victimhood versus self-responsibility
Muslims versus non-Muslims
Sennels explained that in these four areas Westerners and Muslims are quite different — even diametrically opposed in their attitudes.
In the first area, concerning anger, we in the West see threatening expressions of anger as the quickest way to lose face, perceiving them as a sign of weakness. However, among Muslims, Sennels found that anger is not only accepted, it is seen as a sign of strength; indeed, anger in itself is regarded as an argument. In his anger-management classes he observed that his Muslim clients believed, just as they had been taught, that “aggression is an accepted and often-expected behaviour in conflicts.” In addition to his own observations, he drew upon a recent study conducted by the Criminal Research Institute of Lower Saxony in Germany, in which 45,000 teenagers of both Muslim and non-Muslim origin were interviewed. “Boys growing up in religious Muslim families are more likely to be violent,” Sennels said.
The second area for Sennels’ attention concerned self-confidence. He points out that Westerners mostly regard criticism as, while perhaps unpleasant, an honourable thing when offered honestly and on its merits. Accepting valid criticism is, in other words, a sign of trust in oneself and what one stands for. Westerners thus manage to handle criticism in a relatively unemotional fashion — perhaps even with an expression of gratitude if the critic’s observation is right, or a shrug of the shoulders if not.
By contrast, in Islam and Muslim culture generally, criticism is seen as an attack on one’s honour, with the lack of an aggressive response considered dishonourable. In the West, what we would regard as an insecure and childish response to criticism is seen by Muslims as fair. Sennels’ professional experience led him to understand that demands for integration by the wider society fuel many resident Muslims’ feeling of being criticised, leading them to develop an enmity towards the non-Islamic society that surrounds them.
The third psychological difference that Sennels talks about concerns self-responsibility and what, in psychological terms, is called “locus of control”. An inner locus of control is fostered in Western societies, where most people see their lives as the result of their own choices. In the West we have a whole industry, of which he is very much a part, in which people pay good money to therapists for advice on how best to solve problems and attain goals. Within Islam all of life is insha’Allah — ordained not by individual choice but by the will of Allah. Meanwhile, the daily lives of the Muslim delinquents he counselled were primarily governed by sharia, cultural traditions and male family members. The experience is of being controlled. Personal wishes, democratic impulses and individual choices are disregarded, even punished.
To ask a Muslim about his own choices has little relevance, Sennels told me, as his clients do not see it as their responsibility to integrate into Danish society. Somehow, they expect the state to make that happen, changing its ways to match their own. In relation to the crimes with which the young men he studied were charged, they tended to view the victim as being at fault for “provoking” their response. Sennels is alive to the argument within many professional circles as to whether Muslim culture — by creating an outer locus of control in the individual — creates psychopathic tendencies, or if the lack of empathy for outsiders and abrogation of personal responsibility is simply a superficial phenomenon.
Finally, there is the matter of Muslim identity versus the non-Muslim — the issue at the centre of the fourth point, concerning tolerance. Westerners are taught that tolerance is both good in itself and a defining characteristic of a decent citizen. Within Islam, intolerance of non-Muslims, members of sexual minorities, women, non-Islamic authorities and secular laws is expected. This spawns parallel societies, along with alarming crime statistics, terrorist activity, and the all-pervading suppression and oppression of women.
As many before Sennels have pointed out, including the Australian writer Mark Durie, Islamic scriptures underline the concept of the “infidel”. Among his Muslim clients a mere handful thought of themselves as Danish. Most saw themselves as Moroccans, Somalis and Pakistanis, etc., who happened to be living in another country. Almost all felt alienated from Danes and said they were in opposition to Danish society. This shocked Sennels, as many of these clients came from second- or third-generation immigrant families.
Statistics bear this out. In Denmark, only 14% of resident Muslims identify with the organisation Democratic Muslims, whose charter avows that Muslims can be both democratic and Danish. Sennels notes that this strong experience of “us” and “them” has very concrete consequences, most of all for non-Muslims, who were almost invariably the victims of his clients’ violence, robbery and attempted murder. The exceptions were violent acts directed at rival Muslim gangs or so-called honour-related violence.
Sennels amasses some convincing confirmatory data. Whilst he acknowledges a nexus between anti-social behaviour and poverty, he is emphatic that crime and anti-social behaviour lead to poverty, not the other way around. Research by the Danish Centre for Knowledge about Integration (Randers, “Youth, education and integration”, May 2005) shows that 64% of all schoolchildren with Arabic backgrounds are so poor in reading and writing after 10 years in the Danish school system that they are not able to succeed with further education — double the rate of Danish students from other backgrounds. In addition, the rate at which Muslim applicants fail to reach the IQ level required for recruitment into the Danish military is three times higher than among native Danish applicants.
Sennels discusses other cultural factors, notably that their countries of origin place less emphasis on knowledge and education. According to a 2003 article in Nature, the world average for production of published articles per million inhabitants was 137, whereas in the Organisation of the Islamic Conference countries the number was just 13. According to a large survey in Turkey, “70% of Turkish citizens never read a book”. Research published in the UN’s Arab Human Development Reports (AHDR) points out that “the cumulative total of translated books since the Caliph Maa’moun’s time (the ninth century) is about 100,000, almost the average that Spain translates in one year.”
Sennels makes the obvious deduction that not being able to read and write, dropping out of education and coming from a culture that, in general, has very little interest in science and knowledge severely reduce one’s chances of getting a well-paid job — or any job at all. This leads to anti-social and criminal behaviour, and ultimately to poverty and welfare dependence.
Immigrants need three things in order to integrate, Sennels notes. They must want to be part of the host society, they must be allowed to join that society, and they need to have the capacity to do so. Very few Muslim immigrants meet all three criteria.
The Danish government has not been completely idle in addressing the problems of an unassimilated and crime-prone minority. Without fanfare it introduced two basic policies: individual repatriation and reduction in child support. The repatriation policy targets unintegrated immigrants and pays them 1,000 euros, a one-way ticket to their homelands, free medicine for a year and extra money if they intend to start a business. This expense is reckoned to be cheaper than paying the 300,000-plus euros a non-Western immigrant is estimated to cost the state over his lifetime. The policy is administered through local municipalities and sees hundreds returning every year to their countries of origin.
The second policy, limiting child support to no more than two children per family — the average of all Danish families — was introduced to discourage both immigration and welfare dependency. However, the policy was scrapped the day the new Social Democrat Government assumed power in a minority coalition in 2011.
While other Western countries face similar difficulties to those afflicting Europe, it would appear that the problems are less acute in the United States, Australia and other migrant destinations. Nevertheless, with news reports of Muslim-Australian jihadis lopping heads in Syria and Iraq and drive-by shootings increasingly common in Sydney’s west, Sennels’ insights would seem to have a definite Australian relevance, most particularly to any debate about the wisdom of ongoing Muslim immigration.
Sunday, August 10, 2014
RISE OF ANTI-SEMITISM FROM THIS WAR IS NO ACCIDENT
11 Aug 2014, The Australian. Andre Oboler is CEO of the Online Hate Prevention Institute (OHPI). OHPI’s guide on reporting antisemitism on Facebook was released on Friday.
People around the world are being misled by a deliberate Hamas propaganda strategy
WHILE no civilian casualty is a good thing, reports are emerging that strongly question the reported number of civilian casualties in Gaza.
The BBC’s head of statistics has highlighted that the figures presented are highly improbable. He says “some of the conclusions being drawn from them may be premature”. With that, Hamas’s illusion begins to crack. The reality behind it is far uglier, and more dangerous, than many have realised.
Far away from the Middle East there are two additional sets of victims, neither Israeli nor Palestinian. The first are Jews, facing rising anti-semitism. The second group of victims includes many of those spewing out anti-semitism. They too are victims, as they act against their values in aid of a greater purpose. Those who have fallen into this trap will explain the uniqueness of the current conflict, and reflect on the reported number of civilian casualties. Their call to arms, however, rests on a carefully manufactured illusion.
It seems that, statistically speaking, the high civilian casualty rate in Gaza is very likely to conceal many Hamas combatants. This is no surprise as a similar situation occurred in Operation Cast Lead (2008-09) when Hamas, at the time, claimed only 50 fighters were killed but later admitted to a figure of 600 to 700 fighters, a figure almost identical to Israeli reports during the conflict.
The supposedly disproportionate civilian casualty rate has been used not only to justify and mobilise hostility to Israel, but also to defend outright anti-semitism including comparisons to the Holocaust. How dare you raise the issue of anti-semitism when so many people are being killed, a former Facebook friend wrote to me. If the number of civilian casualties is in fact similar to other conflicts, or proportionally less than other conflicts, when comparing the rate of civilian to combatant casualties, then a lot of people have been working off a false premise.
The reliance on a false premise led many to the conclusion that Israel deserved unique condemnation, and the issue deserved priority above all else on the international agenda. If the conflict was not exceptional, there was no basis for this special treatment.
As I write this, rockets have resumed and the IDF just announced they were about to take action to eliminate the threat. One Twitter user, with a free Palestine image, responded saying that another Nuremberg was waiting for Israel.
This Gaza Holocaust analogy is spectacularly bad, and deliberately anti-semitic. It has been repeated so many times that it is becoming a Big Lie. And therein lies the second crack in the Hamas illusion.
The anti-semitic imagery used in this conflict is beyond anything we have seen before. It looks like a deliberate social media strategy of Hamas, and one that follows perfectly from the anti-semitism in their mainstream media channels, including on children’s TV shows like Tomorrow’s Pioneers.
The treatment of all casualties as civilians, the overt anti-semitism and the comparison of Gaza to the Holocaust are part of a coherent Hamas social media strategy. The strategy has been openly promoted to activists via official Hamas channels, in Arabic of course. MEMRI translated this guide in mid-July, but it didn’t get enough attention.
Consider two points taken from the guide: “Avoid entering into a political argument with a Westerner aimed at convincing him that the Holocaust is a lie and deceit; instead, equate it with Israel’s crimes against Palestinian civilians”; “Anyone killed or martyred is to be called a civilian from Gaza or Palestine, before we talk about his status in jihad or his military rank. Don’t forget to always add ‘innocent civilian’ or ‘innocent citizen’ in your description of those killed in Israeli attacks on Gaza.”
Our cracks suddenly open into fissures, and the ground on which Hamas’s illusion rests starts to fall away. Both these points from the guide are repeated constantly in social media. Only some of this comes from hard-core Hamas supporters who may have seen the guide. Much comes from people who have no idea about the ideology of Hamas, or its social media war strategy.
The technical term for people who have been suckered into supporting the Hamas social media strategy, and therefore Hamas more broadly, is “useful idiots”. Rich Lowry has written a great piece about the impact of these useful idiots, but missed the Hamas strategy to deliberately create more of them. I previously noted that Facebook was caught in a social media war, but I missed how anti-semitism and the creation of useful idiots was part of this strategy.
I’ve created a resource page to explain the problem with the Holocaust analogy. Some, including Muslim friends, quickly saw the problem. Others, anti-racism activists with no specific connection to the conflict, refused to see it. For them, raising anti-semitism was an attempt to dodge the issue of the casualties and the criticality of stopping Israel. I felt I was staring down a rabbit hole. Comparisons between Israel and the Nazis are given explicitly as an example in the Working Definition of Antisemitism. It’s not a matter of interpretation or debate.
The way people around the world have been misled as part of a deliberate Hamas propaganda strategy is outrageous. The real civilian deaths, inevitable in armed conflict, are still a tragedy, but to use support for human rights as part of a war strategy is morally reprehensible. We knew they were doing it with the living, now we see they are doing it with the dead as well. To promote Holocaust trivialisation as part of a war strategy is also utterly reprehensible. Hamas advocates genocide of Jews in its charter, but how did anti-racists come to adopt this vile poison and promote the agenda of genocide?
Many who have fallen for the Hamas propaganda strategy have reacted angrily when told their comments are anti-semitic and defended their position with reference to the “unique nature” of the current conflict. With that premise exposed as a deliberate illusion, they have a bitter pill to swallow. Many seek other ways to validate their actions. In doing so many may fall further into the racist arguments flooding across social media.
Unless people stop and take stock, Hamas may well achieve its real purpose, to harm Jewish people around the globe. The rise in anti-semitism is a key outcome of this war, and it seems it is far from an accident.
Monday, July 21, 2014
Myth of Palestinian Identity
From Quadrant magazine
The Tactical Myth of Palestinian Identity
There are no such people as Palestinians and there never have been. Nor was there ever a state called Palestine or a Palestinian culture or a Palestinian language. If truth prevails over the left's revisionist history, Palestine will never be more than the name of a geographical region
Following the rise of the arch-terrorist Yasser Arafat to the leadership of the Palestine Liberation Organisation in 1969, there was a push to convince the world that Palestinians are a unique people of Arab origin who had inhabited the lands of Israel for thousands of years. The uninformed and gullible believed it, and leftists still promote it.
In fact, there are no such people as Palestinians and there never have been. If anyone doubts this they should consider the following statement made in 1977 by Zahir Muhsein, a PLO executive committee member, during an interview with the Dutch newspaper Trouw:
The creation of a Palestinian state is only a means for continuing our struggle against the state of Israel for our Arab unity. In reality today there is no difference between Jordanians, Palestinians, Syrians and Lebanese. Only for political and tactical reasons do we speak today about the existence of a Palestinian people, since Arab national interests demand that we posit the existence of a “distinct Palestinian people” to oppose Zionism.
Nor was there ever a state called Palestine or a Palestinian culture or a Palestinian language. Like the Jordanians, Palestinians are a recent creation. If truth and justice prevail over revisionist history promoted by the Left, there never will be a state called Palestine in place of Israel and the word will remain no more than the name of a geographical region.
The Roman emperor Hadrian, in an effort to destroy Jews once and for all, renamed the territory of Judea (the present-day West Bank), Palaestina, after the Philistines, an Aegean people who had conquered the coast of Canaan. They appear in the Old Testament as depraved pagans and have long since vanished from history.
Hadrian’s intention was to eradicate the memory of the Jewish people, whom the Romans exiled from their homeland as punishment for having rebelled against them in 132 AD. Nevertheless, a handful of Jews remained in Israel, and their descendants have been a Jewish presence ever since. In fact, Jews are the only group of people to have lived continuously in Israel for the past 3700 years.
With the fall of the Roman empire, a host of interlopers took over the region called Palestine—Umayyads, Abbasids, Fatimids, Crusaders and Mamluks—but they never created an independent country. Even under Ottoman rule (1517 to 1917) Palestine did not become a state.
Following the defeat of the Ottomans in the First World War, the term “Palestinian” most often referred to the region’s Jews, not least because so few Arabs were prominent in the British mandated territory of Palestine. For example, there were the Jewish newspaper the Palestine Post and the Jewish Palestine Symphony Orchestra (later the Israel Philharmonic Orchestra).
Present-day Arabs who call themselves Palestinians are relative newcomers to the region. The earliest Arabs arrived in small numbers in 632 as part of the Muslim invasion. However, they had no historical ties to the ancient land of Israel or the territories of Judea and Samaria (popularly but mistakenly called the West Bank).
In Jerusalem, Jews have been a majority since the 1840s. In 1899, the Arab mayor of Jerusalem, Yusef Diya al-Khalidi, said, “Who can challenge the rights of the Jews in Palestine? Good Lord, historically it is really your country.”
In addition to the early Muslim invaders, large numbers of Arabs from Egypt, Syria, Lebanon and Iraq settled in the region in the nineteenth and twentieth centuries, attracted by the commercial opportunities opened up by newly-arrived Jewish settlers, mainly from Eastern Europe and Russia, pursuing the Zionist dream of re-establishing their homeland and transforming it into the biblical land of “milk and honey”.
In 1948 about 1.2 million Arabs lived in Palestine. However, with the invasion of the territory by Egypt, Jordan, Syria, Lebanon and Iraq on the day the modern state of Israel was founded, the demographics of the region changed quickly. Large numbers of Arabs fled to escape war and because false tales of Israeli atrocities, spread by the invaders, caused them to panic.
However, instead of being welcomed and helped by their brother Arabs, they were herded into concentration camps, to be used as pawns in the Arabs’ never-ending attempts to liquidate Israel. Jordan offered them some comfort but even that was intermittent.
If Arab invaders had succeeded in destroying Israel, it is unlikely that the Palestinian Arabs who had taken flight would have been allowed to return to their homes. There is no love lost among Arabs. The spoils would have been shared out among the victors; the people we call Palestinians would have received nothing.
To appreciate the plight of the Palestinians it is important to understand that the Arabs are largely a tribal people whose first loyalty is not to the nation-state but to their families, clans and tribes. They see Palestinians, not as brothers, but as landless aliens.
People in the West who pity today’s inhabitants of Gaza and the West Bank would do well to remember who put them there—not Israel, but their fellow Arabs.
Leftists who call for the destruction of Israel and its replacement by a Palestinian state should compare Israel’s one and a half million Arabs, who enjoy the benefits of full citizenship, with Palestinians in Arab countries who are looked upon with contempt. Israeli Arabs hold seats in the Knesset, run successful businesses and serve in Israel’s civil and diplomatic services. Unlike most Jewish Israelis, they are exempt from serving in the Israeli Defence Force. Only in Israel are Palestinian Arabs treated humanely and their rights recognised.
UN Resolution 181, passed in 1947, spoke of the founding of a Jewish state and an Arab state. Yet the prospect of there ever being an independent Palestinian Arab state is about zero. This is not because Israel opposes a two-state solution (in fact, it has consistently favoured it) but because no Arab country seriously supports it. To do so would mean recognising the right of Israel to exist.
A Palestinian state at peace with Israel is anathema to most Arabs. Moreover, peace between a Palestinian state and Israel would be a stumbling block to Iran’s ambition of becoming the region’s dominant power. Only the existence of Israel thwarts Iranian imperialism.
Consequently, the sight of Mahmoud Abbas negotiating a peace deal with Israel, under pressure from Obama and Kerry, is a pathetic farce and the actors know it. The slightest compromise by Abbas would effectively sign his own death warrant. The farce of Palestinian-Israeli negotiations proceeds largely for the benefit and amusement of the rest of the world.
There is an even deeper reason why an Arab peace with Israel is unlikely. According to Muslim law, once a territory has been under Islamic control, it may never revert to ownership by non-Muslims. It is the Islamic version of the Brezhnev Doctrine, under which a communist country must never be allowed to revert to a non-communist one.
Meantime, the talks will go on, terrorist attacks on Israel will proceed under Iranian and Syrian directives, and Palestinian Arabs living beyond Israel’s borders will continue to wallow in the ordure created for them by their fellow Arabs.
Monday, January 27, 2014
sustainability
Bitten by the dispiriting dogma of sustainability
NICK CATER THE AUSTRALIAN JANUARY 28, 2014 12:00AM
WHAT a splendid start to the year it has been for Australia. First the Ashes, then the one-day trophy and finally the corporate sustainability prize awarded to our own Westpac Banking Corporation in Davos.
Sustainability is "a leading-edge issue", which means no one has a clue what it is, not even Wikipedia. The best it can manage is that sustainability is "a multi-faceted concept" and "a matter of ongoing argument". So much for the wisdom of crowds.
Even the judges of the Global 100 awards were forced to admit that "determining which companies are 'sustainable' and which are not is a challenging enterprise".
It is not enough just to stay in business for 197 years, keep the capital-adequacy ratio respectable and return a stonking great profit. Westpac had to score against "a set of quantitative and clearly defined key performance indicators" determined by "a rules-based construction methodology". It couldn't be clearer than that.
For those who missed the live coverage of the green-collar Oscars, however, we will run through the main categories again.
Leadership diversity. It goes without saying that women run more sustainable corporations than men and, with chief executive Gail Kelly in the big chair, this one was Westpac's to lose.
Horizontally integrated remuneration framework. To achieve a perfect score in this category, the CEO's salary should match the company's average wage. Since Kelly took home $5.6 million last year, Westpac did not quite get 10 out of 10, but since all bankers get paid pretty handsomely, it was presumably good enough.
Percentage tax paid. It may be counterintuitive to suggest that sustainable corporations pay more tax but that's what the rules say. In the weird world of Davos, the percentage of profits paid as tax is regarded as a measure of corporate virtue rather than government vice.
Energy productivity. Westpac could teach BHP Billiton a thing or two here. The banker's profit was only a third of the miner's, but its carbon footprint was 250 times smaller. Clearly BHP needs to start thinking about recycling its printer cartridges.
The clean capitalist utopia as pictured by the economic romantics in Davos is, it turns out, unsustainable since it rewards those in comfortable offices who buy and sell money rather than those in fluoro who actually make it.
The fetish for paying taxes transfers money from the private to the public sector, and there is nothing remotely sustainable about that.
In these historically incurious times, it is worth reminding ourselves that business did not always operate this way.
As Adam Smith once noted, the baker, the butcher and the brewer used not to provide our dinner out of the goodness of their hearts "but from their regards to their own interest".
Nowadays, however, we like our businesses to be socially responsible, environmentally aware, ethically orientated, big-hearted Arthurs.
Thus the corporate sector has surrendered to the dispiriting dogma of sustainability, the heresy that took hold among the hippies in the late 1960s and mutated into a misanthropic, deep green movement in the 70s.
Today it wears a pinstriped suit and sits in the boardroom signing off on the most egregious muddle-headed nonsense in the name of corporate responsibility.
Sustainability may present itself as harmless mumbo-jumbo that helps build a brand, but
its underlying philosophy is antithetical to freedom and to enterprise.
"The uncontested absurdities of today are the accepted slogans of tomorrow," Ayn Rand wrote in 1972. "They come to be accepted by degrees, by dint of constant pressure on one side and constant retreat on the other until one day they are suddenly declared to be the country's official ideology."
Four decades later, her prophecy has been fulfilled. Sustainability is one of the three priority themes in the new Australian curriculum, polluting everything from algebra to zoology.
"The sustainability priority is futures-oriented, focusing on protecting environments and creating a more ecologically and socially just world through informed action," the curriculum says.
Students are encouraged to consider "that unlimited growth is unsustainable; sustainability - that biological systems need to remain diverse and productive over time; and rights of nature - recognition that humans and their natural environment are closely interrelated".
Sustainability is Malthusianism for the 21st century: the fallacy that population is growing faster than the available resources and that ruination is just around the corner.
The world viewed through the prism of sustainability is a deeply depressing place in which dreams are discouraged, imagination is restricted and the spirit of progress frowned upon.
Sustainability means never having to say sorry. In 1990 the World Hunger Project calculated that the ecosystem could sustainably support six billion people, and then only if they lived on a vegetarian diet.
More than two decades later, with 7.1 billion people living on the planet, global beef production has increased by 5 per cent per capita, pork by 17 per cent and chicken by 82 per cent, and that's not counting the eggs.
The World Food Programme estimates that there are 170 million fewer malnourished people than there were in 1990.
The inconvenient prosperous truth is that human beings have, since the dawn of time, created more than they used on average over the course of a lifetime.
The happy by-product of an expanding population ever more interconnected is that the sum total of human knowledge grows exponentially.
The energy crisis, the one that is supposed to lie just around the corner, has been creating anxiety since the 1600s when Britain began to run out of firewood. Scarcity spurred the development of coal. The great whale oil crisis of the 1840s stimulated the search for oil. Time after time the coming catastrophe is postponed through abundance, and the inherent dishonesty of sustainability is exposed.
Human ingenuity is an infinitely renewable resource. Prosperity comes from seizing the elements of nature and rearranging their form.
"Wealth does not exists as a fixed, static quantity," wrote Rand. "It is the creation of a dynamic, boundless mind. And it has no inherent limitation."
Tuesday, January 14, 2014
GlobalWarming=ClimateChange
Mother Nature suggests the party's over for IPCC
MAURICE NEWMAN THE AUSTRALIAN JANUARY 15, 2014 12:00AM
GIVEN the low-grade attacks on me following my piece "Crowds go cold on climate cost" (The Australian, Dec 31), readers of Fairfax publications and The Guardian may be shocked to hear I believe in climate change. I also accept carbon dioxide is a greenhouse gas. The trouble is, I cannot reconcile the claims of dangerous human CO2 emissions with the observed record.
I admit it. I am not a climate scientist. That said, I have closely followed this debate for more than two decades, having been seasoned originally by the global cooling certainty of the 1970s.
The climate consensus of the 70s, like the period since the Intergovernmental Panel on Climate Change was established in 1988, was dominated by politics, not science. I was reminded of how deeply political awareness has infected today's academies when I received an apology from a respected climate scientist who corrected his own public cheap shot at me. He said, "I attempt to be politically even-handed ... I try to steer a middle course as a scientist."
Really? Surely science is not about neutrality? It is about evidence and conclusions which fall where they will. So when an internationally acclaimed climatologist like Roy Spencer from the University of Alabama at Huntsville dispassionately analyses climate models covering 33 years and concludes that both the surface and satellite observations produce linear temperature trends that are below 87 of the 90 models used in the comparison, he does not politically neutralise his findings. They are empirical fact.
They eventually become political because the models he demonstrates to be seriously flawed are the bedrock on which the IPCC's global warming case is built. As Spencer said recently, "The modellers and the IPCC have willingly ignored the evidence of low climate sensitivity for many years ... The discrepancy between models and observations is not a new thing ... just one that is becoming more glaring over time."
Spencer is joined by celebrated Massachusetts Institute of Technology climatologist Richard Lindzen, who says: "I think that the latest (AR5) IPCC report has truly sunk to a level of hilarious incoherence. They are proclaiming increased confidence in their models as the discrepancies between their models and observations increase." He is "willing to take bets that global average temperatures in 20 years will in fact be lower than they are now". Any takers?
The lengthening pause in global warming is influencing the political climate. The language has changed from the specific "global warming" to the more general "climate change" and now to the astrological "extreme weather events" where "I told you so" can be almost universally applied. For example, we are to believe the recent cold spell in the US and the heat wave in Australia are both examples of global warming. Yet 2013 was one of the "least extreme" weather years in US history.
Political will is also flagging. The Copenhagen summit was almost five years ago, yet there is still no global, legally binding international agreement on emission reduction targets. Only talk.
Canada's Environment Minister Leona Aglukkaq last year discarded a proposal from her department to publicly state that the Harper government recognised scientific evidence that humans were "mostly responsible" for climate change and that it took this advice "seriously".
And now, no doubt in response to the political backlash from the economic cost of green schemes, the European Commission is to order Britain to end wind farm subsidies. According to Britain's The Telegraph: "The commission ... is about to argue that the onshore wind and solar power industries are 'mature' and should be allowed to operate without support from taxpayers." Germany's renewable energy industry virtually shut down for almost a week in December when nearly 23,000 wind turbines and one million solar panels ceased to generate. Faced with uncompetitive electricity prices and the fantasy of cheap, reliable renewable energy, Germany is building 10 coal-fired power stations over the next two years with 15 more planned. The green delusion is finally confronting economic reality.
What we now see is the unravelling of years of shoddy science and sloppy journalism. If it wasn't for independent Murdoch newspapers around the world, the mainstream media would be almost completely captured by the IPCC establishment. That is certainly true in Australia. For six or seven years we were bullied into accepting that the IPCC's assessment reports were the climate science bible. Its chairman, Rajendra Pachauri, told us the IPCC relied solely on peer-reviewed literature. Then Murdoch papers alerted us to scientific scandals and Donna Laframboise, in her book The Delinquent Teenager, astonished us with her extraordinary revelation that of 18,000 references in the IPCC's AR4 report, one-third were not peer reviewed. Some were Greenpeace press releases, others student papers and working papers from a conference. In some chapters, the majority of references were not peer reviewed. Many lead authors were inexperienced, or linked to advocate groups like WWF and Greenpeace. Why are we not surprised?
The IPCC was bound to be captured by the green movement. After all, it is a political body. It is not a panel of scientists but a panel of governments driven by the UN. Its sole purpose is to assess the risks of human-induced climate change. It has spawned industries. One is scientists determined to find an anthropogenic cause. Another is climate remediation. And, naturally, an industry to redistribute taxes to sustain it all. With hundreds of billions of dollars at stake, this cartel will deny all contrary evidence. Its very survival depends on it. But the tide is turning and Mother Nature has signalled her intention not to co-operate.
In the meantime, childish personal attacks on those who point out flaws in IPCC reasoning and advice only increase scepticism. They are no substitute for empirical evidence and are well into diminishing returns. The party's over.
Maurice Newman is chairman of the Prime Minister's Business Advisory Council.
Native species, a rethink!
Quadrant Jan 2014
DAVID F. SMITH
In Praise of Exotic Species
Which plants and animals should Australians favour? This question causes endless debate. Many people vehemently assert that native plants growing on a site are the best on that site for every purpose. One leader of the pack has been Tim Flannery, who in his 2002 Australia Day address suggested introduced plants like roses, plane trees and lawn grasses were a blot on the Australian landscape and should be discouraged. He suggested that among the 25,000 species of plants in Australia we should find superior replacements. Would we? How long and arduous would breeding and selection be? Such people say, “Surely it’s simple logic: they have evolved there and are therefore best adapted.” But did they evolve there? And even if they did, does that make them the best?
The answer to the question has important implications, not least in the use of land to feed the growing population of the world, both in growing food crops and in providing protein through grazing animals. The answer is particularly interesting in Australia, where accepting plants from elsewhere has been a hugely important part of wealth creation.
Can we define a native plant?
Again, many say, surely it is easy. It has long grown here, belongs here, must have evolved to suit this environment. But in fact we have no proof that even plants now dominant have evolved here. Perhaps the best we can do to define a native plant is, “A native is a plant that was here before us.” (And one can include Aborigines and Europeans under “us”, as in terms of the evolutionary time-scale of plants we arrived at much the same time.) That is all we really know for sure. It may have evolved here, but it may have arrived at any time in the more than 100 million years this continental land mass has existed. In any case, is presence at a site proof of superior adaptation over all other possible plants that have not been tested? There are many examples of the superb adaptation of known recent arrivals: all Australia’s jacaranda trees apparently developed from one introduction, and the coconut palm frequently and randomly floats ashore, having grown on many distant islands.
The same natives-are-best logic is used in places like India and South Africa to damn our eucalypts, which flourish there: it is alleged they use too much water, that native trees would be better. In fact, to give a certain amount of shade, that is, by having a certain leaf area, there is little difference between plants in their total water needs. A different issue is that some are more drought resistant. Because of their ability to survive and thrive, and provide good wood and honey, eucalypts could be considered one of Australia’s great gifts to the world.
Evolution of plants has gone on for millions of years, since before the land we call Australia had its present shape and the current climate prevailed. (Yes, there has been much climate change.) Though we can only speculate, it is generally agreed that there was long ago a huge southern landmass, which we call Gondwana, comprising what is now Antarctica, South America, Africa, Australia and some nearby islands. A large eastern portion apparently broke clear, which in turn broke up to form New Zealand, New Caledonia, New Guinea and Australia. Plants were carried with these pieces and continued evolving, impacted by climatic and other influences, such as animal grazing, insect attacks and microbial infestations. Breakdown of rocks with their varying composition of minerals formed characteristic soils, and because plants vary in their tolerance of levels of minerals, soil type, too, influenced this evolution.
It is hard to know how much long-distance migration of plants there has been over time. Plants have a huge variety of dispersal mechanisms: wings that enable seeds to blow on the wind, awns that cling to fur and wool (and, once it was invented, clothing), flotation in running water. Having seeds attractive as food for an animal or bird, yet having some resistance to digestion, is a great asset, especially if the creature involved is a wanderer that can pass the surviving seeds some distance from the parent plant along with some manure to give any new seedlings a good start.
The classification of plants has been formalised over the last few hundred years, especially since the detailed studies of Linnaeus, who developed the idea of classification and descriptive Latin names.
Plants have evolved into many different forms. Early seed plants did not have flowers: we classify those as gymnosperms. The more recent evolution of flowering plants has given us the group we call angiosperms, the male parts forming pollen that is spread by wind and insects. Most are cross-pollinated, so each new plant is a new genetic entity—the basis of adaptability and evolution—but some are self-pollinating, so very stable in genetic composition, though less adaptable. Many plants, once seed is set, form seed covers and store a good supply of carbohydrate and protein to give the new germinating plant a strong start—and these seeds have also become one of our great sources of food. Plants have been classified into orders, which are divided into families, then genera, and finally individuals grouped into species.
The exact grouping for a species depends on the judgment of humans, some of whom have been eager to divide plants and so claim fame by naming new species. Properly establishing that a new species has been found demands careful exploration of existing collections in herbaria and the study of any closely related material, which all takes hours of tedious work. A well-known international television broadcaster once reported on a day’s visit to Gippsland: “We found five new species!” He may be a good broadcaster, but he is not a good botanist.
Another division of plants is into annuals and perennials. Annual plants begin from a new seed each season, while with perennials, even if seed is set, the old plant lives on for some years, even centuries. The perennial plant may survive by becoming dormant in adverse conditions, with buds bursting out into new growth when conditions improve. Annuals often set huge numbers of seeds, which may have resistance to germination, such as dormancy or hard-seededness—thus surviving adverse conditions and then germinating in huge numbers when conditions are right for growth.
Many of our crop plants are annuals, but it has become fashionable to call for the use of perennials, arguing that there would be a saving of effort such as preparing a seed bed and sowing each year. However, the idea is a delusion. In modern zero-till farming very little effort goes into land preparation and sowing, and both annual and perennial crops would need the same weed control and nutrient additions. The main down-side would be that the farmer would lose the flexibility of changing crops in response to demand, or using new disease-resistant varieties. Further, yields would often be lower: an annual prepares to die by translocating virtually all of the nutrients in its stems and leaves to the seed. A perennial sets stores aside to live on through the adverse season, usually in basal buds and structures on roots, so translocates much less to its seed.
When conditions are right, seeds of annuals germinate, mobilising the reserves packed into the seed and growing rapidly: the capability of annuals such as barley grass to germinate, turn green and build plant tissue is astonishing. Tim Flannery has described such plants as having “weedy” characteristics. Rather, this is simply the result of successful evolution in areas with a long adverse season such as summer drought yet a definite seasonal winter rainfall ideal for growth. These are the characteristics of successful adaptors.
Gondwana has a special place in the evolution of our vegetation. It is often said that Australia, one of the largest of the Gondwanan land-masses, has ended up with a unique flora. However, it depends on definition. All of the main plant families occurring here are well represented on other continents, so most plants have cousins elsewhere, some far away. For example, wallaby grass, so widespread in Australia, is a close relative of the oats long used as a grain crop in Central Asia and Europe. The ancestors of one or the other must have moved or been carried.
How unique then is our native flora? The assertion rests on the fact that two well-known groups occur only in Australia or nearby: the very large and prominent genus Eucalyptus (with some species recently given other names) and the large-leafed subgroup of Acacia (a few Eucalyptus species occur in New Guinea). However, a comprehensive analysis of world distributions brings the assertion into question.
Northern Australian plants are closely related to the Indian and Malayan flora, so are probably immigrants from South-East Asia. The botanist Joseph Hooker suggested several eras of arrivals in the Cretaceous period (65 to 135 million years ago) as a result of land movements and sea level changes.
Of course, the genetic material present at break-up of Gondwana has powerfully influenced the flora on each land mass. Thus, though there were some brilliant-flowering legumes, like Sturt’s desert pea, on poor soils in dry areas, in the moister areas there was a dearth of legumes capable of prolific fixation of atmospheric nitrogen. Further, some of the soils were low in plant nutrients, especially the phosphorus that is so vital for energy transactions in living things. This was accentuated by the great age of the continent, which meant much phosphorus had leached or washed into the seas, whence it moved up through the food chain and accumulated, as bird droppings, on islands such as Ocean Island. Once our farmers understood the need for more phosphorus, much of it was returned as fertiliser. However, the lack of both phosphorus and nitrogen often meant poor efficiency of capture of solar energy, which is so abundant on this continent, and this made for some ecosystems with very stunted native vegetation. Paradoxically, some of the rather stunted plants evolved brilliantly coloured flowers—clothing the sand-plains of Western Australia and the Big and Little Deserts of Victoria in beautiful displays. However, low phosphorus and nitrogen limit the sequestration of carbon.
The flora of any place is, then, something of an accident of history, beginning with the genetic material present at the formation of the land mass, usually a break-up, or in the case of a large land mass, when barriers like deserts developed or there were depositions by lava flows. The flora present at European settlement of Australia was the result of such evolution, affected by those soils and that climate, occasionally with later arrivals. Thus it was the best that could evolve from the plant material that happened to be present in that set of conditions—but not necessarily the best that could ever be. Defining best is, of course, a complex matter. We humans usually relate it to utility: bearing much fruit or grain, providing a lot of leafy herbage for our animals, having brightly coloured flowers, producing timber.
New areas of soil, such as cooling lava flows, deposits from floods, or even wind-deposited sandhills and plains, can make an interesting study. What grows on these “new start” locations is just an accident of proximity. A good example was the treeless grasslands of the lava flows of western Victoria. Grasses spread quickly but eucalypts suited to the soil type were slow to arrive: it took European settlers to find and widely plant a suitable eucalypt, the sugar gum, from western South Australia, especially Kangaroo Island and the Eyre Peninsula. They could almost have been accused of using an overseas species!
Sites having similar climatic conditions—for southern Australia this meant the Mediterranean regions—were the most likely providers of new plants. These regions were remote from Australia until bigger ships enabled long journeys; if the ships carried livestock and fodder, these lands were a source. Another important factor was modification of the soil or climate: cultivation, adding fertilisers, irrigation—all recent activities of humans.
The “arrival” of humans on Earth (if we can assume there was a defining moment) was geologically recent, coming after plants had been evolving for a very, very long time. Soon humans left their mark. (In a sense we define them as humans from the time when they began to leave their mark.)
Imagine a person—we’ll say “she” because women did so much of the food gathering—walks from the cave some distance over the hill and down along a valley, exploring. She sees a plant with a lot of fruit, picks some fruit, and it tastes better than any other she has tasted.
Being a thinker, she keeps the seed and uses her digging stick to bury it in the soil nearer to the cave, thus in due course a shorter walk to pick fruit. She is pleased, but the new plant is shaded by an existing plant, so she breaks off part of the offending plant. She and her family deposit faeces and urine on the earth near the plants, having noticed that this seems beneficial. Here is the beginning of plant selection, soil cultivation, plant propagation, pruning, and using manures, all in the cause of better value to humans. This is the beginning of agriculture, altering ecosystems, possibly grouping of plants.
These ecosystems—managed for human utility, including the one to which our gatherer had added new plants—could easily be described as both natural and agricultural. Was it unnatural of these humans to use their minds to think, and their hands to do things, to grow better plants, and have more varied food? There could also be benefit, or loss, when her new plant at its new site flowered and crossed with different plants, giving slightly different genotypes and possibly less seed down the valley. The possibility of variation does not alter the fact that her behaviour was natural.
Do we need to put a geographic limit on how far the “new” plant can be carried and the spread still be called natural? Some people want to put a limit on genetic difference in breeding too. And must all movement of pollen or seeds without human handling be natural if by birds, but not if carried by humans? An interesting example is the long trade routes like the Silk Road from China to the Middle East—passing through areas of great evolutionary significance—with plant material carried, both deliberately and accidentally. Someone would soon have done deliberate trade in new varieties known to have better taste or resistance to attack by insects or fungi.
In the last 500 years agriculture has become more and more complex. Especially in the last fifty years farming has become much more precise—exact measurement, awareness of plant nutrition, of chemical dangers as well as benefits, and so on—based on a profession called agricultural science. Modern humanity does not select a plant only from down along the valley—the search is global. New plants are “composed” using special plant breeding techniques, reaching to remote biological entities on other branches of the evolutionary tree for special genes. A great example is the insertion of bacterial genes into cotton to give resistance to bollworms, saving a huge amount of chemical application. Finally, while a single plant is just a plant, sensible arrangement into a cluster is an orchard or a crop. Modern humans use machinery to sow many seeds of the selected new plant to maximise yield per unit area of land or of human or mechanical energy.
Soils vary enormously. Eroding streams tend to gather material from a range of rocks and, downstream, may lay down deep soil, creating fertile land. Paradoxically, severe erosion upstream means deposition of large areas of soil downstream. We define fertile as having an adequate supply of the sixteen elements needed for plant growth, such that it will supply several crops in succession. Slash-and-burn farming, as practised in times past in parts of Central and South America, was a way of releasing plant nutrients and getting several crops, then moving on. In other places, from early times efforts were made to return all possible residues—plant remains, animal manure, even human excrement—to the growing site, thus deferring the impact of deficiencies in the soil. Where the soil was very fertile, or where there were additions through soil deposition (as along river valleys and flood plains) or further breakdown of rocks, yields might be maintained over many years.
Progress in maintaining or even increasing yield with successive crops came with the scientific revolution: precise definition of deficiencies, amounts needed of any deficient elements to be added to maximise yield, and the timing of any application. Included in the models to define these things is information on soil moisture availability.
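To make the kind of calculation described above concrete, here is a minimal sketch in Python of a simple nutrient-budget estimate. It is purely illustrative: the removal rate, the soil-supply value and the target yield are all assumptions invented for the example, not figures taken from this article.

```python
# Illustrative only: a toy nutrient-budget calculation of the kind the
# paragraph above describes. None of the numbers come from the article.

def fertiliser_rate(target_yield_t_ha, removal_kg_per_t, soil_supply_kg_ha):
    """Kilograms of a nutrient to apply per hectare so that crop removal
    at the target yield is covered, after allowing for what the soil supplies."""
    requirement = target_yield_t_ha * removal_kg_per_t
    deficit = requirement - soil_supply_kg_ha
    return max(deficit, 0.0)  # nothing applied if the soil already supplies enough

# Hypothetical example: a 4 t/ha wheat crop removing about 3 kg of phosphorus
# per tonne of grain, with a soil test suggesting 5 kg/ha is plant-available.
print(fertiliser_rate(4.0, 3.0, 5.0))  # 7.0 kg of phosphorus per hectare
```

In practice, as noted above, such models also fold in soil moisture availability and the timing of application; the sketch leaves those out for brevity.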
In most commercial farming every possible thing is known precisely, while organic production is still to a large extent “hit and hope”. The yield of crops under organic systems is, as a result, generally about 70 per cent of that attained with precision systems. Further, some of the vegetable matter added as organic “fertiliser” comes from land dedicated to the purpose of growing that vegetable matter, so the overall effect is to make organic land use about 50 per cent efficient. Given that the amount of energy input is related to area sown, rather than ultimate yield, about double the energy per unit of production is involved, so there is a serious question of higher greenhouse gas emissions per unit of food under organic systems. Fortunately, despite its publicity, organic agriculture is a small player in world food production, variously estimated as around 1 per cent of the farmed area in the developed countries of the planet. Every increase in its area reduces the planet’s ability to feed a growing population.
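The arithmetic behind these percentages can be laid out explicitly. The short calculation below simply restates the figures in the paragraph above; the 1.4-hectare land ratio is a derived assumption chosen so that a 70 per cent yield becomes roughly 50 per cent land-use efficiency, and the absolute units are arbitrary.

```python
# Restating the paragraph's percentages as arithmetic, in arbitrary units.

precision_yield_per_ha = 1.0                          # food per cropped hectare
organic_yield_per_ha = 0.7 * precision_yield_per_ha   # "about 70 per cent"

# Assume each organic hectare needs extra land to grow its organic "fertiliser",
# so one cropped hectare effectively ties up about 1.4 hectares in total.
support_land_ratio = 1.4                              # assumed figure for illustration
organic_output_per_total_ha = organic_yield_per_ha / support_land_ratio
print(organic_output_per_total_ha)                    # 0.5 -> "about 50 per cent efficient"

# If energy input scales with area sown rather than with yield:
energy_per_ha = 1.0
precision_energy_per_unit_food = energy_per_ha / precision_yield_per_ha                      # 1.0
organic_energy_per_unit_food = (energy_per_ha * support_land_ratio) / organic_yield_per_ha   # 2.0
print(organic_energy_per_unit_food / precision_energy_per_unit_food)                         # about double
```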
Australia eventually received some human migrants—two groups in just about the same geological moment: the Aborigines from the near north about 50,000 years ago, and people from Europe 225 years ago. At the time of Aboriginal arrival, their society was more developed than that of most other parts of the world, including Europe, but by the time the Europeans came to Australia there had been huge developments in other societies. Agricultural ecosystem management and plant introduction had become firmly established and accepted.
The Aborigines eventually spread to the bounds of the continent but the extent to which they interacted with the ecosystems is a moot point. They would have dwelt (and hunted and gathered) more in the locations where there were more assured water supplies and accessible game. They certainly impacted on ecosystems and developed management systems, for instance burning grasslands to improve visibility and expose regrowth. This was especially so near sites where kangaroos and other game came to water; managing kangaroos for meat this way was what we would probably now call a sustainable system. They must have recognised the part burning played in preventing grasslands from evolving into woodland or forest, at the same time increasing green shoot production from grasslands after rain, though they might not have defined the other benefit—more rapid recycling of plant nutrients. These activities were classic cases of management of natural ecosystems, with these benefits, yet having down-sides such as reducing soil organic matter. In more “knowing” systems both would be quantified and suitable amelioration devised.
As is the case with hunter-gatherers, the point of balance between the food needs of the total community and production is difficult to comprehend, especially over vast areas of land and variable seasons. Authors such as Bill Gammage claim that the Aborigines exercised management control over every bit of the land mass of Australia, but with their modest population this would be an impractical and unrewarding task. Gammage does suggest that care did not always mean action: people might leave land alone for long periods. In long droughts there could well have been catastrophic ecological events, and much human misery. That the Aborigines did not develop a form of agriculture in some of the well-watered parts with fertile soil is something of a mystery. There was apparently some local plant cropping near Geraldton, and fish trapping was organised in western Victoria.
Gammage also makes some curious generalisations about the soils, suggesting that under Aboriginal management they were nice and spongy but lost this quality after European settlement, leading to vegetation changes. No doubt there were changes in the areas that felt the impact of the plough—less than 10 per cent—and some effects of the tread of sheep, though the close grazing of the rabbit (with the softest tread of all) possibly had greater impact on vegetation. Locally, the influence of frequent firing probably had the greatest impact—on the down-side destroying organic material and on the up-side releasing and circulating nutrients.
Gammage is one of a number of authors who have pointed out that Captain Cook described some landscapes as open woodland, that is, scattered trees over grass, yet, as far as could be judged viewing from the same spots today, these areas are shrubby instead of grassy. Gammage asserts that “typically, grass grew on good soil, trees on poor”, a gross over-simplification. There are many factors involved: for instance, the western plains of Victoria were grassland because their formation from lava had been so recent that suitable trees had not yet arrived or thrived on new soils. Grasses have much faster dispersal.
There is plenty of evidence of plant migration, ecosystem regrouping, producing different vegetation over time. Take the case referred to earlier of the sugar gum, one of the few eucalypts which readily adapted to the basalt plains of Victoria. At the time of European settlement it occurred on Kangaroo Island, on the Eyre Peninsula, and in the Flinders Ranges. We can speculate whether this was the residue of a very wide occurrence or the result of slow dispersal from a small area of evolution. Either way, adaptability to certain soil characteristics is likely to have been a factor.
Commentators on vegetation need to have a background in plant ecology, asking why, when and where, not simply be describers of what is now. The Adelaide School of Botany, led in the 1940s and 1950s by Professor Joseph Wood, produced some great analysts in ecology: Crocker, Specht, Coaldrake. The Soils Division of the CSIR was established in Adelaide and so the powerful ecological influence of soil variations was studied there.
Much of their work explained ecological change without the European influence. Crocker reached back in geological history in a seminal paper, Post-Miocene Climate and its Effect on Pedo-Genesis in Southern Australia. An extraordinary coincidence was the arrival from England, hoping to be cured of tuberculosis, of John McConnell Black. After a successful career as a journalist and author, and using his inheritance from his sister, Mrs D’Oyly Carte of Gilbert and Sullivan fame, he produced Black’s Flora of South Australia, a wonderful aid to plant studies which ensured that the classification of plants was standardised, an essential for good ecological work.
The Europeans came from what was not only an agriculturally developed economy, but by that time a rapidly evolving society, with a surge in the use of science and technology. Making a new Europe may have been in their minds to some extent when choosing food crops—logically, familiar fruits and grains. In due course, plants such as wheat were “localised” by selection and ultimately breeding. Settlement inland by the Europeans emulated much of the Aboriginal system: grazing animals (sheep) on the kangaroo and wallaby grasslands for meat and skins or wool. They saw fire as a hazard to their fixed structures, so in contrast to the Aborigines they avoided widespread burning, unknowingly increasing shrub and tree growth and reducing grassland. Despite what some apologists say, less burning of plant debris would have allowed more organic matter to accumulate and so benefited the structure of the soils—they were likely to become spongier. Later, where land was to be cropped, burning, axing and grubbing were used to eliminate regrowth of native shrub and tree species, and considerable areas of ecosystems, some recent, some long established, were cleared and cropped—these had lower organic matter and were less spongy.
The land left as grasslands at first seemed moderately productive, often carrying one sheep per hectare. In due course, legumes with a capability to copiously fix atmospheric nitrogen arrived from elsewhere and, given the right fertilisers, this meant substantial increases in everything: herbage production, numbers of grazing animals (up to ten sheep per hectare), soil organic matter, carbon fixation, water infiltration—and prosperity. In passing, we should also note that this was a grand sequestration of carbon!
Many people consider any partial or complete replacement of existing vegetation to be against nature. However, plants developed elsewhere have often flourished, becoming more useful than natives so either complementing them, as with grasses, or replacing them. And in any case, the environment was changed by new activities, especially the adding of fertilisers and control of grazing by fencing.
Some people ask: Why use introduced animals such as sheep and cattle, rather than the ones that had evolved here and were well suited, such as kangaroos? The answers lay with both groups of animals. After centuries of breeding and selection and training, sheep fitted well with the life and needs of humans, so, as well as being easy to manage, they provided meat and skins and even milk. Kangaroos were incredibly advanced in evolution: though breeding only singly, the adaptation of one young at foot, another in the pouch and another fertilised ovum in the womb allowed rapid multiplication after harsh times—add to that the re-absorption of the foetus if the new season did not deliver its promise. Lastly, their locomotion by bouncing, rather than running, enabled movement over long distances and leaping over obstacles, which made kangaroos difficult to muster and contain. In contrast, docile sheep were easy to contain and sort—and kill.
A twelfth-century monk once wrote that God created sheep first of all animals because a sheep flock would have enabled humans to survive by providing most of their needs: skins for tents, meat for food, milk to drink and make cheese, fat for candles, wool for clothing, ropes and thread, bones for tools and needles, horns for handles, skins for holding water and wine, and lambs for pets.
Lands with a Mediterranean climate have a long history of human activity, and the climate is well understood: a hot dry summer and a usually fairly moist winter, tapering off into desert. It is not widely understood that Australia provides a high proportion of the land on the planet so described. In the Mediterranean itself at that latitude much is sea. In North and South America the same locations are limited by high altitudes, and South Africa has little land in that latitude. By contrast, in Australia there is a wide sweep across Western Australia and quite an area in South Australia, while much of Victoria, though not exactly conforming, has strong elements, like a definite winter season and considerable rain. Scattered around the globe as these remnants are, and with oceans and tropical climates in between, there was little likelihood of natural spread of plants from one to the other—even bird carriage over such a distance was unlikely. However, once boat traffic carrying livestock began, many ships called into Mediterranean ports to load fodder, which often contained seeds of plants from the lands around the Mediterranean Sea. Some plants made landings across southern Australia and found it climatically suitable for their growth. One notable arrival was subterranean (sub) clover, which had evolved on acid soils. Such soils were not common in southern Europe, but southern Australia had vast areas of acid soils, so there was huge potential for spread.
Now that we understand the physiology of these plants, we recognise some fascinating quirks of plant ecology. Once germinated, the clover plants need to “accumulate” some cold before they stop producing leaves and flowering is initiated. This change is attained in less time in Europe, with its very cold winds coming from the Arctic over land, than in Australia, where the milder southerlies come over sea from the Antarctic region. Thus sub clover in Australia has a longer leafy period, so is a prolific herbage plant in the fields rather than the stunted roadside weed it is in Europe, fixing proportionately much more nitrogen. Further, the seed is set into a spiny burr, so with the widespread grazing of sheep, animals that lie down a lot and have woolly coats and often rub on trees, seeds spread readily. Widely sown in the early to mid-1900s, this plant has fixed between $5 billion and $10 billion worth of nitrogen each year depending on the season: its arrival was arguably one of the most important single economic events in the history of Australia. This classic case lays to rest forever any assertion that plants evolved elsewhere are not suited to Australian conditions. Many other new arrivals have also proved to be useful.
Prosperous farming industries were built on a readiness to modify natural ecosystems using new plants (especially legumes to fix nitrogen), defining and remedying any nutrient deficiencies and deliberately controlling animal grazing. This maximised their interception of solar energy and incorporation of carbohydrates into the pool of biological carbon. Growing well-chosen plants was repeatable, comparing favourably with mining, which can be done only once.
Changes in an ecosystem
Some people believe change equals degradation; some say change indicates the ecosystem was unstable—but surely it might just be that it was highly adaptable.
It is fashionable to assert that Australia is too dry and the soils too old for agriculture, implying that settlers should never have tried to change ecosystems to give fruitful outcomes. In fact selection and breeding of cultivars, and agronomic processes to conserve water, have enabled productive ecosystems—crops—to be established over vast areas of semi-arid country (as I described in “The Greening of the Arid Boundary” in Quadrant, July-August 2011). And while only a small percentage of Australia is not dry, on so big a continent that is still quite a large area. Furthermore, because of its geography—a wide east-west sweep facing the South Pole over ocean—the coastal fringe, though receiving only moderate rainfall, has a better spread and more reliability than comparable areas elsewhere in the world. And not all our soils are poor: the river valleys and flood plains of the Great Dividing Range and the volcanic soils make up quite a large area. In any case, improving poor soils is simply a challenge for scientists and farmers.
Those who leap to the assertion, popularised by Tim Flannery in his book The Future Eaters, that to change an ecosystem equates with its destruction have not stopped to analyse human history. They do not recognise that human interaction with ecosystems—as our prehistoric woman did—is a natural way of adapting, aimed at, and frequently achieving, improvement for human utility.
Allied in the minds of these critics is the view that the move from hunter-gathering and the beginning of sedentary lives, which came to be called farming, was the beginning of the end for the planet. Such people see farming as always exploitative, not allowing the healing as did the allegedly gentler hunter-gatherer. In fact, modern farming can be restorative, even productivity-building. Logically, although farming was initially grasped as simply a more certain way of ensuring an adequate food supply, commercial farming with surplus for sale has always intrinsically had the general objective of at least maintaining, and hopefully increasing, yields. It pays! The emphasis on the notion of sustainability in the last few decades has been helpful in emphasising this. At the same time, the search for fixed systems is a delusion—the readiness to note adverse elements and have solutions adopted must go on and on.
The story of the cropping areas of the southern part of Australia, where very little of the land is less productive (or has lower carbon) than when Europeans arrived, illustrates this. Most of it is now much more productive, and has more carbon. Turning the farmers into legume lovers has meant much more nitrogen being available to combine with carbon and phosphorus in the soil—sequestration of carbon. (What if urban people became legume lovers too and insisted on a legume—peas or beans—as their preferred green vegetable over others that need artificial fertiliser nitrogen? What a contribution this could make!)
Thus under farming there have been huge changes, much of this the inexorable march of evolution, albeit speeded up by these intelligent, technically able mammals called humans. This has sometimes led to complete replacement of the “natural” ecosystems, resulting in huge multiples of productivity.
This continuous search for improvement is much more challenging than simply aiming for no change. Imagine how much simpler it was for the Aborigines—and how powerless they were. They saw the land and its coverage as a simple, fixed, gift of their gods. There was little or no new arrival of plants or animals. Climate was a given—no forecasts and measures such as food stores or grazing management of animals to handle adverse conditions. Soils were accepted for what they were, and there were no agronomists with new technology. Just a few kangaroos and low population growth, largely through higher and earlier mortality. Perhaps, after thousands of years, they were close to sustainability. A European settler arriving and choosing to farm faced awesome challenges, but eventually gained increasingly powerful tools, better and better science, greater resources—and now a responsibility to help feed the world’s growing population. And most of all, rejecting any notion that native is best, but rather, accessing the worldwide range of germplasm.
There are some dilemmas. In places like Europe and Australia a key objective in managing swards of pasture was to maximise growth and animal production, and in recent times, optimum sequestration of carbon. The Serengeti ecosystem of Tanzania, in its natural state, carries over a million wildebeest (gnus). The herds eat out certain areas, not pushing on until there is just about complete baring of the soil. In Australia we have known from early research that grazing to leave a certain amount of leaf can markedly increase production. This would be possible in the Serengeti if areas were fenced. It would be surprising if the existing pasture species could not be improved by breeding or selection or new plants introduced from elsewhere, especially legumes that fix more nitrogen and produce much more biomass. Perhaps there could be some fodder conservation, even if only to allow some grazing areas to be set aside and rested. If the region became a production system rather than merely a tourist curiosity, livestock production might well be increased four- or five-fold, and animals could be sold for meat production, even export. Of course, if nutrients were exported, fertiliser application to remedy any limiting factors would be necessary. Given the same impact on productivity of such measures in native grasslands as has occurred in Australia, the people of some parts of Africa might no longer need aid.
Conversely (and perhaps a little perversely!) let us imagine that the Europeans settling in Australia had heeded an earlier “Flannery” and eschewed the introduction of species from elsewhere, had not considered the possibility of soil improvement or fertiliser application, and had allowed the animals to work out their own version of grazing. The residents of Australia and their descendants might well be grateful recipients of aid. Perhaps we were fortunate that for the first Europeans in Australia there was no “tradition”, but rather, an interest in new species of garden and crop plants, in identifying limiting factors for plant growth and remedying them, and a recognition of the need for innovation and change.
Sadly, so many of the good-hearted folk who go to help the Africans, while allowing some change, constrain it in the direction of “native is better”, traditional methods, no chemical fertilisers, locking the people into food shortages, even starvation. The Australian Soils for Life people suggest we learn grazing management from the casual, low-input operations in East Africa, on the grounds that they are working with nature—thereby implying that Australian farmers aren’t!
Land managers must focus their human abilities and strengths on intelligently and responsibly making a future, recognising the human and biological resources of the whole planet, constantly evolving new and more productive ecosystems. Wherever we are working to sustainably improve the productivity of ecosystems we must not be blinded by prejudice. We should accept the full range of plant material, evaluate new things and efficiently incorporate them into management systems. The more we make agricultural ecosystems productive, the more space we are able to set aside as parks and reserves.
We must use the full range of our tools and resources: we must be future makers.
Dr David F. Smith AM is an Australian ecologist who has studied a range of ecosystems around the world, some natural, others modified for specific human benefits. His book Rain & Shine: A Simple Guide to How Plants Grow was published in 2012 by Connor Court.
DAVID F. SMITH
In Praise of Exotic Species
Which plants and animals should Australians favour? This question causes endless debate. Many people vehemently assert that native plants growing on a site are the best on that site for every purpose. One leader of the pack has been Tim Flannery, who in his 2002 Australia Day address suggested introduced plants like roses, plane trees and lawn grasses were a blot on the Australian landscape and should be discouraged. He suggested that among the 25,000 species of plants in Australia we should find superior replacements. Would we? How long and arduous would breeding and selection be? Such people say, “Surely it’s simple logic: they have evolved there and are therefore best adapted.” But did they evolve there? And even if they did, does that make them the best?
The answer to the question has important implications, not least in the use of land to feed the growing population of the world, both in growing food crops and in providing protein through grazing animals. The answer is particularly interesting in Australia, where accepting plants from elsewhere has been a hugely important part of wealth creation.
Can we define a native plant?
Again, many say, surely it is easy. It has long grown here, belongs here, must have evolved to suit this environment. But in fact we have no proof that even plants now dominant have evolved here. Perhaps the best we can do to define a native plant is, “A native is a plant that was here before us.” (And one can include Aborigines and Europeans under “us”, as in terms of the evolutionary time-scale of plants we arrived at much the same time.) That is all we really know for sure. It may have evolved here, but it may have arrived at any time in the more than 100 million years this continental land mass has existed. In any case, is presence at a site proof of superior adaptation over all other possible plants that have not been tested? There are many examples of the superb adaptation of known recent arrivals: all Australia’s jacaranda trees apparently developed from one introduction, and the coconut palm frequently and randomly floats ashore, having grown on many distant islands.
The same natives-are-best logic is used in places like India and South Africa to damn our eucalypts, which flourish there: it is alleged they use too much water, that native trees would be better. In fact, to give a certain amount of shade, that is, by having a certain leaf area, there is little difference between plants in their total water needs. A different issue is that some are more drought resistant. Because of their ability to survive and thrive, and provide good wood and honey, eucalypts could be considered one of Australia’s great gifts to the world.
Evolution of plants has gone on for millions of years, before the land we call Australia had the present shape and the current climate prevailed. (Yes, there has been much climate change.) Though we can only speculate, it is generally agreed that there was long ago a huge southern landmass comprising what is now Antarctica, South America, Africa, Australia and some nearby islands. A large mass apparently broke clear—we call this Gondwana—which in turn broke up to form New Zealand, New Caledonia, New Guinea and Australia. Plants were carried with these pieces and continued evolving, impacted by climatic and other influences, such as animal grazing, insect attacks and microbial infestations. Breakdown of rocks with their varying composition of minerals formed characteristic soils, and because plants vary in their tolerance of levels of minerals, soil type, too, influenced this evolution.
It is hard to know how much long-distance migration of plants there has been over time. Plants have a huge variety of dispersal mechanisms: wings that enable seeds to blow on the wind, awns that cling to fur and wool (and, once it was invented, clothing), flotation in running water. Having seeds attractive as food for an animal or bird, yet having some resistance to digestion, is a great asset, especially if the creature involved is a wanderer that can pass the surviving seeds some distance from the parent plant along with some manure to give any new seedlings a good start.
The classification of plants has been formalised over the last few hundred years, especially since the detailed studies of Linnaeus, who developed the idea of classification and descriptive Latin names.
Plants have evolved into many different forms. Early plants did not have flowers: we classify that lot as gymnosperms. The more recent evolution of plants to have flowers has given us the group we call angiosperms, the male parts forming pollen that is spread by wind and insects. Most are cross-pollinated, so each new plant is a new genetic entity—the basis of adaptability and evolution—but some are self-pollinating, so very stable in genetic composition, though less adaptable. Many plants, once seed is set, form seed covers and store a nice amount of carbohydrate and protein to give the new germinating plant a good start—and also become one of our great sources of food. Plants have been classified into orders, which have been divided into families, then genera, and finally individuals grouped into species.
The exact grouping for a species depends on the judgment of humans, some of whom have been eager to divide plants and so claim fame by naming new species. Properly establishing that a new species has been found demands careful exploration of existing collections in herbaria and the study of any closely related material, which all takes hours of tedious work. A well-known international television broadcaster once reported on a day’s visit to Gippsland: “We found five new species!” He may be a good broadcaster, but he is not a good botanist.
Another division of plants is into annuals and perennials. Annual plants begin from a new seed each season, while with perennials, even if seed is set, the old plant lives on for some years, even centuries. The perennial plant may survive by becoming dormant in adverse conditions, with buds bursting out into new growth when conditions are right for growth. Annuals often set huge numbers of seeds, which may have resistance to germination, such as dormancy or hard-seededness—thus surviving adverse conditions and often germinating in huge numbers when conditions are right for growth.
Many of our crop plants are annuals, but it has become fashionable to call for the use of perennials, arguing that there would be a saving of effort such as preparing a seed bed and sowing each year. However, the idea is a delusion. In modern zero-till farming very little effort goes into land preparation and sowing, and both annual and perennial crops would need the same weed control and nutrient additions. The main down-side would be that the farmer would lose the flexibility of changing crops in response to demand, or using new disease-resistant varieties. Further, yields would often be lower: an annual prepares to die by translocating virtually all of the nutrients in its stems and leaves to the seed. A perennial sets stores aside to live on through the adverse season, usually in basal buds and structures on roots, so translocates much less to its seed.
When conditions are right, seeds of annuals germinate, mobilising the reserves packed into the seed and growing rapidly: the capability of annuals such as barley grass to germinate, turn green and build plant tissue is astonishing. Tim Flannery has described such plants as having “weedy” characteristics. Rather, this is simply the result of successful evolution in areas with a long adverse season such as summer drought yet a definite seasonal winter rainfall ideal for growth. These are the characteristics of successful adaptors.
Gondwana has a special place in the evolution of our vegetation. It is often said that Australia, one of the largest of the Gondwanan land-masses, has ended up with a unique flora. However, it depends on definition. All of the main plant families occurring here are well represented on other continents, so most plants have cousins elsewhere, some far away. For example, wallaby grass, so widespread in Australia, is a close relative of the oats long used as a grain crop in Central Asia and Europe. The ancestors of one or the other must have moved or been carried.
How unique then is our native flora? The assertion is based on the fact that two well-known groups—the very large and prominent genus Eucalyptus (with some recently given other names) and the large-leafed subgroup of Acacia occur only in Australia or nearby (a few Eucalyptus occur in New Guinea). However, a comprehensive analysis of world distributions brings the assertion into question.
Northern Australian plants are closely related to the Indian and Malayan flora, so are probably immigrants from South-East Asia. The botanist Joseph Hooker suggested several eras of arrivals in the Cretaceous period (65 to 135 million years ago) as a result of land movements and sea level changes.
Of course, the genetic material present at break-up of Gondwana has powerfully influenced the flora on each land mass. Thus, though there were some brilliant-flowering legumes, like Sturt’s desert pea, on poor soils in dry areas, in the moister areas there was a dearth of legumes capable of prolific fixation of atmospheric nitrogen. Further, some of the soils were low in plant nutrients, especially the phosphorus that is so vital for energy transactions in living things. This was accentuated by the great age of the continent which meant much phosphorus had leached or washed into the seas, whence it had moved up through the food chain and been accumulated by bird droppings on islands such as Ocean Island. Once our farmers understood the need for more phosphorus, much of this has been returned as fertiliser. However, the lack of both phosphorus and nitrogen often meant poor efficiency of capture of solar energy, which is so abundant on this continent, and this made for some ecosystems with very stunted native vegetation. Paradoxically, some of the rather stunted plants evolved brilliantly coloured flowers—clothing the sand-plains of Western Australia and the Big and Little Deserts of Victoria in beautiful displays. However, low phosphorus and nitrogen limit the sequestration of carbon.
The flora of any place is, then, something of an accident of history, beginning with the genetic material present at the formation of the land mass, usually a break-up, or in the case of a large land mass, when barriers like deserts developed or there were depositions by lava flows. The flora present at European settlement of Australia was the result of such evolution, affected by those soils and that climate, occasionally with later arrivals. Thus it was the best that could evolve from the plant material that happened to be present in that set of conditions—but not necessarily the best that could ever be. Defining best is, of course, a complex matter. We humans usually relate it to utility: bearing much fruit or grain, providing a lot of leafy herbage for our animals, having brightly coloured flowers, producing timber.
New areas of soil such as lava flows as they cool and deposits from floods or even wind deposition of massive sandhills and plains can make an interesting study. What grows on these “new start” locations is just an accident of proximity. A good example was the treeless grasslands of the lava flows of western Victoria. Grasses spread quickly but eucalypts suitable to the soil type were slow to arrive: it took European settlers to find and widely plant a suitable eucalypt, the sugar gum, from western South Australia, especially Kangaroo Island and the Eyre Peninsula. They could almost have been accused of using an overseas species!
Sites having similar climatic conditions—for southern Australia this meant the Mediterranean regions—were the most likely providers of new plants. These regions were remote from Australia until bigger ships enabled long journeys. If the ships carried livestock and fodder, these lands were a source. A next important factor was modification of the soil or climate: cultivation, adding fertilisers, irrigation—all recent activities of humans.
The “arrival” of humans on Earth (if we can assume there was a defining moment) was geologically recent, coming after plants had been evolving for a very, very long time. Soon humans left their mark. (In a sense we define them as humans from the time when they began to leave their mark.)
Imagine a person—we’ll say “she” because women did so much of the food gathering—walks from the cave some distance over the hill and down along a valley, exploring. She sees a plant with a lot of fruit, picks some fruit, and it tastes better than any other she has tasted.
Being a thinker, she keeps the seed and uses her digging stick to bury it in the soil nearer to the cave, thus in due course a shorter walk to pick fruit. She is pleased, but the new plant is shaded by an existing plant, so she breaks off part of the offending plant. She and her family deposit faeces and urine on the earth near the plants, having noticed that this seems beneficial. Here is the beginning of plant selection, soil cultivation, plant propagation, pruning, and using manures, all in the cause of better value to humans. This is the beginning of agriculture, altering ecosystems, possibly grouping of plants.
These ecosystems—managed for human utility, including the one to which our gatherer had added new plants—could easily be described as both natural and agricultural. Was it unnatural of these humans to use their minds to think, and their hands to do things, to grow better plants, and have more varied food? There could also be benefit, or loss, when her new plant at its new site flowered and crossed with different plants, giving slightly different genotypes and possibly less seed down the valley. The possibility of variation does not alter the fact that her behaviour was natural.
Do we need to put a geographic limit on how far the “new” plant can be carried and the spread still be called natural? Some people want to put a limit on genetic difference in breeding too. And must all movement of pollen or seeds without human handling be natural if by birds, but not if carried by humans? An interesting example is the long trade routes like the Silk Road from China to the Middle East—passing through areas of great evolutionary significance—with plant material carried, both deliberately and accidentally. Someone would soon have done deliberate trade in new varieties known to have better taste or resistance to attack by insects or fungi.
In the last 500 years agriculture has become more and more complex. Especially in the last fifty years farming has become much more precise—exact measurement, awareness of plant nutrition, of chemical dangers as well as benefit, and so on—based on a profession called agricultural science. Modern humanity does not select a plant only from down along the valley—the search is global. New plants are “composed” using special plant breeding techniques, reaching to remote biological entities on other branches of the evolutionary chain for special genes. A great example is the merging of bacterial genes into cotton to give resistance to boll weevils, saving a huge amount of chemical application. Finally, while a single plant is just a plant, sensible arrangement into a cluster is an orchard or a crop. Modern humans use machinery to sow many seeds of the selected new plant to maximise yield per unit area of land or of human or mechanical energy.
Soils vary enormously. Eroding streams tend to gather material from a range of rocks and, downstream, may lay down deep soil, creating fertile land. Paradoxically, severe erosion upstream means deposition of large areas of soil downstream. We define fertile as having an adequate supply of the sixteen elements needed for plant growth, such that it will supply several crops in succession. Slash-and-burn farming, as practised in times past in parts of Central and South America, was a way of releasing plant nutrients and getting several crops, then moving on. In other places, from early times efforts were made to return all possible residues—plant remains, animal manure, even human excrement—to the growing site, thus deferring the impact of deficiencies in the soil. Where the soil was very fertile, or where there were additions through soil deposition (as along river valleys and flood plains) or further breakdown of rocks, yields might be maintained over many years.
Progress in maintaining or even increasing yield with successive crops came with the scientific revolution: precise definition of deficiencies, amounts needed of any deficient elements to be added to maximise yield, and the timing of any application. Included in the models to define these things is information on soil moisture availability.
In most commercial farming every possible thing is known precisely, while organic production is still to a large extent “hit and hope”. The yield of crops under organic systems is, as a result, generally about 70 per cent of that attained with precision systems. Further, some of the vegetable matter added as organic “fertiliser” comes from land dedicated to the purpose of growing that vegetable matter, so the overall effect is to make organic land use about 50 per cent efficient. Given that the amount of energy input is related to area sown, rather than ultimate yield, about double the energy per unit of production is involved, so there is a serious question of higher greenhouse gas emissions per unit of food under organic systems. Fortunately, despite its publicity, organic agriculture is a small player in world food production, variously estimated as around 1 per cent of the farmed area in the developed countries of the planet. Every increase in its area reduces the planet’s ability to feed a growing population.
Australia eventually received some human migrants—two groups in just about the same geological moment: the Aborigines from the near north about 50,000 years ago, and people from Europe 225 years ago. At the time of Aboriginal arrival, their society was more developed than that of most other parts of the world, including Europe, but by the time the Europeans came to Australia there had been huge developments in other societies. Agricultural ecosystem management and plant introduction had become firmly established and accepted.
The Aborigines eventually spread to the bounds of the continent, but the extent to which they interacted with the ecosystems is a moot point. They would have dwelt (and hunted and gathered) mainly in the locations where water supplies were more assured and game accessible. They certainly did impact on ecosystems and developed management systems, for instance burning grasslands to improve visibility and expose regrowth. This was especially so near sites where kangaroos and other game came to water; managing kangaroos for meat this way was what we would probably now call a sustainable system. They must have recognised the part burning played in preventing grasslands from evolving into woodland or forest, at the same time increasing green shoot production from grasslands after rain, though they might not have defined the other benefit—more rapid recycling of plant nutrients. These activities were classic cases of management of natural ecosystems, with real benefits yet also down-sides such as reduced soil organic matter. In a more “knowing” system both benefits and down-sides would be quantified and suitable amelioration devised.
As is the case with hunter-gatherers generally, the point of balance between the food needs of the total community and production is difficult to determine, especially over vast areas of land and variable seasons. Authors such as Bill Gammage claim that the Aborigines exercised management control over every bit of the land mass of Australia, but with their modest population this would have been an impractical and unrewarding task. Gammage does suggest that care did not always mean action: people might leave land alone for long periods. In long droughts there could well have been catastrophic ecological events, and much human misery. That the Aborigines did not develop a form of agriculture in some of the well-watered parts with fertile soil is something of a mystery. There was apparently some local plant cropping near Geraldton, and fish trapping was organised in western Victoria.
Gammage also makes some curious generalisations about the soils, suggesting that under Aboriginal management they were nice and spongy but lost this quality after European settlement, leading to vegetation changes. No doubt there were changes in the areas that felt the impact of the plough—less than 10 per cent—and some effects of the tread of sheep, though the close grazing of the rabbit (with the softest tread of all) possibly had greater impact on vegetation. Locally, the influence of frequent firing probably had the greatest impact—on the down-side destroying organic material and on the up-side releasing and circulating nutrients.
Gammage is one of a number of authors who have pointed out that Captain Cook described some landscapes as open woodland, that is, scattered trees over grass, yet, as far as could be judged viewing from the same spots today, these areas are shrubby instead of grassy. Gammage asserts that “typically, grass grew on good soil, trees on poor”, a gross over-simplification. There are many factors involved: for instance, the western plains of Victoria were grassland because their formation from lava had been so recent that suitable trees had not yet arrived or thrived on new soils. Grasses have much faster dispersal.
There is plenty of evidence of plant migration, ecosystem regrouping, producing different vegetation over time. Take the case referred to earlier of the sugar gum, one of the few eucalypts which readily adapted to the basalt plains of Victoria. At the time of European settlement it occurred on Kangaroo Island, on the Eyre Peninsula, and in the Flinders Ranges. We can speculate whether this was the residue of a very wide occurrence or the result of slow dispersal from a small area of evolution. Either way, adaptability to certain soil characteristics is likely to have been a factor.
Commentators on vegetation need to have a background in plant ecology, asking why, when and where, not simply describing what is there now. The Adelaide School of Botany, led in the 1940s and 1950s by Professor Joseph Wood, produced some great analysts in ecology: Crocker, Specht, Coaldrake. The Soils Division of the CSIR was established in Adelaide, and so the powerful ecological influence of soil variations was studied there.
Much of their work explained ecological change without the European influence. Crocker reached back into geological history in a seminal paper, Post-Miocene Climate and its Effect on Pedo-Genesis in Southern Australia. An extraordinary coincidence was the arrival from England, hoping to be cured of tuberculosis, of John McConnell Black. Already a successful journalist and author, and using his inheritance from his sister Mrs D’Oyly Carte of Gilbert and Sullivan fame, he produced Black’s Flora of South Australia, a wonderful aid to plant studies which ensured that the classification of plants was standardised, an essential for good ecological work.
The Europeans came from what was not only an agriculturally developed economy but by that time a rapidly evolving society, with a surge in the use of science and technology. Making a new Europe may have been in their minds to some extent when choosing food crops—logically, familiar fruits and grains. In due course, plants such as wheat were “localised” by selection and, ultimately, breeding. Settlement inland by the Europeans emulated much of the Aboriginal system: grazing animals (sheep) on the kangaroo and wallaby grasslands for meat and skins or wool. They saw fire as a hazard to their fixed structures, so, in contrast to the Aborigines, they avoided widespread burning, unknowingly increasing shrub and tree growth and reducing grassland. Despite what some apologists say, less burning of plant debris would have allowed more organic matter to accumulate and so benefited the structure of the soils—they were likely to become spongier. Later, where land was to be cropped, burning, axing and grubbing were used to eliminate regrowth of native shrub and tree species, and considerable areas of ecosystems, some recent, some long established, were cleared and cropped; these soils then carried lower organic matter and were less spongy.
The land left as grasslands at first seemed moderately productive, often carrying one sheep per hectare. In due course, legumes with a capability to copiously fix atmospheric nitrogen arrived from elsewhere and, given the right fertilisers, this meant substantial increases in everything: herbage production, numbers of grazing animals (up to ten sheep per hectare), soil organic matter, carbon fixation, water infiltration—and prosperity. In passing, we should also note that this was a grand sequestration of carbon!
Many people consider any partial or complete replacement of existing vegetation to be against nature. However, plants developed elsewhere have often flourished, becoming more useful than natives, either complementing them, as with grasses, or replacing them. And in any case, the environment was changed by new activities, especially the adding of fertilisers and the control of grazing by fencing.
Some people ask: Why use introduced animals such as sheep and cattle, rather than the ones that had evolved here and were well suited, such as kangaroos? The answer lies with both groups of animals. After centuries of breeding, selection and training, sheep fitted well with the life and needs of humans: as well as being easy to manage, they provided meat and skins and even milk. Kangaroos were incredibly advanced in evolution: though breeding only singly, the adaptation of one young at foot, another in the pouch and another fertilised ovum in the womb allowed rapid multiplication after harsh times—and add in the re-absorption of the foetus if the new season did not deliver its promise. Yet their locomotion by bouncing rather than running, which enabled movement over long distances and leaping over obstacles, made them difficult to muster and contain. In contrast, docile sheep were easy to contain and sort—and kill.
A twelfth-century monk once wrote that God created sheep first of all animals because a sheep flock would have enabled humans to survive by providing most of their needs: skins for tents, meat for food, milk to drink and make cheese, fat for candles, wool for clothing, ropes and thread, bones for tools and needles, horns for handles, skins for holding water and wine, and lambs for pets.
Regions of Mediterranean climate have a long history of human activity, and the climate itself is well understood: a hot dry summer and a usually fairly moist winter, tapering off into desert. It is not widely understood that Australia provides a high proportion of the land on the planet so described. Around the Mediterranean itself, much of that latitude is sea; in North and South America the equivalent zones are limited by high altitudes; and South Africa has little land at that latitude. By contrast, in Australia there is a wide sweep across Western Australia and quite an area in South Australia, while much of Victoria, though not exactly conforming, has strong elements, such as a definite winter season and considerable rain. With these remnants scattered around the globe, and oceans and tropical climates in between, there was little likelihood of natural spread of plants from one to the other—even bird carriage over such a distance was unlikely. However, once ships began carrying livestock, many called into Mediterranean ports to load fodder, which often contained seeds of plants from the lands around the Mediterranean Sea. Some of these plants made landfall across southern Australia and found it climatically suitable for their growth. One notable arrival was subterranean (sub) clover, which had evolved on acid soils. Such soils were not common in southern Europe, but southern Australia had vast areas of acid soils, so there was huge potential for spread.
Now that we understand the physiology of these plants, we recognise some fascinating quirks of plant ecology. Once germinated, the clover plants need to “accumulate” some cold before the plant stops producing leaves and flowering is initiated. This requirement is met more quickly in Europe, where very cold winds arrive from the Arctic over land, than in Australia, where milder southerlies arrive over sea from the Antarctic region. Thus sub clover in Australia has a longer leafy period, so it is a prolific herbage plant in the fields rather than the stunted roadside weed it is in Europe, fixing proportionately much more nitrogen. Further, the seed is set into a spiny burr, so with the widespread grazing of sheep (animals that lie down a lot, have woolly coats and often rub on trees), seeds spread readily. Widely sown in the early to mid-1900s, this plant has fixed between $5 billion and $10 billion worth of nitrogen each year depending on the season: its arrival was arguably one of the most important single economic events in the history of Australia. This classic case puts to rest forever any assertion that plants evolved elsewhere are not suitable to Australian conditions. Many other new arrivals have also proved to be useful.
Prosperous farming industries were built on a readiness to modify natural ecosystems using new plants (especially legumes to fix nitrogen), defining and remedying any nutrient deficiencies and deliberately controlling animal grazing. This maximised their interception of solar energy and incorporation of carbohydrates into the pool of biological carbon. Growing well-chosen plants was repeatable, comparing favourably with mining, which can be done only once.
Changes in an ecosystem
Some people believe change equals degradation; some say change indicates the ecosystem was unstable—but surely it might just be that it was highly adaptable.
It is fashionable to assert that Australia is too dry and the soils too old for agriculture, implying that settlers should never have tried to change ecosystems to give fruitful outcomes. In fact selection and breeding of cultivars, and agronomic processes to conserve water, have enabled productive ecosystems—crops—to be established over vast areas of semi-arid country (as I described in “The Greening of the Arid Boundary” in Quadrant, July-August 2011). And a small percentage of Australia isn’t dry; because it is a big continent, even that small percentage is quite a large area. Furthermore, because of its geography—a wide east-west sweep facing the South Pole over ocean—the wide coastal fringe, though receiving moderate rainfall, has a better spread and more reliability than comparable areas elsewhere in the world. And not all our soils are poor—the river valleys and flood plains of the Great Dividing Range and the volcanic soils make up quite a large area. In any case, improving poor soils is simply a challenge for scientists and farmers.
Those who leap to the assertion, popularised by Tim Flannery in his book The Future Eaters, that to change an ecosystem equates with its destruction have not stopped to analyse human history. They do not recognise that human interaction with ecosystems—as our prehistoric woman did—is a natural way of adapting, aimed at, and frequently achieving, improvement for human utility.
Allied in the minds of these critics is the view that the move from hunter-gathering to the beginning of sedentary lives, which came to be called farming, was the beginning of the end for the planet. Such people see farming as always exploitative, never allowing the land to heal as the allegedly gentler hunter-gatherers did. In fact, modern farming can be restorative, even productivity-building. Logically, although farming was initially grasped as simply a more certain way of ensuring an adequate food supply, commercial farming with surplus for sale has always intrinsically had the general objective of at least maintaining, and hopefully increasing, yields. It pays! The attention given to sustainability in the last few decades has been helpful in reinforcing this. At the same time, the search for fixed systems is a delusion—the readiness to note adverse elements and have solutions adopted must go on and on.
The story of the cropping areas of the southern part of Australia, where very little of the land is less productive (or holds less carbon) than when Europeans arrived, illustrates this. Most of it is now much more productive, and holds more carbon. Turning the farmers into legume lovers has meant much more nitrogen being available to combine with carbon and phosphorus in the soil—sequestration of carbon. (What if urban people became legume lovers too and insisted on a legume—peas or beans—as their preferred green vegetable over others that need artificial fertiliser nitrogen? What a contribution this could make!)
Thus under farming there have been huge changes, much of this the inexorable march of evolution, albeit speeded up by these intelligent, technically able mammals called humans. This has sometimes led to complete replacement of the “natural” ecosystems, resulting in huge multiples of productivity.
This continuous search for improvement is much more challenging than simply aiming for no change. Imagine how much simpler it was for the Aborigines—and how powerless they were. They saw the land and its coverage as a simple, fixed gift of their gods. There was little or no new arrival of plants or animals. Climate was a given, with no forecasts and no measures such as food stores or grazing management of animals to handle adverse conditions. Soils were accepted for what they were, and there were no agronomists with new technology. There were just a few kangaroos, and population growth was low, largely through higher and earlier mortality. Perhaps, after thousands of years, they were close to sustainability. A European settler arriving and choosing to farm faced awesome challenges, but eventually gained increasingly powerful tools, better and better science, greater resources—and now a responsibility to help feed the world’s growing population. And, most of all, the settler’s successors reject any notion that native is best, accessing instead the worldwide range of germplasm.
There are some dilemmas. In places like Europe and Australia a key objective in managing swards of pasture has been to maximise growth and animal production and, in recent times, the sequestration of carbon. The Serengeti ecosystem of Tanzania, in its natural state, carries more than a million wildebeest (gnu). The herds eat out certain areas, not pushing on until there is just about complete baring of the soil. In Australia we have known from early research that grazing to leave a certain amount of leaf can markedly increase production. This would be possible in the Serengeti if areas were fenced. It would be surprising if the existing pasture species could not be improved by breeding or selection, or new plants introduced from elsewhere, especially legumes that fix more nitrogen and produce much more biomass. Perhaps there could be some fodder conservation, even if only to allow some grazing areas to be set aside and rested. If the region became a production system rather than merely a tourist curiosity, livestock production might well be increased four- or five-fold, and animals could be sold for meat, even export. Of course, if nutrients were exported, fertiliser application to remedy any limiting factors would be necessary. Given the same impact on productivity of such measures in native grasslands as has occurred in Australia, the people of some parts of Africa might no longer need aid.
Conversely (and perhaps a little perversely!) let us imagine that the Europeans settling in Australia had heeded an earlier “Flannery” and eschewed the introduction of species from elsewhere, had not considered the possibility of soil improvement or fertiliser application, and had allowed the animals to work out their own version of grazing. The residents of Australia and their descendants might well be grateful recipients of aid. Perhaps we were fortunate that for the first Europeans in Australia there was no “tradition”, but rather, an interest in new species of garden and crop plants, in identifying limiting factors for plant growth and remedying them, and a recognition of the need for innovation and change.
Sadly, so many of the good-hearted folk who go to help the Africans, while allowing some change, constrain it in the direction of “native is better”, traditional methods, no chemical fertilisers, locking the people into food shortages, even starvation. The Australian Soils for Life people suggest we learn grazing management from the casual, low-input operations in East Africa, on the grounds that they are working with nature—thereby implying that Australian farmers aren’t!
Land managers must focus their human abilities and strengths on intelligently and responsibly making a future, recognising the human and biological resources of the whole planet, constantly evolving new and more productive ecosystems. Wherever we are working to sustainably improve the productivity of ecosystems we must not be blinded by prejudice. We should accept the full range of plant material, evaluate new things and efficiently incorporate them into management systems. The more we make agricultural ecosystems productive, the more space we are able to set aside as parks and reserves.
We must use the full range of our tools and resources: we must be future makers.
Dr David F. Smith AM is an Australian ecologist who has studied a range of ecosystems around the world, some natural, others modified for specific human benefits. His book Rain & Shine: A Simple Guide to How Plants Grow was published in 2012 by Connor Court.
Friday, January 03, 2014
Climate uncertainties
Quadrant Jan 2014
The Fundamental Uncertainties of Climate Change
The World Meteorological Organisation of the United Nations took its first steps towards establishing the World Climate Program in the early 1970s. Among other things it held a conference in Stockholm to define the main scientific problems to be solved before reliable climate forecasting could be possible. The conference defined quite a number, but focused on just two. The first concerned an inability to simulate the amount and character of clouds in the atmosphere. Clouds are important because they govern the balance between solar heating and infrared cooling of the planet, and thereby are a control of Earth’s temperature. The second concerned an inability to forecast the behaviour of oceans. Oceans are important because they are the main reservoirs of heat in the climate system. They have internal, more-or-less random, fluctuations on all sorts of time-scales ranging from years through to centuries. These fluctuations cause changes in ocean surface temperature that in turn affect Earth’s overall climate.
The situation hasn’t changed much in the decades since. Many of the problems of simulating the behaviour of clouds and oceans are still there (along with lots of other problems of lesser moment) and for many of the same reasons. Perhaps the most significant is that climate models must do their calculations at each point of an imaginary grid of points spread evenly around the world at various heights in the atmosphere and depths in the ocean. The calculations are done every hour or so of model time as the model steps forward into its theoretical future. Problems arise because practical constraints on the size of computers ensure that the horizontal distance between model grid-points may be as much as a degree or two of latitude or longitude—that is to say, a distance of many tens of kilometres.
That sort of distance is much larger than the size of a typical piece of cloud. As a consequence, simulation of clouds requires a fair amount of guesswork as to what might be a suitable average of whatever is going on between the grid-points of the model. Even if experimental observations suggest that the models get the averages roughly right for a short-term forecast, there is no guarantee they will get them right for atmospheric conditions several decades into the future. Among other problems, small errors in the numerical modelling of complex processes have a nasty habit of accumulating with time.
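The degrees-to-kilometres comparison behind that point is simple geometry. The sketch below assumes a nominal mid-latitude of 45 degrees for the east-west spacing; it is an illustration of scale, not a description of any particular model’s grid.

```python
import math

# Convert a model grid spacing from degrees to kilometres to show why a one-to-two
# degree grid means cells many tens of kilometres across.

EARTH_RADIUS_KM = 6371.0
KM_PER_DEG_LAT = math.pi * EARTH_RADIUS_KM / 180.0   # about 111 km per degree of latitude

def grid_spacing_km(resolution_deg, latitude_deg=45.0):
    """Approximate north-south and east-west spacing for a grid of the given resolution."""
    north_south = resolution_deg * KM_PER_DEG_LAT
    east_west = north_south * math.cos(math.radians(latitude_deg))  # shrinks towards the poles
    return north_south, east_west

for res in (1.0, 2.0):
    ns, ew = grid_spacing_km(res)
    print(f"{res:.0f}-degree grid: ~{ns:.0f} km north-south, ~{ew:.0f} km east-west at 45 degrees")

# Output: roughly 111 x 79 km for one degree and 222 x 157 km for two degrees,
# whereas an individual convective cloud is typically well under 10 km across.
```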
Again because of this grid-point business, oceanic fluctuations and eddies smaller than the distance between the grid-points of a model are unknown to that model. This would not be a problem except that eddies in turbulent fluids can grow larger and larger. A small random eddy in the real ocean can grow and appear out of nowhere as far as a forecasting model is concerned, and make a dog’s breakfast of the forecast from that time on.
All of the above is background to one of the great mysteries of the climate change issue. Virtually all the scientists directly involved in climate prediction are aware of the enormous problems and uncertainties still associated with their product. How then is it that those of them involved in the latest report of the Intergovernmental Panel on Climate Change (IPCC) can put their hands on their hearts and maintain there is a 95 per cent probability that human emissions of carbon dioxide have caused most of the global warming that has occurred over the last several decades?
Bear in mind that the representation of clouds in climate models (and of water vapour, which is intimately involved with cloud formation) is such as to amplify the forecast warming from increasing atmospheric carbon dioxide—on average over most of the models—by a factor of about three. In other words, two-thirds of the forecast rise in temperature derives from this particular model characteristic. Despite what the models are telling us—and perhaps because it is models that are telling us—no scientist close to the problem and in his right mind, when asked the specific question, would say that he is 95 per cent sure that the effect of clouds is to amplify rather than to reduce the warming effect of increasing carbon dioxide. If he is not sure that clouds amplify global warming, he cannot be sure that most of the global warming is a result of increasing carbon dioxide.
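The two-thirds figure follows directly from the amplification factor. As a minimal illustration, taking the article’s “factor of about three” as a given input rather than something derived from model output:

```python
# If feedbacks (clouds and water vapour) amplify the direct CO2-induced warming by a
# factor of about three, the share of total forecast warming attributable to those
# feedbacks is (amplified - direct) / amplified.

direct_warming = 1.0                 # warming from CO2 alone, normalised
amplification_factor = 3.0           # the article's "factor of about three"
total_forecast_warming = direct_warming * amplification_factor

feedback_share = (total_forecast_warming - direct_warming) / total_forecast_warming
print(f"Share of forecast warming attributed to feedbacks: {feedback_share:.2f}")
# Prints 0.67, i.e. the two-thirds quoted in the text.
```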
Bear in mind too that no scientist close to the problem and in his right mind, when asked the specific question, would say there is only a very small possibility (that is, less than 5 per cent) that internal ocean behaviour could be a major cause of the warming over the past half-century. He would be particularly careful not to make such a statement now that there has been no significant warming over the most recent fifteen or so years. In the mad scurry to find reasons for the pause, and to find reasons for an obvious failure of the models to simulate the pause, suddenly we are hearing that perhaps the heat of global warming is being “hidden” in the deep ocean. In other words we are being told that some internal oceanic fluctuation may have reduced the upward trend in global temperature. It is therefore more than a little strange that we are not hearing from the IPCC (or at any rate not hearing very loudly) that some natural internal fluctuation of the system may have given rise to most of the earlier upward trend.
In the light of all this, we have at least to consider the possibility that the scientific establishment behind the global warming issue has been drawn into the trap of seriously overstating the climate problem—or, what is much the same thing, of seriously understating the uncertainties associated with the climate problem—in its effort to promote the cause. It is a particularly nasty trap in the context of science, because it risks destroying, perhaps for centuries to come, the unique and hard-won reputation for honesty which is the basis of society’s respect for scientific endeavour. Trading reputational capital for short-term political gain isn’t the most sensible way of going about things.
The trap was set in the late 1970s or thereabouts when the environmental movement first realised that doing something about global warming would play to quite a number of its social agendas. At much the same time, it became accepted wisdom around the corridors of power that government-funded scientists (that is, most scientists) should be required to obtain a goodly fraction of their funds and salaries from external sources—external anyway to their own particular organisation.
The scientists in environmental research laboratories, since they are not normally linked to any particular private industry, were forced to seek funds from other government departments. In turn this forced them to accept the need for advocacy and for the manipulation of public opinion. For that sort of activity, an arm’s-length association with the environmental movement would be a union made in heaven. Among other things it would provide a means by which scientists could distance themselves from responsibility for any public overstatement of the significance of their particular research problem.
The trap was partially sprung in climate research when a number of the relevant scientists began to enjoy the advocacy business. The enjoyment was based on a considerable increase in funding and employment opportunity. The increase was not so much on the hard-science side of things but rather in the emerging fringe institutes and organisations devoted, at least in part, to selling the message of climatic doom. A new and rewarding research lifestyle emerged which involved the giving of advice to all types and levels of government, the broadcasting of unchallengeable opinion to the general public, and easy justification for attendance at international conferences—this last in some luxury by normal scientific experience, and at a frequency previously unheard of.
Somewhere along the line it came to be believed by many of the public, and indeed by many of the scientists themselves, that climate researchers were the equivalent of knights on white steeds fighting a great battle against the forces of evil—evil, that is, in the shape of “big oil” and its supposedly unlimited money. The delusion was more than a little attractive.
The trap was fully sprung when many of the world’s major national academies of science (such as the Royal Society in the UK, the National Academy of Sciences in the USA and the Australian Academy of Science) persuaded themselves to issue reports giving support to the conclusions of the IPCC. The reports were touted as national assessments that were supposedly independent of the IPCC and of each other, but of necessity were compiled with the assistance of, and in some cases at the behest of, many of the scientists involved in the IPCC international machinations. In effect, the academies, which are the most prestigious of the institutions of science, formally nailed their colours to the mast of the politically correct.
Since that time three or four years ago, there has been no comfortable way for the scientific community to raise the spectre of serious uncertainty about the forecasts of climatic disaster. It can no longer use the environmental movement as a scapegoat if it should turn out that the threat of global warming has no real substance. It can no longer escape prime responsibility if it should turn out in the end that doing something in the name of mitigation of global warming is the costliest scientific mistake ever visited on humanity. The current redirection of global funds in the name of climate change is of the order of a billion dollars a day. And in the future, to quote US Senator Everett Dirksen, “a billion here and a billion there, and pretty soon we’ll be talking about real money”.
At the same time, the average man in the street, a sensible chap who by now can smell the signs of an oversold environmental campaign from miles away, is beginning to suspect that it is politics rather than science which is driving the issue.
Scientists—most scientists anyway—may be a bit naive, but they are not generally wicked, idiotic, or easily suborned either by money or by the politically correct. So whatever might be the enjoyment factor associated with supporting officially accepted wisdom, and whatever might be the constraints applied by the scientific powers-that-be, it is still surprising that the latest IPCC report has been tabled with almost no murmur of discontent from the lower levels of the research establishment. What has happened to the scepticism that is supposedly the lifeblood of scientific inquiry?
The answer probably gets back to the uncertainty of it all. The chances of proving that climate change over the next century will be large enough to be disastrous are virtually nil. For the same reason, the chances of a climate sceptic, or anyone else for that matter, proving the disaster theory to be oversold are also virtually nil. To that extent there is a level playing field for the two sides of the argument. The problem is that climate research necessarily involves enormous resources, and is a game for institutions and organisations. Scepticism is an occupation for individuals. Things being as they are in the climate-change arena, scepticism by an individual within the system can be fairly career-limiting. In any event, most individual scientists have a conscience, and are reluctant to put their heads above the public parapet in order to propound a view of things that may be inherently unprovable.
In short, there is more than enough uncertainty about the forecasting of climate to allow normal human beings to be at least reasonably hopeful that global warming might not be nearly as bad as is currently touted. Climate scientists, and indeed scientists in general, are not so lucky. They have a lot to lose if time should prove them wrong.
Garth Paltridge is an emeritus professor at the University of Tasmania and a fellow of the Australian Academy of Science. He is the author of The Climate Caper: Facts and Fallacies of Global Warming. He was a chief research scientist with the CSIRO Division of Atmospheric Research.
Thursday, January 02, 2014
Bring science to climate policy
MICHAEL ASTEN THE AUSTRALIAN JANUARY 03, 2014 12:00AM
THE Senate inquiry probing the direct action scheme to reduce CO2 emissions provides opportunity for a review not only of the Coalition's scheme but its underlying justification.
Just as the National Broadband Network has been subjected to rigorous review and reframing, we should expect nothing less of the direct action scheme.
The debate thus far is not encouraging. The Climate Institute, an independent body previously supportive of Rudd-Gillard government initiatives, has weighed in to the debate with a submission to the Senate environment and communications references committee.
The institute argues that Australia's targets are not enough and, even if matched by the rest of the world, could bring about global warming disasters including "droughts in southern Australia occurring up to five times more often than present, and the destruction of the Great Barrier Reef". It does not attempt to review the physical science on which such projections are based.
Maurice Newman and Mohamed Nasheed have added to the debate on this page this week, with neither considering the underlying science and both adding more heat than light: Newman speaks of anthropogenic global warming as "scientific delusion" and Nasheed counters with sceptical views as being "antediluvian denialism".
I identify five segments of science - all detailed in peer-reviewed journals in the past three years - which demand scrutiny before we believe current global warming projections.
First, climate sensitivity is generally defined as the change in global temperature produced by a doubling of CO2 in the atmosphere. A range of studies across the past five years indicates this may be below, or significantly below, present values quoted by the UN's Intergovernmental Panel on Climate Change, in which case published modelling projections of future global warming and sea-level rise become overstated.
Second, the disconnect between CO2 increase and global temperature change since 1900 is especially evident in the global warming hiatus of the past 17 years. The mechanisms for this hiatus are not adequately described by consensus science, but there is increasing evidence to suggest natural cyclic change plays a major role in this dichotomy between projections from climate modelling based on anthropogenic global warming theory, and systematic measurement using terrestrial and satellite observation platforms.
Third, cyclic variations in global sea level suggest natural cycles of about 60 and 30 years in length. Such cycles, which deserve considerable further study, suggest a significant fraction of the observed rate of sea-level rise of past decades may be attributable to the upswing of natural cycles. The consequences, if this is proven, for projections of future sea-level rise and associated planning and land-use policy are large.
Fourth, natural cycles in climate change are increasingly evident from precise studies of temperature records imprinted in cave deposits, ice cores, corals and deep-sea sediments. These provide mounting evidence that current global warming is not abnormal in a historical context, and variations are subject to a range of natural cyclic phenomena with periods ranging from about 60 years to millennia.
Finally, causative mechanisms for natural cycles in climate change are an essential complement to the observational data showing those cycles. Studies of the highly complex interactions of solar physics, magnetic fields and cosmic rays are on the cusp of delivering insights into possible mechanisms.
The issue of global sea-level rise has made news in the past week because of some local councils in NSW placing restrictions on frontage properties because of perceived flood and erosion risk. Such restrictions may have major impacts on land values, affecting individual owners, and it is worth considering some of the unknowns affecting such decisions.
A useful baseline number for global sea-level rise is a mean of 1.7mm a year for the period 1900-2010. Faster rates of rise in the order of 3.2 mm a year are recorded for 1993-2010, and 1930-1950. Projections of sea-level rise of 90cm by the year 2100 - used to justify planning restrictions on coastal property - require the rate of rise to increase to 15mm a year.
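Those rates can be sanity-checked with simple arithmetic. In the sketch below the start year (2014) and the assumption of a linear ramp in the rate are illustrative choices of mine, not figures from the article; the point is only to show how far above the observed 1.7-3.2mm a year the implied rates sit.

```python
# Rough check of the rates implied by a 90 cm rise by 2100.

target_rise_mm = 900.0            # 90 cm
start_year, end_year = 2014, 2100
years = end_year - start_year

average_rate = target_rise_mm / years
print(f"Average rate needed over {years} years: {average_rate:.1f} mm/yr")   # ~10.5 mm/yr

# If the rate instead ramps up linearly from the recent satellite-era value,
# total rise = years * (start_rate + end_rate) / 2, so the end-of-century rate is:
start_rate = 3.2                  # mm/yr, the 1993-2010 figure quoted above
end_rate = 2.0 * average_rate - start_rate
print(f"End-of-century rate under a linear ramp: {end_rate:.1f} mm/yr")      # ~17.7 mm/yr
```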
While the evidence for a small underlying trend of an increasing sea level is compelling, faster rates of rise and projections based on these need to be reconsidered in the light of evidence of multi-decadal cyclic variations in global sea level.
By my reading of the relevant literature (such as by Don Chambers of the college of marine science, University of South Florida, and colleagues, and by Ben Hamlington and colleagues at the University of Colorado, Boulder), the cycles arguably account for 30 per cent to 50 per cent of the observed sea-level change of the past 20 years, which leaves a residual rate of global sea-level rise near to the rate observed in the past 200 years.
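Combining those two sets of numbers gives the residual trend directly. The calculation below simply applies the quoted 30 to 50 per cent cyclic share to the satellite-era rate quoted earlier; it adds no data of its own.

```python
# Residual sea-level trend if natural cycles account for 30-50 per cent of the
# observed satellite-era rate of rise.

observed_rate = 3.2   # mm/yr, the 1993-2010 figure quoted earlier in the article

for cyclic_fraction in (0.30, 0.50):
    residual = observed_rate * (1.0 - cyclic_fraction)
    print(f"Cyclic share {cyclic_fraction:.0%}: residual trend ~{residual:.1f} mm/yr")
# Gives roughly 1.6 to 2.2 mm/yr, close to the 1.7 mm/yr long-term mean quoted above.
```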
The past 20 years appear to lie dominantly on the upswing of the natural cycles, thus raising the question of whether an apparent increasing rate of sea-level change in past decades is a new trend or whether it could be another facet of natural cyclic change of earth systems.
More alarmingly, these cyclic variations are not yet built into any published sea-level projections.
While the work of Chambers is mentioned in the IPCC's Fifth Assessment Report, it is not yet incorporated into any of the published modelling scenarios for future sea-level rise. Hamlington's work is more recent than the report.
Local councils that rely on the older IPCC Fourth Assessment Report not only risk injustice to their ratepayers, they are out of date by five years on scientific grounds.
The Abbott government is committed to spending $5 billion annually on its direct action emissions reduction program.
The Senate inquiry would do well to recommend some thousandths of this sum be spent re-examining which projections are credible, which natural changes require mitigation of effect rather than cause, and what cost-benefit parameters apply to programs targeting residual anthropogenically related climate change.
Michael Asten is professor of geophysics at Monash University.
MICHAEL ASTEN THE AUSTRALIAN JANUARY 03, 2014 12:00AM
THE Senate inquiry probing the direct action scheme to reduce CO2 emissions provides opportunity for a review not only of the Coalition's scheme but its underlying justification.
Just as the National Broadband Network has been subjected to rigorous review and reframing, we should expect nothing less of the direct action scheme.
The debate thus far is not encouraging. The Climate Institute, an independent body previously supportive of Rudd-Gillard government initiatives, has weighed in to the debate with a submission to the Senate environment and communications references committee.
The institute argues that Australia's targets are not enough and, even if matched by the rest of the world, could bring about global warming disasters including "droughts in southern Australia occurring up to five times more often than present, and the destruction of the Great Barrier Reef". It does not attempt to review the physical science on which such projections are based.
Maurice Newman and Mohamed Nasheed have added to the debate on this page this week, with neither considering the underlying science and both adding more heat than light: Newman speaks of anthropogenic global warming as "scientific delusion" and Nasheed counters with sceptical views as being "antediluvian denialism".
I identify five segments of science - all detailed in peer-reviewed journals in the past three years - which demand scrutiny before we believe current global warming projections.
First, climate sensitivity is generally defined as the change in global temperature produced by a doubling of CO2 in the atmosphere. A range of studies across the past five years indicates this may be below, or significantly below, present values quoted by the UN's Intergovernmental Panel on Climate Change, in which case published modelling projections of future global warming and sea-level rise become overstated.
Second, the disconnect between CO2 increase and global temperature change since 1900 is especially evident in the global warming hiatus of the past 17 years. The mechanisms for this hiatus are not adequately described by consensus science, but there is increasing evidence to suggest natural cyclic change plays a major role in this dichotomy between projections from climate modelling based on anthropogenic global warming theory, and systematic measurement using terrestrial and satellite observation platforms.
Third, cyclic variations in global sea level suggest natural cycles of about 60 and 30 years in length. Such cycles, which are deserving of considerable further study, suggest a significant fraction of the observed rate of sea-level rise of past decades may be attributable to the upswing of natural cycles. The consequence, if proven, on projections of future sea-level rise and associated planning and land-use policy is large.
Fourth, natural cycles in climate change are increasingly evident from precise studies of temperature records imprinted in cave deposits, ice cores, corals and deep-sea sediments. These provide mounting evidence that current global warming is not abnormal in a historical context, and variations are subject to a range of natural cyclic phenomena with periods ranging from about 60 years to millennia.
Finally, causative mechanisms for natural cycles in climate change are an essential complement to observational data showing natural cycles in climate change. Mechanisms involving highly complex interactions of solar physics, magnetic fields and cosmic rays are on the cusp of delivering insights into possible mechanisms.
The issue of global sea-level rise has made news in the past week because of some local councils in NSW placing restrictions on frontage properties because of perceived flood and erosion risk. Such restrictions may have major impacts on land values, affecting individual owners, and it is worth considering some of the unknowns affecting such decisions.
A useful baseline number for global sea-level rise is a mean of 1.7mm a year for the period 1900-2010. Faster rates of rise in the order of 3.2 mm a year are recorded for 1993-2010, and 1930-1950. Projections of sea-level rise of 90cm by the year 2100 - used to justify planning restrictions on coastal property - require the rate of rise to increase to 15mm a year.
While the evidence for a small underlying trend of an increasing sea level is compelling, faster rates of rise and projections based on these need to be reconsidered in the light of evidence of multi-decadal cyclic variations in global sea level.
By my reading of the relevant literature (such as by Don Chambers of the college of marine science, University of South Florida, and colleagues, and by Ben Hamlington and colleagues at the University of Colorado, Boulder), the cycles arguably account for 30 per cent to 50 per cent of the observed sea-level change of the past 20 years, which leaves a residual rate of global sea-level rise near to the rate observed in the past 200 years.
The past 20 years appear to lie dominantly on the upswing of the natural cycles, thus raising the question of whether an apparent increasing rate of sea-level change in past decades is a new trend or whether it could be another facet of natural cyclic change of earth systems.
More alarmingly, these cyclic variations are not yet built into any published sea-level projections.
While the work of Chambers is mentioned in the IPCC's Fifth Assessment Report, it is not yet incorporated into any of the published modelling scenarios for future sea-level rise. Hamlington's work is more recent than the report.
Local councils that rely on the older IPCC Fourth Assessment Report not only risk injustice to their ratepayers, they are out of date by five years on scientific grounds.
The Abbott government is committed to spending $5 billion annually on its direct action emissions reduction program.
The Senate inquiry would do well to recommend that a few thousandths of this sum be spent re-examining which projections are credible, which natural changes require mitigation of their effects rather than their causes, and what cost-benefit parameters apply to programs targeting the residual, anthropogenically related component of climate change.
Michael Asten is professor of geophysics at Monash University.
Tuesday, October 01, 2013
AGW
September Quadrant Article
ENVIRONMENT
The Cartoonist Who Questioned the Science on Global Warming
John Spooner
A state of scepticism and suspense may amuse a few inquisitive minds. But the practice of superstition is so congenial to the multitude, that if they are forcibly awakened, they still regret the loss of their pleasing vision. Their love of the marvellous and supernatural, their curiosity with regard to future events, and their strong propensity to extend their hopes and fears beyond the limits of the visible world, were the principal causes which favoured the establishment of Polytheism. —Edward Gibbon, The Decline and Fall of the Roman Empire
I was once approached by a friend who is concerned about the danger of human-caused global warming. He asserted that when it comes to scientific issues of major public concern like this, it is “not what you believe but who you believe”. I think he meant that my then hesitant scepticism about global warming was pointless, for as a cartoonist I must be as inadequate to judge the science as he was. For that matter it seems all of us who are untrained in “climate science” have no option but to respect the peer-reviewed authority of the climate science establishment. Of course, as a revered public intellectual, my friend did not see it as his duty to sit on his hands. He felt bound, as many have, to vigorously support the scientific and political authority of the Intergovernmental Panel on Climate Change (IPCC), and regional associates like the CSIRO.
I found my friend’s advice baffling. As anyone familiar with the judicial process knows, the gravest issues of liberty and fortune are often determined by a jury selected from the general public. During some trials, expert witnesses give evidence supporting either side in our adversarial system. The judge must rule which evidence is relevant or admissible, but in the end it is the jury that decides which version of the scientific evidence is to be believed. No one in a civilised society is daunted by this process. We accept the outcome unless a procedural mistake has been made. Often someone goes to jail because one cross-examined scientific expert is believed over another by ordinary jury persons. No big deal.
So what’s the problem? Everyone has the chance to do some reading until they hit the wall of their own ignorance or understanding. Then you ask for help. Acting as the foreman of your own jury, you can ask for more direction or for clarifications to help you follow the logic of the argument. But if in the end you cannot agree with your fellow jurors, then you cannot reach a verdict. It is surely the duty of scientists who wish to influence political events to explain themselves clearly. If they can’t do that to the degree that ordinary people (not to mention many of their equally qualified peers) understand and accept that there really is a dangerous global warming problem, then it is premature for governments to be setting expensive anti-carbon-dioxide measures in place.
But in matters to do with climate change there is no judge except the scientific method, that is, the proposition of a testable hypothesis followed by its testing against factual or experimental challenge. That it fails various empirical tests is, of course, precisely why the advocates of dangerous anthropogenic global warming (AGW) are attracted to the idea of a scientific consensus. And that is where things get difficult for cartoonists, public intellectuals, journalists, politicians, bloggers and the general public. The reason why the phrase “scientific consensus” emerges in this debate is because political activists want to get moving, and if they say that the so-called “scientific consensus” is scary and urgent, then cartoonists and others had better just get out of the way: the science is settled and procrastination is outrageously reckless. The question of whether there is, or can be, such a thing as a useful scientific consensus about a matter like dangerous AGW is a difficult theoretical and practical problem. Cutting through that uncertainty, AGW activists have preferred to use the political process to impose their consensus argument, mainly through the media.
The activist cause perhaps peaked in early 2007 when Al Gore’s film An Inconvenient Truth became an international hit, winning two Academy Awards. This evidence might have seemed compelling to the uninitiated, but in October 2007 the British High Court found the film contained at least nine significant errors of fact and required British schools to refer to these errors when using the film in lessons. Though Professor Bob Carter gave evidence in this case, to date few people in Australia are aware of this severe embarrassment for Mr Gore.
Thanks mainly to media neglect, I never heard of the case at the time, and so, like nearly everyone else, I was initially taken in by the authoritative pronouncements of the former vice-president. However, I remember too that later in 2007, when the ABC broadcast Martin Durkin’s provocative documentary The Great Global Warming Swindle, a lot of people got very upset indeed. How interesting. The science was settled; the debate was over; no more discussion was needed, yet all it took was one contrarian television program to cause an explosive and long-running public sensation.
Any media professional should have been aroused by such an excited censorship campaign, and it stimulated my first cartoon on the subject, which depicted the family television set as medieval stocks with an imprisoned climate sceptic being pelted by the family with their television dinner.
And what of Durkin’s documentary? I know it didn’t get one or two of the fine details of the science exactly right, but then very few documentaries ever do. For example, there has been much criticism that any influence of cosmic rays on clouds will apply only to lower-level clouds—not all clouds, as the program stated. But, as in many good documentaries, Swindle presented some riveting interviews with high-calibre professional scientists. To take one example, we heard from Professor Paul Reiter, chief of the Insects and Infectious Diseases Unit at the Pasteur Institute in Paris. As if I were in a jury, I had the opportunity to see him as he spoke (remember that appeals courts won’t hear an appeal based on a written transcript), and I formed a strong impression that he was telling the truth—that mosquitoes are equally at home in freezing Siberia as they are in the tropics. The same goes for malaria, the disease that they carry. Professor Reiter also seemed credible when he spoke of his difficulties with the IPCC process, describing why he thought that the organisation was dysfunctional. Many of the other interviews with sceptics, including one with the co-founder of Greenpeace, Patrick Moore, had a similar effect on me. Though I hadn’t seen all the evidence, after this film was broadcast the sceptics at the very least had my attention, and no doubt that of many other independent persons. So why did the media at large attack the film, and continue to vigorously promulgate their belief in dangerous AGW?
Up to the time of Swindle’s screening, the role of journalists in the global warming debate had perhaps been unexceptional. But things changed after the screening of the documentary and the outpouring of protest and criticism that it attracted. Then, and just as lobbyists do for matters of economic or social reform, the proselytisers for global warming alarm, who were feeling threatened as never before, got nasty.
Someone came up with the brilliantly clever but insidious idea of using the term “denier” to describe a person who remained agnostic or sceptical about the extent of human contribution to the global warming of the last 100 years. Why “denier”? Because it made the connection in people’s minds to “Holocaust denial”. Unbelievably, this malicious rhetoric henceforward came to be adopted by climate activists, media reporters and politicians up to the level of heads of state, and was applied to distinguished science professors such as Paul Reiter, Richard Lindzen, Freeman Dyson, William Happer and many others.
Holocaust denial describes the heartless and despicable refusal by anti-Semites to acknowledge the historical truth of the Jewish genocide that occurred during the Second World War. If you use the offensive term “denier” in another context, you do so for reasons best known to yourself. You may be calculating or you may be indifferent, but as politicians like Kevin Rudd, Penny Wong and Julia Gillard (all users of the term) would have known, the effect is pungent. No sensible, morally responsible person wants to be stigmatised in such a way.
Intimidation comes in many forms, and there can be no doubt that many people have been inhibited from entering the public debate on dangerous AGW because of the intimidatory power of this vicious language. And just in case you still haven’t got it, some prominent Australian public intellectuals to this day continue to explicitly endorse the moral equivalence between Holocaust and global warming denial. This endorsement is all the more incredible because it comes from academics who really understand the horror of the Holocaust.
Nonetheless, this blatant method of stigmatising those who questioned the so-called “consensus” view on AGW turned out not to be enough to suppress all independent views: many agnostic and sceptical scientists are made of tougher stuff. Accordingly, more stops had to be pulled out on the vilification organ, with sceptical scientists being compared by journalists and Labor politicians to eighteenth-century slave trade advocates, the odious tobacco lobby and recently even to paedophiles—a new low in public discourse.
Every cartoonist and satirist in the world, not to mention the investigative reporters, should by now have had their bullshit detectors on high alert. If the evidence was so good, and the sceptical scientists were so weak, wrong and few in number, then why the need for such rancorous politics? If you have the UN, the EU, the banks, the financial markets, most of the clergy and the media on your side, then why this dishonourable nastiness as well? I’ve always hated bullies and they have certainly been thick on the ground in this debate.
No good came of going back to my friend, the intellectual, for another discussion. I was referred to the “only” source of definitive knowledge on the subject, which was supposed to be Dr James Hansen, head of the NASA Goddard Institute for Space Studies. By this time, the alarm bell in my head was ringing loudly. For example, I came to know that only 0.7 degrees of warming had occurred since 1910, and only 0.4 degrees of that since 1945 when carbon dioxide levels started to rise considerably. Next, I was told that the Arctic sea ice was melting in an “unprecedented” way, despite abundant scientific documentation that the Arctic Ocean was virtually ice-free during the Holocene climatic optimum, only 8000 years ago—nonetheless, the public continued to be told that this melting sea ice manifests a “tipping point” that will lead to catastrophe.
These things notwithstanding, if NASA was the “main authority” then I thought that I should do as I was advised and consult them. So I went to the NASA website and searched for material on Arctic sea-ice melt. Five items into that page I discovered that “a new NASA-led study” into the causes of Arctic sea-ice melting had reported “a 23 per cent loss” in the Arctic’s year-round sea-ice cover between 2005 and 2007. The research team was led by Dr Son Nghiem of NASA’s Jet Propulsion Laboratory in California, who said that the rapid decline in winter perennial ice over the previous two years had been caused by unusual winds, which “compressed the sea ice … and then sped its flow out of the Arctic” where it rapidly melted in warmer waters. Dr Nghiem also said that “the winds causing this trend in ice reduction were set up by an unusual pattern of atmospheric pressure that began at the beginning of this century”. Yet nowhere in the public discussion of the dramatic sea-ice melt had we heard much about these real causes for its diminution, including the North Atlantic Oscillation, doubtless because it was easier to concentrate on the positive feedback loops created by the exposed Arctic sea. The real story was far more complicated and difficult to explain than in the glib terminology of AGW.
Furthermore, other than on Andrew Bolt’s and Joanne Nova’s blogs, it is extremely hard to find any widely-read popular public description in Australia (or worldwide for that matter) of the melting of Arctic sea ice that occurred between 1920 and 1940. Virtually all media coverage has related to the short satellite record of Arctic sea ice, which is only available since 1979. Rather than a simplistic alarming story of global warming, the NASA research was an accurate account of only part of a complex matter.
The real story in the Arctic Ocean obviously involves an intricate relationship between sea ice, ocean currents, atmospheric winds and temperature as affected by ocean–atmosphere oscillations like the North Atlantic Oscillation, and these modern changes need to be studied in the context of changes that have occurred through millennia. When viewed in these wider contexts, there is nothing untoward about the relatively minor changes in sea-ice cover that have occurred in the Arctic Ocean in modern times.
This was, of course, a great time for cartooning. John Howard saw the votes to be gained in crossing over to the warming camp, and Kevin Rudd promised to save the planet from the greatest moral, economic and spiritual threat of our time.
As the Arctic ice melted in 2007, Australia was suffering the continuation of an allegedly unprecedented, decade-long drought. The absence, indeed end, of sufficient rain to fill our rivers and dams was predicted by grim-faced climate scientists who invariably announced that things were far worse than their computer models had predicted. Yet the preceding severe droughts of the 1860s, the 1890s and during the First World War were rarely discussed in order to provide a needed perspective. With a simple Google search, anyone could, and still can, access photographs of horses and carriages on a bone-dry Murray River bed in 1914.
With so much political clout behind the dangerous-warming cause, and the Australian drought in full force, the next game-changing moment that captured my attention occurred on December 19, 2007. Dr David Whitehouse caused a stir by writing an article for the left-wing New Statesman magazine titled “Has Global Warming Stopped?” Dr Whitehouse stressed a point which concerned sceptics had long noticed: “The fact is that the global temperature of 2007 is statistically the same as 2006 as well as every year since 2001.” Because of the fundamental mechanism of global warming (the greenhouse effect), temperatures should have been increasing as carbon dioxide levels continued their relentless rise; but they were not. As Whitehouse, a PhD in astrophysics and former online science editor for the BBC, noted, “something else is happening [to the climate] and it is vital we find out what or else we may spend hundreds of billions of pounds needlessly”.
It was about this time that slowcoach denier cartoonists like me really started to wake up and look around, to discover the writings of experienced agnostic scientists like William Kininmonth, a former head of Australia’s National Climate Centre at the Bureau of Meteorology. Senior scientists like William had been publishing serious critiques of dangerous AGW way back in the 1990s, which was long before I and other slowcoaches had stopped our dreaming.
So just when those supporting climate alarm thought they had everything settled and nailed down, a gale of discontent started to blow. Cartoonist heaven, really: we love the spectacle of powerful people preparing their policy vessel against strong winds and rough seas, frantically rigging up fragile, flapping sails of spin and blather. If you’re going to spend over $15 billion of taxpayers’ money on desalinated water, or manage a potentially ruinous carbon dioxide trading scheme (please don’t stock our superannuation with the stuff), then you certainly don’t want to be questioned too closely, let alone lampooned, about the scientific details that you misunderstood or got wrong.
Nobody anticipated the next debacle: Climategate. As the result of an apparent hacking attack on a server at the Climatic Research Unit (CRU) at the University of East Anglia (which might, or might not, have been mounted by an internal whistleblower), thousands of e-mails previously exchanged between senior IPCC scientists were leaked to the public a few weeks before the 2009 Copenhagen Climate Change Conference. The e-mails reeked of scientific uncertainty, political manoeuvring, unreasonable secrecy and strange ethics.
The revelations that they contained undoubtedly exercised an influence on the failure of the Copenhagen conference. Climategate also prompted at least a short burst of candour from Professor Phil Jones, Director of the CRU, who confirmed in a BBC interview that the warming rates of the periods 1860–1880, 1910–1940 and 1975–1998 had been statistically similar; that from 1995 to 2009 there had been no statistically significant global warming; and that from 2002 to 2010 there had been slight but “insignificant” global cooling. In answer to a further question as to whether the “climate change (debate) is over”, Professor Jones stated, “I don’t believe the vast majority of climate scientists believe this.” I found this statement extremely encouraging, for the science was obviously not settled and the consensus was crumbling even amongst the warming devotees.
Since 2007 the non-scientific players in this great intellectual drama have been confronted by a creeping uncertainty (which some still do not want to acknowledge) concerning many contentious dangerous AGW issues. These have included: the composition of the IPCC and the credibility of its processes; the unusual melting, or not, of sea ice and glaciers; the evidence for medieval warm temperatures; the importance of sunspots; the measurement of claimed global warming; changes or not in patterns of extreme weather events; ocean “acidification”; ocean warming and sea-level rise; biomass absorption and the longevity of molecules of atmospheric carbon dioxide; the reliability of climate computer models; the influence of the short-period El Niño Southern Oscillation (ENSO), and other similar oscillations on a multi-decadal scale; the chaotic behaviour of clouds; the impact of cosmic rays on climate; the realisation that it is just clean air that is being vented by the Yallourn power stations (carbon dioxide and water, with virtually no pollutants); and, to cap it all off, even a newly declared scepticism towards dangerous AGW by green gurus like James Lovelock, the founder of the Gaia movement.
By early 2010, it seemed that nearly every single element of the global warming debate was well and truly up for grabs. In addition, and not put off by having their sanity questioned, myriads of qualified agnostic and sceptical persons made public statements, or signed declarations or petitions, to the effect that whilst dangerous AGW was a theoretically possible outcome of human-related carbon dioxide emissions, it was a very unlikely one given that, despite strenuous efforts, no proven AGW at all had yet been identified at a measurable level. For example, in the Oregon Petition, starting in 1998, more than 31,000 scientists, including 9029 with PhDs, signed a statement of protest at the findings and recommendations of the IPCC.
In Australia, and against this hurricane of uncertainty, the tattered vessel of government climate policy heedlessly weighed anchor and began to implement the demonisation of carbon dioxide by introducing penal taxes against its emission. Instead of waiting out the storm in harbour, government activists set out to sea guided by the Green faith and a few bearings taken on scattered windmills along the shoreline.
All of this provided great material for a satirist, but it was bad news indeed for the average Australian citizen whose cost of living was inexorably on the rise. In addition to the continuing increases in direct costs, it is also painful to contemplate the things that could have been done to improve our schools or health services using the money that has instead been squandered in vain pursuit of irrational renewable energy targets and “stopping global warming”.
Imagine if the sceptics are right. Who is going to be accountable, and who is going to do the accounting?
What of the establishment activists, and their media supporters, who have so vilified a group of honest, brave and experienced scientists for merely staying true to the empirical values of their profession? Who will vindicate the sullied reputations of, to name but a few antipodean names: Michael Asten, Bob Carter, Chris de Freitas, David Evans, Stewart Franks, William Kininmonth, Bryan Leyland, Jennifer Marohasy, John McLean, Joanne Nova, Garth Paltridge, Ian Plimer, Peter Ridd and Walter Starck? And the same question applies also for economists like Henry Ergas, Martin Feil, David Murray and others, who have dared to suggest that the Stern and Garnaut reviews were a travesty on both scientific and economic grounds, and that the carbon dioxide pricing/taxing emperor actually has no clothes.
I would love to see a list of all those socially beneficial environmental, educational and health projects that could have been funded instead of the profligate and futile spending on dangerous AGW that has actually occurred. I would like, too, though I doubt that it will happen in my lifetime, to see a public apology from all those advocates, intellectuals and politicians who have so freely slandered and injured the moral reputations of those other Australian citizens and qualified scientists whom they call “deniers”.
History is usually written by the political victors, but the global warming issue seems set to continue as a ritualised tribal debate for a long time yet. I once asked a committed “warming” journalist how many years the present pause in warming would have to last to cause her to challenge her own belief. Calmly looking me dead in the eye, she said, “Fifty years.”
This is the introduction to the new book Taxing Air: Facts and Fallacies About Climate Change, by Bob Carter and John Spooner, with Bill Kininmonth, Martin Feil, Stewart Franks and Bryan Leyland, published by Kelpie Press and available at bookshops and at www.taxingair.com for $30. John Spooner has been a newspaper cartoonist and illustrator since the 1970s. In 2002 he won the Graham Perkin Australian Journalist of the Year Award.