Monday, December 29, 2008
Existence of God - proof
A Proof of the Existence of God
By James Kidd
The First Vatican Council taught that the existence of God can be proven by our reason alone:
God, the origin and end of all things, can be known with certainty by the natural light of human reason, through the things that he created. (Dei Filius 2)
But the Church has never offered an actual proof of God; it has left that to the philosophers. Although many have attempted to prove God’s existence, what they end up with are mere arguments. They may be quite persuasive, but they lack the metaphysical certitude of a mathematical proof: they may presuppose some bit of knowledge, or they may leave room for possible doubt.
But the medieval understanding of God, which St. Thomas Aquinas espoused, does not allow for doubting his existence. The proof that follows is a paraphrase of the Angelic Doctor’s many writings on this subject. It proves the existence of a being that is one, immutable, eternal, infinite, omniscient, and omnipotent.
In fact, you can be more certain that God exists than that you are reading this article right now.
A Brain in a Vat
Let’s start by taking a position of radical doubt. Suppose for a moment that you are not really a human being with an actual body. In reality, you are nothing more than a brain floating in a vat of fluids, with electrodes attached to various parts of your exterior that allow evil scientists to manipulate you into thinking that what you perceive is actually there, when in fact it is nothing more than an imaginary world constructed by the scientists. Right now, they are making you think that you are reading this article when in fact you are not.
From this point of extreme skepticism, we will prove beyond all possible doubt that God exists.
1. One cannot deny one’s own existence.
Cogito, ergo sum. Even if you’re just a brain in a vat, your own existence can be verified simply by the fact that you perceive—that is, you see, hear, smell, taste and touch things. Whether or not your perceptions are accurate is another question, but even if you doubt your own existence, you must exist, for it is impossible for a non-existent thing to doubt. In fact, the very act of doubting proves that you exist. Therefore, denying your own existence is a contradiction in terms. I can doubt your existence and you can doubt mine, but I cannot doubt my own, nor can you doubt yours.
2. There is at least one thing that exists.
It is possible for you to be deceived in your perception. In fact, it’s conceivable that every one of your perceptions is a delusion. But even if that is the case—even if nothing you think exists actually exists—you still must exist.
Entity is the word we have for anything that exists. You exist, so you are an entity.
3. There is such a thing as existence.
You can know with certainty that there is at least one entity, at least one thing of which the term existence can be predicated. If there were no such thing as existence, nothing would exist, not even you. But, as we have seen already, that is impossible.
As Aquinas would say, there must be an "act of being" in which all entities participate. This act of being must itself exist; it must be an entity. Thomas calls this entity esse, which is Latin for "to be" or "to exist."
4. The nature of esse is actuality.
Now that we have established that esse is an entity, we must ask: What is the nature of this entity? What is its definition?
To answer these questions, we must consider existence by itself, apart from everything else.
What do we mean when we say that something exists? We mean that it is actual. For example, an acorn is actually an acorn and potentially a tree. A tree is actually a tree and potentially lumber. Lumber is actually lumber and potentially a desk. A desk is actually a desk and potentially firewood. Firewood is actually firewood and potentially ashes.
In other words, a thing is actually what it is right now; it is potentially what it might be in the future.
Now when we say that something exists, we normally refer to actuality rather than potentiality. For instance, if I held up an egg and said, "This egg exists," you would understand me, because what I am saying is "This egg is actual" or "This is actually an egg." But if I held up the egg and said, "This chicken exists," that would not make sense to you, because even though the egg is potentially a chicken (that is, the chicken exists potentially), the concept of existence applies primarily to the egg’s actual state and only secondarily to its potential state.
Now potentiality is still a form of existence, but we realize that it is, in some sense, inferior to actuality. In other words, potentiality is a "shade" of existence the same way that pink is a shade of red. Just as we would say that pink lemonade is red but not in the same way that Hawaiian punch is red, so we say that potentiality exists but not as much as actuality does. Actuality is the fullness of existence.
So, again, taking the brain-in-a-vat hypothesis, you know that you are actual, even if nothing else you perceive exists.
5. Esse is nothing but pure actuality.
Potentiality is a privation of actuality. That is, it is not a thing in itself but the absence of something. In the same way, darkness is not a substance itself but the absence (or privation) of light.
Now a thing considered in itself contains nothing but its fullness. The nature (or essence) of light consists of nothing but light itself; it does not contain darkness. Therefore, the essence of esse contains nothing but its fullness, actuality. There is no potentiality in the nature of esse. Thus, the essence of esse is pure actuality, just as the essence of light is pure light.
Thomas argues that all entities participate in esse insofar as they are actual. Therefore, that in which they participate—esse—must be actual. In fact, it cannot admit of any potentiality.
6. Esse not only does exist but must exist.
Existence itself is pure actuality, with no potentiality in it. This means that the essence of existence is nothing other than existence. Existence is its own essence.
From this it follows that esse itself must exist, for if it did not, it would violate its own essence, which is impossible.
7. Esse is distinct from everything else that exists.
We know from step 1 that you exist and from step 3 that esse exists. But we also know that the two are not identical.
Let’s say you’re just a brain in a vat, that everything you perceive is an illusion. You can still recognize that, while you are actual in some ways, you are potential in other ways. You actually perceive that you’re reading this article right now; you’re potentially perceiving something else. You are actually existing right now; you potentially exist five minutes from now. Moreover, anything else that may exist has the same attribute: Its essence is composed of both actuality and potentiality.
But, as we saw in step 5, esse is nothing but pure actuality. Thus, it must be distinct from any other entity.
8. Esse must be one.
If there were more than one esse, then there would be distinctions among them. But distinctions imply limitations, and limitations imply potentiality. But since esse is pure actuality, it has no limitations, which means there is no distinction in esse. Therefore, there is only one esse.
9. Esse must be immutable.
Change involves potentiality. In order for something to change, it must first have the potential to change; it must have a potentiality that is to be actualized. But since esse is purely actual, it has no potential to change. Therefore, esse is unchanging.
10. Esse must be eternal.
Time is nothing but the passing of the future into the present into the past. It is the changing of the not-yet into the now into the no-longer. But because esse does not change, it does not change from the future to the present to the past. It must be outside the realm of time, which means that there is no future, present, or past with esse. In other words, esse is non-temporal, or eternal.
11. Esse must be infinite.
Space is nothing but the changing of the over-here to the over-there. Anything that is actually here is potentially there. But because esse is immutable, it must be outside the realm of space. It has no spatial constraints—that is, esse is infinite.
12. Esse must be omniscient.
Even if you’re a brain in a vat, you can perceive that you have the capacity to know. But your knowledge is only partly actualized: you actually know some things and only potentially know others. Since esse is purely actual, with no unactualized potential, its knowledge must be fully actual; esse must know all there is to know. That is, esse is all-knowing, or omniscient.
13. Esse must be omnipotent.
You can perceive that you have the capacity to do some things that are logically possible. Since you are only partly actual, and esse is purely actual, esse must be able to do all things that are logically possible. That is, esse is all-powerful, or omnipotent.
We have thus proven the existence of a being (esse) that not only does exist but must exist and is one, unchanging, eternal, infinite, omniscient, and omnipotent. This matches the description of God given at the beginning.
We can conclude, then, that even if all of your sense perceptions are false, even if you are nothing but a brain in a vat being manipulated by scientists into believing that you are reading this article right now when in fact you are not, there are two things you can know with absolute, 100 percent certainty: (1) You exist, and (2) God exists.
Sunday, December 28, 2008
Homophobia? really....!
(small excerpt of the Pope's address To the curia 22 Dec 2008 )
[ ..... Joy as the fruit of the Holy Spirit: with this we come to the central theme of Sydney which, precisely, was the Holy Spirit. In this retrospective glance I
would like to refer, by way of synthesis, to the orientation implicit in such a theme. Keeping before our eyes the witness of Scripture and of Tradition, four dimensions of the theme "Holy Spirit" are easily recognised.
1. The first is the affirmation which we find at the beginning of the account of creation: there we hear of the Creator Spirit which hovers over the waters, creates the world and constantly renews it. Faith in the Creator Spirit is an essential part of the Christian Credo. The fact that matter carries within itself a mathematical structure, is full of spirit, forms the foundation on which the modern natural sciences rest. Only because matter is structured in an intelligent fashion is our spirit competent to interpret it and to actively refashion it. Because this intelligent structure proceeds from the same Creator Spirit which has given spirit to us, it brings with it a task and a responsibility. The ultimate foundation for our responsibility towards the earth rests on our beliefs about creation. The earth is not simply our possession, which we can plunder according to our interests and desires. It is rather a gift of the Creator, who has designed its intrinsic laws and with this has given us the basic directions to which we must adhere as stewards of his creation. The fact that the earth and the cosmos mirror the Creator Spirit clearly means that their rational structures which, transcending the mathematical order, become almost palpable in our experience, bear within themselves an ethical orientation. The Spirit which has formed them is more than mathematics; he is the Good in person, who, using the language of creation, points us to the way of right living.
Since faith in the Creator is an essential part of the Christian Credo, the Church cannot and should not confine itself to passing on the message of salvation alone. It has a responsibility for the created order and ought to make this responsibility prevail, even in public. And in so doing, it ought to safeguard not only the earth, water, and air as gifts of creation, belonging to everyone. It ought also to protect man against the destruction of himself.
What is necessary is a kind of ecology of man, understood in the correct sense. When the Church speaks of the nature of the human being as man and woman and asks that this order of creation be respected, it is not the result of an outdated metaphysic. It is a question here of faith in the Creator and of listening to the language of creation, the devaluation of which leads to the self-destruction of man and therefore to the destruction of the same work of God. That which is often expressed and understood by the term "gender" results finally in the self-emancipation of man from creation and from the Creator. Man wishes to act alone and to dispose ever and exclusively of that which concerns him alone. But in this way he is living contrary to the truth; he is living contrary to the Creator Spirit. The tropical forests are deserving, yes, of our protection, but man as a creature merits no less, for in him is written a message which does not mean a contradiction of our liberty, but its condition. The great Scholastic theologians have characterised matrimony, the life-long bond between man and woman, as a sacrament of creation, instituted by the Creator himself, which Christ, without modifying the message of creation, has incorporated into the history of his covenant with mankind.
It forms part of the Church’s message that she must recover the witness in favour of the Creator Spirit present in nature in its entirety, and in a particular way in the nature of man, created in the image of God. From this perspective it would be beneficial to read again the Encyclical Humanae Vitae: the intention of Pope Paul VI was to defend love against sexuality as a consumer good, the future against the exclusive claims of the present, and the nature of man against its manipulation. ...... ]
Friday, December 12, 2008
The process of erroneous religions
Dec 1, '08, 1:53 am
Satans' Master Plan...
THE MASTER PLAN OF THE DEVIL. THE DEVELOPMENT OF MODERN ERRORS...
For approximately seventeen centuries men acknowledged that authority comes only from God, and temporal rulers sought the approval and the blessing of their bishops who, by divine right, ruled in their dioceses as successors of the Apostles. Then came the Philosophists. As always, the Power of Darkness used pride to achieve his aims, the pride of human reason. As always he called the Light, Darkness and the Darkness, Light (Isaiah 5:20). That is why the Medieval times are now referred to as the "Dark Ages"; (in fact, the Dark Ages were pre-Medieval), and why Philosophism is referred to as "Enlightenment".
As always, the Devil acted with subtlety: he did not bring in Communism immediately, he brought in Modern Democracy first, knowing that the one would lead to the other. The lures inherent in the first would more easily lead to the destruction of man by the second. The Devil acted with cunning. So shrewd is he that even Christians were deceived. To make a thorough job of it he instilled into modern minds the myth of historical inevitability. "We must march with the times" we are told, as if the times were not what we are making them!
A SUBTLE AND GRADUAL PROCESS...
The present state of the world is not due to chance, it is the outcome of the everlasting struggle between good and evil. The Devil knows that his fight against God has to be gradual if it is to have any chance of success. Therefore, he began his fight in the 16th century by dividing Christianity.
When the first battle had been won, the Devil moved from the religious field into the philosophical field, and conceived Rationalism, which put human reason before Revelation.
Christians being already divided, there was no single front to defend the primacy of Divine Revelation.
The interpretation of Divine Revelation being divided against itself, it could not resist the claim of the so-called primacy of human reason. Human reason appeared more reliable, and so the new philosophy installed itself. It naturally followed that man began to think about an earthly paradise.
Hence Rationalism begot Human Messianism (i.e. Humanism). It was then logical that man should not want to be impeded by standards of moral conduct. He had to be free from all restraints, and his reason alone was going to tell him how to act and behave.
Thus came into being the doctrine of Liberalism. Almost immediately, this doctrine extended to every field of human activity, especially economics, politics and science. From being philosophical, it became practical, a way of life whose philosophical origin most people do not suspect nowadays.
AN UNHOLY TRINITY...
After this, Human Messianism combined with Liberalism to set up CAPITALISM, an economic system based on greed and usury, which paves the way for Communism. Rationalism and Liberalism combined to give birth to the principle of POPULAR SOVEREIGNTY: being free and reasonable, every human being was to make all decisions.
Rationalism, and Human Messianism, combined to give birth to SCIENTISM (or the cult of Technology, the worship of the work of man, i.e. TECHNOLATRY) whereby we expect salvation from better and higher production, an error which was observed by Pius XII in his 1952 Christmas message. We speak of "Progress" in terms of industrialisation, completely unaware of "the undeniable advantages of an economy based chiefly on agriculture". (Pius XII)
DIABOLICALLY LOGICAL...
Thus, the unholy trinity, that is, Rationalism, Human Messianism, and Liberalism, laid the ground-work for all the evils which are destroying modern society. Observe how gradual the process has been:
a) Difference in religious views (affecting the soul).
b) Alteration in philosophical thinking (affecting the intellect).
c) Organisation and purpose of the physical world (affecting the will).
Observe how logical the development:
a) REFORMATION (dividing Christianity to weaken Divine Revelation).
b) RATIONALISM (doubting that man can rely on Divine Revelation).
c) HUMAN MESSIANISM (asserting that man can rely on himself).
d) LIBERALISM (trusting man wholly).
e) CAPITALISM (Human Messianism plus Liberalism).
f) DEMOCRACY (Rationalism plus Liberalism).
g) TECHNOCRACY (and Technolatry) - (Rationalism plus Human Messianism).
These developments are too gradual and logical to leave any doubt that there is an Intelligence behind them. This Intelligence is that of the Power of Darkness.
A number of Saints have said that, in the Latter Days, evil will be done by men of good will. There is no doubt that many Catholics believe in good faith that we are living in an age of progress, and that Modern Democracy IS Progress. The superficial advantages which it presents hide from many its intrinsic nature, the errors on which it is based, and the evils which accompany it.
The deception of the Devil has worked. Author Unknown...
Quote from onenow1: I like to read this every once in a while, Christian Knight. I think it brings a little more understanding of Reformation history into perspective.
After 2,000 years the Catholic Church is still here: same place, same station, same doctrines. As Jesus said, the gates of Hell shall not prevail.
Did you ever wonder why, to this day, Christian churches continue to split into newer factions? I don't think this is the work of the Holy Spirit.
Remember what Jesus said in the Gospels: a house divided against itself will not stand.
Peace, onenow1
Thursday, December 11, 2008
Climate Change Illusions
An article from Quadrant
Environment
Illusions of Climate Science
William Kininmonth
How have we come to a situation where, as some polls suggest, most Australians are so concerned about dangerous climate change that they will put aside the very tools and technologies that have sustained clean air, clean water, nutritious food and long life? More importantly, is the perceived danger real and will the reduction of carbon dioxide emissions avert the perceived danger? Although there are many uncertainties to be resolved, it is clear that the community has been the subject of more than two decades of heavily biased propaganda.
In spite of claims to the contrary, there is no consensus of scientists supporting the findings and recommendations of the Intergovernmental Panel on Climate Change. There exists a large and vocal group of highly qualified dissenters (often denigrated as sceptics, deniers or worse). Published letters and opinions in the press suggest the scientific community is still divided and the community has not succumbed to the propaganda of human-caused global warming. Many in the community, with every justification, are awaiting more information about the costs and the economic and social impacts before lining up to march behind the government’s carbon dioxide reduction banner.
A widely accepted conviction that dangerous climate change is actually pending will be required before the community will support the government’s strategy to shut down fossil-fuel-dependent industries and willingly abandon the energy-dependent and satisfying lifestyle activities they enjoy. After all, in the cause of saving the planet we will all be required to give up a wide range of personal freedoms that we currently take for granted. We will want to be in full agreement that the alleged dangers are real and present, and that the course of government-imposed actions really will avert them.
Are the Dangers of Human-Caused Climate Change Real and Present?
The notion of human-caused global warming has its origins in late-nineteenth-century speculation about the causes of past climate shifts, especially the ice ages, when large parts of North America and Europe were under kilometres of ice. Svante Arrhenius of Sweden argued that intermittent volcanic activity, and the injection of huge amounts of carbon dioxide into the atmosphere, had regulated the retreats and advances of glaciations, but this theory has now been discarded. The Serbian Milutin Milankovitch’s early-twentieth-century calculations linking the glaciations to changing characteristics of the earth’s orbit around the sun are now in favour. Nevertheless, speculation linking potential global warming to the burning of fossil fuels, based on Arrhenius’ theory, continued through the middle twentieth century.
During the 1960s and 1970s computer modelling was being developed to advance weather prediction. As they advanced, weather prediction models were adapted to crudely simulate climate, and a number of simple “what if?” experiments were carried out. For example, what would happen to Earth’s temperature if the concentration of carbon dioxide in the atmosphere was doubled, or trebled? Some of these crude experiments suggested that increased carbon dioxide might significantly raise the temperature of the earth.
As a consequence of the early modelling experiments, the issue of dangerous human-caused global warming was a consistent underlying theme of a series of international and intergovernmental environmental conferences that preceded the formation of the United Nations Environment Program (UNEP) in the early 1970s.
The 1979 First World Climate Conference held in Geneva played a crucial role in alerting the world community to the need for a better understanding of climate systems, climate change and mitigation of its harmful effects. Global cooling and the possibility of earth slipping into the next ice age had been dominant themes in the years leading up to the conference. However, the possibility of human-caused global warming was recognised and received attention. The Conference Declaration noted: “it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to gradual warming of the lower atmosphere, especially at high latitudes”.
In 1985, mainly at the instigation of UNEP, but co-sponsored by the World Meteorological Organization (WMO) and the International Council of Sciences (ICSU), a conference was held in Villach, Austria, to review the impact of human-caused carbon dioxide emissions on climate. Following the presentation of invited papers the Conference Statement was forthright: “As a result of the increasing concentrations of greenhouse gases, it is now believed that in the first half of the next century a rise of global mean temperature could occur which is greater than any in man’s history.” It was asserted that planning for the future should not be based on historical data, because human-caused emissions of carbon dioxide were contributing to global warming and climate change. Based on then available computer models, estimates were given of a 1.5°C to 4.5°C temperature rise and a sea level rise of between 20 and 140 centimetres from a doubling of carbon dioxide concentration in the atmosphere.
The Villach Statement was the basis for instigating a series of national and international conferences. The essential purpose of the conferences was to raise community awareness of the danger from burning fossil fuels and raising atmospheric carbon dioxide concentrations. In Australia, the then Commission for the Future and CSIRO were leading players in the local promotion, including sponsoring of the December 1987 conference of invited scientists, “Greenhouse: Planning for Climate Change”. A 1988 international conference in Toronto, of invited environmental bureaucrats and scientists, was the first to specifically call for a 20 per cent reduction of carbon dioxide emissions in order to prevent future dangerous climate change.
The very strong and active role being played by UNEP and the environment movement generally in the promotion of human-caused global warming became of concern to the more conservative science-orientated WMO. The concern was twofold: first that the policy proposals were running far ahead of perceived scientific understanding; and second, the lead in climate matters was being usurped by UNEP. WMO and UNEP agreed that a thorough review of the science associated with carbon dioxide and its impacts on climate should be carried out. The two agencies co-sponsored the formation, in 1988, of the Intergovernmental Panel on Climate Change (IPCC), under UN auspices, as an authoritative source of advice to governments.
The IPCC presented its first assessment report at the 1990 Second World Climate Conference. In essence, the findings confirmed that there is a greenhouse effect and that increasing atmospheric concentrations of carbon dioxide resulting from human activity will enhance the greenhouse effect. However, the report highlighted the many scientific uncertainties and noted it was not possible to predict the timing, magnitude or regional impacts of the enhanced greenhouse effect. Nevertheless, in spite of the uncertainties, the IPCC provided an estimate of between 0.2°C and 0.5°C temperature rise per decade and 6 centimetres per decade sea level rise over the coming century, based on computer models.
The IPCC First Assessment Report was endorsed by the 1990 Second World Climate Conference, and an associated ministerial meeting issued a declaration calling for the negotiation of a treaty to restrict carbon dioxide emissions and prevent dangerous climate change. The UN established an Intergovernmental Negotiating Committee that met six times between February 1991 and May 1992 and presented its Framework Convention on Climate Change for endorsement by governments at the Earth Summit in Rio de Janeiro in June 1992. From the Convention:
“The ultimate objective of this Convention and any related legal instruments that the Conference of the Parties may adopt is to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner.”
What is not generally understood is that the objective of the Convention is not to prevent climate change, dangerous or otherwise, but to prevent dangerous climate change caused by human activity. This is underscored in the Definitions to the Convention: “‘Climate change’ means a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods.”
The negotiations for the UN Framework Convention were carried out against a background of politics rather than science and logic. Many undercurrents could be clearly discerned that reflected the various vested national and regional interests.
Developing countries saw the issue starkly: whether the problem was real or not it was created by industrialised countries; it was for industrialised countries to fix and there should be money for compensation to developing countries that would be affected; and there should be technology transfer to ensure developing countries did not make the same “mistakes”. The association of small island states was most vocal in this respect—their very existence was claimed to be threatened by rising sea levels and they should receive refuge and compensation.
The oil exporting countries of the Middle East were concerned at where these negotiations were heading, seeing a potential drying up of revenues. At every point there was denial of a problem, emphasis on the dangers of nuclear energy as a “clean” alternative, and attempts to water down any agreements for action.
The newly independent countries of the former Soviet Union sought special recognition because economic downturn was leading to the closure of older inefficient and “dirty” factories. Credits for newer technology with fewer emissions were expected to lead to investment through a clean development mechanism.
Most intriguing was the obvious tension between the European Union and the United States. The former had a high investment in nuclear energy and was rapidly converting from coal to offshore natural gas; it could also see offset benefits from the modernisation of industry in the former East Germany. The EU was in a strong position to comply with a low-end requirement for 20 per cent reduction in emissions from the propitious 1990 baseline, especially with the potential of expanding nuclear energy—despite the experience of Chernobyl. In contrast, the much publicised nuclear accident of Three Mile Island had raised very strong public opposition and the USA had little prospect of expanding nuclear energy. Any move to impose energy constraints through reduction in carbon dioxide emissions would hit the US industry base with its reliance on fossil fuels, especially coal. There was clearly potential to shift the comparative advantage for producing goods and services from the USA to the EU, with advantages for European economies.
Now that the first Kyoto period is drawing to a close there are added political complications through the rapid industrialisation of developing countries, especially the economies of China, India and Brazil. Whether or not there is consensus about the underlying scientific assertions, the next round of negotiations for post-2012 commitments is going to be difficult, given that the bulk of new emissions will come from developing countries and that they have emphatically and repeatedly refused to countenance any limitations on their use of carbon-based fuels.
Are the Alleged Impacts of Climate Change Exaggerated?
In Australia the assertions of dangerous human-caused climate change, and even runaway global warming, are being fanned by varying interests who should know better, or who, at least, should check the facts. Drought and problems of the Great Barrier Reef are being promoted as diabolical issues that can only be addressed by emissions reduction, despite the overwhelming evidence that the current climate experience is but a repeat of past droughts when we were far less able to offset the economic losses with income from other industries. The argument put by many, that the drought conditions of the past decade are so severe that they can only be the cumulative outcome of environmentally-unfriendly human activity, cannot be sustained. It ignores a large body of accumulated scientific and historical knowledge to the contrary, and is a resort to illusion.
Australia’s documented history, incorporating its cultural and economic development within the constraints of its relatively harsh climate, is relatively short. Drought and other vicissitudes of climate have been ever-present dangers and have been dominant in shaping the pattern of settlement and development. The twentieth century opened with much of the country in the grip of the “Federation Drought” that commenced in the middle 1890s and continued in parts until about 1905. The decade around the First World War, the late 1930s and through to the early 1940s, and the middle 1960s were all prolonged periods of generally low rainfall, especially in parts of eastern Australia. Although there were short but intense drought periods, such as 1982–83, the second half of the century was generally wetter than the first, until the early 1990s and the beginning of the current dry period.
There is currently a focus on the state of the Murray-Darling Basin and the condition of the lower Murray River, as if the current low river flows had not happened before. However, during the Federation Drought the basin suffered significant rainfall deficiency and by late 1902 the Darling and Murray Rivers had virtually run dry. In 1914 the Murray downstream of Swan Hill was reduced to a series of stagnant pools. Prolonged low rainfall during the 1940s again resulted in stress on the rivers of the Basin and the Murray River ceased to flow at Echuca in April 1945.
One of the great myths that gained currency during the recent debate on human-caused global warming is that higher temperatures will cause more droughts. However, continental rainfall largely has its origins in evaporation from the surrounding oceans. The fact is that evaporation increases by more than 6 per cent with each degree Celsius of sea surface temperature rise and, as a consequence, warmer temperatures will generate more rainfall. The great wind-blown sand dunes of Central Australia did not form during the warmer Holocene Optimum between 5000 and 10,000 years ago but during the colder, drier glacial period more than 20,000 years ago. During the Holocene Optimum the subtropical deserts of the world were blooming and teeming with life. In the past, a warmer world with higher sea surface temperatures has been beneficial and enhanced Australia’s summer monsoon and winter rainfall.
What we find is that maximum temperatures are lower during wetter years, when there is more cloud and wetter soil from which evaporation keeps the surface cooler. During very dry years there is less cloud and very little cooling evaporation; maximum temperatures are consequently higher. It is the clouds, rainfall and soil moisture that regulate temperature. It should be noted that the all-time daily maximum temperatures of Adelaide, Melbourne and Sydney occurred during January 1939, as a heatwave progressed across southern Australia at the culmination of two years of regional drought. The high temperatures followed the drought—the drought did not follow the high temperatures.
Many other mythologies that link possible dire outcomes with global warming have their origins in studies following the major El Niño event of 1997–98. The shift in seasonal weather patterns during the event caused many tropical and middle latitude regions to experience drought, while other regions experienced excessive rainfall and flooding. The uncommon weather sequences and seasonal rainfall patterns resulted in a range of ecological responses, most of which were deemed undesirable because they were outside the boundaries of usual experience. Land and water management systems became stressed, much community infrastructure was damaged or destroyed, and ecological changes promoted the spread of a range of diseases.
Drought in many equatorial and tropical forests and grasslands made these lands susceptible to fire; outbreaks that occurred were generally unmanageable because of inadequate planning and response infrastructure. The accompanying smoke promoted respiratory and eye infections; the stagnation of streams and waterways led to pollution accumulation and to the outbreak of a range of pollutant-related diseases.
Elsewhere, excessive rainfall caused waterlogging of fields, flooding, and destruction of private and community infrastructure. Expansion of insect populations, especially mosquitoes, meant that the carriers could spread disease more readily. Higher incidences of encephalitis, dengue fever and malaria could be linked directly to the changed environmental conditions. As a general rule, the increased incidence of disease occurred in countries that did not have the resources on hand for rapid deployment to control the outbreaks, either through insect control or for medication.
According to a US assessment, the global impact from the 1997–98 El Niño event included 24,000 deaths, 533,000 people suffering illness, 6 million persons displaced, 111 million persons adversely affected, and a direct loss of US$34 billion.
The 1997–98 El Niño event, and information from earlier events, provided a valuable database for linking changed climatological conditions to environmental, community and industrial impacts. The real value of this database is in the formulation of response strategies so that, in the future, resources are marshalled and potential impacts can be mitigated. For example, early response will prevent the build-up of insect populations and reduce the spread of disease; medications will be available for treatment of eye and respiratory diseases where smoke is a problem; and water purifiers can be mobilised to ensure clean water.
Unfortunately, the raw data relating unmanaged ecosystem responses and community impacts from the local and limited duration seasonal climate anomalies of the El Niño events have been extrapolated to give potential impacts of human-caused climate change. However, directly extrapolating future impacts from past experience can be misleading. For example, extrapolating the expansion of mosquito populations and increased disease incidence to a hypothetical future climate gives a very scary but exaggerated scenario. This, of course, is the intention.
There are two essential pieces of information that the proponents of the theory of dangerous human-caused climate change do not discuss. First, the impact statistics relate to largely unmanaged systems. For example, malaria was endemic throughout northern Europe before the draining of marsh lands and the imposition of good public health regimes. Would it not be important to implement appropriate public health measures in the countries still subject to these diseases in order to reduce their incidence, whether or not the world is getting warmer or cooler? Second, it is legitimate to ask whether reduction in carbon dioxide emissions is the most sensible and cost-effective approach to controlling a range of endemic diseases.
Is Dangerous Human-Caused Global Warming a Reality or an Illusion?
The case for dangerous human-caused global warming rests solely on the projections of computer models. Without such projections, which have consistently been in the range of a 1.5°C to 4.0°C global temperature rise from a doubling of carbon dioxide concentration, there would be no basis for alarm. The low-end computer projections exceed the range of temperature variation of the past 10,000 years—the experience that ranged between the Holocene Optimum and the Little Ice Age. The high-end projections approach the temperature and climate range between the major ice ages, including advance and retreat of continental ice sheets and sea level variation of more than 130 metres. Sediment analysis from ocean cores suggests that the range of tropical temperature variation across the glacial cycles was only about 3°C, although the range was much larger over the polar regions.
There is, however, much observational and theoretical evidence to suggest that the computer projections are fanciful. Even the evolution of computer modelling of climate suggests that the projections should be treated with extreme caution. Importantly, the oceans and their circulations are the thermal and inertial flywheels of the climate system; as the ocean circulation changes, the atmosphere and its climate respond. Our knowledge of subsurface ocean circulations and their variability is limited. Without this vital input, projections of future climate are tenuous at best.
The computer models used as the basis of projections at the 1985 Villach Conference and later for the 1990 IPCC first assessment had no dynamic ocean circulation. The ocean was represented by a water slab with prescribed energy transfers. It was assumed that the ocean surface temperature variations would respond to changing atmospheric temperatures as forced by carbon dioxide increases. To determine the effect of carbon dioxide a model would be brought to equilibrium to generate the “control” climate; the atmospheric carbon dioxide concentration would be doubled and run to equilibrium, giving the “response” climate. The difference between the “response” and the “control” was deemed to be the impact of carbon dioxide.
The IPCC initiative and the negotiations for the UN Framework Convention on Climate Change gave worldwide impetus and increased government funding for the rapid development of climate modelling. The 1995 IPCC second report was able to draw on results from climate models that by then included dynamic ocean circulations. A weakness of these early coupled ocean–atmosphere models was a tendency to warm, even without carbon dioxide forcing. Many of the projections of carbon dioxide forcing came from models that took the difference in warming between control and response where both experienced warming. In other models the ocean–atmosphere energy exchange was constrained and adjusted to maintain a steady global temperature in the control. The same artificial constraints were applied to the response.
The issues surrounding the natural tendency of the coupled models to warm were claimed to have been overcome by the time of the 2001 IPCC third report. Unforced computer simulations extending over a period of 1000 years showed no long-term global temperature trend and no significant periodic oscillations. On this basis, the IPCC asserted, “The warming over the past 100 years is very unlikely to be due to internal variability alone, as estimated by current models.” This was also the report that promoted the infamous “hockey stick” representation of global temperature over the past 1000 years—essentially the straight handle of constant temperature for 900 years followed by the blade of rising temperatures of the past century.
The combination of unvarying computer simulation and apparently steady temperatures before the rapid industrialisation and then temperature rise of the twentieth century was powerful imagery to support the propaganda that the warming of the twentieth century was human-caused. Unfortunately the “evidence” was all a mirage. The statistical analysis underlying the “hockey stick” has been shown to be fatally flawed; a wide range of compelling historical, cultural, archaeological and paleo data support the Medieval Warm period–Little Ice Age–modern warm period climate cycle. Moreover, there is no strong evidence that the current temperatures are warmer than those of the Medieval period from the ninth to the thirteenth centuries when, for example, there were thriving settlements on Greenland.
Although the IPCC case for an unvarying climate, unless externally forced, rests on the performance of computer models, the proposition does not accord with either evidence or logic. The oceans are a relatively large mass of cold dense fluid that is constantly in motion, largely driven by surface wind stress, although tempered by topography, tropical surface heating and salinity variations. The atmospheric circulation responds to tropical heat and moisture exchange from the underlying oceans and the atmosphere transports heat to polar regions. It would be truly remarkable for two interacting fluid layers on a rotating spherical surface not to produce significant periodic variations on a range of time-scales. The observed inter-annual and decadal variations of climate and the multi-centennial time-scale of some overturning circulations contradict the assertion of the IPCC that the climate system has only limited internal variability. The evidence provides support for the view that there is significant internal variability of the climate system that gives rise to variations on a range of time-scales.
The 2007 IPCC fourth report has claimed that, based on contemporary computer model simulations, the temperature rise of the last half-century is very likely caused by human activity, particularly carbon dioxide emissions. This claim is made even though there is only limited correlation between fossil-fuel-based carbon dioxide emissions and twentieth-century global temperature rise. The economic stagnation and limited growth of carbon dioxide emissions in the decades between the world wars was accompanied by significant rise in global temperature; the rapid increase in fossil fuel usage in the decades following the Second World War coincided with declining global temperatures. It was only after the middle 1970s that temperatures again increased, and mainly over the continental areas of the northern hemisphere, but the temperature trend has again plateaued during the last decade.
The IPCC rationale is that emissions of aerosols during the early years of the postwar industrial boom reflected more solar radiation back to space and therefore constrained global temperatures against the enhanced greenhouse effect of increasing carbon dioxide concentrations. According to the rationale, following the implementation of national Clean Air Acts the aerosol emissions were eliminated, allowing the enhanced greenhouse effect to emerge and force up temperatures in the 1980s and 1990s. Models with carbon dioxide and aerosol forcing were able to reproduce the twentieth-century global temperature record with a degree of fidelity.
Credulous supporters have accepted the seemingly plausible IPCC rationale and its associated computer simulations without questioning the underlying foundations. First, as the IPCC report notes, there is a very low level of understanding about the interactions between radiation and atmospheric aerosols. Second, there are no observations for the magnitude and distribution of atmospheric aerosols—the aerosol forcing of the computer models is without validation. Third, there is no explanation as to why the late-twentieth-century warming was mainly over the northern hemisphere land areas despite the carbon dioxide increase being well mixed across both hemispheres. Fourth, there is no reason given for the recent hiatus of temperature increase. The “evidence” is no more than model tuning with plausible parameters. The faith in the computer models, on which the IPCC’s climate predictions are based, is misplaced.
The current global temperatures are relatively warm but not too dissimilar from those before Earth entered the current glacial phase about 5 million years ago; the current global temperature is only marginally cooler than the temperature peaks achieved during each of the interglacials of the last half-million years as earth recovered from successive ice age periods. A casual observer of the record might readily conclude (and not be far wrong) that there is a natural upper limit that earth’s temperature asymptotes towards.
Although not in the context of global warming, in 1966 C.H.B. Priestley (then Chief of CSIRO’s Division of Meteorological Physics) wrote of the limitation of temperature by evaporation in hot climates. High daytime maximum temperatures are reached over arid lands of the tropics where only radiation loss and conduction are available to rid the surface of energy absorbed from the sun. Where the land surface is wet or covered in vegetation the temperature is considerably lower because the additional evaporation of latent energy has a powerful cooling effect; evaporation (and latent heat exchange to the atmosphere) increases almost exponentially as temperature rises. The combined radiation, conduction and evaporation losses from the oceans and wet or vegetated surfaces can offset the absorption of solar energy at a lower temperature than when evaporation is absent.
The principle is identical for carbon dioxide forcing and its enhancement of the greenhouse effect. The magnitude of the down-welling long-wave radiation at the surface increases as the concentration of carbon dioxide increases. There is a corresponding rise in surface temperature that is constrained by the increase in surface energy losses (radiation emission, conduction and evaporation of latent heat). The earth’s surface is predominantly water or well-vegetated land; increasing evaporation of latent heat is a dominant factor in the additional energy loss under carbon dioxide forcing. It is the additional evaporation of latent heat that will constrain surface temperature response to human-caused carbon dioxide emissions.
The mathematical formulation of surface temperature response to carbon dioxide forcing is straightforward, even considering water vapour feedback. For a doubling of carbon dioxide concentration the global average surface temperature increase from 280 ppm (before industrialisation) to 560 ppm (towards the end of the twenty-first century) will only be about 0.5°C.
Why then, it might be asked, do computer models give projected temperature increases that are nearly an order of magnitude larger; and why are there claims of dangerous “tipping points” and potentially runaway global warming?
The likely answer to these questions is the recent revelation that computer models grossly under-specify the rate of increase of evaporation with temperature, the factor that constrains surface temperature increase. In 2006, US researchers Isaac Held and Brian Soden reported that, on average, the rate of increase of evaporation with temperature in computer models used for the IPCC fourth assessment is only about one third of the expected value. In 2007, Frank Wentz and his US colleagues repeated the earlier finding and, on the basis of satellite analysis of rainfall, confirmed the expected rate of increase of evaporation with temperature as the appropriate value.
The significance of the computer model shortcomings identified by the US researchers can be appreciated from the mathematical formulation of feedback amplification. As surface temperature rises under carbon dioxide forcing, it warms the overlying atmosphere and further enhances the long-wave radiation back to the surface from the atmosphere. The feedback amplification has a term of the form [1 / (1 – r)], where r is less than unity. As r increases the amplification will also increase. The term r is linked to evaporation such that any underestimation in the specification for the evaporation term causes r and the feedback amplification to be anomalously large. As the rate of increase of evaporation approaches zero then r approaches unity and the projected amplification is very large. The gross under-specification of evaporation in computer models gives inflated values of r and exaggerated amplification of global surface temperature to carbon dioxide forcing.
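The behaviour of this amplification term can be illustrated with a short numerical sketch. The values of r below are purely hypothetical, chosen only to show how the term behaves; they are not taken from any climate model.

```python
# Illustrative sketch of the feedback amplification term 1 / (1 - r)
# discussed above. The r values are hypothetical; the point is only that
# the amplification grows without bound as r approaches unity.
def amplification(r):
    """Feedback amplification factor for a feedback strength r < 1."""
    if not 0 <= r < 1:
        raise ValueError("r must lie in [0, 1) for a stable feedback")
    return 1.0 / (1.0 - r)

# Under-specifying evaporation inflates r, and hence the amplification:
for r in (0.2, 0.5, 0.8, 0.95):
    print(f"r = {r:.2f}  ->  amplification = {amplification(r):.2f}")
```

A modest overestimate of r thus produces a disproportionately large projected warming, which is the essence of the argument above.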
In the more extreme computer models, the erroneous specification of evaporation response means that the models are approaching computational instability and the global temperature projections give the appearance of “runaway global warming”. Of course, the projections are erroneous. The exaggerated surface temperature increase associated with the computer model projections is a direct consequence of the failure of the model specification and does not represent the true sensitivity of the earth’s temperature to carbon dioxide forcing. In reality, surface evaporation from the oceans and vegetated land areas will constrain surface temperature increase to about 0.5°C for a doubling of carbon dioxide concentration, which cannot be considered as dangerous.
Unfortunately, those who are entrusted with building and validating the computer models seem to be blind to the inherent failings—the models have cost so much to build, but the pretence that they are accurate and useful is fallacious. In many countries, such as Australia and the UK, research funds have been specifically allocated by Environment ministries to generate climate projections in order to underpin and give credence to environmental impositions, including indirect taxation and restriction of a range of personal freedoms targeted solely at reducing carbon dioxide emissions.
The CSIRO has taken state funding for the purpose of generating specific predictions for land use planning and water resource management at the regional level. But we should note that the CSIRO has legal disclaimers of responsibility for the truth or veracity of these predictions in case they turn out to be incorrect or misleading. The CSIRO apparently has no confidence in its computer predictions—all the risk is with the user.
Notwithstanding that computer models exaggerate the magnitude of warming, there continue to be NGO commentators and advocates who claim the danger is even more horrific than the IPCC suggests. The claim is that it is the higher temperature projections that are the more realistic and that as the temperature passes a tipping point then irreversible runaway global warming will take off. Such claims are without scientific foundation because of the constraining effect of surface evaporation that is grossly under-specified in computer models.
Is There a Sound Case for Carbon Emissions Reduction?
Australians are now being bombarded by an intense government-funded propaganda campaign to encourage people to accept the reality of dangerous human-caused climate change and support early action for “carbon pollution” reduction. The scaremongering about dangerous climate change is based on the erroneous computer model projections and the unsubstantiated extrapolations of a range of climate impacts that are only realistic if no adaptive or mitigating measures are taken.
In the absence of computer models there would be little credence given to the view that the relatively small warming of the second half of the twentieth century was due to carbon dioxide emissions; there would certainly be no credence given to the possibility of irreversible runaway global warming over the coming century. Cool heads would note that most of the earth’s surface is either ocean or freely transpiring vegetation and that surface evaporation will continue to constrain surface temperature rise, as it always has done.
The likely magnitude of human-caused global warming is so low that it will not be discernible against the background of natural variability in the climate record. Thus national or internationally co-ordinated efforts to impose carbon dioxide emission reduction for the purpose of preventing climate change will be a tremendous waste of resources. The real danger is that government-instigated measures to drastically downsize a wide range of fossil-fuel-dependent industries in order to achieve emission reduction targets will actually be effective. Such success will destroy jobs and will limit future development opportunities, with no discernible impact on climate. Then the government will realise that it is much easier to change the economy than to change the climate, and it will also find that the direction and impacts of change will be equally unpredictable.
William Kininmonth is the former head of Australia’s National Climate Centre. He was an Australian representative and consultant to the World Meteorological Organization on climate issues and is the author of Climate Change: A Natural Hazard (Multi-science Publishing Co., 2004). He will be among the speakers at the Australian Environment Foundation’s annual conference, “A Climate for Change”, in Canberra this month.
Monday, November 17, 2008
Falling Human Fertility and the Future of the Family
By Philip Longman*
* Philip Longman, Ph.D., is author of The Empty Cradle: How Falling Birthrates Threaten World Prosperity. An earlier version of this essay was delivered as remarks to the World Congress of Families IV in Warsaw, Poland, in May 2007.
We can approach a very important demographic concept by way of Cavett’s Iron Law of Population. Formulated many years ago by a man named Dick Cavett, Cavett’s Law articulates a basic truth: “If your parents never had any children, chances are you won’t either.”
Dick Cavett, obviously, was a comedian. But his observation is not just funny. It’s also importantly true. Because, as a careful analysis will demonstrate, the ongoing global decline in human birthrates is the single force that will most affect the fate of nations and the future of society in the 21st century.
Some might regard that prediction as implausible. After all, most of us just take it for granted that there will always be more and more people in the world.
We see it in our day-to-day lives. Every year, traffic gets worse. Every year, the price of waterfront property gets more prohibitive. If we turn on the television, we see images of Third World famine, war, and environmental degradation. And we see a pattern of population growth in the official numbers (see graph: “World Population Growth”).
Today, world population is increasing by some 76 million annually. That’s equivalent to adding a whole new country the size of Egypt every year. In my parents’ lifetime, world population has tripled. Just during the 50 years since I was born, world population has more than doubled.
We have grown up—and continue to live in—an era of explosive world population growth. And for most of us, this phenomenon deeply informs our world views and expectations for the future.
But now, here’s a curious fact—the first of many essential to understanding our current demographic circumstance—world population is still growing, but the world supply of children is shrinking.
The claim that global population is growing while the number of children is declining may strain credulity. But it’s true. This paradoxical trend started in Europe in the middle of the last century. Today there are 36 percent fewer children under age 5 in Europe than there were in 1960. In Poland, the number of young children declined by a full 50 percent during this period.
Now that same trend is going global. For the world as a whole, the absolute number of children aged 0 to 4 is actually six million lower today than it was in 1990.
How can this be? Where have all the children gone?
To be sure, war, hunger, and disease still carry away millions of the world’s children. In parts of Africa, as many as 20 children die for every 100 that are born.
But as horrible as this reality is, it’s not the explanation for the shrinking supply of children. Child and infant mortality are generally improving throughout the world, often dramatically. What has changed is something much bigger and newer and stranger.
It’s happening in rich countries. It’s happening in poor countries. It’s happening in Catholic countries; it’s happening in Protestant countries. It’s happening in Muslim countries, both Shia and Sunni. It’s happening throughout the Eastern Hemisphere and the Western Hemisphere, North and South. It’s happening under all forms of government.
And just what is this big, universal trend? More and more people are choosing to have only one child—or none at all. Birthrates are plummeting around the world.
Recent data indicate that among the major industrialized countries, only the United States still comes close to producing enough children to replace its population. (See chart: “Fertility in Developed Countries”.) In modern societies such as the United States, Japan, and Italy, a woman must give birth to an average of 2.1 children in order to avoid long-term population loss.
Why 2.1? To maintain a stable population, each woman needs to have one child to replace herself. Then she has to have another child to replace her male partner. Since some children die before reaching reproductive age, an additional one-tenth of a child is needed on average in modern countries to replace the population.
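The arithmetic behind the 2.1 figure can be sketched as follows. The survival rate used here is an illustrative round number for a modern country, not an official statistic.

```python
# Rough sketch of the replacement-fertility arithmetic described above.
# Assumed, illustrative figure: about 95 per cent of children in a modern
# country survive to reproductive age.
births_to_replace_parents = 2.0   # one child for the mother, one for her partner
child_survival_rate = 0.95        # hypothetical survival to reproductive age

replacement_fertility = births_to_replace_parents / child_survival_rate
print(f"approximate replacement fertility: {replacement_fertility:.1f}")
```

With higher child mortality, as in parts of sub-Saharan Africa, the same arithmetic pushes the replacement rate well above 2.1.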
And yet there are fewer and fewer places left on earth where women still have as many as 2.1 children. From Argentina to Austria, from China to the Czech Republic, sub-replacement fertility rates are now spreading to every corner of the globe.
True, the total fertility rate runs above 2.1 births per woman throughout most African countries. But the AIDS epidemic in Africa and the high infant-mortality rates found through much of sub-Saharan Africa mean that many of the countries in this region are actually experiencing below-replacement fertility levels. South Africa, for example, with a total fertility rate of just 2.24 children per woman, is surely at below replacement level, given its high rates of child mortality.
Where is fertility falling the fastest? Just where most people think it is growing the most: that is, in the Middle East. Consider, for instance, Iran’s little-noticed birth dearth. Who would know, from reading today’s headlines, that Iran’s total fertility rate fell below replacement level in 1999 and—according to the U.S. Census Bureau’s International Data Base—has since dropped to just over 1.6 lifetime births per Iranian woman, 22 percent below replacement level? Similarly, how many Americans know that Iran has been joined by four other countries in the region—Tunisia, Algeria, Lebanon, and Turkey—in the slide into sub-replacement fertility? (See chart: “Sub-Replacement Fertility North Africa and the Middle East.”)
Most countries in the Middle East, to be sure, still have birthrates above replacement levels, and the populations are still growing. But everywhere birthrates are falling, and the long-term trend is unmistakable. Even in Egypt, for instance, a country feeling the growing appeal of radical Islam, with its emphasis on patriarchy and pronatalism, total fertility has fallen from approximately seven lifetime births per Egyptian woman in the middle of the twentieth century to just three births in the early twenty-first century, with population scholars predicting a drop to sub-replacement fertility by mid century. (See graph: “Lifetime Births Per Woman in Egypt.”)
A similar picture emerges in United Nations’ demographic data for Pakistan during the same period, where total fertility has fallen from over six births per Pakistani woman in the 1950s to just three births in 2000, with demographers again anticipating a continued slide to just two births by mid-century.
The same pattern, often even more striking, is evident throughout most of Asia, as recent demographic data make clear. (See table: “Sub-Replacement Fertility in Asia.”)
In China, Taiwan, Japan and South Korea, birthrates have dropped so low that we see the emergence of so-called 4-2-1 societies: these are societies in which single-child families are the norm, and each only child eventually becomes responsible for supporting two parents and four grandparents.
Despite the images Americans see in the media of China’s teeming cities, its working-aged population is on course to begin shrinking within ten years. The big question for China is, Will it get rich before it grows old (as the West did), or will it grow old before it gets rich?
India is not yet on this list. As a whole, it still has an average fertility of about 2.5. But its southern states are already reproducing at well below replacement levels.
Sub-replacement fertility prevails in Eastern Europe as well. (See chart: “Sub-Replacement Fertility in Eastern Europe.”)
In a country such as the Czech Republic, fewer and fewer children have any siblings. If current trends continue, it will also be rare in another generation for anyone to have an aunt or an uncle. Similarly, nieces, nephews, and cousins are becoming endangered social species.
The retreat from childbearing is especially dramatic in the former Soviet Union. (See chart: “Sub-Replacement Fertility in the Former Soviet Union.”) As a consequence of its plummeting birthrates and its very high death rate, Russia has a population that is shrinking by some three-quarters of a million people a year.
What about the United States? Within North America today, we see much the same pattern that we see in the world as a whole. That is, fertility rates are falling, and falling especially fast among historically disadvantaged groups.
Indeed, recent fertility data reveal that the only major American ethnic or racial group that has experienced an increase in fertility since 1990 is non-Hispanic Whites, and that increase has been a mere one percent. (See chart: “Change in Total Fertility Rates for Major American Ethnic and Racial Groups, 1990-2002.”)
Families are also shrinking among our friends to the south. Indeed, the drop in the number of Mexican children is a national phenomenon without precedent in its speed and extent. The number of Mexican children below the age of four has fallen dramatically—by tens of thousands—just since 1995! (See graph: “Mexican Children, Age 0-4.”)
Informed observers see much the same picture elsewhere in the Americas. Today, the median age in Latin America and the Caribbean may be 10 years lower than it is in the U.S. But reliable UN projections indicate that the age gap will soon begin closing and will virtually disappear by mid-century. The gap will close particularly rapidly if—as seems quite possible—the UN’s “low variant” projection for the region’s future fertility rates proves more accurate than the “medium variant” projection. (See chart: “Shrinking Youth Supply to the South.”) Since youthful Latinos are those most likely to immigrate to the United States, rates of immigration from South to North America may well taper off in the decades ahead.
Nonspecialists may understandably wonder how the population can continue to grow in countries where fertility rates are well below replacement levels. This perplexity can be resolved by taking into account the high rates of population growth that occurred in the ’70s and ’80s. That growth wasn’t the result of high birthrates, but of falling death rates, particularly for children in the Third World. And today, that accomplishment leaves a large percentage of the population still in its prime reproductive years.
But the demographic momentum traceable to the Seventies-and-Eighties population surge is fast dwindling. In many countries, such as Italy and Spain, population momentum has already turned negative. That is to say, in these countries the decades-long decline in the number of women of childbearing age has created aging societies certain to shrink in the years ahead. Today in Italy, there are only half as many women of childbearing age as there were in 1960. A country in which women of childbearing age are disappearing is a society headed toward long-term depopulation. After a certain point, even if the average woman starts having substantially more children than her mother did, the population will still fall.
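The momentum arithmetic described above can be sketched with a toy three-cohort model. All the numbers below are illustrative inventions, not actual demographic data: the point is only to show that a population starting with a bulge of young people keeps growing for a generation or two even when fertility is held below replacement, before the decline inevitably sets in.

```python
# Toy three-cohort model of demographic momentum.
# All numbers are illustrative, not real demographic data.

def project(children, parents, elders, births_per_adult, generations):
    """Advance the population one generation per step.

    Each step: elders die off, parents age into elders, children age
    into parents, and the new parent cohort bears the next children.
    """
    history = [children + parents + elders]
    for _ in range(generations):
        children, parents, elders = (
            children * births_per_adult,  # yesterday's children bear today's
            children,                     # children become parents
            parents,                      # parents become elders
        )
        history.append(children + parents + elders)
    return history

# Youth bulge (large child and parent cohorts, small elder cohort),
# with fertility at 0.8 births per adult -- 20% below replacement.
totals = project(children=100, parents=90, elders=50,
                 births_per_adult=0.8, generations=5)
print(totals)  # total rises at first, then shrinks
```

Under these made-up assumptions the total rises from 240 to 270 in the first step, then falls below 125 five generations out: momentum first, depopulation after.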
The dynamics that push society toward depopulation create another curious feature of early 21st-century demography, something mankind has never seen before: the global population continues to grow even though almost all of that growth comes from people who have already been born. Such a growth pattern defies traditional logic. But it’s quite real.
As already noted, the global number of small children is falling. By 2050, according to one United Nations projection, there will be 248 million fewer children under age 5 in the world than there are today—even if birthrates rise in the developed world! But the population of elders will have swollen by nearly a billion. (See chart: “From Now On, Population Growth Comes From More Elders and Middle-Aged People, Not From More Infants.”) Over the next half century, then, population growth will reflect the replacement of a smaller aging generation by a larger aging generation: that is, growth will come from people who are already alive, not from the birth of new babies—as paradoxical as that might seem.
How Sure Is the Trend?
But what is driving these trends, and how likely are they to continue? In developed countries, sheer economics largely accounts for the disappearance of children. In today’s advanced economies, many people are not even done with school before their fertility (or their partner’s) begins to decline.
Then there is the rising cost of raising children. In the U.S., the direct cost of raising a middle-class child born this year through age 18, according to government estimates, exceeds $200,000—not including the cost of college tuition. As women, as well as men, have gained new economic opportunities, the real cost of children in the form of foregone wages and compromised careers can often run much, much higher. Indeed in my book, The Empty Cradle, I calculate that, for a typical middle-class couple, the opportunity cost of raising a child just through age 18 can easily surpass $1 million—again, not including the cost of college tuition.
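The opportunity-cost arithmetic can be sketched in a few lines. Every input below is a hypothetical placeholder, not a figure from The Empty Cradle; the sketch only shows how foregone wages quickly dwarf the direct sticker price of raising a child.

```python
# Back-of-the-envelope sketch of the full cost of raising one child.
# Every input is a hypothetical placeholder, not the book's data.

direct_cost = 200_000        # the cited government estimate through age 18

annual_wage = 50_000         # assumed second earner's salary
years_affected = 18          # years of reduced work while raising the child
earnings_reduction = 0.5     # assumed share of wages foregone each year

foregone_wages = annual_wage * earnings_reduction * years_affected
total_cost = direct_cost + foregone_wages

print(f"direct: ${direct_cost:,}")
print(f"foregone wages: ${foregone_wages:,.0f}")
print(f"total: ${total_cost:,.0f}")
```

Even these modest invented assumptions more than triple the direct cost; add in lost raises and compromised career trajectories and a seven-figure total becomes plausible.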
Meanwhile, although the world’s social security systems—and private pension plans—depend on the human capital created by childbearing parents, these systems create huge incentives to remain childless. We no longer must have children to find support in old age. Instead, we can rely on retirement benefits paid for by other people’s children.
Urbanization counts as another potent cultural development driving down fertility. As mega-cities have grown around the world, the percentage of the world’s population living in urban areas has grown from less than thirty percent in 1950 to over fifty percent in 2005. This trend shows no sign of abating. UN demographers anticipate that by 2030 fully sixty percent of the earth’s population will live in urban areas.
The widespread use of safe and effective contraception and abortion also pushes fertility lower. Today, among married or cohabiting women of reproductive age, slightly more than 50 percent are using modern contraceptive methods. Such levels of contraceptive use are found in rich and poor countries alike.
But a full understanding of current population patterns requires a careful look at the subjective—but perhaps more important—question of how sub-replacement fertility reflects certain cultural and religious values (“memes” in the vocabulary of Richard Dawkins). Scholars who have scrutinized polling data have limned a strong correlation between what we might call modern, individualistic, secular values on the one hand and low fertility on the other.
Survey sociologists have identified a number of value questions that separate those inclined to have large families from those inclined to minimize or altogether avoid parental responsibilities. For example, Do you distrust the army? Among Europeans, at least, those who say they do are far less likely to be married and have children, or ever to get married and have children, than those who do trust the military.
Do you think the most important goal in education is that of developing imagination and independence? Then according to polling data, there’s little chance you’ll have a large family.
Or again, are you not very proud of your nationality? Do you have little identification with the village or town you grew up in?
Do you find soft drugs acceptable? Homosexuality? Suicide and euthanasia? For whatever reason, people answering affirmatively to such questions are far more likely to live in childless, cohabitating unions than those who answer negatively. (See table: “Anti-Natal Memes: Some values and attitudes associated with low fertility.”)
A variety of other modern attitudes also go hand-in-hand with low fertility. In Brazil, for example, birthrates have dropped, state by state, coincident with the introduction of television. Today, the more hours that a Brazilian woman spends watching television, the less likely she is to have a large family.
What’s on Brazilian television? Mostly domestically produced soap operas, called telenovelas. These soaps rarely address reproductive issues directly. Instead, they typically depict wealthy individuals living the high life in big cities.
The men are dashing, lustful, power hungry, and unattached. The women are lithesome, manipulative, independent, and in control of their own bodies. The few who have young children delegate their care to nannies.
The telenovelas thus reinforce a cultural message that is conveyed as well by many North American and European cultural exports: people with wealth and sophistication are people who have at most one or two children.
Implications
Sub-replacement fertility is a pervasive global reality. But what are the implications of that reality? Should we laugh or cry, be thankful or wary?
Space does not here permit full consideration of the implications of sub-replacement fertility. But one social consequence of current fertility trends does deserve attention.
This consequence comes to light when people ask a natural question: Won’t persistent, sub-replacement fertility lead to eventual extinction? If current trends continue, Europe’s population, for example, will just wither away. But sober analysis suggests that current trends will not continue indefinitely in Europe or the West in general, for a special reason.
In Asian countries such as Japan, nearly everyone eventually marries and has one child. In Europe, and the West in general, by contrast, there is far more diversity in reproductive behavior. In my generation of Americans, for example, nearly a fifth of us never had children, and another 17 percent had only one.
The high incidence of childless and single-child families in the West has one big implication many overlook: a very large proportion of the children being born are produced by a small subset of the current population. And who are the people still having large families today?
The stereotypical answer is poor people or dumb people or members of minority groups. But the more accurate answer is deeply religious people.
To be sure, religious fundamentalists of all varieties are themselves having fewer children than in the past. But whether they be Mormons, Orthodox Jews, Islamic or Christian fundamentalists, or evangelicals, devout members of all these Abrahamic religions have on average far larger families than do the secular elements within their society.
In Europe, for example, the fertility differential between believers and non-believers has recently been estimated at 15 to 20 percent. Though children born into religious families often do not become religious themselves, many do, especially those who go on to have children of their own.
But if, over several decades, many of the secular-minded choose not to have children, the faithful begin to inherit society by default. Total population may fall, perhaps for quite a while; but those who remain will be disproportionately committed to God and family.
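This “inherit society by default” dynamic is just compound growth. A minimal sketch (the starting shares and growth rates are illustrative assumptions, and switching between groups through conversion or secularization is ignored) shows how a 20 percent fertility advantage shifts the population mix over a few generations:

```python
# How a modest fertility differential compounds across generations.
# Starting shares and growth rates are illustrative assumptions;
# movement between groups (conversion, secularization) is ignored.

def religious_share(generations, secular=0.8, religious=0.2,
                    secular_growth=0.85, fertility_advantage=1.2):
    """Religious fraction of the population after n generations."""
    for _ in range(generations):
        secular *= secular_growth                        # shrinking cohorts
        religious *= secular_growth * fertility_advantage
    return religious / (secular + religious)

for gen in range(6):
    print(f"generation {gen}: religious share = {religious_share(gen):.0%}")
```

With the secular group 15 percent below replacement and the religious group just above it, a 20 percent minority grows to nearly 40 percent of the (shrinking) total within five generations.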
Remember Cavitt’s law: If your parents never had children, chances are you won’t either.
A corollary might be, If you forgot to have children, chances are your descendants won’t grow up to be secular humanists.
Remember, too, that other strong finding of sociology, which is also enshrined in European folklore: “The apple doesn’t fall far from the tree.”
This is the way of the world.
The broad sweep of human history offers many examples of peoples, or classes of peoples, who chose to avoid the costs of parenthood. Indeed, sub-replacement fertility is a recurring tendency of human civilization. Like today’s well-fed modern nations, both ancient Greece and Rome eventually found that their elites had lost interest in the often-dreary chores of family life. Here is the Greek historian Polybius, around 140 B.C., lamenting the fate of his country as it gave way to Roman domination:
In our time all Greece was visited by a dearth of children and general decay of population... This evil grew upon us rapidly, and without attracting attention, by our men becoming perverted to a passion for show and money and the pleasures of an idle life.
By the time of Caesar Augustus, birthrates among Roman nobles had fallen so low that the Emperor felt compelled to enact steep “bachelor taxes” and otherwise punish those who remained unwed and childless. Augustus explained in clear language his deep concern about the Roman rejection of parenthood:
We liberate slaves chiefly for the purpose of making out of them as many citizens as possible; we give our allies a share in the government that our numbers may increase: yet you, Romans of the original stock...are eager that your families and names at once shall perish with you.
Needless to say, such exhortations didn’t work. Divorce became rampant in Roman society; childlessness increasingly common. When cultural and economic conditions discourage parenthood, not even a dictator—and many have tried—can force people to go forth and multiply. Eventually, the sterile, secular noble families of Imperial Rome died off, and with them, their ancestors’ traditional Roman ideals.
But what was once the Roman Empire remained populated. Only the composition of the population changed. Nearly by default, it came to be composed of new, highly patriarchal family units, hostile to the secular world and enjoined by faith either to go forth and multiply or to join a monastery. Sociologist Rodney Stark has shown that nearly all the spread of Christianity in late antiquity was the result of higher birthrates, and lower death rates, enjoyed by Christians.
With these changes came medieval Europe, but not the end of Europe, nor the end of Western civilization. Secularism and individual freedom, however, went into a long decline. This cycle in human history may be obnoxious to the enlightened, but it is still very much with us.