But whatever you think about fractional reserve banking, whether or not you agree with its critics, the truth is that we no longer have it.
The whole post is completely fascinating.
Krugman’s tribe was academic economists, and insofar as he paid any attention to people outside that tribe, his enemy was stupid pseudo-economists who didn't understand what they were talking about but who, with attention-grabbing titles and simplistic ideas, persuaded lots of powerful people to listen to them. He called these types “policy entrepreneurs”—a term that, by differentiating them from the academic economists he respected, was meant to be horribly biting. He was driven mad by Lester Thurow and Robert Reich in particular, both of whom had written books touting a theory that he believed to be nonsense: that America was competing in a global marketplace with other countries in much the same way that corporations competed with one another. In fact, Krugman argued, in a series of contemptuous articles in Foreign Affairs and elsewhere, countries were not at all like corporations. While another country’s success might injure our pride, it would not likely injure our wallets. Quite the opposite: it would be more likely to provide us with a bigger market for our products and send our consumers cheaper, better-made goods to buy. A trade surplus might be a sign of weakness, a trade deficit a sign of strength. And, anyway, a nation’s standard of living was determined almost entirely by its productivity—trade was just not that important.
When Krugman first began writing articles for popular publications, in the mid-nineties, Bill Clinton was in office, and Krugman thought of the left and the right as more or less equal in power. Thus, there was no pressing need for him to take sides—he would shoot down idiocy wherever it presented itself, which was, in his opinion, all over the place. He thought of himself as a liberal, but he was a liberal economist, which wasn’t quite the same thing as a regular liberal. Until the late nineties, when he became absorbed by what was going wrong with Japan, he believed that monetary policy, rather than government spending, was all that was needed to avoid recessions: he agreed with Milton Friedman that if only the Fed had done its job better the Great Depression would never have happened. He thought that people who wanted to boycott Nike and other companies that ran sweatshops abroad were sentimental and stupid. Yes, of course, those foreign workers weren't earning American wages and didn't have American protections, but working in a sweatshop was still much better than their alternatives—that’s why they chose to work there. Moreover, sweatshops really weren't the threat to American workers that the left claimed they were. “A back-of-the-envelope calculation . . . suggests that capital flows to the Third World since 1990 . . . have reduced real wages in the advanced world by about 0.15%,” he wrote in 1994. That was not nothing, but it certainly wasn't anything to get paranoid about. The world needed more sweatshops, not fewer. Free trade was good for everyone. He felt that there was a market hatred on the left that was as dogmatic and irrational as government hatred on the right. --The New Yorker
Certainly until the Enron scandal, Krugman had no sense that there was any kind of problem in American corporate governance. (He consulted briefly for Enron before he went to the Times.) Occasionally, he received letters from people claiming that corporations were cooking the books, but he thought this sounded so implausible that he dismissed them. “I believed that the market was enforcing,” he says. “I believed in the S.E.C. I just never really thought about it. It seemed like a pretty sunny world in 1999, and, for all of my cynicism, I shared a lot of that. The extent of corporate fraud, the financial malfeasance, the sheer viciousness of the political scene—those are all things that, ten years ago, I didn't see.” --The New Yorker
Again, as in his trade theory, it was not so much his idea that was significant as the translation of the idea into mathematical language. “I explained this basic idea”—of economic geography—“to a non-economist friend,” Krugman wrote, “who replied in some dismay, ‘Isn't that pretty obvious?’ And of course it is.” Yet, because it had not been well modelled, the idea had been disregarded by economists for years. Krugman began to realize that in the previous few decades economic knowledge that had not been translated into models had been effectively lost, because economists didn't know what to do with it. --The New Yorker
The number of Americans taking antidepressants doubled in a decade, from 13.3 million in 1996 to 27 million in 2005. [Basically 1 in 10 Americans.] -- Newsweek, January 29, 2010
In 1998, researchers examined 38 manufacturer-sponsored studies involving just over 3,000 depressed patients. The authors, psychology researchers Irving Kirsch and Guy Sapirstein of the University of Connecticut, saw—as everyone else had—that patients did improve, often substantially, on SSRIs, tricyclics, and even MAO inhibitors, a class of antidepressants that dates from the 1950s. This improvement, demonstrated in scores of clinical trials, is the basis for the ubiquitous claim that antidepressants work. But when Kirsch compared the improvement in patients taking the drugs with the improvement in those taking dummy pills—clinical trials typically compare an experimental drug with a placebo—he saw that the difference was minuscule. Patients on a placebo improved about 75 percent as much as those on drugs. Put another way, three quarters of the benefit from antidepressants seems to be a placebo effect....
Out of the blue, Kirsch received a letter from Thomas Moore, who was then a health-policy analyst at George Washington University. You could expand your data set, Moore wrote, by including everything drug companies sent to the FDA—published studies, like those analyzed in "Hearing Placebo," but also unpublished studies. In 1998 Moore used the Freedom of Information Act to pry such data from the FDA. The total came to 47 company-sponsored studies—on Prozac, Paxil, Zoloft, Effexor, Serzone, and Celexa—that Kirsch and colleagues then pored over. (As an aside, it turned out that about 40 percent of the clinical trials had never been published. That is significantly higher than for other classes of drugs, says Lisa Bero of the University of California, San Francisco; overall, 22 percent of clinical trials of drugs are not published. "By and large," says Kirsch, "the unpublished studies were those that had failed to show a significant benefit from taking the actual drug.") In just over half of the published and unpublished studies, he and colleagues reported in 2002, the drug alleviated depression no better than a placebo. "And the extra benefit of antidepressants was even less than we saw when we analyzed only published studies," Kirsch recalls. About 82 percent of the response to antidepressants—not the 75 percent he had calculated from examining only published studies—had also been achieved by a dummy pill.
The extra effect of real drugs wasn't much to celebrate, either. It amounted to 1.8 points on the 54-point scale doctors use to gauge the severity of depression, through questions about mood, sleep habits, and the like. Sleeping better counts as six points. Being less fidgety during the assessment is worth two points. In other words, the clinical significance of the 1.8 extra points from real drugs was underwhelming. Now Kirsch was certain. "The belief that antidepressants can cure depression chemically is simply wrong," he told me in January on the eve of the publication of his book The Emperor's New Drugs: Exploding the Anti-depressant Myth.
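The relationship between the percentages and the 1.8-point gap can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a hypothetical round-number mean drug response of 10 points (only the 82 percent placebo share and the 1.8-point gap come from the text above):

```python
# Back-of-the-envelope check of Kirsch's figures. The 10-point mean drug
# response is a hypothetical round number chosen for illustration; the 82%
# placebo share is the figure reported from the FDA data set.

drug_response = 10.0        # mean improvement on the drug (scale points)
placebo_share = 0.82        # fraction of that response also seen on placebo
placebo_response = drug_response * placebo_share

extra_drug_effect = drug_response - placebo_response
print(round(extra_drug_effect, 1))  # → 1.8, the drug's edge over placebo
```

Against a 54-point scale on which better sleep alone counts for six points, that 1.8-point edge is why Kirsch calls the benefit clinically underwhelming.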
Even Kirsch's analysis, however, found that antidepressants are a little more effective than dummy pills—those 1.8 points on the depression scale. Maybe Prozac, Zoloft, Paxil, Celexa, and their cousins do have some non-placebo, chemical benefit. But the small edge of real drugs compared with placebos might not mean what it seems, Kirsch explained to me one evening from his home in Hull. Consider how research on drugs works. Patient volunteers are told they will receive either the drug or a placebo, and that neither they nor the scientists will know who is getting what. Most volunteers hope they get the drug, not the dummy pill. After taking the unknown meds for a while, some volunteers experience side effects. Bingo: a clue they're on the real drug. About 80 percent guess right, and studies show that the worse side effects a patient experiences, the more effective the drug. Patients apparently think, this drug is so strong it's making me vomit and hate sex, so it must be strong enough to lift my depression. In clinical-trial patients who figure out they're receiving the drug and not the inert pill, expectations soar. That matters because belief in the power of a medical treatment can be self-fulfilling (that's the basis of the placebo effect). The patients who correctly guess that they're getting the real drug therefore experience a stronger placebo effect than those who get the dummy pill, experience no side effects, and are therefore disappointed. That might account for antidepressants' slight edge in effectiveness compared with a placebo, an edge that derives not from the drugs' molecules but from the hopes and expectations that patients in studies feel when they figure out they're receiving the real drug.
In an analysis of six large experiments [published in the Journal of the American Medical Association in January 2010] in which, as usual, depressed patients received either a placebo or an active drug, the true drug effect—that is, in addition to the placebo effect—was "nonexistent to negligible" in patients with mild, moderate, and even severe depression. Only in patients with very severe symptoms (scoring 23 or above on the standard scale) was there a statistically significant drug benefit. Such patients account for about 13 percent of people with depression. "Most people don't need an active drug," says Vanderbilt's Hollon, a coauthor of the study. "For a lot of folks, you're going to do as well on a sugar pill or on conversations with your physicians as you will on medication. It doesn't matter what you do; it's just the fact that you're doing something."
Right about here, people scowl and ask how anti-depressants—especially those that raise the brain's levels of serotonin—can possibly have no direct chemical effect on the brain. Surely raising serotonin levels should right the synapses' "chemical imbalance" and lift depression. Unfortunately, the serotonin-deficit theory of depression is built on a foundation of tissue paper. How that came to be is a story in itself, but the basics are that in the 1950s scientists discovered, serendipitously, that a drug called iproniazid seemed to help some people with depression. Iproniazid increases brain levels of serotonin and norepinephrine. Ergo, low levels of those neurotransmitters must cause depression. More than 50 years on, the presumed effectiveness of antidepressants that act this way remains the chief support for the chemical-imbalance theory of depression. Absent that effectiveness, the theory hasn't a leg to stand on. Direct evidence doesn't exist. Lowering people's serotonin levels does not change their mood. And a new drug, tianeptine, which is sold in France and some other countries (but not the U.S.), turns out to be as effective as Prozac-like antidepressants that keep the synapses well supplied with serotonin. The mechanism of the new drug? It lowers brain levels of serotonin. "If depression can be equally affected by drugs that increase serotonin and by drugs that decrease it," says Kirsch, "it's hard to imagine how the benefits can be due to their chemical activity."
Antidepressants had sales of $9.6 billion in the U.S. in 2008.
Later studies have shown that patients suffering from depression and anxiety do equally well when treated by psychoanalysts and by behavioral therapists; that there is no difference in effectiveness between C.B.T., which focuses on the way patients reason, and interpersonal therapy, which focuses on their relations with other people; and that patients who are treated by psychotherapists do no better than patients who meet with sympathetic professors with no psychiatric training. Depressed patients in psychotherapy do no better or worse than depressed patients on medication. There is little evidence to support the assumption that supplementing antidepressant medication with talk therapy improves outcomes. What a load of evidence does seem to suggest is that care works for some of the people some of the time, and it doesn't much matter what sort of care it is. Patients believe that they are being cared for by someone who will make them feel better; therefore, they feel better. It makes no difference whether they’re lying on a couch interpreting dreams or sitting in a Starbucks discussing the concept of “flow.” --The New Yorker, March 1, 2010
It's bracing to see how depression is treated in other countries, where the relationship between drug manufacturers and physicians isn't quite so hand-in-glove. Great Britain's National Institute for Health and Clinical Excellence, for example, recommends that, before taking antidepressants, people with mild or moderate depression should undergo nine to 12 weeks of guided self-help, nine to 12 weeks of cognitive behavioral therapy, and 10 to 14 weeks of exercise classes.
"Behind every loan shark, there lurks an implicit threat." --The Ascent of Money, p. 40
"Prior to the 1390s, it might legitimately be suggested, the Medici were more gangsters than bankers: a small-time clan, notable more for low violence than high finance. Between 1343 and 1360 no fewer than five Medici were sentenced to death for capital crimes." p. 42
"There were no debtors' prisons in the United States in the early 1800s, at a time when English debtors could end up languishing in jail for years." p. 60
"'War', declared the ancient Greek philosopher Heraclitus, 'is the father of all things.' It was certainly the father of the bond market. In Pieter van der Heyden's extraordinary engraving, The Battle about Money, piggy banks, money bags, barrels of coins, and treasure chests -- most of them heavily armed with swords, knives and lances -- attack each other in a chaotic free-for-all. The Dutch verses below the engraving say: 'It's all for money and goods, this fighting and quarreling.' But what the inscriptions could equally well have said is: 'This fighting is possible only if you can raise the money to pay for it.' The ability to finance war through a market for government debt was, like so much else in financial history, an invention of the Italian Renaissance." p. 69
"The Battle of Waterloo was the culmination of more than two decades of intermittent conflict between Britain and France. But it was more than a battle between two armies. It was also a contest between rival financial systems: one, the French, which under Napoleon had come to be based on plunder (the taxation of the conquered); the other, the British, based on debt." p. 80
"In many ways, it was true that the bond market was powerful. By the later nineteenth century, countries that defaulted on their debts risked economic sanctions, the imposition of foreign control over their finances and even, in at least five cases, military intervention. It is hard to believe that Gladstone would have ordered the invasion of Egypt in 1882 if the Egyptian government had not threatened to renege on its obligations to European bondholders, himself among them. Bringing an 'emerging market' under the aegis of the British Empire was the surest way to remove political risk from investors' concerns. Even those outside the Empire risked a visit from a gunboat if they defaulted, as Venezuela discovered in 1902, when a joint naval expedition by Britain, Germany and Italy temporarily blockaded the country's ports. The United States was especially energetic (and effective) in protecting bondholders' interests in Central America and the Caribbean." p. 98 [By the way, don't you just love how Ferguson describes international war crimes as "energetic (and effective)" ways of protecting bondholders' interests!]
"As [Jan Pieterszoon] Coen [officer of the Dutch East India Company (VOC) in the early seventeenth century] himself put it: 'We cannot make war without trade, nor trade without war.' He was ruthless in his treatment of competitors, executing British East India Company officials at Amboyna and effectively wiping out the indigenous Bandanese." p. 134
"Besides cheaper calories, cheaper wood and cheaper wool and cotton, imperial expansion brought other unintended economic benefits, too. It encouraged the development of militarily useful technologies -- clocks, guns, lenses and navigational instruments -- that turned out to have big spin-offs for the development of industrial machinery." p. 285-286.
"The key problem with overseas investment, then as now, is that it is hard for investors in London or New York to see what a foreign government or an overseas manager is up to when they are an ocean or more away. Moreover, most non-Western countries had, until quite recently, highly unreliable legal systems and differing accounting rules. If a foreign trading partner decided to default on its debts, there was little that an investor situated on the other side of the world could do. In the first era of globalization, the solution to this problem was brutally simple but effective: to impose European rule." p. 289
"The question, Am I one of the elect? must sooner or later have arisen for every believer and have forced all other interests into the background." --The Protestant Ethic & The Spirit of Capitalism, p. 110.
"On the one hand it is held to be an absolute duty to consider oneself chosen, and to combat all doubts as temptations of the devil, since lack of self-confidence is the result of insufficient faith, hence of imperfect grace... On the other hand, in order to attain that self-confidence intense worldly activity is recommended as the most suitable means." --The Protestant Ethic and the Spirit of Capitalism, p. 111 and 112
"The God of Calvinism demanded of his believers not single good works, but a life of good works combined into a unified system. There was no place for the very human Catholic cycle of sin, repentance, atonement, release, followed by renewed sin.... [Protestantism] had developed a systematic method of rational conduct with the purpose of overcoming the status naturae, to free man from the power of irrational impulses and his dependence on the world and on nature." p. 117 - 118
"Sebastian Franck struck the central characteristic of this type of religion when he saw the significance of the Reformation in the fact that now every Christian had to be a monk all his life.... By founding its ethic in the doctrine of predestination, Protestantism substituted for the spiritual aristocracy of monks outside of and above the world the spiritual aristocracy of the predestined saints of God within the world." p. 121
"For Christians, lending money at interest was a sin. Usurers, people who lent money at interest, had been excommunicated by the Third Lateran Council in 1179. Even arguing that usury was not a sin had been condemned as heresy by the Council of Vienna in 1311-12. Christian usurers had to make restitution to the Church before they could be buried on hallowed ground." --The Ascent of Money, p. 35
"Of particular importance in the Medici's early business were the bills of exchange (cambium per literas) that had developed in the course of the Middle Ages as a way of financing trade. If one merchant owed another a sum that could not be paid in cash until the conclusion of a transaction some months hence, the creditor could draw a bill on the debtor and use the bill as a means of payment in its own right or obtain cash for it at a discount from a banker willing to act as broker. Whereas the charging of interest was condemned as usury by the Church, there was nothing to prevent a shrewd trader making profits on such transactions. That was the essence of the Medici business. There were no checks; instructions were given orally and written in the bank's books. There was no interest; depositors were given discrezione (in proportion to the annual profits of the firm) to compensate them for risking their money." --The Ascent of Money, p. 43-44
"From 1515 until early 1524, Luther's works indicate that he was completely opposed to lending money at interest. In the second time period, from late 1524 until his death in 1546, while still principally against usury -- especially among Christians -- Luther's writings indicate that he allowed for the practice of lending money at interest, albeit with certain restrictions." --Reforming the Morality of Usury: A Study of the Differences that Separated the Protestant Reformers, David Jones p. 52
"Luther's writings reveal that he tolerated and even suggested guidelines whereby usury may be practiced in the kingdom of this world. These guidelines include a call for itemized collateral, shared risk, and governmental oversight of usurious transactions." -- Reforming the Morality of Usury: A Study of the Differences that Separated the Protestant Reformers, p. 61
"Calvin knew there were two Hebrew words translated as “usury.” One, neshek, meant “to bite”; the other, tarbit, meant “to take legitimate increase.” Based on these distinctions, Calvin argued that only “biting” loans were forbidden. Thus, one could lend at interest to business people who would make a profit using the money." -- Norman Jones, Utah State University
"It was in Amsterdam, London and Stockholm [all cities that broke from Catholicism during the Reformation] that the next decisive wave of financial innovation occurred, as the forerunners of modern central banks made their first appearance. The seventeenth century saw the foundation of three distinctly novel institutions that, in their different ways, were intended to serve a public as well as a private financial function. The Amsterdam Exchange Bank (Wisselbank) was set up in 1609 to resolve the practical problems created for merchants by the circulation of multiple currencies in the United Provinces, where there were no fewer than fourteen different mints and copious quantities of foreign coins. By allowing merchants to set up accounts denominated in a standardized currency, the Exchange Bank pioneered the system of checks and direct debits or transfer that we take for granted today. This allowed more and more commercial transactions to take place without the need for the sums involved to materialize in actual coins. One merchant could make a payment to another simply by arranging for his account at the bank to be debited and the counterparty's account to be credited. The limitation on this system was simply that the Exchange Bank maintained something close to a 100 percent ratio between its deposits and its reserves of precious metal and coin....
It was in Stockholm nearly half a century later, with the foundation of the Swedish Riksbank in 1656, that the barrier was broken through. Although it performed the same functions as the Dutch Wisselbank, the Riksbank was also designed to be a Lanebank, meaning that it engaged in lending as well as facilitating commercial payments. By lending amounts in excess of its metallic reserve, it may be said to have pioneered the practice of what would later be known as fractional reserve banking, exploiting the fact that money left on deposit could profitably be lent out to borrowers. Since depositors were highly unlikely to ask en masse for their money, only a fraction of their money needed to be kept in the Riksbank's reserves at any given time." --The Ascent of Money, p. 48-49
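The mechanics Ferguson describes can be sketched numerically. Under the textbook money-multiplier idealization (an illustration, not Ferguson's own example, and the 10 percent reserve ratio is an assumption), each loan is redeposited and re-lent, so a given stock of coin supports a much larger stock of deposits:

```python
# Minimal sketch of fractional reserve lending (textbook money-multiplier
# idealization; the 10% reserve ratio is illustrative, not from the text).

def total_deposits(initial_deposit: float, reserve_ratio: float,
                   rounds: int = 200) -> float:
    """Total deposits created when each loan is redeposited in full."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # bank keeps the reserve, lends the rest
    return total

# 100 in coin at a 10% reserve ratio: deposits approach 100 / 0.10 = 1,000.
print(round(total_deposits(100.0, 0.10)))  # → 1000
```

The loop is just a geometric series: deposits sum toward `initial_deposit / reserve_ratio`, which is why the Wisselbank's near-100-percent reserve ratio created essentially no new money while the Riksbank's fractional reserve did.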
To work the mines, the Spaniards at first relied on paying wages to the inhabitants of nearby villages. But conditions were so hard that from the late sixteenth century a system of forced labor (la mita) had to be introduced, whereby men aged between 18 and 50 from the sixteen highland provinces were conscripted for seventeen weeks a year. Mortality among the miners was horrendous, not least because of constant exposure to the mercury fumes generated by the patio process of refinement, whereby ground-up silver ore was trampled into an amalgam with mercury, washed and then heated to burn off the mercury. The air down the mine shafts was (and remains) noxious and miners had to descend seven-hundred-foot shafts on the most primitive of steps, clambering back up after long hours of digging with sacks of ore tied to their backs. (p. 21)
"What the Spaniards had failed to understand is that the value of precious metal is not absolute. Money is worth only what someone else is willing to give you for it. An increase in its supply will not make a society richer, though it may enrich the government that monopolizes the production of money. Other things being equal, monetary expansion will merely make prices higher. There was in fact no reason other than historical happenstance that money was for so long equated in the Western mind with metal. In ancient Mesopotamia, beginning around five thousand years ago, people used clay tokens to record transactions involving agricultural produce like barley or wool, or metals such as silver. Rings, blocks or sheets made of silver certainly served as ready money (as did grain), but the clay tablets were just as important, and probably more so. A great many have survived, reminders that when human beings first began to produce written records of their activities they did so not to write history, poetry or philosophy, but to do business." (The Ascent of Money, p. 26 - 27.)
"What the conquistadors failed to understand is that money is a matter of belief, even faith: belief in the person paying us; belief in the person issuing the money he uses or the institution that honors his checks or transfers. Money is not metal. It is trust inscribed: on silver, on clay, on paper, on a liquid crystal display. Anything can serve as money, from the cowrie shells of the Maldives to the huge stone discs used on the Pacific islands of Yap. And now, it seems, in this electronic age nothing can serve as money too." (p. 29 - 30.)
"It is no coincidence that in English the root of 'credit' is credo, the Latin for 'I believe.'" (p. 30)
"Cursed with an abundance of precious metal, mighty Spain failed to develop a sophisticated banking system, relying instead on the merchants of Antwerp for short-term cash advances against future silver deliveries. The idea that money was really about credit, not metal, never quite caught on in Madrid. Indeed, the Spanish crown ended up defaulting on all or part of its debt no fewer than fourteen times between 1557 and 1696. With a track record like that, all the silver in Potosi could not make Spain a secure credit risk. In the modern world, power would go to the bankers, not the bankrupts." (p. 52)