GED Social Studies Practice Test: Business Cycles

Just as life doesn’t stand still, neither does the business cycle. This means that the real output of goods and services (and, thus, GDP and per-capita income) fluctuates over time.

Some times are good (think of the 1920s and 1990s) and some times are not so good (think of the late 1970s), or even catastrophic (think of the Great Depression that began in 1929 and the Great Recession that began in 2007).

There are four stages of the business cycle:

  1. Expansion (or, in some cases, recovery)
  2. Peak
  3. Contraction (also known as recession or depression)
  4. Trough

[Figure: The four stages of the business cycle]

Expansion. During this period, the economy grows (as does personal income). War, it so happens, is good for an economy. During World War II, the GDP more than doubled, because (a) so many goods and services were needed and (b) almost anyone who wanted a job could have one.

Peak. When producers are producing all they can and consumers are consuming all they can, the economy reaches a peak. The economy seems to be humming. People have jobs and money to spend. But, to use the cliché, what goes up must come down. Producers may hit their maximum capacity. Consumers may be spending all they can. When that happens, the GDP stops increasing, and the economy slips into a contraction.

Contraction. When an economy shrinks rather than grows, we call this a contraction. A recession occurs when the GDP declines for two successive quarters (three-month periods). Anything more than that, and the economy runs the risk of falling into a depression. To be honest, there’s no one-size-fits-all definition of a depression, but depressions usually last a year or more. The Great Depression began in 1929 and lasted, more or less, until about 1940 (after things began to improve under the New Deal, President Roosevelt throttled back those programs, and there was a recession-within-the-Depression in 1937 and 1938). While a recession is a normal (but definitely unpleasant) part of the business cycle, a depression (and it’s a scary word) is an extreme event. One of the cleverest ways of distinguishing between the two (even if it’s not completely accurate) was popularized by Ronald Reagan during his 1980 presidential campaign: “A recession is when your neighbor loses his job. A depression is when you lose yours.”
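To make the two-quarters rule concrete, here is a minimal sketch in Python (the quarterly GDP figures are invented for illustration, not real data) that flags a recession whenever real GDP falls for two quarters in a row.

```python
# Hypothetical quarterly real GDP figures, in billions of dollars (invented for illustration).
gdp_by_quarter = [18_200, 18_350, 18_300, 18_150, 18_400, 18_500]

def had_recession(gdp):
    """Return True if GDP declined in two successive quarters at any point in the series."""
    for i in range(2, len(gdp)):
        if gdp[i] < gdp[i - 1] < gdp[i - 2]:
            return True
    return False

print(had_recession(gdp_by_quarter))  # True: GDP fell in quarters 3 and 4
```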

Trough. When the recession or depression hits as low as it can go (in 1933, the U.S. unemployment rate was close to 25%), we call this a trough. In the case of the Great Depression, the economy began to grow again (although slowly) after President Franklin Roosevelt and Congress began to implement the “Three R’s” of the New Deal: Relief, Recovery, and Reform. The free-fall the economy experienced when it entered the Great Recession (which began in late 2007) stopped when the brand-new administration of President Barack Obama and Congress passed a $700-plus billion stimulus package.

Work: It’s a Four-Letter Word

Except for a tiny percentage of people in the world (lucky devils), people work to earn wages (money in exchange for labor or services performed). Of course, there are other reasons why people work (self-esteem, sense of accomplishment, etc.), but in order to get money to live, most people work — or want to. In places where there is high unemployment (especially among the young), you’ll find political and social instability.

Unemployment affects individuals like Joe or Jane, but in the big picture it is a key factor in macroeconomics, because it has policy and monetary implications for a nation or a region. In the United States, the federal government looks carefully at the unemployment rate each month to help gauge the health of the economy.

Nations look at the unemployment rate as one key indicator of economic health. But what, exactly, do “employed” and “unemployed” mean? After all, your 90-year-old Great Uncle Fred doesn’t work. Is he unemployed? Is a stay-at-home dad considered “employed”?

Let’s begin with what it means to be “employed.” If you work for pay for another person or establishment for at least one hour a week, you’re considered employed. If you work 15 or more hours a week in a family business without pay, you’re still considered employed. You’re also considered employed if you have a job but you’re not working because of vacation, illness, or a work stoppage.

If you do not fall into one of these categories, and you are looking for work (more on that in a bit), then you are considered, by the government’s criteria, to be unemployed. Each month, the federal government’s Bureau of Labor Statistics compiles data from a large monthly survey of households to figure out employment and unemployment rates.

You can be out of work but not unemployed. Suppose you have used up your federal unemployment benefits and simply “drop out” — you stop looking for work. Since you are “off the radar,” the government doesn’t count you as unemployed. People who are retired, on medical disability, or unable (for a variety of reasons) to work are not considered unemployed either.
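The official unemployment rate is the number of unemployed people divided by the labor force (the employed plus the unemployed); people who have stopped looking are not counted in either group. Here is a minimal sketch in Python, using invented figures, that shows why workers who “drop out” of the job search actually lower the official rate.

```python
# Invented figures for a small town (not real BLS data).
employed = 900            # worked for pay, or 15+ unpaid hours in a family business
unemployed = 100          # out of work and actively looking for work
not_in_labor_force = 250  # retired, disabled, discouraged, or otherwise not looking

labor_force = employed + unemployed
print(f"Unemployment rate: {unemployed / labor_force:.1%}")  # 10.0%

# Suppose 20 unemployed people exhaust their benefits and stop looking for work.
# They move out of the labor force, and the official rate actually falls.
unemployed -= 20
not_in_labor_force += 20
labor_force = employed + unemployed
print(f"Rate after 20 people stop looking: {unemployed / labor_force:.1%}")  # about 8.2%
```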

There are different types of unemployment, too.

Structural unemployment is usually the result of technological change, the movement of industries to other nations, or the depletion of natural resources. There are fewer and fewer telephone operators as networks go digital and people use email. In communities built around mining or logging, when all of the gold (or coal, or timber) is gone, there may be mass unemployment. “Gold Rush” towns once dotted the Western states; today, they are ghost towns.

Seasonal unemployment is just what it sounds like: it depends on the time of year. If you own and operate an amusement park, you’re probably not working during the winter. Many agricultural workers experience seasonal unemployment; once the crops are harvested and shipped, they may not have anything to do until the next planting season.

Cyclical unemployment follows the up-and-down business cycle. Below is a chart of the U.S. unemployment rate from 1900 to 1998. The period of highest unemployment was in the 1930s, in the depths of the Great Depression. In 1933, the unemployment rate was 25%, meaning one in four people who wanted work couldn’t find any. By the time the U.S. entered World War II, unemployment had dropped dramatically, to under 5%.

The Value of Money: Inflation and Deflation

You probably remember how much candy a dollar could buy when you were a kid. Today, that same dollar bill probably buys much less candy. That’s inflation: when the value of money decreases, prices for goods and services go up. Inflation usually occurs when demand outstrips supply (and people are willing to pay more), and it decreases the purchasing power of consumers.

Deflation is the opposite: prices decrease, and the amount of goods a dollar can buy increases. Deflation usually occurs when supply exceeds demand (as it did during the Great Depression), and it makes money more valuable.
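Here is a minimal sketch in Python of how inflation eats away at purchasing power over time; the $100 starting amount and the 5% annual inflation rate are invented for illustration.

```python
# Invented figures: $100 held in cash, with prices rising 5% per year.
money = 100.00
inflation_rate = 0.05

for year in range(1, 6):
    # After `year` years of 5% inflation, the same $100 buys only as much
    # as this smaller amount buys today.
    purchasing_power = money / (1 + inflation_rate) ** year
    print(f"Year {year}: ${money:.2f} buys what ${purchasing_power:.2f} buys today")
```

Under deflation, the same arithmetic runs in reverse: with a negative rate, the same dollar buys a little more each year.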

There are two causes of inflation. Demand-pull inflation is the result of aggregate (total) demand increasing faster than an economy can produce goods and services. After World War II, for example, everyone wanted to buy a car (none had been made for 4 years). With 16 million people being mustered out of the armed services, there was a tremendous demand for cars, radios, apartments, and clothing. Demand was far higher than supply (the American economy was just beginning to convert from its wartime production back to civilian production). So, the prices of goods were pulled higher. If two people want the same apartment, the landlord can begin to ask for a higher price (and maybe even start a bidding war). When 50 women want the 30 dresses available, the price of each dress will rise.

The second cause is cost-push inflation: when the cost of producing a good or service rises (for example, because raw materials have become more expensive), producers must raise their prices in order to cover their costs and make a profit. Think of the oil companies after the oil shocks of the 1970s, when oil-producing nations began to decrease shipments of petroleum to the U.S., and the price of gasoline and heating oil soared. If it costs me more to make or get something, I’m simply going to charge you more when you come around to buy it. In the chart below, you can see the deflation that occurred in the Great Depression and the high inflation of the late 1970s and early 1980s.

Inflation usually hurts consumers. The money they have buys fewer goods. When there’s inflation, your money won’t go as far, and you may have to give some things up: people keep spending on necessities like food and baby products, but non-essential purchases may dry up.

Inflation also hurts because you’re making the same salary, but costs of goods and services are going up. All of a sudden your salary, which once seemed adequate, isn’t covering rent, groceries, child care, and other things in your budget. Since you don’t “have as much money as you used to,” you’ll save and invest less.

Inflation is especially hard on those people who live on fixed incomes (such as people who receive disability payments or Social Security). In addition, when inflation is high, the interest rate on loans is higher, making it more difficult to borrow money.

Inflation does tend to make the value of assets such as real estate higher. If you sell your house in a period of high inflation (if you can), you may find you’ll walk away with more money.

Poverty in the United States

The United States is one of the richest nations in the world in terms of GDP, but that hardly means everyone is well-off, or even getting by. If a family of four in 2010 had an aggregate income of $26,675 or less, it was considered to be below the poverty threshold. In 2014, a single person making less than $11,670 was below the poverty threshold; the 2014 threshold for a family of four was $23,850. According to the Bureau of the Census, 14.7 million people in the United States had incomes between 100% and 175% of the poverty level.
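Using the 2014 guidelines quoted above, here is a minimal sketch in Python of the below-the-threshold comparison; the household incomes are invented for illustration.

```python
# 2014 poverty thresholds cited above, keyed by household size.
thresholds = {1: 11_670, 4: 23_850}

# Invented household incomes for illustration.
households = [
    {"size": 1, "income": 10_500},
    {"size": 4, "income": 25_000},
]

for h in households:
    status = "below" if h["income"] < thresholds[h["size"]] else "at or above"
    print(f"A household of {h['size']} earning ${h['income']:,} is {status} the poverty threshold")
```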

Certain areas of the U.S. have much more poverty than others. Regions with a non-diversified economy (an overreliance on, for example, agriculture, as in parts of the South) tend to have more poverty.

Today there is a growing income gap. You’ve probably heard this referred to as “the one percent” (that is, the 1% of the American population with the greatest income). There’s no question that a small number of Americans control a share of the wealth far beyond what their numbers would suggest. Finding sound data on this is hard, because many groups that compile figures have a political or ideological agenda, but it’s generally agreed that the top 1% of the population controls about a third of the total wealth in the U.S., the rest of the top fifth controls about 48%, and the bottom 80% of the population controls only about 19%. In other words, the rich are getting richer, while the poor are getting poorer.

Is this good for democracy? Most would say not. Is it stoppable? Probably not, unless massive tax changes were to transfer much of the wealth. Raising the minimum wage, to lift more people out of poverty, might help, but it’s a very controversial idea (as you may already know from reading the newspapers and watching the news).

 
