Sara Mead, a partner at Bellwether and the lead author of this report, echoed Petrilli’s assessment, noting that New Jersey’s legislators, for example, wrote laws establishing charter schools in Newark and Paterson but resisted efforts to create them in the Garden State’s less urban areas.

But other analysts suggest that suburbanites simply aren’t looking for alternatives because they’re already satisfied with their traditional public schools. After all, studies show that suburban schools, on average, outperform their urban counterparts. “It’s all about demand,” said Bruce Fuller, a professor of education and public policy at UC Berkeley who’s conducted extensive research on the topic. “The middle class doesn’t want charter schools—they don’t need them. The demand is in the city.”

The greatest demand for charters comes from parents in urban areas like Newark and D.C. that have struggled with low-performing traditional public schools, according to Fuller, who also suggested that parents in these areas like the structure and discipline-heavy ethos embraced by KIPP and other charter networks. “Moms always say they like the uniforms,” he said. “They like that the charter school is a serious place. They have an affection for smaller, more orderly places.”

Fuller said the charter-school movement in California, where he’s focused much of his research, began in the majority-white suburbs, largely as an effort by white families to seal themselves off from people of color. But then charter-school operators like KIPP and Aspire stepped in, creating the state’s highest-demand charter schools in inner-city neighborhoods.

If states lifted some of their restrictions on the numbers and locations of charter schools, according to Petrilli, more would begin to pop up in the suburbs. “Where they have been allowed, they exist. Middle-class parents like choice, too.” In the meantime, new forms of charter schools are attracting a wider range of students and families. Education Next’s Richard Whitmire recently reported that middle-class parents are participating in “intentionally diverse” charter schools—institutions designed to include a mix of families by race and income. In cities undergoing gentrification, charter schools are pulling children from a wide variety of backgrounds. (It’s also worth noting that evangelical homeschoolers in California have started creating their own charter schools.)

Ultimately, even without charter schools, suburban students and their families aren’t necessarily locked into one local school. Dissatisfied parents have the means to move to a town with a better school system or to send their children to private schools. They may also enjoy increased options within the traditional public-school system: In my own area of Northern New Jersey, students can pay tuition to attend public schools in other towns, and there are specialized and highly ranked magnet schools. For six years, my school district paid for my youngest son to attend a public school in a different town because it had a special inclusion program for children with high-functioning autism.

Perhaps suburbs aren’t clamoring for charter schools because they already have choices.

Gun-rights advocates have waged a relentless battle to gut what remains of America’s lax and inadequate gun regulations. In the name of the Second Amendment, they are challenging the constitutionality of state and municipal “may issue” regulations that restrict the right to carry weapons in public to persons who can show a compelling need to be armed. A few courts are starting to take these challenges seriously. But what the advocates do not acknowledge—and some courts seem not to understand—is that their arguments are grounded in precedent unique to the violent world of the slaveholding South.

Claims that “may issue” regulations are unconstitutional have been rejected by most federal appellate courts—that is, until last year, when a court in California broke ranks and struck down San Diego’s public-carry regulation. This year, a court did the same with the District of Columbia’s rewritten handgun ordinance. Both decisions face further review from appellate courts, and perhaps also from the Supreme Court. If the justices buy this expansive view of the Second Amendment, laws in states such as New York, New Jersey, Rhode Island, Massachusetts, and Hawaii—states with the strictest public-carry regulations and some of the lowest rates of gun homicide—will be voided as unconstitutional.

Public-carry advocates like to cite historical court opinions to support their constitutional vision, but those opinions are, to put it mildly, highly problematic. The supportive precedent they rely on comes from the antebellum South and represents less a national consensus than a regional exception rooted in the unique culture of slavery and honor. By focusing only on sympathetic precedent, and ignoring the national picture, gun-rights advocates find themselves venerating a moment at which slavery, honor, violence, and the public carrying of weapons were intertwined.

The opinion most enthusiastically embraced by public-carry advocates is Nunn v. State, a state-court decision written by Georgia Chief Justice Joseph Henry Lumpkin in 1846. As a jurist, Lumpkin was a champion both of slavery and of the Southern code of honor. Perhaps not by coincidence, Nunn was the first case in which a court struck down a gun law on the basis of the Second Amendment. The U.S. Supreme Court cited Nunn in District of Columbia v. Heller, its landmark 2008 decision holding, for the first time in over 200 years, that the Second Amendment protects an individual right to possess a handgun in the home for self-defense. Why courts or gun-rights advocates think Lumpkin’s view of the right to bear arms provides a solid foundation for modern firearms jurisprudence is puzzling. Slavery, “honor,” and their associated violence spawned a unique weapons culture. One of its defining features was a permissive view of white citizens’ right to carry weapons in public.

As early as 1840, antebellum historian Richard Hildreth observed that violence was frequently employed in the South both to subordinate slaves and to intimidate abolitionists. In the South, violence also was an approved way to avenge perceived insults to manhood and personal status. According to Hildreth, duels “appear but once an age” in the North, but “are of frequent and almost daily occurrence at the [S]outh.” Southern men thus carried weapons both “as a protection against the slaves” and also to be prepared for “quarrels between freemen.” Two of the most feared public-carry weapons in pre-Civil War America, the “Arkansas toothpick” and “Bowie knife,” were forged from this Southern heritage.

The slave South’s enthusiasm for public carry influenced its legal culture. During the antebellum years, many viewed carrying a concealed weapon as dastardly and dishonorable—a striking contrast with the values of the modern gun-rights movement. In an 1850 opinion, the Louisiana Supreme Court explained that carrying a concealed weapon gave men “secret advantages” and led to “unmanly assassinations,” while open carry “place[d] men upon an equality” and “incite[d] men to a manly and noble defence of themselves.” Some Southern legislatures, accordingly, passed laws permitting open carry but punishing concealment. Southern courts followed their lead, proclaiming a robust right to open carry, but opposing concealed carry, which they deemed unmanly and not constitutionally protected. It is this family of Southern cases that gun-rights advocates would like modern courts to rely on to strike down popularly enacted gun regulations today.

But no similar record of court cases exists for the pre-Civil War North. New research produced in response to Heller has revealed a history of gun regulation outside the South that has gone largely unexplored by judges and legal scholars writing about the Second Amendment during the last 30 years. This history reveals strong support for strict regulation of carrying arms in public.

In the North, publicly carrying concealable weapons was much less popular than in the South. In 1845, New York jurist William Jay contrasted “those portions of our country where it is supposed essential to personal safety to go armed with pistols and bowie-knives” with the “north and east, where we are unprovided with such facilities for taking life.” Indeed, public-carry restrictions were embraced across the region. In 1836, the respected Massachusetts jurist Peter Oxenbridge Thacher instructed a jury that in Massachusetts “no person may go armed with a dirk, dagger, sword, pistol, or other offensive and dangerous weapon, without reasonable cause to apprehend an assault or violence to his person, family, or property.” Judge Thacher’s charge was celebrated in the contemporary press as “sensible,” “practical,” and “sage.” Massachusetts was not unusual in broadly restricting public carry. Wisconsin, Maine, Michigan, Virginia, Minnesota, Oregon, and Pennsylvania passed laws modeled on the public-carry restriction in Massachusetts.

This legal scheme of restricting public carry, it turns out, was not new. Rather, it was rooted in a longstanding tradition of regulating armed travel that dated back to 14th-century England. The English Statute of Northampton prohibited traveling armed “by night [or] by day, in [f]airs, [m]arkets, ... the presence of the [j]ustices or other [m]inisters” or any “part elsewhere.” Early legal commentators in America noted that this English restriction was incorporated into colonial law. As early as 1682, for example, New Jersey constables pledged to arrest any person who “shall ride or go arm’d offensively.” To be sure, there were circumstances where traveling armed was permitted, such as going to muster as part of one’s militia service or hunting in select areas, but the right of states and localities to regulate the public carrying of firearms, particularly in populated places, was undeniable.

Today, Americans disagree about the best way to enhance public safety and reduce crime, and that disagreement is voiced in legislatures across the nation. Throughout most of the country and over most of its history, the Second Amendment has not determined the outcome of this debate nor stood in the way of popular public-carry regulations. Then, as now, such regulations were evaluated based on the impact they would have on crime and public safety. At the end of this deadly summer, the debate rages on over how best to balance public safety against the interests of people who wish to “pack heat.” If elected officials decide to restrict the right to carry to those persons who can demonstrate a clear need for a gun, present-day judges should not intervene on the basis of opinions about the right to bear arms from the slave South and its unique culture of violence.

Losing weight is hard — and it’s getting harder.

That’s not an excuse, a group of researchers say — it’s science. A study from York University published recently in the journal Obesity Research & Clinical Practice looked at dietary and exercise data for tens of thousands of Americans over the past four decades and found an unsettling but perhaps not so surprising trend: Even with the same diet and the same activity level, a given adult in 2006 had a higher BMI than a counterpart of the same age in 1988.

In other words, “our study results suggest that if you are 25, you’d have to eat even less and exercise more than those older, to prevent gaining weight,” Jennifer Kuk, a professor of kinesiology and health science at York and co-author of the paper, said in a statement. “Ultimately, maintaining a healthy body weight is now more challenging than ever.”

Just how much more challenging? When comparing people with the same diets in 1971 and 2008, the more recent counterpart was on average 10 percent heavier. Looking at physical activity data, which was only available between 1988 and 2006, those born later were five percent heavier even if they exercised just as much as people two decades earlier.
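To put that percentage in concrete terms, here is a minimal sketch of the underlying arithmetic, using the standard BMI formula (weight in kilograms divided by height in meters squared). The 10 percent figure is the study’s; the height and baseline weight below are purely hypothetical.

    def bmi(weight_kg, height_m):
        # Body mass index: weight (kg) divided by height (m) squared.
        return weight_kg / height_m ** 2

    height = 1.75          # hypothetical adult height, in meters
    weight_1971 = 70.0     # hypothetical weight of a 1971 adult, in kilograms

    # The study's headline finding: on the same reported diet, the 2008
    # counterpart was on average about 10 percent heavier.
    weight_2008 = weight_1971 * 1.10

    print(round(bmi(weight_1971, height), 1))  # 22.9 -- within the "normal" range
    print(round(bmi(weight_2008, height), 1))  # 25.1 -- just over the "overweight" line

On these illustrative numbers, a 10 percent gain in weight is enough to move the same person from the normal-weight range (BMI under 25) into the overweight range.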

In a couple of charts, the paper complicated years of conventional wisdom on weight loss, which has stressed diet and exercise and blamed problems with both for increasing rates of obesity.

“Weight management is actually much more complex than just ‘energy in’ versus ‘energy out,’” Kuk said in the statement. “That’s similar to saying your investment account balance is simply your deposits subtracting your withdrawals and not accounting for all the other things that affect your balance like stock market fluctuations, bank fees or currency exchange rates.”
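Kuk’s analogy can be restated as a toy model: the naive ledger predicts the balance from deposits and withdrawals alone, while the actual balance also moves with terms outside that ledger. The sketch below is purely illustrative, with invented numbers standing in for the unmeasured factors.

    # The naive "energy in versus energy out" view, in ledger form.
    deposits = 1000.0       # hypothetical "energy in"
    withdrawals = 900.0     # hypothetical "energy out"
    naive_balance = deposits - withdrawals

    # Terms the naive model ignores -- market swings, fees, exchange rates --
    # standing in here for sleep, stress, chemical exposure, medications,
    # and the microbiome. These adjustments are invented for illustration.
    other_factors = -30.0 + 12.0

    actual_balance = naive_balance + other_factors
    print(naive_balance, actual_balance)  # 100.0 82.0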

In the case of weight, the “other things” affecting our balance might have to do with our environment — both outside our bodies and within them. Discussing her theories with the Atlantic and in her statement (the explanations are still only hypotheses), Kuk said that the world we live in today makes it harder to manage our weight than it was for people a generation ago.

The habits and lifestyles of today’s world certainly have something to do with it, she said in the statement. We’re sleeping less than we used to, according to Gallup — in 2013, 40 percent of Americans got less than seven hours of sleep per night. And a Carnegie Mellon survey published in 2012 found that Americans were roughly 20 percent more stressed than a quarter of a century before.

Another factor is our exposure to certain kinds of chemicals that affect the endocrine system and metabolic processes, Kuk told the Atlantic. Plastic packaging, pesticides and substances known as persistent organic pollutants (mostly synthetic toxins that tend to bioaccumulate through the food web) may be impacting the way our bodies process food and store fat.

Increased use of prescription drugs may also play a role, Kuk told the Atlantic. According to the Centers for Disease Control, spending on prescription drugs doubled between 1999 and 2008, the last year of the York University study. Among adults (the subjects of the study), antidepressants were the most commonly used drug — and many, many, many studies have linked antidepressants to weight gain. But they’re not the only culprits: Allergy medications, steroids and pain medications can also affect weight.

The tiniest and perhaps least intuitive factor is innate to each of us. It has to do with our “microbiomes,” the brew of tiny organisms that live in our guts and play a role in processing food. Changes in our diets — we each ate roughly 20 pounds more meat per year in 2000 than we did 30 years earlier, according to the USDA, and we’re consuming far more artificial sweeteners — are known to affect the bacteria in our bodies, which in turn have been proven to affect how we extract energy from our diets. And if an individual is obese, their microbiome might actually be making weight loss harder: in studies, average-sized mice implanted with gut bacteria from obese counterparts were found to gain weight.

Determining the role that each of those factors plays is a subject for further study, Kuk said. But the end result — a more difficult time managing weight — is the same.

It’s a lesson to all of us, Kuk said, a reminder that our weight is not entirely in our control.

“There’s a huge weight bias against people with obesity,” Kuk told the Atlantic. “They’re judged as lazy and self-indulgent. That’s really not the case. If our research is correct, you need to eat even less and exercise even more,” just to weigh what your parents did at your age.

Sarah Kaplan is a reporter for Morning Mix.

It was with a force greater than an atom bomb that Mount Vesuvius erupted and blotted out Pompeii in 79 A.D.

Or, not blotted out, exactly.

The city’s destruction, and the thing that has kept Pompeii so fascinating over the centuries, entails a paradox: The surge of ash and hot gas that blanketed thousands of victims also, simultaneously, preserved their bodies—along with their colorful art, sparkling jewelry, wine jugs, scrolls, and other cultural remnants.

Now, scientists are using new imaging technologies to examine in detail the bones and teeth of those killed in the blast.

Detailed casts of Pompeii’s victims—made by pouring plaster into the small cavities in their ash-encapsulated remains—have long prevented sophisticated scanning of this nature. The 19th-century plaster is so dense that today’s standard imaging technology can’t distinguish between the thick outer cast and skeletal pieces inside. But researchers recently used a multi-layer CT scan to obtain imagery never before possible, then used software to make digital 3-D reconstructions of skeletons and dental arches.

The initial images reveal two major surprises.

For one thing, these ancient people had “perfect teeth,” according to Agenzia Giornalistica Italia, a discovery that scientists linked to a healthy diet and high levels of fluorine in the air and water near the volcano.

The scans also support a theory that many of those who were killed after the eruption died from head injuries—caused by falling rock or collapsing infrastructure—and not from suffocation.

This squares with the famous account by Pliny the Younger, who wrote of his uncle’s death after the eruption in letters treasured by historians. Here’s a portion of one of Pliny’s letters, translated from Latin by Betty Radice:

They debated whether to stay indoors or take their chance in the open, for the buildings were now shaking with violent shocks, and seemed to be swaying to and fro, as if they were torn from their foundations. Outside on the other hand, there was the danger of falling pumice-stones, even though these were light and porous; however, after comparing the risks they chose the latter. In my uncle's case one reason outweighed the other, but for the others it was a choice of fears. As a protection against falling objects they put pillows on their heads tied down with cloths.

The smoke and ash that poured out of the volcano was likely still stifling to those who experienced it. In the early 1990s, when researchers uncovered the first remains found at Pompeii in many decades, archaeologists determined that some of the victims had tunics wrapped around their mouths as makeshift masks. Pliny the Younger described victims surrounded by broad sheets of flame, the air reeking of sulphur and the sky darker than night.

The latest findings build on an astounding body of knowledge about Pompeii. “Because of careful scholarship, we collectively have been able to get beyond the study of the victims’ death and have opened up exciting information about what the victims’ lives had been like,” said Roger Macfarlane, a professor of classical studies at Brigham Young University. “And, so, any technological development, such as CT scanning of the skeletal remains of those who perished in 79 A.D., promises to introduce potentially interesting new evidence about how those people existed before their death.”

Over the centuries, researchers have discovered unbelievable artifacts: erotic murals on walls, emeralds, coins, marble busts—and, in one case, an entire oven with dozens of loaves of bread still inside. Other food scraps found in Pompeii’s ancient drainage system have suggested the city’s wealthy residents dined on delicacies that included sea urchin, flamingos, and even giraffe.

The 79 A.D. eruption of Vesuvius was, as Pliny the Younger wrote, “a catastrophe which destroyed the loveliest regions of the earth.” But before that, Pompeii was a vibrant city full of people who lived and, in one sense, continue to live, centuries after they died.

Aesop would have had a straightforward explanation for why some people just can’t manage to save up for retirement: Some people are born ants—industrious and in possession of great willpower—while others are grasshoppers, living only for today.

Millennia later, sorting workers into personality-specific boxes is still the preferred way of thinking about how to get people to put more money into savings. An otherwise thoughtful survey of over 1,000 individuals and interviews with 50 people by the MetLife Mature Markets Institute identifies no fewer than 10 different variations on the grasshopper: There are “Snoozers,” “Oversleepers,” “Stewers,” “Brewers.” And then there are “Preemptive Planners,” whose radar screens are populated with future risks. If people fit into these neat buckets, the conventional savings wisdom goes, then the solution is to educate the grasshoppers to start behaving more like ants.

MetLife isn’t wrong to dwell on personality types. Researchers such as Angela Duckworth, a professor of psychology at the University of Pennsylvania, have found convincing links between personality traits and savings. Duckworth doesn’t talk about “oversleepers,” but about the Big Five personality traits: conscientiousness, agreeableness, neuroticism, openness, and extroversion. Conscientious types, she’s found, tend to save more money.

But while MetLife and Duckworth may have identified some traits associated with inadequate saving, telling people to buckle down is unlikely to do very much, because, by adulthood, personality traits are more or less fixed. A lack of discipline or joie de vivre is hardly the main reason some people don’t put enough into their 401(k)s or IRAs; the blame lies not with individuals but with the nation’s savings institutions.

Even if someone has access to an IRA or a 401(k), it’s difficult to stash away money—there are always bills to pay or relatives in need of financial assistance. But the fact is, as of 2012, only about half of workers had access to such plans, which makes saving even more difficult. Conscientiousness has little to do with it: Generally speaking, one of the most common answers people give when asked why they aren’t saving for retirement is that they simply don’t have enough money.

Since at least the Victorian era, there has been a ready retort. The problem isn’t a lack of money, many have insisted—it’s misspending. In fact, Victorian social reformers spearheaded a financial-literacy campaign, inflicting “home economics” on a generation of girls. The blame for bad behavior and insufficient savings was placed squarely on women, who were accused of failing to manage their household’s finances.

Economists and boards of education deploy a variant of that thinking today, forcing boys and girls to perform compound-interest calculations to impress upon them the virtues of saving money. But what is apparently not obvious to them is that imposing stock-market games and pamphlets about IRAs on fourth graders—as several states do—will probably not bring many dividends.
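For reference, the compound-interest arithmetic those lessons drill reduces to a single formula, A = P(1 + r/n)^(nt). The sketch below works one instance of it; the principal, rate, and horizon are hypothetical.

    def compound(principal, rate, years, periods_per_year=12):
        # A = P * (1 + r/n) ** (n * t): principal P, annual rate r,
        # n compounding periods per year, t years.
        return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

    # A hypothetical saver: $1,000 left untouched for 40 years at a
    # 5 percent annual rate, compounded monthly.
    print(round(compound(1000, 0.05, 40), 2))  # 7358.42

On these hypothetical inputs, forty years of monthly compounding turns $1,000 into roughly $7,358, which is exactly the kind of result those classroom exercises are meant to make vivid.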

Foisting financial literacy or personality transplants onto workers wouldn’t be necessary if there existed a universal savings account, modeled on Social Security. If all workers were automatically enrolled in a savings account that couldn’t be tapped into until retirement (or disability, if that came first), they wouldn’t be burdened with investment decisions. The various proposals for such an account assume a similar structure: The government would let workers contribute to it directly from their paychecks, and it would be managed for them by an independent, government-appointed committee, much like the Pension Benefit Guaranty Corporation or the board that oversees Social Security.

Instead of a retirement-savings system that punishes certain people for being less disciplined, there needs to be a system that acknowledges a simple truth: There will always be both grasshoppers and ants.
