HURJ Volume 11 - Spring 2010


Description: The Spring 2010 issue of the Hopkins Undergraduate Research Journal (HURJ), based at the Johns Hopkins University.

Transcript of HURJ Volume 11 - Spring 2010

Page 1: HURJ Volume 11 - Spring 2010
Page 2: HURJ Volume 11 - Spring 2010

NIAID needs you because the world needs us!

National Institute of Allergy and Infectious Diseases

Help Us Help Millions

The National Institute of Allergy and Infectious Diseases (NIAID), one of the largest institutes of the National Institutes of Health (NIH), conducts and supports a global program of biomedical research to better understand, treat, and ultimately prevent infectious, immunologic, and allergic diseases. NIAID is a world leader in areas such as HIV/AIDS, pandemic and seasonal influenza, malaria, and more.

Advance your career while making a difference in the lives of millions!

NIAID has opportunities for an array of career stages and types of research, including clinical and research training programs that provide college students and recent graduates an opportunity to work side-by-side with world-renowned scientists who are committed to improving global health in the 21st century. Your individual talents may help us complete our mission. This is your chance to get involved.

U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES
National Institutes of Health

National Institute of Allergy and Infectious Diseases

Proud to be Equal Opportunity Employers

Check out all of NIAID’s opportunities at www.niaid.nih.gov/careers/jhu.

Take the first step in advancing your future and apply for a training program today.

Page 3: HURJ Volume 11 - Spring 2010

a letter from the editors

A little over a year ago and barely a generation after the Civil Rights Act, the United States elected its first African-American president. Barack Obama based his campaign for the White House on the idea of forging change that was more than skin deep. This lasting change would be essential, since the problems awaiting the new Commander-in-Chief were mounting: two internationally unpopular wars fueling a growing deficit, the need to take action to address environmental concerns, the recent bailouts of large financial institutions feeding fears of a deepening recession, the cries to rejuvenate a dying healthcare system, and the lingering feeling that America was slowly receding from its international dominance. How would this new era play out under a new brand of president and his promise of a different kind of politics?

This eleventh volume of the Hopkins Undergraduate Research Journal features a snapshot of these issues, which, even a year into Obama’s presidency, continue to affect the lives of citizens both domestically and around the world. Drawing from a range of interests pursued by students at the Johns Hopkins University, this volume explores some of the effects of policy on health and technology at home and the relationship of the US with its neighbors abroad. Alongside these pieces are the continued investigations in the humanities concerning the rise of evangelicalism and the representation of Jews in Argentina, as well as the science and engineering pieces that embody the quality and diversity of research that Hopkins is renowned for.

HURJ aims to offer a unique opportunity to undergraduates by providing a venue to present their research to a wider audience. The undergraduates contributing to this volume come from a variety of disciplines, and we would like to thank them for their dedicated work on these pieces. We would also like to extend our thanks, as always, to the Student Activities Commission for its continued support. Many thanks, as well, to our hardworking staff members, old and new, who have made this issue possible.

We hope that you come curious and leave the pages of this journal stimulated.

Sending our best,

Johnson Ukken, Editor-in-Chief, Content
Karun Arora, Editor-in-Chief, Operations

Paige Robson, Editor-in-Chief, Layout

Page 4: HURJ Volume 11 - Spring 2010

table of contents
spring 2010 focus: america’s new era

pg. 14 America’s Changing International Role Paul Grossinger

pg. 19 Biofuels and Land Use Changes: Flawed Carbon Accounting Leading to Greater Greenhouse Gas Emissions and Lost Carbon Sequestration Opportunity Julia Blocher

pg. 23 Beyond Tokyo: Emissions Trading in America Toshiro Baum

pg. 31 Stem Cell Act Spurs New Age for Medicine Nezar Alsaeedi

Page 5: HURJ Volume 11 - Spring 2010

table of contents
spotlights on research

7 Economic Ramifications of Drug Prohibition Calvin Price

9 Role of Adenosine and Related Ectopeptidases in Tumor Evasion of Immune Response Carolyn Rosinski

11 What Time Is It Now? 2PM and Global Social Movements Isaac Jilbert

science

33 Micro-RNA: A New Molecular Dogma Robert Dilley

35 Double Chooz: A Study in Muon Reconstruction Leela Chakravarti

engineering

52 Fractal Image Compression on the Graphics Card August Sodara

56 Robotic Prosthetic Development: The Advent of the DEKA Arm Kyle Baker

humanities

42 Resigned to the Fringes: An Analysis of Self-Representations of Argentine Jews in Short Stories and Films Helen Goldberg

46 Innovation & Stagnation in Modern Evangelical Christianity Nicole Overley

Page 6: HURJ Volume 11 - Spring 2010

hurj 2009-2010
hurj’s editorial board

Editor-in-Chief, Content Johnson Ukken

Editor-in-Chief, Layout Paige Robson

Editor-in-Chief, Operations Karun Arora

Content Editors Budri Abubaker-Sharif Ayesha Afzal Leela Chakravarti Isaac Jilbert Mike Lou

Layout Editors Kelly Chuang Sanjit Datta Edward Kim Lay Kodama Michaela Vaporis

Copy Editors Haley Deutsch Mary Han Andi Shahu

PR/Advertising Javaneh Jabbari

Webmaster Ehsan Dowlati

Photographer/Graphic Design Sarah Frank

hurj’s writing staff

Nezar Alsaeedi, Kyle Baker, Toshiro Baum, Julia Blocher, Leela Chakravarti, Robert Dilley, Helen Goldberg, Paul Grossinger, Isaac Jilbert, Nicole Overley, Calvin Price, Carolyn Rosinsky, August Sodara

Cover & Back Cover by Karam Han

about hurj: The Hopkins Undergraduate Research Journal provides undergraduates with a valuable resource for accessing research done by their peers and exploring current issues of interest. The journal comprises five sections - a main focus topic, spotlights, and current research in engineering, humanities, and science. Students are highly encouraged to submit their original work.

disclaimer: The views expressed in this publication are those of the authors and do not constitute the opinion of the Hopkins Undergraduate Research Journal.

contact us:

Hopkins Undergraduate Research Journal
Mattin Center, Suite 210
3400 N Charles St
Baltimore, MD 21218

[email protected]

http://www.jhu.edu/hurj

Page 7: HURJ Volume 11 - Spring 2010

hurj

can you see yourself in hurj?

share your research!

now accepting submissions for our fall 2010 issue

hurj spring 2010: issue 11

focus -- humanities -- science -- spotlight -- engineering

Page 8: HURJ Volume 11 - Spring 2010

The Economic Ramifications of Drug Prohibition
Calvin Price / Staff Writer

When California Assemblyman Tom Ammiano introduced the Marijuana Control, Regulation, and Education Act in early 2009, he was challenging decades-old US policy and popular thought that considered drug legalization harmful to society. With the recent recession intensifying California’s budget crisis, the bill was designed to supply much-needed capital to Sacramento without raising taxes or stifling the already weakened economy. The reasoning behind the bill was to choose the lesser of two evils, suggesting that the negative impact of the “societal ill” of increased marijuana use was less than the gains that could be made by injecting well over one billion dollars into a state government that, at times, cannot even pay its own employees.

Marijuana is at the forefront of the drug debate because it is relatively safe, both for individuals and communities, when compared not only to other illegal drugs but even to alcohol or tobacco. Supporters of marijuana legalization argue that marijuana, unlike these already legal drugs, is not physically addictive, that its use is less likely than alcohol’s to cause death to others (driving under the influence of marijuana, while more dangerous than driving sober, is less dangerous than driving drunk), and that it appears less likely than tobacco to cause diseases such as lung cancer or emphysema. For these reasons, marijuana is the only currently illegal drug whose legalization is the subject of serious political debate. In this time of serious economic trouble, though, should marijuana be the sole drug considered for legalization?

Harvard economist Jeffrey Miron estimates that the legalization of all drugs would net the federal government over $70 billion per year, more than half of that from decreased law enforcement spending (funding the DEA and providing aid to the governments of Mexico and Colombia for their help in fighting the drug war). In a day of soaring deficits, when funding cannot even be found for serious health reform, that kind of capital is sorely needed in Washington, and it may just be possible to obtain it without raising taxes. Unfortunately, most of the cash-crop drugs (such as cocaine, heroin, and methamphetamine) are far more dangerous than marijuana: they cause severe addiction problems, carry a risk of overdose, and pose dangers to others. Any legalization would have to come with government programs such as those seen in Portugal and the Czech Republic, with serious investment in rehabilitation and recovery for addicts.

These programs have been a resounding success in the countries where they have been instituted. Though Portugal has decriminalized all drug use, its serious investment in rehabilitation and education has decreased the number of “problem drug users” (people whose contribution to society is lessened or negative due to their drug use) to a level below that seen in other developed countries. Deaths relating to drug use in Portugal have been more than halved recently, while HIV rates due to drug use have dropped and the number of drug users seeking treatment for addiction has doubled. Perhaps most surprising, however, is that drug use as a whole has declined, suggesting that treatment and education could be more effective tools than law enforcement.


$70 BILLION

The legalization of all drugs would net the federal government over $70 billion per year.

— Jeffrey Miron, Harvard

Page 9: HURJ Volume 11 - Spring 2010


Though this can be attributed to Portugal’s investment in rehabilitation, much of the credit has to be given to its drug policies. When people do not fear imprisonment for letting the government know they are drug users, they are much more likely to seek treatment. Portugal’s drug policies are not only a social good; they also help Portugal’s general economy by preventing non-problem users from becoming addicts, allowing them to remain in the workforce and increase general productivity.

There are certainly a number of arguments against the legalization of drugs. If Portugal had legalized drug use instead of simply decriminalizing it, there likely would have been an increase in the overall prevalence of drugs, because it would be legal to sell them as well. On the other hand, drugs that are decriminalized but not legalized cannot be taxed, meaning that Portugal is forgoing a large amount of money that could go to the public good. With the correct preventative measures, it may indeed be possible for the United States to legalize drug use and reap the positive benefits (more money for the government, fewer problem users, a decreased mortality rate) while limiting the obvious negative side effects.

Unfortunately, for the past 40 years, political discussion of drug legalization has been limited to legislation like Ammiano’s, the work of a state politician from a city widely perceived as one of the country’s wackiest (Ammiano, a Democrat, represents California’s 13th district, which includes San Francisco). It is doubtful that he has the political capital to even bring the discussion forward. Though drug use has enough downsides to warrant its prohibition, the fact that there is nearly no political discussion of drug legalization, which has serious advantages, represents a failure of the political system. There has been practically no discussion of the topic for 40 years, and the current administration wants to continue that trend, with President Obama’s Drug Czar Gil Kerlikowske saying, “legalization vocabulary doesn’t exist for me, and it was made clear that it doesn’t exist in President Obama’s vocabulary.” One of the greatest mistakes we can make, as a society, is to not consider the possible advantages of change. It is a folly that, on certain issues, we have always allowed and, apparently, always will.

References:

1. Ammiano, Tom. Assembly Bill No. 390. 23 February 2009. http://www.leginfo.ca.gov/pub/09-10/bill/asm/ab_0351-0400/ab_390_bill_20090223_introduced.pdf
2. Chesher, G. and Longo, M. “Cannabis and alcohol in motor vehicle accidents.” In: Grotenhermen, F. and Russo, E. (Eds.), Cannabis and Cannabinoids: Pharmacology, Toxicology, and Therapeutic Potential. New York: Haworth Press, 2002. Pages 313-323.
3. Miron, Jeffrey A. Drug War Crimes: The Consequences of Drug Prohibition. Michigan: University of Michigan Press, 2004.
4. Greenwald, Glenn. Drug Decriminalization in Portugal: Lessons for Creating Fair and Successful Drug Policies. Cato Institute, 2009. Page 3.
5. Greenwald, Glenn, et al. “Lessons for Creating Fair and Successful Drug Policies.” Drug Decriminalization in Portugal. Cato Institute, 3 April 2009. http://www.cato.org/pubs/wtpapers/greenwald_whitepaper.pdf
6. Peele, Stanton. “The Five Stages of Grief over Obama’s Drug Policies.” 8 August 2009. http://www.huffingtonpost.com/stanton-peele/the-five-stages-of-grief_b_254134.html

Page 10: HURJ Volume 11 - Spring 2010

Role of Adenosine and Related Ectopeptidases in Tumor Evasion of Immune Response
Carolyn Rosinsky / Staff Writer

Most cancer treatments used today work mainly by targeting cancer cells and inhibiting their natural metabolisms and abilities to divide and proliferate. The problem with many of these treatments is that they do not target cancer cells specifically enough, damaging non-cancer cells as well. Tumor immunology is a field that strives to elucidate better methods for cancer treatment by enhancing and using the mechanisms of our natural immune systems to fight cancer, rather than using outside agents to target cancer cells. One focus of tumor immunology is the signaling pathway between the tumor and the immune system. Tumor signals can affect the immune system either defensively, protecting the tumor from immune responses, or offensively, effectively shutting down the immune response before it begins acting on the tumor. I research this signaling system under Christian Meyer, Ph.D., in the lab of Jonathan Powell, M.D., Ph.D., a professor of oncology at the Johns Hopkins School of Medicine.

We study adenosine, one of the cell’s basic nucleosides. Adenosine helps make up DNA, RNA, and ATP (adenosine triphosphate), the energy molecule of cells. Adenosine is also an integral regulator of the immune response. It modulates the immune system’s T-cell activity by suppressing the immune response of T-cells when adenosine levels are high. When tissue is damaged, the initial release of adenosine from the damaged cells allows for an immune response, but as the adenosine level rises, the response begins to fall off, protecting the tissue from excessive immune activity. In the course of normal inflammation and immune activity, adenosine generation helps limit the extent of immune activation to prevent damage (1). However, in the microenvironment of a tumor, adenosine is also released in large quantities by damaged malignant cells signaling an immune response, which, in turn, can create a negative feedback loop, allowing the cancer to grow (2). This provides a mechanism by which tumors can both avoid and dampen the immune response.

Whereas adenosine receptor-mediated suppression of T-cells protects tissue from excessive inflammation, it may cause the immune response to stop before T-cells have effectively protected tissue from pathogens. In the case of cancer cells, the oxygen-deprived tumor and the damaged vasculature of the tumor mass cause adenosine to be released, thereby halting the T-cell response that could fight the tumor. Drugs that block the adenosine receptor stop premature T-cell inhibition, allowing anti-tumor T-cells to reject the tumor, causing a decrease in tumor growth, metastases, and vasculature (3).

Two cell surface ectopeptidases (enzymes that break down amino acid chains) involved in adenosine signaling are CD73, which is involved in converting AMP (adenosine monophosphate) to adenosine, and CD39, which is involved in converting ATP to AMP. We have been investigating the role that these surface markers play in tumor immunosuppression. Specifically, we have been examining the expression of these markers in BRCA-1-associated breast cancers in a stem-cell-enriched and a non-stem-cell-enriched mouse cancer cell line, as well as in human lung cancer cell lines.

We surveyed cell lines in vitro and in vivo to determine expression patterns of CD39 and CD73. Expression of CD73 is higher in the stem-cell-enriched breast cancer line, and seems confined mainly to the stem cell population. Additionally, the immune checkpoint molecule PD-1 ligand (PD1-L) prevents immune activation. We have shown that breast cancer cells highly express CD73 and PD1-L, thereby facilitating the conversion of ATP to adenosine, which, in the large quantities thus produced, halts the immune response prematurely.




Page 11: HURJ Volume 11 - Spring 2010


In vivo, we have surveyed the expression of these markers during tumor growth in immune-compromised mice. Tumor expression of the ectopeptidases increased in these mice. These models show that the up-regulation of surface expression of CD73, CD39, and PD1-L does not depend on the T- and B-cells of the immune system being present. This suggests that the up-regulation of these markers is not a defensive response against the immune system, but rather an innate mechanism for survival.

We also injected mouse Lewis lung carcinoma into mice lacking the adenosine receptor gene and into mice treated with a drug that blocks the receptor. In these models, survival and tumor-free rates went up. This suggests that inhibition of the adenosine receptor, genetically or pharmacologically, potentiates immune responses in vivo, and solidifies the crucial role of adenosine as a modulator in the tumor survival mechanisms used to halt the immune response. We are currently investigating the in vivo effects of enhancing vaccines against tumors by blocking the adenosine receptor.

Additionally, we have begun investigating lung cancer for these markers by screening human lung cancer lines and primary lung cancer tissue samples. Our results have shown an up-regulation of both CD73 and PD1-L, suggesting that these cancers use immune-suppression mechanisms similar to those outlined above. Our findings offer some elucidation of how cancer cells stop the immune system from fighting cancer in the same way that it fights and defeats other diseases. Drugs that block the breakdown and signaling of adenosine are an exciting, though still developing, new front in cancer therapy.

References

1. Sitkovsky M, Lukashev D, Deaglio S, Dwyer K, Robson SC, Ohta A. Adenosine A2A receptor antagonists: blockade of adenosinergic effects and T regulatory cells. Br J Pharmacol. 2008 Mar;153 Suppl 1:S457-64. PubMed.
2. Lukashev D, Ohta A, Sitkovsky M. Hypoxia-dependent anti-inflammatory pathways in protection of cancerous tissues. Cancer Metastasis Rev. 2007 Jun;26(2):273-9. PubMed.
3. Ohta A, Gorelik E, Prasad SJ, Ronchese F, Lukashev D, Wong MK, Huang X, Caldwell S, Liu K, Smith P, Chen JF, Jackson EK, Apasov S, Abrams S, Sitkovsky M. A2A adenosine receptor protects tumors from antitumor T cells. Proc Natl Acad Sci U S A. 2006 Aug 29;103(35):13132-7. Epub 2006 Aug 17. PubMed.

Page 12: HURJ Volume 11 - Spring 2010


What Time Is It Now? 2PM and Global Social Movements
Isaac Jilbert / Spotlight Editor

On September 7th, 2009, Korea was rocked to its very core as a clash of civilizations gripped the country. Western values ran into those of a more conservative Korean culture as Park Jae Beom left the male idol group 2PM. Although few noticed in the United States, his departure precipitated an international uproar and, most interestingly, a social movement dedicated solely to encouraging him to return to the group.

Although admittedly an unusual topic of study, the very oddity of the circumstances and the subsequent social movement that arose out of Park’s withdrawal from 2PM is the reason such a movement deserves attention. Park Jae Beom is actually a Korean-American from Seattle who, during his teenage years, was recruited by the entertainment company JYP Entertainment to move to South Korea and train to become a Korean pop music singer. With six other members, Park became the leader of the group 2PM. 2PM’s first songs were released in late 2008 and met huge success, not only in the Korean market, but also throughout the Hallyu, or “Korean Wave,” market, which includes Japan and much of Southeast Asia, as well as pockets of the rest of the world.

Park did not originally speak Korean or know anyone in Korea, however, and thus felt alienated for many years during his training. He naturally stayed in communication with his friends back in the States, but at one point in 2005 wrote on a friend’s MySpace wall, “Korea is gay. I hate Koreans. I want to come back like no other.” After a fan sifted through his past posts four years later on his still-public MySpace site, this quote came up and was translated into Korean, where a highly nationalistic public met the translated comments with great disapproval.

Merely four days after the “news” had broken, a petition circulated the internet, garnering some 3,000 signatures, which called for Park Jae Beom to commit suicide. Not even an apology that was meant to explain away the wall post as a difference in cultural values (Americans often use the words “gay” and “hate” simply as exaggerations and descriptors) and as the mistake of a disgruntled youth could quiet the fervor, and Park left 2PM to return to Seattle only four days after the original news hit the internet.


Photo courtesy of AllKPop.com

Page 13: HURJ Volume 11 - Spring 2010


Of course, the initial question is how this constitutes a social movement. Using the guidelines provided by Sidney Tarrow, who defines social movements as “collective challenges, based on common purposes and social solidarities, in sustained interaction with elites, opponents, and authorities,” the withdrawal of Park Jae Beom from 2PM did, in fact, cause a social movement to quickly form and shake the Korean music industry. What is remarkable about the situation is the sudden reversal of fans’ attitudes. As soon as Park left for Seattle, fans regretted their actions and launched a sustained effort to try and bring him back.

The movement to bring Park back remarkably exists to this day, accounting for the sustainability aspect of Tarrow’s definition. But what is truly remarkable is how fans organized their protests. The rise of the internet as a networking and social tool is crucial to understanding how fans could decorate the headquarters of JYP Entertainment with 15,000 roses in protest. In addition, 2,000 fans were able to organize outside of the headquarters for a silent protest, which prompted a police presence. Perhaps most remarkable, however, are the fans’ “flash mob” protests, which have been held in major cities across the world, including Vancouver, New York, Boston, and Toronto. This form of protest often involves a group of people, organized through a fan website, going to a particular location, dancing for a bit, and then dispersing as if nothing had happened. However, it is not just casual dancing, but rather choreographed, coordinated, high-quality dancing to one of 2PM’s songs. Thus, one witnesses through the power of the internet just how the modern social movement occurs.

The entire movement and its original cause seem perplexing to say the least, but this is why such a movement is worth studying. When one looks through internet sites and finds how the protests are organized, one witnesses just how a modern social movement is organized. Although the Jae Beom Movement does not fight against great social injustices on the magnitude of the Civil Rights Movement or the Feminist Movement, one sees that perhaps social movements can be frivolous and petty, or maybe that thousands can come together for the justice and love of one man. Regardless, through this movement, one can examine how the internet can change the landscape, whether it is in Iran, China, or Korea.

References
1. Korean. “2PM, Jaebeom, and Korea’s Internet Culture.” Available from http://askakorean.blogspot.com/2009/12/2pm-jaebeom-and-koreas-internet-culture.html; Internet; accessed 22 March 2009.
2. RHYELEE. “Antis Create a Suicide Petition for 2PM’s Jaebeom.” Available from http://www.allkpop.com/2009/09/antis_create_a_suicide_petition_for_2pms_jaebeom; Internet; accessed 7 September 2009.
3. JOHNNYDORAMA. “2PM Jaebeom Apologizes.” Available from http://www.allkpop.com/2009/09/2pm_jaebeom_apologizes; Internet; accessed 14 September 2009.
4. BEASTMODE. “15000 Origami Roses for Jaebeom’s Return.” Available from http://www.allkpop.com/2009/10/15000_origami_roses_for_jaebeoms_return; Internet; accessed 26 October 2009.
5. BEASTMODE. “2000 Mobilize at JYPE Building in a Silent Protest.” Available from http://www.allkpop.com/2009/09/2000_mobilize_at_jype_building_in_a_silent_protest; Internet; accessed 14 September 2000.
6. RAMENINMYBOWL. “Hottests Around the World are ‘Flashing’ for Jaebeom.” Available from http://www.allkpop.com/2009/10/hottests_around_the_world_are_flashing_for_jaebeom; Internet; accessed 20 October 2009.


A display of 15,000 origami roses on the JYP Entertainment building created by fans in support of Park Jae Beom. - Courtesy of AllKPop

Page 14: HURJ Volume 11 - Spring 2010

america’s new era

america, like many countries, has seen rapid change in recent years -

economically, scientifically, environmentally, & politically.

hopkins’ undergrads have taken note and have

compiled this issue’s focus on

america’s new era.

focus:

Page 15: HURJ Volume 11 - Spring 2010


AMERICA’S CHANGING INTERNATIONAL ROLE

Paul Grossinger / Staff Writer

In the last three decades, the world has seen several seismic shifts in the division of power among various nation-states and international institutions. From the end of the Second World War until the late 1980s, the Soviet Union and its Warsaw Pact of Communist states were seen as the chief rivals of the United States. This “bipolar” system of global affairs was radically redefined after the USSR’s collapse in 1991, at which point America became the primary, unchallenged power in a “unipolar” system.i However, this paradigm has begun to change in the last few years, due to the rise of potential rivals to the United States.

It is a well-recognized fact that the United States’ stranglehold on economic, political, and military power has weakened over the last few years. This is not to say that America itself has weakened; indeed, through the year 2009, the United States has sustained over a decade of consistent economic growth and has retained its key alliances.ii However, its relative power has declined, because of the dramatic rise of competitor states and institutions, most notably the European Union, China, India, Brazil, and Russia. For example, while America’s annual GDP in 1999 was ten times that of China’s, the Chinese economy grew by 800% in the last ten years.1 Other emerging states, such as India and Russia, have shown similarly exponential growth, thus decreasing the economic gap between these countries and the United States.2 Therefore, although the growth of the American economy was, overall, robust, several other states neatly bridged a large part of the economic gap between themselves and the superpower. It is likely that if current trends continue, China may surpass the United States in total national GDP in the near future.

What does this mean for American interests? How does the world’s richest and most politically influential superpower use its remaining time alone atop the global totem pole to shape its future role in world affairs? Unlike earlier pseudo-hegemonic states such as Great Britain, which faced up-and-coming rivals while already in decline, the United States has the opportunity to shape a future place amongst (or above) its rivals while still at the peak of its powers. This article suggests that the way the United States should maintain its hegemony is to forge individualized partnerships with each of these potential rivals, thus binding them within a US-controlled world system that I will term ‘uni-multipolarity’.

THE THEORY

It is hard to define a theory using a term as contradictory as ‘uni-multipolarity’. After all, the two terms that comprise it, unipolar and multipolar, represent two opposing world systems.

i. To clarify, uni-polarity is a world system where one state dominates all others whereas a bipolar system is one dominated by a rivalry between two superpowers.

ii. Despite the recession between December 2007 and August 2009, the United States has still maintained a consistent economic growth curve since 1999.

Photo by Sarah Frank

Page 16: HURJ Volume 11 - Spring 2010


Unipolarity is a system dominated by an unrivaled superpower, and multipolarity is one controlled by powers centered at various regional poles. From the American strategic perspective, the rise of these new powers means that the unipolarity that defined the 1990s is no longer possible because, while the United States remains far stronger on a global scale than any other state, the EU, China, India, Russia, and even Brazil are growing briskly and can now challenge America’s dictates at their individual regional levels. However, with that said, the pursuit of full multipolarity would not make sense as a geopolitical objective for the United States, because it would either surrender the tremendous power advantage America currently possesses or ruin its ability to persuade other states to pursue American interests at an international level. Instead, the foreign policy strategy that makes the most sense for American interests is uni-multipolarity.

So, what is uni-multipolarity, aside from a hyphenated contradiction? This term attempts to describe a power system in which the United States uses its capacity as a global superpower to act as a hegemonic balancer between regionally dominant emerging powers. It differs from a pure unipolar system in that these states have the power to challenge the United States’ hegemony at a regional level. At the same time, this system is far more stable than a chaotic multipolar system because of the balancing effect of the global superpower.

How can the United States use its unique combination of hard and soft power to achieve uni-multipolarity? Broadly speaking, America should work to maintain its economic, technological, and military edge over emerging states, while simultaneously using its soft power to create unique partnerships with each of these states. In this way, the US can make itself the indispensable “glue state” within the international system, moving from the position of a stand-alone hegemon to a superpower mediator whose cooperation is needed for any and all major international projects. If it can achieve this, then America will have managed to retain its power and influence while accommodating and benefiting from the inevitable “rise of the rest” of these new game changers on the international stage. Importantly, however, this approach is not a carte blanche collective doctrine; instead, it relies on individual treatment of each of these major states, so let us deal in brief with how the US should approach creating long-term bilateral partnerships with each of these emerging powers.

THE EUROPEAN UNION

Because of the intertwined histories of the United States and Europe, and the vast array of treaties between them, the task of changing the dynamics of their bilateral relationship promises to be one of the trickiest America will face in the next decade. This is mainly because the EU, despite the efforts of the most ardent Euro-integrationists, remains a voluntary institutional grouping of states, each having an independent relationship with the United States.3 However, this very difficulty is also what makes the EU one of the best potential allies of the United States in the future, since most of its member-states are already close American allies. Within this group are traditional American allies, including Britain, Germany, and Greece, as well as several former Soviet satellites such as Poland, Hungary, and the Czech Republic, which have entered NATO. Over the next decade, America should seek to maintain the strength of its traditional alliances, particularly those with Britain and Germany, through continued cooperation on relevant economic and foreign policy concerns. In addition to keeping up these essential partnerships, and working to prevent major rifts with occasionally estranged allies such as France, the United States should seek to actively build its relationships with Eastern European states by integrating them further into joint security pacts and expanding trade with them.

In its relations with the European Commission, the executive branch of the EU, America should seek deep cooperation and accommodation in areas of mutual interest, while still pursuing its own goals if interests differ. The United States should also seek to break down as many trade barriers as possible, in order to increase the mutual profitability of bilateral trade. In addition, the United States must cooperate effectively on international ventures of joint interest, which range from the continued protection of Kosovo (which is currently being turned over from NATO to EU forces) to joint investment ventures in Africa.

When relations reach potential snags, such as the role of NATO versus the common EU Defense Policy and America’s independent alliances with the EU’s eastern member states, the US should continue to guard these interests and pursue increased cooperation while still maintaining a sustained and active interest in the continued growth and integration of Europe. America should sustain this effective working relationship with the European Commission because, when combined with close alliances to individual member-states, this relationship will make American cooperation essential for any major EU actions outside of the purely domestic realm.

Page 17: HURJ Volume 11 - Spring 2010


CHINA

If the potential for building a US-EU relationship is tricky because of complicated preexisting relationships, the potential for relationship building with China is difficult for opposite reasons: a combination of weak pre-existing ties and striking cultural differences. However, of all the United States’ future foreign policy priorities, developing a positive relationship with China, and integrating it into the network of global institutions, should be right at the top of the list.

Despite the economic strength of the EU, the growth of Brazil and India, and the resurgence of Russia as a relevant regional power, only China is poised to rival the US as a geopolitical force in the near future. Because of their locations, economic and political situations, and pre-existing relationships, it makes sense for America and China to work together. From the US perspective, a post-Cold War lesson is that, however powerful it may be, America lacks the strength to run the world by itself. It needs powerful regional partners to represent its interests in each region. China, because of its power and rapid growth, is a perfect partner for America in this regard. This is especially true because, while China is growing at a record pace, the prospect of singular Chinese domination evokes hatred and fear in its neighbors, particularly economic powerhouses Japan and South Korea.4 Because of these issues, China needs to be at least loosely connected to the strong US alliances with its rival Asian powers. Therefore, because the US needs China’s influence to balance the region and keep North Korea and Russia in check, a direct, bilateral foreign policy agreement or ‘understanding’ to cooperate is the best policy for both states.

Economically, bilateral cooperation between America and China is even more essential. As it currently stands, the two nations’ economies already dovetail, with American consumption sustaining Chinese production and Chinese products and money sustaining the American economy and debt markets. While America’s economic dependence on China, particularly as it relates to American debt, often gets the most press, the reality is that both sides would suffer if this system of economic cooperation fell apart.5 If the American consumer economy and the dollar collapsed overnight, China would see its manufacturing sector collapse, while losing the cash reserves to provide stimulus to deal with the problem.6 Because of this mutual co-dependency on both the economic and foreign policy fronts, the US should seek both to sign several bilateral agreements aimed at cooperation and to integrate China more fully into international institutions. In so doing, it will stimulate economic growth, while also making China’s economy more dependent on its global partners, particularly the United States.

RUSSIA

Russia represents a unique foreign policy paradox: how does one deal with a state whose power and influence are both highly under- and over-estimated at the same time? On one hand, it is easy to dismiss Russia: she has a one-horse energy economy with a weak conventional military, a declining population, and no key international alliances outside the old Soviet sphere to speak of. However, she is also territorially enormous, possesses a large nuclear arsenal, and is important to US influence in Eastern Europe, Central Asia, and the Middle East. Furthermore, along with India and Japan, she could serve as an essential partner for the United States in balancing the expansion of Chinese regional influence.

While many veterans of the Cold War tend to consider Russian policy over the last decade as trending towards the brinkmanship politics of yesteryear, the evidence actually suggests that ‘Putinist’ politics, named for Russia’s current Prime Minister Vladimir Putin, have a uniquely czarist streak. Whereas the Soviet Union sought to spread Communism internationally and engaged in a superpower struggle to dominate the globe, Russia today seems more focused on rebuilding a sphere of influence and becoming an independent great power. As such, the United States should attempt to engage Russia on issues within what is considered to be its sphere: Eastern Europe and Central Asia. This does not mean that the US should abandon its newer allies in Eastern Europe. What it means is that America should abandon policies that unnecessarily antagonize Russia, such as missile defense plans in Eastern Europe, and instead positively engage with Russia on issues of mutual necessity.

One good example of how this could work in practice concerns Iran’s nuclear program, which Russia has recently protected at the UN solely as a counterweight to American power in the region. However, despite this recent support, a nuclear Iran is hardly in Russia’s interest, so if America can engage it on this issue and remove other antagonizing factors, it could see tangible results. Even more importantly, if the US can engage Russia bilaterally, it can and should use her as a counterweight to Chinese influence in Central Asia. Renewed Russian and Chinese engagement could challenge US power globally, but if the US can work to prevent that and engage both bilaterally, it should preserve its status as a unique global actor.

Page 18: HURJ Volume 11 - Spring 2010


BRAZIL

Historically speaking, the United States has never had a rival for influence within the Western Hemisphere. While Brazil is unlikely to ever truly rival its northern neighbor, its large size and growing economy make it an ideal co-hemispheric partner for future US interests. Brazil is important at both a regional and a global level: should the US maintain the support of the other two powers, Brazil and Canada, in its own hemisphere, it can work on global issues without worrying about its own backyard. To do this, America will need to tailor hemispheric agreements to Brazil’s needs in order to foster its own growth, while giving Brazil a reason to support both regional and global US interests.

At first glance, Brazil seems to be an odd final member of this quartet: the European Union, China, and Russia are all larger and more powerful states, which would seem to have more to offer the United States on a geopolitical level. However, Brazil is, in fact, central, because it has the potential to tip the balance in Latin America for or against American interests in future debates. The United States must work to appease Brazilian interests by investing in its economic growth and keeping it from turning to other rising powers, especially China, for its most important economic and security partnership agreements. By cooperating with Brazil fully on matters pertaining to economic growth, the US can use Brazil to retain its geopolitical sphere. In many ways, Brazil represents the key to this uni-multipolar strategy for the US, since its cooperation would allow America to retain control of the Western Hemisphere and utilize its influence in other regions. Combined with continued transatlantic cooperation, a secure partnership with Brazil would leave Asia as the only potentially muddled regional theater in the uni-multipolar system.

INDIA

Each one of this essay’s subsections details necessary changes to American relationships with future partners and regional rivals. However, of all the foreign policy changes the US needs to make, none is more important than completely redefining our relationship with India. Broadly speaking, this essay has, to this point, suggested that the United States secure its own backyard (Brazil) while weaning itself away from dependence on its European allies in NATO and instead working to jointly cooperate with and contain a future superpower in China and a resurgent one in Russia. While these moves are all necessary, the linchpin that would make this new policy system work is a new and dynamic alliance between the United States and India.

Historically, America and India have not been terribly close bedfellows. Throughout much of the Cold War, India was a largely socialist state that viewed the US warily because of the superpower’s consistent support for its Pakistani neighbors. However, this has changed in the last decade for a number of different reasons. First, India has opened its economy to foreign enterprise and no longer embraces a socialist economic or political model, which has increased America’s interest in a strategic partnership, as Washington increasingly views India as a future economic superpower which already boasts a functioning democracy. This development has occurred alongside Pakistan’s increasingly debilitating struggle with corrupt, dictatorial governance and homegrown terror cells. As the US increasingly watches India flourish while Pakistan lurches toward becoming a failed state, the incentive to drastically alter our relationship with the rising subcontinental state is growing steadily.

To put it bluntly, India is the key to the successful construction of this new US uni-multipolar world view. It provides the ideal ally with which the United States can successfully counterbalance Chinese growth and persuade the People’s Republic to endorse a Sino-American partnership instead of a confrontation. For this reason, the US should dramatically accelerate its current efforts to establish an alliance with India. While its efforts up to this point, which include a nuclear deal, the Doha trade rounds, and limited military cooperation, have been somewhat effective, more is needed. Therefore, the US should couple its current disengagement from its Pakistan alliance (due to the homegrown terror problem) with India’s fear of Chinese strength and use these to build on the current initiatives and work rapidly towards forming an individual alliance with India.

The reasoning for this, in power terms, is simple. Today, the US has no true rival and is economically, politically, and militarily capable of maintaining its hegemony without a superpower partner in the short term. However, as American power begins to plateau over the next several decades while its rivals’ power grows exponentially, that will change, and China could look to challenge US influence in Asia. But if the US forms a strong, close, individual alliance with India based on mutual security, information-sharing, and bilateral economic stimulation, this development, coupled with America’s already strong alliances with Japan, Australia, and South Korea, would be more than enough to maintain US hegemony in Asia, while successfully accommodating ever increasing levels of Chinese growth and cooperation. The US would effectively change from (in 2009) a stand-alone superpower with a shoestring of alliances to a superpower maintaining its status as regional arbiter through a combination of superior regional partnerships and individual strength. This change, combined with maintaining strong trans-Atlantic ties, would be an excellent platform to maintain US uni-multipolar hegemony.

Page 19: HURJ Volume 11 - Spring 2010


CONCLUSION

In the ensuing decades, as it continues to grow yet sees its relative power compared to the world’s other major actors wane, the United States will need to redefine its place in the world. Over the course of its history, America has been, at different times, a weak isolationist state, a multipolar power, a bipolar superpower, and a unipolar hegemon, so precedents for redefining the US role in the world are certainly present. In redefining its role, the United States should seek uni-multipolarity. This new system will allow America to not only absorb, but also benefit from, the growth of these new power states. Furthermore, it would allow the US to be an integral part of all global initiatives without having to lead (and bankroll) every single one, as has been the case for almost two decades.

Uni-multipolarity will be difficult to achieve, but certainly not impossible. America, unlike its hegemonic precursors, has a chance to redefine its role while it still has the power to do so. Though many who currently see America in recession have declared the US era of dominance to be over, the fact is that, despite its present economic weakness and military quagmires in Iraq and Afghanistan, America in 2010 still has no true rival and will not for at least another decade. US policymakers should recognize that US unipolar hegemony may be coming to an end because of the inexorable “Rise of the Rest,” but this does not mean that US power itself is waning or that the US is on the verge of losing its status as the world’s main actor. It means, instead, that the US remains alone atop the totem pole, but is now confronted by a number of rising states that will begin to rival it in power. However, while many writers have continued to equate this with an accompanying decline in American power and influence, there is no reason that this should prove to be the case. In fact, these developments actually represent an opportunity for the US to maintain its old trans-Atlantic ties while cultivating new alliances with two key states, India and Brazil. These alliances, the keys to the uni-multipolar system, should help maintain America’s role as international arbiter and prop up its relative strength in efforts to both partner with and contain a resurgent Russia and an emerging China. Ultimately, should these policies be pursued and these goals attained, there is no reason that America could not flourish in a different, yet perhaps even more prosperous, way well into the future.

References

1. “US Historic GDP Chart.” Google Public Data. Web. <http://www.google.com/publicdata?ds=wb-wdi&met=ny_gdp_mktp_cd&idim=country:USA&dl=en&hl=en&q=us+gdp+chart>.
2. “China GDP Growth Chart.” Google Public Data. Web. <http://www.google.com/publicdata?ds=wb-wdi&met=ny_gdp_mktp_cd&idim=country:CHN&dl=en&hl=en&q=china+gdp+chart>.
3. “EU Institutions and other bodies.” Europa. Web. <http://europa.eu/institutions/index_en.htm>.
4. Ellwell, Labonte, and Morrison, 2007. “Is China a Threat to the US Economy?” CRS Report for Congress. Web. <http://www.fas.org/sgp/crs/row/RL33604.pdf>.
5. Ibid.
6. “China’s development benefits US Economy.” China Daily. Web. 28 August 2005. <http://www.chinadaily.com.cn/english/doc/2005-08/28/content_472783.htm>.

Page 20: HURJ Volume 11 - Spring 2010


Biofuels and Land Use Changes: Flawed Carbon Accounting Leading to Greater Greenhouse Gas Emissions and Lost Carbon Sequestration Opportunity

Julia Blocher / Staff Writer

As the United States searches for solutions to climate change and engages in the most important energy negotiations to date, it is crucial that decision makers amend a significant carbon accounting error, attached to bioenergy, present in current energy legislation, which could lead to widespread detrimental land use changes as well as greater greenhouse gas emissions. International carbon accounting conventions and the Waxman-Markey comprehensive energy bill (ACES) fail to account for both the lifecycle emissions of biofuels and the potential future emissions from land conversion that will result from increased demand for biofuel crops. For most scenarios, converting land to produce first-generation biofuels creates a carbon debt of 17 to 420 times more CO2-equivalent than the annual greenhouse gas reductions that these biofuels would provide by displacing fossil fuels.1 Furthermore, the carbon opportunity cost of the land converted for biofuel production is not considered; the sequestration potential of lands that could support forest or other carbon-intensive ecosystems is often much higher than the greenhouse gas emissions saved by using the same land to replace fossil fuels with bioenergy. Biofuel credits create market incentives for farmers to convert fertile cropland and already biologically productive but unused land to meet demand for biofuels and displace demand for crops.2

In order to understand the elements involved in evaluating the use of biofuels as an energy source, several dimensions of the argument will be treated. First, a typology of biofuels and their uses will be presented, focusing on the emissions savings presented by first-generation versus second-generation biofuels. Second, the way in which carbon accounting rules have failed to consider all of the aspects of biofuel production, combustion, and the carbon opportunity cost of the land required will be discussed. Finally, the current state of biofuels policies will be addressed.

Typology of biofuels

Bioenergy is typically divided into two overall categories, first- and second-generation biofuels. Fuel from biomass is produced biologically by using enzymes derived from bacteria or fungi to break down and ferment the plant-derived sugars, producing ethanol. At present, because the energy output of biofuels remains lower than that of conventional fossil fuels, biofuels are most commonly used by mixing up to 85 percent ethanol with petrol for transportation uses.3 The preferred first-generation biofuel crops, because of their effectiveness as substitutes for petroleum, are corn, sugarcane, soybeans, wheat, sugar beet, and palms.4 Second-generation biofuels are considered to have the ability to solve some of the problems of first-generation biofuels, and can supply greater amounts of biofuels without displacing as much food production. The materials used for second-generation biofuels are non-food cellulosic biomass, which can be leafy materials, stalks, wood chips, and other agricultural waste. Because the sugars have to first be freed from the cellulose by breaking down the hemicelluloses and lignin in the source material (via hydrolysis or enzymes), second-generation biofuel is more expensive to produce and generally has a smaller yield than does first-generation bioethanol.5 The U.S. Department of Energy estimated in 2006 that it costs about $2.20 per gallon to produce cellulosic ethanol, twice as much as the cost of ethanol from corn.6

The problem with current carbon accounting

The UNFCCC, E.U. cap-and-trade law, and ACES inappropriately exempt biofuels as ‘carbon neutral’, assigning lifecycle emissions from bioenergy solely to land-use accounts while counting the amount of carbon released from combustion of the biofuels as equal to the amount of carbon uptake from the production of biomass feedstocks.7 This rewards bioenergy in comparison to fossil fuels, which are taken out of underground storage. This accounting skews the carbon reporting in favor of the destination nations of the biofuels, where the tailpipe and smokestack energy emissions are not debited.8 The areas of the world with the fastest growing biofuels production, notably Southeast Asia and the Americas, have to report net carbon release from harvesting biomass while the importing countries exclude the emissions from their energy accounts. The Kyoto Protocol caps energy emissions of developed countries but does not apply limits to land use in developing countries.9
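A minimal illustrative sketch of the asymmetry described above, written in Python. All tonnage figures are hypothetical placeholders (not values from this article or any official inventory); it simply contrasts the convention that books biofuel combustion as offset by feedstock uptake with a fuller tally that also counts processing energy and an amortized land-conversion debt.

def energy_account_emissions(combustion_co2, feedstock_uptake_co2):
    # Convention criticized in the text: combustion is treated as fully offset
    # by feedstock carbon uptake, so the fuel is booked as carbon neutral in
    # the importing country's energy account.
    return max(combustion_co2 - feedstock_uptake_co2, 0.0)

def lifecycle_emissions(combustion_co2, feedstock_uptake_co2, processing_co2,
                        land_conversion_debt_co2, amortization_years):
    # Fuller tally: production energy plus the one-time land-conversion carbon
    # debt amortized over the accounting horizon, net of genuine feedstock uptake.
    return (combustion_co2 + processing_co2
            + land_conversion_debt_co2 / amortization_years
            - feedstock_uptake_co2)

# Hypothetical per-hectare, per-year figures (t CO2), chosen only to show how
# the two methods can point in opposite directions.
print(energy_account_emissions(10.0, 10.0))               # 0.0 (looks carbon neutral)
print(lifecycle_emissions(10.0, 10.0, 2.0, 300.0, 50.0))  # 8.0 (net emitter)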

Page 21: HURJ Volume 11 - Spring 2010


The emissions from the land conversion required to produce biofuel feedstocks are also not counted. Biofuels appear to reduce emissions when, in fact, depending on the source of biomass, the carbon debt is much higher than is repaid by the carbon captured in the production of biomass for feedstocks.

In isolation, replacing fossil fuels with biofuels does not decrease emissions. The amount of CO2 released by combusting biofuels is approximately the same per unit of energy output as traditional fossil fuels, and the amount of energy required to produce biofuels is typically more than that required to process petroleum.10 Biofuel production can only reduce greenhouse gas emissions if growing the feedstocks sequesters more carbon than the land would sequester naturally, and if the annual net emissions from production and combustion are less than the life-cycle emissions of the fossil fuels they displace. This can only be achieved by land management changes that increase carbon uptake by terrestrial biomass or by utilizing plant waste.11 Soils and biomass are the most important terrestrial stores of carbon, holding about 2.7 times more than the atmosphere.12 Clearing vegetation for cropland, coupled with the burning or microbial decomposition of organic carbon in soils and biomass, releases significant amounts of carbon. ‘Carbon debt’ refers to the amount of CO2 released during the first 50 years of this change in land use. Until the carbon debt is repaid, biofuels from converted lands have greater net greenhouse gas emissions than fossil fuels.13

In most scenarios, the time required to repay the carbon debt from land conversion with the annual greenhouse gas savings from replacing fossil fuels with biofuels is so long that bioenergy represents a net increase in emissions. One study showed that all but two biofuels, sugarcane ethanol and soybean biodiesel, will increase greenhouse gas emissions for at least 50 years and up to several centuries.14 The amount of carbon debt varies with the productivity of the natural ecosystem. For example, converting Amazonian rainforest for soybean biodiesel would create net carbon emissions of 280 metric tons per hectare, a debt that would take about 320 years to repay.15 The use of second-generation biofuels improves the tradeoff between the carbon debt of land conversion and the emissions saved from displacing fossil fuels, but not considerably. Rapidly growing grasses show promise for producing cellulosic biofuels; calculations show that ethanol from switchgrass at high yields and conversion efficiency attains a carbon savings of 8.6 tons of carbon dioxide per hectare annually relative to fossil fuels.16 However, this does not take into account the opportunity cost of allowing the land to regenerate with trees.
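The payback arithmetic in the paragraph above is a single ratio; the short Python sketch below restates it using only the figures cited there (the 280 t CO2/ha soybean-biodiesel debt, its roughly 320-year payback, and the 8.6 t CO2/ha/yr switchgrass saving). The cross-comparison at the end is purely to illustrate the formula, not a claim about growing switchgrass on converted rainforest.

def payback_years(carbon_debt_t_per_ha, annual_saving_t_per_ha):
    # Years of fossil-fuel displacement needed to repay the one-time carbon
    # debt released when the land is converted.
    return carbon_debt_t_per_ha / annual_saving_t_per_ha

# Implied annual saving for soybean biodiesel on former rainforest:
print(round(280.0 / 320.0, 2))              # ~0.88 t CO2 per hectare per year

# The same 280 t/ha debt repaid at the cited switchgrass rate, for comparison:
print(round(payback_years(280.0, 8.6), 1))  # ~32.6 years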

Land conversion emissions and carbon opportunity cost

Carbon credits, the lowering of carbon caps, and agricultural subsidies for biofuels crops in the U.S. and E.U. create global economic incentives for large-scale conversion of land for bioenergy. Land-use changes already contribute 17 percent of the world's total greenhouse gas emissions; converting biologically productive land for biofuels only increases the need for the carbon sequestration thought to slow climate change.17 Biofuels production on existing agricultural land indirectly causes greenhouse gas emissions through land conversion elsewhere to fill the need for cropland.18 One study estimates that, given current accounting methods, a global carbon dioxide target of 450 parts per million could displace virtually all global natural forests and savannahs by 2065 and release up to 37 gigatons of carbon dioxide per year, an amount close to total human carbon dioxide emissions today.19 Converting a mature forest for corn ethanol, for example, releases 355 to 900 tons of carbon dioxide within a few years and sacrifices the forest's ongoing carbon sequestration potential of at least seven tons per hectare per year.20

Carbon accounting is one-sided: it attempts to account for what is gained but ignores what is given up. Current climate legislation treats the land base used for bioenergy as having come carbon-free, which is not the case. The land required for first generation biofuels is typically productive enough to support tree growth if left alone, which would sequester significantly more carbon than substituting bioenergy for fossil fuels saves.21 For example, a hectare of land in the U.S. that could be used to grow corn for ethanol could instead be left to regenerate into trees; the resulting biomass would capture carbon dioxide at a rate of 7.5 to 12 tons per year.22 In the tropics, the carbon opportunity cost is greater. Biofuels made from sugarcane were calculated to save roughly nine tons of carbon per hectare annually, and palm oil about 7.5 tons per hectare, while reforestation in the tropics sequesters carbon dioxide at a probable rate of 14 to 28 tons per year.23 Some researchers therefore advocate economic incentives that preserve forests and other carbon-dense ecosystems rather than promote bioenergy.
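To illustrate the opportunity-cost comparison in the paragraph above, the minimal sketch below contrasts the per-hectare annual carbon savings quoted for sugarcane ethanol and palm oil with the article's range for tropical reforestation. The numbers come from the text; the loop and variable names are purely illustrative.

```python
# Per-hectare comparison of annual carbon benefits, using the figures quoted above
# (tons CO2 per hectare per year). Purely illustrative.
biofuel_savings = {"sugarcane ethanol": 9.0, "palm oil biodiesel": 7.5}
tropical_reforestation = (14.0, 28.0)  # probable range quoted for tropical regrowth

for fuel, savings in biofuel_savings.items():
    low, high = tropical_reforestation
    print(f"{fuel}: {savings} t/ha/yr saved; letting the land regrow would "
          f"sequester {low - savings:.1f}-{high - savings:.1f} t/ha/yr more")
```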


If developed countries were to transfer their total investment in biofuels (estimated at $15 billion) into preserving forests, promoting reforestation, and preventing the destruction of peatlands, the total cost of halting and reversing climate change would be halved.24 Preventing deforestation and wetlands destruction requires no new technology and entails little capital investment; the cost could be as low as $0.10 per ton of carbon dioxide.25

Politics of biofuels

Conventional lifecycle analyses that award biofuels false carbon credit continue to plague energy legislation.26 Under the Energy Independence and Security Act (EISA), signed into law in 2007, the total amount of biofuels added to gasoline in the U.S. must rise to 36 billion gallons by 2022, four times current levels.27 Furthermore, since Barack Obama's election, requiring biofuels to replace ten percent of transportation fuel by 2020 has become a common national policy around the world. The U.S. and Europe both sponsor the conversion of 'food' crops into first generation biofuels through massive subsidies, even as global food demand is expected to increase by 50 percent by 2030.28 Current crop yields are not high enough to avoid replacing farmland used for biofuels production with new cropland elsewhere. Meeting U.S. and E.U. objectives would require 60 million hectares of land by 2020, a target that would have biofuels production consume 70 percent of the expansion in land planted to wheat, maize, oilseeds, palm oil, and sugarcane.29 To avoid land-use change, world cereal yield growth rates would have to triple the Department of Agriculture's current projections.30

Proponents of biofuels are beginning to recognize the effects of land-use changes in policy negotiations. Notably, the EISA states that cellulosic biofuels must, after accounting for both direct (tailpipe) and indirect land-conversion emissions, offer at least a 60% lifecycle greenhouse-gas reduction relative to conventional gasoline.31 However, the direct and indirect causes and effects of land-use changes remain difficult to quantify. In the E.U. and the U.S., biofuels producers can circumvent the rules of land-conversion carbon accounting by segregating materials destined for biofuels from those destined for food. For example, one tank of oil from already cleared land can qualify for biofuels subsidies, while another tank of oil from newly cleared land goes to food production.32 Additionally, European policies and the EISA make provisions for imports of biofuels; the E.U. directive allows 43 percent of the biofuels target to come from imports.33 This has accelerated incentives for developing countries to produce biofuels, particularly in Latin America and Southeast Asia. Much of the expansion of biofuels production in these regions has come from palms grown on former peatlands and wetlands. These lands carry the highest carbon cost because of their ability to store large quantities of carbon, which is released when the land is drained and cleared. In these biologically productive areas, between 43 and 170 tons of carbon per hectare of converted land are released per year over 30 years.34

By excluding emissions from land-use change, carbon accounting is flawed: it counts the carbon benefits of biofuels but not the carbon costs of their source. Treating biofuels as carbon neutral by counting the change in biomass stocks and ignoring tailpipe emissions has, in recent years, been recognized as inaccurate, yet current legislation continues to ignore the carbon debt created when biologically productive land is converted to provide the land base needed to produce first generation feedstocks. As a result, biofuels appear to reduce carbon emissions when, in actuality, their total emissions can be much higher than the life-cycle emissions of traditional fossil fuels. When calculations count costs as well as benefits, biofuels actually increase the greenhouse gas emissions that contribute to climate change. Furthermore, the carbon opportunity cost of biologically productive land is ignored; carbon uptake would be much greater if the land were allowed to regenerate into forests or similarly carbon-dense ecosystems. Current biofuel policies in developed countries risk exacerbating climate change by creating incentives that will lead to worldwide deforestation and threaten food security.

The future of biofuels

Certain second generation biofuels, conversely, can indeed be beneficial, not only for the transition away from traditional fossil fuels but also for reducing greenhouse gas emissions. Biofuels derived from some second generation feedstocks have lower life-cycle greenhouse gas emissions than fossil fuels and can be produced in substantial quantities. These feedstocks include perennial plants grown on degraded agricultural lands, crop residues, slash (branches and thinnings) left from sustainably harvested forests, double crops and mixed cropping systems, and animal, municipal, or industrial waste.35 Additional factors may eventually improve the greenhouse gas reductions offered by biofuels, such as technological improvements that shorten their carbon payback period by increasing the carbon uptake potential of biomass.36 Since crop yields have grown steadily over the last century, proponents of bioenergy argue that yields can improve enough to eliminate the need to convert more land for biofuels production.37

As the global population and demand for energy increase, the need for policy-makers and technology to find solutions to climate change will only become more urgent and more expensive. The discourse on biofuels illustrates that, as legislators engage in the most important climate treaty negotiations in U.S. history, it is vital that the technologies proposed as solutions to climate change be properly evaluated. Bioenergy can and should be an important part of energy legislation and make up a substantial portion of our future transportation energy demand, but policy must take into account real energy gains and lifecycle greenhouse gas emissions, preservation of natural carbon sinks, and the sustainability of the global food supply. Clear tasks for the coming years include fixing the bioenergy carbon accounting error, reevaluating carbon credit systems and biofuels subsidies, and developing second generation biofuels technology.

References:
1. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
2. Tim Searchinger et al., "Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change," Science 319 (2008): 1238.
3. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
4. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
5. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
6. J. Weeks, "Are We There Yet? Not quite, but cellulosic ethanol may be coming sooner than you think," Grist Magazine (2006), http://www.grist.org/news/maindish/2006/12/11/weeks/index.html.
7. IPCC, "2006 IPCC Guidelines for National Greenhouse Gas Inventories, prepared by the National Greenhouse Gas Inventories Programme," Institute for Global Environmental Strategies (IGES): Tokyo, 2007.
8. Ibid.
9. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
10. IPCC, "2006 IPCC Guidelines for National Greenhouse Gas Inventories, prepared by the National Greenhouse Gas Inventories Programme," Institute for Global Environmental Strategies (IGES): Tokyo, 2007.
11. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
12. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
13. Ibid.
14. Ibid.
15. Ibid.
16. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
17. Ibid.
18. Tim Searchinger et al., "Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change," Science 319 (2008): 1238.
19. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
20. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009): 8-13.
21. Ibid.
22. Ibid.
23. Ibid.
24. Dominick Spracken et al., "The Root of the Matter: Carbon Sequestration in Forests and Peatlands," Policy Exchange (2008), http://www.policyexchange.org.uk/images/libimages/419.pdf.
25. Ibid.
26. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
27. Energy Independence and Security Act of 2007, Public Law 110-140, H.R. 6, 2007.
28. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
29. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
30. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
31. Energy Independence and Security Act of 2007, Public Law 110-140, H.R. 6, 2007.
32. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
33. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
34. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
35. David Tilman et al., "Beneficial Biofuels – the food, energy, and environmental trilemma," Science 325 (2009): 270.
36. J. Weeks, "Are We There Yet? Not quite, but cellulosic ethanol may be coming sooner than you think," Grist Magazine (2006), http://www.grist.org/news/maindish/2006/12/11/weeks/index.html.
37. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).


BEYOND KYOTO: EMISSIONS TRADING IN AMERICA

Toshiro Baum / Staff Writer

In December 1997, nations from around the world adopted the Kyoto Protocol, an international agreement that compelled signatory nations to reduce their greenhouse gas emissions. The Kyoto Protocol was seen as a necessary response to a trend of global warming, which scientists contended was a result of the "Greenhouse Effect," whereby human combustion of fossil fuels and other chemicals changes the chemistry of the Earth's atmosphere, causing it to trap more of the Sun's energy. If this were to continue, scientists warned, it would cause further global warming, resulting in a number of negative ecological effects with equally harmful economic, social, and political repercussions.

Meanwhile, in the United States, the Senate passed a related resolution 95-0, stating that the US should not become a signatory to any international agreement that "would result in serious harm to the economy of the United States."1 Although the Kyoto Protocol had been signed by then Vice President Al Gore, the Clinton Administration chose not to undertake the politically challenging task of persuading the Senate to ratify it. This stance continued with the election of George W. Bush, whose administration cited concerns over the leeway granted to large developing nations such as China despite their high greenhouse gas emissions.2 Throughout its tenure, the Bush Administration, along with many like-minded conservatives, continued to resist government emissions reduction programs, citing concern over economic effects.3

With the election of a new president, Barack Obama, in 2008, environmental advocates regained hope that global warming would be addressed at the national level. Many in the scientific community were becoming increasingly disturbed by trends that warned of significant ecological upset if global warming continued, and problems such as the well-publicized plight of polar bears in the Arctic Circle were becoming familiar to a greater number of Americans. Reflecting this mounting concern, the Obama Administration's acknowledgment that comprehensive and mandatory policies were needed to address global warming seemed to signal a willingness to confront a growing problem.4

In this new atmosphere, characterized by the oft-repeated idea of "change," it came as little surprise that the newly empowered Democratic majority in Congress would move toward legislation that addressed climate change. Most notably, the House of Representatives voted to pass the American Clean Energy and Security (ACES) Act, a highly complex bill that would, among other measures, enact a system of carbon emissions limitations built on tradable carbon credits, a system known to many as "cap and trade."5 Although the final fate of the Waxman-Markey ACES Act is yet to be determined (it still must pass through the more conservative Senate before reaching the President to be signed into law), it is an indication that the era of political inaction on climate change may be over. Sweeping policies, such as the ones called for by the ACES Act, will change the face of the American economy and, with this, the faces of American society and the global community.6 Perhaps even more significant is the fact that these changes have support from diverse sectors of American society. According to a House of Representatives press release, "the [ACES] legislation was backed by a coalition that included electric utilities, oil companies, car companies, chemical companies, major manufacturers, environmental organizations, and labor organizations, among many others."7 Such wide support indicates that sweeping policy changes have a real chance of moving out of the arena of politics and into the daily lives of Americans.

However, effectively addressing climate change presents a new and complicated set of problems, and it is often difficult to fully grasp the intricacies and implications of many proposed policy solutions. The following paper examines a selection of policy options and proposals concerning the main avenue legislation has followed up to this point: reducing greenhouse gas emissions. The examination focuses mainly on emissions limitation and emissions trading systems, as these are the most prevalent policy proposals. Each proposal will be examined and the efficacy of certain policy mechanisms explored.

Policy Approaches to Reducing Emissions

When examining potential policies designed to reduce global warming by limiting the amount of greenhouse gases in the atmosphere, two considerations must be kept in mind. First, there is the ability of the policy to effectively reduce emissions, without which any policy becomes futile. Second, there must be an examination of how such a policy will affect the economy, and, by extension, the lifestyle and society, of a nation. The ideal policy will accomplish two principal tasks: effectively reduce greenhouse gas emissions while maintaining or improving the quality of life of the nation's citizens, principally by stimulating or preserving economic growth and prosperity. Relevant policy options will be evaluated according to these two criteria.

Policies designed to reduce greenhouse gas emissions can be divided into two main categories: non-market-based and market-based. The first category includes policies that do not rely on market mechanisms (i.e., economic incentives) to change behavior. Non-market-based policies include the heavy limitation or outright banning of greenhouse gas emissions, similar to what has already been done with a number of ozone-depleting substances.8 Although these are the most effective methods for reducing total greenhouse gas emissions, such restrictive policies are impractical given the prevalence of greenhouse gas-reliant practices in modern life, which would have to be curtailed under prohibitive limits. Because policies of heavy limitation or total prohibition would be extremely disruptive to the economic life and society of a nation, they fail the second criterion for evaluating greenhouse gas policies and will not be discussed further in this paper.

The second category, market-based policies, focuses on the use of market mechanisms, such as raising the price of emitting greenhouse gases, to provide an economic incentive to reduce emissions. Market-based policies are generally considered more flexible and are therefore often more politically palatable.9 They can be further divided into two groups: emissions taxes, which are designed to make greenhouse gas emissions reflect their "true" cost by factoring in environmental costs, and emissions trading systems, which seek to place a limit on emissions while preserving economic flexibility through trading.10 Emissions tax policies are interesting from an economic standpoint (much debate has gone into how one would determine the "true" cost of emissions, as well as how this cost would be implemented in a functioning modern economy), but they are not often seen as ideal policy options. Although a number of congressional bills have proposed emissions taxes (or measures similar to them), critics note that emissions taxes do not guarantee a certain level of emissions reductions and thus often fail to achieve the main point of climate change legislation: combating global warming by reducing the amount of greenhouse gases in the atmosphere.11 Since emissions taxes fail to guarantee significant emissions reductions, they also fail to satisfy the first criterion for evaluating greenhouse gas policies and will not be discussed further in this paper.

The second group of market-based policies, emissions limitation and emissions trading policies, is characterized by an emissions limit (the "cap" in cap and trade) that the economy in question cannot exceed. Emissions trading policies include a provision that allows economic players to trade specified units of emissions among themselves, as long as the total number of emissions units does not exceed the set limit (the "trade" in cap and trade). These policies, known as emissions trading systems (ETS), are generally preferred by policy makers because they retain a certain amount of flexibility in the economy and help it adapt to the new conditions imposed upon it. Emissions trading systems, due to their political viability and potential ability to satisfy the two goals set out above, will be the main focus of this paper.

Emissions Trading Systems

Emissions trading systems are highly complex. In general, a trading system is composed of a limit, or cap, which clearly indicates the total amount of greenhouse gas emissions permitted in the economy. This emissions total is then broken down into units that can be traded, allowing for flexibility in the economy and, ideally, for markets to self-correct and reduce inefficiency. In designing an ETS, a policy must incorporate a number of features, each of which has its own effects.
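The basic cap-and-trade structure just described can be sketched in a few lines of code. The sketch below is a minimal illustration only: the class, method names, firm names, and cap size are all hypothetical, and a real ETS would be far more elaborate.

```python
# Minimal sketch of a cap-and-trade registry: a fixed cap is split into
# allowance units that firms can trade or surrender against emissions.
# All names and numbers are hypothetical placeholders.

class TradingRegistry:
    def __init__(self, cap_tonnes: int):
        self.cap = cap_tonnes
        self.holdings: dict[str, int] = {}   # firm -> allowances (1 allowance = 1 tonne)

    def allocate(self, firm: str, tonnes: int) -> None:
        if sum(self.holdings.values()) + tonnes > self.cap:
            raise ValueError("allocation would exceed the economy-wide cap")
        self.holdings[firm] = self.holdings.get(firm, 0) + tonnes

    def trade(self, seller: str, buyer: str, tonnes: int) -> None:
        if self.holdings.get(seller, 0) < tonnes:
            raise ValueError("seller lacks sufficient allowances")
        self.holdings[seller] -= tonnes
        self.holdings[buyer] = self.holdings.get(buyer, 0) + tonnes

    def emit(self, firm: str, tonnes: int) -> None:
        """Surrender allowances to cover reported emissions."""
        if self.holdings.get(firm, 0) < tonnes:
            raise ValueError("emissions exceed allowances held")
        self.holdings[firm] -= tonnes

registry = TradingRegistry(cap_tonnes=1_000)
registry.allocate("PowerCo", 600)
registry.allocate("SteelCo", 400)
registry.trade("SteelCo", "PowerCo", 100)   # trading shifts units, never the cap
registry.emit("PowerCo", 650)
```

The design choice the sketch highlights is that trading moves allowances between firms without ever enlarging the economy-wide cap, which is what distinguishes cap and trade from a pure tax.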

Like an emissions tax, an ETS must choose the level at which it will principally regulate. What this "point of regulation" boils down to is who will be subject to monitoring and to reporting emissions to the administering agency. Since many emissions trading systems use allowances to determine permitted emissions, the point of regulation can be thought of as who must hold the allowances required for emissions. For emissions from transportation, for example, regulation could come at a number of different points: the primary producers or importers of greenhouse gas-producing fuels, the distributors of those fuels, or the end users who actually combust the fuels and produce the greenhouse gases. This, in turn, raises numerous policy concerns about how to monitor and quantify the actual amount of greenhouse gases emitted.

One of the main policy questions is how to determine emissions allowances, which are sometimes referred to as "credits" or "shares." (For clarity, this paper uses "allowance" to refer to the tradable unit of emissions in an emissions trading system.) Allowances present a number of possible benefits and problems for policy makers. For example, their distribution can take several forms: government allocation, auctioning, or a combination of the two. Each has its own implications. Allocation is often seen as a handout, since producers or importers do not have to pay for the emissions they are allowed to produce but are instead granted them.12 Auctioning is seen as an effective way to let markets determine the price of emissions, but it can also be subject to collusion and speculation, which can artificially drive up prices and hurt smaller firms. Addressing this issue is crucial to making an emissions trading scheme effective and politically acceptable.

In an effort to address some of these potential market manipulation problems, numerous emissions limitation and emissions trading schemes include provisions for emissions offsets. Offsets are actions or projects that negate (offset) the emission of greenhouse gases into the atmosphere. Offset projects, if quantified and monitored, can be turned into "offset credits," which essentially act as additional emissions allowances.13 Regulated entities could use offset credits to help meet their emissions limit or could sell them to other entities, just like an emissions allowance. Offsets present opportunities for policy makers but also a number of problems. For example, policy makers seeking to encourage reforestation could include provisions in their trading scheme that allow covered entities to fund reforestation efforts and receive offset credits in return. In this way, offsets can be used to encourage a number of environmentally responsible practices, such as the preservation of forests or the capture of carbon dioxide emissions from factory smokestacks.14 However, any emissions trading policy will need to address the credibility of an offset project, most notably whether it actually offsets emissions at the level it claims to and whether it demonstrates the principle of "additionality." Additionality asks whether the offset project or action would have taken place even without the issuance of a carbon credit.15 A prime example of a situation that does not reflect additionality is a tree farm reforesting part of its land with the intention of harvesting it, since this would arguably have occurred anyway. Offsets also have other policy implications. Most importantly, offsets can help control the price of carbon allowances by acting as an alternative: speculation or collusion in the allowance market becomes more difficult and less profitable when greenhouse gas emitters have alternative methods of meeting their limits.16
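The offset-screening logic described above (verified quantity plus additionality) can be expressed as a simple eligibility check. The field names, example projects, and rules below are hypothetical placeholders for what a real accreditation program would specify, included only to make the concept concrete.

```python
# Sketch of an offset-credit eligibility check: a project earns credits only if
# its reductions are verified and it satisfies additionality. Hypothetical fields.
from dataclasses import dataclass

@dataclass
class OffsetProject:
    name: str
    claimed_tonnes: float        # CO2 the project claims to offset
    verified_tonnes: float       # CO2 an accreditor could actually verify
    would_happen_anyway: bool    # fails additionality if True (e.g., a tree farm
                                 # replanting land it intended to harvest)

def issue_offset_credits(project: OffsetProject) -> float:
    if project.would_happen_anyway:
        return 0.0                              # not additional: no credits
    return min(project.claimed_tonnes, project.verified_tonnes)

farm = OffsetProject("tree-farm replanting", 5_000, 5_000, would_happen_anyway=True)
capture = OffsetProject("smokestack CO2 capture", 8_000, 7_200, would_happen_anyway=False)
print(issue_offset_credits(farm))     # 0.0
print(issue_offset_credits(capture))  # 7200.0
```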

While offsets can quickly become complicated and confusing, it is important not to lose sight of the larger trading structure of an ETS, i.e., the method and framework in which actual trading takes place. Trading can take many different forms, with correspondingly different outcomes. Like the distribution of allowances, trading can be open to the public, restricted to a select few stakeholders, or a mix of the two. Open trading has the potential to become a powerful and vibrant part of the economy, but could also lead to speculation and price instability, whereas more closed forms of trading could lead to market manipulation by colluding groups or dominant stakeholders.

Any emissions trading scheme that auctions off allowances has enormous potential to generate revenue for the government. How these revenues are used will be a key tool for policy makers in crafting a politically acceptable proposal.

As a final note, this paper focuses exclusively on emissions trading systems that affect the United States as a whole. However, climate change and global warming are concerns that stretch across international boundaries, with consequences for communities around the world. Likewise, any proposed policy must also take into consideration the effects it will have on the US relative to other nations, especially in terms of economic competitiveness.

Present Legislation

THE AMERICAN CLEAN ENERGY AND SECURITY ACT

The Waxman-Markey Bill, also known as the American Clean Energy and Security (ACES) Act, or H.R. 2454, was an extensive piece of legislation introduced and passed by the US House of Representatives in late June of 2009. Below is a brief examination of the 1,400-plus-page bill, with a focus on the benefits and consequences of certain policy provisions.

The ACES Act establishes an emissions trading (cap and trade) system with a clear emissions limit for the economy, which promises definite reductions. This allows for the creation of a set allowance unit, as well as a schedule of how many allowances will be issued each year, allowing the economy to predict and adapt accordingly. The act also establishes a price floor and ceiling for allowances.

A price floor preserves the incentive to reduce emissions (if allowances were worth nothing, or next to nothing, there would be little or no incentive to reduce emissions, since cheap allowances would be abundant), and a price ceiling curbs speculative buying of allowances and mitigates harsh economy-wide effects, such as inflation.17
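A price collar of this kind reduces to a one-line clamp on the auction clearing price. The dollar values in the sketch below are arbitrary placeholders for illustration, not figures from the ACES Act.

```python
# Sketch of a price collar: the allowance price is kept between a floor (which
# preserves the incentive to abate) and a ceiling (which caps economy-wide cost).
# The dollar values are arbitrary placeholders, not figures from the ACES Act.

def collared_price(market_price: float, floor: float, ceiling: float) -> float:
    return min(max(market_price, floor), ceiling)

for p in (4.0, 15.0, 60.0):
    print(p, "->", collared_price(p, floor=10.0, ceiling=40.0))
# 4.0 -> 10.0, 15.0 -> 15.0, 60.0 -> 40.0
```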

The ACES Act also directs some revenue from the sale of allowances toward reducing the economic consequences of the bill for workers and for certain industries, as well as toward growing "green" sectors of the economy by investing in clean energy research, efficiency technologies, and climate change adaptation, all of which are crucial to mitigating the harsher downsides of the bill. ACES also provides for the creation of an emissions offset credit program overseen by a federal agency. While it is not clear how strict this monitoring agency's guidelines will be, it will be an important factor in the growth of the carbon offset industry and will help prevent market manipulation by offering an alternative to emissions allowances.




Additionally, the offsets provisions in ACES add an international dimension, allowing American firms to purchase foreign offset credits, provided they are subject to the same accreditation standards.

One of the most beneficial aspects of the ACES Act is that it would have a minimal net impact on the daily financial lives of most Americans. According to a report by the Congressional Budget Office (CBO), the net cost of the ACES Act to the American economy would be about $22 billion.18 Although this seems high, it amounts to an average net cost of only $175 per American household.19 While the figure would vary by income bracket and region, and with the major caveat that economic modeling often falls short of accurately predicting reality, it can reasonably be said that the ACES Act would meet the second goal of preserving the economic livelihood of American society.
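The CBO figures quoted above imply straightforward arithmetic. The sketch below reproduces it under the assumption of roughly 125 million U.S. households, a round number used only for illustration and not taken from the CBO report.

```python
# Back-of-the-envelope check of the CBO figures cited above. The household count
# is an assumed round number (~125 million), used only to show the arithmetic.
net_cost_to_economy = 22e9          # dollars, CBO estimate for the ACES Act
assumed_households = 125e6

per_household = net_cost_to_economy / assumed_households
print(f"~${per_household:.0f} per household per year")   # about $176, near the $175 cited
```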

The ACES Act does fall short in a number of areas, potentially establishing policies that would undermine its goal of reducing greenhouse gas emissions while preserving economic prosperity. Primary among these is the system of allocations to industries. Although intended to mitigate many of the negative economic effects the cap and trade system would have on industries, allocation is often seen as a corporate handout that will only increase the profits of regulated companies while still raising prices for consumers. In testimony before Congress last March, the Obama Administration's Budget Director, Peter Orszag, described a policy of allocation rather than auction as the "largest corporate welfare program that [would] ever [have] been enacted in the history of the United States."20 This system of allocations will not only grant corporations emissions allowances worth millions, it will also give them no incentive to change their business practices or to reduce emissions. Additionally, these allocations mean the federal government will receive less money that could be channeled toward research and toward mitigating the economic effects of the bill.

ACES also does not establish a clear point of regulation; since unallocated allowances are auctioned, presumably to the general public, it becomes difficult, even impossible, to track and monitor how such allowances are used. The ACES Act also allows for the banking of allowances, a practice that could lead to market manipulation through speculative trading or through cartel or monopolistic practices. According to a report by the Washington State Department of Ecology, holding auctions open to the public and permitting the banking of allowances greatly increases the chances of a speculative allowance market, market manipulation, or abuses such as the reselling of previously used emissions credits.21 In short, the open system of the ACES Act undermines stability and accountability in an area where such qualities are essential to the success of the policy.

THE VAN HOLLEN "CAP AND DIVIDEND" ACT (H.R. 1862)

The Van Hollen Cap and Dividend Act is another emissions trading system, sponsored by Representative Van Hollen of Maryland. Unlike the more comprehensive ACES Act, the Cap and Dividend Act covers only carbon dioxide-producing fossil fuels. Although it is less potent per ton than many other greenhouse gases, carbon dioxide (CO2) is distinguished by its prevalence and volume; it is the greenhouse gas emitted in the greatest quantity.

The Cap and Dividend Act establishes an emissions trading system with a clear limit, allowing markets to predict the number of allowances and adjust accordingly. The Van Hollen Act is also well constructed in that it has a clear point of regulation: the entity that makes the first sale of a fossil fuel. These "first sellers" form a group with clear boundaries, which makes them easy to track and regulate. Additionally, the Van Hollen Act auctions all of the available allowances, rather than allocating them, in an auction open only to the regulated group, generating more revenue for the federal government and allowing the regulated firms to determine a fair market price for allowances. This closed auction also makes tracking and monitoring of allowance use easier and, when combined with quantity limitations imposed on buyers, helps curb the ability of players to manipulate the market or engage in harmful speculative behavior.22 The Cap and Dividend Act is also unique in that it creates a dividend program, whereby part of the proceeds from allowance auctions is channeled back into the economy through a refund, or dividend, given to everyone "with a valid social security number."23 This program would help enormously in mitigating the rise in prices that would otherwise hurt American consumers.24 Like the ACES Act, the Van Hollen Act contains a number of provisions to direct federal funding toward growing clean or green sectors of the economy.
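The dividend mechanism just described is easy to sketch: a share of auction revenue is divided equally among everyone with a valid Social Security number. The revenue, dividend share, and population figures below are invented placeholders for illustration, not values from H.R. 1862.

```python
# Sketch of the cap-and-dividend flow: auction revenue -> equal per-person refund.
# Revenue, dividend share, and population are invented placeholders, not values
# taken from H.R. 1862.
auction_revenue = 50e9        # dollars raised from allowance auctions (placeholder)
dividend_share = 0.75         # fraction of revenue returned directly to individuals
eligible_people = 300e6       # holders of a valid Social Security number (placeholder)

per_person_dividend = auction_revenue * dividend_share / eligible_people
print(f"Each eligible person receives about ${per_person_dividend:.0f} per year")
```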

The Cap and Dividend Act is also beneficial in that it has specific international provisions designed to protect American industry: it levies a tariff on foreign imports of fossil fuel-intensive goods equal to the costs incurred by domestic firms under the emissions trading system, and it provides subsidies to American exporters of the same goods, helping them remain competitive internationally. These provisions cease to apply to the firms of another nation as soon as that nation creates its own comparable emissions trading system, an added incentive for other nations to move forward on climate change legislation.

Although the Van Hollen Act allows for an emissions offset credit program, it does not establish any clear oversight or guidelines for verifying or quantifying offsets, a weakness that could undermine the policy's effectiveness in ensuring emissions reductions. The Van Hollen Act also allows the banking of allowances, a practice that could lead to market manipulation and hoarding, destabilizing the allowance market and leaving it vulnerable to price fluctuations.25

THE EPA ACID RAIN PROGRAM

The final policy examined is one that has already been implemented, which makes it useful for demonstrating certain policy mechanisms. The Environmental Protection Agency's Acid Rain Program, started in 1995, regulates the emission of chemicals associated with acid rain, including sulfur dioxide (SO2) and a number of nitrogen oxide compounds (generally given the symbol NOx). Limits for the emission of these substances are determined, and allowances are distributed through auctions and allocations to participating firms.

One of the greatest benefits of the EPA Acid Rain Program is that it has been successful in reducing emissions.26 This success was helped by the fact that, since SO2 and NOx are byproducts of coal combustion, the EPA was able to establish an easy point of regulation: firms burning coal, primarily for the production of electricity. The program is interesting in that it allows non-regulated entities (entities other than coal-burning power plants) to purchase allowances, either to hold as assets or to "retire" (to hold an allowance until it becomes invalid, thereby further reducing the total number of usable allowances in circulation).27 This has not presented serious problems for the program, because the allowances being bought and sold are specific to certain industrial processes and are therefore not easily used by private citizens and organizations. It is important to note that this may not be replicable with other types of greenhouse gas emissions allowances, which are more easily used by non-industry entities. The final feature of note in the Acid Rain Program is that it rewards the implementation of certifiable emissions-reducing technology by allocating additional emissions allowances, making the industry cleaner overall and serving as an example of effective and accountable regulation for other emissions reduction policies.28

The Way Forward

Designing a national emissions trading system is no easy task. However, the growing evidence that global warming is a real threat, linked to human emissions of greenhouse gases and carrying disastrous consequences for both the national and international communities, adds to the urgency of implementing an effective emissions trading system. As the brief comparison above of working and proposed emissions trading systems shows, an ETS can include a number of different policy mechanisms, each of which, individually and in concert with other policy features, has numerous outcomes. Nevertheless, certain features must be included in any emissions trading system for it to be effective and to accomplish its two main goals: reduction of greenhouse gas emissions and preservation of economic prosperity. Building off the policies described above, the following section outlines the components necessary to the success of any emissions reduction policy.

Point of Regulation: An effective national or wide-scale emissions trading system must have a high-level "upstream" or "first-seller" point of regulation. That is to say, an effective trading system must regulate the initial sale of greenhouse gas-producing substances, with the allowance unit based on the emissions estimated from the use of the substance and with the entities subject to regulation being the initial importers and producers of the substances. This is essential in order to accurately track and monitor the number of allowances being sold and their actual use. A specific upstream point of regulation will help prevent the resale of allowances that have already been used, and will relieve the regulatory burden of determining whether allowances have been used by what could be millions of private citizens, companies, and groups. The ACES Act, which does not establish a clear point of regulation, faces just such a morass. It should be noted that the EPA Acid Rain Program, despite its loose point of regulation, avoids such problems because of the specific nature of its emissions allowance, which applies only to gases emitted in specific and limited industrial applications, a feature not shared by some greenhouse gases such as carbon dioxide.

AUCTIONS

Allowances should not be allocated. Allocation of allowances represents an enormous benefit to corporations (which would receive assets worth millions) and would remove any incentive for them to change their business practices and reduce emissions. Auctioning allowances will allow the market to determine their price, permitting greater economic flexibility and preventing inefficiency. That said, auctions must include price floors, to prevent a complete devaluation of allowances and the resulting ineffectiveness of the policy, and price ceilings, to prevent dire economic contraction. Auctions should also be limited to the entities subject to regulation, once again to avoid the usage-monitoring problems described above.

ALLOWANCES

Allowances should be valid only for a short period of time and should carry a clear expiration date. This will prevent the banking or hoarding of allowances and deter market manipulation.
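The expiration rule recommended above can be captured with a simple vintage check at surrender time. The two-year validity window and field names below are hypothetical, chosen only to illustrate how an expiry date blocks banking.

```python
# Sketch of an expiration rule for allowances: a unit may only be surrendered
# within a short validity window after its vintage year, which blocks banking.
# The two-year window and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Allowance:
    vintage_year: int      # year the allowance was issued
    tonnes: int

def is_valid(allowance: Allowance, compliance_year: int, window_years: int = 2) -> bool:
    return allowance.vintage_year <= compliance_year < allowance.vintage_year + window_years

print(is_valid(Allowance(2010, 1), compliance_year=2011))  # True: within the window
print(is_valid(Allowance(2010, 1), compliance_year=2014))  # False: expired, cannot be banked
```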

OFFSETS

Any emissions trading program should include a provision allowing the issuance of offset credits that act as additional allowances. This will help grow a new sector of the green economy and help prevent manipulation or large fluctuations in the allowance market by providing an alternative to emissions allowances. Any offset program must include rigorous accreditation procedures that can verify emission reductions, permanence, and additionality, and that can take into account other environmental consequences of the project.

REVENUES

The population likely to feel the largest negative impact of any emissions trading system is average consumers, who will face a rise in prices across the economy and a decrease in buying power. The revenues produced by an emissions trading system must address this situation in order to make any proposal politically feasible. The most politically palatable approach is the one reflected in the Van Hollen "Cap and Dividend" proposal, whereby every individual with a Social Security number would receive an equal dividend payment. Similar to existing systems like the Alaska Permanent Fund, this would help relieve the burden on the American consumer. Additional revenue should go toward covering the administrative costs of the emissions trading system, toward targeted economic mitigation, and toward funding for new clean energy and efficiency technologies, which will help speed the transition from a greenhouse gas-emitting economy to a cleaner one, priorities reflected well in the ACES Act.

INTERNATIONAL PROVISIONS

Climate change and global warming go beyond national borders and involve every country in the world. It is also clear that the passage of extensive economic regulation like an emissions trading system would place the American economy and its related firms at a disadvantage. Efforts should be made to mitigate these effects, such as the ones reflected in the Van Hollen Bill, which calls for tariffs on imports of greenhouse gas-intensive goods and subsidies for exporters of the same goods. While these provisions contradict free-trade agreements such as those set forth by the World Trade Organization, they can become useful tools in persuading other countries to implement their own emissions trading systems. By agreeing to end the tariffs and subsidies once other nations implement comparable climate change legislation, the United States will be able to advocate for worldwide changes to address a worldwide problem.

Since the passage of the Kyoto Protocol, America has radically changed its attitude toward climate change and emissions reduction legislation. Although some say this shift has been too long in coming and others contend it has been too fast or even unnecessary, the fact remains that we stand at a crossroads. We must decide either to move forward with complex and controversial legislation, which could represent an enlightened policy with great benefits for our nation and the global community, or to continue with inaction, possibly with disastrous results. For the time being, America remains the unchallenged superpower of the world. Although attempts at crucial legislation such as the ACES Act have not yet been perfect, they represent the willingness of Americans to tackle these issues and to act as a world leader in the effort to combat global warming.



References:
1. "Byrd-Hagel Resolution (S.RES.98)," 105th Congress, 1st Session, July 25, 1997. <www.congress.gov>
2. Kirby, Alex. "US Blow to Kyoto Hopes." BBC News Online. March 28, 2001. <http://news.bbc.co.uk/2/hi/science/nature/1247518.stm>
3. "Humans Cause Global Warming, US Admits." BBC News Online. June 3, 2002. <http://news.bbc.co.uk/2/hi/americas/2023835.stm>
4. Hargreaves, Steve. "Obama Acts on Fuel Efficiency, Global Warming." CNN.com. January 26, 2009. <http://www.cnn.com/2009/BUSINESS/01/26/obama.green/>
5. Office of Congressman Rick Larsen (D-WA). "Larsen: Energy Bill Builds Clean Energy Economy, Creates Jobs in Northwest Washington." June 26, 2009. <house.gov/apps/list/press/Wa02_larsen/PR_cleanEnergyJobs_062609.shtml>
6. Office of Congressman Jay Inslee (D-WA). "Historic Climate Legislation Passes U.S. House of Representatives." June 26, 2009. <www.house.gov/Inslee>
7. US House of Representatives. "American Energy and Security Act (H.R. 2454)." Press Release. June 2, 2009. pg. 1.
8. "The Phaseout of Ozone Depleting Substances." Environmental Protection Agency. April 14, 2009. <http://www.epa.gov/ozone/title6/phaseout/index.html>
9. "Opportunities and Quantification Requirements for Local Government Participation in Greenhouse Gas Emissions Trading Markets." World Resources Institute. July 8, 2008.
10. Ramseur, Jonathan L., Larry Parker, and Brent D. Yaccobucci. Congressional Research Service. "Market-Based Greenhouse Gas Control: Selected Proposals in the 111th Congress." May 27, 2009. <www.crs.gov> pg. 2.
11. Ibid. pg. 1.
12. Samuelsohn, Aaron. "House Panels Seek to Limit Effect of Cap and Trade on Nation's Pocketbook." E&E Publishing LLC. March 9, 2009. <http://www.eenews.net/public/EEDaily/2009/03/09/1>
13. "Opportunities and Quantification Requirements for Local Government Participation in Greenhouse Gas Emissions Trading Markets." World Resources Institute. July 8, 2008.
14. Ibid.
15. Ibid.
16. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
17. Ibid.
18. Congressional Budget Office. "Cost Estimate: H.R. 2454, American Clean Energy and Security Act of 2009." June 5, 2009. <www.cbo.gov>
19. Ibid.
20. Bailey, Ronald. "Cap and Trade Handouts." The Reason Foundation. April 7, 2009. <http://reason.com/archives/2009/04/07/cap-and-trade-handouts>
21. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
22. Ibid.
23. H.R. 1862, "The Cap and Dividend Act of 2009" (111th Congress, introduced April 2009), Rep. Chris Van Hollen (MD-8). <www.congress.gov> Sec. 9912(a).
24. Congressional Budget Office. "The Estimated Costs to Households from a Cap-and-Trade Program for CO2 Emissions." Statement of Douglas W. Elmendorf, Director, in testimony before the Senate Finance Committee. May 7, 2009. <www.cbo.gov>
25. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
26. United States Environmental Protection Agency. "Cap and Trade: Acid Rain Program Results." April 14, 2009. <www.epa.gov/airmarkt/cap-trade/docs/ctresults.pdf>
27. United States Environmental Protection Agency. "Acid Rain Program." April 14, 2009. <http://www.epa.gov/airmarkt/progsregs/arp/basic.html>
28. Ibid.


Stem Cell Act Spurs New Age for Medicine

Nezar Alsaeedi / Staff Writer

On March 9, 2009, an atmosphere of anticipation and excitement surrounded newly elected President Barack Obama as he signed the Stem Cell Executive Order, which allowed federal funding for research on embryonic stem cells. The order promised new hope for regenerative medicine, an area of science stalled by the actions of a hesitant Bush administration. The means of obtaining embryonic stem cells is an area of much debate, with moral and religious fervor fighting back waves of scientific curiosity and inquiry. Embryonic stem cells are obtained from embryos discarded at in-vitro fertility clinics and can differentiate into any type of tissue in the body. However, they require the destruction of their host embryos, which many religious protesters deem a destruction of potential human life. As a result of this moral debate, the Bush administration restricted the flow of federal money for embryonic stem cell research to 22 existing stem cell lines it had approved.1

As a result, research on embryonic stem cells was largely stymied. Researchers turned to private backers to fund their studies. Postdoctoral students on federal training grants could not conduct pertinent research on embryonic stem cells without violating the stipulations of their grants. Even international scientific collaborations were hindered by the lack of funds, equipment, and, most importantly, information. As Amy Comstock Rick, president of the Coalition for the Advancement of Medical Research, puts it, "If you talk to some scientists, you hear absurd stories. One guy has green dots on the things in his lab that are federally funded and red dots on the privately funded equipment. That shows you how crazy it is."2 Aside from the obvious obstacles of funding, the 22 existing stem cell lines approved by the Bush administration (unique families of constantly dividing cells originating from one parent stem cell) were not genetically or ethnically diverse enough to experiment on. Furthermore, they were aging stem cells with accumulating chromosomal abnormalities, which made them more difficult to maintain. Yet, despite these obstacles, some researchers found alternative avenues to bypass federal funding and assistance.

In response to the federal restrictions on embryonic stem cell research, citizens of California approved Proposition 71, known as the California Stem Cell Research and Cures Initiative.3 This initiative dedicated three billion dollars of California tax revenue to establishing the California Institute for Regenerative Medicine (CIRM), an institution that offers grants for the study of stem cells and associated cures, with high priority given to embryo-derived cells.4 CIRM represented the first major cry against government restrictions on pursuing the regenerative and therapeutic benefits of embryonic stem cells. Moreover, it illustrated the importance of keeping scientific progress independent of shifting government initiatives.

With embryonic research temporarily hindered, the focus of scientific research turned to potential medical treatments using adult stem cells. Although these promise therapeutic benefits, adult stem cells differ from embryonic stem cells in many ways. Adult stem cells can differentiate into specialized cells, such as blood or skin cells, but they are very hard to locate and often difficult to induce into specialized cell types. Embryonic cells can become any tissue in the human body and proliferate readily, which can lead to uncontrollable tumors. However, adult stem cells have one advantage not shared by their embryonic counterparts: the ability to be accepted by the host. Because adult stem cells come from the individual's own tissues, they can be isolated from the patient, induced to form a functional tissue, and re-introduced as a native tissue, without the need for anti-rejection medication and its side effects. In fact, many disorders have been alleviated by isolating stem cells from the bone marrow and injecting them back into areas of damage. For example, Thomas Clegg, a 58-year-old congestive heart failure patient, had adult stem cells isolated from his bone marrow and injected into his heart to induce regeneration of damaged heart cells. As Steven Stice, director of the Regenerative Bioscience Center at the University of Georgia, notes, "In the short term–say, the next five years–most of the therapeutic applications from stem cells will be from adult stem cells."5

Recently, researchers have shown that adult stem cells could hold even greater promise through their ability to be reprogrammed into an early embryonic form. These cells are called "induced pluripotent" stem cells because they are able to differentiate into many types of tissue after the introduction of four major genes. Although the technique is still rudimentary, induced pluripotent stem cells would render the ethical controversy surrounding embryonic stem cell collection moot, because the cells come from the host without the destruction of any embryos. Furthermore, there would be no immune rejection of these cells, because they are native to the individual.6 Before such an objective can be realized, however, a deeper understanding of the pluripotency of embryonic stem cells must be attained. With the newly signed Stem Cell Executive Order, medical discoveries through embryonic stem cells can gain their rightful recognition and can finally be realized.

Federal funding of embryonic stem cell research can provide sought-after cures for many of the diseases plaguing patients today. In the short term, embryonic stem cells can be introduced into any organ to repair defects. This is especially significant for spinal cord injuries and Parkinson's disease, because adult neural stem cells are hard to find. These cells can also be used to repair damage brought about by stroke or heart attack and can replace defective insulin-producing beta cells in diabetic individuals.7 Nevertheless, the major significance of these stem cells lies in their long-term potential benefits.

The executive order signed by President Barack Obama will not only enable scientists to discover the short-term regenerative solutions many patients need, but also offers long-term answers to bigger healthcare questions that plague America. Among the many dilemmas inherited by the Obama administration, the exponentially rising cost of healthcare has come to the forefront as a major priority in the American socio-economic sphere. Rising costs have been attributed to many factors, including the research that goes into drug discovery and refinement. A more comprehensive understanding of the genetic makeup of embryonic stem cells could spur a revolutionary wave of medical discovery, yielding more efficiency at a lower cost. With federal money from the National Institutes of Health (NIH), scientists can study diverse ethnic stem cell lines with a multitude of unique defects and diseases. This would add to the information bank begun with the study of the first 22 stem cell lines approved by the Bush administration.8 A variety of diseases affecting diverse ethnicities can be studied, and commonalities can be deduced from these stem cell lines.

Another long-term advantage is the use of embryonic stem cells in therapeutic experiments. Testing a drug on these cells will lead to more efficient results than test-ing the drug on conventional animal cells. These tests would offer more accurate results, because stem cells can be induced to differentiate into the target human organ under study. Moreover, these tests would be cost-effective, because they pro-vide accurate and reliable data with a few meaningful tests, as opposed to multiple tests on animal models with less accurate findings. Aside from the financial ben-efits, scientists can gain a more complete understanding of how diseases progress by tracing the effect of diseases on human

cells from the earliest stage until the aging stage of their lifespan.

This would save much-needed time and money on research, and, consequent-ly, would save many lives.

Unfortunately, the realization of any medical dream involving embryonic stem cells will take years to materialize. Little is known about stem cells, and much must be learned in the years to come. However, every medical breakthrough begins with persistent scientific inquiry coupled with the collaboration of supportive govern-ment action. President Obama has clearly made this his initiative by supporting “sound science,” as long as it conforms to humane and ethical standards. With the stroke of a pen, President Obama was able to usher in a wave of support for what could be an unparalleled medical revolu-tion. It is only a matter of time before em-bryonic stem cell research gains its proper standing in the scientific community and begins to heal our disease-ridden society.

References
1. CNN. "Obama Overturns Bush Policy on Stem Cells." CNN Politics. 2009. 20 August 2009. http://www.cnn.com/2009/POLITICS/03/09/obama.stem.cells/index.html.
2. Kalb, Claudia. "A New Stem Cell Era: Scientists Cheer as President Obama Ends Restrictions on Research. What the Move Means for Your Future." Newsweek. 2009. 18 August 2009. http://www.newsweek.com/id/188454.
3. Mathews, Joe. "What Obama's Support for Stem Cell Research Means for California." Scientific American. 2009. 17 August 2009. http://www.scientificamerican.com/article.cfm?id=stem-cell-research-in-california.
4. Conger, Krista. "Stem Cell Policy May Aid State Research Efforts." Stanford School of Medicine: Medical Center Report. Stanford University. 2009. 18 August 2009. http://med.stanford.edu/mcr/2009/stem-cell-0311.html.
5. Hobson, Katherine. "Embryonic Stem Cells—and Other Stem Cells—Promise to Advance Treatments." US News and World Report: Health. 2009. 17 August 2009. http://health.usnews.com/health-news/family-health/heart/articles/2009/07/02/embryonic-stem-cells--and-other-stem-cells--promise-to-advance-treatments.html.
6. Lin, Judy. "Obama Stem Cell Policy Opens the Field to New Discoveries, Disease Treatment." UCLA Today. UCLA. 2009. 18 August 2009. http://www.today.ucla.edu/portal/ut/obama-stem-cell-policy-opens-the-85172.aspx.
7. Ibid.
8. Zenilman, Avi. "Reselling Stem Cells." The New Yorker News Desk. 2009. 20 August 2009. http://www.newyorker.com/online/blogs/newsdesk/2009/03/reselling-stem-cells.html.


Page 34: HURJ Volume 11 - Spring 2010

microRNAs: A New Molecular Dogma
Robert Dilley / Staff Writer

In 1958, Francis Crick identified the central dogma of molecular biology as the unidirectional flow of genetic information from DNA to RNA to proteins. DNA is transcribed into messenger RNA (mRNA) by specific polymerases, and the mRNA is subsequently translated into proteins on ribosomes. When looking retrospectively at a protein's life, this dogma gives insight into its mechanisms of synthesis. However, it was recently discovered that approximately 95% of the human genome is non-coding DNA, which does not code for proteins. Major questions in all fields of biology have arisen about the functions and mechanisms of these vast stretches of DNA. Crick's hypothesis greatly aided biological advances during the 20th century in the fields of molecular biology, genetics, and biochemistry. However, the recent discoveries of non-coding microRNAs and their properties and functions have challenged the original dogma.

MicroRNAs (miRNAs) are an evolutionarily conserved class of non-coding RNAs that regulate gene expression in many eukaryotes. The first miRNA was discovered in the nematode Caenorhabditis elegans in 1993 by Victor Ambros' laboratory [1]. At the same time, the first miRNA target gene was discovered by Gary Ruvkun's laboratory [2]. These simultaneous discoveries identified a novel mechanism of posttranscriptional regulation. The importance of miRNAs was not realized for seven years and was precipitated by the rising interest in another class of short RNA, the small interfering RNA (siRNA), which is involved in the phenomenon of RNA interference (RNAi), whereby mRNAs are degraded. Before looking at the biological impact of miRNAs, it is important to consider their biogenesis, mechanisms of action, and the strategies for studying these intriguing molecules.

Although miRNAs and siRNAs are both of the short non-coding RNA variety, they differ in their functions and biogenesis. SiRNAs have proven to be useful for in vitro laboratory studies to degrade a specific mRNA through RNAi, whereas miRNAs comprise an extremely important regulatory mechanism in vivo that operates in two closely related ways. Differing from double-stranded siRNA, miRNA is a form of single-stranded RNA about 18-25 nucleotides long, derived from a long primary precursor miRNA (pri-miRNA), transcribed from DNA by RNA polymerase II [3, 4, 5]. Pri-miRNAs can be exonic or intronic, depending on their surrounding DNA sequences, but the pri-miRNA has to be non-coding by definition. The long pri-miRNA is then excised by Drosha-like RNase III endonucleases or spliceosomal components to form a ~60-70 nucleotide precursor miRNA (pre-miRNA). The pre-miRNA is exported out of the nucleus by Ran-GTP and a receptor, Exportin-5 [6, 7]. Once in the cytoplasm, Dicer-like endonucleases cleave the pre-miRNA, forming mature 18-25 nucleotide miRNA. Lastly, the miRNA is incorporated into a ribonuclear particle to form the RNA-induced gene-silencing complex (RISC), which enables the miRNA to execute its function [8, 9].

The mature miRNA can inhibit mRNA translation in two ways. Partial complementarity between the miRNA and the 3'-untranslated region (UTR) of the target mRNA inhibits translation by an unknown mechanism. If the complementarity between the miRNA and the 3'-UTR of the mRNA is perfect, then mRNA degradation occurs by a mechanism similar to RNAi performed by siRNA. As of now, most miRNAs discovered regulate gene expression post-transcriptionally. However, given the large number of miRNA genes (hundreds to thousands, or more per species), it is likely that some are involved in other regulatory mechanisms, such as transcriptional regulation, mRNA translocation, RNA processing, or genome accessibility [10].
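The distinction between these two modes can be made concrete with a small script. The following Python sketch is purely illustrative: the sequence, the seed definition (miRNA nucleotides 2-8), and the classification rule are simplifying assumptions introduced here, not part of the studies cited above.

    # Illustrative sketch: classify a miRNA-target site by complementarity.
    # The sequence, seed rule, and classification are assumptions for illustration.

    COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def reverse_complement(rna):
        """Return the reverse complement of an RNA sequence (5'->3')."""
        return "".join(COMPLEMENT[base] for base in reversed(rna))

    def classify_site(mirna, site):
        """Compare a miRNA with a candidate 3'-UTR site of the same length."""
        perfect_site = reverse_complement(mirna)
        if site == perfect_site:
            return "perfect complementarity: degradation (RNAi-like)"
        # Region of the site complementary to miRNA nucleotides 2-8 (the "seed").
        seed_match = perfect_site[-8:-1]
        if seed_match in site:
            return "partial (seed) complementarity: translational repression"
        return "no predicted interaction"

    if __name__ == "__main__":
        mirna = "UGAGGUAGUAGGUUGUAUAGUU"  # 22-nt example sequence
        print(classify_site(mirna, reverse_complement(mirna)))

Real target-prediction tools weigh many additional features (site accessibility, conservation, pairing energies); the sketch only captures the perfect-versus-seed distinction described above.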

In order to understand miRNAs, it is imperative to be able to visualize them, both spatially and temporally. Now that whole genome sequences are available for numerous organisms, the systematic analysis of mRNA expression levels has recently been expanded to the study of miRNA expression levels. Important techniques include microarrays, in situ hybridizations, reporter fusions, and northern blot analyses. Certain techniques give better spatial resolution, whereas others give better temporal resolution. Consequently, a combination of techniques most often pieces together the puzzle of miRNA localization and expression. Expression patterns will help to further the understanding of cis-regulatory factors, such as promoters and enhancers, that affect miRNA expression. Integrating the data of upstream regulators and downstream targets facilitates development of a miRNA pathway and circuitry map within the larger context of the cell [10].

MiRNAs exhibit precise developmental and tissue-specific expression patterns. They are implicated in the cellular processes of differentiation, proliferation, and apoptosis, and some miRNAs may also have important functions in organ and immune system maturity. Recent studies have shown that dysregulation of miRNA expression is a common feature of human malignancies. Similar to protein-coding oncogenes and tumor-suppressor genes, miRNAs can also act as cancer-promoting or cancer-suppressing entities. The first identification of a miRNA abnormality in cancer came from studies of human chromosome 13q14 in chronic lymphocytic leukemia (CLL). Two miRNAs in this region, miR-15a and miR-16-1, were deleted or down-regulated in 68% of CLL cases [11]. Subsequent studies showed that these miRNAs induce apoptosis by suppressing the anti-apoptotic gene BCL2 [12]. Hence, the miRNAs act as tumor suppressors. MiR-15a and miR-16-1, along with other miRNAs, constitute a unique expression profile that correlates with the prognosis of CLL [13].

Page 35: HURJ Volume 11 - Spring 2010

The first example of a miRNA demonstrated to function as an oncogene is miR-155, which is processed from the non-coding B-cell integration cluster (BIC) RNA. BIC was shown to cooperate with c-Myc in lymphomagenesis, and several years later, miR-155 was identified as originating from the last exon of the BIC mRNA [14]. Recent studies have shown that miR-155 expression is elevated in Hodgkin's lymphoma samples, in diffuse large B-cell lymphoma, and in childhood Burkitt's lymphoma, implicating its function as an oncogenic agent [15, 16, 17]. Although miRNAs compose only about 1% of the human genome, over 50% of them are located in cancer-associated genomic regions, such as fragile sites, frequently amplified or deleted regions, and break points for translocations [18]. Clearly, the functions of miRNAs are important in normal cellular processes, and their dysregulated expression participates in disease progression.

The discovery of miRNAs and their regulatory functions has opened the eyes of the scientific community to a new level of gene expression. MicroRNomics, a subdiscipline of genomics that describes the biogenesis and mechanisms of these tiny RNA regulators, has become an intense area of study, and novel findings are constantly elucidated by researchers all over the world. From basic cellular functions to disease biology, miRNAs are proving to be an invaluable source of information to piece together the regulatory pathways in all eukaryotes [10]. It is hoped that a better understanding of the functions of miRNAs will provide a platform for their use in translational medicine. As stated by Gary Ruvkun, one of the pioneers of miRNA discovery, "It is now clear an extensive miRNA world was flying almost unseen by genetic radar" [19]. We have certainly entered a new era in the world of genomics. MiRNAs are revealing a much more complicated molecular dogma than previously conceived. The challenges to the central dogma of molecular biology may have raised more questions than answers, but have also ushered in many triumphs and exciting possibilities.

References:
1. Lee, R. C., Feinbaum, R. L. and Ambros, V. (1993). The C. elegans heterochronic gene lin-4 encodes small RNAs with antisense complementarity to lin-14. Cell, 75, 843-854.
2. Wightman, B., Ha, I. and Ruvkun, G. (1993). Posttranscriptional regulation of the heterochronic gene lin-14 by lin-4 mediates temporal pattern formation in C. elegans. Cell, 75, 855-862.
3. Lin, S. L., Chang, D., Wu, D. Y. and Ying, S. Y. (2003). A novel RNA splicing-mediated gene silencing mechanism potential for genome evolution. Biochemical and Biophysical Research Communications, 310, 754-760.
4. Lee, Y., Kim, M., Han, J. et al. (2004a). MicroRNA genes are transcribed by RNA polymerase II. European Molecular Biology Organization Journal, 23, 4051-4060.
5. Lee, Y. S., Nakahara, K., Pham, J. W. et al. (2004b). Distinct roles for Drosophila Dicer-1 and Dicer-2 in the siRNA/miRNA silencing pathways. Cell, 117, 69-81.
6. Lund, E., Guttinger, S., Calado, A., Dahlberg, J. E. and Kutay, U. (2003). Nuclear export of microRNA precursors. Science, 303, 95-98.
7. Yi, R., Qin, Y., Macara, I. G. and Cullen, B. R. (2003). Exportin-5 mediates the nuclear export of pre-miRNAs and short hairpin RNAs. Genes & Development, 17, 3011-3016.
8. Khvorova, A., Reynolds, A. and Jayasena, S. D. (2003). Functional siRNAs and miRNAs exhibit strand bias. Cell, 115, 209-216.
9. Schwarz, D. S., Hutvagner, G., Du, T. et al. (2003). Asymmetry in the assembly of the RNAi enzyme complex. Cell, 115, 199-208.
10. MicroRNAs: From Basic Science to Disease Biology, ed. Krishnarao Appasani. Cambridge: Cambridge University Press, 2008.
11. Calin, G. A., Dumitru, C. D., Shimizu, M. et al. (2002). Frequent deletions and down-regulation of micro-RNA genes miR15 and miR16 at 13q14 in chronic lymphocytic leukemia. Proceedings of the National Academy of Sciences USA, 99, 15524-15529.
12. Cimmino, A., Calin, G. A., Fabbri, M. et al. (2005). miR-15 and miR-16 induce apoptosis by targeting BCL2. Proceedings of the National Academy of Sciences USA, 102, 13944-13949.
13. Calin, G. A., Ferracin, M., Cimmino, A. et al. (2005). A microRNA signature associated with prognosis and progression in chronic lymphocytic leukemia. The New England Journal of Medicine, 353, 1793-1801.
14. Metzler, M., Wilda, M., Busch, K., Viehmann, S. and Borkhardt, A. (2004). High expression of precursor microRNA-155/BIC RNA in children with Burkitt lymphoma. Genes, Chromosomes, and Cancer, 39, 167-169.
15. Eis, P. S., Tam, W., Sun, L. et al. (2005). Accumulation of miR-155 and BIC RNA in human B cell lymphomas. Proceedings of the National Academy of Sciences USA, 102, 3627-3632.
16. Kluiver, J., Poppema, S., de Jong, D. et al. (2005). BIC and miR-155 are highly expressed in Hodgkin, primary mediastinal and diffuse large B cell lymphomas. Journal of Pathology, 207, 243-249.
17. van den Berg, A., Kroesen, B. J., Kooistra, K. et al. (2003). High expression of B-cell receptor inducible gene BIC in all subtypes of Hodgkin lymphoma. Genes, Chromosomes, and Cancer, 37, 20-28.
18. Calin, G. A., Sevignani, C., Dumitru, C. D. et al. (2004b). Human microRNA genes are frequently located at fragile sites and genomic regions involved in cancers. Proceedings of the National Academy of Sciences USA, 101, 2999-3004.
19. Gary Ruvkun, Professor, Harvard Medical School; Cell, S116, S95, 2004.

Page 36: HURJ Volume 11 - Spring 2010

Double Chooz: Muon Reconstruction
Leela Chakravarti / Focus Editor

Abstract-------------------------------------------------------------------

This paper describes work done on the Double Chooz neutrino detection project at Columbia University's Nevis Labs during the summer of 2009. Studies of the elimination of background events in the experiment are presented. Cables for the outer veto system that reduces background were assembled and tested for systematic errors. This report also describes studies of the reconstruction accuracy of muons and how it changes with different starting energies and positions in the detector, as well as possible explanations of the observed trends.

1. Introduction-------------------------------------------------------------------

1.1 The Standard Model

The Standard Model, which describes elementary particles and their interactions, is, at present, the most widely accepted theory in particle physics, resulting from decades of experimentation and modification. However, it still does not provide a complete explanation of various phenomena. One main issue is that the theory only accounts for the electromagnetic, strong nuclear, and weak nuclear forces, excluding the fourth fundamental force of gravity.

The model consists of force carrier particles known as bosons, along with two main groups of fermions, quarks and leptons. Fermions are thought to be the building blocks of matter, while bosons mediate interactions between them.

All fermions have corresponding antiparticles with equal mass and opposite charge. Quarks have fractional charge and interact via the strong force; they combine to form hadrons, like neutrons and protons. Up and down quarks form neutrons and protons, while quarks in the other two generations are generally unstable and decay to particles of lesser mass. Of the leptons, three are charged and three are electrically neutral, and all have spin 1/2. The electron, muon, and tau all have a charge of -1, though the muon and tau are much more massive than the electron, and thus have short lifespans before they decay. Each charged lepton corresponds to a neutral, much lighter neutrino particle.

1.2 Neutrinos and Oscillations

The existence of the neutrino was proposed by Wolfgang Pauli as an explanation for the experimental result of beta decay of a neutron into a proton, which showed that the electrons emitted in the decay have a range of energies, rather than a unique energy [7]. The electrons do not seem to carry away the total energy that they are allotted, suggesting that there is another particle emitted that makes up for these discrepancies. The neutrino, thought to be massless, left-handed (counterclockwise spin), uncharged, and weakly interacting, was thus introduced. However, experiments have shown that this description is not entirely correct.

Through the conservation of Lepton Family Number in the Standard Model, neutrinos cannot change flavor; an electron neutrino cannot become a muon neutrino or a tau neutrino [5]. Through the weak force, an electron and an electron neutrino can transmute into each other, but particles cannot directly change families. A tau cannot directly decay into a muon without production of a tau neutrino. Despite this prediction, neutrinos do appear to oscillate and change flavors. For example, as an electron neutrino moves through space, there is a chance that it will become a muon or tau neutrino. This implies that mass states and flavor states are not the same, as previously thought, and that neutrinos actually do have small masses. The waves of two different mass states interfere with each other, forming different flavor states, creating an oscillation probability for one neutrino to change flavors. In the case of electron and muon neutrinos, this probability is:

P(νµ → νe) = sin²(2θ) sin²(∆m² L / 4E)    (1)

where νµ and νe are the two flavors, ∆m² is the difference of the squared masses of the two mass states, E is the neutrino energy, θ is the mixing angle, and L is the distance between the production and detection points of the neutrino.

Figure 1: Standard Model in Particle Physics [2]
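As a numerical illustration of equation (1), the short Python sketch below evaluates the oscillation probability at two baselines. The mixing and mass-splitting values, and the 4 MeV neutrino energy, are illustrative assumptions rather than measured Double Chooz results; the factor 1.27 is the usual unit-conversion constant when ∆m² is given in eV², L in km, and E in GeV.

    # Sketch: evaluate the two-flavor oscillation probability of equation (1).
    # All parameter values below are illustrative assumptions.
    import math

    def oscillation_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
        """P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
        return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

    # Compare a near-like and a far-like baseline for a ~4 MeV antineutrino.
    for L_km in (0.41, 1.05):
        p = oscillation_probability(sin2_2theta=0.1, dm2_ev2=2.5e-3,
                                    L_km=L_km, E_GeV=0.004)
        print(f"L = {L_km:.2f} km: P = {p:.4f}")

The point of the comparison is simply that the oscillated fraction grows with baseline, which is what a near/far flux comparison exploits.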

Page 37: HURJ Volume 11 - Spring 2010


The different neutrino flavor states are different combinations of mass states (ν1, ν2, and ν3), and the transition from one basis to the other is described by a mixing matrix. In the three-neutrino case, the transition is described by a unitary rotation matrix that relates flavor eigenstates to mass eigenstates [6]. This matrix can be split into three matrices, each of which deals with a different mixing angle.1

Two of the angles, θ12 and θ23, have been determined by experiments with solar and atmospheric neutrinos, but θ13 is still undetermined, with only an upper limit of 13°. Various efforts, such as the Double Chooz project, are underway to try to determine this last angle and better understand the way that neutrinos oscillate.

1.3 Double Chooz

Double Chooz is a neutrino detection experiment located in the town of Chooz in northern France. Instead of studying solar or atmospheric neutrinos, this project focuses on neutrinos produced at two nuclear reactors. Through fission reactions of the isotopes U-235, U-238, Pu-239, and Pu-241, electron antineutrinos are produced and move in the direction of two detectors. The original Chooz experiment only had one detector, but Double Chooz plans to achieve higher sensitivity and accuracy by using both near and far detectors and looking for changes in antineutrino flux from the near to the far. The use of two detectors corrects for uncertainties about the absolute flux and the location of the experiment, because the two identical detectors are compared to each other and only differ in how far away each one is from the reactors.

Assuming that oscillations will change some electron antineutrinos into other flavors, fewer electron antineutrinos should be observed at the far detector than at the near detector. Should this effect be observed, the probability can be calculated and, using equation 1, the value of sin²(2θ13) can also be determined.

The near detector is 410 m away from the reactors, while the far detector is 1.05 km away. Both detectors are identical, with main tanks filled with scintillator material doped with gadolinium [4]. When an electron antineutrino reaches either detector, it reacts according to inverse beta decay:

ν̄e + p → e+ + n

Figure 2: Inverse Beta Decay Reaction [1]

In each tank, there are about 6.79 x 10^29 protons for the electron antineutrinos to react with. The actual detection of the particle is a result of the products of the inverse beta decay reaction. First, the positron produced annihilates with an electron, emitting two photons of about 0.5 MeV each. The neutron is then captured on a gadolinium nucleus after about 100 µs, emitting several photons with a total energy of around 8 MeV.

The signals emit light, which is then detected by several photomultiplier tubes (PMTs) around the inner surface of the tank. This double signal with the appropriate time lapse indicates the presence of an electron antineutrino.

Each detector has many layers and components. The central region is a tank filled with 10.3 m³ of scintillator. Moving outward, the gamma catcher region provides extra support for detecting the neutron capture signal. Surrounding the gamma catcher is the buffer region, where the 534 8-inch PMTs are located. Finally, the inner and outer veto systems are in place to help decrease background signal from other particles, such as muons or neutrons.

Figure 3: Double Chooz detector vessel

Page 38: HURJ Volume 11 - Spring 2010

2 Muon Background and Reconstruction

2.1 Muon Background

One of the main sources of background events and causes of error in the Double Chooz experiment is the effect of cosmic ray muons, along with gamma, beta, and neutron signals in the detector and rock. Near-miss muons in the rock around the detector react and form fast neutrons, which go through the detector and create false signals. Muons produce neutrons in a detector through spallation (the collision of a high energy particle with a nucleus) and muon capture. Recoil protons from interacting neutrons are mistaken for positrons, and successive neutron capture confirms the false antineutrino signal. Muons can also make it into the detector and cause such background signals. In order to properly reject these signals, it is important to know which specific signals to ignore.
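The rejection logic described here can be sketched in a few lines of Python. The energy windows, coincidence window, and veto window below are rough illustrative assumptions based only on the numbers quoted in this article (a ~1 MeV-scale prompt signal and an ~8 MeV delayed signal after about 100 µs); they are not the collaboration's actual selection cuts.

    # Sketch: inverse-beta-decay coincidence selection with a simple muon veto.
    # All thresholds and windows are illustrative assumptions.

    def is_prompt(hit):
        # Prompt signal: positron annihilation photons (~0.5 MeV each) plus
        # any positron kinetic energy.
        return 0.7 <= hit["energy_MeV"] <= 12.0

    def is_delayed(hit):
        # Delayed signal: ~8 MeV photon cascade from neutron capture on Gd.
        return 6.0 <= hit["energy_MeV"] <= 12.0

    def select_candidates(hits, muon_times_us, coincidence_us=(2.0, 100.0),
                          veto_us=1000.0):
        """Pair prompt/delayed hits and drop pairs that closely follow a tagged muon."""
        candidates = []
        for prompt in filter(is_prompt, hits):
            for delayed in filter(is_delayed, hits):
                dt = delayed["t_us"] - prompt["t_us"]
                if not coincidence_us[0] <= dt <= coincidence_us[1]:
                    continue
                # Veto: spallation neutrons after a muon can fake the double signal.
                if any(0.0 <= prompt["t_us"] - t_mu <= veto_us
                       for t_mu in muon_times_us):
                    continue
                candidates.append((prompt, delayed))
        return candidates

In the real experiment these cuts act on calibrated detector-level quantities rather than the toy hit dictionaries used here, but the structure of the selection is the same: a prompt/delayed coincidence gated by a muon veto.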

2.2 Outer Veto and Cabling

One of the ways to reduce error due to muon background is to use an outer veto system, which identifies muons that can produce backgrounds in the detector. Once these specific muons are tagged, the signals they produce can be eliminated from the data set. The outer veto detector differentiates between muons that go through the target and those that pass near the target. It also detects muons that may miss the inner veto completely or may just clip the edges of the inner veto. The outer veto is composed of staggered layers of scintillator strips above the detector. Strips in the X and Y directions can measure coincidence signals and identify muon tracks. Signals from light created in the scintillator are sent to PMTs, which process the signals in a similar fashion to the main detector.

The arrival of event signals should be properly timed to minimize dead time for the detector and delay time of the signal, and to preserve the pulse signal. Cables that carry the signals must therefore be made uniformly and within these specifications, while also taking into account the physical distance that must be traversed. Different types of cables offer different capabilities for data transfer. The outer veto uses RG58 and RG174 cables for data transfer. Each type of cable has a different characteristic delay time per foot, which must be accounted for to understand the total delay time for the signal. 50-foot and 61-foot RG174 cables were cut, and will be combined with 110-foot and 97.5-foot RG58 cables for data transfer in the upper and lower sections of the outer veto. RG58 cable must be used in addition to RG174 because using only RG174 would result in a degeneration of the signal along the cable, as RG174 has a lower bandwidth and less capacity for data. The overall delay should be around 270 ns, and the cables must be tested for their individual delay times to ensure that this value remains constant for all cables, to avoid systematic errors. RG174 sections of the outer veto cables have three cables bound together: one for the Clock, one for the Trigger, and one for the Gate. It is especially important that the Gate cable have the correct delay time, because it mediates data collection at certain intervals.

Cables for the outer veto were tested for proper delay times using an oscilloscope. Each end of the cable connects to an input channel on the oscilloscope, and the difference in timing of pulse appearance is the delay time. It is apparent from the waveforms shown that there is a greater degeneration of the signal when using only RG174.

For the RG174 cable, cable lengths of 50 feet should have a delay time between 76 and 81 ns, while 61-foot cables should have delay times between 93 and 99 ns. Plots of delay times indicate that all of the cables made for the outer veto have delay times within the expected ranges. These cables will be put into place in the final construction of the outer veto detector.
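Checking a measured delay against these windows is simple bookkeeping, sketched below in Python. The acceptance windows are the ones quoted above; the example measurements are made-up values used only to show the check.

    # Sketch: compare oscilloscope-measured cable delays with the quoted windows.
    # The measurements listed here are hypothetical.

    ACCEPTANCE_NS = {
        ("RG174", 50.0): (76.0, 81.0),   # 50-foot RG174 sections
        ("RG174", 61.0): (93.0, 99.0),   # 61-foot RG174 sections
    }

    def within_spec(cable_type, length_ft, measured_delay_ns):
        """True if the measured delay falls inside the allowed window."""
        low, high = ACCEPTANCE_NS[(cable_type, length_ft)]
        return low <= measured_delay_ns <= high

    measurements = [("RG174", 50.0, 78.4), ("RG174", 61.0, 95.1), ("RG174", 50.0, 82.3)]
    for cable_type, length_ft, delay_ns in measurements:
        status = "OK" if within_spec(cable_type, length_ft, delay_ns) else "out of spec"
        print(f"{cable_type} {length_ft:.0f} ft: {delay_ns:.1f} ns -> {status}")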

Figure 4: Waveforms: signal is better preserved along RG58 cable

Figure 5: RG174 cable delay times

Page 39: HURJ Volume 11 - Spring 2010

2.3 Simulation: DOGS Overview

Muons that reach the detector and get past the outer veto must be accounted for and identified. The muon-caused signals can be determined by muon location in the detector, as the signals are predicted to occur within some distance from the muon location, depending on the energy and position of the muons. Simulation software is used to imitate muons passing through the detector and the reconstruction of the muons' starting positions and energies. Various algorithms and simulation processes are run through in order to reconstruct particles in the detector. The Double Chooz collaboration uses a software package called Double Chooz Offline Group Software, or DOGS. Within DOGS, there are basic simulation scripts that generate different types of particles, study them through the detector, and reconstruct their properties, such as type, starting position, and starting energy or momentum. The DOGS simulation keeps track of how much energy is deposited, where the energy is deposited, other particles produced due to the original particles, signals detected at different photomultiplier tubes, time between signals, and track directions, among other things. All of this information is stored and can later be accessed for analysis.

2.4 Data Flow in Muon Simulation

In order to work with DOGS and produce specific simulations, it is often necessary to modify the skeleton scripts that are originally provided to meet different needs. DOGS uses a software package called GEANT4 to generate particles and simulate their activity in a liquid scintillator detector. GEANT4 simulations take into account the geometry of the detector, the specific materials used, particle location with respect to the detector, optical photons, and properties of photomultiplier tubes.

For this simulation of muons in the detector, one of the particle generator scripts was modified to include a generator gun, which allows for specification of particle type, the number of particles, production rate, starting position, and initial momentum and energy. If energy is given, momentum is treated as directional only, to avoid over-specification of the problem. The generator script also contains information about whether or not photons and the Cerenkov light effect are included.2 In order to produce proper scintillation light for PMT detection, photons and Cerenkov light are activated. The script is run in DCGLG4sim, which is the Double Chooz version of the GEANT4 simulation. All information about the particles is used in the following simulation scripts to show a response to the particle. The information is then used to reconstruct its original information.

After particles are generated using DCGLG4sim, the output of the generation and particle tracking information is sent to the Double Chooz Readout Simulation Software, or DCRoSS. DCRoSS models the detector's response to the particles, from signal detection and amplification at the photocathode on the PMTs to data acquisition based on varying trigger levels depending on the expected signal strength. Within RoSS scripts, PMT and data acquisition settings are changed to accommodate specific simulations.

Finally, the output from DCRoSS is channeled into a Double Chooz Reconstruction, or DCReco, script. DCReco runs through reconstruction algorithms (discussed in the following section) to determine properties of the particles, such as spatial information and initial energy. It uses the information from DCRoSS about the location and magnitude of deposited energy in the detector to reconstruct particle information.

Output from each step of the simulation is stored in different Info Trees, and can be accessed in order to compare differences between actual and reconstructed information or to look at where energy was deposited in the detector. Variables in Info Trees are accessed through the use of ROOT, a data analysis framework created to handle large amounts of data. A study of reconstruction efficiency as a function of starting energies and positions was thus conducted.
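Reading those Info Trees back for a truth-versus-reconstruction comparison typically takes only a few lines with ROOT's Python bindings. In the sketch below, the file, tree, and branch names are placeholders invented for illustration; the actual DOGS output layout is not described in this article.

    # Sketch: compare truth and reconstructed radial positions from a ROOT file.
    # File, tree, and branch names are placeholders, not the real DOGS layout.
    import math
    import ROOT

    f = ROOT.TFile.Open("muon_simulation_output.root")  # hypothetical file
    tree = f.Get("InfoTree")                            # hypothetical tree name

    hist = ROOT.TH1D("dR", "Reconstructed - truth R;#DeltaR [mm];Events",
                     100, -500.0, 500.0)

    for event in tree:                                  # PyROOT loops over entries
        r_true = math.hypot(event.x_true, event.y_true) # placeholder branches
        r_reco = math.hypot(event.x_reco, event.y_reco)
        hist.Fill(r_reco - r_true)

    print("mean:", hist.GetMean(), "RMS:", hist.GetRMS())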

3 Reconstruction Accuracy

3.1 Reconstruction Algorithms

Muon spatial and energy reconstruction in the detector is based on a maximum likelihood algorithm that combines available information about an event. The characterization of an event is a function of seven parameters, namely the four-dimensional vertex vector (x, y, z, t), the directional vector (φ, θ), and the energy E [4]:

α = (x, y, z, t, φ, θ, E)    (4)

The likelihood of an event is the product over the individual charge and time likelihoods at each of the PMTs:

L_event(α) = ∏_i [ L_q(q_i; α) · L_t(t_i; α) ]    (5)

Given a set of charges q_i and corresponding times t_i, L_event is the probability that the event has the characteristics given by the seven-dimensional vector α. Reconstruction looks for a maximization of L_event to determine what specific combination of vertex, direction, and energy corresponds to the event. DCReco uses the above method to reconstruct muon information. Because the process uses a likelihood algorithm, reconstruction is based on a probability and, therefore, will not always yield the same results, even if the original particles had the same information. The accuracy of this algorithm may also change, depending on starting positions and starting energies.
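The maximization step itself can be illustrated generically. The Python sketch below fits a toy seven-parameter event by minimizing a negative log-likelihood built only from Poisson-distributed PMT charges with a crude distance-based light model; it is not the DCReco likelihood, and the geometry, light model, and starting values are all assumptions made for the example.

    # Sketch: toy maximum-likelihood fit over alpha = (x, y, z, t, phi, theta, E).
    # Geometry, light model, and all constants are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(1)
    pmt_xyz = rng.uniform(-1500.0, 1500.0, size=(50, 3))  # toy PMT positions, mm

    def expected_charge(alpha, pmts):
        x, y, z = alpha[0], alpha[1], alpha[2]
        energy = abs(alpha[6])                     # toy model uses only vertex and E
        r2 = np.sum((pmts - np.array([x, y, z])) ** 2, axis=1)
        return energy / (1.0 + r2 / 1.0e6) + 1e-9  # simple attenuation, never zero

    def neg_log_likelihood(alpha, charges, pmts):
        mu = expected_charge(alpha, pmts)
        # Poisson log-likelihood summed over PMTs.
        return -np.sum(charges * np.log(mu) - mu - gammaln(charges + 1.0))

    truth = np.array([200.0, -100.0, 300.0, 0.0, 0.0, 0.0, 50.0])
    observed = rng.poisson(expected_charge(truth, pmt_xyz))

    start = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 30.0])
    fit = minimize(neg_log_likelihood, start, args=(observed, pmt_xyz),
                   method="Nelder-Mead")
    print("reconstructed vertex (x, y, z):", fit.x[:3])

The real likelihood in equation (5) also uses hit times, which is what constrains the timing and directional parameters; the toy model above leaves them effectively unconstrained.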

3.2 Different Starting Energies

To assess reconstruction efficiency and overall accuracy, it is necessary to test different original particle information. In this study, starting energies and positions were changed to look for efficiency trends. Different starting energies were determined through consideration of the energy spectrum of muons. All of the muons tested at different energies had the same starting position in the detector, at a radius of 500 mm from the center at the top of the target region.

2 Charged particles traveling through a medium in which their speed is greater than the speed of light in that medium disrupt the electromagnetic field and displace electrons in atoms of the material. When the atoms return to the ground state, they emit photons; this is known as the Cerenkov effect.

Page 40: HURJ Volume 11 - Spring 2010

The tested energies were in a range from 1 GeV to about 25 GeV. This range corresponds to the section of the energy spectrum before the change in muon flux with respect to changing energy starts to decrease. Having the energies range over different orders of magnitude helped show trends on a grander scale. About seven different energies were tested for trends in reconstruction efficiency. Because the reconstruction algorithm takes into account the energy detected at each PMT (in terms of charge), there could be a correlation between energy amounts and efficiency.

Plots of the difference between reconstructed radial position and actual radial position show that both the mean and RMS values decrease with increasing energy. Histogram width and deviation get smaller as energy increases, and the reconstruction algorithms seem to get closer to predicting the actual starting position. The RMS value at 25000 MeV is about 1/5 the value at 1000 MeV.

Figure 6: Energy spectrum of muons from Double Chooz proposal [4]

Figure 7: RMS and mean values: difference of reconstructed and truth R positions.

Figure 8: RMS Values at changing energies

Including all tested energies indicates that there is a noticeable drop in the RMS value of the difference between reconstructed and truth positions, and thus an increase in accuracy as energy increases. The effect of multiple Coulomb scattering as a particle goes through material could be responsible for this trend. Muons with lower initial energies, and thus less momentum, will not pass as easily through the detector, and the path may deflect because of multiple scattering off nuclei. This would result in a less well-defined path displayed by hits at the PMTs and a less accurate reconstruction of position. This idea is tested by calculating the deflection angle θ0 at the different energies, using the formula:

θ0 = (13.6 MeV / (βcp)) √(x/X0) [1 + 0.038 ln(x/X0)]    (6)

where βc is the velocity of the muon, p is the momentum, and x/X0 is the thickness of the scattering medium in radiation lengths [3]. The value of x/X0 is calculated from the ratio of the track length to the radiation length in that material.3 The momentum of a relativistic particle is given by:

pc = √(E² − (mc²)²)    (7)

and at high energies is essentially equal in magnitude to the energy. The coefficient β for c (the speed of light in a vacuum) is calculated using the equation:

β = pc / E    (8)

which considers starting energy and momentum. The theoretical deflection angle θ0 was calculated for each starting energy. Using tan(θ0) and scaling for the height of the tank results in a comparable value in units of length to the RMS values previously shown.

Figure 9: Multiple scattering prediction

3 Radiation length is defined as the mean path length required to reduce the energy of relativistic charged particles by the factor 1/e, or 0.368, as they pass through matter.
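The prediction in equations (6)-(8) is easy to reproduce numerically, as in the Python sketch below. The muon mass and the 13.6 MeV Highland constant are standard; the assumed track thickness in radiation lengths and the height used for the projection are placeholder values, since the exact numbers are not quoted in the text.

    # Sketch: multiple-scattering angle and projected displacement versus energy.
    # X_OVER_X0 and TANK_HEIGHT_MM are placeholder assumptions.
    import math

    MUON_MASS_MEV = 105.66     # muon rest mass, MeV/c^2
    X_OVER_X0 = 10.0           # assumed track thickness in radiation lengths
    TANK_HEIGHT_MM = 2500.0    # assumed height used for the tan(theta0) projection

    def theta0_rad(energy_mev):
        """Equations (6)-(8): Highland formula with beta and p derived from the energy."""
        pc = math.sqrt(energy_mev ** 2 - MUON_MASS_MEV ** 2)  # eq. (7)
        beta = pc / energy_mev                                # eq. (8)
        return (13.6 / (beta * pc)) * math.sqrt(X_OVER_X0) \
               * (1.0 + 0.038 * math.log(X_OVER_X0))          # eq. (6)

    for e_mev in (1000.0, 5000.0, 10000.0, 25000.0):
        shift_mm = math.tan(theta0_rad(e_mev)) * TANK_HEIGHT_MM
        print(f"E = {e_mev:7.0f} MeV: theta0 = {theta0_rad(e_mev):.4f} rad, "
              f"projected displacement ~ {shift_mm:.0f} mm")

Because θ0 scales roughly as 1/p, the projected displacement falls by about an order of magnitude between 1 GeV and 25 GeV, which is the qualitative behavior seen in the RMS plots.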

Page 41: HURJ Volume 11 - Spring 2010


The original plot now includes the prediction of multiple scattering. Although the trend does not appear exactly identical, the fact that the observed data and the theoretical prediction are of the same order of magnitude, and in the same approximate range, shows that they could correspond. Plotting this range at the varying starting energies indicates that it is likely that multiple scattering produces the results seen, and its effect on reconstruction accuracy should be accounted for when considering muons of different energies.

3.3 Different Starting Positions

Muons starting at different distances from the center of the detector were also considered. In each run, muons were generated at the top of the detector, so that they were through-going. Runs with varying radii from the center changed the x-position based on different sections of the detector.

According to the diagram, the x-position was changed to cover each different section, as well as tracks in the middle of sections and close to the walls, to see if the PMT response changes when the particles are very close to the walls. Simulations took place in the middle of the target region (1), close to the wall on the target side (2), close to the wall on the gamma catcher side (3), in the middle of the gamma catcher region (4), close to the wall on the gamma catcher side before the buffer area (5), and through the buffer area (6).

Figure 10: Various starting positions in detector [4]

Figure 11: Energy deposited in central region of detector

As a preliminary check, a plot of the energy deposited in the detector's central region shows that the particles are being generated according to the position specifications. There is a slight peak in the gamma catcher region, where the muons go through the most scintillating volume. The buffer region, which is non-scintillating, should not detect any energy, which is also observed. The range of values for energy deposited also matches up with the expected energy loss rate of about 2.3 MeV/cm in the tank.
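As a rough cross-check, the expected deposit is just this rate multiplied by the scintillating path length. The track length in the tiny Python sketch below is an assumed value chosen only to show the arithmetic.

    # Sketch: expected energy deposit from the quoted ~2.3 MeV/cm loss rate.
    DEDX_MEV_PER_CM = 2.3
    track_length_cm = 230.0  # assumed through-going path length in scintillator
    print(f"Expected deposit: {DEDX_MEV_PER_CM * track_length_cm:.0f} MeV")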

Plots of differences in X and Y position reconstruction show that muons going through the target region are reconstructed slightly better than those going through the gamma catcher region. Comparing differences between reconstructed and truth starting positions shows that accuracy decreases when muons go through the gamma catcher region or near vessel walls. This effect could be due to changes in PMT response. Additional testing at more positions would give a more precise indication of whether or not PMT response is affected by muons that pass very close to the walls.

A general decrease in accuracy appears in the gamma catcher region when all positions are considered. Other areas of the detector show fairly consistent reconstruction accuracy. Muons starting either in the target or outer regions seem to be reconstructed to within approximately 50 mm of the truth vertex.

Figure 12: X and Y positions: difference between reconstructed and truth

Figure 13: RMS values at changing positions

Page 42: HURJ Volume 11 - Spring 2010


4 Conclusions

Identifying and rejecting background events are important parts of data collection in Double Chooz. When background rates are accounted for, data collection becomes much more efficient. The use of hardware devices like the outer veto makes this possible in the actual data collection. Outer veto cabling tests indicate that the delay times and signal degeneration are within acceptable ranges. In analysis, reconstruction algorithms are important for understanding the locations of background signals and tagging specific false events. Studies of muon reconstruction in the DOGS package give information about the software's accuracy, as well as how it can be used to assist in the analysis of real data. Reconstruction accuracy appears to decrease in the gamma catcher region and close to vessel walls, possibly due to changes in PMT response. Reconstruction accuracy appears to increase with starting energy, likely due to the effect of multiple scattering at lower energies. Although these trends are observed in this study, higher statistics runs at additional energies and positions would provide a more definitive analysis to extend this initial study.

Acknowledgments

I would like to extend a sincere thanks to project advisors Mike Shaevitz, Leslie Camilleri, Camillo Mariani, and Arthur Franke for their help and guidance on my work, as well as to everyone else who worked with me on Double Chooz this summer for support and input. I would also like to thank the National Science Foundation for providing me with the wonderful opportunity to work at Nevis this summer.

References
1. Inverse beta decay, http://theta13.phy.cuhk.edu.hk/pictures/inversebetadecay.jpg.
2. Standard model, http://www.fnal.gov/pub/inquiring/timeline/images/standardmodel.gif.
3. C. Amsler et al. (Particle Data Group). Passage of particles through matter. Physics Letters B667(1), 2008.
4. F. Ardellier et al. (Double Chooz Collaboration). Proposal. arXiv:hep-ex/0606025, 2006.
5. J. Kut. Neutrinos: An insight into the discovery of the neutrino and the ongoing attempts to learn more. 1998.
6. M. Shaevitz. Reactor neutrino experiment and the hunt for the little mixing angle, 2007.
7. R. Slansky et al. The oscillating neutrino: an introduction to neutrino masses and mixing. Los Alamos Science, 25:28-72, 1997.

Page 43: HURJ Volume 11 - Spring 2010


Resigned to the Fringes:

An Analysis of Self-Representations of Argentine Jews in Short Stories and Films

Helen Goldberg / Staff Writer

Abstract-------------------------------------------------------------------

Immigration and assimilation have been hot-button issues in American public discourse since the formation of independent states in the New World, but often left by the wayside is a discussion of how immigrant groups see and represent themselves in the face of pressure to assimilate. In Argentina, Jewish immigrants around the time of the centennial (1910) began to internalize and reproduce images of successful integration into the larger Argentine society in products related to their own cultural heritage, most notably in the short stories of the famed Jewish-Argentine writer Alberto Gerchunoff. In sharp contrast, the films of Argentine-Jewish director Daniel Burman, modern-day counterparts to Gerchunoff's stories, reflect a growing sense of pessimism and resignation that currently pervades a Jewish community relegated to the fringes of society, due to their ultimate failure at real integration. Seen as counterparts, Gerchunoff's short stories and Burman's films reflect very different attitudes toward assimilation within the Jewish communities of Argentina. During periods when Jews have felt that there was real potential for them to be just as Argentine as anyone else, such optimism is clear in their self-representations designed for public consumption, whereas when they have felt disappointed and relegated to the fringes of society, such pessimism and resignation is again reflected just as clearly in these representations. Why have Jews been unable to fully assimilate? Furthermore, how influential are these self-representations? They certainly reflect contemporary attitudes about Jewish integration in Argentina, but what role will they play in molding future attitudes and enlarging or further limiting the social space granted Jews in the country?

Introduction-------------------------------------------------------------------

The word "assimilation" has two meanings: it refers to both the outward accommodation of one social group into a larger dominant group, in terms of speech, dress, and customs, and the internalization of a dominant belief. Assimilation, in the first sense of the word, held particular sway in early-twentieth-century Argentina, following a wave of emigration. Official rhetoric promoted the idea that all immigrants could be molded into true Argentines. Jewish immigrants in particular began to "assimilate" this rhetoric into their own literature soon thereafter. The optimistic tone of the official rhetoric was translated into an equally optimistic tone in Jewish short stories.

Page 44: HURJ Volume 11 - Spring 2010


History, however, has shown that Argentine Jews have never actually been able to fully assimilate into Argentine society. Focusing on their own definition of assimilation via economic integration, Jews have failed to understand the Argentine nationalist and ideological definition of assimilation. Their frustration with this failure to be accepted within mainstream Argentine society is today reflected in the overwhelmingly pessimistic and resigned tone of current films representing Argentine Jewish life. It is thus that self-representations by members of the Jewish community in Argentina are reflective of the extent to which they have been able to mold themselves to fit the Argentine assimilationist ideal.

Homogenization and Jewish Immigration

The official attitude of Argentine government officials toward immigrants arriving around the turn of the twentieth century was, interestingly, optimistically inclusive. While the "state assimilationist policy sought to subordinate minority cultural identity to a national ideology," the emphasis on ideology, rather than on race or ethnicity, seemingly provided a space for the new immigrants to become fully "Argentine." Argentine nationalism was focused at this point on "the creation of the national citizen, individuals publicly certified and approved of by the state," not on the limiting of opportunities for immigrants.

This sense of inclusiveness seems at first to contradict the strong sense of nationalism prevalent amongst the Argentine elite, but even the elite found themselves invested in promoting immigration and integration in order to build a stronger Argentine state. The governing class in Buenos Aires during this period was particularly influenced by positivism, which emphasized "the necessity of an integrated reform of immigrant society based on education, which would bring numerous benefits, such as greater political cohesiveness, economic growth, and the general modernization of society. It was also understood, as well, as an elimination of religious values and indigenous culture."

State leaders were the direct heirs of the Liberal project of the early 19th century, which promoted the formation of strong, independent nation-states, "whose integrating capacity stemmed from the development of cohesive, homogenizing master narratives of national identity diffused by the educational system."


Effects of Assimilation Rhetoric on the Jewish Population

Argentine Jews began to internalize and reproduce images of successful assimilation into the larger Argentine society beginning around 1910. This process of internalization of official rhetoric had a particular effect on the Jewish community, specifically because of their education and the ambiguity of their ethnic identities. Assimilationist rhetoric was largely disseminated via the public school system. Because Argentine officials understood education to have a "civilizing" effect on immigrants, school curriculums promoted the replacement of any foreign language or ideology with that of the criollo population, that is, of those born in Argentina of Spanish colonial descent. Jews tended to be relatively well educated; they were more likely to send their children to school and to be literate enough to read newspapers and journals than members of other ethnic groups. According to Ricardo Feierstein, an Argentine Jewish intellectual, this emphasis on education was particularly important with regard to assimilation because it was the "intellectual who would begin to accept the images that others had of him" first.

Moreover, Argentine Judaism was, and in many ways still is, a heterogeneous category. Jews in Argentina came from Europe, the Middle East, and other countries in the Americas. They spoke a variety of languages, held different beliefs, were educated to varying extents, and did not always consider one another to be of the same background. Jewish thinker Gershom Sholem once commented that, in Argentina, "One cannot define that which is called Judaism…Judaism is a living phenomenon in the process of constant renovation." Together, these factors made the Argentine Jews particularly susceptible to assimilation rhetoric.

Jewish elites soon began to employ their specific conception of Argentine assimilation in public discourse. Baron Maurice de Hirsch, the philanthropist who founded the earliest Jewish settlements in Argentina, "wanted the Jews to assimilate and so solve the 'Jewish question.'" He promoted the idea that the best way to take ownership of Argentine identity was by contributing to the national project of building a great Argentina. He said in a speech to a Jewish congregation in Buenos Aires that the process of "making good agriculturalists and patriotic Argentines, though conserving their religious faith," would lead to the "secularization and Hispanicisation of Argentine Jewish culture." De Hirsch's influence could be seen throughout the Jewish community, as new immigrants sought to learn trades and agricultural methods, and to focus on economic goals of assimilation, rather than ideological or cultural ones.

Page 45: HURJ Volume 11 - Spring 2010

Self-Representations of Jewish Integration during the Centennial

Jewish immigrants' attempts to assimilate via economic integration were reflected in short stories written about them by Alberto Gerchunoff, a Jewish journalist and writer in Argentina at the turn of the twentieth century. Gerchunoff is perhaps best known for his series of short stories entitled The Jewish Gauchos of the Pampas, which appeared in Spanish in weekly installments in a popular Buenos Aires newspaper and introduced much of the literate community to Gerchunoff's own idealized understanding of the Jewish-Argentine relationship. The stories reflect a strong feeling of optimism with regard to breaking out of the limited social space to which many Jews were accustomed. In his first story, he writes, "To Argentina we'll go – all of us! And we'll go back to working the land and growing livestock that the Most High will bless." This optimism reflects Gerchunoff's belief that the Jews could insert themselves into the nationalist definition of a true Argentine citizen simply by working and contributing.

Where Gerchunoff's optimism became a problem for the Jews was in the fact that he, although seen as a trusted, legitimate voice within the community, perpetuated an assimilation imperative emanating originally from the State. This perpetuation of the assimilation ideal based on economic integration, printed and distributed for all to read, spread the idea throughout the community.

Jewish Integration into Argentine Society

Argentine Jews are by no means rejected by the larger society; they have, in many cases, reached levels of education and income impressive for the country. Yet, they remain on the fringes of Argentine life. They remain both "Argentina's only ethnic group" and the only group unable to be considered as fully Argentine, without any sort of qualification. What continues to be a "crucial factor in the identity of the Jews…is the fact that they are considered as such by their non-Jewish neighbors and society." They continue to be seen as the "other," even today, as the fourth and fifth generations of native-born Argentine Jews consider themselves to be, without qualification, Argentine.

A Christian character in a film by Jewish Argentine filmmaker Daniel Burman compares being homosexual to being Jewish; they are both alternate identities that are at first invisible, but, once known, relegate the person to the position of permanent outsider. The Jews of Argentina are stuck. They have reached the middle class, but cannot move above it into the elite strata. To compound the glass ceiling they experience economically, they possess little political or social voice to improve their situation. They are left with a comfortable income but little legitimacy in the public setting.

Jews failed to fully assimilate into mainstream Argentine society largely due to their misunderstanding of the Argentine definition of assimilation. For an Argentine national identity to exist, assimilation had to be along ideological lines. It had to be about "questioning the legitimacy or the authority of…marginal cultures." Baron de Hirsch's plan of integrating economically, while maintaining Jewish belief systems, led many Argentines to question Jewish loyalty and patriotism. The community's emphasis on conforming to the economic, as opposed to the ideological or cultural, relegated the Jews to their place on the very edges of accepted society.

The Jewish community in Argentina reacted to their failure to fully integrate by retrenching. They sought to conserve the gains they had made by limiting risk-taking, which could potentially lead to losses of economic and educational gains. The more elite members of the Jewish community became particularly concerned with respectability. Jews who had already reached some measure of social standing put up strict barriers, making attempts to limit contact between themselves and more recent immigrants. The Jewish community realized that it was unlikely to make any more inroads into full acceptance within Argentina, so it resigned itself to the space allowed, and sought to ensure that said space would not be further reduced in any way. The Jews were allowed a small universe, several neighborhoods in the spatial sense, and a certain level of respectability in a social sense, and avoided the risks that would be necessary to expand that little bubble.

Contemporary Self-Representations of Jewish Integration

The modern-day counterparts of Gerchunoff's stories, the films of Argentine-Jewish filmmaker Daniel Burman, reflect the sense of pessimism and resignation that the Jewish community feels at being left on the fringes of society. His three best known films can be viewed both as a series and as separate entities. Seen as a series, all three movies reflect Burman's resignation; they show his being stuck. He uses the same actor in all three to play a very similar protagonist, a young, neurotic Jewish man trying to find a way to individuate himself from his family. Other actors and characters are also repeated in the films, most notably, the perpetually servile indigenous character Ramón, played by Juan Jose Flores Quispe. Burman also repeats his characters' names. The love interest is named Estela in two of the three movies, and Burman's protagonists are all named Ariel, a noticeably Ashkenazi Jewish name. All three movies take place in Buenos Aires, almost entirely in Ariel's home or workplace. Scenes in public are short, or specifically business-related, as though Jews have no other real contact with the larger society, outside of business dealings. Burman demonstrates both a very circumscribed range of options and a sense of resignation when it comes to selecting only from that range.

Page 46: HURJ Volume 11 - Spring 2010

To take each movie individually, Waiting for the Messiah features a young Jewish man who constantly seeks to find employment outside the family restaurant. Ariel succeeds in obtaining three contracts with a television company, but all last only six months, and all are "trial only" contracts. He secures some space for himself outside the Jewish neighborhood where he lives and works for his family, but it is only temporary. He talks a great deal about la burbuja, the bubble, from which he cannot escape. The bubble symbolizes for Ariel the limited options he has in life, which he describes as fitting into an already predetermined plan to which he learns to resign himself. One of Ariel's last and most profound lines, "Uno se adapta," one adapts, shows that despite his early attempts to take risks and try to widen the social space granted to him as a Jewish Argentine, he eventually learns to stop reaching and accept what he has.

Similar themes appear in Lost Embrace. Like the first Ariel, this Ariel lives with his family and works in the family business, in this case a lingerie shop. The lingerie shop is located in a small shopping mall, between a Korean store and a modern-looking Internet shop. The owners of the Korean store are recent immigrants who are viewed with contempt, while the owners of the Internet shop are ideal Argentines: fair-skinned, Spanish-speaking, and economically successful; they are far more respected than the Korean couple. Ariel's family's shop is located in between the two other shops, and his family receives a level of respect that is also somewhere between the two. Jews have reached a level of assimilation higher than the more visible immigrant groups, such as recent Asian immigrants, but still have not reached the level of Christian Argentines. Again, they are caught in the middle.

Family Law, the third of the series, features an Ariel who has made a bit more progress. This Ariel feels enormous pressure to become a lawyer and join his father's practice. His father even names the practice "Perelman and Son" long before Ariel decides to study law. Ariel succumbs and graduates from law school, but he individuates himself a bit by working as a professor and a defense lawyer for the state, as opposed to in his father's firm. These positions put this Ariel fully in the public view, working for large, integrated public institutions, rather than ones owned and frequented primarily by Jews. But, by the end of the movie, Ariel leaves these jobs and takes over his father's firm. He resigns himself to the role that was always predestined to be his: that of the Jewish lawyer working in his father's firm. Another interesting aspect of this third Ariel is his marriage to a Christian woman. There is no discussion about the difficulties of intermarriage, but when his wife sends their son to a Swiss Catholic preschool, he finds himself strangely upset at the idea that his son could become wholly assimilated. This scene raises the question as to what extent the bubble is self-maintained, or even self-created. Are the Jews of Argentina resigning themselves to being a fringe community because it was all the space granted to them, or because they themselves created a sort of parallel society that only sometimes intersects with the larger Argentine community, in order to preserve tradition?

Conclusion

Ultimately, Jewish self-representations in Argentine stories and films seem to be interwoven with the feelings of belonging or not belonging that predominate during the time period. The extent to which they have integrated and their modes of representation cannot be separated. Seen as counterparts, Gerchunoff's short stories and Burman's films reflect very different attitudes toward integration within the Jewish communities of Argentina. During times when Jews have felt that there was potential for them to be just as Argentine as anyone else, such optimism is clear in how they represent themselves, and when Jews have felt disappointed and relegated to the fringes of society, such pessimism and resignation are just as clearly reflected in these representations. The question that arises for the future prospects of integration of Argentina's Jews is to what extent such representations of pessimism and resignation will prevent Jews from trying to burst the bubble limiting their role in the larger society.

References
1. Armony, Paul. Jewish Settlements and Genealogical Research in Argentina. http://www.ifla.org/IV/ifla70/papers/091e-Armony.pdf, Aug. 2004.
2. Elkin, Judith Laikin and Gilbert W. Merkx, eds. The Jewish Presence in Latin America. (Boston: Allen & Unwin, Inc.), 1987.
3. Family Law. Dir. Daniel Burman. Perf. Daniel Hendler, Julieta Diaz, Arturo Goetz. DVD. TLA Releasing, 2006.
4. Feierstein, Ricardo. Contraexilio y Mestizaje: Ser Judio en la Argentina. (Buenos Aires: MILA Ensayo), 1996.
5. Gerchunoff, Alberto. In the Beginning. In The Jewish Gauchos of the Pampas. (New York: Abelard-Schuman), 1955.
6. Humphrey, Michael. Ethnic History, Nationalism, and Transnationalism in Argentine, Arab and Jewish Cultures. As quoted in Ignacio Klich and Jeffrey Lesser, eds. Arab and Jewish Immigrants in Latin America: Images and Realities. (London: Frank Cass), 1998.
7. Lost Embrace. Dir. Daniel Burman. Perf. Daniel Hendler, Adriana Aizenberg, Jorge D'Elia. DVD. TLA Releasing, 2003.
8. Mirelman, Victor A. Jewish Buenos Aires, 1890-1930: In Search of an Identity. (Detroit: Wayne State University Press), 1990.
9. Newland, Carlos. The Estado Docente and its Expansion: Spanish American Elementary Education, 1900-1950. In The Journal of Latin American Studies Vol. 25 No. 2 (Cambridge: Cambridge University Press), 1994.
10. Senkman, Leonardo. Jews and the Argentine Center: A Middleman Minority. In Judith Laikin Elkin and Gilbert W. Merkx, eds. The Jewish Presence in Latin America. (Boston: Allen & Unwin, Inc.), 1987.
11. Vidal, Hernan. The Notion of Otherness within the Framework of National Cultures. In Juan Villegas and Diana Taylor, eds. Gestos: Representations of Otherness in Latin American and Chicano Theater and Film. (University of California Press), 1991.
12. Waiting for the Messiah. Dir. Daniel Burman. Perf. Daniel Hendler, Enrique Pineyro, Chiara Caselli. DVD. TLA Releasing, 2000.


Innovation & Stagnation in Modern Evangelical Christianity

Nicole Overley / Staff Writer

The Economic Framework

Modern Christian evangelicalism in the United States has been supported immeasurably by the growth of megachurches. Many of these enormously successful churches, having already planted “daughter churches” across this country, have chosen to take their evangelical mission overseas—primarily to famously secular Western Europe, Great Britain in particular. I spent almost a month in Britain and continental Europe this summer, exploring the possibility that, with these churches’ almost limitless financial resources and manpower, they could contribute to a reversal of European secularism, which has only grown more profound in recent decades—but how would Europeans respond to an influx of stereotypically American spirituality?

To answer this question, I utilize a "market-oriented lens," an approach from economics that can "illuminate what might otherwise seem a very disorderly landscape."1 Firstly, I assert that the mostly state-supported churches of Europe gave rise to a monopolistic religious "economy": the constant and reliable influx of state funds ensures the survival of the state church, regardless of its clergy's response (or lack thereof) to the needs of its congregation. With little to no serious competition or other outside threats, this supply-side stagnation facilitates the increasing decline of religion in Europe, which is further enabled by each country's incremental steps away from dependence upon the church. Therefore—and this is key—I combine the supply-side secularization hypothesis of sociologists like Roger Finke, Rodney Stark, and Steven Pfaff with an equal emphasis on declining demand, blaming a lack of both for the declining role of religion in European society. My further proposition relates this declining role to the megachurch movement in America, which is best analyzed in the context of the competitive and diverse American religious economy, in that it is a direct result of American churches' acknowledgement of and response to the modernization of society. If the American megachurch movement, tweaked to fit the demands and desires of its customers, were exported to Europe—expanding, as any traditional capitalist firm inevitably would, in search of greater profits—it could introduce competition into a currently stagnant religious market, naturally increasing supply as well as demand, thus reversing the continent's trend towards secularization.

An Economic History of American Religion: Historical Precedents for a Modern Phenomenon

Having immigrated to the colonies primarily to escape from religious persecution, colonists quickly established religious freedom in the soon-to-be United States as a defining tenet of colonial life. The famous separation of church and state that so proudly differentiated America from Europe created an unregulated religious market and rampant pluralism of the Christian faith—which is manifested today in the myriad of denominations here. Some scholars argue that "pluralism [should instead weaken] faith—that where multiple religious groups compete, each discredits the other, encouraging the view that religion per se is open to question, dispute, and doubt."2 But there is no doubt that religious participation in America does, indeed, show an indisputable long-run growth trend, despite the expected cyclical upheaval. Even though, with few exceptions, consistent decline is exhibited in Europe and much of the rest of the world, American participation in organized religion has increased markedly over its two centuries of history.

What can explain this? Finke and Stark's prominent analysis does not point to differences in demand for religion between Europe and America—in fact, they claim that all humans, across the globe, have the same innate desire for spiritual life and fulfillment. Instead, they credit the glut of supply in the United States compared to other nations and, furthermore, the actions—most notably innovation—that this "competition" encourages. Their supply-side theory of competition conjectures that this crowded religious market in America allows for the "rational choice" of the nation's religious consumers.3 Mimicking the way in which Americans choose what products to consume in their everyday trips to the grocery store, this model asserts that churchgoers rationally, if subconsciously, assess the marginal benefit afforded to them by each church, based on their personal preferences, and choose the one with the highest benefit and lowest cost. In short, it allows each person to find the spiritual experience which suits them best. It thus logically follows that, in nations lacking such choices, there is less involvement in religion, because the only available option will suit fewer people.4 Americans—as well as, we will see, Britons—are fundamentally consumers; they "shop around for their spiritual needs."5

This idea of “rational choice” inevitably inspires consideration of demand-side theories, which counter that religious phenomena in America and

elsewhere can in fact be explained by fluctuations in demand—in other words, shortages and surpluses. Some argue that different cultures beget different levels of spiritual dependence, taking Stark’s constant “innate demand” and adjusting it from nation to nation, continent to continent—a theory that is not without its merits, and will be discussed later in relation to the evolution of contemporary British society.6 But a second, more active way to address demand focuses on demand-side interventions: in other words, churches in America, through the years, have intervened purposefully to keep demand high. Surprisingly, this can fit with Stark’s supply-side hypothesis like two pieces of a jigsaw puzzle. Ultimately, it is only with competition and the desire to stay ahead of the curve that these churches actively try to augment religious demand; in a monopolistic environment, this would be unnecessary.

Assuming each American church desires to be the religious choice of the maximum number of Americans possible, the consistent shift in the practice of American religion—a movement that correlates with sociocultural change and America's religious growth trend—signals exactly such a relationship between ample supply and changing demand. First, let's explore the reality that not every denomination or segment of religion has been fortunate enough to share in this long-run general growth trend. Herein lies the most crucial principle to grasp, the one that illuminates why there has been a trend of both growth and dramatic change in American religion, and one which is rooted in the straightforward realities of a laissez-faire free market. From the perspective of an individual church, if it wants to grow and become increasingly popular, its goal is pure and simple: to continuously reinvent itself to ensure that, as people change, it still identifies with a majority of them—and not every church is able to do this. As Finke and Stark assert, the "churching of America was accomplished by aggressive churches committed to vivid otherworldliness."7 In other words, the most successful churches learned how to attract, commit, and retain followers in a changing world, by responding and adjusting to the changes in society that their "customers" had already grown accustomed to—before those changes could drive a wedge between these customers and Christianity. As many churches find, it is a skill which is vitally important if they want to make a "profit" of believers.

From an economic perspective, just like any business, the effectiveness of a church depends upon not only its organization and its product but also "their sales representatives and marketing techniques"—methods of increasing demand that have played a huge role in the success of American religion.8 When translated into language reflecting the history of American religion, spiritual marketing brings to mind names like the famous "fire and brimstone" preacher George Whitefield, an example of evangelical Christianity uniquely tweaked to fit America in the early 18th century. The "Great Awakening" that Whitefield pioneered was essentially a "well-planned, well-publicized, and well-financed revival campaign," which fed off of the fervor of the times and capitalized on the spirit of the era.9 It helps to think of Whitefield and his fiery camp meetings in the context of the colonial American landscape during his lifetime: besides being "quite simply one of the most powerful and moving preachers," he drew crowds that gathered outside to hear him speak wherever he traveled.10 His revivals were almost shocking and revolutionary to attendees, simply because of the rather subdued nature of church preaching until then.

The Crystal Cathedral in Anaheim, CA - Photo by Nicole Overley

The Great Awakening had several important ramifications for the future of American religion and helped to shape the development of the megachurches today. How? Primarily, it “demonstrated the immense market opportunity for more robust religion,” setting a precedent for later preachers and whetting American churchgoers’ appetite for it as well.11 Also, it’s interesting to note that disillusioned members of the American mainline denominations, more than any other group, were the ones who flocked to Whitefield’s camp meetings, just as nondenominational megachurches draw lapsed members from established denominations today.

It could be argued, then, that the rise of Whitefield predicted the "decline of the old mainline denominations," caused by "their inability to cope with the consequence of religious freedom and a free market religious economy," especially when a viable competitor arose—a real-life example of the failure to "keep up" that we just addressed.12 Around this time the idea of "revivalism" developed, centered around the outbreaks of "public piety" occurring throughout America in the late eighteenth and nineteenth centuries. Reminiscent of the Great Awakening, these revivals were planned and conducted periodically "to energize commitment within their congregations and also to draw in the unchurched."13 The idea of the camp meeting, developed in the early nineteenth century, became hugely popular in rural America in part due to the fact that camp meetings occurred in venues that were familiar and agreeable to attendants: the open outdoors. This comfortable setting mirrors today's casual, come-as-you-are megachurches that shy away from traditionally ornate or grandiose environs.14 A contemporary observer noted, "Take away the worship [at the camp meeting] and there would remain sufficient gratifications to allure most young people": in other words, they made Christianity seem comparatively fun to the "contemporary" generation at that time.15

Fast-forward to the 1960s, and we see how the Great Awakening was only one of a series of innovative changes during the history of American religion that occurred in response to cataclysmic social shifts. The cultural crisis and subsequent questioning of the "hippie" sixties generation, coupled with the increasing unease and outright protest brought about by Communism abroad and the Vietnam War, encouraged disillusionment in what was perceived as religion that was out of touch with the gritty real world. Suddenly, traditional or "mainstream" churches, with their white steeples, Sunday schools, and potluck dinners, began to experience a period of decline that continues today and has affected almost every one of the nation's numerous established denominations. The mainline denominations "suffer in times of cultural crisis or disintegration [like during the 1960s and 1970s], when they receive blame for what goes wrong in society but are bypassed when people look for new ways to achieve social identity and location."16 The only segment that benefited from this trend, or at the very least was unhurt, was the burgeoning nondenominational Protestant segment, most famously manifesting itself in recent decades in the establishment of new, unorthodox churches that reached people within the context of their radically changing lives. The 1960s, and the response (or lack thereof) of traditional American churches, stand as a testament to the importance of innovation in a competitive market. In fact, almost all of the nation's most rapidly expanding churches today pride themselves on their uniquely developed methods of outreach, often tailored to the specific needs and culture of the population around which they first grew. Whether this means a Starbucks or McDonald's within the church or designated parking spots just for churchgoers with motorcycles, these megachurches recognize and respond to a need to "change with the times." By making concessions in environment and style of worship for the comfort of attendees, they hope to sustain interest in and affinity for religion, prevent the alienation of the general public, and perhaps even attract more members by gradually breaking down the barriers that keep formerly "unreligious" people from stepping into an intimidating or formal church atmosphere.

A Comparison of Religion in Britain with its American Counterpart: The Secularization Thesis

In contrast to the free-market American religion we've just analyzed, "there is ample evidence that in societies with putative monopoly faiths, religious indifference—not piety—is rife."17 While within the largest example of this—Europe—there are exceptions to the rule, like Italy and Ireland, both of which exhibit levels of religious involvement and participation almost as high as those in the United States, most European countries very strongly support this hypothesis; I will focus on Britain.18 Admittedly, after consideration, the British religious environment seems counterintuitive—as did the coupling of growth and pluralism in the United States. Why, then, does this paradox occur at all?

There are a few primary problems faced by any religious monopoly that Adam Smith himself first unearthed and Whitefield later confirmed: firstly, a "single faith cannot shape its appeal to suit precisely the needs of one market segment without sacrificing its appeal to another." Therefore, it lacks the ability to mobilize massive commitment because of its intrinsically smaller "customer" base, a structural explanation for the decreased religious demand that we later see.19 In contrast to the vast variety of churches in the US, the relative singularity of Christianity in Britain forces congregants to either conform to the only available option or choose not to attend altogether. Furthermore, simply and just as compellingly—"monopolies tend to be lazy."20 But why, then, doesn't the Church of England fear obsolescence? Smith notes in his famous Wealth of Nations, as he addressed state-sustained European churches, that, "in general, any religious sect, when it has once enjoyed for a century or two the security of a legal establishment, finds itself incapable of making any vigorous defense against any new sect which chooses to attack its doctrine or discipline."21 That monopolistic state support of a single Christian denomination has long afforded the church a natural advantage in established resources, one that has thwarted competition. For centuries, the Anglican Church has remained exactly the same—stagnant in church hierarchy, theology, worship style, and even the dress of church leaders. Although, over the years, it might have faltered in the face of serious competition, it simply hasn't faced any—none has arisen because of the asymmetric base of power which supports Britain's "official religion." Without the multiple-denomination challenges omnipresent in American religious society, religion in Britain has settled into a pattern where there's no need to continually reinvent or innovate, change with the times, or "exert themselves for the spiritual welfare of their respective congregations" as their American counterparts must.22

This stagnant, monopolistic supply is one rather quantitative depiction of the British and American religious "markets" that leads logically to both the conceptualization and the justification of recent scholars' "secularization thesis." Derived from painstaking analysis of years of Europe's church attendance and religious adherence, the secularization hypothesis claims that Europe's citizens are slowly but surely moving towards an utter lack of religious iconography and away from even the slightest semblance of religious presence in everyday life. This is evidenced by declining church attendance in recent decades and by national polls in which more and more Europeans—and Britons in particular—claim to have "no affiliation with religion." It's a widely believed hypothesis, one that leads famous and influential thinkers such as French family sociologist Martine Segalen to claim that European nations are becoming simply "post-religion societies."

Beyond just supply-side stagnation, I argue that many factors contribute to this secularization, one of which worries global religious leaders far more: actual demand-side decline, that "Europe's religious institutions, actions, and consciousness have lost their social significance."23 While some hold demand constant, naming supply variations as the origin of both European decline and American growth, others point to industrialization, urbanization, and a conglomerate of constant yet gradual forces propelling the world into post-modernism and signaling the unavoidable downfall of religion. The view exists that the evolution of European society towards an embrace of science and modernity and towards a gradual sense of the "death of religion" represents the future of all societies across the globe—the ubiquitous progressiveness of Europe has simply led its society to become the first on the globe to literally not need religion. Inevitably, secularization is an "absorbing state—that once achieved, it is irreversible and institutionalized, instilling mystical immunity."24 With this perspective, the now-thriving American churches can be explained by the idea of "American exceptionalism"—that the American deviation from this so-called norm is a "case of arrested development, whose evolution has been delayed" and, soon, religious demand here will decline just as it has in Europe.25 Proponents of this view claim that the increasing lack of depth or substance in religious services illustrates a decline in public commitment to religion that foreshadows an imminent, rapid decay.

Britain: The True Anomaly?

For centuries after its founding, the Church of England enjoyed consistently high membership, attendance, and tithing from the citizens of Great Britain: this was the period in which a traditional—arguably stagnant—church aligned with a similarly traditional society, enabling the majority of Britons to feel that the Anglican Church was relevant to their lives. The first half of the 20th century seemed no different—in fact, general enthusiasm and affection for the church reached an all-time high, and the Church, having remained essentially the same in almost every aspect since its beginnings, felt safe and comfortable with its position within the state. But the cultural shifts of the 1960s onward facilitated an ever-widening gap between British culture and religion that ultimately correlated with a sharp dropoff in church attendance. It was at that moment in a parallel timeline when the differences between the monopolistic and competitive religious markets become truly noticeable: while in the US, decline of the mainline denominations encouraged the entry of new 'firms' into the market and spurred intense competition, realized in the form of changing styles of worship and ultimately in the megachurches of today, in the UK, the Church of England refused to evolve or adapt with society, leaving it an outlier as social change became more and more profound. It is thus in their responses to the gradual secularization of society that American and British Christianity differ—and, facing bankruptcy from the government's own fiscal crisis and rumors that the English church and state could finally be separated upon the ascent of Prince Charles as king, the Church of England has realized, finally, that it has reached a critical turning point, and that the next decade or two will determine whether it dies out completely or manages to adapt and live on.

Given their aversion to the most stereotypical aspects of American Christianity, I was surprised to find that the leaders of the Anglican Church seem to be gradually acknowledging the idea that American Christianity—or at least its innovative and competitive nature—is the way church is 'supposed to be,' and in doing so, they have fundamentally altered the very theology of the stagnant Church that has existed for the past 300 years. For all that time, the Church was viewed as something in society that wasn't supposed to change—while society changed around it, it was supposed to be a rock or anchor, keeping people in line with 'true' spirituality, the way it was intended to be—much the same as the recently deteriorating mainline denominations in the United States have argued. But the competition inherent in American Christianity ensured that, despite the objections of a few, innovation, and not stasis, was the norm. Without that competition, a pathological stagnation was legitimized and justified as a positive trait in the Church of England's clergy until very recently, the product of Adam Smith's 'lazy monopoly' on religion because the Church simply didn't need to reach out to the citizens.26

The downfall of that mentality, likewise, resulted from the clearly impending loss of their monopoly: accompanying the realization that change was necessary was the paired realization that the church in Britain, since its inception as the Church of England, had been acting directly contrary not just to the rest of Christendom but to the actual intentions of the Bible itself. Monopolistic Britain, not the competitive religious market of the United States, has been the anomaly in Christian history. The very reason for the success of the early Christian church lay in its willingness and ability to adapt to each new society or culture it came in contact with, a mandate its leaders of the time viewed as both Biblical in origin and crucial for its survival. Among other things, this tradition explains why, as the church expanded, we celebrate Easter and Christmas when we do—not as arbitrary holidays, but as holy days centered around the original pagan festivals of the Roman Empire. State support of the church—which occurred with the establishment of the Anglican Church in the UK—gave that church such stability that it no longer needed to pay attention to the ways in which the world around it was changing.

The central leadership has commissioned groups to implement modern ways of reaching out to its disillusioned populace, many of them patterned after successful American megachurches like Rick Warren's Saddleback Church and the 20,000-member Willow Creek in Chicago, with the intent of bringing about a total overhaul of the Church. Alice Morgan of the Church Mission Society in Oxford explained the theological shift from "maintenance to mission" and the importance of fully actualizing this change: the Church can't deal with its potential 'death' by reaching out to the rest of the world—it has to reach inward.27 It is a problem illustrated by the simple fact that there are far more practicing Anglicans in Nigeria than there are in Britain—this is because, when the Anglican Church first noticed decline at home, it seemed easier to transplant new churches in places that would be more easily accepting of them, rather than to change its domestic approach to church. While in the 1990s the Anglican church tried unsuccessfully to buffer its decline by planting new churches that were perfect copies of the original successful ones, today's new models focus on the importance of contextual, or 'bottom-up,' planting: they utilize community mission, where leaders are sent out into the world as individuals not to reinforce paradigms, but to draw in those around them, "combining scale with micromanagement."28 Michael Moynagh, an Anglican advisor and academic at Oxford University, cites Neil Cole's "Organic Church" model in Sheffield, which focuses on the importance of getting beyond the church fringe and planting churches amongst those with no religious background.29

The Metropolitan Cathedral in Liverpool, UK - Photo by Nicole Overley

The Mission-Shaped Church Report, published in 2004, and the Report of Church Army's Theology of Evangelism one year later detail these problems, solutions, and the overarching new direction for the Anglican Church. Fundamentally, these reports verbalize the unspoken knowledge that the UK has become "a foreign mission field"—re-evangelizing Britain is now a cross-cultural mission.30 They detail the Anglican Church's acknowledgement that "church must reflect culture"—for how else, ultimately, can it connect to the people? Yet there still exists the danger of church becoming just culture—there must be a palpable underlying base of Christianity. They focus also on the "passive receivers' problem"—the need to really involve churchgoers in the religious experience, making them committed to reach out to others and perpetuate their faith, for without that aspect, "even the most radical movement conceivable becomes boring."31 This method is summarized as the "incarnational church model," recognizing that individual churches are most successful if they are founded by believers in their own way, as a natural outgrowth of the local culture and its needs—and if those churches continue to respond to how those needs change.32

Today, for the first time in its history, the Church of England faces the legitimate threat of obscurity. As the Church struggles to reinvent itself at this critical turning point, it has been forced to set aside many of its beliefs and assumptions about the way church is supposed to be. I am optimistic that, if the Church continues to embrace innovation and change in the decades to come, it might again capture the hearts and minds of British ‘consumers’ of religion.

References
1. Roger Finke and Rodney Stark, The Churching of America: Winners and Losers in Our Religious Economy (New Brunswick: Rutgers University Press, 1992), 18.
2. Ibid, 18.
3. Rodney Stark and Roger Finke, Acts of Faith: Explaining the Human Side of Religion (Berkeley: University of California Press, 2000), 38.
4. Ibid, 39.
5. Davie, Grace, Religion in Britain Since 1945: Believing without Belonging (Oxford: Oxford University Press, 1994), 39.
6. Steve Bruce, "The Social Process of Secularization," in The Blackwell Companion to the Sociology of Religion, ed. Richard K. Fenn (Malden: Blackwell Publishers, 2001), 252.
7. Finke and Stark, 1.
8. Finke and Stark, 17.
9. Davie, 46.
10. Finke and Stark, 49.
11. Ibid, 51.
12. Davie, 54.
13. Finke and Stark, 88.
14. Ibid, 96.
15. Ibid, 96.
16. Steven Pfaff, Growing Apart?: America and Europe in the 21st Century (Cambridge: Cambridge University Press, 2007), 246.
17. Finke and Stark, 19.
18. Robin Gill, "The Future of Religious Participation and Belief in Britain and Beyond," in The Blackwell Companion to the Sociology of Religion, ed. Richard K. Fenn (Malden: Blackwell Publishers, 2001), 280.
19. Finke and Stark, 19.
20. Pfaff, 34.
21. Ibid, 52.
22. Finke and Stark, 19.
23. Ibid, 230.
24. Ibid, 230.
25. Ibid, 221.
26. Steve Hollinghurst, Interview, Church Army Sheffield Centre, Sheffield, UK, August 20, 2009.
27. Alice Morgan, Interview, Church Mission Society, Oxford, UK, August 12, 2009.
28. Ibid.
29. Alice Morgan, Interview.
30. Steve Hollinghurst, Interview.
31. Ibid.
32. Ibid.


Fractal Geometry and the New Image Compression

August Sodora / Staff Writer

Fractal image compression techniques, which have remained in obscurity for more than two decades, seek a way to represent images in terms of iterated functions which describe how parts of an image are similar to other parts. Images encoded in this way are resolution-independent. This means that the information stored about an image can always be decoded at a prescribed level of detail, regardless of the size of the decoded image and without the usual scaling artifacts such as pixelation. The size of the resulting encoding is based on the encoding algorithm's ability to exploit the self-similarity of the image, theoretically leading to more efficient encodings than traditional arithmetical methods, such as those used in the JPEG file format.

Despite such advantages, fractal-based image formats have not gained widespread usage due to patent protection and the computational intensity of searching an image for self-similarity. Although decoding an image from a fractal-based format can be performed quickly enough for it to be a potentially suitable format for video playback, encoding an image takes considerably longer. Modern implementations, even with very sophisticated encoding algorithms, have not yet been able to demonstrate the ability to encode images quickly enough to make the technique viable for capturing video.

Here, we take a different approach and simplify the encoding algorithm in order to explore the possibility that an execution environment where the algorithm can operate on different parts of the image simultaneously might enable faster image encoding. By reducing encoding time, such an implementation might help close the gap between encoding time and real time as described by the frame rate of American television (approx. 24Hz), and make fractal-based images feasible for video applications. We chose the graphics card as our execution environment, using it for general purpose computing through the OpenCL API. OpenCL is supported by most NVidia and ATI graphics cards less than three years old and has been ported between Windows and *nix, making it an accessible vehicle for computation.

Applying Fractals to Image Compression

The term “fractal” was coined in 1975 by Benoit Mandelbrot, derived from the Latin fractus meaning fractured. Taken generally, it refers to a “shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole” [5]. A shape that has this property is said to be self-similar. Self-similar shapes can be defined using recursive functions, in which the shape appears in some form in its own defini-tion. For example, WINE, the name of an implementation of the Win32 API for Linux, stands for WINE Is Not an Emulator. One can imagine substituting the expansion for the acronym WINE in itself infinitely many times (WINE Is Not an Emulator Is Not an Emulator…). Thus, the acronym for WINE is self-sim-

ilar and has a recursive definition.Another example of a recursive defi-

nition is that of the Fibonacci sequence. Each element of the Fibonacci sequence is expressed as the sum of the previous two elements, with the first two ele-ments (referred to as the initial condi-tions) being zero and one respectively. Thus, the element following zero and one is one; the element following one and one is two; the element following one and two is three; the element fol-lowing two and three is five, and so on, ad infinitum.

In the context of fractals, the process of repeatedly applying a recursive definition on a set of initial conditions is termed an Iterated Function System (IFS). In fact, anything generated by an IFS is guaranteed to be recursively defined and thus self-similar. A famous example of a fractal generated by an IFS is the "Barnsley Fern," which is created by iterating four linear functions over four points (Fig. 1). The generated "fern" demonstrates how a very small set of simple functions iterated on a few points can generate something with such organic detail. The example also indicates the sensitivity of the result to the initial conditions; compared to the few initial conditions specified, the amount of detail represented is enormous [1].
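To make the idea concrete, here is a minimal C++ sketch that iterates the four affine maps of the Barnsley Fern using the commonly published coefficients and the familiar random-iteration ("chaos game") scheme. The coefficients, probabilities, point count, and the plain-text point dump are our illustrative choices, not part of the implementation described in this article.

#include <cstdio>
#include <cstdlib>

// Each map is x' = a*x + b*y + e, y' = c*x + d*y + f, chosen with probability p.
struct Affine { double a, b, c, d, e, f, p; };

int main() {
    const Affine maps[4] = {
        { 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01},  // stem
        { 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85},  // successively smaller leaflets
        { 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07},  // largest left leaflet
        {-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07}   // largest right leaflet
    };
    double x = 0.0, y = 0.0;
    for (int i = 0; i < 100000; ++i) {
        double r = static_cast<double>(std::rand()) / RAND_MAX;
        int k = 0;
        for (double cum = 0.0; k < 3; ++k) {           // pick a map according to its probability
            cum += maps[k].p;
            if (r < cum) break;
        }
        double nx = maps[k].a * x + maps[k].b * y + maps[k].e;
        double ny = maps[k].c * x + maps[k].d * y + maps[k].f;
        x = nx; y = ny;
        if (i > 20) std::printf("%f %f\n", x, y);      // plot these points to see the fern
    }
    return 0;
}

Plotting the printed (x, y) pairs reveals the fern; the same shape emerges regardless of the starting point, which previews the notion of a fixed point discussed below.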

In a previous example, the acronym WINE was said to be self-similar and could thus be generated by an Iterated Function System. Note, however, that the result would go on forever. The "Barnsley Fern," on the other hand, although capable of representing detail on an infinite scale, seems to progress toward a particular image with each iteration. This is because the functions that describe the "Barnsley Fern" are contractive, meaning that with each application, the function converges to a result, or an image in this case, known as the fixed point. The fixed point is what one would ideally like to represent by the IFS.

Consider the Fibonacci sequence again. If we take the ratio between each two consecutive numbers in the sequence, we find that as the numbers in the sequence get larger, the ratio between two consecutive numbers approaches the golden ratio. It may seem surprising that an operation iterated over an infinite sequence of numbers that themselves grow to infinity can converge on a particular value. The concept, however, is akin to that of a limit in mathematics; a function like 1/x approaches zero as x grows to infinity.
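A few lines of code make this convergence easy to see; this is only an illustration of the limit described above (the iteration count of 40 is arbitrary).

#include <cstdio>

// The ratio of consecutive Fibonacci numbers approaches the golden ratio
// (~1.618034) even though the numbers themselves grow without bound.
int main() {
    unsigned long long prev = 0, curr = 1;             // initial conditions F(0) = 0, F(1) = 1
    for (int i = 1; i <= 40; ++i) {
        unsigned long long next = prev + curr;         // recursive definition: F(n) = F(n-1) + F(n-2)
        prev = curr;
        curr = next;
        std::printf("F(%2d) = %llu, ratio = %.8f\n", i + 1, curr,
                    static_cast<double>(curr) / static_cast<double>(prev));
    }
    return 0;
}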

The application of Iterated Function Systems to image compression comes from the idea that if we can find a set of functions which converge on a particular image, then perhaps we can represent the image solely in terms of the parameters of the functions and their initial conditions. If the functions can describe how each part of the image is self-similar to another part of the image, then as long as the functions are contractive we can regenerate the image simply by iterating them over a prescribed set of initial conditions. While it promises extremely high compression ratios, the IFS approach also poses a daunting problem—how to conduct a search for functions which will closely approximate an image.

A simple way to approach the problem involves superimposing a square grid on the image and looking for functions which will take a portion of the image and make it look as close as possible to each cell in the grid. The encoding of an image then amounts to a set of functions which generate each cell in the grid from other parts of the image. There will be exactly one function for each cell, in order to ensure that the IFS as a whole will generate the entire source image [3].

We can simplify this arrangement even further by superimposing a second, coarser grid on the image which we will use to restrict the parts of the image on which functions in the IFS can operate. The first grid will for convenience be defined to be twice as fine as the second grid; so if the first, fine grid contains 4 pixel by 4 pixel sections of the image, the second, coarse grid would contain 8 pixel by 8 pixel cells. The cells in the coarse grid are called domain blocks, while the cells in the fine grid are called range blocks. The functions that make up an IFS for an image will transform domain blocks into range blocks in such a way that they minimize the difference between the transformed domain block and the range block.

In order to begin comparing domain blocks and range blocks in an effort to find self-similarity, domain blocks are contracted to the size of range blocks. The collection of contracted domain blocks is called the domain pool and gives us a finite and discrete set of image parts in which to search. The contraction process involves dividing each domain block into 2x2 cells, and taking each pixel value in the contracted block to be the average of the pixel values in the corresponding 2x2 cell. Thus, each domain block is made to be a quarter of its original size, the same size as range blocks.
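As a rough illustration of that contraction step, the sketch below averages each 2x2 cell of an 8x8 domain block into one pixel of a 4x4 block. It assumes a grayscale image stored row-major; the function name and parameters are ours, not the paper's.

#include <vector>
#include <cstdint>

// Shrink one 8x8 domain block to 4x4 by averaging each 2x2 cell of pixels.
std::vector<uint8_t> contractDomainBlock(const std::vector<uint8_t>& image,
                                         int imageWidth,
                                         int blockX, int blockY,   // top-left corner of the domain block, in pixels
                                         int domainSize = 8) {
    const int rangeSize = domainSize / 2;
    std::vector<uint8_t> contracted(rangeSize * rangeSize);
    for (int y = 0; y < rangeSize; ++y) {
        for (int x = 0; x < rangeSize; ++x) {
            int sum = 0;
            for (int py = 0; py < 2; ++py)             // accumulate the 2x2 cell
                for (int px = 0; px < 2; ++px)
                    sum += image[(blockY + 2 * y + py) * imageWidth + (blockX + 2 * x + px)];
            contracted[y * rangeSize + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return contracted;
}

Calling this for every domain block in the coarse grid builds the domain pool described above.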

Each contracted domain block can further be transformed to approximate a range block as closely as possible. We keep these transformations very simple, and restrict them to changes in brightness and contrast. Changing the contrast is analogous to multiplying all the pixels in a block by a certain value, and changing the brightness is analogous to adding a certain value to all the pixels in a block. Technically, the contrast value must be less than one in order to ensure contractivity, but our experiments show that better results are obtained without this restriction. The last transformation is the implicit translation from the location of the domain block to that of the range block.

In total, a function in the IFS for an image is composed of a contraction, an adjustment in brightness and contrast, and a translation. There must be one function for every range block, and each function operates on a particular domain block with a value to adjust brightness and a value to adjust contrast. In order to find the optimal function for a given range block, all we have to do is find the domain block which, when contracted and modified by particular brightness and contrast values, most closely approximates the range block [2].

Figure 1: The "Barnsley Fern"

An error function such as the root mean square (RMS) provides a means of determining how closely a transformed domain block approximates a corresponding range block. More precisely, in an image where each pixel can have a value between 0 and 255 inclusive, the error between two pixels could be defined as the root of the squared difference between the two pixel values. The RMS error between two equal-sized collections of pixels is simply the root of the averaged square difference between corresponding pixels. To find the best domain block to use to approximate a given range block, each domain block is transformed to the range block, and the RMS error between the range block and the transformed domain block is calculated. Optimization techniques can be used to determine the brightness and contrast adjustment values that minimize the error during transformation. Given these optimal transformation parameters, the domain block that produces the smallest error with the given range block is chosen as the domain block that will be used to generate that range block.
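One common way to realize that optimization is a least-squares fit of the scale and offset; the sketch below scores a single pairing of a contracted domain block with a range block in that way. The structure and names are ours, and the least-squares fit stands in for the unspecified "optimization techniques" mentioned above.

#include <vector>
#include <cmath>
#include <cstdint>

// Score one candidate pairing: the contracted domain block d approximates the
// range block r as r' = s*d + o, where s adjusts contrast and o adjusts
// brightness. The best s and o come from a least-squares fit; the RMS error
// measures how well the transformed block matches.
struct Match { double contrast, brightness, rmsError; };

Match scoreCandidate(const std::vector<uint8_t>& domain,   // contracted domain block
                     const std::vector<uint8_t>& range) {  // range block of the same size
    const int n = static_cast<int>(range.size());
    double sumD = 0, sumR = 0, sumDD = 0, sumDR = 0;
    for (int i = 0; i < n; ++i) {
        sumD  += domain[i];
        sumR  += range[i];
        sumDD += double(domain[i]) * domain[i];
        sumDR += double(domain[i]) * range[i];
    }
    double denom = n * sumDD - sumD * sumD;
    double s = (denom != 0.0) ? (n * sumDR - sumD * sumR) / denom : 0.0;  // contrast (scale)
    double o = (sumR - s * sumD) / n;                                     // brightness (offset)
    double err = 0;
    for (int i = 0; i < n; ++i) {
        double diff = s * domain[i] + o - range[i];
        err += diff * diff;
    }
    return { s, o, std::sqrt(err / n) };
}

An exhaustive encoder would call a routine like this for every candidate domain block and keep, for each range block, the pairing with the smallest RMS error.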

In this manner, one can determine a set of functions to represent, or encode, a given image. The primary reason this technique for representing images has been intractable is the time it takes to complete an exhaustive search for the optimal domain block to use for each range block. Attempts to mitigate encoding time by classifying domain blocks and improving the heuristics of the search have been moderately successful, but none have achieved the efficiency required for capturing video [6,7,8,9]. The level of detail that can be represented and the space-efficiency with which we can represent this detail are intimately tied to the way we segment the image into domain and range blocks. More sophisticated segmentation schemes do not require that blocks be fixed in size, allowing large swathes of color to be represented tersely without sacrificing the ability to represent fine detail when necessary. The framework described in this article is kept simple so as not to obscure how implementing the algorithm on more capable hardware might affect the tractability of the problem.

Decoding an image from its representation as an IFS is considerably easier. Recall that each function in the IFS corresponds to a range block in the original image. An arbitrarily sized, arbitrarily colored image is used as the starting point, making sure to adjust the sizes of the range and domain blocks so that the number of range blocks in the decoded image matches the number of functions saved in the encoding. Each range block in the decoded image can then be generated by applying its corresponding function to the specified domain block. The application of the function consists of contracting the contents of the domain block, adjusting them by the brightness and contrast values, and then moving them to the range block. A single iteration of the decoding process is the application of all the functions in the IFS. After seven or eight iterations, the decoded image should resemble the original image within 0.1 dB of accuracy (Fig. 2). Note that increasing the iterations beyond this number does not significantly improve the quality of the decoded image.
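A minimal sketch of one decoding pass, under the same 8x8 domain / 4x4 range assumption used above; the struct layout and names are ours. Reading from a snapshot of the previous image keeps each pass self-contained, which mirrors the requirement discussed later that an entire iteration finish before the next begins.

#include <vector>
#include <cstdint>
#include <algorithm>

// One stored function per range block: contract the pixels at domain block
// (dx, dy), scale by a contrast value, add a brightness value, and write the
// result into range block (rx, ry).
struct RangeFunc {
    int dx, dy;          // top-left of the 8x8 domain block (in pixels)
    int rx, ry;          // top-left of the 4x4 range block (in pixels)
    double contrast, brightness;
};

void decodeIteration(std::vector<uint8_t>& img, int width, const std::vector<RangeFunc>& funcs) {
    std::vector<uint8_t> prev = img;                 // read from the previous iteration's image
    for (const RangeFunc& f : funcs) {
        for (int y = 0; y < 4; ++y) {
            for (int x = 0; x < 4; ++x) {
                int sum = 0;                          // contract the matching 2x2 cell of the domain block
                for (int py = 0; py < 2; ++py)
                    for (int px = 0; px < 2; ++px)
                        sum += prev[(f.dy + 2 * y + py) * width + (f.dx + 2 * x + px)];
                double v = f.contrast * (sum / 4.0) + f.brightness;
                img[(f.ry + y) * width + (f.rx + x)] =
                    static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
            }
        }
    }
}

Running decodeIteration seven or eight times from an arbitrary starting image should approach the encoded picture, per the convergence argument above.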

From 0 to 60 in 40ms: Parallel Encoding on the Graphics Card

Over the course of the past decade, graphics hardware has become specialized and powerful enough to greatly exceed the capabilities of a central processing unit (CPU) for certain families of tasks. These tasks were initially very simple, such as performing vector and matrix operations or drawing geometric shapes, but eventually became quite complicated, aiding in intensive lighting and shading calculations. Recently, graphics processing units (GPUs) have been used for general and scientific computing tasks including predicting stock prices and running simulations. The hardware manufacturers have embraced this practice and now software exists that enables one to take advantage of a diverse set of functions. The appeal of computing on the GPU rather than the CPU lies in fundamental differences between the two architectures and the execution models which they support. A CPU and the way in which an operating system allows programs to use the CPU favor a serial execution environment, which means that only one program can be using the CPU at any given time. This ends up working well on desktops and other multitasking systems because often a program will request data from disk or over the network, and during this time, control of the CPU can be freely given to another program that is not waiting on another device. Graphics applications, on the other hand, often involve doing the same operation on independent pieces of a larger set of data, such as pixels in an image. As a result, GPUs have hundreds of stream processors which are individually less powerful than CPUs and run the same program on many different pieces of data simultaneously. This execution environment is said to be parallel and is most suited to solving "parallelizable" problems involving data that is not interdependent.

Both the encoding and decoding algorithms for a fractal image compression format have such parallelizable portions that operate on data which is independent. During the encoding process, for example, the search for the optimal function for one range block does not depend on the search for the optimal function of another range block. Hence, the parallel environment of the GPU allows for a simultaneous search for each range block's optimal function. This expedites the process of exhaustively checking every combination of range and domain block and helps reduce the impact that the size of the image has on encoding time. The process of contracting domain blocks in both encoding and decoding is also parallelizable as each domain block is contracted independently of the others.
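The sketch below only illustrates that decomposition using CPU threads; the article's implementation runs the same idea on the GPU through OpenCL. BestMatch, encodeParallel, and searchOne are our placeholder names, and searchOne stands in for the per-block exhaustive search (for example, repeated calls to a routine like scoreCandidate above).

#include <vector>
#include <thread>
#include <functional>

struct BestMatch { int domainIndex; double contrast, brightness, error; };

// Because each range block's search is independent, the range blocks can
// simply be split among workers; no synchronization is needed beyond join().
std::vector<BestMatch> encodeParallel(int numRangeBlocks, int numWorkers,
                                      const std::function<BestMatch(int)>& searchOne) {
    std::vector<BestMatch> results(numRangeBlocks);
    std::vector<std::thread> workers;
    for (int w = 0; w < numWorkers; ++w) {
        workers.emplace_back([&, w] {
            for (int r = w; r < numRangeBlocks; r += numWorkers)   // interleaved slice of range blocks
                results[r] = searchOne(r);
        });
    }
    for (std::thread& t : workers) t.join();
    return results;
}

On the GPU, the same decomposition maps naturally onto one work-item per range block, or per range/domain pair when the search is partitioned further as described below.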

Parallelizing the iteration of functions in the decoding process is slightly trickier. Note that when applying each function for each range block sequentially, we raise the possibility that a function for a later range block will end up operating on a domain block that was different at the start of the iteration. Because the functions converge to the image regardless of the order of their application during each iteration, it is only necessary that we ensure that an entire iteration's work is complete before proceeding to the next one.

Figure 2: The original image followed by seven iterations of decoding.

An additional level of parallelization can be created during encoding if we not only partition the exhaustive search by range block, but also allow each processor to operate on only a fraction of all the domain blocks. When all of the domain blocks have been searched for a given range block, the results can be reduced down to leave only the optimal result.

Our serial encoder and decoder were implemented in C++ and ran on an Intel Core 2 Duo P7450 processor with each of its two cores clocked at 2.13GHz. The parallel versions were also implemented in C++ with the aid of the OpenCL library and ran on an NVidia 9600M GS. The results (Fig. 3) are very encouraging, and demonstrate that with the help of the GPU, fractal image compression can indeed be useful as a resolution-independent video capture format. As long as the encoding can be done in fewer than 40 ms (one frame at the American television frame rate cited earlier), then the size of the screen on which content is displayed ceases to matter. If a content provider like a cable company can deliver content encoded at a specific level of quality, then it should display correctly on a wide variety of displays, as long as they agree on a common decoding process.

The usefulness of applying Iterated Function Systems to create resolution-independent representations for data relates to more than just images. The waveforms that make up sound and music can just as easily be searched for self-similarity by replacing contrast and brightness adjustments with Fourier transforms. It is perhaps not obvious what would result if a piece of audio were scaled to occupy a different amount of time, but the decoding algorithm would simply use the available information to attempt to fill in the gaps. The use of IFS in finding a source of patterns within even genetic code is being investigated [4]. No one knows what kind of interesting finds IFS will help detect, but an interesting way to find out might be to use IFS on itself, by analyzing the current body of fractal-inspired research for self-similarity between different techniques and applications, running the resulting iterated function system, and seeing where it converges. If only it were that easy!

References
1. Barnsley, Michael. (1988) "Fractals Everywhere", Academic Press, Inc., 1988.
2. Fischer, Y. (1992) "Fractal Image Compression". SIGGRAPH'92 course notes.
3. Jacquin, A. E. (1992) Image Coding Based on a Fractal Theory of Iterated Contractive Image Transformations. IEEE Transactions on Image Processing, 1(1).
4. Lio, P. (2003) Wavelets in bioinformatics and computational biology: State of art and perspectives. Bioinformatics, 19(1), pp. 2-9.
5. Mandelbrot, B.B. (1982) The Fractal Geometry of Nature. W.H. Freeman and Company. ISBN 0-7167-1186-9.
6. Martínez, A. R., et al. (2003) Simple and Fast Fractal Image Compression. (391) Circuits, Signals, and Systems.
7. Truong, Trieu-Kien; Jeng, J. H. (2000) Fast classification method for fractal image compression. Proc. SPIE Vol. 4122, p. 190-193.
8. Wu Xianwei, et al. (2005) A fast fractal image encoding method based on intelligent search of standard deviation. Computers & Electrical Engineering 31(6), pp. 402-421.
9. Wu Xianwei, et al. (2005) Novel fractal image-encoding algorithm based on a full-binary-tree searchless iterated function system. Opt. Eng. 44, 107002.

Figures
Figure 1. The Barnsley Fern, courtesy Wikimedia Commons.
Figure 2. The original image followed by seven iterations of decoding.
Figure 3. Graphs comparing encoding time on the CPU and GPU.

Figure 3: A comparison of encoding on CPU vs. GPU


Robotic Prosthetic Development: The Advent of the DEKA Arm

Kyle Baker / Staff Writer

A Brief History of Prosthetic Advancements

The history of amputations dates back as early as Hippocrates in ancient Greece. Amputations have continuously been an element of war, but advancements in prosthetics to improve a soldier's life post-war have not kept up with the times. Early United States efforts in limb prosthetics began around World War II. In a meeting at Northwestern University in 1945, a group of military personnel, surgeons, prosthetists and engineers collaborated to resolve what needed to be done in the field of limb prosthetics. This meeting resulted in the establishment of the Committee on Prosthetics Research and Development (CPRD), which directed endeavors in the field for over twenty-five years. Between 1946 and 1952, IBM developed some electrical limbs, but they were bulky and difficult for the user to operate. Another device, the Vaduz hand, developed in Germany after World War II, was a system ahead of its time. Childress explained that "the hand used a unique controller in which a pneumatic bag inside the socket detected muscle bulge through pneumatic pressure, which in turn operated a switch-activated position servomechanism to close the voluntary-closing electric hand."

The most recent advancement in robotic prosthetics is the invention of the "Power Knees." These innovative mechanical legs are operated by a motor that propels each leg forward. The legs work in tandem to keep the user at a constant speed. Last year, Josh Bleill, who lost his legs to a roadside bomb in Iraq, became the first person to operate the Power Knees in daily life. The development of advancements such as these and the fresh attitude toward improving prosthetics have helped make robotic prosthetics look more and more promising in recent years.

DARPA and the DEKA Arm Project

Since amputations are highly prevalent in wartime, it is only natural for the military to fund prosthetic research. DARPA (The Defense Advanced Research Projects Agency), which "is the same group that oversaw the creation of night vision, stealth aircraft, and GPS," funded a $100 million Pentagon project called Revolutionizing Prosthetics to reform limb prosthetics, specifically upper limb prosthetics. In order for the arm to be effective, it must have "sensors for touch, temperature, vibration and proprioception, the ability to sense the position of the arm and hand relative to other parts of the body; power that will allow at least 24 h use; mechanical components that will provide strength and environmental tolerance (temperature, water, humidity, etc.); and sufficient durability to last for at least ten years."

To fulfill these aims, DARPA has been working in conjunction with the DEKA Research and Development Corporation to produce a robotic arm that is no bigger than a human arm and weighs no more than nine pounds. The head of DEKA is Dean Kamen, who is also the inventor of the "Segway." According to Colonel Dr. Geoffrey Ling, manager of DARPA's program, when DARPA approached Kamen about this extraordinary goal, Kamen told them "the idea was crazy." That did not stop Kamen and his team of 40 engineers. After one year, they came up with the DEKA Arm. Scott Pelley of 60 Minutes interviewed Ling, who remarked, "It is very much like a Manhattan Project at that scope. It is over a $100 million investment now. It involves well over 300 scientists, that is, engineers, neuroscientists, psychologists."

Unfortunately, the prosthetic designs available in the market today remain well behind the times. Currently, a hook developed during World War II that resembles the hook in Peter Pan is the most common option. Two major problems that advanced prosthetics currently pose, and that the DEKA Arm must overcome, are irritation around the shoulder and excessive weight that tires the user. The cost of purchase is a vital concern as well. Current estimates for the DEKA Arm are around $100,000, which may seem high but is comparable to what current systems cost.


Figure 1: A man demonstrating the DEKA arm.


The basic approach utilized in the DEKA Arm design parallels Germany's Vaduz prosthetic system. The DEKA Arm is mounted on the shoulder, and inside the shoulder apparatus (Figure 1) there are tiny balloons distributed across the user's shoulder. These balloons inflate when a muscle flexes and respond by sending signals to the processor inside the arm. In addition to flexing the shoulder, the users, with the help of a set of buttons placed in their shoes, can direct the arm to perform specific functions. In Figure 2, Fred Downs, the Veterans Affairs official in charge of prosthetics, uses his toes to control the grasping of a water bottle.

Along with effective arm movements, one needs control of hand sensitivity. As Pelley asked in the interview, "How do you pick up something that you might crush?" To address this, the DEKA Arm incorporates a vibration sensor to alert the user to how tightly something is being grasped. The shoulder feels this vibration, and the vibration escalates with a more intense grasp. This function is demonstrated in Figure 3 as Chuck Hildreth picks up a grape and eats it.

The next step for the DEKA Arm is to connect it directly to the nervous system. The final device Revolutionizing Prosthetics envisions is an arm controlled simply by the user's thoughts. Ling explained that although the arm is gone, the nerves are not necessarily lost; the brain still has an effect on the movement of the shoulder. This direct connection would allow the user to accomplish a task simply by thinking about moving his arm. Jonathan Kuniholm of Duke University (Figure 4) described how electrical impulses in his residual limb, generated merely by thinking, allow the computer to produce movements in the hand. Eventually, this system of neuron-controlled movements will replace the buttons in the user's shoes, creating one continuous mechanical arm.
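In software terms, thought-driven control amounts to a decode loop: sample the electrical signals from the residual limb, reduce each window of samples to features, and classify the intended movement. The sketch below is a highly simplified, hypothetical version of that loop, with placeholder gestures and randomly generated weights standing in for a trained decoder; it is not the Revolutionizing Prosthetics code.

```python
# Simplified, hypothetical decode loop: read electrode signals from the residual
# limb, reduce each window to features, and pick the intended gesture.
# The gestures, features, and weights are placeholders for illustration.

import numpy as np

GESTURES = ["rest", "open_hand", "close_hand", "pinch"]

def extract_features(emg_window):
    """Mean absolute value per electrode channel over one window of samples."""
    return np.mean(np.abs(emg_window), axis=0)

def decode_gesture(features, weight_matrix):
    """Score each gesture with an (assumed, pre-trained) linear decoder; pick the best."""
    scores = weight_matrix @ features
    return GESTURES[int(np.argmax(scores))]

# Fake data: a 200-sample window from 4 electrode channels, random decoder weights.
rng = np.random.default_rng(0)
window = rng.normal(size=(200, 4))
weights = rng.normal(size=(len(GESTURES), 4))
print(decode_gesture(extract_features(window), weights))
```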

An important issue that follows is whether one can learn to fully operate such a mechanical arm. Suppose, for example, that someone is born without a hand or arm. How is this person supposed to think about moving a limb they have never moved in their life? Although the nerves exist in the residual limb, the concept of moving something that is not there may seem abstract.

To test this, Jacob Vogelstein and Robert Armiger of the Applied Physics Laboratory (APL) at Johns Hopkins University developed a spinoff of the very popular video game Guitar Hero. Guitar Hero consists of five colored fret buttons and a strum control: the five colors cross the television screen, and the player presses the matching color on the guitar while simultaneously strumming. APL's "Air" Guitar Hero eliminates the strum, and instead of using the standard guitar with its five colored fret buttons, the player simply "thinks" about pressing the colored buttons. This generates muscle signals that are picked up by electrodes placed on the player's residual limb. Kuniholm has been involved in this testing process. He has found that, without any strumming, the electrodes recognize his muscle movements and he can enjoy the game like any other player. The goal of Air Guitar Hero, said Armiger, "is to motivate users to practice pattern based signal classification."

Obviously, sensitivity of movement is a major concern, and Armiger and Vogelstein hope to surmount the issue using this game. Continual practice forces the player to think about moving his fingers, which in turn helps calibrate the system. This helps users adapt to a novel robotic process that they may rely on once the prosthetic arm becomes available. According to Armiger, Air Guitar Hero "provides a fun way for users to practice finger control while simultaneously providing 'training' data for pattern recognition algorithms. This helps the user learn to control the system and improves the way the system interprets the user's inputs." Armiger hopes his research and Air Guitar Hero "will attract bright students" to ultimately refine prosthetic control.
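The training idea behind Air Guitar Hero can be sketched as a standard supervised-learning problem: each fret color the game asks for labels the muscle signals recorded at that moment, and those labeled examples train a pattern classifier. The example below uses scikit-learn's linear discriminant analysis as a stand-in and entirely made-up data; APL's actual algorithms are not described in this article, so treat this purely as an analogy.

```python
# Rough analogy for game-driven classifier training: game prompts supply the
# labels, recorded EMG windows supply the examples. All data here is fake and
# the classifier choice (LDA) is a stand-in, not APL's actual method.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FRET_COLORS = ["green", "red", "yellow", "blue", "orange"]

def mean_abs_features(emg_window):
    """Mean absolute value per electrode channel, a common, simple EMG feature."""
    return np.mean(np.abs(emg_window), axis=0)

# Fake training data: 100 game prompts, each a 200-sample x 8-channel EMG window
# labeled with the fret color the player was thinking about pressing.
rng = np.random.default_rng(1)
windows = rng.normal(size=(100, 200, 8))
labels = rng.integers(0, len(FRET_COLORS), size=100)

X = np.array([mean_abs_features(w) for w in windows])
classifier = LinearDiscriminantAnalysis().fit(X, labels)

# During play, each new EMG window would be classified into a fret press.
new_window = rng.normal(size=(200, 8))
predicted = FRET_COLORS[int(classifier.predict([mean_abs_features(new_window)])[0])]
print(predicted)
```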

The Big Picture: DEKA Arm and the General Public

Although DARPA's massive $100 million investment comes at taxpayers' expense, many people, including Ling, maintain that this technology is absolutely necessary to honor the nation's commitment to its soldiers. For concerned members of the public, Ling offers reassurance that the program is "not a classified, military weapons system. This is an advancement in medical technology." As a result, universities and companies such as Johns Hopkins and DEKA can collaborate to develop the most effective solution. Though the DEKA Arm is not yet available to the public, it is currently being tested at the Department of Veterans Affairs. At this point, the first recipients of the DEKA Arm would be the nearly 200 amputees from the wars in Iraq and Afghanistan. The end goal, however, is for the general public to see their money returned to them in the form of the innovative DEKA Arm.

In an attempt to increase transparency and cooperation, Kuniholm founded the Open Prosthetics Project, an open-source website "that aims to make prosthetic-arm technology as open source and collaborative as Linux and Firefox." DARPA and Johns Hopkins are on the same page as Kuniholm; they too want the hardware and software behind the research to be "open source so that prosthetic-arm research innovation can evolve organically." Clearly, the engineers and scientists involved understand that this is a massive team project, not a competition among companies and universities. Collaboration will speed the work along and, in the process, generate the most advanced prosthetic product.

Veterans dominate the discussion of amputation, but that is largely a result of media coverage and research grants; they are certainly not the only people who could use a prosthetic arm. Accidents, disease, birth defects, and war all cause loss of limb, and in fact most amputations are "related to work-related civilian trauma." The DEKA Arm may seem like an unnecessary gadget to some of those unlikely to receive its benefits immediately, but in the long run the technology is in everyone's interest. Since the project comes at the taxpayer's expense and the money is not directly returned to them, the current generation might ask, "Why are we spending so much money on this?" Those people should consider that this innovation could one day improve the life of their own child; that is why this research is being conducted now. According to DARPA, the technologies it develops are expected to be "readily adaptable to lower-extremity amputees [with] civilian amputees [benefitting] as well as amputee soldiers."

Currently, hooks developed around World War II are still being used in place of amputated hands. This is simply unacceptable. The United States has the ability to revolutionize this outdated technology and is taking important strides forward. With clinical testing now under way, the DEKA Arm and robotic prosthetics will be available to amputees in the foreseeable future. The DEKA Arm appears very promising, especially with the open-source approach the leading developers have elected to take. Furthermore, with increasing grant support, the DEKA Arm will likely become the top model for all robotic prosthetics.

References

1. Adee, Sally, “For those without hands, there’s Air Guitar Hero.” IEEE Spectrum. (2008). 12 November 2009 <http://spectrum.ieee.org/consumer-electronics/gaming/for-those-without-hands-theres-air-guitar-hero>.

2. Answers.com. 2009. Answers Corporation. 15 Nov. 2009. <http://www.answers.com>.

3. Armiger, Robert. Email interview. 2 December 2009.

4. Bogue, Robert, “Exoskeletons and robotic prosthetics: a review of recent developments.” Industrial Robot: An International Journal. 5 (2009): 421-7. 14 November 2009 <http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&contentId=1806007>.

5. Childress, Dudley S, “Historical aspects of powered limb prostheses”. Clinical Prosthetics and Orthotics, 9(1): 2-13, 1985.

6. Ellison, Jesse, “A New Grip on Life.” Newsweek. (2008). 14 November 2009 <http://www.newsweek.com/id/172566>.

7. Graham, Flora. “Disability no barrier to gaming.” BBC News 12 March 2009. 16 November 2009 <http://news.bbc.co.uk/2/hi/technology/7935336.stm>.

8. Meier, R.H., & D. J. Atkins, ed. Functional Restoration of Adults and Children with Upper Extremity Amputation. New York: Demos Medical Publishing, 2004. 1-7.

9. Pelley, Scott. “The Pentagon’s Bionic Arm.” 60 Minutes 20 September 2009: 1-4. Web. 29 October 2009.

10. Pope, David, “DARPA Prosthetics Programs Seek Natural Upper Limb.” Neurotech Reports. 14 November 2009 <http://www.neurotechreports.com/pages/darpaprosthetics.html>.

11. United States. DARPA. Revolutionizing Prosthetics Program. February 2008. 16 November 2009 <http://www.darpa.mil/Docs/prosthetics_f_s3_200807180945042.pdf>.

hurj

can you see yourself in hurj?

share your research!

now accepting submissions for our fall 2010 issue

focus -- humanities -- science -- spotlight -- engineering

For over half a century, the Institute for Defense Analyses has been successfully pursuing its mission to bring analytic objectivity and understanding to complex issues of national security. IDA is a not-for-profit corporation that provides scientific, technical and analytical studies to the Office of the Secretary of Defense, the Joint Chiefs of Staff, the Unified Commands and Defense Agencies as well as to the President’s Office of Science and Technology Policy. To the right individual, IDA offers the opportunity to have a major impact on key national programs while working on fascinating technical issues.

Along with competitive salaries, IDA provides excellent benefits including comprehensive health insurance, paid holidays, 3-week vacations and more – all in a professional and technically vibrant environment.

Applicants will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship is required. IDA is proud to be an equal opportunity employer.

Please visit our website www.ida.org for more information on our opportunities.

Please submit applications to: http://www.ida.org/careers.php

Institute for Defense Analyses 4850 Mark Center Drive Alexandria, VA 22311

IDA is seeking highly qualified individuals with PhD or MS degrees

Sciences & Math: Astronomy, Atmospheric, Biology, Chemistry, Environmental, Physics, Pure & Applied Mathematics

Engineering: Aeronautical, Astronautical, Biomedical, Chemical, Electrical, Materials, Mechanical, Systems

Other: Bioinformatics, Computational Science, Computer Science, Economics, Information Technology, Operations Research, Statistics, Technology Policy

broaden your perspective

your career • your future • your nation

we thank you for reading!
