Olavian Natural Sciences Society Journal


Description

The December edition of the Natural Sciences Society journal.

Transcript of Olavian Natural Sciences Society Journal

Page 1: Olavian Natural Sciences Society Journal

2011. Edited by Luke Watkins

Introduction

Hello and welcome to the first ever Journal of the Olavian Natural Sciences Society!

As a Society we were founded in mid-September, near the start of this academic year, and have gone from strength to strength since our inception. We are currently open to all students in years 11, 12 and 13 with an interest in scientific fields, and our membership has grown steadily since September. We were aimed primarily at anyone considering a career in science, but weʼre very pleased to see that our membership is widening to include many students with just a casual interest in scientific matters. Our structure involves a speech every Friday lunchtime (at 1pm in S4) given by a member of the Society, and this term weʼve had speeches on a very wide range of topics. A comprehensive list of this termʼs speeches, along with a very brief summary of each, is included in this Journal, and we look forward to the many speeches we will undoubtedly receive next term and beyond.

On Thursday 8th December we hosted our first external speaker, Gavin Hunter, from engineering giant Mott MacDonald, who gave a highly informative presentation on careers in the sector as a whole, centred on his area of expertise as an environmental consultant. Going into the New Year we hope to host many more external speakers on a variety of topics, and on the whole we hope to build on the strong foundations we have established this term.

In this Journal are a number of articles, all of which have been written by members of the Natural Sciences Society, on topics ranging from superfoods to string theory. I hope you will appreciate the effort invested on the part of the articlesʼ authors, and hopefully youʼll find the articles as enjoyable and informative as we have.

On behalf of the Society I would also like to thank the following people: Ms Marwood, for her supportiveness from the beginning and for letting us use her lab every Friday; everyone whoʼs taken the time to contribute an article to this Journal; Luke Watkins, for compiling and formatting this Journal; everyone whoʼs given a speech to the Society; and finally, everyone whoʼs turned up each week to make this Society possible.

Thanks for reading,

Asher Leeks, Founding member

Issue no. 1 12/12/11

Contents

Food for life ........ 2
New sources of Energy ........ 5
Nuclear interactions ........ 8
Taxonomy ........ 11
Are we still evolving? ........ 16
Can humans live forever? ........ 19
String theory ........ 21


Food for Life

The idea of food as a life saver is not new. Recently, however, society has turned to drugs for nearly every ailment, when all that may be needed is a simple change in diet. Food is the most widely consumed cure out there, available at local grocery stores, supermarkets and produce barns. Certain foods can act as cancer-blockers, antidepressants, diuretics, anticoagulants, painkillers, antibiotics, anti-inflammatory agents, tranquilisers and much more. These foods in turn can ward off headaches, arthritis, heart attacks and strokes, colds, influenza, ulcers, cancers of many types, gallstones, constipation and most other disorders or afflictions you can think of.

Apples

“An apple a day keeps the doctor away.” Although the word “superfood” may just be a marketing campaign by large companies, it is the perfect word to describe the simple yet amazing apple. Why is the apple amazing, you ask? Here are just some of the extraordinary properties of the fruit of Malus domestica.

Apples are enormously high in antioxidants, which scientists think help prevent and repair the oxidative damage that occurs during normal cell activity.

These antioxidants are also thought to give some defence against Parkinsonʼs, a disease characterised by a breakdown of the brainʼs dopamine-producing nerve cells. In England there are over 550,000 people suffering from dementia, with Alzheimerʼs accounting for over 60% of cases. A new study performed on mice suggests that drinking apple juice could ward off Alzheimerʼs and fight the effects of ageing on the brain: mice fed an apple-enhanced diet showed higher levels of the neurotransmitter acetylcholine and performed better in maze tests than those on a standard diet.

Triterpenoids, a class of compounds found in apple peel, show strong anti-growth activity against cancer cells.

The soluble fibre in apples binds with fats in the intestine, which can lead to lower cholesterol levels.

Red apples contain a special type of antioxidant called quercetin. Recent studies have found that quercetin can help strengthen the immune system, particularly when youʼre suffering from stress.

The fibre in apples can help if you are having digestion problems: it can draw water into your colon to keep things moving along, or absorb excess water from your stool to slow your bowels down.

Research has also linked soluble-fibre intake to a slower build-up of cholesterol-rich plaque in the arteries, while phenolic compounds found in apple skins help prevent cholesterol from solidifying on the artery walls.

One study found that women who eat at least one apple every day are 28% less likely to develop type 2 diabetes.

Although eating apples wonʼt actually replace your toothbrush, it does stimulate the production of saliva, reducing tooth decay by lowering the levels of bacteria in the mouth.

Blueberries

Blueberries were one of the first fruits to be dubbed a “superfood,” but are they really that beneficial?

They have a higher concentration of antioxidants than almost any other fruit, being rich in anthocyanin, one of the most powerful antioxidants known.


They are naturally high in vitamins C and K, as well as in manganese, copper, selenium and much more, making them a perfect natural source of many vitamins and minerals.

They contain an antioxidant called pterostilbene, which in some studies has reduced cholesterol even more effectively than prescription medications.

Their anthocyanin is also thought to improve your eyesight by reducing eye strain, improving night vision and speeding your eyesʼ reaction to changes in light.

Epicatechin, also found in blueberries, is known to fight bacteria by stopping them attaching to the lining of the bladder, allowing them to be flushed out.

Blueberry fibres also improve your digestion and fight constipation naturally.

Colon Cancer

Colon cancer is the third most frequently diagnosed cancer and the second leading cause of cancer deaths in the United States. Symptoms include stools with mucus and rectal bleeding. A study team in London tested whether a type of Omega-3 (EPA) could stop the development of rectal polyps (abnormal growths of tissue projecting from a mucous membrane) in human subjects with a genetic risk of developing bowel cancer. The subjects given a daily EPA supplement saw a 22.4% drop in the number of new polyps that developed over the six months, and a 30% reduction in the size of these polyps. In stark contrast, the group receiving the placebo had an increase in both the size and number of their colorectal polyps. It has been concluded that Omega-3 fatty acids are as effective in preventing colorectal cancer as anti-inflammatory drugs, but without the negative cardiovascular side effects those drugs can have, particularly in older patients. The best source of Omega-3 is of course oily fish such as salmon, although flaxseed oil is an even better source, and in most large stores Omega-3-enriched bread, eggs and milk can also be found. Omega-3s are thought to reduce inflammation throughout the body, which can otherwise harm your blood vessels and lead to heart disease. They may also reduce your triglyceride levels, lower your blood pressure, reduce blood clotting, boost immunity and alleviate arthritis symptoms, and in children may even improve learning ability! Eating one to two servings a week of fish, particularly fish rich in Omega-3 fatty acids, appears to reduce the risk of heart disease, particularly sudden cardiac death.

Ten foods you should eat that you arenʼt eating already

Around 60% of you will eat only one of the following foods, and a further 20% will eat none at all. These are not uncommon or exotic, like goji berries, but things which are easy to find yet never seem to fall into our shopping carts.

Cabbage: Packed full of nutrients such as sulforaphane, which is said to be able to help cancer-fighting enzymes. How to eat it: Asian-style slaw or Vegetarian Totra.

Swiss chard: Filled with carotenoids that protect against ageing. How to eat it: Chop and sauté in olive oil.

Cinnamon: Helps control cholesterol and blood sugar levels. How to eat it: Sprinkle on pancakes or oatmeal.

Pomegranate juice: Packed full of antioxidants with a multitude of benefits, including lowering blood pressure. How to eat it: Just drink it.

Dried plums/prunes: Again filled with antioxidants. How to eat it: On their own, or wrapped in ham or bacon and baked.

Pumpkin seeds: The most beneficial part of the whole pumpkin, as they contain a high concentration of magnesium. How to eat it: Roasted as a snack, or sprinkled on salad for extra crunch.

Sardines: High in Omega-3, calcium, iron, magnesium, phosphorus, potassium, zinc, copper, manganese and B vitamins, sardines are the “perfect” food. How to eat it: Pan fry with olive oil, or mash and spread on toast.

Turmeric: Dubbed the “queen of spices,” it has anti-inflammatory properties. How to eat it: Use it as a spice for any dish; it works especially well with vegetable dishes.

Frozen blueberries: These are amazing as they can be eaten all year round, unlike the fresh variety. Although they do lose some nutrients in the freezing process, they are still incredibly beneficial. How to eat it: Mix into yoghurt for a perfect dessert, or blend with other fruit to create a smoothie.

Canned pumpkin: Can be eaten all year round and is high in fibre and vitamin A. Also very filling but still low in calories. How to eat it: Pumpkin pie, pumpkin rosti or even a casserole.

By Stanley Ho


New Sources of Energy

One of the major issues facing the world today is the production of energy. As countries in the Third World develop and continue to grow in population, their energy consumption will grow tremendously. Simultaneously, nations in the First World are increasing their consumption as technology takes over more and more of our lives.

At the same time, we are faced with the problem that our most reliable and widespread energy sources – coal, oil and natural gas – are contributing to climate change, and are thus damaging the planet for future generations. As such, it is vital that new sources of energy are utilised or older sources are used more prolifically.

Perhaps the best replacement for fossil fuels would be nuclear power, namely nuclear fission. The process is relatively simple: an unstable isotope (either one made unstable by absorbing an extra neutron, or one that is inherently unstable) undergoes radioactive decay, splitting into two (or occasionally three) smaller isotopes with a roughly 3:2 mass ratio, while also releasing further neutrons and energy.

The principle behind nuclear fission power is to use the neutrons produced by one decay to induce decay in other atoms. Thus a chain reaction is started, with the decay of a single atom fuelling further decays until all the original isotope is gone, energy being produced throughout. In a nuclear power station, this energy is used to heat water and turn it into steam, which turns turbines, producing electricity.
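The fate of such a chain reaction comes down to one number: the average count of neutrons per fission that go on to cause another fission, usually called the multiplication factor k. A minimal Python sketch (the names and numbers here are purely illustrative, not real reactor data):

```python
# Toy model of a chain reaction. Each fission releases `nu` neutrons on
# average, and a fraction `p` of them triggers another fission, so the
# neutron population multiplies by k = nu * p every generation.

def neutron_population(k: float, generations: int, start: float = 1.0) -> float:
    """Neutron count after the given number of generations."""
    return start * k ** generations

# k > 1: the reaction grows exponentially (supercritical)
assert neutron_population(2.0, 10) == 1024.0
# k < 1: the chain fizzles out (subcritical)
assert neutron_population(0.5, 10) < 0.001
```

A reactor is run with k held very close to 1, so the reaction neither dies out nor runs away.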

The main issue with nuclear power is how hazardous it is: the type of radiation mainly produced in a nuclear reactor, gamma radiation, causes radiation poisoning in humans. Radiation poisoning causes nausea, vomiting, headache and fever; at moderate doses death can occur after six to eight weeks, while a high dose will kill in less than 48 hours. Death generally occurs due to either a massive drop in the number of blood cells (and thus an immune system collapse, leading to death by infection), the gastrointestinal system falling apart, or the neurological system degenerating.

Nuclear fission reactors are generally seen by the public as very dangerous, which isnʼt helped by scaremongering on the part of many environmentalist groups. Although many people cite the dangers of a meltdown like at Chernobyl or Fukushima, the risk of this happening is very slight.

The meltdown at Chernobyl occurred because the authorities turned off the safety systems for some badly thought-out tests. The meltdown at Fukushima occurred because nuclear plants in Japan did not have to undergo safety tests for tsunamis, although earthquakes were planned for. Using these events as an argument against building a nuclear power station in (for example) Germany is scientifically baseless, since the chance of a magnitude 7.0 or greater earthquake in Germany in any one year is statistically insignificant.

There is also the issue of radioactive waste. The standard reactor produces huge amounts of radioactive isotopes that are useless for energy production, although they still emit enough radiation to be very dangerous. Isotopes often produced include uranium-234, neptunium-237, plutonium-238 and americium-241.

Another problem with nuclear fission is that only certain isotopes are suitable for starting nuclear chain reactions. The most common fuels used are uranium-235 and plutonium-239, but there are other possible fuels, such as thorium (see below).

A liquid fluoride thorium reactor works by turning thorium-232 into uranium-233, which can be used as a reactor fuel. While energy is produced, the excess neutrons are absorbed by a blanket of thorium salts, which are subsequently turned into more uranium-233 for use as fuel. In other respects, it works similarly to a regular nuclear reactor.

The advantages of thorium reactors are that they are inherently safe (they cannot explode, because they are not pressurised) and that they produce much less nuclear waste than a standard uranium reactor, because uranium-233 fuel is very pure, unlike uranium-235, whose reactors often produce the toxic isotope plutonium-239. In addition, the most common isotopes left after fission, caesium-137 and strontium-90, are far less dangerous than the comparable uranium products.

Finally, thorium is far more abundant in the Earthʼs crust and far cheaper to extract and use than uranium. Nearly all countries have significant thorium reserves, although the US and India have the largest.

The main disadvantage (in politiciansʼ eyes) is that thorium reactors do not produce suitable material for nuclear weapons, which is why investment has been rather low in the 50 years since they were first designed. This is starting to change, however, and they may well carry nuclear fission into the future.

Nuclear fusion is the opposite of fission: instead of a large atom being broken down into smaller ones, small atoms are combined into a larger one. Fusing elements lighter than iron produces energy, while beyond iron fusion requires energy; conversely, fission produces energy for elements heavier than iron, but requires energy for those lighter than iron.

Nuclear fusion is also the process that powers stars. The form of fusion that occurs in stars involves fusing protium, which is the most common hydrogen isotope, with just one proton (and no neutrons) in the nucleus.

The form of fusion we are trying to induce on Earth involves combining deuterium and tritium. Deuterium is an isotope of hydrogen containing one proton and one neutron in the nucleus, while tritium contains one proton and two neutrons. Both have a very low natural abundance. Combining them yields a helium-4 atom and a neutron, as well as releasing energy.
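The energy released by this reaction can be estimated from the small amount of mass that disappears, via E = mc². A short Python check using standard atomic masses (the ~17.6 MeV figure is well established; the code is just a worked illustration):

```python
# Energy released by D + T -> He-4 + n, from the mass defect.
# Masses are in unified atomic mass units (u); 1 u of mass is
# equivalent to about 931.494 MeV of energy.
M_D, M_T = 2.014102, 3.016049     # deuterium, tritium
M_HE4, M_N = 4.002602, 1.008665   # helium-4, neutron
U_TO_MEV = 931.494

mass_defect = (M_D + M_T) - (M_HE4 + M_N)   # mass "lost" in the reaction
energy_mev = mass_defect * U_TO_MEV

assert abs(energy_mev - 17.6) < 0.1   # the familiar ~17.6 MeV per fusion
```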

The main problem with initiating this is that nuclei are positively charged, and like charges repel, so it is extremely difficult to force the nuclei together. Once they get close enough, the strong nuclear force (an attractive force) overcomes the electrostatic repulsion and pulls the nuclei together, causing fusion to begin.

The nuclei can be forced this close together through several means. In the sun, the nuclei are forced together by gravity creating huge pressure in the core. Meanwhile, scientists are working on using magnetic fields to create sufficient pressure to initiate fusion. The other possibility, using high temperatures to induce fusion, would be unbelievably dangerous given the temperatures involved.

Although several facilities have initiated fusion, we cannot yet make the reaction efficient enough to produce its own energy: our energy input is greater than our output. New designs and technical improvements are making progress, but it will still be many decades before we can achieve efficient fusion, at which point we will be looking at enormous amounts of cheap energy.

Nuclear fusion and fission are considered prime possibilities for energy because of how efficient they are. However, we must consider other, more practical alternatives, given that one faces huge public opposition and the other is not viable yet.

Geothermal power is one possibility. There are several different methods of utilising geothermal power, which is energy from the Earthʼs interior. The simplest type uses steam released underground to drive turbines, which in turn produce electricity; this is a dry steam plant. Another, more common method is the flash steam plant. This uses pipes to take deep, high-pressure water (of around 180 degrees Celsius) and guide it upwards into lower-pressure chambers, where the water flashes into steam and drives the turbines. In both cases, the water is returned to the ground afterwards.

A newer, more advanced type of geothermal station involves drawing up water of only around 57 degrees and using it to flash-heat a liquid with a much lower boiling point, such as butane or pentane. This is known as binary cycle geothermal power.

The main issue with geothermal energy is that it is only usable in a select few areas, namely those with geyser fields or which lie on tectonic plate boundaries. Advances in binary cycle geothermal power open up a greater number of sites, but these are still limited by tectonic plate boundaries. Currently, 24 countries around the world produce geothermal power, with the US the worldʼs largest producer. Five countries (El Salvador, Kenya, the Philippines, Iceland and Costa Rica) produce more than 15% of their power from geothermal sources, with Iceland leading at 30% of power output. Indonesia may even become an energy superpower in the future thanks to its geothermal reserves.

The problem of locating a plant may be solvable for First World nations. Both France and Germany have investigated the possibility of geothermal plants several kilometres deep, as every piece of land on the planet is warm enough several kilometres down to support binary cycle power generation.

In conclusion, the energy problems facing our world today may well be solvable in the next few decades with scientific advances in the fields of nuclear fusion and fission, as well as geothermal power. This does not mean, however, that we should simply ignore climate change while we wait for these advances to occur, because we cannot rely on the future.

By Samuel Bentley


The Strong and Weak Nuclear Interactions

Even if physics isnʼt your thing, youʼve almost certainly heard about the four forces, quarks, or fusion. Everybody knows about gravity, and even if you havenʼt heard of the electromagnetic force, you will have learnt about its different manifestations, such as, well... electricity and magnets. There are two other forces which we donʼt play with on a daily basis, forces that go against our normal intuition: the two nuclear forces (see the title for clarification). If this still leaves you confused, be aware that “interaction” is just a slightly more sciencey term for “force,” and if youʼre still in the dark, you may want to read on.

Particle Basics

First of all, I should tell you how the universe works. A (rather excessive) simplification of quantum theory is that if left alone, something might turn into something smaller, but there is no way it will turn into something bigger, as there is no energy input. It may output energy, though, and become smaller. So long as no energy is put in, the universe will always work in this way: everything tends towards the lowest possible energy state (electrons sit in the lowest available shells, or energy levels; things you drop fall towards the ground, and so on). Iʼll apply this later when I tell you about the weak force, but before that, so you donʼt get bogged down in small numbers, Iʼll teach you the basics of particle mass. Itʼs measured in electronvolts: the kinetic energy gained by one electron as it moves through a potential difference of 1 volt. Since e = 1.6 × 10^−19 C and V = W/Q (joules per coulomb), 1 eV = 1.6 × 10^−19 J (see, much cleaner!). As Einstein said, E = mc^2, so we usually measure particle masses in eV/c^2 to avoid messy-looking everyday masses: 1 eV/c^2 = 1.6 × 10^−19 ÷ (299 792 458)^2 = 1.78 × 10^−36 kg. And if anyone cares to know, I weigh about 3.14 × 10^37 eV/c^2.
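For anyone who wants to verify that conversion, here is a two-line Python check (using the modern exact value of the elementary charge):

```python
# Checking the eV-to-kilogram conversion: m = E / c^2.
E_1EV = 1.602_176_634e-19   # one electronvolt in joules
C = 299_792_458.0           # speed of light in metres per second

kg_per_ev_over_c2 = E_1EV / C**2

# Agrees with the quoted 1.78e-36 kg to within 1%.
assert abs(kg_per_ev_over_c2 - 1.78e-36) / 1.78e-36 < 0.01
```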

Quarks

Before explaining the strong force, I should tell you about the constituents of matter that experience it. Quarks primarily compose protons and neutrons, as well as other, short-lived exotic matter. Hadrons are particles made of quarks, which can only exist in groups of two or three. Hadrons can be grouped as either baryons (with three quarks, such as protons or neutrons) or mesons (composed of one quark and an antiquark). There are 6 “flavours” (types) of quark (12 counting antiquarks). Quarks have a charge of either −1/3 (for down, strange and bottom) or +2/3 (for up, charm and top); these charges are reversed in antiquarks. The charges add up to give an integer charge:

- for protons (2 up + 1 down): 2/3 + 2/3 − 1/3 = +1
- for neutrons (1 up + 2 down): 2/3 − 1/3 − 1/3 = 0

The masses of the up and down quarks together donʼt come close to the true mass of a proton or neutron. This is thought to be because the gluons (the particles that mediate, or carry, the strong force holding quarks together) form a highly energetic field surrounding the quarks, constituting most of the hadronʼs mass.
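The charge sums above can be checked with exact fractions, so the thirds donʼt pick up any rounding, for example in Python:

```python
# Adding up quark charges with exact rational arithmetic.
from fractions import Fraction

UP = Fraction(2, 3)      # up, charm, top
DOWN = Fraction(-1, 3)   # down, strange, bottom

proton = 2 * UP + DOWN    # uud
neutron = UP + 2 * DOWN   # udd

assert proton == 1    # integer charge +1
assert neutron == 0   # integer charge 0
```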

The Strong Interaction

This, the strongest of the four forces (physicists arenʼt renowned for their naming capabilities), has a strength 100 times that of the electromagnetic force, the second strongest, and is 10^38 times stronger than gravity. It binds protons and neutrons in the nucleus and holds the quarks within nucleons together. The interaction actually gets stronger at larger distances: within a nucleon quarks are almost free to move around, but at any greater distance it is impossible for them to move away (which is why there are no free quarks in nature). This is called asymptotic freedom.

Quarks under the strong force have a property called “colour charge.” This is similar to electric charge, except with three types, red, blue and green, as opposed to + and −. It has nothing to do with visible colours; it is just a name. Quarks in a hadron madly exchange gluons, changing their “colour” rapidly as they emit them, creating a “colour force field” of gluons holding the quarks together. Gluons and hadrons must be “colour neutral”: the colour charges in the particle must be the three different ones (or one colour and its anti-colour, for mesons and gluons). This is analogous to electrically neutral particles, where the positive charge cancels out the negative component. The exchange of gluons between quarks in a hadron is what holds it together, and colour charge is always conserved: every exchange must have a colour-neutral outcome (mixing the three colours as in a baryon, or mixing a colour with its anti-colour as in a gluon or meson).

Colour Change

Colour change happens as gluons are exchanged. A quark may start red, then release a red/antigreen gluon. This makes it lose its red colour charge and become green. The gluon may then be absorbed by a green quark, which becomes red: the antigreen neutralises its green charge and the red component turns it red. Simples! You might notice this allows 9 different colour/anticolour combinations (green, red or blue, each paired with antigreen, antired or antiblue), but as the maths works out there are only 8 possible gluons. There is no intuitive explanation for this result yet.

An interesting property of quarks follows from what happens when enough energy is used to pull a quark away from a hadron, with the colour force field increasing in strength to pull it back (the aforementioned asymptotic freedom). Thanks to our old friend E = mc^2 and the huge force involved when the strong interaction acts at a distance, it eventually becomes energetically cheaper for the energy pulling the quark back to be converted into mass for a new quark/antiquark pair (a meson), allowing the colour force field to relax. This explains the strong interactionʼs minute range of 2.5 femtometres (about 2.5 times the diameter of a proton), and the fact that free quarks are nonexistent in nature. The tiny range is the reason we donʼt see a greater effect of this force on larger scales, whereas the electromagnetic and gravitational interactions have infinite range.
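The colour bookkeeping in that red/antigreen example can be sketched in a few lines of Python. The notation here is invented purely for illustration (this is not a real QCD calculation), but it shows how a single gluon exchange swaps colours while leaving the baryon colour neutral:

```python
# A gluon is written as a (colour, anticolour) pair. Emitting one strips
# that colour from a quark; absorbing one cancels the matching colour
# and deposits the gluon's colour instead.

def emit(quark, gluon):
    colour, anticolour = gluon
    assert quark == colour, "a quark can only shed the colour it carries"
    return anticolour            # a red quark emitting red/antigreen turns green

def absorb(quark, gluon):
    colour, anticolour = gluon
    assert quark == anticolour, "the anticolour must cancel the quark's colour"
    return colour                # a green quark absorbing red/antigreen turns red

baryon = ["red", "green", "blue"]
gluon = ("red", "green")              # red/antigreen, as in the example
baryon[0] = emit(baryon[0], gluon)    # red -> green
baryon[1] = absorb(baryon[1], gluon)  # green -> red

# The baryon still contains one quark of each colour: colour neutral.
assert sorted(baryon) == ["blue", "green", "red"]
```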

The Weak Interaction

This is the final, and least intuitive, of the four fundamental forces of the universe. It is responsible for radioactive decay and for all fusion reactions in stars. It is the only force that affects all known elementary particles, and hence is the only way we can detect neutrinos. The weak interaction (the word “force” is not very applicable here at all) is caused by the exchange (emission or absorption) of charged “W” and neutral “Z” bosons between particles; these bosons are many times more massive than nucleons. The interaction, to put it bluntly, turns particles into different particles. Quark flavour change is one of the most interesting properties of any of the interactions: it literally turns one flavour of quark into another. Remember at the beginning when I said that a big thing may turn into a smaller thing? Heavier quarks can only turn into lighter ones, unless energy is put into the system (as in stars, where the huge kinetic energy each particle has allows an up quark to turn into a heavier down quark, converting a proton into a heavier neutron). Quarks can also only turn into ones with a different charge, from −1/3 to +2/3 and vice versa (as W bosons can only have integer charge).

Beta Decay

The weak interaction allows quarks to change flavour, which causes beta decay. A neutron may spontaneously emit a W− boson, turning one of the down quarks it contains into a (lighter) up quark and so turning the neutron into a lighter proton. The W boson has a lifetime of about 10^−24 seconds before decaying into an electron and an antineutrino. The W boson is much more massive than the neutron, but since its lifetime is so short it is possible for it to exist, very briefly. These bosons are emitted at very high energies, but in the lifetime they have, even travelling at near the speed of light they can only cover about 3 × 10^−16 metres before decaying; hence the tiny range of the weak interaction, even smaller than that of the strong force.
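That range figure is just speed multiplied by lifetime, which is easy to check:

```python
# Rough upper bound on the weak interaction's range: distance = speed x time.
C = 299_792_458.0    # speed of light in m/s (an upper bound on the boson's speed)
LIFETIME = 1e-24     # approximate W boson lifetime in seconds (from the text)

max_range = C * LIFETIME

assert 1e-16 < max_range < 1e-15   # of order 3 x 10^-16 metres
```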

Half Lives & Probability

The weak force operates purely on probability, which explains the idea of half-lives: we can predict how much of a sample will decay, yet it is impossible to know which atoms will decay. This is why free neutrons have a half-life of about 15 minutes: in this time there is a 50% chance that a down quark in a single free neutron will change into an up quark, and the resulting proton is fully stable, being made of the lightest possible combination of quarks (if you donʼt involve antimatter, as in mesons). This explains beta decay, but still doesnʼt explain why a proton could turn into a neutron in a star. To see why, remember that the rule at the start was that IF NO ENERGY IS INPUT a particle canʼt become heavier.
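The half-life rule can be written as a single formula: the fraction of a sample remaining after time t is (1/2)^(t / half-life). As a quick Python sketch, using the textʼs figure of roughly 15 minutes for a free neutron:

```python
# Fraction of a radioactive sample remaining after time t,
# where t and half_life are in the same units (here, minutes).
def fraction_remaining(t: float, half_life: float) -> float:
    return 0.5 ** (t / half_life)

assert fraction_remaining(15, 15) == 0.5    # one half-life: half remains
assert fraction_remaining(30, 15) == 0.25   # two half-lives: a quarter
```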

Stars & Fusion

In a star, the huge kinetic energy each particle has (on macroscopic scales we observe this as temperature) allows lighter particles to spend this energy in getting close enough. It takes on average 5 billion years for a proton to fuse with another inside a star, and with good reason: they need a LOT of energy. As positive protons come together they repel, and this repulsion gets stronger with decreasing distance. They need to be on a perfect collision course (within about 2.5 proton diameters) with enough kinetic energy to overcome the electromagnetic repulsion AND to allow some of this energy to turn into mass for conversion into a heavier neutron. Once one proton gets close enough to another, the strong force binds it in, overcoming the electromagnetic repulsion. The remaining repulsion then makes it “energetically cheaper” for one proton to become a neutron via the weak interaction, releasing binding energy in the form of a W+ boson (as positive charge is lost), which then decays into a positron and a neutrino.

This wraps up the beginnerʼs manual to the two least popularly understood forces we know, and in future I hope you come to appreciate all these forces do for you. Without them the sun wouldnʼt shine, we wouldnʼt have stable elements, and, oh, everything inside you would blast apart from electrical repulsion. So when somebody asks you what your favourite force is, think about what makes a nucleus – because past 10^−15 metres, you know which to root for.

By Jacob Bartlett


Fun Test -- Match Each Diagram To The Appropriate Force/Particle!


Taxonomy

Taxonomy in a biological context refers to classifying life on earth, i.e. being able to meaningfully define living organisms in relation to other living organisms. In real terms this means identifying species and placing them in the correct place in relation to other species. We broadly use the Linnaean system of classification today in terms of notation; however human knowledge has expanded considerably since Linnaeusʼs time. Most classification now is done on the basis of evolutionary descent with cladistics, rather than morphology with phenetics, whilst advances in microbiology & evolutionary genetics have shaken even the most steadfast of taxonomic assertions.

The Linnaean System

The 1735 work Systema Naturae, by Swedish botanist (and ʻfather of taxonomyʼ) Carl Linnaeus, is one of the most influential works in the history of taxonomy, and has shaped much of the science to this day. Whilst the system of classification proposed by Linnaeus in Systema Naturae, that of three kingdoms (animal, plant & mineral), is not accepted, there are two major successes arising from the book which have shaped taxonomy significantly today. The first is binomial nomenclature, and the second is rank-based classification.

Whilst certainly not the first to come up with rank-based classification of organisms, Linnaeus popularised the system that is still the most widely accepted today, and indeed the principal taxa (groups associated with a taxonomic hierarchy) he set out (kingdom through to species), with a few additions, are the taxa we still use today.

Fig.1 helps explain the concept of rank-based classification of organisms. If the metaphor of a tree is used to represent life, each species is represented by a leaf. Similar species can be represented by other leaves on the same branch, forming genera (plural of genus). Similar genera are put into families, which are larger branches, eventually stemming from even bigger branches – orders. This continues until you arrive at the trunk, with each branch leading outwards from the trunk separating into smaller and smaller sub-divisions. In the same way all life can be split into sub-divisions repeatedly until you arrive at species, so that each species has an identity comprised of all of the preceding taxa that lead to it. The main taxa we use, in descending order of magnitude, are: Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species. Prefixes are also often added to enable further division (e.g. the true bugs, Order Hemiptera, are split into the suborders Heteroptera and Homoptera), whilst additional taxa such as tribe (between supergenus & subfamily) and variety (for plants, beneath species) are occasionally used.
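The descending sequence of ranks described above can be sketched very simply. The ranks below are the real principal taxa, applied to the Sumatran tigerʼs parent species mentioned later in this article; representing them as an ordered list is just an illustration for this article, not any standard bioinformatics format:

```python
# The principal Linnaean ranks, in descending order of magnitude,
# applied to the tiger (Panthera tigris). Storing them as an ordered
# list of (rank, taxon) pairs mirrors the nested hierarchy: each
# taxon sits inside all of the taxa above it.
classification = [
    ("Domain",  "Eukaryota"),
    ("Kingdom", "Animalia"),
    ("Phylum",  "Chordata"),
    ("Class",   "Mammalia"),
    ("Order",   "Carnivora"),
    ("Family",  "Felidae"),
    ("Genus",   "Panthera"),
    ("Species", "tigris"),
]

# Printing the list top-down walks from the trunk of the tree of
# life out to a single leaf.
for rank, taxon in classification:
    print(f"{rank}: {taxon}")
```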

Most people are comfortable today with the binomial nomenclature of taxonomy, which lists an organismʼs generic name (the name of its genus) followed by its specific name (the name of its species), sometimes followed by a subspecies name if a subspecies is being described (forming a trinomen, as opposed to a binomen). For example, the African bush elephant is Loxodonta africana and the Sumatran tiger is Panthera tigris sumatrae. This system is highly effective at providing individual, unambiguous & universally recognised names for organisms – this enables taxonomists to identify organisms anywhere in the world with clarity, without the confusion of common names (such as African bush elephant, pill millipede or cabbage white butterfly), which often refer to taxa higher than species, generally vary from region to region, and of which there are often several for each species, to name just a few problems.

Phenetics

The first & simplest method of classifying life revolves around phenetics (or taximetrics) – that is, grouping organisms solely according to their similarities in terms of morphology (observable physical characteristics). Nowadays phenetics has expanded to include far more precise elements, such as the presence of particular molecules within organisms; however, typical phenetics involves the examination of certain anatomical features at an increasingly precise level. For instance, the distinction between an insect and an arachnid (two Classes) is primarily made on the basis of the number of legs, the presence of wings & the number of body parts; the distinction between a beetle and a fly is made on features including the presence of elytra (wing-cases) or halteres (small structures taking the place of hindwings) & basic body shape (e.g. the position of the eyes). Phenetics becomes even more precise at species level – the distinction between most species can only be made by observing highly specific features such as genitalia. This is largely because measurements such as size or colouration vary significantly between individuals, whereas features like genitalia vary only very slightly.

Fig.1 Rank-based classification via a taxonomic hierarchy

Phenetic diagrams are called phenograms and offer a visual representation of a phenetic taxonomy. In order to draw one, a table must be created showing similarities & differences between organisms by shared characteristics, from which morphological data matrices can be drawn & relationships thus established. Fig.2 shows the simple idea behind a phenogram, based on a Similarity Coefficient drawn from morphological data matrices, whilst Fig.3 is a phenogram based on a Distance Coefficient (again from morphological data matrices), in this case showing the suborder Gerromorpha (part of the true bugs) in Illinois, USA.
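A minimal sketch of the first step described above. The character matrix below is invented purely for illustration, and the simple matching coefficient shown is only one of several similarity coefficients pheneticists use; a phenogram would then be clustered from the resulting matrix:

```python
# Invented presence/absence character matrix: 1 = trait present.
# Characters (in order): wings, six legs, elytra, fur.
characters = {
    "beetle": [1, 1, 1, 0],
    "fly":    [1, 1, 0, 0],
    "rat":    [0, 0, 0, 1],
}

def similarity(a, b):
    """Simple matching coefficient: shared character states / total."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

print(similarity(characters["beetle"], characters["fly"]))  # 0.75
print(similarity(characters["beetle"], characters["rat"]))  # 0.0
```

A pheneticist would compute this coefficient for every pair of organisms and then group the most similar pairs first, producing the branching diagram of a phenogram.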

Phenetics is in many ways a good method to use, as it creates a taxonomy based on logical & quantifiable parameters. However, it has limitations, namely that it constructs phenograms based purely on observable characteristics, not on lines of evolutionary descent. Whilst phenetics can display evolutionary relationships (e.g. rats & bears both having fur & both being mammals), it does so only when homology is present, i.e. when the organisms share a characteristic because of a common ancestor. This offers a very limited view of phylogeny, one which cannot separate homologous characteristics from those arising by mechanisms such as convergent evolution; ultimately, if you use phenetics to describe phylogeny, you automatically assume that an organismʼs phenotype is always representative of its genotype, when in reality this isnʼt the case.

This does not necessarily discredit phenetics as a taxonomic method, rather it is a limitation; in order to understand phylogenetic relationships, cladistics is used.

Cladistics

Cladistics is fundamentally different from phenetics in that it approaches life from a purely evolutionary standpoint, attempting to organise all life into clades (each defined as an inclusive group of organisms with a single shared ancestor, forming a monophyly; see fig.4). As the diagram shows, clades can comprise very few species or very many, and can contain other clades, creating a nested hierarchy.

In order to establish clades, a method similar to that used in phenetics is employed, in that shared characteristics of organisms are observed. However, cladistics is far more specific and searches for synapomorphies, traits which are shared by at least two organisms & also by their most recent common ancestor. This enables cladists to identify homologous features (features inherited through a direct path of evolutionary descent) rather than analogous features (features which are similar but were not passed down through direct evolutionary descent, e.g. arising by convergent evolution). Thus cladists are able to establish the path of evolution by building cladograms around sister groups (two species & their most recent common ancestor), which are determined by homologous features. In order to build a cladogram, similar steps to those used to create a phenogram are taken (involving character matrices, etc.), until something like fig.5, one of several proposed cladograms for the dinosaurs, is produced.

Fig.2 Basic phenogram of recognisable fruits

Fig.3 Phenogram of Gerromorphan species in Southern Illinois (March-September 1999)

Unfortunately, there are many issues with cladistics as a system of taxonomy. One of the most obvious is that in order to build a cladogram, cladists must differentiate between analogous & homologous features. To do this (particularly where little is known about the taxa involved) a principle called parsimony is applied, whereby if there are multiple potential evolutionary pathways, the path which requires the least amount of evolutionary change is preferred. Thus cladistics assumes that evolution always takes ʻthe path of least resistanceʼ, and whilst this should usually hold true, in practice it is not an absolute truth, so it is very difficult to arrive at a ʻtrueʼ cladogram. This, coupled with debate over whether certain features should be considered homologous or analogous, is why many different cladograms exist for any one large taxon (Dinosauria is a good example), and this element of uncertainty creates problems when we try to create one universal ʻtree of lifeʼ. Taken to its logical extreme, this asserts that almost every cladogram ever produced for a major taxon is, to a certain degree, probably wrong.
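Parsimony can be made concrete with a small sketch. Everything below is a hypothetical illustration (the taxa, the character and both candidate trees are invented, and real cladistics software is far more sophisticated): it scores each candidate tree by the minimum number of state changes one character requires, using Fitchʼs small-parsimony algorithm, and the tree needing fewer changes is the more parsimonious.

```python
# Fitch's small-parsimony algorithm for one character on a rooted
# binary tree. A tree is either a leaf name (str) or a (left, right)
# tuple. 'states' maps each leaf to its observed character state,
# e.g. presence ('1') or absence ('0') of wings. All data invented.

def fitch_score(tree, states):
    """Return (possible ancestral states, minimum number of changes)."""
    if isinstance(tree, str):                      # leaf node
        return {states[tree]}, 0
    l_set, l_cost = fitch_score(tree[0], states)
    r_set, r_cost = fitch_score(tree[1], states)
    common = l_set & r_set
    if common:                                     # no change needed here
        return common, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1      # one change needed here

# Two candidate trees for four made-up taxa:
states = {"A": "1", "B": "1", "C": "0", "D": "0"}
tree1 = (("A", "B"), ("C", "D"))   # groups the winged taxa together
tree2 = (("A", "C"), ("B", "D"))   # splits them up

print(fitch_score(tree1, states)[1])   # 1 change
print(fitch_score(tree2, states)[1])   # 2 changes
```

Under parsimony, tree1 would be preferred: it explains the same observations with fewer evolutionary changes, which is exactly the assumption the article describes as usually, but not always, safe.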

Another issue is that cladistics necessarily over-simplifies evolutionary change in the way that it bases its entire classification on binary speciation events. As Mayr puts it, “Those who allow for a different weighting of different adaptational processes and events (e. g., by giving greater weight to the occupation of a major adaptive zone) may arrive at a very different classification from someone who uses branching as his only basis (as do the cladists).” Essentially, the representation of genealogy presented by cladistics is inaccurate, as the simplified way it represents speciation is insufficient to also represent actual evolutionary change.

These criticisms assume, of course, that every potential cladogram can be analysed thoroughly – in reality very few taxa have had their cladograms analysed thoroughly, and creating a universal tree of life would require unrealistic amounts of time & computing power. A more minor issue is that many taxonomists dislike the way in which cladistics dissolves groups such as the reptiles, which traditionally (i.e. phenetically) form a distinct group, but which, when analysed with cladistics, arenʼt a group unto themselves.

This somewhat reduces the utility of cladistics systems from the perspective of zoologists or other group-specific biologists who depend upon taxonomy.

Conclusion

Ultimately there are major problems inherent in both cladistics & phenetics as methods used to produce a taxonomic classification. Whilst phenetics has a longer history & is potentially more objective than cladistics, cladistics is generally held to offer a better representation of phylogenies, or evolutionary relationships. That said, cladistics is itself over-simplified and perhaps over-ambitious: without a huge amount of data & advanced study of the taxa in question, the phylogenies it attempts to display are inaccurate, and even then the depiction of phylogenies it offers is over-simplified.

Fig.4 Clades

Fig.5 Cladogram of Dinosauria

Some of these problems can be overcome by resorting to genetic or molecular methods. Whilst these are often treated as separate from phenetic or cladistic methods, fundamentally they rely on the same principles, and I feel itʼs not incorrect to view them simply as extensions of the same methods – analogous to, for instance, using a typewriter rather than pen & paper to write a book. Some of these methods can be used to measure distance (or [dis]similarity) between species; one example is DNA-DNA hybridisation, which measures the change in thermal stability of DNA mixed from the species being investigated, and thus determines their similarity in a similar way to the phenetic methods of distance discussed earlier. DNA sequencing has also opened up a host of new possibilities, for instance by comparing certain genes & using statistical methods to extrapolate rates of change. One obvious problem, particularly with this method but also with all methods involving DNA analysis, is that we currently have no DNA record for fossils or for pretty much any organisms which arenʼt alive today, which produces big issues when you consider that the overwhelming majority of species that have ever existed are extinct. This means that when DNA sequencing (or indeed any other DNA-dependent method) is used to calculate distance, it relies largely on the principle of parsimony, and as has been discussed, evolution may not always be parsimonious. DNA sequencing is also used to calibrate molecular clocks, from which we can roughly tell when species diverged. Unfortunately, again these rely on extrapolating current data a long way, so when (as is often the case) the rate of change in what is being measured (e.g. the gene) is not constant, problems present themselves with the accuracy of the molecular clock. Thus whilst DNA-based methods of analysis help to counter many of the issues presented by the more traditional methods of phenetics or cladistics, unfortunately they are unable to counter all of them, and present major hindrances of their own. The molecular-based taxonomic future that was once promised (including proposals for the replacement of Linnaean nomenclature with a single alphanumeric code) has unfortunately not been realised.
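The molecular-clock idea mentioned above reduces to simple arithmetic: if a gene accumulates substitutions at a roughly constant rate r per site per year in each lineage, then two lineages differing at a fraction d of their sites diverged roughly d / (2r) years ago (the factor of two is because both lineages have been changing since the split). The figures below are invented purely for illustration:

```python
# Toy molecular-clock estimate with made-up figures: 2% of sites
# differ between two species, and each lineage accumulates
# substitutions at 1e-9 per site per year.
difference_per_site = 0.02
rate_per_site_per_year = 1e-9

# Both lineages diverge from the ancestor, hence the 2 * rate.
divergence_years = difference_per_site / (2 * rate_per_site_per_year)
print(divergence_years)  # roughly 1e7, i.e. about ten million years
```

The articleʼs caveat applies directly here: if the rate is not in fact constant, the estimate can be badly wrong, however tidy the arithmetic looks.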

Unfortunately, there is something looming on the horizon which promises to throw the entirety of taxonomy into disarray: the advancement of human knowledge in the field of microbiology, namely our growing realisation that the vast majority of life on earth is microscopic. The early taxonomists could not have predicted this, and the classification we use, developed with macroorganisms in mind, is changed almost unrecognisably when we try to reconcile the world of the microscopic with it. For instance, fig.6 is a modern phylogenetic conception of life on earth, a far cry from fig.7, the traditional 5-kingdom approach still taught in many textbooks, and even less similar to fig.8, Darwinʼs famous hypothetical conception of phylogeny. The assertions made by phylogenies such as fig.6 are contentious, as they depict life from a standpoint which seems unnatural to most humans, namely that the differences between two microscopic ʻblobsʼ are far more significant than the differences between humans and, for instance, jellyfish, or between humans and mosses, or even between humans and other microscopic ʻblobsʼ. More traditional classifications, closer to fig.7 than fig.6, are still used by many taxonomists, however, as there are numerous issues with modern microorganism-inclusive classifications. For one, if itʼs difficult to produce accurate phylogenies of even the most well-known taxa, itʼs many times harder to do so for taxa of microorganisms, not least because the vast majority of microorganisms remain unknown to us, to the extent that our current understanding of even the most major taxa of microorganisms could well be wrong. Another fundamental problem with trying to classify microorganisms is that the boundaries between species are even less well-defined than in macroorganisms, with horizontal gene-transfer & very rapid rates of evolution making any sort of classification extremely difficult to produce.

Fig.6 Proposed phylogenetic tree of life

Fig.7 All life as 5 kingdoms

Fig.8 Darwinʼs tree of life

Indeed it becomes apparent that many of the problems with taxonomy are not specific to one method of taxonomy, but to taxonomy itself on a fundamental basis. Taxonomy involves categorising (and thus simplifying) life into divisions which are meaningful and useful for humans. Unfortunately, our understanding of evolution tells us that life is not black and white, but is constantly evolving, and speciation is not a single event but a gradual change that, for macroorganisms, generally occurs over huge time periods. Indeed, if we were to develop a ʻtrueʼ taxonomic classification for life, perhaps it would have to involve a genetic mapping of every generation of organisms on the planet in order to display speciation over time, requiring far more data collection & analysis than is even conceivable today. Still, the compromised classifications that come out of many of the methods described earlier, whilst far from perfect, at least provide insight into phylogenies and certainly provide utility to the scientists who work with them. As the horizon of human knowledge is pushed back, so too will our systems of taxonomy, and consequently our understanding of life, be furthered.

By Asher Leeks


Sources:
http://evolution.berkeley.edu/evosite/evo101/IIBPhylogeniesp2.shtml
http://rainbow.ldeo.columbia.edu/courses/v1001/5.html
http://www.ucmp.berkeley.edu/clad/clad1.html
http://homepage.mac.com/wis/Personal/lectures/evolutionary-anatomy/Phylogenetic%20Recon.pdf
http://homepage.mac.com/wis/Personal/lectures/human-evol/2.html
http://joelvelasco.net/teaching/systematics/mayr%2074%20-%20cladistic%20analysis%20or%20cladistic%20classification.pdf
http://en.wikipedia.org/wiki/


Are We Still Evolving?

When asked what humans might look like millions of years into the future, one of two answers is usually given. Either people describe the old science-fiction vision of a big-brained human with a high forehead and higher intellect (this doesnʼt actually have any scientific backing), or they say that humans have stopped evolving and natural selection no longer applies to us – that everything we have built, our cities and our culture itself, was built with the same body and the same brain. However, they are wrong. Data shows that over the last 10,000 years human evolution has occurred a hundred times more quickly than in any other period in our speciesʼ history, and that if anything the rate of human evolution is speeding up, not slowing down or stopping.

Genetic adaptations found, some 2,000 to 3,000 in total, are not limited to the well-recognised differences among ethnic groups in superficial traits such as skin and eye colour. The mutations relate to the brain, the digestive system, life span, immunity to pathogens, and bones – in short, virtually every aspect of our functioning. Many of these DNA variants are unique to their continent of origin, meaning that, as opposed to the human race becoming one, we are evolving further apart. Trends such as smaller teeth, smaller skull size and smaller stature are similar in many parts of the world, but other changes, especially over the past 10,000 years, are distinct to specific ethnic groups. In Europeans, the cheekbones slant backward, the eye sockets are shaped like aviator glasses, and the nose bridge is high. Asians have cheekbones facing more forward, very round orbits, and a very low nose bridge. Native Australians have thicker skulls and the biggest teeth, on average, of any population today. Thanks to amazing advances in sequencing and deciphering DNA in recent years, scientists have begun to uncover, one by one, genes that boost evolutionary fitness. These variants, which emerged after the Stone Age, seemed to help populations better combat infectious organisms, survive frigid temperatures, or otherwise adapt to local conditions.

Several scientists have reviewed demographic data recently and have theorised that it was the population boom that led to such genetic variety. Ten thousand years ago, there were fewer than 10 million people on earth. That figure had soared to 200 million by the time of the Roman Empire. Since around 1500 the global population has been rising exponentially, with the total now surpassing 7 billion. Since mutations are the basis on which natural selection acts, it stands to reason that evolution might happen more quickly as our numbers surge. The genomes of any two individuals on the planet are more than 99.5 percent the same. Put another way, less than 0.5 percent of our DNA varies across the globe. That is often taken to mean that we have not evolved much recently; however, humansʼ and chimpsʼ DNA is no more than 1 or 2 percent different, and the change caused by that small amount of DNA is very significant.

A different group of scientists believe that it wasnʼt population growth that caused the acceleration of human evolution, but cultural shifts instead. They believe that an exceptional period in the history of our species occurred about 50,000 years ago. Humans were leaving Africa and fanning out across the globe, eventually taking up residence in places as diverse as the Arctic Circle, the rainforests of the Amazon and the Australian outback. Improvements in clothing, shelter, and hunting techniques allowed for the opportunity to spread across the globe. These innovations are generally believed to have protected us from natural selection; however, this group of scientists came to the opposite conclusion – that as the human race began to expand over the globe, it would have encountered very different forces as it adjusted to new foods, predators, climates, and terrains. And as the human race became more innovative, the pressure to change only intensified.

To study natural selection, both teams of scientists looked through the International Haplotype Map, which shows common patterns of genetic variation, for long stretches of DNA flanked by single nucleotide polymorphisms (SNPs). An SNP is an altered base. When the exact same genetic block is present in at least 20 percent of a population, according to the scientists, it indicates that something about that block has conferred a survival advantage; otherwise, it would not have become so prevalent. Because genes are reshuffled with each generation, the presence of large unchanged blocks of DNA means they were probably inherited recently. And this points to natural selection.
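As a toy illustration of the 20-percent rule described above (the haplotype blocks below are invented, and real HapMap analysis is far more involved than this), one can simply count how often an identical block appears in a sample of individuals:

```python
from collections import Counter

# Invented haplotype blocks (strings of bases) carried by ten
# sampled individuals at the same stretch of DNA.
population = [
    "ACGTA", "ACGTA", "ACGTA", "TCGTA", "ACGGA",
    "TCGTA", "ACGTA", "GCGTA", "ACGTA", "TCGTA",
]

counts = Counter(population)
threshold = 0.20 * len(population)

# Blocks common enough (>= 20% of the sample) to suggest selection
# under the article's rule of thumb:
common = [block for block, n in counts.items() if n >= threshold]
print(common)  # ['ACGTA', 'TCGTA']
```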

It was discovered that 7 percent of human genes fit the profile of a recent adaptation, with most of the change happening from 40,000 years ago to the present. These apparent adaptations jumped almost exponentially in prevalence as the human population exploded. To counter the view that human evolution had been occurring at a constant speed all along, the scientists ran a computer simulation to see what would have happened if humans had evolved at modern rates ever since we separated from chimpanzees 6 million years ago. The result was that the difference between the two species today would be 160 times greater than it actually is, which clearly shows that human evolution has hit the accelerator recently.

These findings strongly support the theories that human evolution has been accelerated by population growth and cultural shifts. When the human race left Africa it led to one of the biggest changes to humans: skin colour. Paler complexions are a genetic adjustment to low light intensities, as people with dark skin have trouble manufacturing vitamin D from ultraviolet radiation in northern areas, making them more susceptible to serious bone deformities. Consequently, Europeans and Asians over the last 20,000 years evolved lighter skin through two dozen different mutations that decrease production of the skin pigment melanin. Similarly, the gene for blue eyes codes for paler skin colouring in many vertebrates and hence might have come along with lighter skin; clearly something made blue eyes evolutionarily advantageous in some environments. Another huge change occurred when communities began to shift from hunting and gathering to farming. When people began to keep cattle herds, it became an advantage to get nutritious calories from milk throughout life rather than just as a baby. A mutation arose about 8,000 years ago in northern Europe that allowed adults to digest lactose, the main sugar in milk, and it spread rapidly, allowing the rise of the modern dairy industry. Today the gene for lactose digestion is present in 80 percent of Europeans but in just 20 percent of Asians and Africans. The creation of farming communities formed the basis for the first towns and cities, and in these towns hygiene was pretty much non-existent – pathogens and disease spread like wildfire, so genetic variations that prevented some of these diseases became more prevalent.



However, not all scientists believe that the human race is evolving at this rate. Their argument is based upon the fact that the tools for studying the human genome are getting more accurate all the time, and that right now they are only in their infancy. Also, some of the data used, for example the growth of the human population in some areas, is based upon documents created in an era of poor record-keeping. Many scientists want to understand the human genome better before they consider ethnic variation. Many believe that the acceleration of human evolution is just wishful thinking.

So, in 50,000 years the human race has managed to adjust its diet, change its skin and eye colour, and undergo many smaller changes besides. What do you think the human race will look like in the future?

By Rahul Bagga



Can Humans Live Forever?

For years, in reality and myth, people have longed to live forever. Whether through joining a religion and praying to various gods, or turning to science for a possible solution, the reason why is simple: many people are unafraid of death because of a belief that they will one day live again after they die, while others fear the finality of death and wish to find a way to avoid it. While many people who have actively tried to stop death have been ridiculed for their actions, there are a number of educated people, including doctors and scientists, who believe that we will one day be able to find a cure for death. Can this really be done, or are we simply deluding ourselves?

Some scientists believe that the limit on a human life is about 125 years. Others believe that we could live for 1,000 years; it is simply ageing that stops us from doing so. Ageing stops our organs from working for this length of time because, as we age, our bodies cannot make new cells to replace the dying cells in our organs. It is this process which some scientists are trying to slow down.

One way of doing this is to create an “anti-ageing pill”, which some scientists are currently working on. One such pill being developed is one that combats dangerous free radicals in the body. Free radicals are particles with unpaired electrons, which make them reactive and dangerous to the body, causing cell damage. If you find a way to reduce or eliminate the free radicals in the body, there is less cell damage, so you potentially live longer.

One proposed way of doing this was antioxidant pills. The reasoning behind them was simple: free radicals have unpaired electrons, but antioxidants can donate electrons to them, neutralising the particles and stopping them causing damage. Give people tablets of antioxidants and they will live for much longer! This was the reasoning behind the explosion in demand for antioxidant pills in the late 20th century. However, it is believed that, far from being beneficial, too many antioxidants are useless in the body and can even be actively harmful. It is believed (though nobody knows for sure) that too many antioxidants can make our bodyʼs own antioxidant production sluggish, and that some free radicals are needed to keep our antioxidant production high – not dissimilar to the way too few pathogens in the body can lead to a weak immune system. Clearly then, antioxidant tablets seem not to be the solution.

Another proposed way to extend life is a calorie-restricted diet. This has been put forward because it has been observed in rodents and other animals that reducing calorific intake, while making sure that the animals get everything they need in terms of water, vitamins and minerals, leads to an increase in healthy life span. Scientists are trying to find a way for humans to do this healthily, but even so, it is unlikely that this will allow us to live much longer than we currently do, and it certainly will not allow us to live forever.

Other natural methods, such as injecting telomerase, an enzyme which protects cells, or injecting human growth hormone, have been tried. However, “natural” ideas for prolonging life have been dismissed as nonsense, with many people saying that they are no better than beauty creams and that they simply donʼt work. Perhaps, then, the future of immortality lies with technology? Technology has already revolutionised our lives, but can it increase their span? The first thing that comes to mind when we think of immortality and technology is probably cryonics. The idea that you can freeze someone, creating a still snapshot of their body, so that hundreds, possibly thousands, of years later they can be unfrozen and step out of a freezer as if nothing had happened to them, is a particularly imaginative one. This is another idea that has been ridiculed by some, but people have fallen into icy lakes and been saved after an hour, surviving only because the cold water slowed their bodyʼs metabolic rate so much that they needed almost no oxygen at all.

At the moment, cryonics is expensive; apparently living forever comes at a cost of £100,000 plus £250 annually. You also cannot be frozen while you are alive: you can only legally be cryogenically frozen once you are “legally dead”. This is when the heart has stopped beating but some cellular function still remains, as opposed to “totally dead”, where you are, well, totally dead. At this point a team quickly takes you to their facility, “pre-freezes” you, and injects you with an anti-blood-clotting chemical. When you arrive at the facility you are then frozen: water is removed, and a chemical not unlike anti-freeze is injected into your body before you are frozen at −196°C (−320°F).

So far, only around 200 people have been frozen worldwide. Following the successful freezing of Dr James Bedford, the first person ever to be frozen, others, including famous figures such as the baseball player Ted Williams, have been frozen, and people like Simon Cowell have expressed a desire to be. Cryonics has not been without controversy, though. Even ignoring the moral and ethical objections, it suffered a major setback when nine bodies were discovered to have thawed after funds ran out. That said, safeguards are now in place to make sure that people who are frozen stay frozen.

Cryonics survives on the idea that one day the technology will exist to revive people; without belief in that idea, cryonics simply won't take off. Cryonics relies heavily on nanotechnology, the technology believed to show the greatest promise of being able to reverse damage done to human cells. Nanotechnology means engineering matter at the scale of single atoms and molecules. It is believed by some that one day nanotechnology will be able to replace organs and human tissue, as well as do a whole host of other things. At the far end of the scale of optimism, Dr Ray Kurzweil believes that in only 20 years' time nanotechnology will allow red blood cells to be replaced by nanobots, claiming that "nanotechnology will let us live forever". Other sci-fi ideas exist for how we might escape the clutches of death, such as becoming fused with machines in a Terminator-esque fashion, with our brain's "data" stored on hard drives and our limbs replaced by artificial ones. Of course, both of these should be treated with extreme scepticism as to whether they will ever come true.

So, given the vast number of ways people are trying to live forever, should we believe the hype? It is certainly difficult to see some scientists' predictions actually coming true. The moral and ethical considerations matter as well: even if we can live forever, should we try to? The world's population has reached 7 billion and is set to keep growing rapidly, so it is possible that by the time we crack the technology there won't be enough space for everyone. And, of course, the million-dollar question: do we really want to live forever? Escaping death might appeal to those of us who don't believe in some sort of afterlife, but do you really want to be frozen, or to have nanobots replacing your blood cells? Either way, it is unlikely to happen in our lifetime, so it's probably best not to worry about any of it at all.

By Fadil Nohur


String Theory, a Summary

Faults in the standard model:

The standard model lists all of the known particles, split into bosons (force-carrying particles) and fermions (which obey the Pauli exclusion principle). Fermions can then be divided into quarks and leptons. Quarks carry colour charge, the charge of the strong nuclear force, which can be 'red', 'green' or 'blue' (or their respective anti-colours). These are just labels to make the charges easier to visualise. The colours must add up to neutral white in a hadron: a proton, for example, might contain a blue up quark, a red up quark and a green down quark. Colours can be rapidly exchanged between quarks via gluons. Leptons are either electrons or neutrinos (or their similar, heavier cousins).
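The colour-neutrality rule can be sketched as a toy bookkeeping exercise. This illustrates only the counting, not the actual mathematics of the strong force (which involves SU(3) group theory), and the representation as red/green/blue tallies is purely illustrative:

```python
# Toy model: represent each colour charge as a (red, green, blue) tally, and
# call a combination "colour neutral" (white) when it contains exactly one
# unit of each colour.
RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def is_colour_neutral(quark_colours):
    """True if the quarks' colours add up to 'white': one of each colour."""
    totals = tuple(sum(colour[i] for colour in quark_colours) for i in range(3))
    return totals == (1, 1, 1)

# A proton's three quarks carry one colour each, e.g. a blue up quark,
# a red up quark and a green down quark:
assert is_colour_neutral([BLUE, RED, GREEN])
# Two quarks of the same colour cannot make a neutral three-quark hadron:
assert not is_colour_neutral([RED, RED, BLUE])
```
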

All interactions between particles can be thought of as exchanging bosons between them. The resulting energy changes on the particles are the effects of the force. The standard model has proved very useful at making accurate predictions about subatomic events but it is far from satisfactory.

The standard model does not explain why the elementary particles have the masses and charges they do; those values have simply been measured by experiment and inserted into the theory by hand. There is also no known reason why three generations of matter exist. The model also assumes that the Higgs boson exists, and without it the theory would fall apart, yet there is currently no conclusive evidence that the Higgs boson exists.

The most important flaw in the standard model is that it does not include gravity. When the graviton is used as the messenger particle for gravitational interactions, infinities appear in the calculations which make no physical sense.

[Figure: the particles of the standard model]

More problems arise when quantum mechanics is combined with relativity. Quantum mechanics states that there is an inherent degree of uncertainty in the universe, which means there must be slight, unpredictable fluctuations as virtual particles are created and destroyed. At smaller and smaller scales these fluctuations get increasingly violent. This quantum frenzy is known as quantum foam, and when it is taken into account alongside the equations of general relativity, infinities appear everywhere. Some of these infinities can be cancelled in a technique known as renormalisation, but infinities still remain.

Infinities also appear in relativity at singularities, where the equations say that the curvature of spacetime is infinite and that the singularity is a zero-dimensional point – an answer which does not make sense.

Both quantum mechanics and relativity have been tested extensively; all experimental evidence supports each theory, and their predictions have been confirmed to an astonishing degree of accuracy – yet they are fundamentally incompatible with one another.

The basics of string theory:

String theory states that rather than modelling elementary particles as zero-dimensional points, they can be thought of as vibrating one-dimensional strings of energy, with different vibration patterns corresponding to different elementary particles. These strings are the ultimate building blocks of nature. Each string is extremely tiny (roughly the Planck length, 10^-35 m) and under extreme tension (about 10^39 tonnes). There also need to be 11 dimensions – one of time and 10 of space (although for a while the theory required only 10 dimensions). Of the ten spatial dimensions, 3 are uncurled and macroscopic, whereas the other 7 are curled up to a size smaller than the Planck length and usually go unnoticed.

The energy of a string is determined by two factors related to its vibration: the amplitude of the waves and the wavelength. A higher amplitude means higher energy, as does a shorter wavelength. Since energy is effectively the same as mass through E=mc^2, low-amplitude, long-wavelength vibrations correspond to low-energy and therefore low-mass elementary particles, and vice versa. Other properties such as electric charge, weak charge and strong (colour) charge also arise from specific vibrational patterns.


The effects of tension on string properties:

The strings are also under extremely high tension (the Planck tension), and because they are not anchored at each end (unlike guitar strings) they contract to the minuscule Planck length. This also means that strings have a lot of vibrational energy, because it takes far more energy to 'pluck' a string under such high tension. The higher the tension, the higher the energy, so the tension of a string also has a direct effect on the properties of the elementary particle the string represents.

The energy of a string is also quantized, because the vibrational pattern (e.g. the number of wavelengths) must be discrete. This means the overall energy of a string must be an integer multiple of the Planck energy, the minimum energy of a string. However, since the Planck energy is so large (about 10^19 times the mass of a proton), it might seem that strings cannot correspond to any of the far lighter elementary particles we actually observe.
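Ignoring the quantum corrections discussed next, this energy ladder can be written schematically; the symbols n and E_P here are simply labels for the integer level and the Planck energy:

```latex
% Quantised string energies: integer multiples of the Planck energy E_P,
% which is itself roughly 10^{19} proton rest energies.
\[
  E_n \approx n \, E_P, \qquad n = 1, 2, 3, \dots,
  \qquad E_P \sim 10^{19} \, m_p c^2
\]
```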

Quantum mechanics also states, through Heisenberg's uncertainty principle, that there must be unpredictable quantum fluctuations affecting a string's vibration, as they affect everything else. The energies associated with these fluctuations can be negative, reducing a string's overall energy by roughly the Planck energy and giving rise to lighter strings that agree with the particles of the standard model. When this was taken into account for the graviton, the quantum fluctuations cancelled the string's energy perfectly to reveal a zero-mass boson, agreeing with the expected properties of the graviton.

The high tension of strings means that if there is a never-ending sequence of elementary particles increasing in mass by integer multiples of the Planck energy, we would not notice them: they could only be created at extremely high energies, either shortly after the big bang or in immense particle accelerators which would have to span galaxies.

Combining gravity and quantum mechanics:

String theory can reduce the cataclysmic effects which quantum mechanics has on spacetime at extremely small scales. Because strings are not point-like particles and occupy an extent in space, they cannot be used to probe lengths smaller than themselves (the Planck length), and less information is gained from the quantum fluctuations within the string as a result of the uncertainty principle. Consequently, we cannot know anything about sub-Planck lengths and they cannot affect the rest of the universe; for the purpose of economy, we say that sub-Planck-length quantum fluctuations do not exist.

A more precise way of visualising this is to imagine a typical quantum mechanical interaction represented by a Feynman diagram, and then with strings in place of point-like particles.

[Figure: Feynman diagram of a point-particle interaction alongside the corresponding string version]

Notice how in the particle Feynman diagram there is an unambiguous point where the interaction first occurs (represented by the squiggly line). With strings replacing particles, however, there is ambiguity about when they first join together (to create a boson) and when they split apart. If the interaction is visualised by taking vertical slices of time, the two strings first touch at a certain point; but if the slices of time are taken at a different angle, the moment at which the strings touch is different. As no observer has a privileged inertial frame of reference, it is not clear who is right. This means the interaction could have occurred at various points in time, and the violent quantum fluctuations are drastically smoothed out.

Supersymmetry:

An object is said to display symmetry if it can undergo a particular transformation and still look identical to its original form. For example, a triangle will look exactly the same if you rotate it about its centre by 120 degrees.

Symmetry in physics refers to when something is observed to be the same even if the observer is in different states. For example, the law of gravity holds everywhere in the universe and the measured speed of light in a vacuum is always the same.


There is also no way to tell whether you are moving uniformly or stationary, as you could equally say that everything else is moving relative to you, and any feeling of acceleration is locally indistinguishable from gravity.

Supersymmetry is a special type of symmetry, not unique to string theory, which states that every fermion and boson has a superpartner: each fermion is paired with a boson whose spin is a half-integer less, and each boson with a fermion whose spin is a half-integer less.

Bosons have integer spin and fermions have half-integer spin, so by shifting the spin by a half we expect bosons to be paired with fermions and vice versa. These superpartners are not part of the current standard model: a gluon could not be paired with an up quark, for instance, and instead should be paired with an (as yet unobserved) gluino. This pairing cancels out some of the quantum fluctuations, because the energy associated with a boson's fluctuations is positive whereas the energy associated with its superpartner's is negative. This means the parameters of the standard model no longer need to be fine-tuned to cancel out violent quantum fluctuations.
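The pairing rule can be sketched as a small lookup table. The superpartner names below (selectron, squark, photino, gluino) follow the usual naming convention, and every one of these partners remains hypothetical and unobserved:

```python
# Supersymmetry pairs each particle with a hypothetical superpartner whose
# spin differs by half a unit, swapping fermions (half-integer spin) and
# bosons (integer spin).
superpartners = {
    # particle: (spin, superpartner, superpartner spin)
    "electron": (0.5, "selectron", 0.0),   # fermion -> boson partner
    "up quark": (0.5, "up squark", 0.0),   # fermion -> boson partner
    "photon":   (1.0, "photino",   0.5),   # boson   -> fermion partner
    "gluon":    (1.0, "gluino",    0.5),   # boson   -> fermion partner
}

def is_fermion(spin):
    """Fermions carry half-integer spin; bosons carry integer spin."""
    return spin % 1 == 0.5

# Every pairing swaps the boson/fermion families and shifts spin by 1/2:
for name, (spin, partner, partner_spin) in superpartners.items():
    assert is_fermion(spin) != is_fermion(partner_spin)
    assert abs(spin - partner_spin) == 0.5
```
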

Grand unification energy:

The electric field increases as you get closer to a charged particle. This is because, at a distance, quantum mechanical fluctuations screen the full effect of the field; as you get closer these effects diminish and you feel the electric force more strongly. With the strong and weak nuclear forces, however, the quantum fluctuations amplify the effective force, so as you get closer the effect of the force is reduced. This means that at a certain energy, where you can probe very small distances, the strengths of the three forces are almost the same.

However, when the quantum fluctuations of the superpartners are factored into the calculations, the forces meet exactly at a certain (very high) energy, meaning the electromagnetic, strong and weak nuclear forces become indistinguishable from one another.

Supersymmetric string theory:

One of the earliest forms of string theory was bosonic string theory, which applied only to particles with integer spin (bosons). This meant it could not be a theory of everything, as it did not include fermions; more importantly, some of the vibrational patterns corresponded to a particle with imaginary mass, called a tachyon, which would travel faster than light and backwards in time. This did not make sense, and the only way for string theory to incorporate fermions and get rid of tachyons was to include supersymmetry.

Extra dimensions:

In the 1920s, Kaluza realised that new equations would arise from Einstein's equations of relativity if they were extended to include a fourth spatial dimension. These extra equations were exactly the same as Maxwell's equations of electromagnetism, showing how gravity could be united with the electromagnetic force through higher dimensions. However, Kaluza's theory did not agree with experimental data, and was only revived in the 1970s once the weak and strong nuclear forces were better understood. The forces could be unified through higher dimensions, and the quantum fluctuations reduced, if the extra dimensions were curled up to a size smaller than the Planck length.

String theory also requires extra dimensions, because otherwise some of the probabilities of quantum events occurring come out negative or infinite – clearly impossible in our universe. The solution was to let strings vibrate in more than 3 dimensions, rather than just up/down, left/right and forwards/backwards. If a string can vibrate in 9 spatial dimensions, the negative and infinite probabilities cancel out. Because the strings are so small, the extra dimensions can be curled up to around the Planck length and go unnoticed on macroscopic (and even microscopic) scales.

An important consequence of curled-up dimensions in string theory is that they limit and influence the possible vibrational patterns of the strings. The way these extra dimensions are curled affects the properties of the strings in the 3 dimensions in which we observe them (as elementary particles).

Calabi-Yau shapes:

The Calabi-Yau shapes are the possible configurations of the 6 curled-up dimensions, and their forms are determined by the current equations of string theory. If you were to move your hand, you would pass through the higher dimensions numerous times, but because they are so small and curled up you would simply circumnavigate them many times, ending up where you started with no notion that you had passed through them.

The number of possible Calabi-Yau spaces that could describe the curled-up dimensions can be reduced by assuming there are only 3 generations of matter, which requires the Calabi-Yau shapes to have 3 multidimensional holes in them.

One symmetry of Calabi-Yau shapes is that if you pinch parts of the shape and transform it in the right way, the number of holes in the even dimensions equals the new number of holes in the odd dimensions. This means each Calabi-Yau shape is paired with another Calabi-Yau shape which yields the same physical properties. This is known as mirror symmetry, and it made some of the calculations in string theory much easier. It also means that if a tear occurs in a Calabi-Yau shape, the tear does not occur in its mirror.

Dualities in string theory – T-duality:

For the supposed theory of everything, it was an embarrassment that there were 5 equally valid but conflicting versions of string theory. For a while they all seemed different, with different values of the coupling constant and different Calabi-Yau shapes. The 5 string theories were Type I, Type IIA, Type IIB, Heterotic SO(32) and Heterotic E8 x E8. They have subtle but important differences:

In Type IIA, vibrations travelling clockwise are identical to those travelling anti-clockwise.

In Type IIB, vibrations travelling clockwise have the opposite spin to those travelling anti-clockwise.

The Heterotic string theories have clockwise vibrations the same as Type II, but their anti-clockwise vibrations travel in 26 dimensions (as in bosonic string theory).

Type I differs from Type II in that its strings can be open, with loose ends, as well as closed.

One of the first dualities to be discovered was T-duality. It results from the fact that strings, unlike point particles, can wrap around dimensions. The energy of a string is given by its vibrational pattern and its winding energy. The winding energy is effectively the minimum energy of a string and is determined by the radius of the dimension and the number of times the string is wrapped around it (the winding number). The longer the wrapped string, the higher its minimum energy, so the winding energy is directly proportional to the radius of the dimension.

The energy of a string also depends on its vibrational energy. With a smaller radius the string is confined to a smaller space, so the fluctuations from the Heisenberg uncertainty principle increase. This causes the energy of the string to increase at smaller scales, meaning the radius is inversely proportional to the vibrational energy.

The total energy of a string is given by the sum of its winding and vibrational energies. This means that the energy spectrum of strings in a universe of radius R is exactly the same as in one of radius 1/R, so the distinction between large and small vanishes: according to string theory, our universe, which is about 10^61 times the Planck length in size, is identical to a universe 10^-61 times the Planck length. If we measure the universe using unwound strings such as photons, we arrive at the familiar 15 billion light years; if we measure it using wound strings, which have much higher energy, we find it to be 10^-61 times the Planck length. (All of this assumes the extended spatial dimensions follow the no-boundary condition, so that if you travelled in any given direction you would end up where you started.)
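The R and 1/R equivalence can be written schematically in Planck units; the symbols n (vibration number) and w (winding number) are labels introduced here, and oscillator contributions are suppressed:

```latex
% Closed-string energy on a circular dimension of radius R:
%   n/R  -- vibrational (momentum) contribution, inversely proportional to R
%   wR   -- winding contribution, directly proportional to R
\[
  E_{n,w}(R) \;\approx\; \frac{n}{R} + wR
  \qquad\Longrightarrow\qquad
  E_{n,w}(R) \;=\; E_{w,n}\!\left(\frac{1}{R}\right)
\]
% Swapping the vibration and winding numbers while inverting the radius
% leaves the energy spectrum unchanged, so R and 1/R describe the same physics.
```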

This has important implications for a universe which undergoes a big crunch. As the universe contracts, it eventually becomes easier to measure its size using wound strings, and because they measure the reciprocal, the universe will appear to expand again in a "big bounce".

This also links some of the apparently different string theories together. For some theories, the model of the universe with radius R is exactly identical to another theory's model with radius 1/R. This duality links Type IIA and Type IIB together, and also links the two Heterotic string theories; each pair of theories no longer conflicts.

Dualities – S-duality:

For most of string theory's history, perturbative methods were used in calculations. These involve making an approximation and then adding further details to refine it by ever smaller amounts. For example, if the first approximation is 1 and the sequence of refinements is 0.1, 0.001, 0.00001…, the approximation converges and is accurate. However, if the sequence of refinements is 10, 100, 1000…, the approximation breaks down; this is known as a failure of perturbation theory.

Again, because of quantum mechanical fluctuations, an interaction between 2 particles is never simple and can occur in a variety of ways. The likelihood of a loop interaction occurring is governed by the coupling constant, which varies between string theories. If its value is less than 1, interactions with a higher number of loops are less likely and perturbative methods remain valid. If the coupling constant is greater than 1, interactions with greater numbers of loops become more likely and perturbative methods fail.

If greater than 1, loop interactions are more likely (strong coupling); if less than 1, less likely (weak coupling).
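The convergence criterion can be sketched with a schematic series whose k-th refinement is g^k, where g stands in for the coupling constant. This is a simplification of the real loop expansion, which also involves growing coefficients, but it captures why perturbation works only at weak coupling:

```python
def partial_sums(g, n_terms):
    """Partial sums of the schematic perturbation series 1 + g + g^2 + ..."""
    total = 0.0
    sums = []
    for k in range(n_terms):
        total += g ** k   # each extra loop order contributes a factor of g
        sums.append(total)
    return sums

# Weak coupling (g < 1): refinements shrink, and the sum settles towards
# the exact geometric-series value 1 / (1 - g).
weak = partial_sums(0.1, 30)

# Strong coupling (g > 1): every "refinement" is bigger than the last,
# so the series diverges and perturbation theory fails.
strong = partial_sums(10.0, 10)
```
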

Dualities were then found between string theories with weak and strong coupling constants. Type IIB displays a symmetry between strong and weak coupling: calculations with a weak coupling constant give the same results as calculations with a strong one. Similarly, a strong coupling constant in Type I string theory corresponds to a weak coupling constant in Heterotic SO(32), and vice versa.

Unification through M-theory:

At the annual string theory conference in 1995, Edward Witten suggested the groundbreaking possibility that there could be an eleventh dimension in string theory which would unite all the different string theories and include supergravity.

The eleventh dimension comes into view when we increase the value of the string coupling constant. If we do this for Heterotic strings, the strings stretch into two-dimensional membranes; if we do it for Type IIA strings, the one-dimensional loop comes to resemble a bicycle tyre, with a two-dimensional curved surface.

By James Teoh


Society Speeches – Autumn Term 2011


Greatest scientific discoveries of the last century

In a discussion led by James Teoh and Joe Barrow, we decided the most important scientific discovery was the introduction of antibiotics, due to its profound implications throughout the world, with the discovery of the structure of DNA coming second.

Extra-terrestrial life

A talk led by Sam Bentley explored the probabilities involved with extra-terrestrial life and its discovery, as well as the different possibilities for the forms extra-terrestrial life might take.

The science of flight

A talk headed by Morgan Roberts looked into the concept & science of flight from a number of standpoints, leading to thought-provoking discussion & debate.

Importance of insects

A presentation by Asher Leeks discussed the importance of insects to humans & to the world as a whole and culminated in exploring potential future uses of insects as a global food source.

Time travel

A presentation by James Teoh investigated the possibilities of time travel, exploring the human & physical paradoxes involved and offering insights into the nature of time itself.

Immunology

A talk led by Tom Russell on the broad topic of human immunology offered an introduction to the mechanisms of the human immune system & the various pathological threats that exist.

The future of nuclear energy

A presentation by Mounif Kalawoun explored the various forms of nuclear power, going into detail on both uranium and thorium based fission, as well as the possibilities for nuclear fusion.

The maddest scientific experiments of all time

A presentation prepared by Sudhir Balaji and Ghajajan Surenthiran, presented by Sam Rix and Julin Chandrarajah, discussed the most bizarre experiments ever conducted, concluding that a Soviet experiment involving keeping a dog’s head alive outside its body was the most bizarre.

The deadliest & weirdest diseases known to man

A talk by Akwasi Bamfo discussed nine of the deadliest diseases in history, including diseases such as ebola and Spanish influenza, before exploring some of the most rare and unusual medical conditions on record.